Define the Choice


Desired Agreements

FuSca™ hinges on a set of agreements among everyone involved. The set below is for Scrum and release planning; a slightly modified version exists for Kanban teams. Only the first subset is mandatory, in the sense that the entire system is initially based on it (though it can be changed later). If you cannot agree to the first bullet item below, do not use FuSca’s version of Scrum or release planning. The rest of the agreements can be left optional, but the fewer the organization adopts, the longer implementation will take and the higher the risk that the change will fail:

  • Teams and programs must aim to meet the Agile Performance Standards (detailed further down):
    • Delivery of 100% of stories committed to the sprint most sprints, at a sustainable pace.
    • If Planning Releases are used, delivery of 80% of planned epics most releases.
    • No escaped defects.
    • No blaming of an individual for failed stories, or of other teams for failed epics.
  • All individuals assigned to teams or programs using FuSca in any role will try the system as described on this site until they are hitting the Agile Performance Standards.
    Exception: Current Scrum teams that have stable velocities and are able to hit the standards do not have to make any other team-level changes to implement FuSca release planning.
  • Most individuals will be assigned to FuSca projects full-time, except for some Guidance Roles identified below.
  • Teams will be:
    • Cross-functional and full stack in almost all cases.
    • Stable, meaning:
      • Managers cannot assign a worker to another team, even temporarily, without both teams’ permission.
      • When members are done with the current project, new work will be moved to them as a team rather than breaking up and re-forming teams to match the work.
    • Preferably co-located, and no more than three time zones apart.
      Note: People in Guidance Roles other than the Team Guide and Facilitator can be assigned part time, and be more distant if they agree to work early or late hours. I’ve been in meetings starting as early as 5:30 a.m. my time, as late (early?) as 1:30 a.m., and everything in between.
  • Work cannot be requested or accepted outside of the FuSca planning processes, even by or from direct supervisors, unless it is a true emergency.

By a “true” emergency, I mean health or safety issues; a customer having critical issues with your product; a manufacturing plant unable to do work; and so on. Helping out another manager or getting data for a report you knew about prior to the team’s sprint do not constitute emergencies. The supervisors can still make the request, but should understand that sprint stories will come first. FuSca includes techniques allowing for less-critical nonsprint work to get done without impacting the sprint.

Hopefully your executive’s span of control also includes the unit’s Human Resources and Information Technology functions, so the leader can drive the following agreements as well. At least they can try to negotiate these with their bosses:

  • HR:
    • Understands that a small number of people may file formal complaints about the change process.
    • Is prepared to help managers move or remove people who cannot make the switch, or whose jobs become unnecessary.
    • Accepts that appraisal and compensation systems must change to support team (vs. strictly individual) performance.
  • IT agrees you may use any software tool you choose to track the work (per “Choose a Tracker”), provided that:
    • The organization adopting FuSca will pay for it from its budget.
    • It meets company security policies.
    • IT will not be required to support it beyond initial integrations to existing tools.

Only you can decide whether to proceed with the effort if upper managers “cherry pick” which of the prerequisites to follow. I refuse to, and have lost a number of contracts because of it, happily: I am willing to give up pay to avoid setting myself up to fail. Success is not impossible without all of them. It’s just less likely, harder to achieve, and not worth the pain in my personal equation.

Trying the System

Until you are running Full Scale agile smoothly and hitting the Agile Performance Standards consistently, I strongly urge you to follow the steps exactly as shown. Learn how to ride the bike before you start doing fancy tricks! As PMI’s Agile Practice Guide says, “adopt a formal agile approach, intentionally designed and proven to achieve desired results. Then take the time to learn and understand the agile approaches before changing or tailoring them. Premature and haphazard tailoring can minimize the effects of the approach and thus limit benefits.”

Continuing the vehicle metaphor, try all the team-level steps before you start dropping parts from the system. Like a machine, its parts are interlocking, each serving a specific purpose that supports other parts. You would not buy a new car, open the hood, and start ripping out parts you don’t like the looks of. So don’t remove parts from whatever system you try until you find out what they do. Every time someone has said to me, “Scrum didn’t work for us,” within a few questions I have been able to point out the problem was not Scrum, but the partial implementation of Scrum. (Earlier in my career, I had the exact same experience regarding self-directed work teams!) One of the lessons Primavera Systems drew from its conversion to Agile was, “There aren’t many rules in Scrum, but you need to adhere to the ones that exist.”[1] (For more evidence, see “Go ‘All In’” in the Agile Transformation Process.)

Yes, self-organizing teams outperform expert-directed teams. However, that is when comparing similar, mature teams in similar environments. Plenty of self-directed teams fail, and in my experience the top reason is that they are improperly formed. As I wrote in “Extreme Self-Organizing,” you cannot just say, “Poof, you’re a self-directed team,” and expect members to rapidly improve productivity by magic. The players on every championship sports team first learned to perform the sport using fundamental skills and patterns of interactions (“plays”) mandated by a coach.

I relate this to Situational Leadership, the method popularized by management maven Ken Blanchard. He and his co-creators drew the progression of a new employee on a four-quadrant grid based on two axes, “Leadership Style” and “Maturity.” A new employee needs a lot of direction from the manager, since she knows nothing of the company, its processes, its customers, or her specific tasks. But she doesn’t need a lot of emotional support because she’s excited about her new job. As the reality of the job starts to hit, she still needs direction, but now also needs some emotional support added to the manager’s Leadership Style to deal with the negatives of any new workplace and the frustrations of getting up to speed. After she matures more in the job, she no longer needs as much direction but continues to need support to assure her she is doing well as she becomes more independent. Finally, though, she is both fully competent and therefore confident, so she no longer needs much direction or support.

In my experience, a self-organizing team is such a new experience for most people, it goes through the same cycle. This kind of team needs a formal structure including team rules, role definitions, self-chosen procedures, and so forth—in other words, high direction about how to be a self-organizing team. Do this right, and the conflict many teambuilding consultants mistakenly consider inevitable can be mostly avoided. I know because my teamwork method, now the “Self-Directed agile” chapter of this site, accomplished it many times.

The Agile community recognizes this through a concept that was borrowed from the martial arts:

  1. Shu (Following)—Novice or beginner; narrowly following given practices.
  2. Ha (Detaching):
    • Journeyman; following, but extending, perfecting, occasionally breaking the rules.
    • Mentoring in specific strength area.
  3. Ri (Fluent):
    • Expert; from perfecting practices to creating your own.
    • Coaching; mentoring.
    • “Sticky” practices.[2]

I have been a martial artist since 1981, and certainly followed this path. I spent four years learning one style, tae kwon do, and earning my first black belt. Contrary to common misconception, a black belt is not mastery; I liken it to earning an undergraduate degree. As I moved around the country, I studied with the best instructor in town regardless of style, eventually earning another black belt, and now when fighting I use whatever technique is applicable in the moment, without having to think about it.

Another analogy comes from sports. When a university or professional team hires a head coach, the employer understands it is hiring that coach’s way of doing that sport. Everything from the types of players the coach recruits, to the team culture, to specific plays called during the game, are left to the coach. So, too, is team discipline. As long as the coach adheres to applicable laws, regulations, and ethical codes, they are given complete control over how the team goes about trying to win. And the coach is not even expected to win for the first couple of years, until the system is fully in place!

A college basketball coach who was consistently successful for three decades and introduced many innovations to the game, Dean Smith of the University of North Carolina, summed up the “try it” approach perfectly. He said he told his players, “If you do what we ask you to do, the victories will belong to you, and the losses to me.”[3]

As a history buff, I could bring up countless examples about the value of “systems thinking” from science, politics, and warfare. If you aren’t convinced by now, to be blunt, I don’t believe you are going to be successful in driving change in your organization.

Agile Performance Standards

Meeting the Goals of Full Scale agile

As mentioned, the FuSca system is founded on four fundamental goals I call the “Agile Performance Standards.” They drive three characteristics that in turn drive customer satisfaction: predictability, quality, and accountability. Deliver the product or service the customer expected, defect-free, when the customer expected it (because you managed those expectations), and they will be happy. A sense of accountability to each other and the customer is the most critical part of building that team “win.”

In the next three sections, I’ll take each of those characteristics and explain the standards. The Agile Performance Standards are the only part of FuSca everyone is agreeing to for the long term. They are the most strictly required of the “Desired Agreements” detailed above.

High Predictability

Some Scrum coaches will argue it’s okay to miss one or a few stories each sprint as long as you always deliver a relatively consistent number. Others go further, recommending extra stories as “stretch goals.” Always deliver around 12 stories per sprint, they say, and it is okay or even preferable to put 14 or 15 in each Sprint Plan. (Actually, they would likely talk about story points instead of stories, but the result is the same.)

One problem with that approach is, the customers can never rely on any of those 15 stories being delivered, because they can’t know which of the 15 won’t be. Only promise 12 and almost always deliver 12, and customers now can rely on getting the specific deliverables they are counting on.

A bigger problem is, you lose the power of deadline pressure. When I began working with technical teams in the 1990s, I could not go to a leadership event without hearing about, “Forming, Storming, Norming, and Performing.” These were the phases new teams all go through, everyone said. I believed it, and put it in the first version of my teamwork book, The SuddenTeams™ Program. After continuing my research in the scientific literature, though, I had to admit in the second edition that I was wrong. Those phases came from a 1965 proposed model of team development that seems logical, but does not hold up to the hard evidence. Instead, repeated studies found that we humans get serious about a project halfway through—regardless of the length of the project—and make changes to become more productive or efficient. (The scientists gave it the lyrical term, “punctuated equilibrium.”) And we all know what happens later, when the deadline is looming, and we still aren’t done. How many of you “pulled all-nighters” to get school projects done? Or work projects?

That is the power of the deadline. I said before that prior to Agile, I began training nonproject teams to create continuous improvement projects to harness it. It worked. Scrum leverages that power by giving you a deadline every week or four. But the deadline pressure goes away, and you will not get maximum speed, if you do not hold yourself accountable to delivering everything every iteration.

Hence these two standards:

  • Delivery of 100% of stories committed to the sprint most sprints, at a sustainable pace.
  • If Planning Releases are used, delivery of 80% of planned epics most releases.

The lower number for releases recognizes the realities of longer-term prediction, as discussed in relation to waterfall. I have achieved 90% in four-month releases, however, so I know that figure is realistic. Lower release predictability is acceptable as long as most epics are delivered, for two reasons described under “The Waterfall Myth.” First, waterfall projects often have far lower than 80% predictability over 90 days, even as late as one month before a release date. Second, the higher transparency in agile eliminates bad surprises for the customer.

Note that in Full Scale agile, nothing prevents a team that promised 12 stories in a sprint from completing 15. If those extra stories get accepted, they can still be demonstrated and delivered. We just don’t promise them, and the extras become what some marketing folks call “delighters.” Of course, if you consistently over-deliver, you should raise the number of stories you commit so your longer-term predictions are more accurate. All this will be covered in detail later.
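As a rough illustration of that last point, here is one way a team might decide when to raise its commitment (a hypothetical sketch; FuSca does not prescribe this formula):

```python
def suggested_commitment(completed_per_sprint, committed):
    """Suggest a sprint commitment for a consistently over-delivering team.

    completed_per_sprint: stories actually finished in recent sprints
    committed: the number of stories currently promised each sprint
    """
    if not completed_per_sprint:
        return committed
    # Raise the promise only if every recent sprint beat it, and only up to
    # the worst recent result, so the team stays highly predictable.
    if min(completed_per_sprint) > committed:
        return min(completed_per_sprint)
    return committed

# A team promising 12 stories but finishing 14-15 every sprint:
print(suggested_commitment([15, 14, 15, 14], committed=12))  # 14
```

The conservative choice of the minimum rather than the average reflects the 100%-delivery standard: promise only what the team has shown it can always finish.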

High Quality

As you read through the “Twelve Principles of Agile,” it becomes clear that a dedication to high quality is critical. You cannot deliver working products every few weeks, satisfy the customer, or maintain a sustainable pace if your product has a lot of defects. Each defect requires backtracking to previous work and spending labor hours to identify the cause. “If they are controlled and fixed at earlier phases of software development, they save much time and budget,” concluded two scholars after reviewing 12 studies on Agile defect management.[4] This is due to the cost of stopping current development, finding the source of the problem, and retesting after the fix. The labor-hour costs of fixing bugs post-release are significantly higher than those of preventing them, or of finding and fixing them during development.

Two professors who created a Center for Empirically Based Software Engineering reviewed data from various studies to come up with the “Software Defect Reduction Top 10 List” (Boehm & Basili 2001). Among their findings were that:

  • “Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase,” though the “factor for small, noncritical software systems” was closer to 5:1, still a substantial figure.
  • “Current software projects spend about 40 to 50 percent of their effort on avoidable rework.”
  • “About 40 to 50 percent of user programs contain nontrivial defects.”

One study “found that 44 percent of 27 spreadsheet programs produced by experienced spreadsheet developers contained nontrivial defects—mostly errors in spreadsheet formulas. Yet the developers felt confident that they had produced accurate spreadsheets.” Fortunately, eliminating the vast majority of rework does not require a huge investment of time, the list indicates:

  • “About 80 percent of avoidable rework comes from 20 percent of the defects.”
  • “About 90 percent of the downtime comes from, at most, 10 percent of the defects.”
  • “Peer reviews catch 60 percent of the defects.”
  • “Disciplined personal practices can reduce defect introduction rates by up to 75 percent,” especially when coupled with mature team and organizational processes, they add.

Errors that become “escaped defects,” caught by another group in your organization—or worse, the customer—create a sense of urgency bound to disrupt your Sprint Plans and add stress for most people. They also harm the team’s, if not your organization’s, reputation and thus its credibility. If defects keep reaching customers, the company’s reputation suffers, and with it the company’s sales. As the manager of a car repair shop I’ve come to trust, Tom Ashley, says, “Find time to do it right, or you will find time to do it again.”

The emphasis of this standard is to please customers and save your organization money by giving them trouble-free products: “No escaped defects.” Using the techniques described under “Multiple Layers of Quality,” FuSca drives teams to eliminate every defect that could be noticed by customers. Simple in concept, and easier in practice than people think, this means we try not to allow bugs to “escape” the team. In programs requiring system tests of deliverables from different teams, or when those teams work with outputs from others, each team makes every effort to ensure no defects are found by those tests or teams. And everyone works together to ensure the customers and end users find nothing wrong.

High Accountability

Mutual accountability has great value in improving team performance. “Team cohesion,” the level of group identification and mutual accountability among members to each other, has been researched since the 1950s and shown consistently to correlate to higher team productivity and job satisfaction. FuSca provides multiple opportunities at every stage of the sprint cycle for members to raise questions of each other, and expose potential problems while they can still be prevented or fixed. That means everyone on the team has a role to play in ensuring each story is completed within one sprint, whether they work on the story or not. When that doesn’t happen, everyone on the team has a share of the blame.

Scaling the same logic to releases, in FuSca all teams have multiple opportunities to influence the release-level decisions of the other teams. Therefore the teams commit to the decisions together.

At both levels, making clear up front that everyone will share both the glory and the blame encourages higher levels of involvement in decision-making and higher motivation to carry those decisions through. The resulting standard is: “No blaming of an individual for failed stories, or of other teams for failed epics.”

Customer Satisfaction

Given the emphasis in Agile on customer satisfaction, it should seem odd to you that high satisfaction ratings are not an Agile Performance Standard. I encourage you to develop a standard along those lines, but feel I cannot make a specific suggestion that will translate across organizations the way those above can. Measurement is the key barrier. You must have a way to objectively and fairly measure satisfaction, with enough potential variation from one period to the next to be meaningful.

Many companies measure end-user satisfaction, of course. You know this from the constant requests for your ratings as a customer, in pop-up Web surveys; e-mails after you make an online purchase; phone system requests after you speak with Customer Service; and links printed on paper receipts.

In the case of a Customer Support team, it is easy to tie these results to the team. Just as important, there are many inputs from a lot of different people, so the numbers can change a lot. Reports from 200 people in a month on a three-point satisfaction scale yield 600 possible points, so it’s relatively easy to prove improvement. To turn a 90% satisfaction rate (540/600) into a 92% rate (552/600), improve your approach such that 12 people each give you one extra point. Also, with more raters, personal biases are less of a problem. Even if some of your customers are prejudiced about your nose ring or your gender, you can still find a dozen who won’t care. (Or remove the nose ring while at work!)
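The arithmetic is easy to verify; a minimal sketch, where the 200 raters and three-point scale are just the figures from this example:

```python
def satisfaction_rate(points_earned, raters, scale_max=3):
    """Satisfaction as a percentage of the maximum possible points."""
    return 100 * points_earned / (raters * scale_max)

print(satisfaction_rate(540, 200))  # 90.0
# Twelve extra points (2% of the 600-point base) lift the rate to 92%:
print(satisfaction_rate(552, 200))  # 92.0
```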

However, most workers do not directly provide deliverables to large numbers of people, and they are not solely responsible for the “sat score.” (Even Customer Support people can only do so much; if the product they are supporting sucks, they can get low scores despite doing their support duties perfectly.) Most people are in the reverse case where their input is only a small part of a larger output. If your team is responsible for laying water pipes in a new residential neighborhood, the project manager may be the only person you have to please. Here’s hoping the PM isn’t bigoted against the majority race on your team—nothing you do is likely to get a positive sat score. Beyond that, a problem with leaky pipes could have occurred in the design, materials, manufacture, storage, transportation, and/or laying of the pipe. Plus the PM has often moved on by the time problems arise, and the general contractor may never know who was at fault!

That scenario also raises the issue of delays in measurement. Say customers are really unhappy with one feature on a new oven design. You could potentially tie that to the team who designed that feature. But there may be a gap of years from the time they developed it to the time reports come rolling in. And again, the team members don’t have full control: Maybe they gave the product manager exactly what she wanted.

A more logical approach seems to be having your internal Customer supply the ratings. Now you are back to the low-variability issue, however. One person using a 10-point scale is likely to only go up or down by a point or two per period, but that jump would equal a substantial 10%−20% change. Furthermore, they will be influenced by the desire to keep a good working relationship with you, and can’t be anonymous, so they might not give honest ratings. Even if you throw in some other stakeholders, that doesn’t help much. With one or a few raters, bias becomes a factor again, and the ratings can be strongly impacted by irrational demands. I have been forced to work with more than one product or sales manager who made impossible promises to their customers without consulting the R&D teams, and then got angry when the teams could not deliver on those promises.

In short, I have rarely come across cases outside of customer service where meaningful, fair customer satisfaction ratings could be gathered and tied to specific teams, with results that were mostly under the teams’ control. The best way around this is to get satisfaction scores for the entire enterprise and apply them to all teams. Every team in your organization impacts the satisfaction of the customers. Imagine how delayed your projects would get if the janitorial staff wasn’t on top of things! Similar to this approach but more specific, on projects with external users, you could gather results on the deliverables and apply them to everyone on the project. That is, if only 73% of customers like the oven from the example above, everyone from the top managers who signed off on the project to the product managers to R&D to Manufacturing has room to improve.

Without knowing a lot of details about your organization, I can’t tell you how best to measure satisfaction fairly, though there probably is a way. If you can figure it out, I’m all for it: Create an Agile Performance Standard based on customer satisfaction and add it to the list.

Otherwise, in effect most Agilists seem to treat customer satisfaction as binary: Either you are satisfying your Customer and stakeholders, or you aren’t. Assuming they are fully engaged as described on this site, you will know. And if they aren’t fully engaged, their lack of satisfaction is at least partially their fault.


Full references are in the Agility Bibliography.

[1] Schatz & Abdelshafi 2005.

[2] Simplilearn Solutions 2013.

[3] Smith 1999.

[4] Noor & Fahad Khan 2014.
