- Agile Budgeting
- Agile Contracts
- Making Gates Agile
The budgeting process in most companies is simply broken. It is based on the same falsehood underlying the Waterfall Myth: that humans can predict the future. Many companies then refuse to let people change the goals even when hard data shows those goals will be missed, and beat up middle managers for not meeting them. This, in turn, causes a host of bad behaviors ranging from manipulation of the data to outright lying. In publicly held companies, it can drive organizations to defer revenues or profits in a given period, because they believe stock value is hurt as much when companies over-shoot their goals as when they under-shoot them! (I have witnessed the delay of a shipment to customers by a week, surely reducing customer satisfaction, so the revenue would not post until the next fiscal month.) To some degree this belief is true in the short term, but it is self-perpetuating: because analysts are given few other metrics to use, they conclude that the company doesn’t know what it is doing rather than looking at the underlying causes of a missed goal.
The company also thinks it has to project high earnings to look good, while thinking it is penalized when it fails to meet them. “In fact,” the American Association of Individual Investors states, “studies show that over the long run, stocks with high expected earnings growth tend to underperform stocks with low growth rates and low expectations because it is difficult to meet and exceed high expectations over an extended period of time” (italics added).
What really matters for shareholder value is earnings over time. For example, a European Central Bank analysis of 30 years of data found a significant correlation between earnings and stock prices in 13 countries including the United States. A book on valuation by McKinsey & Company summarizes the findings this way:
Companies with higher returns and higher growth (at returns above the cost of capital) are valued more highly in the stock market.
To value stocks, markets primarily focus on the long-term and not short-term economic fundamentals. Although some managers may believe that missing short-term earnings per share (EPS) targets always has devastating share price implications, the evidence shows that share price depends on long-term returns, not short-term EPS performance itself.
By contrast, consider startup companies. Startup investors and managers know there is a “burn rate,” the amount of money the company will spend each period of time to pay salaries and rent and keep the lights on. They also know the guesses at revenues the entrepreneurs have made—and that those are just guesses. The company focuses on work that will bring in the most money quickly.
Believe it or not, there are entire companies that operate this way even after achieving significant size and going public. They do not create detailed budgets, instead focusing on the likely value of various initiatives. One of the largest banks in Europe—one with no taint of misconduct during the worldwide recession of the late 2000s—is an example. Others, like Southwest Airlines, at least budget realistically from the bottom up instead of forcing a fit to an overall figure based on wishful thinking.
For a proven alternative to the annual budgeting trap, I highly recommend the book Beyond Budgeting from Harvard Business School Press. It describes an approach that saves the months of labor hours and stress normally wasted each year on budgeting exercises, while also improving financial performance and customer satisfaction in a wide range of company types.
Any CFO should love this approach. As the book authors and conference speakers have pointed out, every CFO they’ve met already knew how much he or she wanted to spend in a year. In this agile philosophy, as soon as they know that figure for the next year, budgeting is done! Then the organization delivers as much as it can for that price as shown in the next two sections.
If you are given the option, I recommend agile teams take the same approach as entrepreneurs. Treat each team or program as a little startup, with a “run rate” exactly like a startup’s burn rate. A run rate totals the team’s costs each year, usually projected over multiple years with a small adjustment for raises, supply cost inflation, etc. Assume the team’s run rate will be the same regardless of what the team is working on or its size; for the purposes of this exercise, adding or losing a few people will not significantly change the rate.
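Projecting a run rate is simple arithmetic. Here is a minimal sketch; the $2 million annual cost and 3% yearly adjustment are hypothetical figures for illustration, not numbers from the text:

```python
# Project a team's run rate over several years, with a small annual
# adjustment for raises and supply-cost inflation. The $2M cost and
# 3% adjustment below are hypothetical.

def run_rate(annual_cost, years, adjustment=0.03):
    return [round(annual_cost * (1 + adjustment) ** year) for year in range(years)]

print(run_rate(2_000_000, years=3))  # [2000000, 2060000, 2121800]
```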
Taking this approach means all you have to focus on is which projects are likely to add the most value:
In the figure, the run rate (dotted line) is essentially flat with small cost-of-living adjustments. Projected annual revenues for the different projects overlay it. Clearly, Project C is the one the team should focus on from a strictly financial standpoint. Note that it doesn’t matter if the cost of the team goes higher because people are added to the team. It also doesn’t matter if the product manager who calculated the potential revenues was too optimistic about how fast they would come in each case. These circumstances would change the angles of the three lines so additional revenues are earned earlier or later, and profit margins would get wider or narrower. However, the changes would impact all three project lines roughly the same, so unless information specific to “C” drops its line below that of Project B, “C” is still the way to go. Substitute costs for an entire business unit instead of a single team, and the result is the same.
Perhaps, though, the company wants to also do “B” based on strategic reasons, like creating a relationship with a particular customer or preparing for the future market. It can quickly figure out from the budget-to-value ratio how much of the run rate to apply to each project. In a multi-team program, this translates to what percentage of the teams’ sprint or release plans to apply to each project. Want to give Project C 60% of your organization’s effort, Project B 30%, and continuous improvement efforts 10%? If you have the luxury of 10 full-stack teams, have them split the projects out at a 6:3:1 ratio (six teams on “C,” etc.). Otherwise have the Release Planners assign epics from each project to each release at roughly that ratio. As explained elsewhere on this site, size differences will average out over the releases.
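For team counts that do not divide as neatly as the 10-team example, the split can be computed mechanically. A minimal sketch; the largest-remainder rounding is my own choice for illustration, not a FuSca rule:

```python
# Sketch: split a whole number of units (teams, or epics per release)
# across projects in proportion to a budget-to-value ratio. The project
# names and the 6:3:1 ratio come from the example above.

def allocate(total_units, ratio):
    """Apportion `total_units` by `ratio`, handing leftover units
    to the projects with the largest fractional remainders."""
    total_ratio = sum(ratio.values())
    shares = {name: total_units * r / total_ratio for name, r in ratio.items()}
    result = {name: int(share) for name, share in shares.items()}
    leftovers = sorted(shares, key=lambda n: shares[n] - result[n], reverse=True)
    for name in leftovers[: total_units - sum(result.values())]:
        result[name] += 1
    return result

print(allocate(10, {"Project C": 6, "Project B": 3, "Improvement": 1}))
# {'Project C': 6, 'Project B': 3, 'Improvement': 1}
```

With 7 teams instead of 10, the same ratio yields a 4:2:1 split, and the shares always sum to the team count.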
Fixing slow progress by justifying additional budget for people is easy if you are using capacity planning. Given the hard data that technique supplies, you will be able to:
- Show that people are maxed out.
- Calculate how many people you will need to speed up a project.
- Prove that progress is slowed on a given project because you are running out of capacity for this or that role (or lack of an additional team).
Since most executives think Agile is something development teams do, they may not change their demands for project-level costing. Fortunately, FuSca™ release planning allows teams to come up with an initial budget number with relative ease. Once you know which teams will work on the project, the Agile Release Manager can:
1. Facilitate creation of the proposed epics of the Minimally Releasable Product.
2. Divide the number of epics by the Epic Velocity (the number of epics completed per release by the teams involved) to obtain the number of releases.
3. Determine the number of weeks within that number of releases.
4. Obtain from your Finance or Human Resources department either of these for each team:
   - The total compensation of all team members per week, or
   - The “standard labor rate” used in calculating costs per person, which you can multiply by the number of members times 40 hours to determine a weekly cost.
5. Multiply the Step 3 number of weeks by the Step 4 figure.
6. Add the costs for any equipment, software, and other supplies, using traditional project management means of estimating these.
Example: For two software teams totaling 16 people, a standard labor rate of $68 per hour, and work equivalent to 2.7 releases (rounded up to 3) of three months (12 weeks) each, with no additional supply costs:

- 16 x 40 hours x $68 = $43,520 per week
- $43,520 x 12 weeks = $522,240 per release
- $522,240 x 3 releases = $1,566,720
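The steps above can be sketched as a single function. The 27 epics and velocity of 10 are hypothetical inputs chosen to reproduce the example’s 2.7 releases; everything else follows the steps directly:

```python
# Sketch of the budgeting steps above. Epic counts are hypothetical,
# chosen to yield the example's 2.7 releases.
import math

def project_budget(epics, epic_velocity, weeks_per_release,
                   team_size, labor_rate, supplies=0.0):
    releases = math.ceil(epics / epic_velocity)   # Step 2, rounded up
    weeks = releases * weeks_per_release          # Step 3
    weekly_cost = team_size * 40 * labor_rate     # Step 4, labor-rate method
    return weeks * weekly_cost + supplies         # Steps 5 and 6

# Two teams totaling 16 people at $68/hour, three 12-week releases:
print(project_budget(epics=27, epic_velocity=10, weeks_per_release=12,
                     team_size=16, labor_rate=68))  # 1566720.0
```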
At the end of the release, the team(s) will have produced as much scope as humanly possible at high quality, and the enterprise can decide whether to “buy” another iteration by allocating that much budget to another release.
Hardware programs should begin with an initial “Revision A” project resulting in at least a digital prototype and a draft Bill of Materials (list of parts). After the program is approved based on those, estimate time and materials for the next iteration (revision), and so on until the final “rev.”
Nothing in the process promises specific deliverables. The estimate simply comes up with an amount of work, translated to a specific cost to provide a number. In fact, there is an even more streamlined approach in line with the Beyond Budgeting summary above. The same way CFOs know how much they want to spend in a year, most project sponsors know how much they want to spend on a project. Make that figure the project budget! The run rate per team times the number of teams tells you how many sprints/releases you will get for that amount, and teams using FuSca will deliver as much functionality as humanly possible in that time.
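The streamlined direction works the same way in reverse: fix the sponsor’s figure and compute how many releases it buys. A sketch; the $1.5M budget is hypothetical, and the $21,760 per-team weekly rate is half the example’s two-team weekly cost:

```python
# Invert the calculation: how many full releases a fixed project
# budget buys. The budget and run-rate figures are illustrative.

def releases_for_budget(budget, weekly_rate_per_team, teams, weeks_per_release):
    cost_per_release = weekly_rate_per_team * teams * weeks_per_release
    return budget // cost_per_release

print(releases_for_budget(1_500_000, 21_760, teams=2, weeks_per_release=12))  # 2
```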
One advantage to an Agile method with full-stack teams over waterfall is that the labor costs are directly correlated to time periods, making it much easier to understand the impacts of added scope and resulting sprints. In waterfall, different resources participate in the project at different times to some degree, and the percentage of their hours applied to the project on a given calendar date can be impacted as the schedule changes. This can make cost impacts a nightmare to calculate during a formal Change Management Process, and regularly renews fights over resources.
The Agile Manifesto is littered with ideas that confound the traditional new product development contract, beginning with two of the four primary statements:
- “Customer collaboration over contract negotiation.”
- “Responding to change over following a plan.”
In the vast majority of companies, development contracts or related statements of work specify what will be delivered by when at what cost. This is another driver behind executive insistence that project managers commit to and meet the “Triple Constraint” (scope, schedule, cost). On the other side of the negotiation table, customers who don’t understand Agile—which is still most of them—want to know what they are going to get for their money by when. That is easy to do when the product already exists. It is impossible to do when the product doesn’t exist.
Fortunately, research into customer satisfaction shows that these customers do not actually know what they want, which is to be happy with what they get for their money. One example of the science is a two-year study of 8,000 customers of internet-service providers, banks, and large retailers. “The gap between perceived quality and expected quality, called ‘expectancy disconfirmation,’ is a strong predictor of customer satisfaction,” it found. Another strong predictor was captured in a sample question the journal article quoted from another researcher: “Considering the products and services that your vendor offers, are they worth what you paid for them?” A number of studies cited in that article show that customer satisfaction foreshadows future purchases and customer retention.
The issue was directly addressed by an IT professor years ago (Gable 1996). In a survey of all participants in an IT consulting program sponsored by Singapore, he compared their responses about the project’s overall success to answers about its process and results. He thought one factor would be whether “actual project resource and time requirements (equaled) those originally estimated.” He was right about four other factors, but not this one. Budget and schedule accuracy had no correlation to overall satisfaction. Instead he found a factor he called “Performance Reasonability,” meaning whether the fees and time required were judged reasonable after the project. Do the job transparently and right, and your customers will be satisfied, whatever it cost and however long it took.
Agile rejects the false expectations set by the Iron Triangle, greatly increases transparency, and radically improves quality. It follows that Agile provides a smaller gap between what the customer expects and what the customer gets. Again, the critical finding: It is not when or what is delivered by itself that matters. What matters is how those compare to customer expectations, and whether the customers feel they got their money’s worth at the time of delivery.
A Contract to Match Reality
Corporate lawyers are almost universally ignorant of that finding, and that is not their fault. It is not up to lawyers to come up with the general approach to customer relations and related project governance, only to put that approach into the language needed to prevent disputes and protect their clients if those occur. Project management is not their area of expertise; their clients have not forced them to learn about agility; and there are very few sources from which they can learn about it.
I am not a lawyer, of course, but training on various contract types is part of becoming a project manager, and I have been involved in many contract negotiations over the years. By both means, I am well acquainted with the strengths and weaknesses of the common types for projects, from a business perspective. Not a “legal” one, obviously, so be sure to talk with your counsel about my recommendations. I will repeat some information from the rest of the site in this section so you can give it to them as background for that conversation. You might also give them the link to “The Difference between Agile and Waterfall” as background, or at least spend a few minutes explaining sprints and releases.
As of 2016, I could only find one book, one significant paper, and a couple of templates related to Agile contracts. In 2017 the Project Management Institute included two pages on contracts in its Agile Practice Guide. After reviewing all these, I propose here an approach that modifies a known type of contract to reflect the Agile mindset, which I will call the Agile Capped Time and Materials Contract. The standard T&M contract charges the client for labor time and supplies until the defined product (“scope”) is delivered to the customer’s satisfaction. In a waterfall world, this type rightly troubles customers, because they think they are taking all of the risk. That is, they fear that paying the vendor for the time spent encourages the vendor to stretch the project out.
Therefore, many such contracts add a maximum amount or “cap.” In theory this motivates the vendor to finish up before that amount is spent. Because of the impossibility of predicting delivery in R&D, what usually happens is the product is incomplete when the cap is met, causing acrimonious negotiations for a new cap and/or the hassles of transfer to a new vendor. At the very least, quality is harmed in the rush to get the product out the door, and the vendor ends up continuing work for free under the warranty. Plus, their reputation is hurt. In any of those cases, no one is happy (except the new vendor!).
Unfortunately, the other common contract types rely on the myth that project management can accurately predict the Iron Triangle (scope, schedule, and budget). Therefore, all too many companies I have worked with said they wanted to be agile, yet asked for—or too often, sent down from on high—the dates by which a specific feature or product would be delivered. Results like those in the previous paragraph occur under these contracts as well. The Agile Capped T&M Contract attempts to break these patterns by focusing solely on customer satisfaction. This is ensured by matching expectations to reality.
Like a standard capped T&M contract, the agile version specifies how much the vendor will be paid per period of time plus how supply and material expenses will be reimbursed. But in this case:
- Scope is only described as a goal statement and undated objectives within the contract or statement of work.
- A high level of customer involvement per the Agile Manifesto is prescribed, to:
  - Ensure expectations align with outcomes, nearly guaranteeing customer satisfaction.
  - Ease customer fears by giving them a high sense of control.
- Easy “off ramps” are provided, based on the assumption that both parties are better off moving on if the project isn’t working out.
Heavy customer involvement gives the customer complete visibility into, and control over, project decisions as often as every sprint. If scope is added, it is because the customer wants it despite the impact on project length. Fully aware of how many requirements are delivered in a given period, the customer does not have to ask the vendor for the impact of adding or changing a requirement. They already know! If adding resources is suggested, the customer understands the reason, and in fact may be the party suggesting them. A feature of Full Scale agile™ not shared by all Agile-at-scale models is hard proof that the teams are working as fast as they can without risking burnout or bugs.
One Contract, Two SOWs
Note the emphasis on “new” products in that discussion. For all but Web-based software projects, development is followed by software implementation and/or deliveries of hardware. In those cases, a light over-arching contract would encompass two more-detailed statements of work (SOWs). The first, which I call the “development” SOW, covers the design, creation, and testing of a new product or a new major version of an existing one. This would definitely use the Agile Capped T&M approach.
A second “delivery” SOW, if needed, covers the implementation of the final version as if it already existed. The Delivery SOW could cover implementations small enough to be highly predictable across clients or sites, and therefore could instead invoke one of the standard date-centric, waterfall-based contract models. So too would manufacturing deliveries after the final product design is approved by the customer.
The more variations there are from previous projects, however, the more I recommend the Agile approach for the Delivery SOW as well. Regardless of the type, it is possible for the two SOWs to be in effect at the same time. It would be very agile indeed to continue improving the product under a Development SOW while installing the base version under a Delivery SOW!
The Development SOW would specify terms something like the following, in loose chronological order:
- The customer and vendor representatives draft initial requirements:
  - For smaller, rapid-release projects, these may take the form of user stories provided directly to one or two teams via their Team Guide(s).
  - For larger multi-team efforts, these take the form of multi-story “epics.”
  - In either case, the point is not to identify the actual scope that will be delivered, but to estimate the amount of work.
- The vendor drafts a project charter resulting in:
  - A “price per sprint” or “per release” or, for multi-team programs using Joint Demonstration Ceremonies, per Demo.

    Note: I’ll call this the “price per period” (PPP).
  - The initial number of sprints/releases/Demos.
- The vendor and customer negotiate a cap based on the resulting cost (see “Budgeting an Agile Project”).
- Scope is not fixed until, for contracts using:
  - Stories and Sprints—The Planning Ceremony for each sprint, after which no stories in the sprint can be added.
  - Epics and Planning Releases—One sprint after the start of a planning release, after which no epics can be added.
Note: A planning release may or may not result in a version handed off to the customer, depending on the type of deliverables and customer preferences.
- Requirements can be paused, reduced, or deleted by the customer during a sprint/release, but cannot be replaced or revised to add scope.
Note: Teams that complete remaining requirements can work the next highest ones in the backlog, which may include new ones. This honors the Agile Principle about accepting change, because the new requirement can quickly be workshopped and its story or stories placed at the top of the backlog.
- Progress is reported primarily via customer participation in the Demonstration Ceremony (Joint or not), and/or “sprintly” using the format under “Send Sprintly Reports.”
- Customer acceptance testing takes place after each period (sprint, etc.), with defects communicated in a specified way detailed below.
- Initial delivery occurs, and the Delivery SOW takes over, for:
  - Software or services—One sprint/release after the customer signs off on acceptance testing.
  - Hardware—Within a specified period after the customer signs off, based on the company’s historic ramp-up time for manufacturing and delivering new products.
The Development SOW would require the customer to name an Agile Liaison (AL) who:
- Can make decisions on behalf of the customer.
Note: This means they are not simply messengers who have to check all decisions with higher managers, which would greatly slow the development process.
- Is the only person at the customer’s company authorized to funnel requirements to the vendor.
- Meets with the vendor representative weekly to reach agreement on the wording of the requirements proposed for the next sprint or planning release.
Note: In the case of releases, this would be done through participation in the normal release planning process.
- Replies to vendor representative contacts within one business day.
- Attends Demonstration Ceremonies.
- Identifies a backup within the customer’s company and:
  - Coordinates with that person to ensure seamless representation in the AL’s absence.
  - Communicates with the backup so they can step in without the vendor having to repeat much information.
If the vendor is using a tracking tool available via the Web, both individuals would be granted “Viewer” rights.
Note that acceptance testing by the customer is done after the vendor representative has “accepted” the requirements from the team as described earlier in this site. Any standard approach to “user acceptance testing” (UAT) is fine, and may result in “standalone defects” whose fix must be started in the next sprint. If any are found, approval of all fixes results in customer acceptance of the deliverables.
The Development SOW would specify that Acceptance Criteria negotiated for each requirement prior to the work are the sole grounds for accepting or rejecting a requirement. That way the vendor gets credit for delivery, and the customer recognizes their own impact on progress if the customer changes their mind after the work is done. During the sprint/release, as noted before, the customer can reduce or cancel the requirement. Afterward, the parties can add a new requirement to remove or revise that feature in the next increment. To reiterate, the customer doesn’t have to keep a feature they don’t like; they just have to recognize that the vendor delivered what they originally agreed upon.
A “Definition of Done” in the SOW specifies the assumptions the customer can make when a story is presented for acceptance even though they are not repeatedly specified in the Acceptance Criteria, such as:
- Types of tests performed.
- 100% passage of tests.
- Updating of documentation and training materials.
- Delivery of the prototype or placement of code in the customer’s UAT location, if relevant.
If at any point the budget cap will be passed in the next Planning Release (or some number of mutually agreeable sprints), negotiations are begun with the customer to either:
- Raise the cap.
- Accept the product as “good enough” as of the end of the warning period (see next section), at which point the Delivery SOW kicks in.
- Terminate the contract.
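The trigger for those negotiations can be stated precisely: before each planning release, project the spend one warning period ahead and compare it to the cap. A minimal sketch; all figures are hypothetical, not contract language:

```python
# Flag, ahead of each planning release, whether the next period would
# push spending past the cap, triggering the negotiation options above.
# All figures below are hypothetical.

def cap_warning(spent, price_per_period, periods_ahead, cap):
    """True when projected spend `periods_ahead` periods from now exceeds the cap."""
    return spent + price_per_period * periods_ahead > cap

print(cap_warning(spent=900_000, price_per_period=174_080,
                  periods_ahead=1, cap=1_000_000))  # True: time to negotiate
```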
Per the Agile Principle emphasizing customer satisfaction, the contract specifies no delivery dates. Development continues until the customer says they are happy. After final UAT and fixes are done, the Development SOW is considered fulfilled. Since testing and fixing have been happening throughout the project, there should be no bugs, or few enough to fix in a single sprint after they are identified. For hardware, defects may require a new product increment taking one or more Planning Releases. In either case, the emphasis on building quality in from the start means the teams can move on to their next projects, leaving a little time in their sprints for UAT bug fixing. Meanwhile, the Delivery SOW takes effect, likely overlapping the end of the Development SOW.
During development, the customer can cancel with two sprints’ notice (or longer, if more time would be needed to transfer the work to a new vendor). The customer would only pay for the number of sprints completed by the end of that time. This power provides the protection clients often consider to be missing from T&M contracts, because it creates an incentive for the vendor to maintain a pace and quality that keep the customer happy. The vendor still has the usual protection of these contracts, plus the costs of switching vendors. And both sides are protected by the high level of transparency—each is fully aware of how their actions are impacting the project.
The usual T&M invoicing schemes should work fine for the Agile version, except that you would replace the typical unit of measure (billable hours) with the simpler Price Per Period. For example, the Development SOW could call for:
- A down payment of 20% of the cap.
- Use of that to cover initial invoices until emptied.
- Billing of the remaining invoices at 90% until the product is accepted or the contract terminated.
- Payment of the remainder upon customer acceptance of the final iteration.
- No charges for the final defect-fixing-only sprint(s) or release(s), as this is effectively warranty work and the teams can be doing other development.
- A switch to the terms of the Delivery SOW at that point.
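Those sample terms translate to a simple payment schedule: the 20% down payment absorbs early invoices, later periods bill at 90%, and the remainder is due on acceptance. A sketch under those assumptions; the per-sprint prices are hypothetical:

```python
# Sketch of the sample invoicing terms above: a 20% down payment covers
# initial invoices, later ones bill at 90%, and the remainder is due on
# final acceptance. Period prices are hypothetical.

def invoice_schedule(cap, period_prices):
    down = 0.20 * cap
    credit, bills = down, []
    for price in period_prices:
        covered = min(credit, price)        # draw against the down payment first
        credit -= covered
        bills.append(0.90 * (price - covered))  # bill the uncovered portion at 90%
    remainder = sum(period_prices) - (down - credit) - sum(bills)
    return down, bills, remainder

# Five sprints at $174,080 under a $1M cap:
down, bills, remainder = invoice_schedule(1_000_000, [174_080] * 5)
print(round(down), round(remainder))  # 200000 67040
```

The first sprint is fully covered by the down payment, so its invoice is zero; roughly $67,000 of the $870,400 earned remains due on acceptance.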
All defects—and I do mean “all”—identified within a set period of time after the Delivery SOW kicks in would be fixed at no additional charge. This would drive teams to maximize built-in quality and prevent executives from trying to hurry the project by short-cutting testing.
Many larger companies with centralized planning and processes enforce “gates” where project leaders must convince an approval board of upper managers to continue the project. Generally there are a number of “stages” or “phases,” each leading to a “gate meeting.” Each gate requires the completion of a number of documents intended to ensure various good business practices are met or regulations satisfied. A key tool of corporate governance as commonly practiced, the phase/gate model is built on the idea that executives need to ensure the organization’s project dollars are spent wisely.
Unfortunately, this means people far removed from the needs of a unit’s customers are making decisions that directly impact the satisfaction of those customers. Every gate model I’ve seen was extremely waterfall in nature, progressing through the project in traditional project management phases. Each phase was filled with the kind of “comprehensive documentation” the Agile Manifesto devalues, most of it guaranteed to quickly become out of date in research and development projects. Finally, the phase/gate effort was usually so burdensome, I have yet to be in a company where the model was followed religiously.
Many companies and some academics have attempted to create hybrid methods that apply a traditional stage-gate model to the overall project, and an agile model to the development phases. My reading of this literature is that it shows a lack of holistic thinking. One 2012 journal article, for example, starts from the assumption that Agile was created specifically for control of team-level software development, ignoring roots in manufacturing and broader cross-functional applications detailed in this site. It goes on, however, to show how iterative planning has successfully been used in earlier planning phase-gates. I hope I have made clear in this site that Scrum can be applied to any kind of research and development. That includes iterative investigation of project feasibility!
For all these reasons, I debated for months whether to cover this topic at all. My simplest answer to a gate model is, “Don’t do it!” As you are about to see, FuSca covers all of the goals of corporate governance of projects that are valid in a decentralized, agile organization. However, because I keep running into people trying to fit the square peg of formal Agile into the round holes of a waterfall gate system, and I have an answer for rounding the edges off, I felt compelled to give it some space here.
Some of the documentation required in waterfall gate models is valid. Any project needs some level of business justification, such as that captured in a project charter. Companies in highly regulated industries like medical devices, or those maintaining certifications from organizations like Underwriters Laboratories Inc. (UL), are required to keep and provide some documents in prescribed formats. Many times gates are instituted because projects are not identifying these business requirements.
Instead of dictating which documents must be completed by when in each project, however, I meet those needs by two means. One is the Agile Liaison role. As mentioned in its description, each business unit stays aware of the project status and is responsible for requesting the information it needs through that role throughout the project. This is done by creating epics or stories for required documentation, such as certification forms.
The second method is the “template program.” A generic set of projects and/or epics are created that identify all of the “must-have” documents and other business requirements. Each time a new program is created, its sponsor copies the template program and revises it into his or her new program. For details, see “Create a Template Program.”
I have failed to convince any client company to do this, but it is possible to flex an existing gate model such that it does not interfere with agile organizations within. In fact, the 1986 article that introduced the term “scrum” to project work includes a gate model with four overlapping phases adapted from a sequential set of six. First used to develop a copier, the company later improved and spread the model. “Compared with that effort, a new product today requires one-half of the original total manpower,” the article says. “Fuji-Xerox has also reduced the product development cycle from 4 years to 24 months.”
Every gate model I have seen has phases along the lines of: “Propose, Initiate, Plan, Design, Develop, Test, Close.” In the section introducing agile, I likened agile to a series of mini-waterfalls. That is how I make the translation to an agile model.
We already established that you have to initiate an agile program like any other, so we will leave that phase in. Gate models usually split initiation into two or more steps, the first amounting to preparation for the second; in every case I have witnessed, the participants ended up combining the two with the complicity of the approvers. I deal with that reality by combining them, allowing requesters to prepare any way they choose.
As shown under “The 30-Second Explanation,” all of the remaining steps repeat with each iteration in Agile. At some point the code or hardware revision is released to customers for testing, but we don’t want to hold up the team from starting the next iteration. As the agile contracts section details, the iteration should have few if any defects, so the team sets aside some time for defect fixing and keeps going. From the aspect of the customer, though, the iteration does not end until UAT is done. We’ll keep a phase for that, but overlap it with the start of the next iteration. You’ve already seen in the release planning section how planning for that next iteration overlaps with the previous one. To meet your executives’ requirements for a gated approach, after initiation we’ll apply the remaining phases to each Version Release. In other words, for phase/gate purposes we treat each Version Release as a separate project, except it does not require another Initiation Phase. I assume here that you are doing multiple Planning Releases per Version Release, trusting you can figure out how to condense the steps below if not.
Here, then, are the phases and the deliverables required in each:
- Initiation (first Version Release only):
  - Draft epics.
  - Version plan, meaning proposed teams and resulting cost.
  - Project Charter.

Note: The remaining phases repeat for each Version Release.

- Planning:
  - Release plan for the first planning release, including proposed epics, teams, and cost.
  - Architectural Runway (first planning release) or updated architecture documentation.
- Development:
  - Accepted epics.
  - Release reporting.
- Close:
  - UAT defects fixed.
  - Project reporting required by the company.
For subsequent versions, the Planning phase overlaps the last planning release of the current version, and Closing overlaps the first planning release of the next version. Some examples will help you understand.
Let’s walk through the cycle with a couple of programs. In the table below, the first program gets approved before any release planning is done, and the Planning gate is passed before development starts. The rows after that are three-month planning releases (PRs). After getting approval at the end of its Initiation phase, Program 1 requires two version releases (VRs). Each VR repeats the last three phase/gates. (The first VR requires four PRs, while the second needs only three.) The Initiation and first Planning phases for Program 2 kick in as Program 1 is finishing up. That way, as the teams finish one, they go directly into the new one with no loss of productivity.
This table illustrates the overlapping gate cycles for a company using quarterly planning releases:
| PR | Program 1 | VR1 | VR2 | Program 2 |
|---|---|---|---|---|
| 2021-C | Development | | | Planning (P2 VR1) |
This is a very idealized model, so let’s look at a couple of variations:
- Hardware—Physical products stretch the cycle:
  - In companies requiring a bill of materials (BOM) for sign-off, the first “Planning” phase in the program could last an entire VR, but the BOM would still be developed through iterative PRs.
  - Each VR results in a “revision” of the product: VR1 produces “Rev. A,” usually just a digital prototype on which the BOM is based; VR2 produces a “Rev. B” physical prototype; and so on.
  - Because of the length of hardware stress testing, manufacturing qualification testing, etc., the “Close” phase could require more than one PR as well.
- Shorter Cycles—Software and non-technical programs that can release deliverables every PR will be in a state of continuous overlap in which they:
  - Develop the current Version Release (also the PR).
  - Close the previous VR.
  - Plan the next VR.
 AAII 2016.
 Durré & Giot 2005.
 McKinsey & Company 2005.
 Hope & Fraser 2003.
 An average for all workers or all workers in a category, usually “loaded” with average benefits costs and sometimes overhead like office rental and administrative support.
 Keiningham, et al. 2007.
 Opelt, Gloger, Pfarl & Mittermayr 2013.
 Arbogast, Larman & Vodde 2016.
 Cooper 2016.
 Takeuchi & Nonaka 1986.