Concept: When work is done out of sequence, knowledge gaps are inevitable. To bridge the gaps, performers must make assumptions about missing inputs. At some point, the missing information becomes available, and when it does, it often differs from the assumptions. That means what has already been done must be reworked.
Rework is doubly wasteful. First, and obviously, redoing what has already been done is lost time and money—time that could be spent completing new deliverables. Second, and less obviously, the time spent making the assumptions is also lost—it, too, could have been spent completing new deliverables. In both cases, performers grow frustrated with a work process that is the exact opposite of “one and done”. They want to stop wasting time.
Knowing that rework will be required, and even knowing which tasks are likely to cause it, is important, but it is only part of what you need to stop losing time. What’s lacking is a fix on the impact of rework.
[Stamping out Waste, Part 2:]
Practice: At ProjectFlightDeck, we initially set numeric thresholds to gauge the impact of rework. We said that baseline deviations of up to 10% were generally manageable, that deviations between 10% and 20% warranted caution, and that anything beyond 20% was out of control, requiring negotiations on budget and timeline.
We quickly learned, however, that the numbers alone were not enough. What does it mean for rework costs to be running at 12% of the budget? Is that bad? If so, how bad? A threshold breach seemed less serious on one project than the same breach on another. That 12% deviation had one impact given a $500K budget and an altogether different impact given a $500MM budget.
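The initial gauge can be sketched as a simple classification on the deviation percentage. (This is my reconstruction for illustration; the function name and labels are not ProjectFlightDeck's actual tooling.)

```python
def gauge_deviation(baseline_cost: float, actual_cost: float) -> str:
    """Classify a baseline deviation using the initial fixed thresholds.

    Up to 10%: generally manageable; 10-20%: caution warranted;
    beyond 20%: out of control, renegotiate budget and timeline.
    """
    deviation = abs(actual_cost - baseline_cost) / baseline_cost
    if deviation <= 0.10:
        return "manageable"
    elif deviation <= 0.20:
        return "caution"
    else:
        return "out of control"

# The same 12% deviation triggers "caution" on any budget size,
# which is exactly why raw percentages proved insufficient:
print(gauge_deviation(500_000, 560_000))          # $60K over a $500K budget
print(gauge_deviation(500_000_000, 560_000_000))  # $60MM over a $500MM budget
```

Both calls return the same label, even though the absolute exposure differs by three orders of magnitude.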
We then tried to identify common thresholds by pooling the intuition and experience of PMs. [1] Again, from PM to PM and project to project, there was so much variation that we gave up on a single set of thresholds. Instead, we categorized projects along several dimensions and posited thresholds for each category.
That didn’t last long. The matrix of threshold values became so unwieldy that even experts found it difficult to navigate. Worse, the maze of thresholds obscured our clients’ line of sight to the reasons for project decisions and action plans.
What we needed was a “yardstick” that was consistent, simple, and accessible.
- Consistency: it had to apply across PMs and projects while still acknowledging the unique features of each project, based on empirical evidence.
- Simplicity: it had to be easy to use, with minimal demand on the project’s (usually the PM’s) time.
- Accessibility: it had to yield clear, quick guidance on the severity of deviations so that PMs could readily devise remediating steps and clients could readily understand the reasons for the PM’s decisions and action plans.
Allowances for uncertainty gave us the foundation for the yardstick.
[Stamping out Waste, Part 3:]
In all projects, there is uncertainty. As Glen Alleman has pointed out, there are two kinds: epistemic uncertainty and aleatory uncertainty. Epistemic uncertainty results from a lack of knowledge and can be reduced. Aleatory uncertainty results from randomness and cannot be reduced. [2]
To address epistemic uncertainty, projects identify the likelihood and seriousness of the uncertainty being instantiated. Risk management plans identify work that deals with such uncertainty. The work is represented in the schedule as discrete tasks (the Contingency allowance).
To address aleatory uncertainty, similar projects can be analyzed to assess the probability of on-budget delivery. Based on the analysis, a block of work (and cost) can be allocated as Reserve. [3] Unlike Contingency, Reserve does not buy more information and thereby reduce uncertainty. Instead, it is "margin" that covers naturally occurring variations in timeline and budget.
As per PMI usage, the Budget at Completion (BAC) represents the baseline. It includes Contingency, but it excludes Reserve. (Recall that Contingency is represented by specific tasks in the schedule, whereas Reserve is held as a block outside the schedule.)
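Under these definitions, the baseline and the total committed amount can be expressed as a simple sum. (The figures and variable names below are hypothetical, chosen only to make the relationships concrete.)

```python
# Hypothetical figures for illustration.
fixed_budget = 400_000  # work expected regardless of risk
contingency = 50_000    # discrete risk-response tasks in the schedule
reserve = 30_000        # block held outside the schedule for aleatory variation

bac = fixed_budget + contingency  # Budget at Completion: the baseline
committed = bac + reserve         # total committed, including margin

print(bac)        # 450000
print(committed)  # 480000
```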
The yardstick is straightforward:
Figure 1
Figure 2 illustrates the yardstick. Note that, as risks might or might not be instantiated, Contingency and Reserve represent variable costs to the project. Let’s call that cost allowance the Variable Budget. By contrast, work apart from risk management is expected to be performed. Let’s call that cost allowance the Fixed Budget.
Using the illustration and terminology, a detailed description of the yardstick follows. If the brief explanation already given is sufficient for your purposes, skip the Details.
Figure 2
Details: As explained in last month’s post, the historical emphasis has been on uncertainties that cause budget overruns. But, there are also uncertainties that result in budget underruns.
It’s important to identify underruns because they can indicate inflated estimates. There are cases where an estimate goes through several levels in an organization, and each level adds its own uncertainty allowance. The result is a budget that unnecessarily drains resources from other initiatives: running under plan is not always better.
Uncertainties that cause overruns are addressed by “overrun” Contingency (i.e., +Contingency) and Reserve (i.e., +Reserve) allowances. Uncertainties that lead to underruns are addressed by “underrun” Contingency (i.e., -Contingency) and Reserve (i.e., -Reserve) allowances.
In Figure 2, if the cost of rework is greater than the Fixed Budget, but less than the Fixed Budget plus Contingency (i.e., it is within BAC), rework costs are on track to complete within the budget, and the project is labelled as Green. If the forecast is under the Fixed Budget but more than the Fixed Budget less Contingency, the outlook for finishing within budget for rework is also good, and the project is again labelled as Green. [4]
The area around the Fixed Budget bounded by Contingency is called the Contingency Zone. As expressed in Figure 2, rework costs in the Contingency Zone indicate a sound plan and a good outlook for meeting the budget. The project is labelled as Green.
As shown in Figure 2, if the cost of rework exceeds the Fixed Budget plus Contingency but remains within the Reserve Zone, the cost of rework is not on track to complete as planned (i.e., it’s not within the BAC), but it should still finish as committed. [5] So, the project is labelled as Yellow.
If the cost of rework is less than the Fixed Budget minus Contingency, the project is, again, not on track to complete as planned and is labelled as Yellow. In this case, it is Yellow not because the cost of rework is likely to exceed budget but because the planned amount is unsound. After all, an allowance was made for less rework, and the forecast makes it appear that something was mistaken. Either the relevant uncertainties were incorrectly identified or the allowance for less rework was too small (i.e., there is considerably less rework than anticipated). [6]
The yardstick can also be represented as a set of threshold values, as illustrated in Figure 3.
Figure 3
If the cost of rework is within the Fixed Budget plus or minus the Contingency allowance, the budget status is Good, i.e., the budget appears to be realistic and within reach. The project is labelled as Green.
If the cost of rework is beyond the Contingency Zone but within the Reserve Zone (regardless of whether it is overrun or underrun), the Contingency allowance is insufficient, implying that the plan is questionable. Whichever Reserve Zone the cost of rework inhabits (overrun or underrun), the budget status is poor, as the budget has either under-estimated or over-estimated the amount of rework. The project is labelled as Yellow because it might still be salvaged by Reserve.
Finally, if the cost of rework exceeds both the Contingency Zone and the Reserve Zone, the project will not meet the committed cost of rework, even if both Contingency and Reserve are consumed. The budget status is very poor because it has badly under-estimated the amount of rework. The project is labelled as Red because Reserve will not save it.
By the same token, if the cost of rework is below both the Contingency Zone and Reserve Zone, it appears that there is far less rework than planned. The budget status is very poor because it appears to be wildly inflated. Again, the project is labelled as Red because neither Contingency nor Reserve were set at appropriate levels.
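The zone logic described above can be condensed into one classification function. (This is a sketch of my reading of the rules, not ProjectFlightDeck's actual implementation; for simplicity it assumes the overrun and underrun Contingency and Reserve allowances are symmetric, which the post notes need not be the case.)

```python
def yardstick(rework_cost: float, fixed_budget: float,
              contingency: float, reserve: float) -> str:
    """Classify forecast rework cost against the Contingency and Reserve zones.

    Green:  within Fixed Budget +/- Contingency (the Contingency Zone).
    Yellow: outside the Contingency Zone but within +/- (Contingency + Reserve)
            (the Reserve Zone); the plan is questionable but may be salvaged.
    Red:    beyond both zones, over or under; Reserve cannot save the plan.
    """
    deviation = abs(rework_cost - fixed_budget)
    if deviation <= contingency:
        return "Green"
    elif deviation <= contingency + reserve:
        return "Yellow"
    else:
        return "Red"

# Illustrative figures: $400K fixed budget, $50K contingency, $30K reserve.
print(yardstick(430_000, 400_000, 50_000, 30_000))  # within the Contingency Zone
print(yardstick(470_000, 400_000, 50_000, 30_000))  # in the overrun Reserve Zone
print(yardstick(310_000, 400_000, 50_000, 30_000))  # below even the underrun Reserve Zone
```

Note that the underrun branch falls out of the absolute value: a forecast far below plan is flagged just as an overrun is, matching the point that running under plan is not always better.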
Refactoring: As the yardstick was applied at ProjectFlightDeck, we made one refinement. We now identify the portion of uncertainty allowance due specifically to rework.
For us, this is an easy step to take because we already have a “Refactoring” task in our standard Work Breakdown Structure (WBS). [7] We also routinely identify tasks inserted in the schedule to address specific risks. This is useful not only for assessing rework but also for updating the Risk Register we maintain for each project.
For projects that lack such information, the cost of rework must be viewed as the “canary in the mine”. Exceeding thresholds means the project is in trouble, regardless of how the schedule performs otherwise. This is because there are two major factors that affect schedule performance: impeded/constrained delivery and out-of-sequence delivery.
Even if no tasks appear to be impeded or constrained, tasks can be performed out of sequence. (For an explanation of the distinction, click here.) Out-of-sequence delivery breeds rework costs. If rework alone exceeds thresholds, the project is in trouble. Why? Because it is likely that there will be problems beyond rework. If they occur, there is no buffer left to absorb them.
So, if a project does not differentiate the cost of rework from other portions of Contingency and Reserve, any threshold breach due solely to rework represents a serious problem. The only question is: how serious? The answer depends, as described above, on which thresholds are breached.
Conclusion: The cost of rework yardstick described above meets the criteria set out earlier.
- Consistency: The yardstick applies across PMs and projects. It leverages the investment made in identifying uncertainties and associated risks. It also quantifies the cost of work for specific risks and of lump sum allocations. In those ways, it acknowledges empirical evidence unique to each project.
- Simplicity: As illustrated by the figures, it is simple to understand and apply. (Admittedly, the yardstick depends on uncertainty analysis, risk management, and budget tracking, none of which is a trivial process. But if they are not being done on the project, there are bigger issues than the cost of rework.)
- Accessibility: The figures give clear, quick guidance on the severity of deviations. So, the PMs know immediately if there’s a problem and how severe it is. For clients, the financial boundary lines demarcated in the figures are familiar and decisive.
The next post describes taking action on assessments from the yardstick.
Notes:
[1] The problems of intuitive decision making were detailed by Kahneman and Tversky (see References). Although not without controversy, their views are widely accepted as describing systematic errors that frequently occur when quick, intuitive judgments are made. The antidote to such errors is deliberate, rational thinking attended by objective data. The difference is popularized in Kahneman’s book, Thinking, Fast and Slow (Kahneman, 2011).
[2] See Alleman on Uncertainty and Risk.
[3] I have simplified the terminology here. As Alleman has pointed out, the work for aleatory uncertainty is sometimes called Margin, and epistemic uncertainty can be addressed simply by a block of time and be called Reserve.
[4] Think of the scenario this way: risks might or might not be realized. If they are not realized, Contingency tasks are not done. Admittedly, it is unlikely that no risks are instantiated, but it is possible. If there are none, the budget runs under the Fixed amount by the Contingency allowance.
[5] Recall that the committed cost of rework includes Reserve and is therefore greater than the baseline (BAC), which includes only Contingency.
[6] In this scenario, not only are Contingency tasks unrealized but also naturally occurring variations are absent. Without such variations, Reserve is also not utilized. It appears that the plan itself is faulty.
[7] We have developed generic WBSs that we re-use on projects. We have found that activities in the WBS are similar to tasks in Agile Task Lists. Hence, we use WBSs on both plan-driven and Agile projects. We include Refactoring in the WBSs because we recognize a persistent need to revise deliverables based on knowledge gained during their creation.
References:
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Lipke, W. (2012). Schedule Adherence and Rework. CrossTalk, November-December.
Lipke, W. (2011b). Schedule Adherence and Rework. PM World Today, July.
Lipke, W. (2011a). Schedule Adherence and Rework. The Measurable News, Issue 1 (corrected version).
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185 (4157), 1124-1131.