The Earned Schedule Exchange


November 29, 2021
ES Basics Revisited: Applying Cost of Rework

Concept: When work is done out of sequence, knowledge gaps are inevitable. To bridge the gaps, performers must make assumptions about missing inputs. At some point, the missing information becomes available, and when it does, it often differs from the assumptions. That means what has already been done must be reworked.

Knowing that rework will be required, and even which tasks are likely to cause it, is important. But that is only part of what you need to know to stop losing time. What’s missing is a way to assess the impact of rework.

[Image: Cost of Rework Loop]

Practice: At ProjectFlightDeck, we initially set numeric thresholds to gauge the impact of rework. We said that baseline deviations of up to 10% were generally manageable, but if they crossed that threshold, caution was warranted. Anything beyond 20% was considered out of control and required negotiations on budget and timeline.
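As a minimal sketch, the initial threshold scheme can be expressed as a simple classification. The function and parameter names here are illustrative, not from the post:

```python
def classify_deviation(rework_cost, baseline_cost):
    """Classify rework impact by its percentage deviation from baseline.

    Thresholds follow the initial rules of thumb described above:
    up to 10% manageable, 10-20% warrants caution, beyond 20% is
    out of control. Names are illustrative assumptions.
    """
    deviation = rework_cost / baseline_cost
    if deviation <= 0.10:
        return "manageable"
    if deviation <= 0.20:
        return "caution"
    return "out of control"
```

For example, $120K of rework against a $1MM baseline (12%) lands in the "caution" band, which is exactly the ambiguity discussed next: the label alone says nothing about how serious the breach really is.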

We quickly learned, however, that the numbers alone were not enough. What does it mean for rework costs to be running at 12%? Is that bad? If so, how bad? The same threshold breach seemed less serious on one project than on another. A 12% deviation had one impact given a $500K budget and an altogether different impact given a $500MM budget.

We then tried to identify common thresholds by pooling the intuition and experience of PMs. Again, from PM to PM and project to project, there was so much variation that we gave up on a single set of thresholds. Instead, we categorized projects along several dimensions and posited thresholds for each category.

That didn’t last long. The matrix of threshold values became so unwieldy that even experts found it difficult to navigate. Worse, the maze of thresholds obscured our clients’ line of sight to the reasons for project decisions and action plans.

What we needed was a “yardstick” that was consistent, simple, and accessible.

Consistency: the yardstick must apply across PMs and projects while still acknowledging, based on empirical evidence, the unique features of each project.

Simplicity: it must be easy to use, with minimal demand on the project’s time.

Accessibility: it must yield clear, quick guidance on the severity of deviations so that PMs can readily devise remediating steps, and clients can readily understand the reasons for the PM’s decisions and action plans.

Allowances for uncertainty gave us the foundation for the yardstick.

In all projects, there is uncertainty. As Glen Alleman has pointed out, there is epistemic uncertainty and aleatory uncertainty. Epistemic uncertainty results from the lack of knowledge and can be reduced. Aleatory uncertainty results from randomness and cannot be reduced. [1]

To address epistemic uncertainty, projects identify the likelihood and seriousness of the uncertainty being instantiated. Risk management plans identify work that addresses such uncertainty. The work is represented in the schedule as discrete tasks (the Contingency allowance).

To address aleatory uncertainty, similar projects can be analyzed to assess the probability of on-budget delivery. Based on the analysis, a block of work (and cost) can be allocated as Reserve. [2] Unlike Contingency, Reserve does not buy more information—it covers the naturally occurring variations in timeline and budget.

As per PMI usage, the Budget at Completion (BAC) represents the baseline. It includes Contingency, but it excludes Reserve. (Recall that Contingency is represented by specific tasks in the schedule, whereas Reserve is held as a block outside the schedule.)

The yardstick is straightforward: measure the cost of rework against the project’s Contingency and Reserve allowances.

The cost of rework yardstick meets the criteria set out earlier.

Consistency: The yardstick applies across PMs and projects. It leverages the investment made in identifying uncertainties and associated risks. It also quantifies the cost of work for specific risks and of lump sum allocations. In those ways, it acknowledges empirical evidence unique to each project.

Simplicity: It is simple to understand and apply. (Admittedly, the yardstick depends on uncertainty analysis, risk management, and budget tracking—none of which are trivial processes. But, if they are not being done on the project, there are bigger issues than the cost of rework.)

Accessibility: The figures give clear, quick guidance on the severity of deviations. So, PMs know immediately if there’s a problem and how severe it is. For clients, the financial boundary lines demarcated in the figures are familiar and decisive.
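The comparison against allowances can be sketched in a few lines. This is a sketch under stated assumptions: the post’s figures are not reproduced here, so the exact boundary behavior (rework within Contingency, rework drawing on Reserve, rework exhausting both) is assumed, and the function and label names are hypothetical:

```python
def rework_yardstick(rework_cost, contingency, reserve):
    """Gauge rework severity against the project's own allowances.

    Assumed boundaries (not confirmed by the post's figures):
    - rework within Contingency is absorbed by planned risk work;
    - rework beyond Contingency draws down Reserve;
    - rework beyond both breaches the approved funding envelope.
    """
    if rework_cost <= contingency:
        return "within Contingency"
    if rework_cost <= contingency + reserve:
        return "Contingency breached; drawing on Reserve"
    return "Reserve breached; renegotiate budget and timeline"
```

Because the boundaries come from the project’s own uncertainty analysis rather than from a fixed percentage, the same rework cost can be manageable on one project and a breach on another, which is the consistency the earlier threshold matrix failed to deliver.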

When a threshold is breached, what action is required?

The generic response is the same, regardless of whether it is Contingency or Reserve that is breached:

  • identification of potential problems,
  • root-cause analysis,
  • action planning for remediation,
  • remediation, and
  • stakeholder communication.

The urgency of the analysis, the extent of remediation, and the type of communication vary, depending on the root cause, the remediating action, and whether it is Contingency or Reserve that is breached.

 

That completes the review of ES basics. Before moving on to a new topic, there’s one other aspect of Schedule Adherence to cover. It’s not being "revisited". It’s new, as of 2020. But, it rounds out the treatment of SA and deserves to be mentioned in this series of posts. Here it is: duration increase due to rework.

 

Notes:

[1] See Alleman on Uncertainty and Risk.

[2] I have simplified the terminology here. As Alleman has pointed out, the allowance for aleatory uncertainty is sometimes called Margin, and epistemic uncertainty can be addressed simply by a block of time and be called Reserve.
