The Earned Schedule Exchange


December 29, 2021
Basics *NEW*: Duration Increase from Rework

Concept: Schedule Adherence measures the alignment of value delivery with its planned sequence. Deviations cause P-Factor to decrease (and SAI to increase).

Value delivery gets out of sequence because work is impeded or constrained, or because it is done before it is planned. Put intuitively, the former is behind schedule, and the latter is ahead of schedule.

Work done ahead of schedule is doubly wasteful. First, it is a candidate for rework. That is, the work must be repeated to earn the same value. Second, the time spent in repetition could have been spent earning new value. Consequently, emphasis has been placed on reducing rework.

Knowing that rework will be required, and even knowing which tasks are likely to cause it, is important but only part of what you need to know to stop the waste. What’s missing is a way to assess the impact of rework.

Initially, the gap was filled by the cost of rework (Rework$). The metric is derived from ES equations and associated mathematical models. As the cost can be compared to allowances for uncertainty, assessment of its impact is quantitative.

But, there was a lingering doubt about the metric: wouldn’t it be even better if the quantitative assessment were in terms of time, rather than cost? ES is, after all, focused on time. So, why not assess impact in terms of duration?


Practice: In 2020, Walt dispelled the doubt. He introduced the Duration Increase (DI) from rework. It’s an impact measurement natively in terms of time.

Calculation of DI is simple, but its derivation is “difficult and complex” (Lipke, 2020). Let’s address the simple part first. Here are the calculations related to DI.

EQ01: DI = DI% * Number of Baseline Periods

EQ02: DI% = Intercept – Slope * SPIt

…where DI = Duration Increase, DI% = Percentage of Duration Increase, Intercept = 2.1092 * Percentage of Rework, Slope = 1.2068 * Percentage of Rework, and SPIt = Schedule Performance Index for time. [1]

The number of baseline periods is known. The constants are given. The SPIt and Percentage of Rework are available from other calculations. (For more on SPIt, click here. For more on Percentage of Rework, click here.) So, it’s easy to calculate DI.
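
For anyone who wants to see the arithmetic end-to-end, here is a minimal sketch in Python. The function and argument names are mine, not Walt’s; the constants and formulas are EQ01, EQ02, and the definitions above.

    # Minimal sketch of the DI calculation; names are illustrative, not from Lipke (2020).
    def duration_increase(pct_rework, spi_t, baseline_periods):
        """Return (DI%, DI). pct_rework and DI% are decimals (e.g., 0.10 for 10%);
        DI is expressed in baseline periods."""
        intercept = 2.1092 * pct_rework      # Intercept = 2.1092 * Percentage of Rework
        slope = 1.2068 * pct_rework          # Slope = 1.2068 * Percentage of Rework
        di_pct = intercept - slope * spi_t   # EQ02
        di = di_pct * baseline_periods       # EQ01
        return di_pct, di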

Derivation of the equations and constants is anything but easy. Why is it difficult?  Because there is no obvious way to model duration increase.

By way of contrast, consider the derivation of cost of rework. Cost of rework follows directly from the mathematical model for the amount of rework. Given cost = rework * rate, the amount of rework and its cost are proportional (at a set rate). In short, more rework entails more cost in the same pattern described by the mathematical model.

The connection between duration and rework, however, is not straightforward. The duration of a unit of rework can vary depending on factors such as resources, complexity, uncertainty, and others.

One option for modeling DI would be to gather and analyze empirical measurements. To do so, projects would have to identify and measure candidates for rework. They would then have to track the amount of rework that followed. Such measurements are not common practice today and would be difficult to obtain retrospectively.

An alternative approach is to use a simulation, and that is the approach taken by Walt.

His simulation comprised ten projects. Each used the same set of inputs. To avoid “cooking” the data, randomness was introduced into the calculation of key inputs, such as Earned Value. The P-Factor was used in the simulation and was varied so that it systematically increased to 1.0 by project end. Finally, six different levels of rework were applied, depending on the P-Factor.
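
To make the setup concrete, here is a toy sketch of how randomness might be injected into periodic Earned Value. It is my illustration, not Walt’s actual simulation, and the plus-or-minus 15% uniform noise band is an arbitrary assumption.

    import random

    # Toy illustration only: perturb planned periodic value with random noise
    # to produce an Earned Value figure for each period.
    def simulate_periodic_ev(planned_value_per_period, noise=0.15, seed=None):
        rng = random.Random(seed)
        return [pv * rng.uniform(1 - noise, 1 + noise)
                for pv in planned_value_per_period]

    # e.g., simulate_periodic_ev([10, 10, 10, 10], seed=1)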

The result was 54 output sets, which were consolidated into three graphs expressing High, Moderate, or Poor efficiency (SPIt).

[Figure: DurIncrHMLGraphs.png (simulation output consolidated into graphs by efficiency level)]

The output was then analyzed statistically. Two findings emerged:

  • Rework is not a consequence of schedule performance efficiency. Regardless of the SPIt, the percentage of rework appeared in exactly the same location in all three graphs (blue line).
  • There is negative correlation between efficiency and duration increase (DI%). The greater the efficiency (green line), the less the duration increase (red line).

 Although important, the first finding is unsurprising. In ES theory, sequence and volume of delivery are distinct and separate. Results from the simulation support the theory.

The second finding is a bit surprising. The amount of rework is independent of efficiency, yet rework would also seem to affect duration. So, why would greater efficiency correlate with a smaller duration increase? Perhaps the “blunt force” of greater efficiency promotes shorter durations, regardless of the amount of rework.

In any case, a strong negative correlation appeared in all but one level of rework in the simulation. (The r values [2] for the six levels were: .9769, .9728, .9625, .9698, .8443, and .5454.) Such strong correlation suggests that duration increase can be modeled mathematically.
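
As a sketch of the kind of check involved, Pearson’s r for paired SPIt and duration-increase observations can be computed as below. The numbers are made up for illustration; the real analysis used the simulation’s output sets.

    from statistics import correlation  # Pearson's r; available in Python 3.10+

    # Hypothetical paired observations for one rework level (illustrative only).
    spi_t_values = [0.80, 0.90, 1.00, 1.10, 1.20]
    di_pct_values = [0.11, 0.10, 0.08, 0.07, 0.05]

    r = correlation(spi_t_values, di_pct_values)
    print(round(r, 4))  # strongly negative for these illustrative values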

The first step in such modeling is to formulate parametric equations.

[Figure: DurIncrParaEq.png (parametric equations for duration increase)]

They express curves for the duration increases produced in the simulation.

[Figure: DurIncrParaCurves.png (parametric curves for duration increase)]

Analysis of the equations shows that greater efficiency means a smaller duration increase, and the effect is stronger when there is more rework (the curves are steeper).

Further analysis of the equations shows that intercepts of the curves increase systematically. The same is true of slopes. The increases show strong linear relationships (r = .9960 for intercept and .9923 for slope).

From these linear relationships, constants can be derived that forecast the intercept and slope from the percentage of rework.

[Figure: DurIncr_mb.png (intercept and slope constants versus percentage of rework)]
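
Here is a sketch of that two-stage fit in Python, using NumPy and illustrative data constructed to resemble the final model (the real inputs are the simulation’s output sets): first fit a line of DI% against SPIt for each rework level, then fit the resulting intercepts and slopes against the rework percentages.

    import numpy as np

    # Illustrative (SPIt, DI%) observations per rework level; real values would
    # come from the simulation's output sets.
    data_by_rework = {
        0.04: ([0.8, 1.0, 1.2], [0.045, 0.036, 0.026]),
        0.10: ([0.8, 1.0, 1.2], [0.114, 0.090, 0.066]),
        0.16: ([0.8, 1.0, 1.2], [0.183, 0.144, 0.106]),
    }

    rework_levels, intercepts, slopes = [], [], []
    for rework, (spi_t, di_pct) in data_by_rework.items():
        slope, intercept = np.polyfit(spi_t, di_pct, 1)  # fit DI% = slope*SPIt + intercept
        rework_levels.append(rework)
        intercepts.append(intercept)
        slopes.append(-slope)  # the fitted slope is negative; keep its magnitude

    # Stage two: intercept and slope each grow roughly linearly with rework.
    intercept_constant = np.polyfit(rework_levels, intercepts, 1)[0]  # near 2.1 here
    slope_constant = np.polyfit(rework_levels, slopes, 1)[0]          # near 1.2 here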

Thus, we have the following equations.

EQ03: Intercept = 2.1092 * %rework

EQ04: Slope = 1.2068 * %rework

That yields the construct for DI% (to repeat):

EQ02: DI% = Intercept – (Slope * SPIt) = (2.1092 * %rework) – ((1.2068 * %rework) * SPIt)

DI then follows by multiplying DI% by the number of baseline periods.

EQ01: DI = DI% * Number of Baseline Periods
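
For example, suppose (purely for illustration) that the percentage of rework is 10%, SPIt is 0.95, and the baseline spans 20 periods. Then DI% = (2.1092 * 0.10) – ((1.2068 * 0.10) * 0.95) = 0.21092 – 0.11465 = 0.09627, or about 9.6%, and DI = 0.09627 * 20, roughly 1.9 periods.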

Note that the model has limitations.

  • Rework <= 20%: The parametric equations were formulated for just six levels of rework: 16%, 13%, 10%, 7%, 4%, and 1%. The model clearly applies when the percentage of rework is less than or equal to 16%. [3] But the linear relationships for the intercept and slope are strong, so the limit can reasonably be extended to 20%.
  • SPIt < 1.74776: The value 1.74776 is the point at which the duration increase equals 0. It is calculated by dividing the intercept constant by the slope constant. [4] If SPIt increases beyond that value, the duration increase is less than 0, and negative values for duration increase are considered nonsensical.

In sum, EQ01 and EQ02 provide good estimates for duration increase when the percentage of rework is less than or equal to 20%, and the SPIt is less than 1.74776.
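
In code, those limits might be enforced with a simple guard around the earlier sketch. The constant names and function are mine; the threshold values come straight from the limitations above.

    # Applicability limits for the DI model (see the limitations above).
    MAX_PCT_REWORK = 0.20
    MAX_SPI_T = 2.1092 / 1.2068   # = 1.74776..., where the duration increase reaches zero

    def di_model_applies(pct_rework, spi_t):
        """True when EQ01/EQ02 can be expected to give a sensible estimate."""
        return pct_rework <= MAX_PCT_REWORK and spi_t < MAX_SPI_T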

Applying Duration Increase

As described last month, at ProjectFlightDeck, we initially set numeric thresholds to gauge the impact of rework cost. We said that deviations from baseline of up to 10% were generally manageable, but if they crossed that threshold, caution was warranted. Anything beyond 20% was considered out of control and required negotiation on the budget.

The same thresholds apply to duration increase. With Planned Duration rather than Budget at Completion as the baseline, any overrun or underrun in duration due to rework can be assessed against the 10% and 20% thresholds.
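
A minimal sketch of that fixed-percentage check follows; the function and labels are mine, and Planned Duration is assumed to be measured in the same periods as DI.

    # Classify a duration increase against the 10% and 20% thresholds described above.
    def assess_duration_increase(di_periods, planned_duration_periods):
        deviation = abs(di_periods) / planned_duration_periods
        if deviation <= 0.10:
            return "generally manageable"
        if deviation <= 0.20:
            return "caution warranted"
        return "out of control"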

Based on our experience with cost of rework, ProjectFlightDeck skipped using fixed percentages as thresholds. Instead, we moved directly to using uncertainty allowances as thresholds. As pointed out last month, uncertainty allowances offer consistency across projects while at the same time acknowledging each project’s uniqueness.

All projects face uncertainty. There is epistemic uncertainty from the lack of knowledge and aleatory uncertainty from randomness.[5] The former is reducible; the latter is not.

Risk management plans address epistemic uncertainty with scheduled work (Contingency) and aleatory uncertainty with a block of time (Reserve). [6]

As per PMI usage, Planned Duration includes Contingency but excludes Reserve. (Recall that Contingency is represented by specific tasks in the schedule, whereas Reserve is held as a block outside the schedule.)

Here is the yardstick we use to assess the impact of duration increase.

[Figure: DurIncrYardstick.png (yardstick for assessing the impact of duration increase)]

The yardstick applies across PMs and projects. It leverages the investment made in identifying uncertainties and associated risks. It acknowledges empirical evidence unique to each project.

It is simple to understand and apply. (Admittedly, the yardstick depends on uncertainty analysis, risk management, and budget tracking, none of which are trivial processes. But if they are not being done on the project, there are bigger issues than the impact of rework.)

The thresholds give clear, quick guidance on the severity of deviations. So, PMs know immediately if there’s a problem and how severe it is. For clients, the duration boundary lines are familiar and decisive.
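
The yardstick itself is shown in the figure above. As one plausible reading of it (an assumption on my part, not a transcription of the figure), the duration increase is compared first to Contingency and then to Contingency plus Reserve.

    # Assumed reading of the yardstick: Contingency and Reserve, expressed in
    # periods, serve as project-specific thresholds for the duration increase.
    def duration_yardstick(di_periods, contingency_periods, reserve_periods):
        if di_periods <= contingency_periods:
            return "within Contingency"
        if di_periods <= contingency_periods + reserve_periods:
            return "Contingency breached; within Reserve"
        return "Reserve breached"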

When a threshold is breached, what action is required?

The generic response is the same, regardless of whether it is Contingency or Reserve that is breached:

  • identification of potential problems,
  • root-cause analysis,
  • action planning for remediation,
  • remediation, and
  • stakeholder communication.

The urgency of the analysis, the extent of remediation, and the type of communication vary depending on the root cause, the remediating action, and whether it is Contingency or Reserve that is breached.

♦♦♦♦♦

That completes the review of ES basics plus the addition of a new metric. The new metric rounds out Schedule Adherence measurements because it is expressed natively in terms of time.

Notes:

[1] Percentage of Rework = total cost of rework / Budget at Completion. For details, click here. For details on SPIt, click here.

[2] The r value (aka, Pearson's r correlation coefficient) measures the strength of linear association between two variables.

[3] The correlation for the 1% case is not strong, but given that duration increase shows minor variation over the range of rework values, any error in the estimate is probably small.

[4] For derivation of the calculation, see Appendix A.

[5] I have simplified the terminology here. See Alleman (2013) for details.

References:

Alleman, G. (2013). Aleatory and epistemic uncertainty both create risk. Retrieved from https://herdingcats.typepad.com/my_weblog/2013/05/aleatory-and-epistemic-uncertainty-both-create-risk.html

Lipke, W. (2020). Project duration increase from rework. PM World Journal, Vol. IX, Issue IV, April.

Lipke, W. (2019). Schedule adherence and rework. PM World Journal, Vol. VIII, Issue VI.

Lipke, W. (2013). Schedule adherence … a useful measure for project management. PM World Journal, Vol. II, Issue VI.

Lipke, W. (2008). Schedule adherence: A useful measure for project management. CrossTalk, April.

Appendix A

Start with EQ02 in the form Y = b – mX, where X is SPIt, Y is DI%, m is the slope constant, and b is the intercept constant. Solve for X, given that Y = 0 (the point at which the duration increase is zero), b = 2.1092, and m = 1.2068.

  1. Y = b – mX
  2. mX = b – Y
  3. Y = 0 (duration increase equals zero)
  4. mX = b
  5. X = b / m
  6. X = 2.1092 / 1.2068
  7. X = 1.74776
