Concept: R-tasks and IC-tasks can be used to analyze the root causes of adherence failure. The analysis shows which tasks must be changed to improve schedule performance. But how do you know when a response is called for, and how urgent it is?
Metrics associated with R-tasks and IC-tasks offer an answer.
Let’s start with measurements of the rate at which Rework and Impediments/Constraints occur on a project. The rates are given by the Schedule Adherence Index (SAI) and the Impediments/Constraints Index (ICI).
Both the derivation and calculation of SAI and ICI are complicated. Lengthy posts have described the derivation of SAI (here) and its calculation (here). The same holds for ICI (click here and here).
Details are beyond the scope of this post, but a brief explanation of the derivations and calculations follows. Then, the focus shifts to the Pros and Cons associated with the indices.
Practice: Both SAI and ICI depend on the quantification of work associated with adherence failure. For SAI, the (re-)work occurs in the future. So, the quantity has to be estimated. Mathematical models are used to estimate the quantity and the rate at which Rework (R) is expected to occur.
For ICI, the work that is impeded or constrained has already occurred. So, in theory, it can simply be calculated by brute force: add up the quantities for each period. In practice, the quantity in a given period is estimated using a mathematical model like the one used for R. (Want to know why it’s modeled? Check out note 4 after you click here.) The rate at which tasks are impeded or constrained (IC) can then be calculated.
In both cases, the rates reflect normalization to the remaining budget. For SAI, the amount of Rework is scaled to the residual budget, R/(BAC – EV). For ICI, the amount that is Impeded or Constrained is scaled to the residual budget, IC/(BAC – EV).
Normalization avoids an inevitable decline in SAI and ICI as the project winds down. The decline occurs because, put simply, there are fewer tasks to go wrong. (For details, click here.) Scaling R and IC to the remaining budget means that, as the project proceeds, the divisor, BAC – EV, shrinks. As it shrinks, the impact of remaining R and IC increases, offsetting the downward pressure of fewer adherence failures.
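The offset can be sketched in a few lines of Python. The figures below are entirely made up for illustration; SAI and ICI are simply the ratios defined above, R/(BAC – EV) and IC/(BAC – EV):

```python
BAC = 1000.0  # budget at completion, fixed for the life of the project

# (EV, R, IC) per reporting period: EV grows, while the raw amounts of
# Rework (R) and Impeded/Constrained work (IC) shrink late in the
# project because there are fewer tasks left to go wrong.
periods = [
    (200.0, 80.0, 60.0),   # early: large remaining budget
    (600.0, 40.0, 30.0),   # midway
    (900.0, 10.0,  7.5),   # late: small remaining budget
]

for ev, r, ic in periods:
    remaining = BAC - ev    # the divisor shrinks as the project proceeds
    sai = r / remaining     # SAI = R / (BAC - EV)
    ici = ic / remaining    # ICI = IC / (BAC - EV)
    print(f"EV={ev:6.1f}  R={r:5.1f}  SAI={sai:.3f}  ICI={ici:.3f}")
```

In this made-up series the raw amount of Rework falls from 80 to 10, yet SAI holds steady at 0.100: the shrinking divisor exactly offsets the downward pressure of fewer adherence failures.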
**Pro:** Due to normalization, the indices are accurate throughout the project lifecycle.
If the indices increase, schedule adherence is suffering. How so? The BAC is fixed, and EV cannot decrease from one period to the next, so the divisor, BAC – EV, can only hold steady or shrink. For the ratio to rise, the amount of Rework (or Impediments/Constraints) must be growing relative to the budget remaining. Thus, schedule adherence must be worse.
If they decrease, adherence is improving. Because the divisor can only hold steady or shrink, a falling ratio means the amount of R (or IC) is shrinking faster than the remaining budget: rework is being worked off while value is added in alignment with the schedule. (If R or IC decrease, work is being done, and EV must grow.) Thus, schedule adherence is better.
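To make the arithmetic concrete (again with invented numbers, and treating SAI simply as R/(BAC – EV)):

```python
BAC = 1000.0  # fixed budget at completion

# Period n: EV = 500, estimated Rework R = 50
sai_n = 50.0 / (BAC - 500.0)       # 50 / 500 = 0.100

# Worsening: EV edges up to 520, but R grows to 60.
# The numerator grows while the divisor shrinks, so the index rises.
sai_worse = 60.0 / (BAC - 520.0)   # 60 / 480 = 0.125

# Improving: EV rises to 600 and R is burned down to 30.
# The numerator shrinks faster than the divisor, so the index falls.
sai_better = 30.0 / (BAC - 600.0)  # 30 / 400 = 0.075

print(sai_n, sai_worse, sai_better)
```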
Such changes in SAI and ICI signal when problems are emerging or receding. That, in combination with the identification of problem tasks, gives project managers a head start on improving schedule performance.
**Con:** The derivation of SAI and ICI is complicated, requiring an understanding of mathematics. While the indices are useful tools, their derivation puts off many project managers, among whom mathemaphobia is rampant.
The calculation of SAI and ICI is equally complex. That makes tool support a virtual necessity, further inhibiting adoption.
There are also concerns about the behaviour of the two metrics.
At the start of a project, the period-to-period rise and fall of SAI and ICI is synchronous, although the amplitude of the rise and fall can vary. Later in a project, the synchronicity can break down and the difference in amplitude can grow. (See Figure 1.)
The culprit is the underlying math more than the vagaries of schedule performance. The two metrics ultimately depend on the balance between EV@AT and PV@ES. Early in the project, the mathematical model for R keeps the estimated quantity of rework close to 100% of that value. So, the amount of value driving SAI is close to that driving ICI.
As the project progresses, the value likely to require rework systematically drops away from the value earned at AT. [1] But the value lost to impediments and constraints, which is determined empirically, need not follow the same pattern.
Figure 1
The divergence of the two metrics sends mixed signals on schedule performance. For example, during periods 24-27 in Figure 1, the SAI is improving while the ICI is worsening. [2] Does that mean performance is better, or is it worse? The behaviour seems confusing. [3]
Finally, the indices’ rise and fall only *suggests* that problems are emerging or receding. What’s missing? Clear-cut reference points for the urgency of the problems.
As argued elsewhere, reference points should be consistent across projects and yet tailored to each project. For such benchmarks, we must look to the next set of metrics.
Notes:
[1] Note that the value of *candidates* for rework, i.e., tasks in which EV@AT > PV@ES, equals the value lost to impediments and constraints. That’s a “necessary balance” ensured by the math behind schedule adherence.
[2] In Figure 1, the scales of the two metrics have been adjusted to clarify the pattern of rise and fall. The adjustment skews the amplitude of differences, making them appear to be smaller than they actually are.
[3] The “Con” stands even though I believe the confusion can be dispelled. Although the two metrics are inextricably linked, differences in the underlying math mean that they can vary. SAI is set mathematically by a model. ICI is set empirically by history. “Mixed signals” demonstrate that the metrics do not have to be homogeneous: one can improve while the other worsens. In other words, schedule performance is relative to the dimension being measured.