Concept: Some metrics (namely, TSPI (prior to 1.1), Window of Opportunity, and Opportunity Profile) allow subjectivity to “creep back” into decision-making on recovery. The next, and final, recovery metric doesn’t.
The Probability of Recovery is purely quantitative. Yet, hitting the threshold value does not entail unrecoverability, as it does for a metric such as TSPI.
Practice: For this post, the order of presentation is reversed. It starts with Probability of Recovery’s contribution to the previous example. Its Pros and Cons follow. Then, the “Run-up” charts from the previous example are repeated. Finally, the math behind the probability is summarized.
The Decision
Suppose another month has elapsed in the example, and it’s now the end of July (see “Run-up” for other months). Efficiency remains essentially unchanged at .958. Consequently, the SPIt and EACt remain much the same and are not shown.
With the shortening of available schedule, however, the TSPI and Window of Opportunity are significantly altered. TSPI rises to 1.097, slightly below the threshold. The Window of Opportunity shrinks to .006, slightly open.
Strictly speaking, neither measurement breaks a threshold value, and so neither is shown. Recovery is still possible, although intuitively it seems unlikely.
The final recovery metric decides the issue.
The probability of successful recovery peaks in April. [1] Then, from April through June, its decline is unremitting. That’s bad enough, but in July it gets worse.
The decline not only continues; the probability reaches 50.82%, effectively at the 50% threshold.
At 50%, successful recovery becomes a matter of chance. Recovery must depend on more than a roll of the dice. It demands action, and so, delay is no longer an option. It’s time to recover the project. [2]
The Probability of Recovery has Pros and Cons. Here they are:
Pro:
- Quantitative metrics resist cognitive bias and strategic misrepresentation.
- The Probability of Recovery is mathematically based on TSPI and on statistical Building Blocks, both of which have theoretical and empirical support.
- The threshold value is quantitative and can be reached before the recovery threshold of 1.1 is hit. It signals the need for action while recovery is still possible.

Con:
- Calculation of the probability is involved and warrants automation.
The “Run-up” Example
Say that at the halfway point on a 12-month project, the Schedule Performance Index for time looks like this:
Sure, the project got off to a slow start, but it improved significantly, and the SPIt is still climbing!
No call to action here.
How does the Estimate at Complete for time look?
Given that the Planned Duration is 10 months, the estimates are not too bad, but they’re not great, either. The bounds are converging, but the nominal seems to be stuck at slightly more than the plan.
Maybe, the project is not in big trouble, and there’s no justification for initiating recovery.
What does the To Complete Schedule Performance Index tell us?
The late start and nil SPIt throw off the first measurement. The other measurements seem to be hovering below the 1.10 threshold. Again, there’s no clear signal to start recovery.
So, what do you do?
Don’t ignore the situation and hope for the best. Don’t flip a coin. Get more information.
Another measurement from Earned Schedule helps: the Window of Opportunity.
Based on current performance, the Window of Opportunity for recovery is .132.
To put that into perspective, if all the remaining time were available, the Window would be 1.000. If no remaining time were available, it would be 0.000. So, it’s possible, but the Window looks small.
You might think, “Wait, the project is only 60% complete. Surely, recovery is possible with a full four months to go!”
Dream on.
Walt Lipke’s research has shown that once the TSPI hits 1.10, recovery is very unlikely. The math behind the Window of Opportunity is tied directly to that break point. The Window measures the time available before the threshold is breached.
So, it’s not four months of runway. It’s less than 15% of the remaining timeline: about 16 days, or roughly one monthly reporting period. At the current level of performance, the project will be unrecoverable within a month.
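One way to see the arithmetic is to sketch it in Python. Note the hedges: the input values below (PD, AT, ES) are illustrative, not the article's data, and the extrapolation rule (let ES grow at the current SPIt until the TSPI breaches 1.10) is my assumption; it may differ in detail from the Window of Opportunity formula in the earlier post.

```python
# Hedged sketch of the Window of Opportunity idea. All values are
# illustrative assumptions, not the article's actual data.
PD = 10.0        # Planned Duration (months)
AT = 6.0         # Actual Time: current period
ES = 5.65        # Earned Schedule to date (assumed)
SPIt = ES / AT   # current schedule efficiency

def tspi(pd: float, es: float, at: float) -> float:
    """To Complete Schedule Performance Index: the efficiency needed
    to finish the remaining planned work in the remaining time."""
    return (pd - es) / (pd - at)

THRESHOLD = 1.10
t, es, step = AT, ES, 0.01
# Step forward, letting ES grow at the current SPIt, until TSPI hits 1.10.
while tspi(PD, es, t) < THRESHOLD and t < PD - step:
    t += step
    es += SPIt * step

window = (t - AT) / (PD - AT)   # fraction of remaining time still open
print(f"TSPI now: {tspi(PD, ES, AT):.3f}  window: {window:.3f}")
```

With these assumed inputs, the TSPI sits just below 1.10 today, yet the window comes out under 10% of the remaining time, echoing the article's point that a sub-threshold TSPI can still leave very little room.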
The percentage is informative, but a picture is better.
With just a month to recover, it would be prudent to take action now. But, it’s possible that you might still hold off the decision. Recovery is a dramatic step, and a Window remains open.
Fortunately, there’s other information that helps us decide: the Improvement Profile, for one. (See the dotted line and four rightmost columns in the table). The Profile identifies the efficiencies required in each period of the Window.
Look at the required efficiencies and see if the project achieved similar efficiencies in the past. Also, you can check whether or not the required rate of change is similar to the rate achieved earlier in the project.
The answers tell you whether recovery is realistic for your project. If so, there is good reason to start down the road to recovery.
In the graph and the table, the previous efficiencies are followed by the recovery efficiencies.
The recovery efficiencies are striking. Almost all of them exceed the maximum efficiency attained thus far.
Furthermore, after a sharp initial increase, previous efficiency leveled off well below the needed level.
That’s more reason to start recovery.
And yet, it might be argued that the desired recovery rate appears to be lower than the rate achieved through the first three periods: the slope of the first increase appears to be sharper than that of the second.
It might be that the project is capable of attaining a rate as high as the one required.
That’s a thin thread, but it’s a reason to hold the decision in abeyance.
The final recovery metric, Probability of Recovery, decides the case.
The Math
The math behind Probability of Recovery is nontrivial. It starts with standard statistical Building Blocks: sample mean, variance, and standard deviation, and it also uses a score—call it the Probability Score.
Previous posts contained descriptions of the building blocks that won’t be repeated here. As for the Probability Score, it’s derived from a standard statistical equation:
Probability Score = (X̄ − V) / (S / √n)

…where the Probability Score is the value that will be converted to a probability, X̄ is the sample mean, V is a selected value, and the term S / √n expresses the likely spread of data in the sample.
The equation is a variation on a familiar metric: the Z-score. Given a normal distribution, the Z-score measures how far a given point is from the mean. Analogously, the Probability Score, or PR-score for short, measures how far the mean is from a selected value. In both cases, the measurement can be converted into a probability.
In this context, the sample mean, X̄, is the log of the cumulative SPIt (ln cumulative SPIt), or, in the simplified equation, simply ln c. Why? Think of it this way: SPIt = ES/AT. ES is the cumulative total of periods that are earned. The total is divided by the number of periods, AT. That’s the same as a mean: sum of observations / number of observations.
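The sum-over-count claim is easy to check with a few lines of Python. The per-period earned values here are made up for illustration; only the identity matters.

```python
# Hedged sketch: SPIt as a sample mean, using made-up per-period values.
# Each period earns some fraction of a period's worth of schedule;
# ES is their cumulative sum, and AT is the number of periods observed.
earned_per_period = [0.4, 0.8, 1.0, 1.05, 1.1, 1.4]  # illustrative only

ES = sum(earned_per_period)      # Earned Schedule: cumulative periods earned
AT = len(earned_per_period)      # Actual Time: number of periods

SPIt = ES / AT                                           # Earned Schedule definition
mean = sum(earned_per_period) / len(earned_per_period)   # ordinary sample mean
print(SPIt == mean)  # the two computations are the same expression
```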
S is the sample standard deviation. There will be an additional “tweak” to S, but for now, only the variable, V, needs further explanation.
V is the threshold value against which performance values such as ln c are compared. Performance values that fall short of V are not powerful enough to ensure on-time completion. So, they induce lower probabilities of recovery.
What’s the threshold value, V? The math is complicated, but the intuition is straightforward. It’s the schedule performance relative to 1.1. That is, take the time already earned toward 1.1, and divide it by the time remaining until 1.1 is breached.
Recall that schedule performance must be transformed into the right “shape”. That’s accomplished by taking the natural log of the value. So, the threshold value becomes: ln V.
The spread of data in the sample is a function of the sample size. Put simply: more measurements in the sample means narrower spread.
How does S / √n express this? Think of it this way: square the top and bottom: S² / n. That’s the sample variance divided by the number of measurements, so the spread narrows as the sample grows. As a divisor in the score, it normalizes the schedule performance to the sample size.
It’s time to “tweak” S.
The sheer size of the sample affects the spread. As the sample size grows in relation to its maximum, the spread narrows. There’s a widely-used statistical equation for this: √((N − n)/(N − 1)), where N is the maximum size and n is the sample size.
Walt Lipke has interpreted the equation specifically for Earned Schedule:
√((PD − ES)/(PD − ES/n))

…where PD is the Planned Duration; ES is the Earned Schedule; and the term 1 is replaced with the unit value of Earned Schedule, that is, Earned Schedule divided by the current period, n. [3]
To “tweak” S, multiply it by the square root.
Now, we’re ready to solve for the PR-score:

PR-score = (ln c − ln V) / (S′ / √n)

…where ln c is the log of the cumulative SPIt (i.e., the mean); ln V is the threshold performance for comparison; and S′, the “tweaked” S, normalizes the difference.
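Putting the pieces together, here’s a hedged Python sketch of the PR-score calculation. Every input is an illustrative assumption, including the threshold value V; Walt Lipke’s published spreadsheet, not this sketch, is the authoritative implementation.

```python
import math

# Illustrative per-period earned values (assumed, not the article's data)
earned = [0.70, 0.95, 1.00, 1.02, 0.98, 1.00]

PD = 10.0            # Planned Duration (months)
n = len(earned)      # current period = number of measurements
ES = sum(earned)     # Earned Schedule: cumulative periods earned
cum_spit = ES / n    # cumulative SPIt

ln_c = math.log(cum_spit)   # ln c: the "sample mean" in the PR-score

# Sample standard deviation of the logged periodic observations
logs = [math.log(x) for x in earned]
mean_logs = sum(logs) / n
S = math.sqrt(sum((x - mean_logs) ** 2 for x in logs) / (n - 1))

# "Tweak" S with the Earned Schedule form of the finite-population factor
S_adj = S * math.sqrt((PD - ES) / (PD - ES / n))

# Threshold performance ln V; the value of V here is a placeholder assumption
ln_V = math.log(0.93)

PR_score = (ln_c - ln_V) / (S_adj / math.sqrt(n))
print(f"PR-score: {PR_score:.3f}")
```

With these made-up inputs the score comes out positive, meaning the mean sits above the threshold performance; a negative score would indicate the opposite.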
The result is interpreted as a t-score. The t-score represents how far an observation is from the mean, with the distance measured in standard deviations.
In this context, the observation is the natural log of V (ln V). The PR-score is the distance ln V lies from the mean (ln c), in standard deviations. The probability is the area under the curve starting at ln V and excluding only the left tail.
Figure 1
Once the PR-score is known, the probability can easily be computed using the T.DIST function in Excel. It takes the following parameters: the t-score (i.e., the PR-score), the degrees of freedom (i.e., n − 1), and cumulative (distribution function) = True. [4]
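Outside Excel, the same conversion is available from `scipy.stats.t.cdf`. The sketch below sticks to the Python standard library instead and integrates the Student’s t density numerically; it’s rough, but fine for a sanity check.

```python
import math

def t_cdf(t_score: float, df: int, steps: int = 10_000) -> float:
    """Cumulative probability of Student's t distribution at t_score,
    found by numerically integrating the t density from 0 to |t_score|."""
    # Student's t probability density function
    coeff = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: coeff * (1.0 + x * x / df) ** (-(df + 1) / 2)

    # Trapezoidal integration over [0, |t_score|]
    a, b = 0.0, abs(t_score)
    h = (b - a) / steps
    area = 0.5 * (pdf(a) + pdf(b)) * h + sum(pdf(a + i * h) for i in range(1, steps)) * h

    # By symmetry, CDF(0) = 0.5; add or subtract the integrated area
    return 0.5 + area if t_score >= 0 else 0.5 - area

# Example: a PR-score of 0.31 with 5 degrees of freedom (illustrative values)
print(f"Probability of recovery: {t_cdf(0.31, 5):.3f}")
```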
Notes:
[1] Although there is Planned Value for January and February, there is no Earned Value for January. To calculate probability, you need at least a couple of periods of EV. So, probability can’t be calculated until March.
[2] Starting recovery a month earlier would have been appropriate. (See below for the “Run-up” to the decision.) At that point, there were already three periods in which the window of opportunity narrowed, requisite efficiencies climbed, and probability of success dropped. The trends make it reasonable to take action without waiting to hit the threshold.
[3] The current period is used because it equals the total number of measurements in the sample. That’s because each period contributes one and only one measurement.
[4] The cumulative distribution applies because it is historical (i.e., cumulative) efficiency, rather than periodic efficiency, that is being analyzed. That’s also why Figure 1 illustrates the probability as one-sided.
