The Earned Schedule Exchange


May 30, 2018
Schedule Adherence: Using the P-Factor

Concept: The application of ES metrics to schedule performance management often depends on threshold values. Thresholds mark boundary lines between performance levels and act as “trip wires” for responses. Setting threshold values can be a challenge. Although experience offers some indication of appropriate values, anecdotal evidence is notoriously subjective and idiosyncratic.

For most ES metrics, an objective, standardized basis for thresholds can be found in uncertainty allowances. Because the allowances are ultimately tied to the volume of value delivery, however, they are not relevant for setting P-Factor thresholds: the P-Factor tracks the sequence of delivery, not its volume. What works instead is to compare trends, for instance, trends in P-Factor values versus trends in SPIt values. By fitting the two metrics together, we gain insight into schedule performance and no longer depend on thresholds to guide the assessment.


Practice: First, I’ll describe how to use the P-Factor with the SPIt. Then, I’ll explain why threshold values are not a good fit for P-Factor evaluations of schedule performance.

For most ES metrics, threshold values guide the assessment of schedule performance. [1] For the P-Factor, we instead monitor its trend against trends in other ES metrics, in particular the SPIt.

Recall that a trend occurs when the values of a metric flow in the same direction over several periods: rising, falling, or staying flat. [2] Although both SPIt values and P-Factor values frequently exhibit trends, the two trend lines don’t always correlate. When they do, the pattern can provide valuable information for managing schedule performance.
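
To make “trend” operational, here is a minimal sketch in Python. The function name and the flat_band tolerance are illustrative assumptions, not part of any standard ES tooling.

    # A sketch of trend detection, assuming one metric reading per
    # reporting period. detect_trend and flat_band are illustrative.
    def detect_trend(values, flat_band=0.01):
        """Classify a metric's direction over several periods.

        Returns "rising", "falling", "flat", or "mixed"; period-to-period
        deltas within +/- flat_band count as flat.
        """
        deltas = [b - a for a, b in zip(values, values[1:])]
        if not deltas:
            return "flat"  # fewer than two periods: no trend yet
        if all(d > flat_band for d in deltas):
            return "rising"
        if all(d < -flat_band for d in deltas):
            return "falling"
        if all(abs(d) <= flat_band for d in deltas):
            return "flat"
        return "mixed"

    # Example: an SPIt drifting down while a P-Factor holds steady.
    print(detect_trend([0.98, 0.95, 0.91, 0.88]))  # falling
    print(detect_trend([0.96, 0.96, 0.97, 0.96]))  # flat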

Occasionally, SPIt and P-Factor values rise or fall together. On the rise, schedule performance is improving in both the volume of value delivered and its adherence to the plan. In contrast, if both metrics fall, volume and adherence are worsening concurrently. In that case, it’s no surprise that the schedule is in trouble. Remedial action is required.

More revealing are cases in which one of the metrics rises or stays flat while the other metric drops. We have found that, if the P-Factor increases (or flat-lines) and the SPIt decreases, projects are often slavishly following the plan at the expense of delivery volume. On the other hand, if the SPIt increases (or flat-lines) and the P-Factor decreases, projects are generally sacrificing adherence to bolster delivery volume.
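
Those pairings amount to a small decision table. Here is a minimal sketch that maps the two trend directions onto the diagnoses just described, reusing the trend labels from the sketch above; the function name and the wording of the results are my own.

    # Map paired P-Factor/SPIt trends onto the diagnoses in the text.
    def diagnose(p_factor_trend, spit_trend):
        """Interpret the combination of P-Factor and SPIt trends."""
        up_or_flat = ("rising", "flat")
        if p_factor_trend == "rising" and spit_trend == "rising":
            return "volume and adherence both improving"
        if p_factor_trend == "falling" and spit_trend == "falling":
            return "schedule in trouble: remediate"
        if p_factor_trend in up_or_flat and spit_trend == "falling":
            return "likely following the plan at the expense of volume"
        if spit_trend in up_or_flat and p_factor_trend == "falling":
            return "likely sacrificing adherence to bolster volume"
        return "no clear pattern: keep monitoring"

    print(diagnose("flat", "falling"))  # the first case study below
    print(diagnose("falling", "flat"))  # the second case study below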

Here’s an example of the first scenario. It’s a snapshot from early in a long-running project. As you’ll note (Figure 1), the P-Factor is stable but the SPIt is slowly dropping.

[Figure 1: P-Factor versus SPIt, showing a gradual drop in the SPIt]

In this case, the Project Manager felt contractually bound to follow the delivery sequence specified in the schedule. The work that was completed adhered tightly to the planned sequence, but the pace of delivery began to slow. Few tasks were being done out-of-sequence, thus avoiding rework, while the amount of completed work fell. An investigation was launched in response to the widening gap between the two trends, and it concluded that a new baseline was in order. [3]

Here’s a second case. In it, the SPIt was relatively stable, but there was a sudden drop in the P-Factor. The drop was so significant that the Project Management Office did not wait before investigating. [4]

[Figure 2: P-Factor versus SPIt, showing a sudden drop in the P-Factor]

In this case, the Project Manager was trying to build a “delivery habit” by having the team complete quick deliverables, even though they were being done out-of-sequence. As a result, the SPIt was fairly stable, but the P-Factor dropped sharply. Presented with the likelihood of rework, the PM agreed to revise the approach.

Why use the P-Factor in conjunction with another ES metric such as the SPIt? Why not set P-Factor thresholds and monitor individual readings against them as we do for other ES metrics?

Recall that the SPIt is a measure of efficiency, while the P-Factor is a measure of adherence. The two metrics provide fundamentally different views of project performance: the SPIt depends on the volume of value delivery, whereas the P-Factor depends on the sequence of value delivery. That difference has implications for the use of thresholds.
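
For reference, here are the two definitions as I render them from Lipke (2009a); the notation is mine and may differ slightly from his:

    SPI_t = \frac{ES}{AT}
    \qquad
    P = \frac{\sum_i \min\left( EV_i,\, PV_i(ES) \right)}{\sum_i PV_i(ES)}

where ES is the earned schedule, AT the actual time, EV_i the value earned on task i to date, and PV_i(ES) the value planned for task i by time ES. The numerator of P credits only earned value consistent with the planned sequence, so P runs from 0 to 1 and reaches 1 when delivery exactly follows the plan.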

At ProjectFlightDeck, we originally used subjective experience to set threshold values. Over time, however, we found that the values varied widely between projects—both our own and those of other companies. So, we moved to a more objective basis for setting the values: uncertainty allowances. Why did we settle on uncertainty allowances?

In all projects, there is uncertainty. As Glen Alleman has pointed out, it comes in two forms: epistemic and aleatory. Epistemic uncertainty results from a lack of knowledge and can be reduced. Aleatory uncertainty results from inherent randomness and cannot be predicted. [5]

To address epistemic uncertainty, projects identify the likelihood and seriousness of the uncertainty being realized. Risk management plans identify work that deals with such uncertainty. The work is represented in the schedule, either as discrete tasks (the Contingency allowance) or as an allocation of time (the Reserve allowance). Contingency is included in the baseline; Reserve is not.

To address aleatory uncertainty, similar projects can be analyzed to assess the probability of on-time delivery. Based on the analysis, a block of work can be allocated as schedule Margin. That work is not included in the baseline.

Thus, uncertainty allowances are expressed as additional amounts of work (or work’s time and cost equivalents). Threshold values for ES metrics are then set using the amount of allowances as guides. Both the allowances and the metrics have a common basis: the volume of work.
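
To illustrate that common basis, here is a minimal sketch, assuming the simplest possible rule: treat Reserve and Margin as tolerable slippage in delivered volume. Both the rule and the names are my illustrative assumptions, not ProjectFlightDeck’s published method.

    # Illustrative only: derive a lower SPIt threshold from uncertainty
    # allowances. The shared volume basis comes from the text; this
    # particular rule is an assumption made for the example.
    def spit_lower_threshold(baseline_work, reserve, margin):
        """Treat Reserve and Margin as tolerable slippage in volume.

        Contingency is excluded because it already sits inside the
        baseline. If the project may consume its allowances and still
        be acceptable, baseline / (baseline + allowances) marks the
        SPIt trip wire.
        """
        allowance = reserve + margin
        return baseline_work / (baseline_work + allowance)

    # Example: 1,000 units of baseline work, 30 units of Reserve,
    # and 40 units of Margin.
    print(round(spit_lower_threshold(1000, 30, 40), 3))  # 0.935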

As repeatedly noted, however, Schedule Adherence is not tied to volume of delivery. That removes the basis for setting threshold values used for other ES metrics. We do not, therefore, tie the P-Factor to specific threshold values. Instead, we use the trends associated with the P-Factor and SPIt to assess schedule performance.

Notes:

[1] For examples, see SPIt Thresholds, EACt Thresholds, and TSPI Thresholds (especially the post on 02 July 2015).

[2] Trends were explained in an earlier post. For details, click here. Also, see Kesheh, M.Z. and Stratton, R. (2013) Taking the Guessing out of When to Rebaseline. The Measurable News, 4, 31-34.

[3] Ibid.

[4] As noted elsewhere, our practice is normally to wait for results over several successive periods before starting an investigation. There are exceptions to that heuristic, one of which is a sudden, dramatic change against a background of relative stability.

[5] See Alleman on Uncertainty and Risk.


References:

Lipke, W. (2013). Schedule Adherence …a useful measure for project management. PM World Journal, Vol. II, Issue VI.

Lipke, W. (2012). Schedule Adherence and Rework. CrossTalk, November-December.

Lipke, W. (2011b). Schedule Adherence and Rework. PM World Today, July.

Lipke, W. (2011a). Schedule Adherence and Rework. The Measurable News, Issue 1 (corrected version).

Lipke, W. (2009b). Earned Schedule. Lulu.

Lipke, W. (2009a). Schedule Adherence …a useful measure for project management. The Measurable News, Issue 3.

Lipke, W. (2008). Schedule Adherence: A Useful Measure for Project Management. CrossTalk, April.
