Concept: What specific deliverables are causing deviations from the plan? Schedule Adherence (SA) has an answer. By the ES time, each task should have earned all and only the value planned for it; tasks that have not are deviating from the schedule, and that is where the problems lie.
SA’s take on problem tasks has pros and cons.
Pro: Schedule Adherence highlights the specific tasks that are being performed out-of-sequence. Identification of such tasks is especially valuable when the number of deliverables obscures what’s happening, as often happens in large projects.
By focusing attention on the subset of problem tasks, commonalities and trends are diagnosed more quickly. Once root causes are pinned down, remediation can begin. Thus, SA significantly accelerates the find-fix cycle for problem tasks.
Con: On the downside, the calculations required for SA are daunting, especially when schedules are sizeable. The amount of Earned Schedule must be determined and then used to assess how well each atomic [1] task complies with the plan.
For schedules with many deliverables, it is impractical to manually identify and analyze atomic tasks. That makes reliance on some form of automation necessary.
Practice: Early in ES history, when ES, SPIt, or EACt worsened, we focused on deliverables whose Earned Value fell short of Planned Value at the Actual Time. Why consider them to be the problem? Because they caused the metrics to deteriorate. [2] Although relevant, such cases do not tell the whole story about schedule performance. They are only a subset of out-of-sequence deliverables.
Schedule Adherence fills the gap. As Walt Lipke pointed out, ES is the time at which the value currently earned should have been earned (Lipke, 2008). That makes the ES time a tipping point: the spot in the timeline where total Planned Value and total Earned Value are equal. By that time, each deliverable should have earned all and only the value planned for it. Any value earned outside that constraint does not adhere to the schedule.
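To make the tipping point concrete, here is a minimal Python sketch of the standard ES interpolation: count the whole periods whose cumulative Planned Value is covered by the current Earned Value, then interpolate linearly within the next period. (The function and variable names are illustrative, not taken from any particular tool.)

def earned_schedule(pv_cum, ev_total):
    """pv_cum: cumulative PV at the end of periods 1..n; ev_total: cumulative EV at AT."""
    c = 0  # whole periods whose planned value has been fully earned
    while c < len(pv_cum) and pv_cum[c] <= ev_total:
        c += 1
    if c == 0:
        return ev_total / pv_cum[0]          # still inside the first period
    if c == len(pv_cum):
        return float(c)                      # all planned value earned
    # linear interpolation within period c + 1
    return c + (ev_total - pv_cum[c - 1]) / (pv_cum[c] - pv_cum[c - 1])

For example, earned_schedule([10, 25, 45, 70], 30) returns 2.25: by the plan, the value earned to date should have taken 2.25 periods to earn.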
An examination of the value planned and earned as of the ES time (in addition to the Actual Time) pinpoints the deliverables that are not adhering to the schedule. The examination is a natural consequence of calculating the P-Factor.
Here’s how it works. For a given task (i), if EVi@AT equals PVi@ES, the task is on schedule. If EVi@AT is less than PVi@ES, the task has not earned value at the planned pace: it is running behind schedule and is in some way impeded or constrained (I/C). If EVi@AT is greater than PVi@ES, the work is being done prematurely and runs the risk of rework (R).
Thus, tasks whose value is earned late are classified as I/C, tasks whose value is earned prematurely are flagged as R, and the rest are considered to be on schedule.
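The classification rule translates directly into code. Here is a minimal Python sketch, assuming per-task planned and earned values are available as dictionaries keyed by task ID (all names are illustrative); the second function computes the P-Factor in its commonly cited form, the conforming share of earned value.

def classify_tasks(pv_at_es, ev_at_at, tol=1e-9):
    """Label each task 'I/C', 'R', or 'on schedule' per the rule above."""
    labels = {}
    for task_id in set(pv_at_es) | set(ev_at_at):
        pv = pv_at_es.get(task_id, 0.0)   # value planned to be earned by the ES time
        ev = ev_at_at.get(task_id, 0.0)   # value actually earned at the Actual Time
        if ev < pv - tol:
            labels[task_id] = "I/C"       # behind plan: impeded or constrained
        elif ev > pv + tol:
            labels[task_id] = "R"         # ahead of plan: rework risk
        else:
            labels[task_id] = "on schedule"
    return labels

def p_factor(pv_at_es, ev_at_at):
    """Share of earned value that conforms to the plan (1.0 = full adherence)."""
    conforming = sum(min(pv, ev_at_at.get(t, 0.0)) for t, pv in pv_at_es.items())
    return conforming / sum(pv_at_es.values())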
Pro: As explained in previous posts, knowing the I/C-tasks and the R-tasks is valuable information for managing schedule performance. An example graphically illustrates the advantage it confers.
Let’s start with this snapshot from a project’s Schedule Adherence report. [3]
Note the Task IDs: they are scattered widely across the schedule. Then look at the descriptions: they generally involve the creation, review, or approval of specifications or designs.
By highlighting the misaligned deliverables, a pattern was easy to see, and that made root-cause analysis and remediation easier to do.
The analysis uncovered the underlying issue. The project had changed the definition of “Done”. To build delivery momentum, the team had decided to allow “conditional” completion, i.e., the deliverable was considered done once drafted even if there were outstanding issues. As a result, there was a rush to claim completion of tasks. Lots of work appeared to be finished early.
Unfortunately, rework was likely, as the outstanding issues were not closed. To fill in the gaps, team members made assumptions. If the assumptions turned out to be incorrect (as often happens), deliverables would have to be re-opened and revised. Work contingent on them would also have to be revised, and problems would cascade through the schedule.
The PMO helped the project team recognize the problem and develop a solution. Essentially, the team returned to a more rigorous definition of “Done” and added deliverables to remediate the specs and designs that had already slipped through. Although the project was not able to regain all the time that had been lost and exhausted its Contingency, it was able to complete within its Margin.
Without the help of SA to identify where the “fires” were burning, it is unlikely that the problem would have been caught early enough to finish by the Sponsor’s deadline. [4]
Con: There are a couple of downsides to SA’s treatment of problem tasks. Both relate to practice, rather than theory.
First, work done out-of-sequence is often not recognized as having a negative effect on schedules. In fact, for work done ahead of schedule, it’s just the opposite: “Yay! We’re ahead of schedule!”
As Mike Mosley has commented, “when the value earned is not on the critical path to the target milestone”, [5] it’s a net loss, not a gain for the project. And, as Walt Lipke has shown, it gets worse: work done prematurely often leads to rework. You not only fail to gain value that counts toward your objective, you have to perform the work again to earn the same value.
Still, we rarely see early deliverables and the consequent rework tracked on projects that do not use SA. SA is thus an exception to the common view: it represents an unfamiliar perspective on schedule performance.
Second, measuring adherence is a challenge for projects of any real size. The level of detail required is significant. Each atomic task in the schedule must be identified. Sometimes, that’s not so easy. If value has been stored and maintained at different levels of the WBS, it’s difficult to pick out the tasks that need to be addressed. [6]
Then, for each relevant task, the following questions must be answered: does the task actually contribute value when it is planned to contribute it? If not, is it (partially or wholly) behind plan, or alternatively, does it deliver value prematurely?
As schedules can contain hundreds, if not thousands, of tasks, these preparatory steps are not trivial.
Assuming the data are available, the calculations on them are equally daunting. You need to know the value of ES, which is calculated from all the relevant tasks in the schedule. Next, for each relevant task (i), you need to compare EVi@AT to PVi@ES. The comparison leads to classification as I/C, R, or on schedule.
The amount of any difference and the classification of each atomic task need to be stored, at least long enough for reports to be generated. The stored data should also be available for sorting. Otherwise, analysis is difficult.
It’s impractical to perform these steps manually. In practice, automation of some sort is necessary.
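To show what the storage-and-sorting step involves, here is a minimal sketch reusing classify_tasks from the earlier sketch (the report fields and ordering are assumptions, not a prescribed format):

def adherence_report(pv_at_es, ev_at_at):
    """List non-adhering tasks, largest deviation first."""
    labels = classify_tasks(pv_at_es, ev_at_at)
    rows = []
    for task_id, label in labels.items():
        if label == "on schedule":
            continue
        delta = ev_at_at.get(task_id, 0.0) - pv_at_es.get(task_id, 0.0)
        rows.append((task_id, label, delta))   # delta < 0 for I/C, > 0 for R
    rows.sort(key=lambda r: abs(r[2]), reverse=True)  # worst offenders first
    return rows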
Notes:
[1] Atomic Task = a work package that produces a tangible deliverable for which there is a clear criterion of completion. In the Work Breakdown Structure, this is the bottom level at which EV and PV are stored. The EV and PV can be rolled up to summary levels, but the roll-ups are not used in P-Factor calculations.
[2] If tasks are short of their planned value, the volume of delivery runs below plan. That means the amount of schedule earned (ES) is less than the current duration (AT). That, in turn, means SPIt is less than 1.0, as SPIt = ES / AT. Furthermore, EACt = PD / SPIt, so EACt lengthens as SPIt declines.
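To illustrate with purely hypothetical numbers: if ES = 8 months when AT = 10 months, then SPIt = 8 / 10 = 0.8; with PD = 20 months, EACt = 20 / 0.8 = 25 months, five months longer than planned.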
[3] The project is real. The report was generated from PFD’s Schedule Performance Analyzer. To preserve confidentiality, identifying information has been removed.
[4] The PMO provided oversight of and support to the line project team. With 985 tasks, the schedule was large enough to obscure what was going on.
[5] See Mike’s comment on the “SA as Performance Measure” post in the Earned Schedule LinkedIn Group here.
[6] For instance, I have seen value stored and tracked at the summary level, ostensibly to avoid the time and effort of doing so at the bottom, “atomic” level in the Work Breakdown Structure. That was poor scheduling practice, but, worse, in those cases, some value was also stored at the atomic level. Without a unique identifier, the only way to tell where value was or was not stored would have been to analyze every branch in the network of activities and infer where the details were stored. That process would have been not only time-consuming but error-prone.