Concept: My posts on AgileES have thus far focused on “what” and “why”. That is, I’ve described the concepts behind AgileES and the benefits of using ES on Agile projects. It’s now time to shift the focus to “how to”. The next posts describe how to apply ES to Agile projects. There are several steps. Some of the steps, especially the initial ones, are controversial. In explaining each step, I will identify and address the issues and then describe the actions to be taken.
Practice: Up to this point, the “how-to” steps for AgileES have focused on calculating and assessing schedule performance. It is now time to describe how to apply the metrics. Their main function is to support Sprint planning.[1] Their secondary role is to communicate how well the schedule is performing.
6. Plan the next Sprint.
Sprint Planning precedes every Sprint. It sets the goal for the next Sprint, identifies the Product Backlog items to be delivered, details the tasks and responsibilities for the work to be done, and, most important, determines if a new Baseline is necessary.
In the Agile canon, the input to Sprint Planning comprises the following:
…the Product Backlog, the latest product Increment, projected capacity of the [Project] Team during the Sprint, and past performance of the [Project] Team. [2]
The Product Backlog and latest increment are used in Sprint Grooming, which ensures that the Backlog is up-to-date. Using the latest increment as the starting point, Sprint Planning adds new Backlog Items and Release Points, removes Items and Points no longer required, and flags Items and Points that have been completed.
Measures of past performance and projected capacity are enhanced by AgileES. Supplementing the Agile Release Date estimate and the familiar Agile Burndown chart, AgileES offers quantitative measures of schedule performance (SPIt and Rate of Discovery) and estimates of future performance (EACt and ECDt). The quantification means that thresholds can be set and monitored to guide Sprint planning.
Past Performance: The SPIt measures the efficiency of past schedule performance. Using the example introduced in Step 5, suppose that the Contingency allowance is 10% and the Reserve allowance is an additional 5%. Figure 1 shows the SPIt threshold values; their application is explained below.
Figure 1
Individual readings in the .9 to 1.0 range are not a source of concern. It is likely that such shortfalls will be balanced by higher efficiency in the future. Similarly, individual readings in the 1.0 to 1.1 range are not an issue. Such variations naturally occur in projects. In both cases, no further action is needed.
If individual readings occur outside the thresholds labeled as “Green”, the response is different. In the .8-to-.9 range, and even more so, in the sub-.8 range, the poor performance might not be redeemable by future high performance. Such readings indicate a serious shortfall in productivity and require further action.
Even if the project appears to be highly efficient, there might be problems. Efficiency above 1.1, and especially above 1.2, often indicates that the original plan was based on “low ball” productivity estimates. If so, the plan for the next Sprint must be adjusted.
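Since Figure 1 is not reproduced here, the following sketch assumes zone boundaries taken from the ranges discussed above (0.9/1.1 and 0.8/1.2); "Yellow" is an assumed label for the intermediate band, as the text names only "Green" and "Red":

```python
def spit(earned_schedule: float, actual_time: float) -> float:
    """Schedule Performance Index (time): Earned Schedule / Actual Time."""
    return earned_schedule / actual_time

def spit_zone(reading: float) -> str:
    """Classify an SPIt reading against the example's thresholds.
    Boundaries (0.9/1.1 and 0.8/1.2) are assumed from the ranges
    discussed in the text; exact Figure 1 values may differ."""
    if 0.9 <= reading <= 1.1:
        return "Green"   # normal variation; no further action needed
    if 0.8 <= reading < 0.9 or 1.1 < reading <= 1.2:
        return "Yellow"  # concerning shortfall or possible low-ball plan
    return "Red"         # serious problem; immediate response required

# Example: 4.5 Sprints of schedule earned in 5.0 elapsed Sprints
print(spit_zone(spit(4.5, 5.0)))  # → Green (SPIt = 0.90)
```

The classification alone does not decide the response; as described below, trends and threshold breaches also factor in.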
Dramatic SPIt changes (say, 20% or more) from one Sprint to the next are another sign of problems. While these differences are frequently caused by failures in reporting, there are cases where the project team has made a sudden change in tactics, causing productivity to dive or to soar. Such tactical changes warrant further action.
Threshold breaches are another worrisome type of change. Even if the Sprint-over-Sprint change is small, when the SPIt moves into an efficiency level other than the one labeled as “Green”, follow-up action is required.
Finally, a series of related SPIt readings indicates a trend. At ProjectFlightDeck, we generally build action plans for Sprint adjustments based on trends, rather than on individual readings. For long-running projects, we require three or more consecutive readings headed in the same direction to mark a trend. For short-term projects, we reduce the required number of consecutive readings.
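The trend rule can be sketched as a check for a monotone run of readings; the run length of three follows the text's rule for long-running projects, and the function name is illustrative:

```python
def has_trend(readings, run=3):
    """True if `run` or more consecutive readings move in the same
    direction (e.g. 0.95, 0.90, 0.85 is a 3-reading downward trend).
    Use run=3 for long-running projects, a lower value for short-term
    ones, per the rule of thumb in the text."""
    if len(readings) < run:
        return False
    direction = 0  # +1 rising, -1 falling, 0 no established direction
    count = 1      # length of the current monotone run of readings
    for prev, cur in zip(readings, readings[1:]):
        step = (cur > prev) - (cur < prev)
        if step != 0 and step == direction:
            count += 1
        else:
            direction = step
            count = 2 if step != 0 else 1
        if step != 0 and count >= run:
            return True
    return False

print(has_trend([1.0, 0.95, 0.90]))       # → True: three readings falling
print(has_trend([1.0, 0.95, 1.0, 0.9]))   # → False: direction keeps reversing
```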
In most cases, if the project is labeled as “Red”, the response is immediate. For cases that do not warrant a “Red” label, the response is essentially the same, but it is moderately paced, involves smaller adjustments, and employs low-key messaging.
Whether it is an individual reading or a trend that demands further action, the generic response is the same: identification of potential problems, root-cause analysis, action planning for remediation, and stakeholder communication. Remediating actions vary widely but commonly include rapid, and sometimes large, shifts in the number and size of Backlog Items in a Sprint, and even in the number and expertise of team members.
Future Performance: Past performance reflects historical capacity; if capacity stays similar, past performance is also a guide to future performance. The completion estimates (EACt and ECDt) use past performance to calculate expected future performance. For simplicity, the following discussion is framed only in terms of EACt. Analogous comments apply to ECDt because it is simply the Start Date plus EACt.
Again, using the example cited previously, suppose that the Deadline is 8 Sprints, Contingency is 1 Sprint, and Reserve is also 1 Sprint. Figure 2 shows the EACt threshold values for such a case.
Figure 2
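Putting the formulas and Figure 2 together, a minimal sketch: the zone boundaries (8, 9, and 10 Sprints) are assumed from the example's Deadline, Contingency, and Reserve, the EACt formula is the common Earned Schedule short form (EACt = PD / SPIt), and the two-week Sprint length is illustrative:

```python
from datetime import date, timedelta

SPRINT_LENGTH_DAYS = 14  # assumed two-week Sprints

def eact(planned_duration: float, spit: float) -> float:
    """Estimate At Completion (time), in Sprints: short-form EACt = PD / SPIt."""
    return planned_duration / spit

def ecdt(start: date, eact_sprints: float) -> date:
    """Estimated Completion Date (time): Start Date plus EACt."""
    return start + timedelta(days=round(eact_sprints * SPRINT_LENGTH_DAYS))

def eact_zone(eact_sprints, deadline=8, contingency=1, reserve=1):
    """Same zones as for SPIt, parameterized in Sprints rather than %.
    Boundary values assumed from the example; Figure 2 may differ."""
    if eact_sprints <= deadline:
        return "Green"   # on track for the Deadline
    if eact_sprints <= deadline + contingency + reserve:
        return "Yellow"  # consuming Contingency, then Reserve
    return "Red"         # exceeds all allowances

print(eact_zone(eact(8, 0.85)))  # → Yellow (EACt ≈ 9.4 Sprints)
```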
The response to deviations in EACt is the same as that for deviations in SPIt, except that the parameters are Sprints rather than percentages. Instead of repeating the response, let’s move on to Rate of Discovery.
Rate of Discovery: The Agile framework recognizes the importance of discovery and learning on projects. In short, Agile welcomes change. When ProjectFlightDeck first used ES on Agile projects, our chief concern was whether there would be sufficient stability to gauge schedule performance. In other words, if all the parts were moving, there might not be a baseline against which we could measure performance.
What we learned is that change is constant and common on Agile projects, but much of the change is what we call “shuffling” and “elaboration”. Net new additions and subtractions are present, but they are neither constant nor common. Details follow.
Figure 3
Shuffling is moving Items between Sprints. We found that maximizing the fungibility of Items enabled us to shift them from one Sprint to another with minimal destabilization.
Elaboration is decomposing an Item into components. Although we try to refine Items into their constituents at the start, we invariably discover that some Items need further decomposition. The impact on stability is softened through risk analysis: it identifies Items likely to require further decomposition, and Contingency is added to cover the risk.
The addition or elimination of a whole Backlog Item occurs, but not as often as we feared. To reduce the effect on stability, we set aside Reserve allowance for such cases. It enables the baseline to remain relatively stable. Still, there are cases which exceed all allowances. These cases do not just weaken the baseline; they demand a whole new one. That’s why a key part of AgileES Sprint planning is deciding if a new baseline is required. (Details will be covered in Step 7.)
The Rate of Discovery is a good indicator of whether or not there is sufficient stability for ES metrics to be meaningful. If the RoD stays within Contingency and Reserve, there is sufficient stability to proceed with the current baseline.
Using the previous example, suppose that the original Minimum was 3500 Planned Release Points. Again, assume Contingency of 10% and Reserve an additional 5%. Figure 4 shows the RoD threshold values for such a case.
Figure 4
As long as the number of Planned Release Points remains within the Contingency allowance, it appears that risks associated with change were correctly identified and sized. There is no need to take further action.
Once the count exceeds Contingency, the risk analysis comes into question. It may be that Items are not as fungible as thought, that decomposition is more extensive than anticipated, or that additions and subtractions exceed expectation. The generic action steps described above need to be taken, and the next Sprint needs to be adjusted accordingly.
If the count goes beyond both Contingency and Reserve, even more dramatic action may be required. The whole baseline might need to be replaced. (Again, see Step 7 for more information.)
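A minimal sketch of the Figure 4 thresholds, with boundary values derived from the example (3,500 points, 10% Contingency, a further 5% Reserve); the exact Figure 4 values and the "Yellow" label are assumed:

```python
BASELINE_POINTS = 3500  # original Minimum from the example
CONTINGENCY = 0.10      # 10% of the baseline
RESERVE = 0.05          # an additional 5%

def rod_zone(planned_points: float) -> str:
    """Classify the current Planned Release Point count against the
    allowances (boundary values assumed from the example)."""
    if planned_points <= BASELINE_POINTS * (1 + CONTINGENCY):
        return "Green"   # change risks correctly identified and sized
    if planned_points <= BASELINE_POINTS * (1 + CONTINGENCY + RESERVE):
        return "Yellow"  # risk analysis in question; adjust next Sprint
    return "Red"         # exceeds all allowances; consider re-baselining

print(rod_zone(3600), rod_zone(3900), rod_zone(4100))  # → Green Yellow Red
```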
Combined Metrics: The EACt is derived directly from the SPIt. So, although not mathematically required, thresholds for the two metrics normally yield the same reading on performance. Individual readings sometimes differ between the metrics, usually because a threshold of one is breached before the other. [3]
Trend lines for the two metrics rarely head in different directions. Any divergence between the two trend lines warrants further investigation.
The Rate of Discovery is an important addition to the ES metrics in Sprint Planning. It can be a decisive factor in determining the type of response that deviations merit. As Agile welcomes change, its projects can be prone to large-scale additions and subtractions. The RoD provides early warning if they are problematic.
As detailed elsewhere, Agile Burndown and Agile Release Date metrics are also used in Sprint planning. [4] Differences between the Agile and AgileES metrics can occur when differential Rates are applied to Release Points. When differences occur, the reason for the difference must be determined.
The most common reason is the replacement of a Sprint’s Planned Release Points by others with different Rates. Sometimes, the substitution keeps the velocity the same but changes the Earned Value. Other times, it changes the velocity but keeps the Earned Value the same. Either the Sprint’s Earned Value differs from its Planned Value, or the actual Release Point count varies from the estimated velocity. The differences are reflected in Agile vs. AgileES metrics for the Sprint.
To keep Agile and AgileES metrics in sync, you need to set a mean Planned Value for each Sprint and to use that value to guide selection of Backlog Items for the next Sprint. Doing so ensures that the Planned Value and, therefore, the Planned Earned Schedule are linear. It follows that burn charts from Agile and AgileES will both depict the baseline as a straight line running from the end of the first Sprint to the end of the last planned Sprint. With the baselines isomorphic, it is fair to compare metrics from Agile and AgileES.
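The mean-Planned-Value approach can be sketched as follows; the 3,500-point total and eight-Sprint horizon reuse the earlier example and are purely illustrative:

```python
def linear_baseline(total_planned_value: float, sprints: int):
    """Mean Planned Value per Sprint, plus the resulting straight-line
    baseline: remaining Planned Value at the end of each Sprint."""
    mean_pv = total_planned_value / sprints
    remaining = [total_planned_value - mean_pv * s for s in range(1, sprints + 1)]
    return mean_pv, remaining

mean_pv, baseline = linear_baseline(3500, 8)
print(mean_pv)       # → 437.5 Planned Release Points per Sprint
print(baseline[-1])  # → 0.0 (burns down to zero at the last Sprint)
```

Selecting Backlog Items for each Sprint so their Planned Value totals roughly `mean_pv` is what keeps both the Agile and AgileES baselines on the same straight line.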
In summary, AgileES brings an array of metrics to schedule performance assessment and correction. It thereby gives Sprint planning a more robust basis for decision making on the next Sprint’s velocity, content, and team membership.
Communication: Once ES metrics are understood and applied to Sprint planning, the results need to be communicated. Consistent with the canonical Agile approach, the project team creates and uses the metrics as part of Sprint planning. The team thus has first-hand knowledge of the results. Communication occurs through participation. No additional action is required.
As for communication outside the team, the Agile canon is silent—there is no mention of Stakeholder communication or status reporting. Agile blogs are not so reticent. They are filled with vigorous attacks on anything that hints of outside communication such as status reporting:
… the idea of a "project status" is utter nonsense in a Scrum or Open Agile setting because it presupposes that you're working toward some enormous Big Bang release or similar milestone. If that's the case, you're simply not doing Agile. [5]
Many Agilists believe that participation in Sprint planning meetings or even Daily Scrums is the only way for Stakeholders to understand how things are going. Doing so eliminates the need for the team to spend any time on outside communication because there is no “outside”. As noted in one blog:
Someone who cannot be bothered to participate in any of these events [i.e., Sprint planning or Daily Scrums] is generally someone not worth involving in the project. [6]
Granted, “pointy-haired” managers à la Dilbert exist, but many more managers take seriously the fiduciary responsibility of their position. They have a duty to understand how their (or, more often, their shareholders’) money is being spent. That means they need to understand how things are going on projects in their portfolio, not just on major releases but on each Sprint.
At the same time, managers often have a wide span of authority, especially given the down-sized organizations that are the rule today. The size of project portfolios makes it practically impossible for managers to participate as a team member on all projects. It is, therefore, up to project teams to provide managers and related Stakeholders with the information they need to perform their duties.
Agile teams sometimes meet the need through demonstrations of working products. Undoubtedly, demos engage Stakeholders’ interest and give them a sense of progress, but they take time to prepare and deliver. They also may not reflect the overall project status. For instance, they do not show work done on non-functional requirements.
AgileES gives project teams a way to meet Stakeholder needs for status information on the whole project and to do so with no incremental impact on the team’s time. The ES Burndown Chart is already being produced as one of the inputs to Sprint planning. It can easily be re-purposed to provide Stakeholders with the information they need.
The ES Burndown Chart paints a quick, easy-to-understand picture of how the project is going. Unlike the Agile Burndown, the ES Burndown works whether uniform or differential Rates are used. And, ES Burndowns can also be used by plan-driven projects. So, in a hybrid portfolio, the manager has a consistent representation of schedule performance across projects.
Figure 5
A few rules-of-thumb help managers interpret the chart (see Figure 5). If the Baseline ES Burndown line is above the ES line, the project is behind. If the Baseline ES Burndown line is below the ES line, the project is ahead. The size, shape, and locations of gaps between the Baseline ES Burndown and ES Burndown indicate whether or not remediation is required. In general, if there is a gap and it is growing over two or three Sprints, it is time to take action.
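The two-to-three-Sprint rule of thumb can be sketched as a check on recent gap growth; the sample values below are illustrative, not taken from Figure 5:

```python
def gap_growing(baseline, actual, sprints=2):
    """True if the gap between the Baseline ES Burndown and the ES
    Burndown has widened over the last `sprints` Sprint-over-Sprint
    intervals -- the two-to-three-Sprint rule of thumb."""
    gaps = [abs(b - a) for b, a in zip(baseline, actual)]
    recent = gaps[-(sprints + 1):]
    return len(recent) == sprints + 1 and all(
        later > earlier for earlier, later in zip(recent, recent[1:]))

# Illustrative remaining-schedule values per Sprint end:
baseline = [8, 7, 6, 5, 4]
actual = [8, 7.2, 6.5, 5.9, 5.4]
print(gap_growing(baseline, actual))  # → True: gap widened two Sprints running
```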
On the project depicted in Figure 5, remediation by the project team started at Sprint 2 (12/31/12) and by management at Sprint 3 (01/14/13). As will become clear in subsequent posts, the project required management action to address its schedule problems. The Burndown Chart was supplemented with additional material to explain status, root causes, action plans, and decisions required. [7]
Notes and References
[1] ES metrics are also an input to Sprint Reviews. The same performance zones, threshold values, and generic action steps are used to assess and adjust Sprint processes during the Reviews.
[3] Deviations are often due to idiosyncrasies in date calculations.
[7] Note that the labels on the horizontal axis of Figure 5 reflect Sprint end dates. We have found the use of dates to be more meaningful for communications outside the team. Within the team, Sprint numbers are more common because they function as shorthand.