The Earned Schedule Exchange


May 15, 2023
PGCS Webinar 6: Mario Vanhoucke’s 20-Yr Journey in One Presentation

 

At the start of his “journey”, Mario focused on algorithms for constructing simulated schedules. He found that his students in business school were more interested in risk analysis and project control than in algorithms.

In 2003, things changed. Walt Lipke published “Schedule Is Different”. For Mario, it was a “seminal” work—it strongly influenced him to move in a different direction.

First, Mario embedded Earned Schedule in his research. In a simulation study with Stephan Vandevoorde, he confirmed that ES forecasts project duration more accurately than the other earned-value-based metrics.
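For readers newer to the metric, here is a minimal Python sketch of the standard ES calculation from Lipke’s formulation (the planned-value figures are invented for illustration; this is not code from the study). ES locates the time at which the planned-value curve reaches the currently earned value, and SPI(t) = ES / AT.

# A minimal sketch of Lipke's Earned Schedule formula, with invented numbers.
# pv holds cumulative Planned Value at the end of periods 1..n; ev is the
# cumulative Earned Value observed at actual time AT (in periods).

def earned_schedule(pv, ev):
    """Return ES: the time at which PV should have equaled the current EV."""
    # C = last whole period whose cumulative PV does not exceed EV
    c = 0
    for i, pv_i in enumerate(pv, start=1):
        if pv_i <= ev:
            c = i
        else:
            break
    if c >= len(pv):                # EV has reached (or passed) total PV
        return float(len(pv))
    pv_c = pv[c - 1] if c > 0 else 0.0
    # Linear interpolation within the incomplete period
    # (assumes PV rises during that period, i.e. no flat spots)
    return c + (ev - pv_c) / (pv[c] - pv_c)

# Example: a 10-period plan, reviewed at actual time AT = 5
pv = [10, 25, 45, 70, 100, 130, 155, 175, 190, 200]
ev, at = 55.0, 5
es = earned_schedule(pv, ev)
spi_t = es / at                     # time-based schedule performance index
print(f"ES = {es:.2f} periods, SPI(t) = {spi_t:.2f}")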

Then a separate study showed that, given true scenarios, ES beat other metrics, but for false scenarios it was not as good. Mario interpreted this as showing that ES is harder to mislead than other metrics: “garbage in, garbage out,” as he put it.

Another study showed that ES performed well in serial schedules, but its results were not as good in parallel schedules.

Around 2008, Mario researched the effect of control on project performance, measuring it by comparing the effort that control required with the impact it produced. The best combination is low-effort control paired with high-impact corrective action to recover the schedule.

Given this yardstick, he found that standard measures such as Earned Schedule worked better with serial schedules than with parallel ones.

From 2009 to 2013, Mario moved beyond the academic environment, engaging professionals in the field. He obtained a large research grant that enabled him to publish the formulas behind his work on project control.

The research showed that standard measures were “easy” and produced “good” results, whereas statistical control was “difficult” but produced “very good” results. Mario developed Analytical Project Control, which was both “easy” and “very good”. He connected it to corrective actions and validated it empirically.

In 2015, Mario addressed the gap between simulated and real schedule data. The former were academic, general, and statistically distributed; the latter were professional, specific, and not statistically distributed. His data-calibration procedure performed statistical analysis of historical data, then applied curve fitting and human expertise to calibrate the distributions. Calibrated data produced superior predictions.
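As an illustration of the curve-fitting step only (not Mario’s actual calibration procedure, which also relies on human judgment, and using invented data), a sketch in Python with SciPy might look like this:

# Illustrative curve fitting on invented historical data -- not Mario's
# actual calibration method. observed_ratios stands in for historical
# actual/planned duration ratios gathered from past projects.
import numpy as np
from scipy import stats

observed_ratios = np.array([0.9, 1.0, 1.1, 1.3, 1.0, 1.6, 1.2, 0.95, 1.4, 1.05])

# Fit a lognormal distribution to the historical data
shape, loc, scale = stats.lognorm.fit(observed_ratios, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution
ks_stat, p_value = stats.kstest(observed_ratios, 'lognorm', args=(shape, loc, scale))
print(f"fitted lognormal: shape={shape:.3f}, scale={scale:.3f}, KS p-value={p_value:.2f}")

# The calibrated distribution can then drive Monte Carlo schedule simulations
simulated_ratios = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=1000)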

The following years saw the testing and evolution of calibration. Each successive round improved the accuracy of the fitted distributions, scaling up from 50% to 97%. In short: give the data to calibration, and get back the real distribution.

And Mario’s journey continues… He and his team are researching resource skills, machine learning, protective/preventive risk control, action optimization, and contracts.

Finally, in addition to the “hard” skills of analysis and calculus, Mario foresees research into “soft” skills such as creativity and communications.
