Big Data Meets Earned Value Management (EV-1)
Glen Alleman – Program Planning and Controls Lead, Niwot Ridge, LLC.
Thomas Coonce – Adjunct Research Staff Member, Institute for Defense Analyses
When the result of an action is of consequence, but cannot be known in advance with precision, forecasting may reduce decision risk by supplying additional information about the possible outcomes.
Data obtained from observations collected sequentially over time are common. Earned Value Management is an example: project performance data (BCWP) is collected in periodic status reports against the planned work (BCWS), and a forecast of future performance is needed to manage the program.
With this periodic data, a cumulative Cost Performance Index (CPIcum) and Schedule Performance Index (SPIcum) are produced. As BCWP and ACWP accumulate, period-by-period variances are washed out, leaving only the cumulative data. These cumulative, single-point values are then used to calculate an Estimate At Completion (EAC), an Estimate To Complete (ETC), and a To Complete Performance Index (TCPI) using algebraic formulas. None of the program's past statistical behavior enters these calculations.
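For reference, the algebraic point-value calculations described above can be sketched in a few lines. These are the standard cumulative EVM formulas (a CPI-based EAC and a TCPI measured against the Budget At Completion); the input figures are made up for illustration:

```python
def evm_point_forecasts(bac, bcwp, acwp, bcws):
    """Standard algebraic EVM forecasts from cumulative point values.

    All inputs are cumulative-to-date values; bac is the Budget At
    Completion. Only current totals are used -- no period history."""
    cpi = bcwp / acwp                    # cumulative Cost Performance Index
    spi = bcwp / bcws                    # cumulative Schedule Performance Index
    eac = acwp + (bac - bcwp) / cpi      # Estimate At Completion (CPI-based)
    etc = eac - acwp                     # Estimate To Complete
    tcpi = (bac - bcwp) / (bac - acwp)   # efficiency needed to finish at BAC
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": etc, "TCPI": tcpi}

# Example: $1,000K BAC, $400K earned, $500K actual cost, $450K planned to date
print(evm_point_forecasts(1000, 400, 500, 450))
# CPI = 0.8, EAC = 1250, TCPI = 1.2 -- computed entirely from the
# cumulative totals; the period-by-period variances are invisible here
```

Note that every output is determined by four cumulative numbers, which is precisely the limitation the statistical approach below addresses.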
Earned Value Management System engines maintain period-by-period data in their underlying databases. With this time-series performance information, trends, cost and schedule forecasts, and confidence levels for those forecasts can be calculated using probabilistic techniques based on the Autoregressive Integrated Moving Average (ARIMA) model. Statistical forecasting techniques can be univariate (one driving variable) or multivariate (more than one driving variable). The three components of ARIMA can each be tuned. Autoregression (AR) models each observation as a linear combination of previously observed values up to a maximum lag, plus an error term. The Moving Average (MA) component models each observation as a linear combination of previous random error terms up to a maximum lag. Integration (I) differences the series to remove trend, so that the AR and MA components operate on a stationary process; together the three form a powerful tool for forecasting future behavior from past behavior.
Using ARIMA in place of cumulative, single-point values provides a statistically informed EAC and ETC in ways not available through standard Earned Value Management calculations. The observed behaviors might appear as random, orderly, or noisy processes; ARIMA reveals underlying trends that standard EVM engine calculations cannot. Simple algebraic forecasting of EAC fails to recognize the underlying statistical nature of a project's performance measures, and a simple moving average of past cumulative observations hides that statistical nature just as thoroughly. By tuning its autoregression and moving-average attributes, ARIMA can reveal future performance not visible with simple algebra or linear smoothing.
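As a minimal sketch of the idea, the AR component alone (an ARIMA(1,0,0) model) can be fit to a monthly CPI series by ordinary least squares and iterated forward. The CPI observations below are hypothetical; a production analysis would use a full ARIMA implementation such as the one in the statsmodels library rather than this hand-rolled special case:

```python
def ar1_forecast(series, horizon):
    """Fit y[t] = a + b*y[t-1] by least squares -- the AR(1) special
    case of ARIMA(1,0,0) -- and iterate the fit forward `horizon`
    periods to forecast future values from the observed history."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))     # slope from lag-1 pairs
    a = my - b * mx                              # intercept
    out, last = [], series[-1]
    for _ in range(horizon):
        last = a + b * last                      # one-step-ahead, chained
        out.append(last)
    return out

# Hypothetical monthly cumulative CPI observations for a program
cpi = [1.02, 0.99, 0.97, 0.96, 0.94, 0.93, 0.93, 0.92]
print(ar1_forecast(cpi, 3))   # three-month CPI forecast
```

Unlike the algebraic EAC, this forecast is driven by the month-to-month structure of the series, so a persistent downward drift in CPI shows up in the projection instead of being averaged away.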
With these leading indicators of cost and schedule performance, the program manager can take action to keep the program GREEN before it is too late.
Don’t Let EVM Data Mislead You (EV-2)
Steve Sheamer – Herren Associates
EVM data is a popular data source for cost estimators, and for good reason; in theory, it should provide most of the data needed to develop an estimate for a program. For completed programs, it provides historical costs by WBS; for programs in work, it provides a measure of the work completed, the work remaining, and a forecast of the work remaining. But in an era of frequent cost overruns, estimates built using EVM data often fail to forecast the extent of program overruns. Leveraging real-world examples and first-hand experience with several of DoD's largest acquisition programs, this paper will discuss common mistakes that cost estimators make when using EVM data to develop a cost forecast. The author will also provide recommendations to help those in the cost community better utilize EVM data and clearly understand potential pitfalls that could lead to significant estimating errors.
Trust but Verify – An Improved Estimating Technique Using the Integrated Master Schedule (IMS) (EV-3)
Eric Lofgren – Cost Analyst, Technomics, Inc.
Management has long wondered why the Integrated Master Schedule (IMS) fails to give advanced warning of impending schedule delays. The oft-touted Government Accountability Office (GAO) 14-Point Check for Schedule Quality analyzes schedule health using key metrics, leading one to assume that such a test authenticates schedule realism. Why, then, do practitioners find themselves caught off guard by slips when their IMS appears in good health? Answers to this question follow when one attempts to independently trace IMS development over time. This paper presents the results, including a significantly improved new metric for independently estimating final schedule duration, as well as a startling conclusion about project planning and schedule maintenance.
As “living documents,” schedules evolve with the circumstances affecting a program. This implies that while looking at historical schedules may garner some additional context, only the current schedule incorporates the relevant information necessary for calculating the project end date. However, all maturing schedules descend from an original baseline agreed to by both client and performer. The metric proposed in this paper takes such a baseline and independently tracks near-term activities from the original schedule through subsequent schedules. In theory, the process mirrors what scheduling software does. A key difference is that the metric ignores behavior often associated with risk: baseline changes; task re-sequencing; insertion of hard constraints, leads, and lags; and so on.
This paper relies on completed contracts for which schedules were available throughout the duration. With dozens of contract schedules analyzed across several Major Defense Acquisition Programs (MDAPs), all complete contracts showed remarkably consistent results: before contracts reached their schedule mid-points, the independent metric had quickly jumped up and stabilized near the true eventual end date (in most cases, a significant schedule slip). At that same point, while the contractor IMS deviated on average 16 months from the true end date, the independent metric found the value to within 2 months. Composite measures of accuracy abstract away from the power of this new metric, which is demonstrated by individual case studies.
Because the independent metric far outperformed the IMS in predicting contract schedule, one may conclude that the performance of early, well-defined tasks compared to the initial baseline makes a good leading indicator of where the final schedule will end up. It also implies that the schedule is unlikely to be saved by managers working around issues, schedulers entering constraints, or CAMs planning optimistically, even if the reshuffling works in theory. This metric finds a strong place with decision makers for its early warning capabilities and its ease of visual comprehension. It also affords sufficient detail for the analyst to have pointed discussions with the scheduler, focusing attention on the sources of potential risk. Such a form of analysis becomes more important as project complexity grows, ensuring cross-IMS sanity and providing a second assessment of contract schedule.
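The abstract does not give the metric's formula, but the general idea of using early, well-defined tasks against the original baseline as a leading indicator can be illustrated with a hypothetical slip-ratio extrapolation. This is a stand-in sketch, not the paper's actual metric, and all task data here is invented:

```python
def slip_ratio_forecast(baseline_months, early_tasks):
    """Hypothetical illustration of a baseline-tracking schedule metric:
    scale the contract's baseline duration by the average slip ratio of
    completed early tasks (actual vs. baseline duration), traced from
    the original IMS rather than from the current, re-planned one.

    early_tasks: list of (baseline_duration, actual_duration) pairs
    for near-term activities that have finished."""
    ratios = [actual / planned for planned, actual in early_tasks]
    avg_slip = sum(ratios) / len(ratios)
    return baseline_months * avg_slip

# 36-month baseline; early tasks (durations in months) ran 25-50% long
tasks = [(3, 4), (2, 3), (4, 5), (6, 7.5)]
print(slip_ratio_forecast(36, tasks))   # forecast final duration, months
```

The point of such a construction is that it is immune to the re-baselining and constraint insertion the paper identifies: it measures performance only against the original agreement.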
A Cure For Unanticipated Cost and Schedule Growth (EV-4)
Thomas Coonce – Adjunct Research Staff Member, Institute for Defense Analyses
Glen Alleman – Program Planning and Controls Lead, Niwot Ridge, LLC.
Federal programs (DoD and civilian) often fail to deliver all that was promised, frequently cost more than estimated, and are often late.
Delivering less capability than promised while exceeding planned cost and schedule undermines the Federal government's credibility with taxpayers and erodes public support for these programs.
Many reasons for cost and schedule growth have been hypothesized and documented. The authors propose that government and contractors use the historical variability of past programs to establish cost and schedule estimates at the outset, and periodically update these estimates with current risks, to increase the probability of program success. For this to happen, the authors suggest a number of changes to the estimating, acquisition, and contracting processes.
For government program offices, these include:
• Develop top-level probabilistic cost and schedule estimates based on the statistical variability of past programs with similar risks.
• Propose cost and schedule targets that have at least a Joint Confidence Level (JCL) of 70 percent.
• Develop a draft Integrated Master Plan (IMP) to achieve the desired capabilities and performance measures at this JCL.
• Through Request for Information (RFIs), seek advice from industry on the likelihood of delivering the stated capabilities within the proposed cost and schedule estimates, using a draft IMP and initial risk registers.
• Using this industry feedback, revise needed capabilities, cost and schedule to be included in the request for proposals (RFPs).
• Include realism of technical, cost, schedule, and risk assumptions as a criterion for awarding contracts.
• Do not award contracts that have less than 50% JCL of meeting both cost and schedule targets.
• Ensure contractors have initial Program Management Baselines (PMBs) that have a JCL greater than 35%.
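A Joint Confidence Level of the kind these recommendations reference is typically estimated by Monte Carlo simulation: sample correlated cost and schedule outcomes and count the fraction that meets both targets simultaneously. A minimal sketch, with all distribution parameters assumed purely for illustration:

```python
import random

def joint_confidence_level(samples, cost_target, sched_target):
    """JCL: the fraction of simulated outcomes that meet BOTH the cost
    target and the schedule target at once (not each one separately)."""
    hits = sum(1 for c, s in samples if c <= cost_target and s <= sched_target)
    return hits / len(samples)

# Illustrative correlated outcomes: a shared risk factor drives both
# cost and schedule, standing in for the historical variability of
# past programs (all parameters here are hypothetical).
rng = random.Random(1)
samples = []
for _ in range(20_000):
    shared = rng.gauss(0, 1)                      # common risk driver
    cost = 100 + 15 * shared + rng.gauss(0, 5)    # $M
    sched = 36 + 4 * shared + rng.gauss(0, 2)     # months
    samples.append((cost, sched))

print(joint_confidence_level(samples, 110, 40))   # JCL at $110M / 40 months
```

Because cost and schedule risks are positively correlated, the joint probability is lower than either marginal confidence level alone, which is why a 70 percent JCL is a materially stronger requirement than 70 percent confidence on cost and schedule separately.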
The authors also suggest a number of improved processes for contractors: submitting, in response to RFPs, an updated IMP and schedule distributions based on their historical data, and applying Technical Performance Measures (TPMs) to objectively assess Earned Value Management performance (BCWP).
Unleashing the Power of MS Excel as an EVM Analysis Tool (EV-5)
Allen Gaudelli – Herren Associates
Steve Sheamer – Herren Associates
What do you do if you need to analyze or report on EVM data and you don't have access to (or can't afford) the latest industry software? Nearly everyone has a very powerful analysis and reporting tool on their desktop with the flexibility and capability to consolidate cost, schedule, and risk drivers into a single view. In this presentation, we will show you how to leverage the inherent capabilities of Microsoft Excel to build interactive EVM dashboards that rival the reporting capabilities of many of the industry-leading software tools. The flexibility of Microsoft Excel also makes the dashboards customizable, so that the specific requirements of any program can be satisfied at no additional cost. The use of these dashboards improves a program manager's ability to identify faulty contractor reporting, address contractor performance issues, and gain insight into lower-level details of cost, schedule, and risk drivers. To complement the EVM dashboard, we will also demonstrate a VBA tool that expedites transferring the content of the EVM dashboard to PowerPoint, easing the burden of copying, pasting, and positioning multiple tables and charts into a standard brief. Used together, these tools have a synergistic effect resulting in superior program management.
Design to Cost: Misunderstood and misapplied (EV-6)
Erin Barkel – Canadian Parliamentary Budget Office
Tolga Yalkin – Canadian Parliamentary Budget Office
The Canadian Department of Defence maintains that concerns over cost overruns are overstated because it adopts a design to cost approach. According to the US Government, design to cost “embodies the early establishment of realistic but rigorous cost targets and a determined effort to achieve them.” From the beginning of a project to its completion, “[c]ost is addressed on a continuing basis as part of a system’s development and production process.”
In numerous projects, the statements, actions, and omissions of the Canadian Department of Defence suggest that it views design to cost as involving the second but not the first step. In other words, while it says that it addresses cost on a continuing basis, it fails to set “realistic but rigorous” budgets.
The failure to set realistic budgets can have serious consequences for programs. If the initial budget is not realistic, it may become necessary to re-baseline the program, starting with a completely new design in the hopes that the new design will fit within the budget. This phenomenon has been witnessed in a number of Canadian defence procurements, resulting in unnecessary expense and delay.
This paper explores a handful of Canadian procurements, illustrating the pitfalls of failing to set realistic budgets at the outset. It suggests that the Canadian Department of Defence's refrain that design to cost minimizes budgetary concerns is not borne out. Finally, it argues for more rigorous cost analysis before budgets are set, and provides suggestions on how this might be accomplished.