Cost Risk Allocation Theory and Practice

Journal of Cost Analysis and Parametrics

Downloadable Files
https://www.iceaaonline.com/ready/wp-content/uploads/2020/07/1941658X.2014.922907.pdf

Abstract:

Risk allocation is the assignment of risk reserves from a total project or portfolio level to individual constituent elements. For example, cost risk at the total project level is allocated to individual work breakdown structure elements. This is a non-trivial exercise in most instances because of issues related to the aggregation of risks, such as the fact that percentiles do not add. For example, if a project is funded at a 70% confidence level, then one cannot simply allocate that funding to work breakdown structure elements by assigning each its 70% confidence level estimate. This is because the resulting sum may (but will not necessarily) be larger than the total 70% confidence estimate for the entire project. One method for allocating risk that has commonly been used in practice, and that has been implemented in a cost estimating integration software package, is to allocate to each element a share proportional to its standard deviation relative to the sum of the standard deviations for all work breakdown structure elements (Sandberg, 2007). Another popular method notes that risk is typically not symmetric and looks at the relative contribution of the element’s variation above the mean or another reference estimate. Dr. Steve Book first presented this concept to a limited Government audience in 1992 and presented it to a wider audience several years later (Book, 1992, 2006). This technique, based on the concept of “need,” has been implemented in the NASA/Air Force Cost Model (Smart, 2005). These contributions represent the current state of the practice in cost analysis. The notion of positive semi-variance as an alternative to the needs method was brought forth by Book (2006) and further propounded by Sandberg (2007). A new method proposed by Hermann (personal communication, 2010) discusses the concept of optimality in risk allocation and proposes a one-sided moment objective function for calculating the optimal allocation.
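To make the percentile-aggregation issue and the two allocation rules concrete, here is a minimal Monte Carlo sketch. The three lognormal work breakdown structure elements and their parameters are illustrative assumptions, not taken from the article, and the "needs" weights use a simplified variant (each element's 70th-percentile cost minus its mean):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Three hypothetical WBS elements with lognormal cost risk
# (illustrative parameters, not from the article).
costs = np.column_stack([
    rng.lognormal(np.log(10), 0.20, n),
    rng.lognormal(np.log(20), 0.25, n),
    rng.lognormal(np.log(15), 0.30, n),
])
total = costs.sum(axis=1)

# 1. Percentiles do not add: summing the element-level 70th
#    percentiles does not reproduce the 70th percentile of the total.
p70_elems = np.percentile(costs, 70, axis=0)
p70_total = np.percentile(total, 70)
print(p70_elems.sum(), p70_total)  # the two values differ

# Risk reserve to allocate: 70% funding level above the mean cost.
reserve = p70_total - total.mean()

# 2. Proportional-standard-deviation allocation (Sandberg, 2007):
#    split the reserve in proportion to each element's sigma.
sigmas = costs.std(axis=0)
alloc_std = reserve * sigmas / sigmas.sum()

# 3. "Needs"-style allocation (after Book): weight each element by its
#    variation above a reference estimate, here p70 minus the mean.
needs = p70_elems - costs.mean(axis=0)
alloc_needs = reserve * needs / needs.sum()

print(alloc_std, alloc_needs)  # both allocations sum to the reserve
```

Both rules distribute the same reserve and differ only in the weights; the needs variant looks only at variation above the reference estimate, which is the asymmetry the abstract describes.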
An older method, developed in the 1990s by Lockheed Martin, assigns equal percentile allocations to all work breakdown structure elements (Goldberg and Weber, 1998). This method claims to be optimal, and Goldberg and Weber (1998) show that, under a very specific assumption, this is true. Aside from Hermann’s paper and the report by Goldberg and Weber on the Lockheed Martin method, cost risk allocation has typically not been associated with optimality. Neither the proportional standard deviation method nor the needs method guarantees that the allocation scheme will be optimal or even necessarily desirable. Indeed, the twin topics of risk measurement and risk allocation have either been treated independently (Book, 2006) or treated as one and the same (Sandberg, 2007). Regardless, the current situation is muddled, with no clear delineation between the two. In this article, the present author introduces to cost analysis the concept of gradient risk allocation, which has recently been used in the areas of finance and insurance (McNeil, Frey, & Embrechts, 2005). Gradient allocation clearly illustrates that the notions of risk measure and risk allocation are distinct but intrinsically linked. This method is shown to be an optimal method for allocation using three distinct arguments: axiomatic, game-theoretic, and economic (optimal is used in this context as desirable or good, not as the minimum or maximum of a specified objective function). It is also shown that the gradient risk allocation method is intrinsically tied to the method used to measure risk, a concept not heretofore considered in cost analysis. Gradient allocation is applied to five risk measures, resulting in five different allocation methods, each optimal for the risk measure from which it is derived.
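The equal-percentile idea reported by Goldberg and Weber can be sketched as a one-dimensional search: find the single confidence level that, applied to every element, exhausts the project budget. All parameters below are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Illustrative WBS element cost draws (assumed lognormal parameters).
costs = np.column_stack([
    rng.lognormal(np.log(10), 0.20, n),
    rng.lognormal(np.log(20), 0.25, n),
    rng.lognormal(np.log(15), 0.30, n),
])
budget = np.percentile(costs.sum(axis=1), 70)  # project funded at 70% confidence

# Equal-percentile allocation: find the single confidence level alpha
# such that giving every element its alpha-th percentile sums to the budget.
def gap(alpha):
    return np.percentile(costs, alpha, axis=0).sum() - budget

lo, hi = 0.0, 100.0
for _ in range(60):            # bisection: gap is increasing in alpha
    mid = (lo + hi) / 2
    if gap(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2
alloc = np.percentile(costs, alpha, axis=0)
print(alpha, alloc.sum())      # alloc sums to the 70% budget
```

Note that the common confidence level `alpha` generally differs from the project-level 70%, which is another face of the fact that percentiles do not add.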
Considerations of when the proportional standard deviation and needs methods are optimal are discussed, and a link between Hermann’s method and the proportional standard deviation method is demonstrated.
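For one of the risk measures the abstract mentions, the standard deviation, the gradient (Euler) allocation described by McNeil, Frey, & Embrechts (2005) has a closed form: since sigma is positively homogeneous of degree one, Euler's theorem assigns each element the share Cov(X_i, S)/sigma(S), where S is the total. The sketch below illustrates this with assumed, correlated lognormal elements (parameters are hypothetical, not from the article):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Correlated lognormal WBS element costs (illustrative parameters).
mean_log = [np.log(10), np.log(20), np.log(15)]
cov_log = [[0.040, 0.010, 0.000],
           [0.010, 0.0625, 0.010],
           [0.000, 0.010, 0.090]]
costs = np.exp(rng.multivariate_normal(mean_log, cov_log, size=n))
total = costs.sum(axis=1)

# Gradient (Euler) allocation for rho(S) = sigma(S): element i's share
# is its covariance with the total, divided by the total's sigma.
sigma_total = total.std(ddof=1)
alloc = np.array([np.cov(costs[:, i], total)[0, 1]
                  for i in range(costs.shape[1])]) / sigma_total

# Full-allocation property: the shares sum exactly to the risk measure.
print(alloc, alloc.sum(), sigma_total)
```

Because the shares depend on each element's covariance with the total, the allocation changes whenever the risk measure does; this is the sense in which risk measurement and risk allocation are distinct but intrinsically linked.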

Authors:

Dr. Christian Smart is the Director of Cost Estimating and Analysis for the Missile Defense Agency. In this capacity, he is responsible for overseeing all cost estimating activities developed and produced by the agency, and directs the work of 100 cost analysts. Prior to joining MDA, Dr. Smart worked as a senior parametric cost analyst and program manager with Science Applications International Corporation. An experienced estimator and analyst, he was responsible for risk analysis and cost integration for NASA’s Ares launch vehicles. Dr. Smart spent several years overseeing improvements and updates to the NASA/Air Force Cost Model and has developed numerous cost models and techniques that are used by Goddard Space Flight Center, Marshall Space Flight Center, and NASA HQ. In 2010, he received an Exceptional Public Service Medal from NASA for his contributions to the Ares I Joint Cost Schedule Confidence Level Analysis and his support for the Human Space Flight Review Panel led by Norm Augustine. He has won seven best paper awards at ISPA, SCEA, and ICEAA conferences, including five best overall paper awards. Dr. Smart was named the 2009 Parametrician of the Year by ISPA. He is an ICEAA certified cost estimator/analyst (CCEA). Dr. Smart is a past president of the Greater Alabama Chapter of SCEA, a past regional VP for SCEA, and is the managing editor for The Journal of Cost Analysis and Parametrics. Dr. Smart earned bachelor’s degrees in Economics and Mathematics from Jacksonville State University, and a Ph.D. in Applied Mathematics from the University of Alabama in Huntsville.