How Many Iterations Are Enough?
A common question in uncertainty analysis is: how many iterations are needed to obtain stable, reliable simulation results? The answer is always: it depends. Many factors influence the stability of the uncertainty simulation: how many uncertain elements are modeled; how many layers deep uncertainty is assigned; to what degree the uncertainty distributions are skewed; what proportion of the distributions are left-, center- or right-skewed; the proportion of “simple” versus “complex” distribution shapes; how many cyclic relationships exist in the WBS; whether correlation is applied; and to what degree functional correlation is present.
Central to answering the question “how many iterations are necessary” is defining the appropriate test. What statistics should be monitored? How should results be calculated and displayed? How do you define “good enough”? And if you reach “good enough” at a certain number of iterations, does that guarantee additional iterations cannot yield “worse” results? These questions are addressed.
Popular risk tools have features to test the “convergence” or “accuracy” of simulation results, but they all use different methods. Rather than relying on any specific tool’s method, a simple, generic approach that uses the simulation results themselves is introduced, along with simple charts to visualize where the simulation stabilizes. The following question is also addressed: can the data from a single 10,000-iteration simulation be used, or is it necessary to run independent simulations at some interval (100, 200, 300, …, 10,000) to properly assess convergence? The recommended approach is applied to simulation results obtained from Crystal Ball, @Risk and ACEIT using the same, reasonably complex model (dozens of distributions with functional and applied correlations). The results show that all three tools converge in a similar manner.
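The single-run idea described above can be sketched in a few lines: store the iteration results from one simulation, compute cumulative statistics at regular checkpoints, and watch for the point at which those statistics stop moving. The sketch below is purely illustrative, not the paper’s actual model or criteria: the lognormal draws stand in for stored total-cost results, and the `converged_at` function and its 0.5% tolerance are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for stored results from a single 10,000-iteration simulation
# (a right-skewed lognormal "total cost"; purely illustrative data).
draws = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)

# Checkpoints at 100, 200, ..., 10,000 iterations, mirroring the
# intervals mentioned in the abstract.
checkpoints = range(100, 10_001, 100)

def running_stats(draws, checkpoints, pct=80):
    """Cumulative mean and percentile using the first n draws at each checkpoint."""
    rows = []
    for n in checkpoints:
        subset = draws[:n]
        rows.append((n, subset.mean(), np.percentile(subset, pct)))
    return rows

stats = running_stats(draws, checkpoints)

def converged_at(values, tol=0.005, streak=5):
    """Hypothetical "good enough" test: the statistic changed by less than
    tol (relative) between consecutive checkpoints for `streak` checks in a row."""
    ok = 0
    for prev, curr in zip(values, values[1:]):
        ok = ok + 1 if abs(curr - prev) / abs(prev) < tol else 0
        if ok >= streak:
            return True
    return False
```

Plotting the cumulative mean and percentile against the checkpoint count gives the kind of stabilization chart the abstract proposes; note that a statistic passing this test at some n does not guarantee it cannot drift again at a larger n, which is exactly the caveat raised above.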
Several large and small models are also analyzed to illustrate results at various levels within the same model, and “good enough” criteria are proposed.
Alfred earned a Bachelor of Mechanical Engineering degree from the Royal Military College of Canada and a Master of Science with Distinction in naval architecture from University College London, England. He spent 21 years in the Canadian Navy driving submarines (Navigator, Operations Officer) and ten years as a naval architect. He has over 20 years’ experience leading, executing or contributing to life-cycle cost model development and cost uncertainty analysis for a wide variety of military, Coast Guard, NASA and foreign projects. He has been with Tecolote since 1995 and since 2000 has been the General Manager for Tecolote’s Software Products/Services Group, responsible for the development, distribution and support of a variety of web and desktop tools, including ACEIT. Alfred has delivered numerous papers on cost risk analysis topics and was the lead writer of the AFCAA Cost Risk and Uncertainty Handbook.