2026 Workshop Breakout Sessions


Artificial Intelligence Track

Counting the Cost of Bias: Unveiling Hidden Bias in AI and Cost Data
Artificial Intelligence (AI01)
Patrick Casey

As cost analysts increasingly rely on data analytics and AI, understanding bias has become essential to maintaining estimate integrity. This session explores hidden biases that permeate both AI systems and traditional cost datasets: biases rooted in human judgment, sampling limitations, and historical inequities. Real-world examples, from biased algorithms to cost models trained on incomplete project data, illustrate how unrecognized bias can distort forecasts and decisions. Attendees will learn how AI trained on internet data can magnify existing societal and institutional biases, creating a false sense of objectivity. The session connects these insights to common pitfalls in cost estimating, offering frameworks for detecting and mitigating bias through better data practices, validation checks, and human-in-the-loop oversight. Participants will leave with actionable steps for improving fairness, accuracy, and transparency in AI-enabled and traditional cost-estimating workflows—ensuring that models serve analysis, not shape it.

Keywords: Bias, Decision Analysis, AI


AI as an Assistive Partner in the GAO 12-Step Process
Artificial Intelligence (AI02)
Darrin L. DeReus, Ph.D.
K. Randall Lantz

This presentation introduces a conceptual framework for incorporating artificial intelligence into the GAO 12-Step Cost Estimating Process as an assistive tool—not a replacement for human cost estimators. The discussion explores how AI can enhance and automate each step of the process, including normalization, documentation, and risk modeling, while maintaining transparency and accountability. By using natural language models, predictive analytics, and intelligent automation, cost professionals can streamline routine tasks and devote more time to critical analysis, validation, and communication. The concept highlights how AI can serve as a force multiplier to improve consistency, timeliness, and analytical rigor across all twelve GAO steps. Attendees will gain insights into the practical opportunities, challenges, and ethical considerations of integrating AI responsibly within cost estimating to strengthen—not substitute—the judgment and expertise of professional estimators.

Keywords: Data Collection, Government, Methods, Modeling, Monte Carlo, Risk, Uncertainty, Documentation, Artificial Intelligence in Cost Estimating, GAO 12-Step Process, AI-Assisted Analysis, Data Normalization and Automation, Predictive Analytics, Cost Risk and Sensitivity, Human–Machine Collaboration, Digital Transformation in Estimating, Federal Acquisition Cost Estimating, ICEAA Best Practices


Rebuilding Cost Confidence Amid Economic Volatility and the Rise of AI
Artificial Intelligence (AI03)
Brian D. Glauser

Global economic turbulence, workforce shifts, and the rise of artificial intelligence are redefining what “credible” cost estimating means. As inflation, supply instability, and AI-generated analytics alter data availability and interpretation, the profession faces new challenges in maintaining realism and trust. This paper explores strategies for preserving cost credibility when traditional benchmarks are in flux. It examines how data governance, adaptive calibration, and transparent analytic processes can restore confidence across government and industry. Attendees will gain insight into how macroeconomic uncertainty and digital automation are reshaping both the policy environment and the practical expectations of cost professionals. The session emphasizes that in today’s environment, the most valuable estimator skill may be agility: balancing methodological rigor with adaptive judgment in an AI-influenced economy.

Keywords: Cost Management, Data-Driven, Government, Methods, Regression, Risk, Uncertainty, AI, Policy, Economic Trends, Cost Credibility


AI & Cost Estimation: LLMs and Data Science’s Expanding Role in Cost Estimating
Artificial Intelligence (AI04)
Daniel Harper
Kevin McKeel

AI/LLM tools such as Claude, Grok, and ChatGPT will take on an expanded presence in cost analysis over the next 5 years and beyond. The pace of change is incredible – from year to year we’re seeing enormous changes not just in models but in best practices. This presentation will address key developments over the past year to keep you informed of how these changes will affect you in estimating and acquisition. The goal of this presentation is to give you, the professional estimator, a primer on AI and to raise your comfort level with how AI should (and should not!) be used in our world. Our presentation will also touch on AI and data visualization; the economic and environmental impact of AI (chip manufacturing, data centers); tips on AI prompting; and using AI securely.

Keywords: Artificial Intelligence, Data Science, Machine Learning, NLP, ChatGPT, LLMs


A Fool with an AI Tool is Still a Fool
Artificial Intelligence (AI05)
Carol Dekkers
Dan French

We explore the integration of Artificial Intelligence (AI) into software project sizing using the IFPUG function point methodology and into software estimation. As organizations increasingly seek efficiency and accuracy in project estimation, AI offers transformative potential by automating the function point counting process. By leveraging machine learning algorithms, AI can analyze historical project data, identify patterns, and predict function point counts with enhanced precision. This not only streamlines the estimation process but also reduces human error and bias. AI-driven tools provide insights into project complexity and resource allocation, enabling more informed decision-making. The synergy between AI and IFPUG function points represents a significant advancement in software estimating practices, facilitating better project planning and management. This approach enhances the reliability of software project estimates, paving the way for successful project delivery in an increasingly dynamic technological landscape.

Keywords: Program Management, Software


AI, the Great and Powerful: Cautious Enthusiasm & Purpose in PA&E’s AI Strategy
Artificial Intelligence (AI06)
Maura Anne Lapoff
Erika Rivera
Dr. Charles Loelius
Dr. Zachary Matheson

Today’s trope is that AI is either entirely useless or will solve all our problems. So, how can the cost community ensure that AI is truly great and powerful and not just a “confidence man” behind the curtain? The NNSA Office of Programming, Analysis, and Evaluation (PA&E) is answering this very question by developing an in-house AI Strategy and Implementation Plan, improving the efficacy of applied AI without risking mission. This strategy thoughtfully and intentionally approaches AI for programmatic cost estimation and PPBE, built upon the sturdy foundation of an AI-literate workforce with AI-ready data. We will describe the strategy development process, how it supports NNSA’s cost estimating goals, current progress toward strategic objectives, PA&E’s development of an AI Platform Evaluation, implementation challenges, and lessons learned. Further, we will warn of what happens when you stray from the yellow brick road by leveraging AI tools without a coherent AI strategy.

Keywords: Data-Driven, Government, Story Points, Artificial Intelligence, Strategy


Estimating with the AI Stack
Artificial Intelligence (AI07)
Patrick McGarrity
Chad Lucas
Rainey Southworth

Integrating Artificial Intelligence Systems (AIS) on board military assets is being aggressively pursued, yet a framework for developing predictive programmatic models (PPM) has not been standardized. In this presentation, we propose using the Carnegie Mellon AI Stack concept to evaluate AIS. AI is not just advanced algorithms but a network dependent on several layers of computational development and human capital. Decomposing the elements integral to a successful AIS empowers analysts to assess the operational efficiency of AI solutions. The Carnegie Mellon AI Stack approach allows us to develop a WBS and to evaluate AI systems during AoAs to determine the total system cost. While AI at the top level of the stack may be demonstrated at a high TRL, several factors in the underlying stack must be considered to determine feasibility, mission effectiveness, and cost-effectiveness.

Keywords: Cost Management, Cost/Benefit Analysis, Risk


From Code to Cognition: Advancing Cost Estimation for AI-Enabled Systems
Artificial Intelligence (AI08)
Arlene F. Minkiewicz
Dr. Vivian Tang
Alexander Johnson

Artificial Intelligence (AI) and Machine Learning (ML) are reshaping software-intensive systems across aerospace, defense, and automotive – yet traditional cost estimation methods fail to capture the unique nature of AI development. Activities such as data acquisition, labeling, annotation, experimentation, and continuous MLOps introduce new effort drivers absent from code-centric models. This paper presents a structured framework defining measurable size proxies and cost drivers for these AI-specific activities, establishing a foundation for credible estimation across the AI lifecycle. It also describes an ongoing research effort to evolve this framework into a cost model focused explicitly on AI development and sustainment. By combining analytical rigor with empirical learning, the work advances the profession’s ability to estimate cost and effort for intelligent, adaptive systems – providing estimators with practical tools to plan and justify investments in next-generation AI capabilities with greater confidence and consistency.

Keywords: Data-Driven, IT, Parametrics, Artificial Intelligence (AI), Machine Learning (ML)


Benefits, Barriers, and Best Practices for AI Cost Estimation: A Survey for OEMs
Artificial Intelligence (AI09)
Karen Mourikas
Brent M. Johnstone
Kevin Hewitt

Title/summary not approved for public release

Keywords: Budgeting, Data-Driven, Decision Analysis, Early Cost, IT, Methods, Modeling, Project Controls, Software, Artificial Intelligence, Large Language Models


Real-World Applications of Explainable AI in Cost Estimation
Artificial Intelligence (AI10)
Charles Orlando

Defense programs are shifting from pilots to operational use of explainable AI in estimation. This paper presents Estimation-Centric Artificial Intelligence (ECAI), an agentic framework that joins automated prompt engineering, contextual agent selection, retrieval-augmented generation, and human-in-the-loop review to accelerate proposal intake, hardware work-breakdown structuring, manufacturing cost modeling, and software scope estimation. ECAI links every output to its sources and reviewer decisions, producing auditable records that map to widely adopted governance principles for traceability and human oversight. Four case studies—RFI/RFP compliance mapping, hardware WBS generation, manufacturing Design-to-Cost and Design for Manufacturing, and software sizing across lifecycles—show gains in cycle time, first-pass quality, and trace depth. A proposed validation framework outlines how transparency and performance may coexist in defense-grade settings.

Keywords: Budgeting, Cost Management, Early Cost, International, Manufacturing, Uncertainty, Artificial Intelligence


AI-Driven Digital Threads for Cost Prediction
Artificial Intelligence (AI11)
Tom Shanahan

Digital engineering promises continuous traceability, yet rapid requirement changes often outpace estimating processes, leaving cost variance visible but its causes obscured. The requirements-to-cost segment of the digital thread—critical for understanding change impact—remains largely absent from current estimating systems. This paper presents an AI-enabled framework that reconnects this missing link by integrating requirement evolution, versioning, and DIFF analytics to forecast cost growth. The approach semantically maps requirements to work packages using natural language processing with human validation, versions both requirements and estimates to capture temporal change, and attributes cost deltas to their originating requirement modifications. Explainable machine learning is then applied to identify predictive patterns of cost growth and key change drivers. Using historical baselines and actuals, results demonstrate improved predictive accuracy and earlier visibility into high-impact requirement changes. The framework is tool-agnostic, integrates with existing estimating workflows, and enhances transparency, traceability, and proactive cost control.

Keywords: Cost Management, Data Collection, Data-Driven, Decision Analysis, IPM, Life Cycle, Modeling, Performance Management, Project Controls, Digital Engineering, Digital Thread, Requirements Analysis, NLP, Machine Learning, Explainable AI


AI in Parametric Estimating: Enhancing Judgment with Data
Artificial Intelligence (AI12)
Gustavo Vinueza

Parametric estimating has long relied on regression models and expert judgment to identify and calibrate cost drivers. However, traditional approaches often struggle to capture nonlinear relationships or quantify uncertainty in parameter selection. This paper presents an AI-augmented framework in which machine learning algorithms and generative AI techniques assist estimators in discovering hidden cost drivers, testing variable transformations, and assessing model sensitivity. The proposed approach combines expert oversight with data-driven pattern detection to enhance both transparency and credibility. It emphasizes practical methods for integrating AI into cost estimating workflows while maintaining traceability, governance, and validation in alignment with ICEAA’s best-practice standards. The presentation concludes by outlining how estimators can adopt AI-supported tools to complement, rather than replace, professional judgment—building a bridge between traditional parametric models and adaptive, data-driven cost analysis.
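
As a small illustration of driver discovery (not the paper's actual framework), a random forest's feature importances can flag candidate cost drivers for expert review; the sketch below uses synthetic data and invented driver names.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 200
    # Synthetic candidate drivers; "power" secretly dominates the cost-generating process.
    X = np.column_stack([rng.uniform(1, 10, n),       # weight
                         rng.uniform(10, 500, n),     # power
                         rng.uniform(0, 1, n)])       # irrelevant noise driver
    cost = 5 * X[:, 1] ** 0.7 + rng.normal(0, 10, n)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cost)
    for name, imp in zip(["weight", "power", "noise"], rf.feature_importances_):
        print(f"{name}: importance {imp:.2f}")  # the estimator judges whether flagged drivers make sense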

Keywords: Data-Driven, Decision Analysis, Monte Carlo


 

Analytical Methods Track

Weighted Residuals in CER Development Using Escalation Indices
Analytical Methods (AM01)
Timothy P. Anderson

Typical CERs contain older data points (1980s era) and newer data points (2020s era), along with everything in between, with older data points carrying just as much weight in the CER development process as newer data points. But manufacturing techniques, acquisition practices, and systems engineering processes have changed over time, which could impact cost. Moreover, the older the data point, the more it is subject to adjustment for escalation in the normalization process, so older actual costs are less accurate than newer actual costs because escalation adjustment factors are themselves estimated values. Therefore, older data should be less impactful in the CER development process than newer data. This paper describes a robust weighting process, based on escalation indices, that enables cost analysts to use all relevant data points in the CER development process while simultaneously de-emphasizing those with less importance, and emphasizing those with greater importance.
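
A minimal sketch of the general idea (the paper's actual weighting scheme may differ): fit a log-linear CER by weighted least squares, down-weighting each point in proportion to the escalation adjustment it received during normalization. The data and weight rule are illustrative assumptions.

    import numpy as np
    import statsmodels.api as sm

    # Illustrative normalized costs ($M), a technical driver, and the cumulative
    # escalation factor each point received (older points were adjusted more).
    driver = np.array([100, 150, 220, 300, 410])
    cost = np.array([12.0, 16.5, 22.0, 28.0, 36.5])
    escalation_factor = np.array([2.10, 1.75, 1.40, 1.15, 1.02])

    # Assumed weighting rule: heavily adjusted (older) points carry less weight.
    w = 1.0 / escalation_factor**2

    X = sm.add_constant(np.log(driver))
    fit = sm.WLS(np.log(cost), X, weights=w).fit()
    print(fit.params)  # intercept = ln(a), slope = b in cost = a * driver**b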

Keywords: Bias, Data Collection, Methods, Modeling, Parametrics, Regression, Uncertainty, Weighted, Residuals, Escalation


Decision at the Speed of Mission: A 2-Track Agile Framework for Cost Estimation
Analytical Methods (AM02)
Dr. Scott Willette
Dr. Ekaterina Brancato

Traditional acquisition cost estimation uses a waterfall path: slow queues and reviews. We built a market-like alternative using agile methods, pricing work by information and speed. The Sprint lane time-boxes short tasks into user stories with ready/done criteria: data pedigree, traceable estimates, and acceptance checks. Every three weeks we deliver an MVCE that evolves through risk dials and what-if trades, giving leaders choices on scope, risk, and time. The Kanban lane handles deeper work: Life-Cycle Cost Estimates and method development, managed with WIP limits and flow metrics to maintain rigor. We combine cost estimation, agile methods, and economic thinking in one framework and tie decisions to sprint cadence. Across 17 sprints the team delivered dashboards, tools, and deep dives and absorbed unplanned demand. The Kanban lane cleared 1,300 points with fewer task switches and higher rigor, enabled by a cost backlog, MVCE gates, traceable tickets, and dual-hatted leads aligned with contracting needs.

Keywords: Agile, Data-Driven, Early Cost, Methods, Kanban, Sprints


Stuck in Drydock: Why Shipyard Overhead Doesn’t Turn with the Tide
Analytical Methods (AM03)
Dr. Brian J. Flynn

The U.S. shipbuilding industrial base struggles to ramp up production capacity in response to rising fleet‐size requirements, aging platforms, and maintenance backlogs. Shipyards face lengthy design times, shortages in skilled trades, and large fixed investments, creating inertia in overhead structures. This paper examines how quickly shipyards adjust their overhead accounts when production signals change, providing new empirical evidence on the responsiveness of the industrial base. An econometric model traces how overhead costs evolve toward a desired level determined by expected workload, backlog, and contract awards. The dynamic explains why overhead rates stay elevated even as production ramps up, and why cost shocks propagate across programs for years. Understanding the adjustment lags is essential to better predict maintenance schedules, to right-size workloads, and to better target investments. A sharper understanding of shipyard overhead costs, then, is not just an accounting exercise – it’s a readiness multiplier.

Keywords: Cost Management, Data-Driven, Infrastructure, Methods, Statistics, Overhead Costs, Shipyard Industrial Base


To the BATCAIV! A Practical Guide for Building to Cost Targets
Analytical Methods (AM04)
Logan Hartley-Sanguinett
Zachary Matheson
Payton Deeney

In the traditional cost estimating paradigm, estimators have fixed project scope with which to produce cost estimates. However, in today’s agile and adaptive acquisition environment, estimators must now invert the traditional paradigm by utilizing a cost/schedule-as-independent-variable (CAIV/SAIV) model where definitive scope may still be undetermined. CAIV models can range from relatively rudimentary parametric transformations to rigorous regression analysis. By understanding the requisite model needs and properly utilizing CAIV principles, analysts are able to inform requirements, provide more realistic cost objectives, and participate in the cost management process earlier. This paper will discuss principles and best practices, weigh the tradeoffs of -AIV models, and showcase the development of the NNSA’s Business Center Acquisition Toolkit Cost-As-Independent-Variable (BATCAIV) model and how it was used to inform leadership in actual operations.
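
At its simplest, a CAIV transformation inverts a parametric CER: fix the cost target and solve for the scope it can buy. The coefficients below are invented for illustration and are not the BATCAIV model.

    # Power-law CER: cost = a * scope**b (coefficients illustrative; cost in $M).
    a, b = 2.5, 0.8
    cost_target = 40.0  # leadership's affordability cap, $M

    scope_affordable = (cost_target / a) ** (1.0 / b)
    print(f"Scope purchasable at ${cost_target:.0f}M: {scope_affordable:.1f} units")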

Keywords: Budgeting, Cost Management, Data-Driven, Government, Methods, Parametrics, Regression, Statistics


Small Data, Big Problems: Can Constraints and Penalties Save Regression?
Analytical Methods (AM05)
Kevin Joy
Max Watstein

Small datasets with intercorrelation pose serious challenges to the stability of coefficients generated by Ordinary Least Squares (OLS) regression. A motivating example in cost estimating is Cost Improvement Curve with Rate Effect analysis, where datasets are typically small, the lot mid-point (Learning) is correlated to lot size (Rate) as production ramps up, and slopes are expected to be less than or equal to 100%. Lasso and Ridge regularization methods address multicollinearity by penalizing coefficients. Separately, constrained optimization methods can impose explicit restrictions on coefficient values when prior knowledge about their behavior is known—such as bounding slopes within a known range. This paper investigates the combined effects of penalized regularization methods (Lasso and Ridge) and constrained optimization methods. We explore how to assess model stability and goodness-of-fit using likelihood-free diagnostic techniques suited to optimization-based regressions, such as cross-validation for generalization error.
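
A minimal sketch of the combined technique, assuming a log-space learning-and-rate model with a Ridge-style penalty on the slope coefficients and bounds keeping slopes at or below 100%; the data and penalty strength are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    # ln(cost) = ln(T1) + b*ln(midpoint) + r*ln(lot size); slopes are 2**b and 2**r.
    mid = np.log(np.array([5, 14, 28, 45, 70]))
    rate = np.log(np.array([10, 18, 20, 25, 35]))
    y = np.log(np.array([95, 80, 72, 68, 62]))
    X = np.column_stack([np.ones_like(y), mid, rate])
    lam = 0.1  # Ridge penalty strength (assumed)

    def loss(beta):
        resid = y - X @ beta
        return resid @ resid + lam * (beta[1]**2 + beta[2]**2)  # penalize slopes only

    # Bound exponents in [-1, 0], i.e., slopes between 50% and 100%.
    res = minimize(loss, x0=np.zeros(3), bounds=[(None, None), (-1, 0), (-1, 0)])
    print("learning slope:", 2**res.x[1], "rate slope:", 2**res.x[2])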

Keywords: Learning Curves, Methods, Modeling, Parametrics, Regression, Machine Learning, Bootstrapping, Optimization


Forget Tea Leaves, Accelerate Resource Planning with Time Series Models
Analytical Methods (AM06)
Alan Karickhoff
Remmie Arnold
Justin Alcorta

In budget-constrained environments, federal organizations must budget efficiently and effectively to minimize waste and maximize mission delivery. This is easier said than done: these organizations routinely face moving pieces and interdependencies. Even so, federal organizations can often leverage historical budget and spending trends to streamline how they plan resource needs for operations programs, which are rarely supported by a formal estimate and lack fixed start and end dates. This paper demonstrates the use of time series modeling techniques to forecast future operational costs and benchmark funding requests to Congress. Operations activities present a unique opportunity to leverage historical cost data because they generally exhibit predictable cost patterns both backwards and forwards. These conditions, combined with robust data collection and management processes, enable cost forecasts to be partially automated, giving agencies an efficient and data-driven basis to level-set their budget requests.
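
As one simple instance of the technique, a Holt linear-trend model can benchmark the next few budget years from a short obligation history; the series below is invented, not agency data.

    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Illustrative annual operations obligations ($M).
    history = pd.Series(
        [210, 218, 225, 240, 236, 251, 263, 270],
        index=pd.date_range("2018", periods=8, freq="YS"),
    )

    fit = ExponentialSmoothing(history, trend="add").fit()
    print(fit.forecast(3))  # benchmark for the next three budget years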

Keywords: Budgeting, Data Collection, Data-Driven, Modeling, Operations


Availability vs Affordability: Relating RAM Metrics to Cost
Analytical Methods (AM07)
Stephen Koellner

Availability, maintainability, and reliability are the main tenets of the standard OSD RAM-C report. The RAM-C report documents the sustainment and support metrics of a fielded system throughout its lifecycle and enables tracking of performance against the Key Performance Parameters (KPPs) and Key System Attributes (KSAs) documented in a program’s requirements documentation. How are these metrics determined in the first place, and how do they relate to the overall affordability of a system? The methodology behind these optimizations is rudimentary but informative for both cost analysts and systems engineers conducting trade-off analysis during early life-cycle conceptual design. This presentation will illustrate these interactions through a time-stepped simulation of system use, maintenance, failures, repairs, and associated costs.

Keywords: Cost/Benefit Analysis, Decision Analysis, Early Cost, Life Cycle, Modeling, Uncertainty, Variables, Systems Engineering


Spin Rates and Spend Rates – Cost Modeling with PRWG & GSA Advantage
Analytical Methods (AM08)
Carson Lo
Moses Kim
Frederick Hargrove

Parametric cost estimating relationships (CERs) are often limited by the availability of robust historical data and knowledge of cost drivers. This study leveraged data scraping tools and OpenAI to extract technical specifications from publicly available data to supplement existing data in the NNSA’s database of programmatic equipment. Ensuring trust in the cleaned and normalized dataset involved a meticulous validation process, including pattern-matching, prompt engineering, and data mining. This enabled the development of regression-based CERs that predict the purchase price of commonly procured equipment types, allowing for more accurate and uncertainty-informed budgeting of future expenditures while reducing the effort required during data calls to supply technical specifications to the database. The data will not only help program offices understand the most critical equipment recapitalization needs but also support cost estimators performing cost management of this equipment.

Keywords: Data Collection, Methods, Regression


Where It All Breaks Down: How Cost Growth Exists Within Specific WBS Elements
Analytical Methods (AM09)
Zachary Matheson
Vikram Basude

Capital projects executed by the National Nuclear Security Administration (NNSA) often experience drastic cost growth across all phases of a project despite extensive planning. As part of a detailed exploration of this phenomenon, the NNSA Office of Programming, Analysis, and Evaluation (PA&E) identified cost growth within specific Work Breakdown Structure (WBS) elements. To achieve this, PA&E mapped project Earned Value Management (EVM) data to a standardized WBS using a hierarchical classification machine learning model. The team verified and validated the model outputs, tracking cost growth across numerous WBS elements of NNSA projects over multiple years of post-baseline execution. Study findings confirm that EVM data enables first-of-its-kind analytics into which lower-level cost elements tend to experience the most growth at the NNSA. PA&E’s detailed WBS cost growth framework helps estimators quantify cost growth risks early and identify mitigation strategies, ultimately leading to more informed mission execution.
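
A toy, flattened version of such a classifier (the NNSA model's details are not described here) might map control-account names to top-level standard WBS codes; a hierarchical variant would chain one classifier per WBS level.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented EVM control-account names labeled with illustrative WBS codes.
    names = ["site prep and excavation", "hvac ductwork install",
             "project management office", "electrical switchgear",
             "concrete foundations", "commissioning and startup"]
    wbs = ["1.1", "1.3", "1.0", "1.3", "1.1", "1.4"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(names, wbs)
    print(clf.predict(["underground utilities excavation"]))  # likely "1.1"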

Keywords: Cost Management, Cost/Benefit Analysis, Data-Driven, Decision Analysis, Early Cost, EVM/Function Points, Government, Infrastructure, Life Cycle, Methods, Microsoft Excel, Modeling, Monte Carlo, Parametrics, Performance Management, Program Management, Risk, Scheduling, Statistics, Variables


Estimating at Different Levels of Detail with Machine Learning
Analytical Methods (AM11)
Barış Özkaya

This paper presents findings from my Ph.D. research. Three cost estimating approaches are evaluated according to their level of detail. The aim is to estimate the cost of a component-level end item that is an assembly of different detail parts. In the first approach, detail parts are estimated using separate formulations for each manufacturing process. In the second approach, detail parts are again estimated, but with a single formulation that includes the manufacturing processes as parameters. In the third approach, estimation is performed at the end-item level. Artificial neural networks, random forests, and linear regression are applied in each approach, and their estimating performance is compared. Because the three approaches differ in level of detail, they also differ in the time needed to produce an estimate and in accuracy.

Keywords: Manufacturing, Regression, estimating, machine learning


Title/summary not approved for public release
Analytical Methods (AM12)
Bopha Seng

Title/summary not approved for public release

Keywords: Modeling, Monte Carlo, Parametrics, Regression, Risk, Statistics, Uncertainty


Integrating Carbon Accounting into Cost Estimation for IT and Hardware Systems
Analytical Methods (AM13)
Dr. Vivian Tang
Gurney Thompson

As industries advance toward sustainability, incorporating environmental performance into cost estimation is increasingly important. This study presents a framework for evaluating carbon emissions and costs across infrastructure services and hardware systems. The method estimates greenhouse gas (GHG) emissions for IT systems, with particular attention to the growing energy demands of AI-driven computing, and for hardware including components, COTS items, and complete hardware lifecycles. The assessment covers production, manufacturing, transportation, and operation, with plans to expand to a full life cycle assessment (LCA). Developed in accordance with ISO 14040/44 and ISO 14067 standards, the framework consistently integrates carbon data into engineering and cost analyses. It identifies emission hotspots, assesses design and sourcing alternatives, and aligns cost efficiency with sustainability goals. By linking carbon performance to financial results, the approach supports transparent, climate-aware cost management across sectors.
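
At its core, operational-phase carbon accounting multiplies energy use by a grid emission factor; the figures below are placeholders that sketch the arithmetic, not the framework's actual factors.

    # Illustrative operational GHG estimate for an IT system.
    servers = 40
    kw_per_server = 0.45           # assumed average draw, cooling overhead included
    hours_per_year = 8760
    grid_kg_co2e_per_kwh = 0.38    # assumed regional grid intensity

    annual_kwh = servers * kw_per_server * hours_per_year
    annual_tco2e = annual_kwh * grid_kg_co2e_per_kwh / 1000
    print(f"{annual_kwh:,.0f} kWh/yr ≈ {annual_tco2e:,.1f} tCO2e/yr")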

Keywords: Cost/Benefit Analysis, Data Collection, Infrastructure, Life Cycle, Manufacturing, AI


Evaluating Cost–Performance Efficiency of Fighter Jets Using DEA
Analytical Methods (AM14)
Yusuf Ozan Üzgün

This study aims to evaluate the cost–performance efficiency of modern fighter jet programs using Data Envelopment Analysis (DEA). While fighter aircraft are among the most complex and capital-intensive defense systems, their development and procurement costs often diverge significantly across countries and models. By applying DEA, this research intends to identify the relative efficiency of selected fighter jets by treating cost measures such as development cost, production cost, and unit price as inputs, and key performance indicators—such as thrust, range, payload capacity, and speed—as outputs. The analysis seeks to establish a benchmarking framework that reveals whether certain aircraft achieve superior operational capabilities at proportionate cost levels. Furthermore, the study aspires to infer optimal or “efficient” cost levels based on the frontier constructed from DEA results. The findings are expected to contribute to defense economics and procurement decision-making by introducing a quantitative approach to cost and performance benchmarking in military aviation.
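
An input-oriented CCR DEA model reduces to one linear program per aircraft (DMU): maximize weighted outputs subject to weighted inputs equal to one and no DMU scoring above unity. The sketch below uses toy data purely to show the mechanics.

    import numpy as np
    from scipy.optimize import linprog

    # Toy data: rows = aircraft; inputs = [unit price $M, dev cost $B];
    # outputs = [range nm, payload t]. All figures invented.
    X = np.array([[80, 40], [110, 55], [65, 30]], dtype=float)
    Y = np.array([[1200, 8], [1500, 11], [900, 6]], dtype=float)

    def ccr_efficiency(o):
        n_in, n_out = X.shape[1], Y.shape[1]
        c = np.concatenate([np.zeros(n_in), -Y[o]])           # maximize u'y_o
        A_ub = np.hstack([-X, Y])                             # u'y_j - v'x_j <= 0
        A_eq = np.concatenate([X[o], np.zeros(n_out)])[None]  # v'x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                      A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        return -res.fun  # efficiency score in (0, 1]

    for o in range(len(X)):
        print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")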

Keywords: Cost Management, Cost/Benefit Analysis, Data Envelopment Analysis, Fighter Jet Cost Efficiency, Defense Economics, Cost–Performance Benchmarking, Aircraft Development and Procurement


 

Data Science Track

Say What?! Prompting Large Language Models for Smarter Cost Analysis
Data Science (DS01)
Anil Divvela

As Large Language Models (LLMs) such as ChatGPT transform analytical workflows, cost estimators can leverage prompting to enhance productivity and insight. This presentation introduces the fundamentals of prompting and demonstrates how structured, well-framed questions can yield defensible, repeatable outputs useful in cost analysis. Core prompting concepts will be outlined, followed by practical use cases tied to real-world cost analysis activities. The session will also cover methods for documenting and validating AI-assisted work to maintain analytical rigor. This session is intended for analysts seeking to responsibly integrate emerging tools into established cost estimating processes to improve quality, transparency, and efficiency.

Keywords: Data Collection, Early Cost, Methods, Artificial Intelligence, Prompting


EVM From an Industrial Engineering Lens: Factory Operations Modeling
Data Science (DS02)
Dan Hearn

A Prime Contractor’s ability to meet the contractual delivery schedule is essential to ensuring the warfighter is properly equipped. What can the Government do when a contractor has fallen behind schedule and promises of future deliveries are consistently missing the target? How can a program leverage existing information to determine a more realistic outlook for future capacity? Doing so allows both the contractor and the government to be better informed of what capacity will be available at a given time and to properly assess decisions that may affect the supply chain. This presentation will cover our effort to model the Prime’s factory operations, evaluate throughput at set steps in the assembly process, and predict deliveries based on historical performance using tools like Python and Power BI.

Keywords: Data-Driven, DOD/MOD, Government, Modeling, Program Management, Data Science, Programming, Python, PowerBI


Turning Jumbled Text into Useful Information
Data Science (DS03)
Rivers Jenkins

Cost analysts often face the challenge of working with data that’s messy, inconsistent, or buried in text fields. How do you quickly find, clean, or reformat what you need without hours of manual effort? Enter regular expressions (regex) — a simple yet powerful tool for identifying patterns in text. This session will introduce regex in a clear, beginner-friendly way, with a focus on its practical value for cost estimators and analysts. We’ll cover what regex is, why it matters, and how it can make common tasks—such as cleaning raw inputs, extracting key values, or searching large datasets—much faster and more reliable. No programming background is required; examples will use familiar tools like Microsoft Excel and other accessible environments. If you’ve ever wished there were a faster way to wrangle messy data, this session will show you how regex can turn chaos into clarity.
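
As a taste of the session's material, one short pattern can pull dollar figures out of free text and another can pull fiscal years; the sample string is invented.

    import re

    raw = "Pump assembly FY24 cost: $1,234,567.89 (vendor quote); FY25: $1.3M est."

    print(re.findall(r"\$[\d,]+(?:\.\d+)?", raw))  # ['$1,234,567.89', '$1.3']
    print(re.findall(r"FY\d{2}", raw))             # ['FY24', 'FY25']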

Keywords: Data Collection, Data-Driven, Microsoft Excel


A Validation a Day Keeps the Errors Away: Refining Data with Novel Health Checks
Data Science (DS04)
Emily Johnson
Daniel Puentes

Estimating efficacy is contingent on high quality data, ideally from official databases that have been validated with data health checks. But what happens when these checks fail to holistically diagnose inconsistencies, leaving estimators to deal with the symptoms of unreliable data? The Department of Energy’s real property database of record has many such checks but fails to pinpoint the root cause of its data’s inconsistencies, raising questions surrounding its reliability. Instead of disregarding these inconsistencies or avoiding the database altogether, this paper presents a centralized error tracking dashboard that locates these issues by identifying both unreliable and, more critically, reliable data. This allows analysts to filter out unpredictable parameters in the data, improving data quality and reliability of estimates. This paper will then provide recommendations for conducting similar external validation on any database, and lessons learned for database owners to supplement data health checks and improve database utility.

Keywords: Data Collection, Data-Driven, Methods, Modeling


The Life and Death of a Milestone
Data Science (DS05)
Obai Kamara

Traditional Earned Value Management (EVM) and scheduling analytics rely on deterministic indicators such as cost and schedule variances, performance indices, and trend-based forecasts to assess project health. However, these metrics often fail to quantify the probability and timing of milestone delays, which drive downstream cost and schedule risk. This research applies survival analysis, a statistical framework commonly used in reliability and biomedical studies, to model the expected duration until such late events occur within EVM-governed programs. Using historical schedule and EVM data, time-to-event modeling techniques such as Kaplan–Meier estimators and Cox proportional hazards models estimate the likelihood that a project milestone “survives” without delay over time. The results show how survival curves and hazard ratios can serve as early-warning indicators, complementing traditional EVM metrics with probabilistic insight into when and under what conditions key schedule risks emerge.
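
A minimal sketch of the workflow using the lifelines library, with invented milestone data: a Kaplan-Meier fit for time-to-slip and a Cox model testing whether cost performance (CPI) shifts the hazard.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Toy data: days a milestone survived without slipping; slipped=1 marks a delay event.
    df = pd.DataFrame({
        "days": [120, 200, 340, 90, 400, 260],
        "slipped": [1, 1, 0, 1, 0, 1],
        "cpi": [0.92, 0.95, 1.05, 0.88, 1.10, 0.97],  # covariate at baseline
    })

    kmf = KaplanMeierFitter().fit(df["days"], event_observed=df["slipped"])
    print(kmf.median_survival_time_)  # typical time until a slip

    cph = CoxPHFitter().fit(df, duration_col="days", event_col="slipped")
    cph.print_summary()  # hazard ratio on CPI: does poor cost performance raise slip risk?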

Keywords: Data-Driven, Government, Methods, Performance Management, Project Controls, Risk, Statistics, Data Science, Survival Analysis, Forecasting, Schedule, Applied Statistics


Cost of Being a Superhero
Data Science (DS06)
Matthew McGlone
Sebastian Rodriguez Traconis
Brooke Bires

This project streamlines cost estimation by integrating the Power Platform tools of Power Query, Power BI, and Power Automate to optimize data modeling, visualization, and reporting. Using a standardized Excel format, Power Query transforms and appends cost data for consistent analysis. Power BI then generates dynamic, customizable visualizations that reflect real-time changes based on user-selected criteria. Finally, Power Automate delivers automated updates and alerts, enabling faster decision-making and continuous insights without requiring advanced coding skills. To make the experience engaging, the team introduces a playful twist: users build a superhero with selectable traits, gear, and locations that each influence the cost model. As selections change, visualizations and total costs update instantly, offering a fun yet functional way to explore budgeting scenarios. This approach not only enhances accuracy and transparency but also encourages user interaction and creativity, making cost estimation more accessible and enjoyable.

Keywords: Cost/Benefit Analysis, Data-Driven, Modeling, Data Analysis


How Computational Law Predicts the Future of Cost Estimating
Data Science (DS07)
Julia Peters
Taylor Fountain

Since the release of open-source generative AI in 2022, several ICEAA presentations have documented the impact of this technology on the field of cost estimating. Tools like ChatGPT have been utilized to generate code and support data analysis; however, it remains unclear how the advancement of computational capabilities could reduce cost estimators’ reliance on human programmatic expertise. The objective of this paper is to demonstrate how current innovations in the field of computational law may offer a window into the future of the cost estimating industry. We examine projects at CodeX, the Stanford Center for Legal Informatics, that exemplify how researchers increase the efficiency of legal processes by translating contract requirements into code. Cost estimators can utilize this newfound understanding of legal mechanization to anticipate forthcoming industry changes, as researchers strive to make computable contracts the de facto standard.

Keywords: Communication, Methods, Modeling, Software, AI, Natural Language Processing


The Digital Keel: Data Visibility for Fleet Readiness
Data Science (DS08)
John Rosak
Jordan Smith

The U.S. Navy faces significant challenges causing persistent deferrals to ship delivery and maintenance, exacerbated by the inherent complexities of maintaining an aging fleet while simultaneously building new capabilities. The Navy has set strategic goals to achieve “readiness for sustained high-end joint and combined combat by 2027” and 80% combat surge ready posture for ships, submarines, and aircraft by the same year. The Navy will not achieve these goals without significant changes to their approach. The immediate and critical challenge lies within the Navy’s ability to maintain and sustain the existing fleet to meet readiness and operational needs. Grounded in research and coupled with recent experience with Material Reclamation process improvement and Waterfront Support efforts, this paper identifies gaps in the management of Navy inventory data that directly impacts Fleet Readiness. Our approach improves data visibility across stakeholders, enabling the Navy to effectively deliver parts and improve operational readiness.

Keywords: Data Collection, Data-Driven, DOD/MOD, Government, Infrastructure, Operations, Program Management, Scheduling, Readiness


How Defense Programs Behave: Deep Learning on SARs
Data Science (DS09)
Peter Shmorhun
John Maddrey

Selected Acquisition Reports (SARs) contain detailed but unstructured information on Defense Acquisition program performance. Traditional methods manually extract a handful of metrics, discarding almost all available information. This project uses deep learning with PyTorch, including Long Short-Term Memory (LSTM) and Transformer models to leverage all SAR information, including narratives, schedule, funding, and performance data to predict cost growth, risk factors, and schedule delays. Our research analyzed 7,000 SARs across 301 programs spanning aircraft, missiles, ships, ground vehicles, and electronics. We identify critical factors driving prediction accuracy: program scale, data availability, and category-specific cost dynamics. Results reveal systematic patterns in acquisition program behavior, enabling earlier identification of at-risk programs. We also discuss how the project redefined our conceptions of data processing and modeling, along with potential applications of deep learning for cost estimators.
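
A stripped-down sequence classifier (not the project's actual architecture) illustrates the shape of the approach: each program's annual reports become a feature sequence, and the network emits a cost-growth logit.

    import torch
    import torch.nn as nn

    class SarLSTM(nn.Module):
        def __init__(self, n_features=16, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                    # x: (programs, years, features)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1]).squeeze(-1)  # logit of cost-growth risk

    model = SarLSTM()
    fake_batch = torch.randn(4, 10, 16)          # 4 programs x 10 reports x 16 features
    print(torch.sigmoid(model(fake_batch)))      # predicted probabilities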

Keywords: Budgeting, Data-Driven, Modeling, Program Management, Deep Learning, PyTorch, Machine Learning, Neural Networks


Modular Cost-Modelling Framework: Bridging Excel-Based Estimating & Data Science
Data Science (DS10)
Robert Smale

Excel-based cost models offer accessibility but lack scalability, transparency, and assurance. As estimating teams modernise, they need frameworks that retain core cost-engineering logic while embracing analytical rigour. This paper presents a modular Python-based framework that replicates and enhances Excel workflows. Key components—data exploration, preparation, modelling, validation, and visualisation—are implemented as standalone scripts, enabling flexible reuse and review. The framework integrates assurance practices like version control, branching, and peer review to support collaboration and governance. Lightweight machine-learning techniques automate driver identification and scenario testing, with results benchmarked against legacy Excel models to assess efficiency and auditability. The paper offers a blueprint for sustainable, incremental adoption of data-driven cost engineering, showing how modularisation and transparent reporting can elevate technical quality and organisational confidence.

Keywords: Cost Management, Methods, Microsoft Excel, Python, Modular Workflows


 

IT & Cloud Track

Lean, Mean, Natural Language Processing Machines
IT & Cloud (IT01)
Kyle Ferris
Eric Hagee

Over the past few years, commercially available Large Language Models (LLMs) have demonstrated value in automating tedious tasks ranging from content summarization to code generation. Although the business case for LLMs is somewhat established, considerable challenges limit workplace implementation across small- to mid-size enterprises, including computational constraints and the need for containerized security. Enter Small Language Models (SLMs). This presentation will provide an overview of SLMs, comparing them to their LLM counterparts, while addressing the key benefits of on-premise deployment, faster processing, parameter fine-tuning, and reduced computation and energy consumption requirements that make them accessible to small- to mid-size enterprises. Furthermore, this presentation will outline practical SLM use cases for the cost community, explaining how cost firms can integrate and leverage SLMs to automate tasks associated with custom tool development, technical documentation, and Retrieval Augmented Generation (RAG).

Keywords: IT, Software, AI/ML, NLP, Data Science


Quantum Computing for Real Time Defense Scheduling and Simulations
IT & Cloud (IT02)
Claudia McCarthy
Gabriella Magasic

Quantum technology for real-time defense scheduling and simulations can be broken down into three main categories: sensing, communication, and computing. With its ability to deliver advanced weapon systems, faster simulations, and secure communication, quantum computing can execute real-time threat prediction within a major cybersecurity threat landscape. Many experts argue that quantum techniques will be required to solve defense problems at larger scales. Quantum applications range from homeland security responses to defense and space programs to logistics. This wide range of applications calls for a diversified utilization of schedule best-practice procedures to properly deliver dynamic models. This paper will discuss how to utilize real-time schedule situations, logistics, management, and programs to better support the defense industry.

Keywords: Government, Program Management, Scheduling


Episode II: Return to Cloud City
IT & Cloud (IT03)
Alex Smith
Kyle Davis

A long time ago, in a galaxy not so very far away, stove-piped Information Technology (IT) Systems were replaced by Data Centers and Cloud Computing technology. We learned the appropriate Cost Estimating Relationships (CERs), Cloud Computing Models (CCMs), and vendor-specific pricing models to implement to create credible cost estimates. So why are technical inputs still so difficult to get for our cloud infrastructure estimates? And what more have we learned since our first visit to Cloud City? This session will explore the technical parameters of cloud computing – stepping through what cloud cores and vCPUs are, how they’re calculated, and how we can cross-check them against prior on-premises requirements. It will additionally cover what we’ve learned on our cloud journey – cost drivers, various Instance and VM types, latest pricing models for AWS and Azure inclusive of Government regions, and more. See you in the clouds.
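
The core cross-check is simple arithmetic: one vCPU generally corresponds to one hardware thread, so a hyper-threaded core maps to roughly two vCPUs before right-sizing. The figures below are illustrative.

    on_prem_cores = 24
    threads_per_core = 2         # hyper-threading enabled
    target_utilization = 0.7     # assumed right-sizing headroom

    vcpus_raw = on_prem_cores * threads_per_core
    vcpus_rightsized = round(vcpus_raw * target_utilization)
    print(vcpus_raw, vcpus_rightsized)  # 48 raw, ~34 right-sized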

Keywords: Infrastructure, IT, Cloud Technical Parameters


The Value of Shared Data in Cost Estimating: Collaboration, Reuse & Insight
IT & Cloud (IT04)
Courtney Smith

This session explores how organizations can transform their cost estimating capability through shared, centralized databases. By connecting cost models, project data, and estimating inputs across teams, enterprises unlock opportunities for collaboration, consistency, and reuse that dramatically improve estimating speed and accuracy. Participants will see how shared data infrastructures support better decision-making—enabling benchmarking, AI-assisted insights, and standardized practices across departments. Using real-world examples, the session will illustrate how a collaborative database approach reduces duplication, improves transparency, and strengthens the credibility of estimates. Attendees will leave with a clear understanding of how shared data strategies can bridge silos between engineering, finance, and program management to accelerate enterprise-wide value.

Keywords: Keywords not provided


Advancing Cost Estimation for IT Planning and Deployment
IT & Cloud (IT05)
Francis Gurney Thompson III

Estimating the cost and effort of IT planning and deployment projects remains a complex challenge across government and industry. This presentation explores approaches for simplifying and unifying cost estimation models within a modular framework that better captures how IT systems are planned, built, modernized, deployed and operated. These projects involve using cost models for architectural design, IT equipment configuration and network engineering, software development, system-wide integration testing, deployment, and system validation. Research topics include improving deployment modeling capability, refining configuration and network engineering models, incorporating cloud services, and modeling system validation efforts. The goal is to establish a foundation for more coherent, repeatable estimating of complex IT planning and deployment efforts.

Keywords: Early Cost, Infrastructure, IT, Modeling, Parametrics


 

Management, EVM & Risk Track

Tripped and Wired: A Recipe for Comprehensive Risk Assessment
Management, EVM & Risk (ME01)
Peter Braxton
Sean Wells
Christina DeAngelo

The “Iron Triangle” relates cost, schedule, and performance to help managers identify the source of and mitigations for program risks. However, the rust begins to form when event-focused risk management (RM) tangles with uncertainty-focused risk analysis (RA). Fundamentally, these divergent paradigms inhibit communication between RM and RA practitioners, leaving program managers with one hand tied behind their back. This paper addresses these crossed wires by introducing a novel approach that leverages earned-value-style “tripwires” and a bit of calculus to translate between the discrete and the continuous. For RA, this bridges the gap to ensure models capture RM inputs and enables effective application of Management Reserve. For RM, this allows monitoring and mitigation of continuous factors that would otherwise confound the risk register. In an environment hyper-focused on early detection and cost management, this approach gives cost estimators the relevant tools to drive effective decision making and program success.

Keywords: Communication, IPM, Methods, Modeling, Risk, Uncertainty, Risk Management, Risk Analysis


Bringing Narrative to the Noise: Detailed Storytelling Through IPMDAR Data
Management, EVM & Risk (ME02)
Jacob Cronin
Aaron Everly
Neha Gunapati

Accurate and credible Estimate at Completion (EAC) forecasts are essential to effective EVM and program manager decision-making. Traditional methods rely heavily on cumulative performance indices that can mask emerging problems, while overly granular techniques can introduce statistical “noise.” The expanded structure of IPMDARs provides greater consistency and depth in reported data, enabling a new generation of analysis that treats performance as a dynamic process rather than a static record of past results. This paper presents a statistically grounded method that constrains EVM forecasts within empirically derived bounds, balancing precision and realism. By leveraging IPMDAR data, the approach enables analysts to be better storytellers, reducing distortion from extreme values while preserving the ability to zoom in on true performance trends. The result is a transparent and defensible forecasting methodology, improved from the Gold Card, that enhances confidence in EACs and supports timely, data-driven decisions.
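
For orientation, the familiar index-based EAC can be bounded exactly as described by clipping the performance factor to empirical limits; the bound values below are placeholders, not the paper's derived limits.

    import numpy as np

    bac, ac, ev = 120.0, 55.0, 48.0   # $M, illustrative
    cpi = ev / ac                     # cumulative cost performance index
    pf = np.clip(cpi, 0.80, 1.05)     # empirically derived bounds (assumed here)
    eac = ac + (bac - ev) / pf
    print(f"CPI = {cpi:.2f}, EAC = ${eac:.1f}M")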

Keywords: Data-Driven, EVM/Function Points, Methods, Performance Management, Statistics


Scheduling for Speed: Harnessing Forensic Methods to Drive Acceleration
Management, EVM & Risk (ME03)
Patrick K. Malone

Delays in defense acquisitions remain a significant threat to capability delivery and cost control. The GAO’s latest assessment shows the average time for major defense acquisition programs to field initial capability continues to slip, highlighting the need for greater schedule rigor. We present a “Data to Discovery” approach that uses forensic schedule-analysis methods to extract insights from schedule data and convert them into actionable schedule optimization. We examine techniques such as critical path analysis, fragnet modeling, contemporaneous period analysis, and logic re-sequencing to identify latent schedule inefficiencies and risk exposure. With a focus on defense acquisition programs, we evaluate realistic schedule acceleration strategies and show their benefits and risks. Using synthetic data, we demonstrate how these methods can achieve schedule reductions while minimizing risk exposure. The result is a structured, repeatable methodology for program managers to proactively optimize schedules and improve capability timelines.

Keywords: Decision Analysis, Scheduling, Uncertainty


Into the Void: A Foray Into Agile Performance Management
Management, EVM & Risk (ME04)
William Railey
Isabella Ferreira
Suzanne Burger

With software at the forefront of warfighting capability, the Government is focused on modernizing antiquated software development practices to enable resilience and speed. Government policy and acquisition now focus on the Agile framework, favoring rapid, iterative delivery over rigorous upfront planning. While having policy directing that we move faster to meet the threat is good, it isn’t enough. Our team identified voids in current policy related to performance management structures for programs performing agile software development. Without analytically rigorous performance management structures, decision makers are unable to manage and assess their programs’ performance, leading to cost overruns, schedule delays, and reduced mission effectiveness. This paper identifies gaps in PM and provides frameworks that standardize processes to improve programs’ cost, schedule, and technical outcomes. With these processes, we’ll show how effective PM results in increased readiness through iterative software delivery.

Keywords: Agile, Data-Driven, Performance Management, Software


Parts Unknown: Charting Government Progress Without an EVM Compass
Management, EVM & Risk (ME05)
Ryan Webster
Daniel Larison

Many programs escape formal EVM requirements yet still need credible answers when leadership inquires “How are we doing?” The Integrated Baseline Review (IBR) is the perfect tool to establish a shared understanding of scope and is a critical step in enabling a measurable baseline. This presentation will dissect a real-world modified IBR process which can be performed on government warfare centers and vendor efforts where EVM is not formally required. This presentation will cover how to establish a meaningful scope baseline, practical methods for claiming performance, effective baseline change procedures, useful metrics, data visualization, and critical narrative techniques to ensure leadership and other project stakeholders have meaningful insight into the project. After one year of performance management, we will cover our predictive accuracy, successes, and lessons learned. Attendees will leave with confidence that performance on every project is measurable.

Keywords: Performance Management, IBR, Dashboarding, Negotiations


Improving Software Schedule Estimation Through SRDR Data Analysis
Management, EVM & Risk (ME06)
Graham A. Wood
Kaelyn L. Richardson
Casey J. Rowzee Smith

Estimating software development schedules remains a challenge across DoW programs, where cost and schedule realism are critical to acquisition success. The industry benchmark, Dr. Barry Boehm’s COCOMO-based Schedule Estimating Relationship (SER), predicts duration from development effort, productivity, and subjective scale factors. However, this approach appears insufficient to explain much of the variation in modern software timelines. Increasing software complexity and evolving development practices motivate revisiting Boehm’s model using more recent, expansive DoW program data. This study analyzes a large subset of the SRDR database (approximately 700 observations spanning 20 years) to derive a statistically validated SER with stronger predictive performance. Advanced regression techniques are used to assess additional, more objective predictors, including development methodology, application domain, and staffing profiles. The analysis also quantifies the limits of schedule compression and the effects of Agile adoption. Results will equip the DoW cost community with enhanced tools for software schedule estimation and acquisition planning.
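
Boehm's SER takes the form TDEV = c × PM^d, which fits as a straight line in log-log space; the effort/duration pairs below are invented to show the mechanics (classic COCOMO exponents fall roughly in the 0.3–0.4 range).

    import numpy as np

    pm = np.array([120, 300, 850, 2000, 5200])  # person-months of effort
    tdev = np.array([14, 19, 27, 36, 49])       # calendar months

    d, ln_c = np.polyfit(np.log(pm), np.log(tdev), 1)
    print(f"TDEV ≈ {np.exp(ln_c):.2f} * PM^{d:.2f}")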

Keywords: Agile, Data-Driven, Regression, Scheduling, Software, Statistics, Variables


 

Modeling Track

Learning Curves: What Have We Learned – and What Remains to Be Learned?
Modeling (MD01)
Lisa Pelled Colabella
Éder M. Sousa
Timothy E. Conley
Jan Osburg

Since their first documented use in aircraft production nearly a century ago, learning curves have remained central to management research and practice. They have been studied extensively and applied across industries for cost estimating and other purposes (e.g., determining workforce training needs, supply chain management, and performance measurement). This presentation synthesizes key insights from learning curve research and industry applications over the years—and suggests new directions for learning curve modeling in the era of Big Data and Artificial Intelligence (AI). Questions discussed include the following: What learning curve characteristics are common across industries? How and why do learning rates differ across sectors? How might AI affect their application? The session concludes with implications for organizational improvement and future research opportunities.

Keywords: Learning Curves, Manufacturing


The Shape of Risk: Making Monte Carlo Results Coherent with the Galton Board
Modeling (MD02)
Garrethe Edge
Patrick Casey

Monte Carlo simulation is a powerful tool for modeling uncertainty in cost estimates, but communicating its outputs—like probabilistic cost distributions and S-curves—to leadership can be challenging. This presentation introduces a fresh, visual approach using a physical Galton board to explain core risk concepts such as confidence levels, cost variability, and the meaning of cumulative distribution functions (CDFs). Designed for the cost analysis community, this session bridges the gap between technical rigor and executive comprehension. We’ll reference best practices from the Joint Agency Cost Risk and Uncertainty Handbook to ensure analytical fidelity, while also exploring creative techniques to clarify complex outputs. Attendees will learn how to relate simulation results to real-world budget decisions, how to frame percentiles in actionable terms, and how to use physical or visual analogies to demystify uncertainty. Come for the S-curves, stay for the falling beads—and leave with new tools to make risk actionable.
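
A Galton board is itself a physical Monte Carlo: each bead's left/right bounces sum to a binomial outcome, and the bins trace out the bell curve whose cumulative form becomes the S-curve. A few lines of Python reproduce the demonstration (bead and row counts are arbitrary).

    import numpy as np

    rng = np.random.default_rng(7)
    rows, beads = 12, 10_000
    bins = rng.binomial(rows, 0.5, size=beads)  # final bin of each bead

    # Read a confidence level straight off the empirical CDF.
    print(f"70th-percentile bin: {np.percentile(bins, 70):.0f} of {rows}")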

Keywords: Communication, Modeling, Monte Carlo, Statistics


Title/summary not approved for public release
Modeling (MD03)
Brian Fitzpatrick

Title/summary not approved for public release

Keywords: Decision Analysis, DOD/MOD, Methods, Microsoft Excel, Modeling, Uncertainty, Proliferated


Beyond the Spreadsheet: Exploring Human Decision Behaviour in Analysis of Alternatives
Modeling (MD04)
Joseph Guy

Decision makers in capability development face complex trade-offs between cost, risk and operational effectiveness, yet their choices are ultimately influenced by human judgement. While Analysis of Alternatives (AoA) provides a framework for rigorous quantitative evidence, the behavioural dynamics that shape final investment decisions remain largely unexplored. This study introduces a simulation-based experiment designed to observe how individuals and teams navigate these complex trade-off spaces and respond to uncertainty. Participants act as strategic decision makers tasked with allocating resources in a scenario-based wargame that reflects real world AoA challenges. Their decisions are analysed to identify patterns in risk appetite, bias, collaboration and adaptability. This study aims to reveal how behavioural factors affect the interpretation of analytical evidence, offering insights to improve how AoA results are communicated, understood and used in capability development.

Keywords: Data Collection, Data-Driven, Decision Analysis, Story Points, Uncertainty, Wargaming, Alternative Analysis


Cost, Demand, and Financial CATscans
Modeling (MD05)
Doug Howarth

In the estimating world, the focus is on costs, which is normal, as runaway costs can sink any program. What is less well understood is the relationship between costs, features, and the quantity produced for any given project. Specifically, the value of various product features offered is seldom compared to their costs, as limited by demand and learning curves, all of which can change. This paper offers a solution for first capturing these relationships and then reducing them. Financial CATscans take representative 2D Demand Planes (with Quantity and Price as dimensions) and discover Demand Frontiers. That Frontier sets the Price-Quantity limits. If we pick a target Quantity, we set a limiting Price. A Price limit, in turn, reveals an adjacent 3D Value Space (with Feature 1, Feature 2, and Price as dimensions). Financial CATscans reveal how to reduce the resulting 4D systems down to one term that sets product specifications.

Keywords: Cost Management, Cost/Benefit Analysis, Modeling, Performance Management, Hypernomics, Multidimensional
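
Purely as an illustration, and not the author's Hypernomics method, one might sketch the first step as fitting a price-quantity trend to market observations and reading off the limiting price at a target quantity; the log-log form and all data below are assumptions:

    # Hypothetical sketch only: fit a log-log price-quantity trend as a stand-in
    # for a Demand Frontier, then read the limiting price at a target quantity.
    # Data and functional form are illustrative assumptions, not the paper's method.
    import math

    observations = [(10, 950.0), (40, 600.0), (150, 320.0), (600, 180.0)]  # (qty, price)
    xs = [math.log(q) for q, _ in observations]
    ys = [math.log(p) for _, p in observations]
    n = len(observations)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    target_qty = 300
    limiting_price = math.exp(intercept + slope * math.log(target_qty))
    print(f"at quantity {target_qty}, limiting price ~ {limiting_price:,.0f}")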


Spoiler Alert! The Cost of Downforce in an MDAO Framework
Modeling (MD06)
Kristen Jingco

In many aerospace and defense programs, cost estimating remains disconnected from the iterative design process, resulting in technically optimal solutions that later prove unaffordable or prone to overruns. This paper presents a practical approach to embedding cost analysis as a formal discipline within a Multidisciplinary Design Analysis & Optimization (MDAO) environment using integration platforms. The proposed framework connects traditional engineering analyses – such as aerodynamics, structures, and mass properties (via CAD and code-based performance analyses) – to parametric cost models. Two optimization cases surrounding the design of a Formula 1 rear-wing will be explored: one driven by aerodynamic performance and another incorporating cost as a coupled objective. Anticipated results aim to illustrate how visibility of affordability metrics can influence design decisions, offering a practical foundation for integrating MDAO into future cost estimation workflows for high-performance systems.

Keywords: Cost Management, Data-Driven, Decision Analysis, Early Cost, Methods, Modeling, Parametrics, Process Engineering, Software, Variables, Multidisciplinary Design Analysis & Optimization, MDAO, Design-to-Cost, Affordability Analysis, Aerodynamics, Design Optimization, Tradeoffs, Computational Modeling
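
To suggest the flavor of a coupled objective, here is a toy sketch in which all relationships are notional assumptions (not the paper's models): downforce rewards a larger rear wing, a parametric cost penalty grows faster, and a weight on cost shifts the selected design:

    # Toy sketch (all relationships notional): folding a parametric cost model
    # into a design objective. Downforce grows with wing area; cost grows faster;
    # a weighted objective trades the two, mimicking cost as a coupled objective.
    areas = [round(0.5 + 0.05 * i, 2) for i in range(23)]   # 0.50 .. 1.60 m^2

    def downforce(a):
        return 4200 * a ** 0.9        # notional aero response, N

    def cost(a):
        return 4000 * a ** 3          # notional parametric cost, $

    for w in (0.0, 0.5):              # weight on cost in the coupled objective
        best = min(areas, key=lambda a: -downforce(a) + w * cost(a))
        print(f"cost weight {w}: area {best} m^2, "
              f"downforce {downforce(best):,.0f} N, cost ${cost(best):,.0f}")

With the cost weight at zero, the search simply maximizes downforce and picks the largest wing; adding the cost term pulls the optimum to a smaller, cheaper design, which is the behavior the paper aims to expose.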


When History Rhymes: Cost Estimating Lessons from Seven Legacy Aircraft
Modeling (MD07)
Brent M. Johnstone

Every historical aircraft program carries a narrative that can inform present-day cost estimating. By “reading” the manufacturing learning curves of seven legacy aircraft (F-102, F-106, B-58, F-111, L-188, P-3, and L-1011), we uncover recurring themes confirming the adage that history does not repeat itself, but it does rhyme. This study identifies quantitative and qualitative lessons from these legacy programs, revealing how design changes, delivery rate fluctuations, production gaps, end-of-program tail-up, and knowledge transfer across multiple variants shaped cost curves. The analysis confirms that even decades-old programs can provide actionable insights for today’s estimators, offering a framework to anticipate cost drivers, calibrate learning curve models, and improve forecasting accuracy for current and future aircraft estimates.

Keywords: Data-Driven, Labor, Learning Curves, Manufacturing, Methods, Modeling


Learning Your ABC’s: EMD Schedule Analysis Across DoD Acquisition Milestones
Modeling (MD08)
Lindsey Jones

Schedule estimating in the Engineering and Manufacturing Development (EMD) phase of acquisition is mission-critical to aligning funding profiles, capability delivery, and acquisition strategy execution. However, EMD schedules continue to grow past projections, in part due to technical uncertainty, integration and testing challenges, and evolving acquisition approaches. This study leverages Selected Acquisition Report (SAR) data from over 50 programs to examine the differences in EMD schedule duration for programs spanning multiple commodities, decades, Services, and acquisition approaches. Our approach documents the data-quality challenges in the SAR data and how we overcame them, enabling our research to inform decision-makers. The paper presents empirical schedule benchmarks across several programmatic variables, including commodity and acquisition pathway, and derives Schedule Estimating Relationships (SERs) to provide data-driven insights to improve schedule realism in future acquisition planning and cost estimates.

Keywords: Data Collection, Data-Driven, Scheduling, Variables


Site-Split Major Modernization Model Development
Modeling (MD09)
Gabriel Sandler

The National Nuclear Security Administration (NNSA) is analyzing where it can build efficiencies within its upcoming stockpile modernization acquisition programs. To account for possible compressed acquisition schedules during development, the NNSA’s Office of Programming, Analysis, and Evaluation (PA&E) developed an analogy-based, site-level cost estimating methodology for stockpile modernization programs that expanded upon PA&E’s Major Modernization Model (MMM) and Scope, Complexity, Options, Risks, Excursions (SCORE) process. The modified MMM, referred to as the site-split MMM, allows NNSA to develop cost estimates at the appropriate level of detail within expedited timeframes and without the usual amount of design definition found in a typical bottom-up estimate. The site-split MMM is a Monte Carlo-based model that uses actual and forecasted costs of past and ongoing weapons programs along with technical complexity information derived by subject matter experts to estimate the acquisition cost of stockpile modernization programs.

Keywords: Keywords not provided


Acquisition Planning Evolved: Portfolio Optimization using Genetic Algorithms
Modeling (MD10)
Matt Siiro
Shahriar Rayhan
Rebecca Lilley

Effective portfolio analysis requires evaluating investments under significant uncertainty and complex interdependencies. Traditional linear optimization methods fail to guide leadership effectively because the algorithms cannot account for complex relationships and shifting constraints, leading to inefficient resource allocation and misplaced confidence. This paper presents the application of Genetic Algorithms (GAs) to optimize National Nuclear Security Administration (NNSA) infrastructure and weapons portfolios. By emulating evolutionary processes, GAs efficiently explore vast solution spaces, adapt to changing priorities, and identify near-optimal investment strategies. This paper demonstrates the method’s robustness in addressing competing objectives and dynamic constraints inherent to large-scale acquisition portfolios. This exploration highlights advantages of GAs for strategic decision analysis within the NNSA and demonstrates their broader applicability to portfolio optimization across government acquisition portfolios.

Keywords: Data-Driven, Government, Infrastructure, Program Management, Scheduling, Optimization
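
For intuition, here is a minimal GA sketch on a notional budget-capped project-selection problem; the data and operators are illustrative assumptions, not the NNSA portfolio or the authors' implementation:

    # Minimal sketch (notional data): a genetic algorithm selecting a project
    # portfolio under a budget cap. Select/skip bits evolve via crossover and
    # mutation; infeasible (over-budget) portfolios are scored as worthless.
    import random
    random.seed(1)

    costs  = [40, 25, 60, 35, 20, 50, 15, 30]   # project costs, $M (notional)
    values = [60, 30, 90, 40, 25, 80, 10, 45]   # mission value scores (notional)
    BUDGET = 150

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        return sum(v for v, b in zip(values, bits) if b) if cost <= BUDGET else 0

    pop = [[random.randint(0, 1) for _ in costs] for _ in range(40)]
    for _ in range(200):                         # generations
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                       # truncation selection
        children = []
        while len(children) < 30:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(costs))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # mutation: flip one bit
                i = random.randrange(len(costs))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    print("selected projects:", [i for i, b in enumerate(best) if b],
          "value:", fitness(best))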


Preparing for the Predictable: Applying the CRED to Past Acquisition Programs
Modeling (MD11)
Gwenevere Tirpak
Russel Mccawley

The Cost Risk/Uncertainty Exposure Determination (CRED) model is a way to measure risk and exposure as cost estimates are developed. Measuring the risk and exposure of Navy acquisition programs during execution allows for more robust plans and responses to changes in cost or schedule. The tool helps analysts identify the knowledge gaps within a cost estimate and account for significant risks that could greatly affect the accuracy of the model. This paper applies the CRED model to a past acquisition program in a case study to determine whether observed cost and schedule overruns could have been better predicted, and thus better prepared for, when the original estimate was being constructed.

Keywords: Cost Management, Data-Driven, Modeling, Risk, Uncertainty


 

Processes Track

Let’s Ask the Audience: An Interactive Presentation on Briefing a Cost Estimate
Processes (PR01)
Wendy Cassidy

Pause right there, before your answer grows legs and walks up the reporting chain; let’s discuss! In this scene, your team is set to brief the results of your program office estimate when your team lead calls out sick. You must step up and brief the customer in their absence. Although you played a large role in creating the estimate and have been part of the preparations, some questions might still cause you to stumble through the answer. This interactive presentation will explore methods for responding to unexpected or challenging customer questions, ways to recognize areas within your estimate that are likely to be scrutinized, techniques to ensure your answers are correct, and tips on delivering a brief that gives the customer confidence in the estimate. Throughout the presentation, the audience will be called on to guide the analyst’s approach based on customer reactions.

Keywords: Communication, Program Management


Sales Principles for Cost Analysts: How to Win in the Room Where It Happens
Processes (PR02)
Matthew Hoffman

Cost estimators may not be on Broadway, but every briefing, justification, and “walk me through this number” moment puts the analyst in their own Hamilton moment. This presentation explores how classic sales principles guide cost analysts as trust is built, stakeholder needs are understood, and buy-in is earned for complex financial models. Analysts can avoid throwing away their shot by moving beyond dense tables and eye charts to tell stories that highlight value, risk, and mission impact while communicating with the clarity decision makers expect. Attendees learn how to anticipate objections, tailor insights to the priorities of technical teams, finance, and senior leadership, and foster collaboration that strengthens both relationships and results. With open communication, focused storytelling, and a little Founding Father flair, analysts can steer the discussion long before the final performance. Every estimate becomes an opportunity to influence decisions in the room where it happens.

Keywords: Keywords not provided


From Black Box to Blueprint: Building Better Cost Models for Defense Programs
Processes (PR03)
Kat Lemmons
Markie Harris

Is your MDAP cost model a black box? In the high-stakes world of DoD acquisitions, guesswork isn’t an option. This presentation pulls back the curtain on building a reliable and defensible cost model, emphasizing the importance of structure in the development process. We’ll explore how a well-defined framework of cost drivers, elements, and logic, supported by techniques like parametric modeling and sensitivity analysis, is essential for producing accurate estimates. Attendees will learn how to avoid common pitfalls, such as oversimplification, unclear assumptions, and ignoring uncertainty. Through real-world scenarios, we’ll demonstrate how to enhance model accuracy, improve auditability, and drive cost-effective outcomes, ultimately empowering data-driven decisions within complex defense programs.

Keywords: Cost Management, Decision Analysis, Modeling


GAO: A Look at Recent EVM Audits
Processes (PR04)
Jennifer Leotta

The Government Accountability Office (GAO) Cost Guide discusses best practices associated with developing reliable, high-quality cost estimates. But that’s not all! The Cost Guide also provides criteria related to earned value management (EVM) to ensure that the EVM system provides a strong, integrated view of a project’s progress and performance. This presentation will provide an overview of the Cost Guide’s EVM criteria, patterns and trends we have observed through audits using the EVM criteria, and a look at two case studies from recently published reports.

Keywords: Project Controls, Best Practices


Sustainment Reviews (SRs): The Lessons We’ve Learned
Processes (PR05)
Lisa Mably

The 4323 statute requires the Services to review how we conduct sustainment of our major systems. Once covered programs are five years past their Initial Operating Capability milestone, a ten-element requirement compares how the program is performing in operations with how we planned for it to perform. One key element is the Independent Cost Estimate (ICE). In the last five years, the DAF has completed more than 25 SRs for 22 separate programs and multiple types of weapon systems. By tracking issues, we have been able to implement process improvements. And by reviewing the SRs’ results over the past five years, we are finally able to answer, “Do DAF weapon systems experience critical O&S cost growth?” This first cycle of DAF programs provided distinct lessons learned about our systems, process, and the SR requirement. This briefing consolidates those lessons learned to share with others.

Keywords: Sustainment Reviews, Sustainment, O&S


Workforce Cost, Stabilization & Projections Analysis – A System Dynamic Approach
Processes (PR06)
Dr. Stephen R. Parker, Ph.D., P.E.

A unique approach is further developed for evaluating government, industry (civilian), and military workforce requirements, projections, stabilization, and associated costs. This System Dynamics continuous simulation focuses on personnel by position or grade, applying goal-seeking analysis to achieve defined end-states through time and optimizing hiring/promotion strategies while maintaining workforce obligations. The analysis draws upon historical data: hires (special and scheduled), promotions, attrition (retirements, losses, relocations), and the associated cost of personnel by position, including STEM. Additional analysis includes sudden-reduction (lay-off) scenarios and strategies to evaluate the resilience of the workforce. Results pair hiring strategies with heat maps that highlight hiring focus areas by position, along with planning forecasts to support fiscal-year budget projections. Workforce benefits are modeled and calculated to depict the total cost of personnel, providing accurate cost profiles for inclusion in annual budget builds.

Note: The opinions and views expressed are those of the author and do not authenticate content or reflect the opinions or views of the IC or the USG. Additionally, all data presented in this paper is notional for analytical and demonstration purposes and does not reflect the status quo for commercial industry (civilian) or government agency.

Keywords: Decision Analysis, Cost Management, Modeling, Monte Carlo, Data Driven, System Dynamics, Continuous Simulation, Resiliency
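
A minimal sketch of the goal-seeking stock-and-flow structure described above, with all values notional per the author's note:

    # Notional sketch: a one-stock system dynamics loop for workforce planning.
    # Headcount erodes through attrition; a goal-seeking hiring rate closes the
    # gap to the target end-state over an adjustment time.
    TARGET, ATTRITION, ADJUST_YEARS = 1200, 0.08, 3.0
    headcount = 1000.0
    for year in range(1, 11):
        hires = max(0.0, (TARGET - headcount) / ADJUST_YEARS + ATTRITION * headcount)
        headcount += hires - ATTRITION * headcount
        print(f"year {year:2}: hires {hires:5.0f}, headcount {headcount:6.0f}")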


From Mr. Worldwide to Mr. Bottom Line: Cost Estimating When Info is Scarce
Processes (PR07)
Nikol Podlacha
Shannon Cardoza

Imagine this: Pitbull, Mr. Worldwide himself, is your program manager, handing you a handful of vague details about a DoD project—no technical specs, no schedules, just a few catchy phrases and a whole lot of attitude. Your mission? Develop a cost estimate, and he wants you to “give him everything, tonight.” Sounds impossible, right? But this is precisely the challenge cost estimators face when program managers and engineers provide limited or unclear information. In this presentation, we’ll break down how to transform ambiguity into clarity, using Pitbull’s lyrics as our guide. We’ll give techniques for reading between the lines to turn scarce data from a negative to a positive, improving estimate fidelity and bolstering risk assessments. Walk away energized, equipped with practical tools, and ready to make your next estimate a chart-topper — because in the world of cost analysis, “you can’t stop the party.”

Keywords: Communication, Data Collection, Data-Driven, Functional Requirements, Risk, Uncertainty, Variables, Music, Mr. Worldwide, Party, Technical Ambiguity


 

Trending Topics Track

The Journey to Defining Prediction Horizons
Trending Topics (TT01)
John Bowers
Brett Fitti-Hafer
Niraj Amin
Leslie Bond

As cost estimators, we are routinely called upon to predict the behavior of future escalation; in practice, however, a range of influencing factors can make accurate escalation forecasting difficult. Given this challenge, is it possible to detect when our estimates lose their integrity? Our goal was to explore exactly that and, ideally, to identify a replicable process for finding the point where forecasting becomes unpredictable and loses credibility: an interval of time we call the “prediction horizon.” By reviewing historical data and economic modeling practices and building visualizations, we set out to identify patterns that reaffirm the volatility of estimates far into the future. Quantifying a prediction horizon, and identifying the point where forecasting becomes less reliable, improves the integrity of our estimates. This panel covers our journey to quantifying a “prediction horizon” and what we observed along the way.

Keywords: Data-Driven, Uncertainty, Escalation, Inflation


From Rivets to Requirements: Modernizing the Ship WBS
Trending Topics (TT02)
Jonathan Caldwell-Dafoe
Bobby Alvarez
Sean Wells
Andrew Shober

Shipbuilding is a forty-one-billion-dollar industry critical to national defense that, despite its recently elevated importance, cannot shake a history of uncontrollable cost and schedule growth. The Ship Work Breakdown Structure (SWBS) underpins planning, costing, and management of ship programs. However, the SWBS is insufficient and lacks granularity in key labor activities, leaving analysts and engineers mired in confusing and duplicative structures and without the ability to analyze performance accurately. Our paper presents the thorough process the authors used to derive the updated SWBS from the prior structure, including logic checks and text analysis, producing a framework that spans the naval industry. This modernized SWBS has better applicability across all vessel types and acquisition phases. By harnessing more granular and consistent data across labor categories, the SWBS enables more realistic and defensible cost estimates, cleaner comparisons across vendors, and better ability to predict and manage growth.

Keywords: Data Collection, DOD/MOD, International, Labor, Program Management, WBS, EVM, Shipbuilding, Naval


Location Location Location! Foreign Manufacturing and Real Price Change
Trending Topics (TT03)
Mo Deane
Erik Gyorgy
Wade Wathen

Should analysts incorporate different real price change for foreign versus domestic products that are within the same commodity area? Recently published estimates put the cost of imported goods at 3% to 15% more than what pre-tariff trends would have predicted. In this study, we use vendor pricing information publicly available online to compare recent price changes for COTS products manufactured domestically in the U.S. versus imported to the U.S., and provide considerations for approaching escalation analyses based on these findings.

Keywords: Keywords not provided


Termination for (In)Convenience: Framework for Estimating Termination Liability
Trending Topics (TT04)
Meave Fryer
Katie Spearly
Mary Keenan
Sean Wells

Cost analysts are increasingly asked to estimate the cost of ending government contracts as terminations become more frequent and politically salient. In 2024, agencies terminated nearly 40,000 contracts, and the current administration has placed new emphasis on reviewing programs for potential restructuring and cancellation. While FAR Part 49 defines allowable settlement costs, it doesn’t capture the broader government costs of unwinding a contract – such as internal closeout labor, data and property disposition, and re-procurement or restart costs – that together represent a truer measure of termination liability. This paper introduces a standardized, practical framework for identifying and estimating termination liability beyond just the FAR. Grounded in regulatory guidance, public data, corporate experience, and historical cases, this paper demystifies termination in today’s evolving acquisition environment and helps analysts more quickly and consistently inform leadership decisions on when ending a contract makes fiscal sense.

Keywords: Cost Management, Government, Life Cycle, Methods, Estimating Framework, Termination for Convenience, Acquisition, Indirect Costs


The Ultimate What-If: USA vs. Global Coalition
Trending Topics (TT05)
Daniel Herrera
Sergey Kozin

At a time of shifting national security priorities, polarized political climates, and ambiguous economic relationships, the future is uncertain. Utilizing publicly available data reflecting $3T in annual global military expenditure we attempt to measure the military, manufacturing, and logistical capacities of the world’s nations and run the ultimate scenario: What if everyone attacked the USA? What would the most powerful military in the world have to spend to prevail? Is there any chance to win without turning to the deadliest of options that ensure mutually assured destruction? What are the macroeconomic and human implications of such a conflict? Join us for this thought-provoking assessment of the current global military posture and a hypothetical scenario where all options are on the table.

Keywords: Data Collection, DOD/MOD, Infrastructure, International, Manufacturing, Operations

CEBoK® Training Track

Complimentary access to CEBoK®2.0 is available for all current ICEAA members. Log on to your ICEAA profile to view CEBoK® materials.

Certification Program Overview
CEBoK® Training Track (CEB00)
Trainer(s) TBD
This interactive session introduces the ICEAA certifications – Certified Cost Estimator/Analyst (CCEA®), Professional Cost Estimator/Analyst (PCEA®), and Software Cost Estimating Certification (SCEC). It covers eligibility and certification requirements, examination topics, relationships to the Cost Estimating Body of Knowledge (CEBoK®), the online exam format, and recertification requirements. It is also a great opportunity to talk with the ICEAA Board’s Professional Development team and get your questions answered.


Cost Estimating Basics, Costing Techniques, and Parametric Estimating
CEBoK® Training Track (CEB01)
Trainer(s) TBD
The Basics & Techniques session provides an overview of cost estimating and analysis and the reasons for doing cost estimates, as well as the four essential cost estimating techniques most often used to develop realistic and credible estimates. Additionally, we will review cost estimating products and related topics such as schedule and operations and support estimating, providing the background information and fundamental knowledge from CEBoK® Modules 1-3.


Data Collection and Normalization
CEBoK® Training Track (CEB04)
Trainer(s) TBD
This session covers the Core Knowledge section of CEBoK® Module 4: Data Collection. All estimating techniques and cost estimating models require credible data before they can be used effectively. In this module we will discuss the processes needed to collect and analyze the data used in parametric applications, as well as data types, sources, and adjustment techniques.


Inflation and Index Numbers
CEBoK® Training Track (CEB05)
Trainer(s) TBD
This session covers the Core Knowledge section of Inflation and Index Numbers (CEBoK® Module 5). Proper inflation analysis is essential to the success of any cost estimate or economic analysis. Calculating inflation correctly and understanding the fundamental concepts will enable you to produce cost estimates that are timely, accurate, and credible to support your program’s lifecycle needs. It will also empower you to communicate with key stakeholders on the need to adjust your financial estimates based on changes in the economy.
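
A quick worked example of the core mechanic, using notional indices: divide by the index of the dollars you have and multiply by the index of the dollars you want:

    # Worked example (notional indices): converting constant-year dollars to
    # then-year dollars with a price index.
    index = {2024: 1.000, 2025: 1.031, 2026: 1.060, 2027: 1.087}  # notional

    cost_by2024 = 5_000_000          # estimate in constant FY2024 dollars
    for year in (2025, 2026, 2027):
        then_year = cost_by2024 * index[year] / index[2024]
        print(f"FY{year}: ${then_year:,.0f} then-year")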


Basic Data Analysis Principles and Probability and Statistics
CEBoK® Training Track (CEB06)
Trainer(s) TBD
This session discusses the analytical steps to take after obtaining a set of cost data, covering graphical techniques for displaying and analyzing data along with statistical analysis of univariate and bivariate data sets (CEBoK® Modules 6 & 10). Other topics include measures of central tendency and dispersion and important probability distributions. We also introduce the concept of a random variable; Monte Carlo simulation; and the differences between the normal and lognormal distributions. Finally, we discuss hypothesis testing.
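
One small sketch of a concept from the session: a lognormal variable is simply exp() of a normal one, which is why its mean sits above its median (all values notional):

    # Quick sketch: exponentiating a normal variable yields a right-skewed
    # lognormal one; compare mean vs. median to see the asymmetry.
    import random, statistics, math
    random.seed(0)

    normals = [random.gauss(0.0, 0.5) for _ in range(100_000)]
    lognormals = [math.exp(z) for z in normals]
    print("normal    mean/median:", round(statistics.mean(normals), 3),
          round(statistics.median(normals), 3))
    print("lognormal mean/median:", round(statistics.mean(lognormals), 3),
          round(statistics.median(lognormals), 3))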


Learning Curve Analysis 
CEBoK® Training Track (CEB07)
Trainer(s) TBD
This training track presentation of CEBoK® Module 7 (Learning Curves) covers the key ideas, analytical constructs, and applications of the module. Beyond the theoretical information, we will present the study questions for Module 7 with the steps required to solve the problems using only a calculator, as is required on the certification exam.
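
An exam-style worked example with notional numbers, solvable with only ln and powers on a calculator:

    # Worked problem (notional numbers): with unit theory, T1 = 100 hours and an
    # 80% slope, find the hours for unit 50.
    import math
    T1, slope, unit = 100.0, 0.80, 50
    b = math.log(slope) / math.log(2)            # b = ln(0.80)/ln(2) = -0.3219
    print(f"T({unit}) = {T1 * unit**b:.1f} hours")   # 100 * 50**-0.3219 ~ 28.4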


Regression Analysis 
CEBoK® Training Track (CEB08)
Trainer(s) TBD
This course introduces the basic concepts of regression and provides a demonstration of a simple linear ordinary least squares model (CEBoK® Module 8). This session focuses on the basics required to build and evaluate a simple linear model such as a Cost Estimating Relationship (CER). Key concepts include correlation, minimizing error, homoscedasticity, statistical significance, goodness of fit, confidence intervals, uncertainty, and analysis of variance. The better you understand these concepts, the better you will be able to make inferences about cost data and employ more complicated regression techniques.
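
A minimal sketch of the simple linear CER the session builds, on notional data:

    # Minimal sketch (notional data): ordinary least squares for a simple linear
    # CER, cost = b0 + b1 * weight, with R-squared as the goodness-of-fit measure.
    weights = [120, 180, 250, 310, 400]          # cost driver, e.g., weight in kg
    costs   = [2.1, 2.9, 3.8, 4.4, 5.7]          # cost in $M (notional)
    n = len(weights)
    mx, my = sum(weights) / n, sum(costs) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(weights, costs))
          / sum((x - mx) ** 2 for x in weights))
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(weights, costs))
    ss_tot = sum((y - my) ** 2 for y in costs)
    print(f"CER: cost = {b0:.3f} + {b1:.5f} * weight,  R^2 = {1 - ss_res/ss_tot:.3f}")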


Cost and Schedule Risk Analysis
CEBoK® Training Track (CEB09)
Trainer(s) TBD
This session will provide motivation for the need for risk analysis and introduce the basic types and uses of risk (CEBoK® Module 9). It will focus on the practical execution of the general risk analysis process: develop a point estimate; identify the risk areas in the point estimate; determine uncertainty around the point estimate; apply correlation between uncertainty distributions; run the Monte Carlo simulation; assess the reasonableness of results; calculate, allocate, and phase risk dollars; and show the results.
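
A compact sketch of several of those steps on a notional two-element estimate, with correlation induced between the uncertainty distributions before reading percentiles off the simulated total:

    # Compact sketch (notional inputs): two WBS elements with lognormal
    # uncertainty, positive correlation induced between them, and the simulated
    # total read off at the 50th and 80th percentiles.
    import random, math
    random.seed(0)
    rho, trials = 0.6, 50_000
    totals = []
    for _ in range(trials):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)  # correlate
        elem1 = 10 * math.exp(0.20 * z1)     # point estimate 10, ~20% sigma
        elem2 = 15 * math.exp(0.30 * z2)     # point estimate 15, ~30% sigma
        totals.append(elem1 + elem2)
    totals.sort()
    print("50th pct:", round(totals[int(0.50 * trials)], 1),
          " 80th pct:", round(totals[int(0.80 * trials)], 1))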


Manufacturing Cost Estimating 
CEBoK® Training Track (CEB11)
Trainer(s) TBD
The goal of the Manufacturing Cost Estimating module (CEBoK® Module 11) is to arm the student with a set of techniques used to address issues unique to estimating in the manufacturing environment. Our objective in this module is to raise the most common issues, considerations, and concerns the estimator must be aware of in a typical major manufacturing environment and to provide techniques for addressing them. Depending on time and the interest of attendees, example problems can be worked as exam preparation.


Software Cost Estimating Using CEBoK-S 
CEBoK® Training Track (CEB12)
Trainer(s) TBD
This session covers the core knowledge of Software Cost Estimating using CEBoK-S (all PCEA/CCEA testable topics are included). It will be of particular interest to anyone studying for the ICEAA certification exam. The session provides an introduction to the basics of the software development and maintenance processes and how to estimate the related effort. The key ideas of Software Cost Estimating include the cost drivers of size, complexity, and capability. In the sizing area, we’ll focus on understanding the physical size, functional size, relative effort measures (agile software development), and RICE(FW) objects. We’ll also discuss the primary software development paradigms – Predictive (waterfall), Predictive with modification (incremental, evolutionary, and spiral methods), Agile (iterative, scrum, SAFe), and Hybrid – and how to model them from a cost estimating perspective.


Economic Analysis 
CEBoK® Training Track (CEB13)
Trainer(s) TBD
This session covers the Core Knowledge section of Module 13 Economic Analysis of CEBoK®. It will be of particular interest to anyone studying for the ICEAA certification exam. The session provides a practitioner’s perspective for conducting an economic analysis (EA) by reviewing EA concepts, terminology, variables and measures-of-merit. By accounting for monetized costs, monetized benefits, opportunity costs and time-value-of-money (“discounting”), an EA enables one to calculate economic measures-of-merit.
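
A worked example of the discounting mechanic with notional cash flows and a notional discount rate:

    # Worked example (notional cash flows): discounting turns yearly costs and
    # benefits into a net present value, the basic EA measure-of-merit.
    rate = 0.03                                   # notional real discount rate
    costs    = [100, 20, 20, 20, 20]              # $K by year, year 0 first
    benefits = [0, 50, 50, 50, 50]
    npv = sum((b - c) / (1 + rate) ** t
              for t, (c, b) in enumerate(zip(costs, benefits)))
    print(f"NPV = ${npv:,.1f}K")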


Contract Pricing 
CEBoK® Training Track (CEB14)
Trainer(s) TBD
This session explores the basics of contract pricing (CEBoK® Module 14). We explore various contract types and the factors and considerations related to choosing a contract type. We also explore fee, shared risk, cost-price proposal preparation, the makeup of a good Basis of Estimate (BOE), and evaluation efforts. This session also provides an introduction to cost management. Some methods discussed include Total Ownership Cost (TOC), Cost As an Independent Variable (CAIV), Target Costing, and Activity Based Costing (ABC).


Earned Value Management and Cost Management  
CEBoK® Training Track (CEB15)
Trainer(s) TBD
This session will provide an introduction to the basic concepts of earned value management (CEBoK® Modules 15 & 16), with a focus on implementation, governance, and practical application in support of a project or program. Specific topics will include basic EVM components and data elements, as well as standard earned value analysis techniques. We will use practice problems throughout the presentation to demonstrate and reinforce the basic principles of EVM.
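
A practice-style worked example with notional data elements, deriving the standard indices and a CPI-based estimate at completion:

    # Practice-problem sketch (notional data): basic EVM data elements and the
    # standard derived indices, with EAC = BAC / CPI.
    BAC = 1_000_000                          # budget at completion
    PV, EV, AC = 400_000, 350_000, 420_000   # planned value, earned value, actuals
    CPI, SPI = EV / AC, EV / PV
    print(f"CV = {EV - AC:,}  SV = {EV - PV:,}")
    print(f"CPI = {CPI:.2f}  SPI = {SPI:.2f}  EAC = {BAC / CPI:,.0f}")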