
Monte Carlo analysis de-emphasizes extreme events, which are usually the things that cause major projects and capital programs to fail.

Monte Carlo analysis has some key limitations that make the technique a major cause of concern for new project development. This article discusses four paradoxes that every organization should be aware (and beware) of when developing new capital programs. A balanced approach using qualitative and quantitative techniques is advised.


Caveats

Out of the gate, let me say that I love Monte Carlo analysis. I have used the approach for over thirty years. I have degrees in engineering and finance. I built an asset management practice that used Monte Carlo analysis as a core technique and even did a keynote address at an international user conference on how to build this type of practice. My current practice uses Monte Carlo analysis as a core technique.


This article is not about what is right with Monte Carlo analysis but rather what is wrong with it.


What is Monte Carlo Analysis

Monte Carlo analysis is a computer-based method of analysis developed in the 1940s that uses statistical sampling techniques to obtain a probabilistic approximation to the solution of a mathematical equation or model. This 1997 definition from the United States Environmental Protection Agency (USEPA) is painfully simple – too simple by today’s standards – but there are a few key insights to be gained from it.


First, Monte Carlo analysis has been around for 75 years. It is not mainstream, yet.


Second, it involves statistical sampling techniques. There are many assumptions and judgments involved in statistics and statistical sampling.


Third, Monte Carlo analysis approximates a model (a model approximates what happens in the real world). Hmm, an approximation of an approximation.


If it makes it any clearer, the international risk standard explains “techniques such as Monte Carlo simulation provide a way of undertaking the calculations and developing results. Simulation usually involves taking random sample values from each input distribution, performing calculations to derive a result value, and then repeating the process through a series of iterations to build up a distribution of the results. The result can be given as a probability distribution of the value or some statistic such as the mean value.”
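The standard's description maps directly to a few lines of code. Here is a minimal sketch in Python; the two-component cost model and the triangular distributions are illustrative assumptions, not anyone's real project data.

```python
import random
import statistics

def simulate_cost(iterations=10_000, seed=42):
    """Monte Carlo per the ISO description: sample each input distribution,
    compute a result, repeat, and build up a distribution of results."""
    random.seed(seed)
    results = []
    for _ in range(iterations):
        design = random.triangular(0.8, 1.5, 1.0)        # $M: low, high, mode
        construction = random.triangular(4.0, 9.0, 5.0)  # $M: low, high, mode
        results.append(design + construction)
    return results

totals = simulate_cost()
print(f"mean cost ~ {statistics.mean(totals):.2f} $M")
print(f"90th percentile ~ {sorted(totals)[int(0.9 * len(totals))]:.2f} $M")
```

The result can then be reported as the full distribution or as a summary statistic such as the mean, exactly as the standard says.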


Uses According to ISO 31010

In general, Monte Carlo simulation can be applied to any system in which a set of inputs interact to define an output. The relationship between the inputs and outputs can be expressed as a set of dependencies. The technique is particularly valuable when analytical techniques are not able to provide relevant results, or when there is uncertainty in the input data.


Monte Carlo simulation can be used as part of risk assessment for two different purposes:

  • uncertainty propagation on conventional analytical models, and

  • probabilistic calculations when analytical techniques do not work (or are not feasible).


Uses According to USEPA

Again, much insight can be gained from the USEPA when considering human health risk assessments. A Monte Carlo analysis may be useful when:

  • screening calculations using conservative point estimates fall above the levels of concern

  • it is necessary to disclose the degree of bias associated with point estimates of exposure

  • it is necessary to rank exposures, exposure pathways, sites, or contaminants

  • the cost of regulatory or remedial action is high and the exposures are marginal

  • the consequences of simplistic exposure estimates are unacceptable


Limitations According to ISO 31010

ISO 31010 provides its list of limitations for using Monte Carlo analysis.

  • The accuracy of the solutions depends upon the number of simulations that can be performed.

  • The use of the technique relies on being able to represent uncertainties in parameters by a valid distribution.

  • Setting up a model that adequately represents the situation can be difficult.

  • Large and complex models can be challenging to the modeler and make it difficult for stakeholders to engage with the process.

  • The technique tends to de-emphasize high consequence/low probability risks.


Monte Carlo analysis prevents excessive weight from being given to unlikely, high-consequence outcomes by recognizing that all such outcomes are unlikely to occur simultaneously across a portfolio of risks. This can have the effect of removing extreme events from consideration, particularly where a large portfolio is being considered. This can give unwarranted confidence to the decision maker.
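The first limitation in the list, accuracy depending on the number of simulations, is easy to demonstrate. In this sketch (a deliberately simple uniform input, purely for illustration), the scatter of repeated Monte Carlo estimates shrinks roughly with the square root of the iteration count: 100 times the iterations buys about 10 times the accuracy.

```python
import random
import statistics

random.seed(11)

def mc_mean(n):
    """Monte Carlo estimate of the mean of a uniform(0, 1) input (true value 0.5)."""
    return sum(random.random() for _ in range(n)) / n

def spread(n, repeats=200):
    """Standard deviation across repeated estimates: a direct read on accuracy."""
    return statistics.stdev(mc_mean(n) for _ in range(repeats))

print(spread(100))     # scatter with 100 iterations per estimate
print(spread(10_000))  # roughly 10x smaller with 100x the iterations
```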

Paradox 1

The first paradox is that this quantitative approach depends heavily on qualitative assumptions. Statistical data is unavailable for the things that matter most because we do not run them to failure. And despite formal elicitation methods for developing distributions based on expert judgment dating back to 1989, judgment still involves subjectivity.


For that matter, qualitative judgment is also needed to evaluate data quality and the representativeness of the underlying models.
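Formal elicitation typically turns an expert's three-point estimate (low, most likely, high) into a usable distribution. One common choice is the PERT distribution. The sketch below uses hypothetical pump-failure numbers, and note that the conventional lambda = 4 weighting is itself a judgment call.

```python
import random

def pert_sample(low, mode, high, rng=random):
    """Sample a (modified) PERT distribution built from an expert's
    three-point estimate, via the standard beta reparameterization."""
    a = 1 + 4 * (mode - low) / (high - low)
    b = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.betavariate(a, b)

random.seed(7)
# Hypothetical elicitation: the expert judges the pump fails between
# 8 and 25 years, most likely around year 12 -- judgment, not failure data.
draws = [pert_sample(8, 12, 25) for _ in range(10_000)]
print(sum(draws) / len(draws))  # near the PERT mean, (8 + 4*12 + 25) / 6 = 13.5
```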


Paradox 2

Despite being used as a technique to better understand uncertainty, and despite a near-obsession by experts with widely skewed distributions, Monte Carlo simulations lead us back to the center. This reality is non-intuitive, but the methodology fundamentally recognizes that extreme outcomes are unlikely to occur simultaneously across a portfolio.
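A toy portfolio shows the pull toward the center. Each individual risk below is a heavily skewed lognormal (an illustrative choice, not calibrated to any real portfolio): its 99th percentile is many times its mean. Sum 50 independent risks, and the portfolio's 99th percentile sits much closer to the portfolio mean.

```python
import random
import statistics

random.seed(1)

def one_risk():
    """One skewed risk: usually small, occasionally extreme."""
    return random.lognormvariate(0, 1.5)

def portfolio(n_risks=50):
    """Sum of independent risks -- extremes rarely land together."""
    return sum(one_risk() for _ in range(n_risks))

def p99_over_mean(samples):
    ordered = sorted(samples)
    return ordered[int(0.99 * len(ordered))] / statistics.mean(ordered)

singles = [one_risk() for _ in range(20_000)]
portfolios = [portfolio() for _ in range(2_000)]
print(p99_over_mean(singles))     # individual risk: p99 is many times the mean
print(p99_over_mean(portfolios))  # portfolio of 50: p99 hugs the mean
```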


Death, divorce, bankruptcy, and natural disasters are uncertainties that impact our personal lives in ways we do not expect. The same happens in business, and Monte Carlo analysis tends to lead us to not focus on the extreme events that usually cause the greatest uncertainties.


Paradox 3

The idea of running tens of thousands of independent scenarios and examining the cumulative results is the underlying foundation of Monte Carlo analysis. However, the most fundamental assumption of it (and most statistical analysis) is also the most troublesome – independence.


Many years ago, a chief executive asked me how I knew there was a 90 percent chance of success. I responded, in simple terms, that I had performed 1000 scenarios, and 900 succeeded. He said the result was not good enough because the system failed 100 times. 100 times! We chuckled at what is called a numerator bias, and we conveniently termed him highly risk-averse.

But we were only partially right.


We missed his instincts that we would never let the system fail 100 times under any long-term operation. We would intercede long before that happened and change the “equation.” In the real world, every subsequent scenario depends on the preceding ones, especially when it comes to the things that matter most.
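That instinct can be sketched in a few lines. The numbers below are hypothetical; the point is the structure. Under textbook independence, repeat failures over a 20-year horizon are common, but let the first failure trigger an intervention that changes the "equation" and repeat failures become rare.

```python
import random

random.seed(3)
P_FAIL = 0.10   # hypothetical annual failure probability
YEARS = 20
TRIALS = 20_000

def repeat_failure_rate(intervene):
    """Fraction of scenarios with two or more failures over the horizon.
    With intervene=True, the first failure cuts the annual probability
    tenfold -- a crude stand-in for real-world management response."""
    hits = 0
    for _ in range(TRIALS):
        p, fails = P_FAIL, 0
        for _ in range(YEARS):
            if random.random() < p:
                fails += 1
                if intervene:
                    p = P_FAIL / 10
        hits += fails >= 2
    return hits / TRIALS

print(repeat_failure_rate(intervene=False))  # roughly 0.6 under independence
print(repeat_failure_rate(intervene=True))   # far lower once we intervene
```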


True that. The lack of independence was also why we had to use expert elicitation to develop the input failure distributions on which the output was based.


Paradox 4

I can’t add or detract much from what USEPA stated in 1997.


“One of the most important challenges facing the risk assessor is to communicate, effectively, the insights an analysis of variability and uncertainty provides. It is important for the risk assessor to remember that insights will generally be qualitative in nature even though the models they derive from are quantitative.”


Incremental Approaches

Am I a fan of Monte Carlo analysis for project development? You bet I am, despite its limitations. That is the subject of a different article.


For now, watch out for snake-oil salespeople who pitch it as a cure-all. Monte Carlo analysis is just one tool in the tool bag.


An incremental approach helps decide whether or not a Monte Carlo analysis can add value to an assessment and decision. A tiered approach begins with a simple screening-level model, usually qualitative. It progresses to more sophisticated, realistic, and quantitative models only as warranted by the findings and value added to the decision.


Ironically, the quantitative analysis usually ends full circle with a qualitative discussion that results in a decision.

 

JD Solomon Inc provides solutions at the nexus of the facilities, infrastructure, and the environment. Contact us for more information on our services related to quantitative risk analysis, forecasting using Monte Carlo simulations and the development of major capital programs. Sign-up for monthly updates on how we are applying reliability and risk concepts to natural systems.


Three tools to take on every condition assessment are a condition assessment taxonomy, a handheld vibration meter, and a thermal camera.
Wood screw pump in New Orleans after its catastrophic failure.

I take these three tools to every asset management condition assessment I am a part of. Of course, the tools, techniques, and methods of the condition assessment change with the reason you perform it. And yes, my role has changed from a person doing the field assessments to someone leading or doing the quality assurance.


These three tools remain the same.


Condition Assessment Taxonomy

I keep the condition assessment taxonomy on my mobile device and a notecard in my field book. It is a straightforward 1 to 5 scale, with 5 being the worst. Each rating has five basic categories associated with it, such as the percent corrective maintenance over the past five years. The taxonomy can be applied to all classes of assets.
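As a sketch of how such a taxonomy works in practice, here is one category expressed as a lookup. The thresholds are hypothetical illustrations only; the actual taxonomy pairs five basic categories with each rating.

```python
# Hypothetical thresholds for one category: % corrective maintenance
# over the past five years, mapped to the 1-to-5 scale (5 = worst).
THRESHOLDS = [(10, 1), (25, 2), (50, 3), (75, 4)]

def rate_corrective_maintenance(pct_corrective):
    """Map percent corrective maintenance to a 1-5 condition rating."""
    for upper_bound, rating in THRESHOLDS:
        if pct_corrective <= upper_bound:
            return rating
    return 5  # worst condition

print(rate_corrective_maintenance(8))   # rating 1
print(rate_corrective_maintenance(60))  # rating 4
```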


I use the term taxonomy to reflect its definition of the technique used to make a classification. I reserve the term framework, meaning a basic conceptual structure, for those large questionnaire-type formats with different questions for every asset type.


I try to keep the condition assessment as concise as possible. The pocket taxonomy is a straightforward reminder to do so.


Handheld Vibration Analyzer

I usually encounter rotating equipment, or its effects, on most condition assessments. Handheld vibration analyzers generate a usable number for comparative analysis, are easy to use, come calibrated, and have basic training readily accessible. The tool is great for preliminary scans, whether in a preliminary assessment or as part of a larger quality assurance process. It is also safety friendly.


I love my TPI 9070, which I have been using for a decade. It provides readings for vibration, bearing wear, and shaft alignment. It is durable and fits in the same box as my thermal camera.


Thermal Camera

Thermal cameras are great for establishing baseline equipment operating temperature, indications of insulation leaks, and fluid levels in opaque tanks or containers. Although I avoid touching process pipes and normally wear gloves, scanning process piping as you enter a production area is also a good health & safety measure.


I normally use my thermal camera for primary images in my reports and presentations. The temperature differentials tell the audience a story that a traditional image simply cannot tell.


I love my Flir E4, which I have also used for about a decade. The camera is still commercially available but has been out-positioned by newer Flir models, plus about 100 other thermal cameras that are newer and more affordable. There are many small details when choosing a device, and the camera type does not matter much for basic condition assessments. The main point is to have one. The E4 has never let me down and has proven durable in some rugged places.


The Field Condition Assessment Tools List

Reflecting on my Top 3 came as a follow-up to a client request for the tools I have used for field condition assessment over the past 30 years. My first reaction was, "Are you kidding me?" They were not, and the exercise had the added value of some reflection (and some old stories I had almost forgotten).

  1. Sensory Inspection

  2. Vibration Monitoring

  3. Thermal Imaging

  4. Schmidt Rebound Hammer (concrete evaluation)

  5. Skidmore-Wilhelm Device (bolt torque)

  6. Velocity Meter (voids, primarily concrete)

  7. Ultrasonics (mainly pipes)

  8. CCTV (pipes)

  9. Crack monitors

  10. Pile Driving Analyzer (steel and concrete pile capacity)

  11. Resistivity Tester (soils; transmission tower grounding)

  12. Probe Rod

  13. Munsell Color Scale

  14. Split Spoon Sampler (soils)

  15. Water Bailer

  16. Turbidity Meter

  17. pH Meter

  18. Water Depth Indicator

  19. Jar Test

  20. Immunoassays (environmental contamination)

  21. Megohmmeter (Electrical Insulation)

  22. Volt-Ohm Meter

  23. Acoustic Testing Meters (rotating equipment, ambient noise)

Of course, general-purpose measuring devices in my red toolbox go into the field, like a handheld level, a line level, measuring tapes, a portable scale, and a GPS (now replaced by the one on my phone). And remember these 10 things before going to the field on a condition assessment.


Applying It

A condition assessment taxonomy, handheld vibration analyzer, and thermal camera are the three tools I take on every asset management condition assessment. There are many field condition tools I have used in my career which are applicable depending on the context of the work. However, these three tools go into the field every time, regardless of context or my role.


 

JD Solomon Inc provides solutions at the nexus of the facilities, infrastructure, and the environment. Contact us for more information on our asset management ASAP approach, condition assessments, renewal & replacement forecasting, preventative maintenance programs, and reliability assessments. Sign-up for monthly updates on how we are applying reliability and risk concepts to natural systems.


ISO 31000 and USEPA risk assessments are similar but have some key differences.
PFAS raises many concerns from the public as rule-makers and risk managers await more information from USEPA. (Photo source: Spectrum news).

PFAS is an issue filled with complexity, difficulty, and uncertainty. The good news is that the USEPA does a good job with technical assessments, and the risk assessment framework is similar to that of the international risk standard, ISO 31000.


The concerning news is that risk terminology is a "cottage industry" rich with many differences, the standard methods for USEPA risk assessment are filled with the application of large uncertainty factors, and the statutes for determining risk characterization (i.e., regulatory doses) are different in soil, groundwater, surface water, and air.


USEPA is open that their approach is weight-of-evidence. This means that the science may be quantitative, but conservative judgment (and subjectivity) is applied in the risk assessment process. The public, state rulemaking bodies, and state agencies are left to sort out non-environmental factors based on various risk characterizations that are a hybrid of objective and subjective approaches.


PFAS

PFAS, or perfluoroalkyl or polyfluoroalkyl substances, are fluorinated carbon-chain compounds. PFAS has been used in fire response, industrial applications, and consumer products for decades. For these reasons, PFAS is also found in landfills and wastewater treatment facilities. PFAS chemicals are manufactured and produced worldwide.


There are over 7,000 PFAS compounds, which have properties that allow them to repel water and oil. PFAS chemicals also do not break down easily over time, which is why they have been dubbed "forever chemicals."


The health effects of PFAS compounds are largely unknown.




PFAS Groupings

Three types of PFAS are in the process of being more heavily regulated. USEPA is in the final steps of developing compliance standards for PFOS and PFOA. GenX is a third form of PFAS of particular concern in North Carolina, though not in every state.


PFAS Regulation

Regulating this expansive group of chemicals can be difficult and complex. We understand that PFAS does not break down easily, but we do not fully understand how it is transported from one media to another. Our traditional sampling methods and laboratory standards have had to be improved to the "parts per trillion" level. And we simply do not have enough data from the field or human health studies.

USEPA recognizes the importance of the threshold concept in a regulatory context. Desired exposures are based on the lowest thresholds within a population. Getting from the science to the lowest thresholds within a given population is where technical judgment comes into play.


It is noteworthy that most of the human health analysis is not derived from studies on human beings. The studies that do involve humans further complicate this, since few focus on the targeted classes for protection, such as infants and the elderly. Therefore, several levels of uncertainty factors are applied in the risk analysis by USEPA.


USEPA Human Health Risk Assessments

USEPA Human Health Risk Assessments have a risk assessment component and a risk management component. The risk assessment component is performed by USEPA and results in a risk criterion. Risk mitigation, including regulatory doses or concentrations, is normally performed at the state level to incorporate local situations, affordability, state laws, and available treatment technologies.


Ideally, the federal risk assessment aspect and the state risk management evaluations would inform one another simultaneously. In practice, some state-specific issues are being addressed concurrently, but much work is done after the federal criterion is approved.


USEPA and ISO 31000 Are Similar

The risk assessment performed by USEPA consists of determining a reference dose, including uncertainty factors applied to it based on data confidence and probable exposure. The terminology in human health risk assessments and the international risk standard differs, potentially leading to confusion or misunderstanding when introduced to hybrid regulatory decision-making bodies and the public. However, the basic processes of the risk assessment followed by risk treatment decisions are the same.


USEPA Risk Assessments and ISO 31000 are similar but use different terminology.

USEPA Risk Assessments

EPA created the Integrated Risk Information System (IRIS) Program in 1985 to provide an internal database of human health assessments for chemicals found in the environment. The goal of the IRIS Program was to foster consistency in evaluating chemical toxicity across the Agency. Since then, the IRIS Program has also become an important public resource. The IRIS Program has evolved with the state of the science to produce high-quality, evidence-based assessments and provide increasing opportunities for public input into the IRIS process.


EPA created two Agency-wide workgroups at that time, the Carcinogen Risk Assessment Verification Endeavor Workgroup (CRAVE) and RfC/RfD Workgroup. These workgroups were formed to reach Agency consensus scientific positions on human health effects that may result from chronic oral or inhalation exposure to chemicals found in the environment.


USEPA Risk Analysis and Uncertainty Factors

Uncertainty factors are part of the risk analysis and a meaningful component of the RfD (Reference Dose). Uncertainty factors consist of multiples of 10, each representing a specific uncertainty inherent in the available data.


According to USEPA, while the original selection of safety factors appears to have been rather arbitrary (Lehman and Fitzhugh, 1954), subsequent analysis of data (Dourson and Stara, 1983) lends theoretical (and in some instances experimental) support for their selection. Further, some scientists, but not all, within the EPA interpret the absence of widespread effects in the exposed human populations as evidence of the adequacy of traditionally employed factors.


Between one and five uncertainty factors are applied to research data to develop the reference dose. Four are binary choices of 1 or 10, and the fifth is a judgment between 1 and 10.

  • Use a 10-fold factor when extrapolating from valid experimental results in studies using prolonged exposure to average healthy humans.

  • Use an additional 10-fold factor when extrapolating from valid results of long-term studies on experimental animals when studies of human exposure are unavailable or inadequate.

  • Use an additional 10-fold factor when extrapolating from less than chronic results on experimental animals when there are no useful long-term human data.

  • Use an additional 10-fold factor when deriving an RfD from a LOAEL instead of a NOAEL.

  • Use professional judgment to determine the MF, which is an additional uncertainty factor greater than zero and less than or equal to 10.

The math of the five factors yields the net effect that the reference dose can be 10 to 100,000 times lower than the level determined by its underlying experiments. USEPA documents that reference doses are frequently at least one order of magnitude lower than the underlying measurements due to uncertainty factors.
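The arithmetic is simple division. Here is a sketch with hypothetical numbers; the chemical, NOAEL, and factor choices below are illustrations, not any actual USEPA assessment.

```python
def reference_dose(noael_mg_kg_day, ufs, mf=1.0):
    """RfD = NOAEL (or LOAEL) divided by the product of the 10-fold
    uncertainty factors and the modifying factor (MF)."""
    product = mf
    for uf in ufs:
        product *= uf
    return noael_mg_kg_day / product

# Hypothetical chemical: animal NOAEL of 5 mg/kg-day, 10-fold factors for
# human variability and animal-to-human extrapolation, and an MF of 3.
rfd = reference_dose(5.0, ufs=[10, 10], mf=3.0)
print(rfd)  # 5 / (10 * 10 * 3) ~ 0.0167 mg/kg-day
```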


Risk Exposure

Depending on the dose employed, exposure to a given chemical may result in various toxic effects. The exposure assessment includes consideration of the size and nature of the populations exposed and the magnitude, frequency, duration, and routes of exposure, as well as evaluation of the nature of the exposed populations.


The USEPA's Public Health and Integrated Toxicology Division (PHITD) performs integrated epidemiological, clinical, animal, and cellular biological research and statistical modeling to provide the scientific foundation in support of hazard identification, risk assessment, and standard setting to protect public health and the environment.


PHITD scientists identify at-risk populations and evaluate the environmental risk to multiple aspects of human health, including reproduction, pregnancy, pre- and postnatal development, and the cardiac, immune, nervous, and endocrine systems. It uses an "Assay to Outreach" approach where fundamental research is performed to understand toxicological responses and mechanisms; these assays are confirmed in clinical and population-based studies that link environmental conditions to health.


Risk Characterization (Criterion)

Risk characterization is the final step in the risk assessment and the first input to the risk management (regulatory action) process. The purpose of risk characterization is to present the risk manager with a synopsis and synthesis of all the data that should contribute to a conclusion with regard to the nature and extent of the risk, including:


  • The qualitative ("weight-of-evidence") conclusions as to the likelihood that the chemical may pose a hazard to human health.

  • A discussion of the dose-response information considered in deriving the RfD (Reference Dose), including the uncertainty factors.

  • Data on the shapes and slopes of the dose-response curves for the various toxic endpoints, toxicodynamics (absorption and metabolism), structure-activity correlations, and the nature and severity of the observed effects.

  • Estimates of the nature and extent of the exposure and the number and types of people exposed.

  • Discussion of the overall uncertainty in the analysis, including the major assumptions made, scientific judgments employed, and an estimate of the degree of conservatism involved.

The kind of toxicity data used by EPA's Risk-Screening Environmental Indicators (RSEI) model, and how the toxicity weights are calculated and selected for use in RSEI results (including RSEI Hazard and RSEI Scores), is provided on the USEPA website.


RSEI uses toxicity data from EPA's Integrated Risk Information System (IRIS) where possible. For chemicals with incomplete information in IRIS, RSEI uses the following sources (in order of preference):

  • EPA's National Air Toxics Assessment (NATA).

  • EPA's Office of Pesticide Programs (OPP) Acute Chronic and Reference Doses Table lists.

  • The Agency for Toxic Substances and Disease Registry (ATSDR) Minimum Risk Levels (MRLs).

  • California Environmental Protection Agency (CalEPA) Approved Risk Assessment Health Values.

  • EPA's Provisional Peer Reviewed Toxicity Values (PPRTVs).

  • EPA's Health Effects Assessment Tables (HEAST).

  • Derived Values. For a prioritized group of chemicals for which sufficient data was not found in the above sources, a group of EPA expert health scientists reviewed other available data to derive appropriate toxicity weights.

For each chemical, RSEI determines the following values, where possible:

  • Oral slope factor (OSF) in risk per mg/kg-day.

  • Inhalation unit risk (IUR) in risk per mg/m3.

  • Reference dose (RfD) in mg/kg-day.

  • Reference concentration (RfC) in mg/m3.


Risk Mitigation

Once the risk characterization is completed, the focus turns to risk management. Per USEPA, the risk manager utilizes the results of risk assessment, other technological factors, and legal, economic, and social considerations in reaching a regulatory decision. These additional factors include efficiency, timeliness, equity, administrative simplicity, consistency, public acceptability, technological feasibility, and the nature of the legislative (federal or state) mandate.


Risk management decisions must be made on a case-by-case basis because these risk management factors affect different cases differently. Those decisions are desirably consistent, yet not necessarily identical.


One primary example is the federal statutes.

  • The Clean Water Act calls for decisions with "an ample margin of safety."

  • The Safe Drinking Water Act (SDWA) calls for standards that protect the public "to the extent feasible."

  • The Federal Insecticide, Fungicide and Rodenticide Act (FIFRA) calls for "an ample margin of safety," taking benefits into account.

A chemical with a specific RfD may be regulated under different statutes and situations through the use of different regulatory doses (RgDs). Per USEPA, the risk manager selects the appropriate statutory alternative after carefully considering the various environmental risk and non-risk (non-environmental) factors, regulatory options, and statutory mandates in a given case.


Public review and public comment are a required part of the process at both federal and state levels.


Summary

PFAS is an issue filled with complexity, difficulty, and uncertainty. Regulating this expansive group of chemicals can be difficult and complex. Many PFAS do not have enough data to determine a Reference Dose and Drinking Water Equivalency Level. Little information about health effects is known for many PFAS compounds. Field sampling and laboratory processes are evolving to "parts per trillion." And transport mechanisms between water, air, and solids are not understood well.


The good news is that the USEPA does a good job with technical assessments, and the risk assessment framework is similar to that of ISO 31000.


The concerning news is that risk terminology is a "cottage industry" rich with many differences, the standard methods for USEPA risk assessment are filled with the application of large uncertainty factors, and the statutes for determining risk characterization (i.e., regulatory doses) are different in soil, groundwater, surface water, and air.


USEPA is open that their approach is weight-of-evidence. This means that the science may be quantitative, but conservative judgment (and subjectivity) is applied in the risk assessment process. The public, state rulemaking bodies, and state agencies are left to sort out non-environmental factors based on various risk characterizations that are a hybrid of objective and subjective approaches.


Note: This article uses several USEPA reference documents.


 

JD Solomon Inc provides solutions for facilitation, asset management, and program development at the nexus of facilities, infrastructure, and the environment. JD is a current member and former chairman of the North Carolina Environmental Commission, the state's environmental rulemaking body. Sign-up for monthly updates.


Founded by JD Solomon, Communicating with FINESSE is a not-for-profit community of technical professionals dedicated to being highly effective communicators and trusted advisors to senior management and the public. Learn more about our publications, webinars, and workshops. Join the community for free.
