Wednesday, April 5, 2017

ISO 31010:2011 Risk Assessment Techniques – V

Statistical Methods
In recent years there has been a diverse range of crises and controversies concerning food safety, animal health and environmental risks, including foot and mouth disease, dioxins in seafood, GM crops and, more recently, the safety of Irish pork. This has led to the recognition that the handling of uncertainty in risk assessments needs to be more rigorous and transparent, so that decision makers and the public can be better informed about the limitations of scientific advice. The expression of uncertainty may be qualitative or quantitative, but it must be well documented. Various approaches to quantifying uncertainty exist, but none is yet generally accepted amongst mathematicians, statisticians, natural scientists and regulatory authorities. Since QA managers tend to agree that most processing conditions have a multifactorial etiology, models must consider simultaneously the effect of several potential risk factors on the disease or condition of interest if the relative impact of those factors is to be understood. However, the selection of an appropriate analytic technique depends on several conditions. There is a wide range of possible statistical techniques that may be applied to the problem of deriving a model for the identification of multiple risk factors for quality issues, potential risks, diseases and conditions. Thus, statistical analysis is mostly applied to complex risk analysis scenarios.

24. Markov Analysis
Markov analysis is a probabilistic technique which does not provide a recommended decision. Instead, it provides probabilistic information about a decision situation that can aid the decision maker in reaching a decision; it is not an optimization technique but a descriptive one. Markov analysis provides a means of analyzing the reliability and availability of systems whose components exhibit strong dependencies. The method is named after the Russian mathematician Andrey Markov, best known for his work on stochastic processes, in which a collection of random variables represents the evolution of a system of random values over time. Markov analysis, or state-space analysis, is commonly used in the analysis of repairable complex systems that can exist in multiple states, including degraded states, and where a reliability block analysis would be inadequate to analyze the system properly. The Markov analysis process is a quantitative technique and can be discrete (using probabilities of change between the states) or continuous (using rates of change across the states). Other systems analysis methods (such as the Kinetic Tree Theory method employed in fault tree analyses) generally assume component independence, which may lead to optimistic predictions of system availability and reliability parameters. The nature of the Markov analysis technique lends itself to the use of software.

According to the ISO 31010 standard, “The Markov analysis technique is centered around the concept of “states”, e.g. “available” and “failed”, and the transition between these two states over time based on a constant probability of change. A stochastic transitional probability matrix is used to describe the transition between each of the states to allow the calculation of the various outputs.”
The inputs essential to a Markov analysis are as follows:
List of the various states that the system, sub-system or component can be in (e.g. fully operational, partially operational (i.e. a degraded state), failed state, etc.);
A clear understanding of the possible transitions that need to be modelled. For example, failure of a car tyre needs to consider the state of the spare wheel and hence the frequency of inspection;
Rate of change from one state to another, typically represented by either a probability of change between states for discrete events, or failure rate (λ) and/or repair rate (μ) for continuous events.

The output from a Markov analysis is the various probabilities of being in the various states, and therefore an estimate of the probabilities of failure and/or the availability of one of the essential components of a system.
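
As a concrete illustration of these inputs and outputs, the sketch below sets up a small discrete-time, three-state model in Python; the states, transition probabilities and number of iterations are assumed values chosen only for demonstration.

```python
# Minimal sketch of a discrete-time Markov analysis (all values are assumed,
# purely for illustration). States: 0 = fully operational, 1 = degraded,
# 2 = failed.
import numpy as np

# Stochastic transition probability matrix: row i holds the probabilities of
# moving from state i to each state in one time step (each row sums to 1).
P = np.array([
    [0.95, 0.04, 0.01],   # operational -> operational / degraded / failed
    [0.10, 0.85, 0.05],   # degraded    -> repaired / degraded / failed
    [0.50, 0.00, 0.50],   # failed      -> repaired / degraded / failed
])

p = np.array([1.0, 0.0, 0.0])   # start fully operational
for _ in range(200):            # iterate until the distribution settles
    p = p @ P                   # p(t+1) = p(t) * P

availability = p[0] + p[1]      # probability of being in a working state
print("Long-run state probabilities:", p.round(4))
print("Availability estimate       :", round(availability, 4))
```

A continuous-time version of the same model would use failure and repair rates (λ, μ) in a transition rate matrix instead of the probability matrix shown here.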

Strengths and Limitations of a Markov analysis
Markov diagrams for large systems are often too large and complicated to be of value in most business contexts, and they are inherently difficult to construct. Markov models are better suited to analyzing smaller systems with strong dependencies that require accurate evaluation. Other techniques, such as fault tree analysis, may be used to evaluate large systems using simpler probabilistic calculation techniques. Future states depend only on the current state probabilities and the constant transition rates between states.

Apart from this obvious drawback (complexity), a true Markovian process considers only constant transition rates, which may not be the case in a real-world system. Events are treated as statistically independent, since future states are independent of all past states except the state immediately prior. In this way, the Markov model does not need to know the history of how the state probabilities have evolved over time in order to calculate future state probabilities. However, computer programs are now marketed that allow time-varying transition rates to be defined. Markov analysis requires knowledge of matrix operations, and the results are, unsurprisingly, hard to communicate to non-technical personnel.

25. Monte Carlo Simulation
Monte Carlo analysis consists of a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The method can address complex situations that would be very difficult to understand and solve analytically. Whenever there is significant uncertainty in a system and you need to make an estimate, forecast or decision, a Monte Carlo simulation could be the answer. The technique allows people to account for risk in quantitative analysis and decision making, and it is used in widely disparate fields such as finance, project management, energy, manufacturing, engineering, research and development, insurance, oil and gas, transportation, and the environment.

Monte Carlo simulation performs risk analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values drawn from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. The simulation produces distributions of possible outcome values, furnishing the decision-maker with a range of possible outcomes and the probabilities with which they will occur for any choice of action. It shows the extreme possibilities (the outcomes of going for broke and of taking the most conservative decision) along with all possible consequences of middle-of-the-road decisions. By using probability distributions, variables can have different probabilities of different outcomes occurring, which is a much more realistic way of describing uncertainty in the variables of a risk analysis. During a simulation, values are sampled at random from the input probability distributions; each set of samples is called an iteration, and the resulting outcome from that sample is recorded. Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes. In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen: it tells you not only what could happen, but how likely it is to happen.


How does Monte Carlo analysis model the effects of uncertainty?
Systems are sometimes too complex for the effects of uncertainty on them to be modelled using analytical techniques. However, they can be evaluated by considering the inputs as random variables and running a number N of calculations (so-called simulations) by sampling the input in order to obtain N possible outcomes of the result of interest.
Monte Carlo analysis can be developed using spreadsheets, but software tools are readily available to assist with more complex requirements, and many of them are now relatively inexpensive.
Monte Carlo simulations require you to build a quantitative model of your business activity, plan or process, which is often done using Microsoft Excel with a simulation plug-in.
To deal with uncertainties using Monte Carlo analysis in your model, you’ll replace certain fixed numbers (for example in spreadsheet cells) with functions that draw random samples from probability distributions. 
To analyze the results of a simulation run, you’ll use statistics such as the mean, standard deviation, and percentiles, as well as charts and graphs.
For risk assessment using Monte Carlo simulation, triangular or beta distributions are commonly used.
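
As a concrete sketch of this workflow outside a spreadsheet, the example below estimates a total project cost from three activities modelled with triangular distributions; the activity names, parameter values and the cost threshold are assumptions used purely for illustration.

```python
# Minimal Monte Carlo sketch: total project cost as the sum of three
# activities, each described by a triangular (min, most likely, max)
# distribution. All figures are assumed values for demonstration only.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000                                   # number of iterations

design   = rng.triangular(8, 10, 15, size=N)  # cost ranges in arbitrary units
build    = rng.triangular(20, 25, 40, size=N)
validate = rng.triangular(5, 7, 12, size=N)

total = design + build + validate             # one outcome per iteration

print("mean cost      :", round(total.mean(), 2))
print("std deviation  :", round(total.std(), 2))
print("5th percentile :", round(np.percentile(total, 5), 2))
print("95th percentile:", round(np.percentile(total, 95), 2))
print("P(cost > 50)   :", round((total > 50).mean(), 3))
```

The percentiles and the exceedance probability are exactly the kind of outputs used to compare a "going for broke" option against a conservative one.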

26. Bayesian Statistics and Bayes Nets
A central element of any statistical analysis is the specification of a probability model which is assumed to describe the mechanism that generated the observed data D as a function of a (possibly multidimensional) parameter (vector) ω, sometimes referred to as the state of nature, about whose value only limited information (if any) is available. All derived statistical conclusions are obviously conditional on the assumed probability model. Mathematical statistics uses two major paradigms: conventional (or frequentist) and Bayesian. Bayesian methods provide a complete paradigm for both statistical inference and decision making under uncertainty. They may be derived from an axiomatic system, and hence provide a general, coherent methodology. Bayesian methods make it possible to incorporate scientific hypotheses in the analysis (by means of the prior distribution) and may be applied to problems whose structure is too complex for conventional methods to handle.

In a nutshell, Bayesian analysis is a statistical procedure which utilizes a prior distribution to assess the probability of a result; the resulting probabilities are conditional probabilities. “The probability of a hypothesis C given some evidence E equals our initial estimate of the probability times the probability of the evidence given the hypothesis C divided by the sum of the probabilities of the data in all possible hypotheses.” A Bayesian probability is really a person’s degree of belief in a certain event rather than one based purely upon physical evidence. Because the Bayesian approach is based upon the subjective interpretation of probability, it provides a ready basis for decision thinking and for the development of Bayesian nets (also called belief nets, belief networks or Bayesian networks).
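
As a worked illustration of the quoted rule, the short sketch below updates the probability of a hypothetical “contaminated batch” hypothesis after a positive screening test; all of the probabilities are assumed values chosen only for demonstration.

```python
# Minimal sketch of Bayes' theorem applied to a hypothetical screening test
# for a contaminated batch. All probabilities are assumed values.
prior = {"contaminated": 0.02, "clean": 0.98}      # initial estimates P(C)
likelihood_positive = {"contaminated": 0.90,       # P(E | C): test sensitivity
                       "clean": 0.05}              # false positive rate

# Denominator: sum of P(E | C) * P(C) over all possible hypotheses.
evidence = sum(likelihood_positive[h] * prior[h] for h in prior)

# Posterior: P(C | E) = P(E | C) * P(C) / P(E)
posterior = {h: likelihood_positive[h] * prior[h] / evidence for h in prior}
print(posterior)   # e.g. {'contaminated': ~0.269, 'clean': ~0.731}
```

With these assumed figures, a positive test raises the probability of contamination from 2 % to roughly 27 %.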

The Bayesian paradigm is based on an interpretation of probability as a rational, conditional measure of uncertainty, which closely matches the sense of the word ‘probability’ in ordinary language. Statistical inference about a quantity of interest is described as the modification of the uncertainty about its value in the light of evidence, and Bayes’ theorem precisely specifies how this modification should be made. A central element of the Bayesian paradigm is the use of probability distributions to describe all relevant unknown quantities, interpreting the probability of an event as a conditional measure of uncertainty, on a [0, 1] scale, about the occurrence of the event in some specific conditions. The limiting extreme values 0 and 1, which are typically inaccessible in applications, respectively describe impossibility and certainty of the occurrence of the event. This interpretation of probability includes and extends all other probability interpretations. There are independent arguments which establish the mathematical inevitability of using probability distributions to describe uncertainties.

The inputs are similar to those for a Monte Carlo analysis; for a Bayes net, the steps to follow are listed below (a short illustrative sketch follows the list):
define system variables;
define causal links between variables;
specify conditional and prior probabilities;
add evidence to net;
perform belief updating;
extract posterior beliefs.
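
The sketch below walks through these steps for a tiny hypothetical net with two causes (poor calibration and operator error) and one effect (a non-conforming product); all probabilities are assumed values used only for illustration.

```python
# Minimal Bayes-net sketch following the steps above: two causes (poor
# calibration C, operator error O) with a common effect (non-conforming
# product N). All probabilities are assumed values.
from itertools import product

# Steps 1-3: variables, causal links C -> N <- O, priors and conditionals.
P_C = {True: 0.10, False: 0.90}           # P(poor calibration)
P_O = {True: 0.05, False: 0.95}           # P(operator error)
P_N = {                                   # P(non-conforming | C, O)
    (True,  True):  0.60,
    (True,  False): 0.30,
    (False, True):  0.20,
    (False, False): 0.02,
}

# Steps 4-6: add evidence (a non-conforming item was observed), update
# beliefs by enumerating the joint distribution, extract the posterior on C.
joint_with_evidence = {
    (c, o): P_C[c] * P_O[o] * P_N[(c, o)]
    for c, o in product([True, False], repeat=2)
}
evidence = sum(joint_with_evidence.values())     # P(N = non-conforming)
posterior_C = sum(v for (c, _), v in joint_with_evidence.items() if c) / evidence

print("P(poor calibration | non-conforming item) =", round(posterior_C, 3))
```

With these assumed numbers, observing a non-conforming item raises the belief in poor calibration from the prior of 0.10 to roughly 0.55.
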
Bayesian analysis can provide an easily understood model, and the data can be readily modified to consider correlations and the sensitivity of parameters. This technique can be applied successfully to Quality Management Systems; however, there will be minimum sample size requirements for control charts that measure “non-conformities” (errors), based on the average non-conformity rate in the quality processes being measured. Because of the properties of the binomial distribution, lower error rates require larger sample sizes to make valid inferences.
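
As a hypothetical illustration of that point, the snippet below shows how the minimum subgroup size grows as the average non-conformity rate falls; the threshold of five expected non-conformities per subgroup is a common rule of thumb assumed here, not a requirement of ISO 31010.

```python
# Hypothetical illustration: require roughly n * p_bar >= 5 expected
# non-conformities per subgroup (a common rule of thumb, assumed here).
import math

rule_of_thumb = 5
for p_bar in (0.10, 0.02, 0.005):          # average non-conformity rates
    n_min = math.ceil(rule_of_thumb / p_bar)
    print(f"p_bar = {p_bar}: minimum subgroup size ≈ {n_min}")
```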

The rest of the methods are discussed in the last set of tools given in ISO 31010:2011, which will help you conduct risk assessments appropriate to your industry. These tools are broadly valid across sectors, but what matters most is where in the supply chain your organization operates and the requirements of the interested parties.

