Statistical Methods
In recent years, there has been a diverse range of crises and controversies concerning food safety, animal health and environmental risks, including foot and mouth disease, dioxins in seafood, GM crops and, more recently, the safety of Irish pork. This has led to the recognition that the handling of uncertainty in risk assessments needs to be more rigorous and transparent, so that decision makers and the public can be better informed about the limitations of scientific advice. The expression of the uncertainty may be qualitative or quantitative, but it must be well documented. Various approaches to quantifying uncertainty exist, but none is yet generally accepted amongst mathematicians, statisticians, natural scientists and regulatory authorities. Since QA managers tend to agree that most processing conditions have a multifactorial etiology, it is necessary to develop models that consider the effects of several potential risk factors simultaneously on the disease or condition of interest in order to gain any understanding of their relative impact. However, the selection of an appropriate analytic technique depends on several conditions. There is a wide range of statistical techniques that may be applied to the problem of deriving a model for the identification of multiple risk factors for quality issues, potential risks, diseases and conditions. Statistical analysis is therefore mostly applied to complex risk analysis scenarios.
24. Markov Analysis
Markov analysis is a probabilistic technique that does not provide a recommended decision. It is not an optimization technique; rather, it is a descriptive technique that yields probabilistic information about a decision situation which can help the decision maker reach a decision. Markov analysis provides a means of analyzing the reliability and availability of systems whose components exhibit strong dependencies. The method is named after the Russian mathematician Andrey Markov, best known for his work on stochastic processes, in which a collection of random variables represents the evolution of some system of random values over time. Markov analysis, or state-space analysis, is commonly used in the analysis of repairable complex systems that can exist in multiple states, including degraded states, where a reliability block analysis would be inadequate to properly analyze the system. Markov analysis is a quantitative technique and can be discrete (using probabilities of change between states) or continuous (using rates of change across states). Other system analysis methods (such as the Kinetic Tree Theory method employed in fault tree analyses) generally assume component independence, which may lead to optimistic predictions of system availability and reliability parameters. The nature of the Markov analysis technique lends itself to the use of software.

The inputs essential to a Markov analysis are as follows:
- a list of the various states that the system, sub-system or component can be in (e.g. fully operational, partially operational (i.e. a degraded state), failed, etc.);
- a clear understanding of the possible transitions that need to be modelled; for example, the failure of a car tyre needs to consider the state of the spare wheel and hence the frequency of inspection;
- the rate of change from one state to another, typically represented either by a probability of change between states for discrete events, or by a failure rate (λ) and/or repair rate (μ) for continuous events.
The output from a Markov analysis is the various probabilities of being in the various states, and therefore an estimate of the probability of failure and/or the availability of one of the essential components of the system.
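As a minimal illustration of these inputs and outputs, the sketch below propagates state probabilities for a small, hypothetical three-state item using the discrete form of the analysis; the states, transition probabilities and time horizon are assumptions chosen purely for illustration.

```python
# Minimal sketch of a discrete-time Markov analysis for a repairable item.
# The states, transition probabilities and horizon are illustrative assumptions.
import numpy as np

# States: 0 = fully operational, 1 = degraded, 2 = failed
# P[i, j] = probability of moving from state i to state j in one time step.
P = np.array([
    [0.95, 0.04, 0.01],   # operational -> operational / degraded / failed
    [0.10, 0.85, 0.05],   # degraded item may be repaired or fail completely
    [0.50, 0.00, 0.50],   # failed item is repaired back to operational
])

p = np.array([1.0, 0.0, 0.0])   # start fully operational
for step in range(50):
    p = p @ P                   # propagate the state probabilities one step

print("State probabilities after 50 steps:", np.round(p, 4))
print("Availability (not in the failed state):", round(1 - p[2], 4))

# Steady-state probabilities: solve pi = pi P with the probabilities summing to 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("Steady-state probabilities:", np.round(pi, 4))
```

The continuous form replaces the transition probabilities with failure and repair rates and solves the corresponding state equations, but the structure of the calculation is the same.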
Strengths and Limitations of a Markov analysis
Markov diagrams for large systems are often too large and complicated to be of value in most business contexts and are inherently difficult to construct. Markov models are better suited to analyzing smaller systems with strong dependencies that require accurate evaluation. Other techniques, such as fault tree analysis, may be used to evaluate large systems using simpler probabilistic calculation techniques. Future states depend only on the current state probabilities and the constant transition rates between states.
Apart from this obvious drawback (complexity), a true Markovian process considers only constant transition rates, which may not be the case in a real-world system. Events are treated as statistically independent, since future states are independent of all past states except the state immediately prior. In this way, the Markov model does not need to know the history of how the state probabilities have evolved over time in order to calculate future state probabilities. However, computer programs are now marketed that allow time-varying transition rates to be defined. Markov analysis requires knowledge of matrix operations, and the results are, unsurprisingly, hard to communicate to non-technical personnel.
25. Monte Carlo Simulation

How does Monte Carlo analysis model the
effects of uncertainty?

Monte Carlo analysis can be developed using spreadsheets, but software tools are readily available to assist with more complex requirements, many of which are now relatively inexpensive.
Monte Carlo
simulations require you to build a quantitative model of your business
activity, plan or process, which is often done by using Microsoft Excel with a
simulation tool plug-in (a relatively inexpensive set of tools).
To deal with
uncertainties using Monte Carlo analysis in your model, you’ll replace certain
fixed numbers (for example in spreadsheet cells) with functions that draw
random samples from probability distributions.
To analyze the
results of a simulation run, you’ll use statistics such as the mean, standard
deviation, and percentiles, as well as charts and graphs.
For risk assessment using Monte Carlo simulation, triangular distributions or beta distributions are commonly used.
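A minimal sketch of this workflow is shown below; the hypothetical cost model, its three uncertain inputs and their triangular parameters (minimum, most likely, maximum) are assumptions made purely for illustration.

```python
# Minimal Monte Carlo sketch: fixed point estimates are replaced by random
# samples from triangular distributions, then the results are summarized.
# The cost model and all figures are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000   # number of simulation trials

# Uncertain inputs described as triangular(min, most likely, max)
raw_material = rng.triangular(80, 100, 150, n)
processing   = rng.triangular(40, 55, 90, n)
rework       = rng.triangular(0, 5, 40, n)

total_cost = raw_material + processing + rework

# Analyse the simulation run with summary statistics and percentiles
print("Mean:              ", round(total_cost.mean(), 1))
print("Standard deviation:", round(total_cost.std(), 1))
print("5th / 50th / 95th percentiles:", np.percentile(total_cost, [5, 50, 95]).round(1))
print("P(total cost > 220):", round((total_cost > 220).mean(), 3))
```

The same pattern applies whether the model sits in a spreadsheet with a simulation plug-in or in code: fixed estimates are replaced by sampled distributions, and the spread of the results is then summarized.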
26. Bayesian Statistics and Bayes Nets

The Bayesian
paradigm is based on an interpretation of probability as a rational,
conditional measure of uncertainty, which closely matches the sense of the word
‘probability’ in ordinary language. Statistical inference about a quantity of
interest is described as the modification of the uncertainty about its value in
the light of evidence, and Bayes’ theorem precisely specifies how this
modification should be made. A central element of the Bayesian paradigm is the use of probability distributions to describe all relevant unknown quantities, interpreting the probability of an event as a conditional measure of uncertainty, on a [0, 1] scale, about the occurrence of the event in some specific conditions. The limiting extreme values 0 and 1, which are typically
inaccessible in applications, respectively describe impossibility and certainty
of the occurrence of the event. This interpretation of probability includes and
extends all other probability interpretations. There are two independent
arguments which prove the mathematical inevitability of the use of probability
distributions to describe uncertainties; these are summarized later in this
section.
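As a brief numerical illustration of how Bayes' theorem modifies uncertainty in the light of evidence, the sketch below updates an assumed prior probability of contamination after a hypothetical screening test; all figures are illustrative assumptions.

```python
# Minimal sketch of Bayesian updating for a hypothetical contamination
# screening test; the prior and test characteristics are assumed figures.
prior = 0.02          # prior probability that a batch is contaminated
sensitivity = 0.95    # P(test positive | contaminated)
false_positive = 0.10 # P(test positive | not contaminated)

# Bayes' theorem:
# P(contaminated | positive) = P(positive | contaminated) * P(contaminated) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(positive) = {p_positive:.3f}")                 # ~0.117
print(f"P(contaminated | positive) = {posterior:.3f}")   # ~0.162
```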
The inputs are usually similar to those for a Monte Carlo analysis:
- define system variables;
- define causal links between variables;
- specify conditional and prior probabilities;
- add evidence to the net;
- perform belief updating;
- extract posterior beliefs.
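The sketch below walks through these steps on a tiny, hypothetical net (supplier quality, contamination and a test result); the structure and all probabilities are assumptions for illustration, and the belief update is done by simple enumeration rather than a dedicated Bayes net library.

```python
# Minimal sketch of the listed steps on a tiny, hypothetical Bayes net:
# Supplier quality -> Contamination -> Test result. All figures are assumptions.
from itertools import product

# Steps 1-3: variables, causal links, and prior/conditional probabilities
p_supplier_good = 0.8                               # prior on supplier quality
p_contam_given_supplier = {True: 0.01, False: 0.10} # P(contaminated | supplier good?)
p_pos_given_contam = {True: 0.95, False: 0.10}      # P(test positive | contaminated?)

def joint(supplier_good, contaminated, test_positive):
    """Joint probability of one configuration of the three variables."""
    p = p_supplier_good if supplier_good else 1 - p_supplier_good
    pc = p_contam_given_supplier[supplier_good]
    p *= pc if contaminated else 1 - pc
    pt = p_pos_given_contam[contaminated]
    p *= pt if test_positive else 1 - pt
    return p

# Steps 4-6: add evidence (test positive), update beliefs, extract posteriors
evidence = sum(joint(s, c, True) for s, c in product([True, False], repeat=2))
posterior_contam = sum(joint(s, True, True) for s in [True, False]) / evidence
posterior_supplier_good = sum(joint(True, c, True) for c in [True, False]) / evidence

print(f"P(contaminated | test positive)  = {posterior_contam:.3f}")
print(f"P(supplier good | test positive) = {posterior_supplier_good:.3f}")
```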
Bayesian analysis can provide an easily understood model, and the data can readily be modified to consider correlations and the sensitivity of parameters. This technique can be successfully applied to Quality Management Systems; however, there will be minimum sample size requirements for control charts that measure "non-conformities" (errors), based on the average non-conformity rate in the quality processes being measured. Lower error rates therefore require larger sample sizes to make valid inferences, because of the properties of the binomial distribution.
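One way to see this effect is a common rule of thumb (an illustrative assumption here, not a requirement stated above): on a p-chart with 3-sigma limits, the lower control limit stays above zero only when n > 9(1 − p̄)/p̄, so lower non-conformity rates demand larger samples.

```python
# Minimal sketch of the rule of thumb described above (an illustrative
# assumption): the lower 3-sigma control limit p - 3*sqrt(p*(1-p)/n) is
# positive only when n > 9 * (1 - p) / p.
for p_bar in (0.10, 0.05, 0.01, 0.001):
    n_min = 9 * (1 - p_bar) / p_bar
    print(f"average non-conformity rate {p_bar:.3f} -> sample size n > {n_min:.0f}")
```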
The rest of the methods are discussed in the last set of tools given in ISO 31010:2011, which will help you conduct risk assessments appropriate to your industry. These tools are broadly valid across various sectors, but what matters most is where in the supply chain your industry operates and the requirements of the interested parties.