Risk assessment can be qualitative, semi-quantitative, or quantitative. The degree of detail required will depend upon the particular application, the availability of reliable data, and the decision-making needs of the entity. Some techniques and the degree of detail of the assessment may be prescribed by legislation.
Qualitative Techniques
Qualitative assessment consists of analysing probability and level of risk by significance levels, such as “high,” “medium” and “low,” against qualitative criteria. The following are the qualitative techniques used to assess risk:
a) Brainstorming- Brainstorming involves stimulating and encouraging free-flowing conversation amongst a group of knowledgeable people to identify potential failure modes and associated hazards, risks, criteria for decisions and/or options for treatment. Brainstorming can be used in conjunction with other risk assessment methods described below or may stand alone as a technique to encourage imaginative thinking at any stage of the risk management process and at any stage of the life cycle of a system. It may be used for high-level discussions where issues are identified, for more detailed review, or at a detailed level for particular problems. Brainstorming techniques include structured brainstorming, free-form brainstorming, and silent brainstorming.
b) Benchmarking- Benchmarking is a collaborative process among a group of entities. Benchmarking focuses on specific events or processes, compares measures and results using common metrics, and identifies improvement opportunities. Data on events, processes, and measures are developed to compare performance. Some companies use benchmarking to assess the likelihood and impact of potential events across an industry. Benchmarking data are available from research organizations, industry consortia, insurance companies and rating agencies, government agencies, and regulatory and supervisory bodies.
c) Scenario analysis- A scenario may be defined as ‘an outline of future development which shows the operation of causes or an internally consistent view of what the future might turn out to be -not a forecast, but one possible future’ (Porter, Competitive Advantage). Scenario analysis entails defining one or more risk scenarios, detailing the key assumptions (conditions or drivers) that determine the severity of impact, and estimating the impact on a key objective. Sets of scenarios reflecting “best case,” “worst case,” and “expected case” may be used to analyse potential consequences and their probabilities for each scenario as a form of sensitivity analysis when analysing risk.
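The probability-weighted view of a scenario set can be sketched as follows; the scenario probabilities and impact figures below are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical impact estimates (monetary units) and subjective probabilities
# for "best case", "expected case", and "worst case" views of a single risk.
scenarios = {
    "best case":     {"probability": 0.25, "impact": 100_000},
    "expected case": {"probability": 0.60, "impact": 400_000},
    "worst case":    {"probability": 0.15, "impact": 1_200_000},
}

# Probability-weighted (expected) impact across the scenario set.
expected_impact = sum(s["probability"] * s["impact"] for s in scenarios.values())
```

Varying one assumption at a time (e.g. the worst-case probability) and recomputing the expected impact is the sensitivity-analysis use mentioned above.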
d) Risk assessment workshops- Cross-functional workshops are preferable for assessment purposes as they facilitate consideration of risk interactions and break down siloed thinking. Workshops improve understanding of a risk by bringing together diverse perspectives. For example, when considering a risk such as information security breach, workshop participants from information technology, legal and compliance, public relations, customer service, strategic planning, and operations management may each bring different information regarding causes, consequences, likelihoods, and risk interactions. Interviews may be more appropriate for senior management, board members, and senior line managers due to their time constraints. Workshops may not work well in cultures that suppress free sharing of information or divergent opinions.
e) Hazard and Operability Study (HAZOP)- HAZOP uses a structured and systematic examination of a planned or existing product, process, procedure, or system to identify risks to people, equipment, the environment, and/or organizational objectives. Where possible, the study team is expected to provide a solution. A multidisciplinary team of experts, under the guidance of an independent HAZOP leader, performs the brainstorming. The HAZOP technique uses a standard list of guidewords (e.g. “more,” “less,” “no”) combined with process conditions (e.g. speed, flow, pressure) to systematically consider all possible deviations from normal conditions. HAZOP is similar to FMEA in that it identifies failure modes of a process, system, or procedure, along with their causes and consequences. It differs in that the team starts from unwanted outcomes and deviations from intended outcomes and conditions and works back to possible causes and failure modes, whereas FMEA starts by identifying failure modes. The HAZOP technique was initially developed to analyse chemical process systems, but it has since been extended to other types of systems and complex operations, including mechanical and electronic systems, procedures, software systems, organizational changes, and legal contract design and review.
f) Root cause analysis (RCA)
The analysis of a major loss to prevent its recurrence is commonly referred to as Root Cause Analysis (RCA), Root Cause Failure Analysis (RCFA), or loss analysis. RCA focuses on asset losses due to various types of failures, while loss analysis is mainly concerned with financial or economic losses due to external factors or catastrophes. Both attempt to identify the root or original causes instead of dealing only with the immediately obvious symptoms. It is recognized that corrective action may not always be entirely effective and that continuous improvement may be required. RCA is most often applied to the evaluation of a major loss but may also be used to analyse losses on a more global basis to determine where improvements can be made. RCA is applied in various contexts with the following broad areas of usage:
Safety-based RCA is used for accident investigations and occupational health and safety;
Failure analysis is used in technological systems related to reliability and maintenance;
Production-based RCA is applied in the field of quality control for industrial manufacturing;
Process-based RCA is focused on business processes; and,
System-based RCA has developed as a combination of the previous areas to deal with complex systems with application in change management, risk management and systems analysis.
g) Checklists
Checklists are lists of hazards, risks, or control failures that have been developed, usually from experience, either as a result of a previous risk assessment, past failures, or expert judgement. A checklist can be used to identify hazards and risks or to assess the effectiveness of controls, at any stage of the life cycle of a product, process, or system. Checklists may be used as part of other risk assessment techniques, but are most useful when applied after a more imaginative technique that identifies new problems, to check that everything has been covered.
h) Surveys
Surveys are useful for large, complex, and geographically distributed enterprises or where the culture suppresses open communication. Survey results can be downloaded into analytical tools, allowing risks and opportunities to be viewed by level (board members, executives, managers), by business unit, by geography, or by risk category. Surveys have drawbacks too. Response rates can be low. If the survey is anonymous, it may be difficult to identify information gaps. Quality of responses may be low if respondents give survey questions superficial attention in a rush to complete them, or if they misunderstand something and don’t have the opportunity to ask clarifying questions. Perhaps most importantly, respondents don’t benefit from the cross-functional discussions which enhance people’s risk awareness and understanding, provide context and information to support the risk ratings, and analyse risk interactions across silos. For these reasons, surveys should not be considered a substitute for workshops and other techniques for in-depth analysis of key risks.
i) Risk probability/impact matrix
A simple risk impact assessment can be performed by using a matrix or risk map on which threats and hazards are plotted according to the likelihood of their happening and the seriousness of their effect if they do happen.
a. Impact refers to consequences or implications if the risk does occur
1. A minor impact indicates that the risk would not have important implications.
2. A moderate impact indicates that the risk could have implications for the organisation’s ability to succeed.
3. A significant impact indicates that the risk would have important implications for the organisation.
b. Likelihood refers to the probability that the risk may occur, given its nature and the current risk management practices in place
1. A low likelihood indicates that the risk is unlikely to occur, given its nature and current risk management practices in place.
2. A medium likelihood of occurrence indicates that the risk has a moderate probability of occurrence.
3. A high likelihood of occurrence indicates that the risk is likely to occur, despite the risk management practices in place.
Risk assessment grid
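The matrix logic described above can be sketched as a small scoring function. The scales and rating rules below are assumptions for illustration; in practice the criteria come from the organisation’s own risk policy:

```python
# Assumed ordinal scales for a 3x3 probability/impact matrix.
LEVELS = ["low", "medium", "high"]              # likelihood scale
IMPACTS = ["minor", "moderate", "significant"]  # impact scale

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine ordinal likelihood and impact scores into an overall rating.

    The cut-offs below are illustrative, not a standard.
    """
    score = LEVELS.index(likelihood) + IMPACTS.index(impact)  # 0..4
    if score <= 1:
        return "low"
    if score <= 2:
        return "medium"
    return "high"
```

For example, a risk with medium likelihood and moderate impact plots in the middle of the grid and receives a "medium" rating under these rules.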
Quantitative Risk Assessment Techniques
In quantitative risk assessment, the risk from each scenario is estimated numerically, allowing the analyst to determine not only the risk relative to all other scenarios in the system but also absolute risk, measured on whatever scale of units is chosen. The following are the quantitative risk assessment techniques:
i. Decision tree analysis
Decision tree analysis is a technique for coping with a series of decisions, each one having a variety of possible outcomes. The decision tree enables us to map the various combinations of decisions and outcomes in a structured way. If we can estimate the probabilities of the various possible outcomes, and assign monetary values to them, the tree enables us to evaluate the decision options and select the one leading to the most favourable monetary outcome.
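The evaluation step can be sketched as an expected-monetary-value (EMV) calculation over each option’s chance outcomes; the options, probabilities, and payoffs below are hypothetical:

```python
# Each decision option leads to chance outcomes given as (probability, payoff)
# pairs; all figures are hypothetical.
options = {
    "launch product": [(0.6, 500_000), (0.4, -200_000)],
    "license design": [(0.9, 150_000), (0.1, 50_000)],
}

def emv(outcomes):
    """Expected monetary value: probability-weighted sum of outcome payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

# Select the option with the most favourable expected monetary outcome.
best_option = max(options, key=lambda name: emv(options[name]))
```

In a deeper tree, the EMV of each chance node is rolled back level by level until the root decision can be compared in the same way.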
ii. Fault tree analysis
Fault tree analysis is similar to decision tree analysis, except that it is concerned solely with the possibility of failure. This technique has been defined as ‘a graphical technique that provides a systematic description of the combinations of possible occurrences in a system, which can result in an undesirable outcome. The method can combine hardware failures and computer failures’.
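Quantitatively, a fault tree combines basic-event probabilities through its gates. A minimal sketch, assuming independent basic events with hypothetical failure probabilities:

```python
def and_gate(*probs):
    """AND gate: the output fails only if every input fails."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """OR gate: the output fails if at least one input fails."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)   # probability that no input fails
    return 1.0 - p

# Hypothetical top event: pump failure =
# (power loss OR control fault) AND backup failure.
p_top = and_gate(or_gate(0.01, 0.02), 0.05)
```

The independence assumption matters: common-cause failures would make these simple gate formulas understate the top-event probability.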
iii. Dependency modelling
Risks do not necessarily arise from a single failure. On the contrary, it is very often a combination of factors (people, systems, processes, technology) that gives rise to risk. Dependency modelling is a software-supported technique that helps organisations to analyse the links between the different factors that, in combination, can give rise to risk.
iv. Value at risk
Value at risk (VaR) is a measurement tool used to assess the size of an organisation’s risk exposure. VaR measures the volatility of a firm’s investment portfolio in the light of market conditions. It is defined as the maximum amount of loss that we can expect (with a predetermined level of probability) to suffer on a given investment over a given time frame.
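One common way to estimate VaR is historical simulation, sketched below. The returns, portfolio value, and confidence level are illustrative, and the simple index rule used here is one of several percentile conventions:

```python
# Hypothetical daily portfolio returns (fractions of portfolio value).
daily_returns = [-0.031, 0.012, -0.005, 0.020, -0.018, 0.007,
                 -0.025, 0.015, 0.003, -0.010]

portfolio_value = 1_000_000
confidence = 0.90

# Express each observation as a loss (positive = money lost), then sort.
losses = sorted(-r for r in daily_returns)

# Pick the loss at the chosen percentile (simple index convention).
index = int(confidence * len(losses)) - 1
var_amount = losses[index] * portfolio_value
```

Read as: with 90% confidence, the one-day loss on this portfolio is not expected to exceed the computed amount, under the assumption that history is representative.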
v. Risk adjusted return on capital
Risk-adjusted return on capital (RAROC) is a risk-based profitability measurement framework for analysing risk-adjusted financial performance and providing a consistent view of profitability across businesses. The concept was developed by Bankers Trust and principal designer Dan Borge in the late 1970s. Note, however, that return on risk-adjusted capital (RORAC) is increasingly used as a measure, whereby the risk adjustment of capital is based on the capital adequacy guidelines outlined by the Basel Committee, currently Basel III.
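In its common form, RAROC divides risk-adjusted earnings by economic capital. A minimal sketch with hypothetical figures:

```python
# Hypothetical annual figures for one business line (monetary units).
revenue          = 2_000_000
costs            = 1_200_000
expected_loss    = 300_000    # average credit/operational losses priced in
economic_capital = 2_500_000  # capital held against unexpected losses

# RAROC = risk-adjusted net income / economic capital.
raroc = (revenue - costs - expected_loss) / economic_capital
```

Comparing each business line’s RAROC against a hurdle rate gives the consistent cross-business view of profitability described above.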
vi. Ratio analysis
Ratio Analysis is a form of Financial Statement Analysis that is used to obtain a quick indication of a firm’s financial performance in several key areas. The ratios are categorized as Short-term Solvency Ratios, Debt Management Ratios, Asset Management Ratios, Profitability Ratios, and Market Value Ratios.
Ratio Analysis as a tool possesses several important features. The data, which are provided by financial statements, are readily available. The computation of ratios facilitates the comparison of firms which differ in size. Ratios can be used to compare a firm’s financial performance with industry averages. In addition, ratios can be used in a form of trend analysis to identify areas where performance has improved or deteriorated over time.
vii. Return on capital employed
Return on capital employed or ROCE is a profitability ratio that measures how efficiently a company can generate profits from its capital employed by comparing net operating profit to capital employed. In other words, return on capital employed shows investors how many dollars in profits each dollar of capital employed generates.
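The calculation can be sketched as follows, with hypothetical figures and the common definition of capital employed as total assets less current liabilities:

```python
# Hypothetical figures from a company's financial statements.
net_operating_profit = 150_000
total_assets         = 1_000_000
current_liabilities  = 250_000

# Capital employed: total assets less current liabilities (common definition).
capital_employed = total_assets - current_liabilities

# ROCE: operating profit generated per unit of capital employed.
roce = net_operating_profit / capital_employed
```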
viii. Acid test
The quick ratio or acid test ratio is a liquidity ratio that measures the ability of a company to pay its current liabilities when they come due with only quick assets. Quick assets are current assets that can be converted to cash within 90 days or in the short-term. Cash, cash equivalents, short-term investments or marketable securities, and current accounts receivable are considered quick assets.
The quick ratio is often called the acid test ratio in reference to the historical use of acid to test metals for gold by the early miners. If the metal passed the acid test, it was pure gold. If metal failed the acid test by corroding from the acid, it was a base metal and of no value.
The acid test of finance shows how well a company can quickly convert its assets into cash in order to pay off its current liabilities. It also shows the level of quick assets to current liabilities.
The quick ratio is calculated by adding cash, cash equivalents, short-term investments, and current receivables together then dividing them by current liabilities.
Higher quick ratios are more favourable for companies because they show there are more quick assets than current liabilities. A company with a quick ratio of 1 has quick assets exactly equal to its current liabilities; it could pay off its current liabilities without selling any long-term assets. An acid test ratio of 2 shows that the company has twice as many quick assets as current liabilities.
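The calculation described above can be sketched as follows, with hypothetical balance-sheet figures:

```python
# Hypothetical quick assets and current liabilities (monetary units).
cash                 = 40_000
cash_equivalents     = 10_000
short_term_invest    = 25_000
accounts_receivable  = 45_000
current_liabilities  = 60_000

# Quick ratio: quick assets divided by current liabilities.
quick_assets = cash + cash_equivalents + short_term_invest + accounts_receivable
quick_ratio = quick_assets / current_liabilities
```

Here the ratio of 2 means the company holds twice as many quick assets as current liabilities, the favourable case described above.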
ix. Debt ratio
Debt ratio is a solvency ratio that measures a firm’s total liabilities as a percentage of its total assets. In a sense, the debt ratio shows a company’s ability to pay off its liabilities with its assets. In other words, this shows how many assets the company must sell in order to pay off all of its liabilities.
This ratio measures the financial leverage of a company. Companies with higher levels of liabilities compared with assets are considered highly leveraged and riskier for lenders.
This helps investors and creditors analyse the overall debt burden on the company, as well as the firm’s ability to pay off the debt in future, uncertain economic times.
The debt ratio is expressed as a decimal, since it divides total liabilities by total assets. As with many solvency ratios, a lower ratio is more favourable than a higher ratio.
A lower debt ratio usually implies a more stable business with the potential for longevity, because a company with a lower ratio also has lower overall debt. Each industry has its own benchmarks for debt, but 0.5 is a reasonable ratio.
A debt ratio of 0.5 is often considered to be less risky. This means that the company has twice as many assets as liabilities. Said another way, the company’s liabilities are only 50 percent of its total assets. Essentially, its creditors own half of the company’s assets and the shareholders own the remainder.
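A minimal sketch of the calculation, using the illustrative 0.5 case from the text:

```python
# Hypothetical balance-sheet totals (monetary units).
total_liabilities = 500_000
total_assets      = 1_000_000

# Debt ratio: total liabilities as a proportion of total assets.
# A value of 0.5 means assets are twice liabilities.
debt_ratio = total_liabilities / total_assets
```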
x. Gearing ratio
Gearing ratios measure the financial risk of a firm (the probability that a firm will not be able to pay its debts). The more debt a business has (non-owner-supplied funds), the higher the financial risk.
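Gearing is defined in several ways; one common form, sketched below with hypothetical figures, expresses debt as a share of total capital (debt plus equity):

```python
# Hypothetical capital structure (monetary units).
debt   = 400_000   # non-owner-supplied funds
equity = 600_000   # owner-supplied funds

# Gearing: proportion of total capital that is borrowed.
gearing = debt / (debt + equity)
```

An alternative convention divides debt by equity alone; either way, a higher value signals higher financial risk.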
xi. Linear regression
Linear regression is the most basic and commonly used predictive analysis. Regression estimates are used to describe data and to explain the relationship between one dependent variable and one or more independent variables.
At the centre of regression analysis is the task of fitting a single line through a scatter plot. The simplest form, with one dependent and one independent variable, is defined by the formula y = c + b*x, where y is the estimated dependent variable, c the constant, b the regression coefficient, and x the independent variable.
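The ordinary-least-squares fit of y = c + b*x can be sketched directly from the means, using illustrative data:

```python
# Illustrative data, roughly following y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope b = covariance(x, y) / variance(x); intercept c = mean_y - b * mean_x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
c = mean_y - b * mean_x
```

The fitted slope here is close to 2, matching the pattern in the data; predictions are then made as c + b*x for new values of x.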
Qualitative and Quantitative Risk Assessment Techniques
Qualitative assessment consists of assessing each risk and opportunity according to descriptive scales as described in the previous section. Quantitative analysis requires numerical values for both impact and likelihood using data from a variety of sources.
The quality of the analysis depends on the accuracy and completeness of the numerical values and the validity of the models used. Model assumptions and uncertainty should be clearly communicated and evaluated using techniques such as sensitivity analysis. Both qualitative and quantitative techniques have advantages and disadvantages. Most enterprises begin with qualitative assessments and develop quantitative capabilities over time as their decision-making needs dictate.
Qualitative Techniques

Advantages:
· Is relatively quick and easy
· Provides rich information beyond financial impact and likelihood, such as vulnerability, speed of onset, and non-financial impacts such as health and safety and reputation
· Is easily understood by a large number of employees

Disadvantages:
· Gives limited differentiation between levels of risk (i.e. very high, high, medium, and low)
· Is imprecise – risk events that plot within the same risk level can represent substantially different amounts of risk
· Cannot numerically aggregate or address risk interactions and correlations
· Provides limited ability to perform cost-benefit analysis

Quantitative Techniques

Advantages:
· Allows numerical aggregation taking into account risk interactions when using an “at risk” measure such as Cash Flow at Risk
· Permits cost-benefit analysis of risk response options
· Enables risk-based capital allocation to business activities with optimal risk-return
· Helps compute capital requirements to maintain solvency under extreme conditions

Disadvantages:
· Can be time-consuming and costly, especially at first during model development
· Must choose units of measure such as shillings and annual frequency, which may result in qualitative impacts being overlooked
· Use of numbers may imply greater precision than the uncertainty of inputs warrants
· Assumptions may not be apparent