How much should be spent to prevent disaster?

Moreover, no matter how safe hazardous facilities may be, they can always be made safer, at a price.

This situation brings up several questions: How much should the owners of such assets spend in the pursuit of greater safety?

When is it reasonable to conclude that it is not worthwhile to seek to lower the risk further? In this paper I will explore how risk analysts go about answering these questions.

I will suggest that current methods are seriously flawed, in fact fatally flawed. My claim is that calculations that rely on probability x consequence are fundamentally misleading for the owners of hazardous facilities.

Finally I will suggest ways out of the problem.

Broadly, the way these decisions are made is by means of a cost benefit analysis (CBA).

This involves quantifying the risks, calculating the benefit of some risk reduction measure, usually in terms of the number of lives saved and the value of each such life saved, and then comparing the benefit with the cost of the risk reduction measure.

If the cost outweighs the benefit (or, in some interpretations, is grossly disproportionate to the benefit¹), then the risk reduction measure is judged to be not reasonably practicable.

This kind of CBA poses methodological and ethical problems.

But the end point of my argument is that, regardless of these problems, CBA in relation to high consequence low probability events is of absolutely no use to those faced with the decision of how much to spend on safety.

This is clearly a controversial argument, so I will need to make it very carefully.

Generally speaking what makes a scenario catastrophic is the large scale loss of life.

Of course in some situations, such as the Gulf of Mexico oil spill, the loss of life is overshadowed in the public mind by the scale and cost of the environmental damage.

But for present purposes I shall take loss of life as the most significant consequence of major accident events.

To carry out a CBA requires that a value be placed on human life.

Placing a monetary value on human life raises numerous ethical and social issues.

These are so serious as to call into question the whole enterprise².

However, I shall not canvas these issues here because it would be too much of a diversion from the main argument to be made in this paper.

For the sake of that argument I shall take the value of life to be $5 million.

The other number which is important in any CBA is likelihood, in particular the likelihood of a disastrous event occurring during the lifetime of a facility.

There is a great deal of arbitrariness and ambiguity involved in determining likelihood of very rare events but, as with the value of life, I shall not dwell here on these uncertainties.

To develop the argument, let us consider a specific and very simplified CBA.

Suppose the scenario of concern is one that could cause 40 deaths.

At $5 million per death, the total cost of this incident is $200 million (ignoring costs other than death).

Suppose the probability of such an event is one in a million.

The risk assessment process requires that the cost of the scenario be multiplied by the probability.

Multiplying $200 million by 10⁻⁶ gives us a figure of $200.

This figure will be pivotal as this discussion progresses.
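The arithmetic of this simplified example can be sketched as follows (all figures are the paper's illustrative assumptions, not real data):

```python
# A minimal sketch of the simplified CBA arithmetic above (paper's assumed figures).
value_of_life = 5_000_000      # assumed value of $5 million per life
deaths = 40                    # hypothetical worst-case scenario

consequence = deaths * value_of_life       # $200 million total cost of the incident
expected_cost = consequence / 1_000_000    # probability of 10^-6: divide by a million

print(consequence)     # 200000000
print(expected_cost)   # 200.0
```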

The big question is: what meaning can be given to the figure obtained in this way and to what use can it legitimately be put?

To answer this question we need to be clearer about what it means to say that the probability is one in a million: one in a million 'whats'?

The initial statement, made more carefully, is that the probability is one in a million that a facility will 'blow up' during its lifetime.

(I will use 'blow up' as shorthand for experiencing a catastrophic event such as described above.)

So the 'whats' above are facilities, and the probability statement becomes: 'It is probable, i.e. we expect, on average, that one in every million such facilities will blow up in its lifetime.'

An average of 1 is difficult to work with, so let's multiply by 1000.

The probability statement now becomes: 'We expect, on average, that of every billion facilities, a thousand will blow up during their lifetime.'

In reality there will never be anything like a billion facilities, or even a million, so we can never test these claimed probabilities against any real data; but it is the logic of the argument I am following here.

Now here is one way to give meaning to the $200 figure.

Suppose I am a super insurance company, responsible for insuring all one billion facilities against blow up. What will I charge as a premium?

I know that the cost per blow up is $200 million (assuming the above example is typical).

Of my one billion clients, approximately 1000 of them will blow up and require a payout.

So the total payout for my clients is $200 million x 1000 = $200×10⁹.

What do I need to charge each of my clients to cover this cost? Answer: $200×10⁹ divided by 1 billion = $200.

Of course I will need to charge a bit more to cover the possibility that more than 1000 facilities will blow up during their lifetime.

Perhaps 1200 or 1300 will blow up.

But if the probability statement can be taken at face value, the expected number is 1000.

I will also need to charge more to allow for profit. But the figure of $200 is the break-even figure.

The conclusion here is that if I am the super insurer in the position described above, it makes sense to multiply consequence by probability to derive a figure that will be used as the basis of an insurance premium.

In this highly artificial context the $200 provides a guide to action for the super-insurer.
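As a rough sketch, the super-insurer's break-even calculation looks like this (figures as assumed above):

```python
# Sketch of the super-insurer's break-even premium, using the paper's figures.
facilities = 1_000_000_000                     # one billion insured facilities
expected_blow_ups = facilities // 1_000_000    # probability 10^-6 per facility: 1000 events
payout_per_event = 200_000_000                 # $200 million per blow-up

total_payout = expected_blow_ups * payout_per_event   # $200 x 10^9
break_even_premium = total_payout / facilities        # $200 per facility

print(expected_blow_ups)      # 1000
print(break_even_premium)     # 200.0
```

A real insurer would, as the text notes, load this figure for variance and profit; the sketch shows only the break-even logic.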

We can frame this super insurer argument a little differently if we substitute society for the super-insurer.

This involves personifying society in a way that is highly questionable, but I leave that aside for present purposes.

Society has an interest in preventing the single blow up costing $200 million, not because it is a disaster for those concerned, but simply because it is a loss to society.

Suppose there are one million facilities at risk, society wide – I assumed a billion in the preceding discussion but a million will be a bit easier to work with here – the chances are that one of these million facilities will blow up in its lifetime.

If society can eliminate this risk by requiring each facility to spend an additional $200, that will be value for money, just.

Why do I say this? The assumption here is that the cost is passed on to the rest of us, as consumers of the facilities’ products.

So if we, as a society, require each facility to spend an extra $200, in the end that will cost us a total of $200 million – to prevent a loss of $200 million.

Arguably that is just worth our while.

But it will not be cost effective to spend more. Suppose the risk can only be eliminated by spending, say, $2000 per facility, that is, a total cost of $2 billion ($2000 times a million), which is passed on to us.

In this case society will be better off accepting the loss of one facility, and the associated deaths, and putting the money to some other use, for example medical research, or even risk reduction in some other area of human activity.

As members of society we have an interest in the one million facility owners being compelled to spend up to $200 to eliminate the risk, but it is not in our interest that they spend more; $200 is the 'maximum justifiable spend' from the point of view of other members of society.

('Maximum justifiable spend' (MJS) is a standard term used in CBA.)

In short, the figure of $200 is a meaningful guide to action, from a societal point of view.
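The societal trade-off described above can be sketched numerically (again using the paper's illustrative figures):

```python
# Sketch of the societal trade-off: total mandated spend versus the loss avoided.
facilities = 1_000_000
expected_loss = 200_000_000   # one $200 million blow-up expected among the million facilities

def total_spend(per_facility):
    # the cost ultimately passed on to consumers if every facility must spend this amount
    return per_facility * facilities

print(total_spend(200) <= expected_loss)    # True: a $200 million spend, just worthwhile
print(total_spend(2000) <= expected_loss)   # False: $2 billion to avoid a $200 million loss
```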

Now imagine a single facility owner. Can the $200 figure be given a meaning and/or be used as a guide to action?

We can answer this by reverting to statistics for a moment.

What is the statistical meaning of the $200? The cost of an actual blow-up is $200 million.

Statistics talks about the expected cost given a certain probability distribution.

In simple terms the expected cost is the weighted average of the cost across all the facilities.

So if one facility blows up and experiences a cost of $200 million, while all the others avoid blow-up and hence have no cost, the 'expected cost', that is, the average cost across the million facilities, is $200.
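That weighted-average reading of the expected cost can be sketched directly (illustrative figures as above):

```python
# The 'expected cost' as an average across the million facilities (sketch).
facilities = 1_000_000
costs = [200_000_000] + [0] * (facilities - 1)   # one blow-up, the rest cost-free

expected_cost = sum(costs) / facilities
print(expected_cost)   # 200.0
```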

How useful is this knowledge to the single facility owner? The answer is: not at all.

The average across all million facilities gives no guidance about what the single facility owner should spend to prevent a blow-up.

To see why, let us suppose that all agree that the risk cannot be eliminated for $200 and that it is as low as it can be without spending a great deal more money.

Is it sensible for the facility owner to simply accept this risk on the basis that it is not in society’s interest to spend more than $200? Such an owner is in a very different position from society.

If the facility blows up it will be a disaster for those directly involved and may destroy the company.

For society on the other hand, $200 million is a completely insignificant figure when set against the total value of all economic activity.

The stakes in other words, are quite different.

Therefore, it is not sensible for the facility owner to simply accept this risk on the basis of what is in society’s interest.

The individual hazardous facility owner might reasonably take the view that it is worth spending a good deal more than the expected cost to eliminate the risk of a blow-up killing 40 people.

In support of this last statement, consider an analogy suggested to me by a colleague.

Suppose the chance of having a write-off accident in my car is one in a thousand, that is, 10⁻³ p/a, and my car is worth $20,000.

The break-even insurance premium to protect me against this loss (the maximum justifiable spend) is $20,000 × 10⁻³ = $20.

From my colleague’s perspective, losing $20,000 is pretty significant; in fact it would be a disaster for him, in a relevant sense.

Accordingly, he told me, he would probably be prepared to pay $200 p/a for insurance, despite knowing this is 'over the odds'.

But if the insurance company tried to charge $2000, his view was that he would carry the risk himself, because the cost of the risk elimination measure (the insurance) was too high when compared with the benefit.

Let us consider this analogy. My colleague's position is that if it takes $200 to insure his car and so avoid financial disaster, then he will pay.

This is because he simply cannot afford to run the risk of losing $20,000.

So he will pay almost whatever it takes, although there is a limit of $2000.

Why does he stipulate this limit? He did not say, but we can speculate.

A premium of $2000 p/a would mean that in 10 years he had paid out the total value of the car in premiums (I am assuming for the sake of the discussion that the value of the car does not change over this period.)

In this way what is intended to be an insurance against loss of $20,000 has become a certainty that he will lose $20,000 over a 10 year period.

That makes no sense, so of course there is a limit to what he is willing to pay in premiums.

But what is interesting about this situation is that the amount he is willing to pay to avoid disaster bears no relationship to the break-even insurance value; it is largely determined in this case by the total cost of the disaster.
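The colleague's reasoning can be sketched with the numbers he gave (the $200 and $2000 figures are his stated preferences, not derived quantities):

```python
# Sketch of the car-insurance analogy, using the paper's numbers.
car_value = 20_000
break_even_premium = car_value / 1_000   # annual write-off probability of 10^-3: $20 p/a

willing_to_pay = 200    # ten times the break-even figure, which he would still pay
limit = 2_000           # his stated upper limit on the premium

# At the limit, ten years of premiums would equal the car's whole value.
years_to_equal_car = car_value / limit

print(break_even_premium)    # 20.0
print(years_to_equal_car)    # 10.0
```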

The analogy with the single major hazard facility may not be complete, but it is illuminating.

It suggests that it is in the interests of the owner of such a facility to pay whatever it takes to avoid disaster, subject to the limitation that this payment remains a small fraction of the total cost of disaster, should it occur.

There is no obvious way of deciding just what this limit should be, but one thing is clear: the single facility owner's decision about how much to spend will be largely independent of the 'expected value' or 'maximum justifiable spend'.

As a matter of fact, risk analysts generally concede that the single facility owner should not limit spending to the expected value.

The law in many countries requires them to spend whatever it takes, provided that the expenditure is not 'grossly disproportionate' to the benefit.

To take account of this, risk analysts sometimes add a 'proportionality factor' to the calculation, a number ranging from 1 to 10.³

The result is that the 'maximum justifiable spend' increases by a factor of up to 10.

However this in no way alters my argument. If we start with an irrelevant figure and multiply it by 10, the resultant figure is still irrelevant.

That completes the basic argument of the paper, namely that multiplying consequence by probability cannot provide guidance for the individual major hazard facility owner. In what follows I want to consider some alternatives.

Consequence-based decision making

Consider again the individual hazardous facility owner faced with the question of how much to spend to prevent a particular disaster scenario.

If we accept that consequence times probability provides no basis for decision-making, what is to be done?

One possibility is to make decisions on the basis of consequence only.

This approach repudiates any idea of acceptable risk.

The philosophy is that, if the consequences are severe, then people must be protected from them, no matter how unlikely they may be.

BP moved to consequence-based decision making in relation to the siting of occupied portable buildings (e.g. trailers) following its 2005 Texas City explosion.

That explosion killed 15 people who were housed in trailers located close to the blast source.

BP decided that henceforth any building located in a potential blast zone must be built to withstand the blast pressure.

To implement this principle, it divided each refinery into zones, based in part on calculations of the possible blast pressures: a red zone, in which no buildings may be located; an orange zone in which buildings may be located if they are designed to withstand the blast pressures that are possible in that zone; and a yellow zone beyond that.

An immediate consequence of this was that trailers were restricted to the yellow zone.

In summary, the consequence-based approach made it impossible to justify siting trailers close to a potential blast source, simply on the basis that the probability of such a blast was very low⁴.

The principle of consequence-based decision making is not one that can be universally applied. In many, if not most, situations it is impossible to protect people from all risk.

However, in the case of occupied portable buildings the risk to occupants can easily be eliminated by remote siting.

What makes it relatively easy to adopt the principle in this case is that the financial cost of moving trailers out of range is not significant; all that is really at stake is questions of convenience.

But even where the cost is high, it is still possible to adopt a consequence-based approach.

The issue of occupied buildings (not just temporary or portable buildings) has been exercising the minds of board members of a very large company I have been working with.

They have instructed company officers to reduce exposure to explosion hazards, by relocating occupied buildings, even though this will be extremely expensive.

Furthermore, some of the company’s constituent businesses are more affected than others and would bear a heavy cost burden if required to fund this relocation themselves.

Accordingly the company has decided to make the funds available centrally.

A strategy for risk analysts

CBA is intended to be a tool to aid top decision makers.

But the guidelines and standards routinely say that CBA should not be the only input into their decision making.

Decision makers are exhorted to take account also of less quantifiable matters such as reputational risk and perhaps public outrage. In practice they tend not to, for the following reason.

The result of the CBA process for a high consequence low probability event is often that the risk is already as low as reasonably practicable, meaning that it is not worth spending a significant amount of money to reduce it further.

From the point of view of the busy decision-maker, the risk analyst has provided technical advice which the decision-maker is happy to accept.

In this way the CEO and board may never get to consider the consequences of the hypothesised failure.

Risk analysts may feel frustrated that they are in effect being used to make decisions which rightfully should be made much higher in the organisation, but they feel powerless to do much about it.

Here is a suggested solution. First, analysts should avoid summarising their analysis in a single indicator: consequence x probability.

They should provide information about consequences and probability separately, with information about probability being in qualitative rather than quantitative terms.

This has two advantages. It means that top decision makers are not presented with a decision 'ready-made' by the risk analyst; they must therefore apply their minds to making decisions themselves.

It also avoids the erroneous implication that consequence x probability is of relevance to people making decisions about single facilities.

Information about consequences should include a description of the hypothesised event, and the likely damage to the business, to public property and to the environment.

If the environmental damage has implications for people’s livelihoods this should also be noted.

Finally, information should be provided on likely numbers killed and injured, together with relevant details, such as whether children attending a nearby school are expected to be among the victims.

There would be no need to monetise the value of life.

The documents or standards that lay out this procedure should invite decision makers to pay attention also to the reputational costs, legal costs and personal costs to top company people subject to legal scrutiny after a major accident.

In addition to outlining consequences in some detail, analysts should identify various measures aimed at eliminating the threat or reducing the severity of the consequences, together with some indication of costs.

It will also be valuable for analysts to identify and describe accidents around the world broadly similar to the one under discussion.

Finally, the documents or standards laying out these processes should make it clear that decisions in relation to very high consequence events should be taken at the highest corporate level.

One guiding principle might be that the decision maker should be sufficiently senior to be able to authorise the most costly of the consequence reduction measures identified by analysts.

An alternative paradigm

So far I have argued that probability x consequence is not a relevant number for the single major hazard facility owner and that traditional CBA methods cannot therefore be used to determine how much to spend on hazard control.

This leaves top decision makers in a difficult position.

Now I want to suggest a solution to this problem, one that involves abandoning the whole idea that major accidents are simply chance events occurring with a certain probability.

The principal strategy for the prevention of major accidents is defence in depth, which means putting in place a series of defences or controls to guard against a major accident event.

The corresponding model of accident causation is that accidents occur when all these defences fail simultaneously.

This is nicely depicted in the Swiss cheese accident model⁵: each defence is a slice of cheese with holes in it, representing imperfections, and accidents occur only when all these holes line up.

If we stick with probability for a moment we can say that, if one of the barriers is absent or non-operational for any reason, the probability of a major accident event is correspondingly higher.

To take an example, if one of the controls is regular inspection of pipes and tanks for corrosion, yet inspections are not being done, we can infer that the conditional probability of an accident – the probability given that this control is not in place – is much higher.

The more defective our system of controls, the higher the probability of an accident.

On the other hand, if our controls are all being implemented as intended, the probability of an accident is zero.

That is a matter of logic. Putting this another way, major accidents can be prevented by ensuring that all controls are working as intended.
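A minimal sketch of this defence-in-depth logic, with hypothetical control names, might look like the following; it simply encodes the claim that an accident is possible only when every control has failed:

```python
# Sketch of the Swiss cheese logic: an accident is possible only if every defence
# has failed. The control names here are hypothetical illustrations.
controls = {
    "corrosion_inspection": True,       # True = control in place and working
    "pressure_relief_system": True,
    "permit_to_work_procedure": False,  # a hole in one slice of cheese
}

def accident_possible(controls):
    # the holes must line up: every control must have failed at once
    return all(not in_place for in_place in controls.values())

print(accident_possible(controls))  # False: at least one defence remains intact
```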

From this perspective, accidents are no longer chance events, they are caused by control failures which are within the power of the facility owner to prevent.

The word 'cause' is used here in the sense of 'but for' cause.

The point is that in relation to each of these control failures we can say that, 'but for' this failure, the accident in question would not have occurred.

This way of thinking shifts the question for top decision makers from 'How much is it worth spending?' to 'What are the critical controls that are supposed to be in place and how can I guarantee that they are?'

This approach does not eliminate financial considerations.

Some controls are more reliable because once in place they remain in place – for example, replacing hazardous equipment with less hazardous equipment.

But this may be expensive. Other controls, such as procedural and engineering controls, may be cheaper, but require more ongoing effort to ensure that they remain effectively in place.

Decision makers must therefore understand that if they economise on the more reliable controls, they will need to be more vigilant with respect to the less reliable controls they adopt.

The history of major accidents indicates just how unreliable procedural controls can be.⁶ Decision makers need to be armed with this understanding in order to make the best decisions about how to prevent major accidents.

This alternative approach has implications for the issue of ALARP – 'as low as reasonably practicable'.

In many jurisdictions the ultimate requirement imposed on companies is that they ensure the safety of workplaces so far as reasonably practicable, or words to that effect.

Risk analysts understand this to mean that risks are to be as low as reasonably practicable.

As we have seen, CBA translates this to a question of whether the costs outweigh the benefits.

However, using the defence in depth approach, the question is: have we done all that is reasonably practicable to ensure that the required defences are in place?

In legal proceedings that follow a major accident, this is always the question that courts focus on, and they almost invariably find that controls that were supposed to be in place were missing.

From this point of view, the Swiss cheese approach is a far more useful guide for decision-making than CBA.

Conclusion
This paper is concerned with cost benefit analysis in a very limited context, namely, cost benefit analysis for measures aimed at reducing the risk of rare but catastrophic events.

I have not attempted to generalise beyond this context and there are certainly other areas in which this critique does not apply.

CBA in the present context is controversial because of the way in which it places a financial value on life and also because of the uncertainty of the probability estimates that are made for rare events.

But this paper has not been concerned with those questions.

I have focused instead on a very fundamental feature of these CBAs – the fact that they require the analyst to multiply consequence by probability – and I have argued that for the individual facility owner, the result is a meaningless number.

In fact the result is worse than useless because it allows top decision makers to avoid making the hard decisions.

It is far better to present top decision makers with information about consequences and probability separately, rather than combining them into a single indicator of risk.

Furthermore, rather than thinking about accidents as chance events, it is better to think of them as caused by control failures.

They can therefore be prevented by ensuring that controls that are intended to be in place really are.

Works cited:

  • Heinzerling, L and Ackerman, F (2002) Pricing the Priceless: Cost-Benefit Analysis of Environmental Protection (Georgetown University Law Center)
  • Hopkins, A (2008) Failure to Learn: The BP Texas City Refinery Disaster (CCH, Sydney)
  • Hopkins, A (2012) Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout (CCH, Sydney)
  • Reason, J (1997) The Organisational Accident (Ashgate, Aldershot)
  • Edwards v. National Coal Board (1949) All ER 743 (CA)
