Amnesia blights process safety
13 Jul 2010
Is corporate memory loss to blame for the recurrence of major accidents? Dr Julian Hought examines
The world is currently watching BP’s frenzied attempts to stem the flow of oil following the catastrophic explosion at the Deepwater Horizon rig in the Gulf of Mexico. The incident cost 11 lives, with many more injured, while the environmental damage is escalating at an alarming rate. The leak has already surpassed that of the Exxon Valdez in 1989 and could rival the 1979 Ixtoc I blowout which, at some 140 million gallons, ranks among the worst oil spills ever.
Whilst the findings of the inevitable investigation into the causes of the Deepwater Horizon explosion will take months, if not years, to be published, the immediate impact has been a moratorium on deepwater drilling, pending answers to the questions raised about the adequacy of the safeguards and contingency plans in place.
These incidents bear witness to the fact that, despite the industry’s willingness to share information on the lessons learned, and the collective desire to continue to improve safety, history does tend to repeat itself. There are many reasons for this, but could corporate memory have a part to play?
The Buncefield and BP Texas City incidents happened barely five years ago. As with previous major industrial accidents - Flixborough, Bhopal, Piper Alpha, Toulouse - they were deemed so catastrophic that they forced a change in the behaviour of organisations around the world and influenced the formation and enforcement of new legislation designed specifically to prevent similar situations ever happening again.
The 28 fatalities in the Nypro (UK) explosion at Flixborough in June 1974, caused by the leakage of cyclohexane vapour, and the release of toxic dioxin from the ICMESA chemical plant at Seveso in 1976, led to what is now the Seveso II Directive in Europe, implemented in the UK through the COMAH Regulations.
This regime has had a profound impact on the way we look at design, construction, operation and maintenance at high hazard sites. Likewise, the loss of 167 lives on Piper Alpha in 1988 led to a similar regime for offshore installations. But despite periodic reviews and amendments aimed at further improving process safety, and despite the fact that learning is shared freely around the world, major accidents still occur.
We know that the time needed to implement the technological, managerial and cultural changes that inevitably follow the findings from major accidents can be significant, often leading to a loss of impetus as organisations change and priorities shift over time.
High profile incidents remind people of what can happen when things go wrong, and fear of the consequences generally leads to greater conservatism in management of risk.
Advocates of caution vigorously challenge the efficacy of safeguards and help rein in those who would treat risk assessment as an academic exercise that justifies taking no further action. That said, an over-zealous approach can lead to an unbalanced allocation of resources and an incoherent approach to managing risk overall.
Risk assessment is by its very nature an uncertain science. At HFL Risk Services we estimate the consequences and likelihood of occurrence, and refine those estimates only where the initial result is perceived to be unacceptable, in order to support the decision-making process.
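To illustrate that screening approach in the simplest possible terms, the sketch below ranks a handful of hypothetical scenarios by a coarse risk estimate and flags only the intolerable ones for further refinement; every figure and scenario name is an invented assumption, not HFL data or methodology.

```python
# A minimal sketch of screening-style risk assessment: coarse estimates first,
# refinement only where the result looks unacceptable. All scenario names,
# frequencies, consequence figures and the criterion are assumed for
# illustration - they are not HFL Risk Services data or methodology.

scenarios = {
    "tank overfill":     {"freq_per_year": 1e-3, "fatalities_per_event": 2.0},
    "small flange leak": {"freq_per_year": 1e-2, "fatalities_per_event": 0.001},
    "pump seal fire":    {"freq_per_year": 1e-3, "fatalities_per_event": 0.05},
}
screening_criterion = 1e-4  # tolerable fatalities per year (assumed)

for name, s in scenarios.items():
    # Coarse risk estimate: likelihood multiplied by consequence
    risk = s["freq_per_year"] * s["fatalities_per_event"]
    action = ("refine estimate / consider further safeguards"
              if risk > screening_criterion else "screen out")
    print(f"{name:18s} risk = {risk:.1e} fatalities/yr -> {action}")
```

In practice the refinement step would then bring in more detailed consequence modelling and plant-specific failure data for the scenarios that fail the screen.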
If incidents help to drive conservatism, then is the opposite true? Do we need to be reminded of what can go wrong and the consequences? And do past incidents cease to have relevance as technology changes?
Experience of incidents leads us to improve our estimates as a consequence of either what we learn directly, or from research that we carry out after the event. The industry has amassed, and continues to acquire, knowledge concerning failure modes and failure rates. This knowledge bank is strengthened as a result of the investigations of incidents across the world, helping to prove or disprove theories.
Mathematical modelling can help, but is this sufficient? We need to continually incorporate hard evidence to develop our theories. Buncefield is a good example of an event where the overpressure experienced from the vapour cloud explosion was far greater than could have been predicted from the available models at the time.
Corporate memory also comes into play in driving conservatism. As the saying goes “you can’t teach experience”, but you can certainly learn lessons from it. Anyone who has ever experienced a major incident such as Flixborough or Piper Alpha first hand is unlikely to forget the lessons learned.
The challenge for us all is to ensure that such fervour for safe practice is passed on to new entrants to the industry. With younger engineers moving on more frequently and older generations retiring, there is the very real danger that knowledge, vital to the integrity of the plant, is lost.
The economic climate and smaller profit margins as a result of overseas competition have made it difficult for many companies in the UK process industries to justify investment in new plant. Where equipment is being pushed beyond its planned life, regular risk assessment and retained corporate memory are vital. The ability to make accurate estimates, backed by hard evidence, can mean the difference between life and death.
Corporate memory is important and we must take steps to ensure that we absorb the lessons of the past. We shouldn’t have to wait for reports of another fatal accident to jolt us into undertaking regular risk assessments and acting in the interests of protecting those who work and live near our sites.
BP controls boss: It’s all about people
Mervyn Currie, senior advisor in controls and instrumentation for BP Exploration, braved the prevailing storm around the Deepwater Horizon disaster in the Gulf of Mexico to give a presentation on his company’s safety strategy at the Yokogawa User Conference, 23-24 June in Amsterdam. His talk emphasised BP’s commitment to “no accidents, no harm to people and no damage to the environment”.
In terms of process safety, BP follows an “inherently safer design approach”, in which engineers seek to understand and identify hazards and eliminate them at source.
So BP, for example, designs pressure vessels and pipelines to contain high pressures wherever possible, rather than using lower specifications that rely on safety instrumented systems (SIS) to prevent cascade trips and ensure orderly shutdowns, particularly offshore.
“We have to justify to ourselves and to our regulators that our risks are sufficiently low. So we hope to not have to place so much reliance on the SIS, and that will result in lower safety integrity levels (SIL),” said Currie.
For hazard consequences above a certain severity, BP carries out layer of protection analysis (LOPA) - a process described in the IEC 61511 standard, which gives guidance on where SIS should be used and on the reliability requirements for designing, installing and operating them. Here, BP has standardised on the North American LOPA method, which, said Currie, is more quantitative than the risk graphs the company previously used; those graphs are more conservative and typically result in higher SIL requirements.
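As a rough sketch of how a LOPA calculation arrives at a SIL target (the initiating frequency, protection-layer failure probabilities and risk criterion below are assumptions made for the example, not BP figures or part of IEC 61511 itself), the arithmetic typically runs along these lines:

```python
# An illustrative LOPA-style calculation for one hypothetical overpressure
# scenario. All frequencies, PFDs and the risk target are assumed values.

initiating_event_freq = 0.5   # initiating cause, events per year (assumed)
ipl_pfds = [0.1, 0.1]         # probability of failure on demand of existing
                              # independent protection layers (assumed)
tolerable_freq = 1e-5         # tolerable frequency of the consequence, per year

# Frequency of the hazardous outcome with the existing layers in place
residual_freq = initiating_event_freq
for pfd in ipl_pfds:
    residual_freq *= pfd

if residual_freq <= tolerable_freq:
    print("Existing layers meet the target - no additional SIF required")
else:
    # Risk reduction factor a new safety instrumented function must provide,
    # mapped onto the IEC 61511 SIL bands (RRF 10-100 = SIL 1,
    # 100-1,000 = SIL 2, 1,000-10,000 = SIL 3)
    rrf = residual_freq / tolerable_freq
    if rrf <= 10:
        sil = "below SIL 1"
    elif rrf <= 100:
        sil = "SIL 1"
    elif rrf <= 1000:
        sil = "SIL 2"
    elif rrf <= 10000:
        sil = "SIL 3"
    else:
        sil = "SIL 4 - consider redesigning the process instead"
    print(f"Required risk reduction factor {rrf:.0f} -> {sil}")
```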
Another important aspect is proof-testing of safety systems, Currie commenting: “If we don’t do proof testing at the specified intervals, we can be running at a higher risk level than we said we could tolerate.”
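Currie’s point can be illustrated with the common single-channel approximation PFDavg ≈ λDU × TI / 2, where λDU is the dangerous undetected failure rate and TI the proof-test interval; the failure rate and intervals below are assumed purely for illustration:

```python
# A minimal sketch of the effect of proof-test interval on a single-channel
# safety function, using the approximation PFDavg ~ lambda_DU * TI / 2.
# The failure rate and intervals are assumed values for illustration only.

lambda_du = 0.01  # dangerous undetected failures per year (assumed)

def pfd_avg(test_interval_years: float) -> float:
    """Average probability of failure on demand between proof tests."""
    return lambda_du * test_interval_years / 2

for interval in (1.0, 3.0):
    pfd = pfd_avg(interval)
    if pfd < 1e-3:
        band = "SIL 3"
    elif pfd < 1e-2:
        band = "SIL 2"
    else:
        band = "SIL 1"
    print(f"Proof test every {interval:g} year(s): PFDavg = {pfd:.3f} ({band} band)")

# Tripling the interval triples the average PFD, here dropping the function
# from the SIL 2 band into the SIL 1 band - i.e. running at a higher risk
# level than was claimed to be tolerable.
```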
The BP expert, however, concluded: “At the end of the day, though, all systems are only as good as the people that design, specify, install, operate and maintain them.
“From root cause failure analysis, virtually all incidents, failures and problems come down to people. So training and assessment is vital, for people involved in engineering SIS and control systems … and engineers and technicians involved in any aspect of the SIS.”