“The trouble […] is that you are trying to get out of your statistics something that you cannot get. No statistics will automatically do your safety job for you. They merely show you what sort of accidents you are having – where you must look for trouble.” (Williams, 1927)
As stated eloquently above, we in the EHS profession often make the mistake of assuming that safety metrics will magically communicate exactly what is wrong: that by simply viewing the presented information, management will always understand the problem and what needs to be done to fix it. Unfortunately, letting the data “speak for itself” invites biases that can easily lead to misconceptions and misinterpretations. A robust EHS data program should include not only the collection of information but also a clear understanding of what it means (and what it does not), along with a plan for acting appropriately on it.
Another aspect of “misleading indicators” is that some managers treat safety metrics as if they tell the whole story, even when they reveal virtually nothing. Examples include low or zero injury rates and inspection reports showing 100% safe observations. This false sense of security is dangerous because it invites inaction: why act to improve when the metrics give the appearance that everything is fine? People naturally take the easy route and focus on surface-level numbers instead of on why things are occurring as they are. Tracking and positively using safety metrics and performance indicators should prompt further inquiry toward a holistic view.
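One way to see why a “zero” can mislead comes from the statistical rule of three: observing zero events over a given exposure only bounds the true rate, and with limited exposure that bound is not reassuring. A minimal Python sketch (the figures are illustrative, not from the article):

```python
# Illustrative sketch: the statistical "rule of three" says that if zero
# events are observed across n units of exposure, the 95% upper confidence
# bound on the true event rate is roughly 3 / n.

def zero_incident_upper_bound(hours_worked: float, per_hours: float = 200_000) -> float:
    """Highest incident rate (per `per_hours` of exposure) still consistent,
    at 95% confidence, with having observed zero incidents."""
    return 3.0 / hours_worked * per_hours

# A crew logging 50,000 hours with zero recordables could still have a true
# rate as high as ~12 per 200,000 hours -- "zero" here proves very little.
print(zero_incident_upper_bound(50_000))  # -> 12.0
```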
Safety metrics involving incident data are quite common. However, uncertainty arises over what should (and should not) be measured and used. For example, the raw number of incidents reported says little without the exposure-adjusted rate; the percentage of incidents reported is impossible to know (how do you know how many have not been reported?); and the severity of incidents is potentially useful, but it often fails to differentiate a lucky outcome from a likely potential one. Furthermore, incidents are only useful if something comes out of them and the lessons learned are well communicated. If lessons are not learned (including effecting a positive change), an incident is no more than a discussion point or a topic for a lecture or presentation.
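To make the count-versus-rate point concrete (the article itself prescribes no formula), the widely used OSHA recordable incident rate normalizes the raw count to 200,000 hours of exposure, roughly 100 full-time workers for a year:

```python
def recordable_incident_rate(recordables: int, hours_worked: float) -> float:
    """OSHA-style rate: recordable incidents per 200,000 hours worked,
    i.e., per 100 full-time workers over one year."""
    return recordables * 200_000 / hours_worked

# The same raw count means very different things at different exposures:
print(recordable_incident_rate(4, 100_000))    # 8.0 -- a small site
print(recordable_incident_rate(4, 2_000_000))  # 0.4 -- a large site
```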
Collecting data such as safety metrics is becoming much easier, especially given the technological revolution in smartphones and wearables. What remains elusive is a well-planned program for turning a collection of disparate information into actionable insights: insights that can ultimately be leveraged and delivered in a way that allows your organization to automate expert recommendations on a global scale.
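As a hedged sketch of what such a program might do in miniature, the snippet below rolls records from disparate sources up into a per-site rate and flags sites for expert review. The sources, field names, and review threshold are all hypothetical assumptions, not any real product's data model:

```python
from collections import defaultdict

# Hypothetical illustration: the sources, field names, and review threshold
# below are assumptions for the sketch, not any real product's data model.
records = [
    {"site": "Plant A", "source": "inspection_app",  "findings": 3, "hours": 2_000},
    {"site": "Plant A", "source": "wearable_alerts", "findings": 2, "hours": 2_000},
    {"site": "Plant B", "source": "inspection_app",  "findings": 1, "hours": 9_000},
]

def sites_needing_review(rows, per_hours=100_000, threshold=50.0):
    """Roll disparate per-source records up into a per-site finding rate,
    then return the sites whose rate exceeds the review threshold."""
    findings, hours = defaultdict(int), defaultdict(float)
    for row in rows:
        findings[row["site"]] += row["findings"]
        hours[row["site"]] += row["hours"]
    rates = {site: findings[site] / hours[site] * per_hours for site in findings}
    return {site: round(rate, 1) for site, rate in rates.items() if rate > threshold}

print(sites_needing_review(records))  # {'Plant A': 125.0} -- surface for review
```

In practice, the normalization and threshold would be tuned by the organization; the point is that the aggregation and flagging step, not the raw collection, is what turns disparate data into something a manager can act on.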
AUTHOR BIO
Cary comes to the SafetyStratus team as the Vice President of Operations with almost 30 years of experience across several industries. He began his career in the United States Navy’s nuclear power program, then moved into the public sector as an Environmental, Health & Safety Manager in the utility industry. After almost thirteen years, he transitioned into the construction sector as a Safety Director at a large, international construction company. Most recently, he held the position of Manager of Professional Services at a safety software company, overseeing the customer success, implementation, and process consulting aspects of the services team.
At SafetyStratus, he is focused on helping achieve the company’s vision of “Saving lives and the environment by successfully integrating knowledgeable people, sustainable processes, and unparalleled technology”.
References
Williams, S.J. (1927). The Manual of Industrial Safety. Chicago & New York: A.W. Shaw Company.