To Err is Human, But Human Error is Not a Root Cause

In a recent post I talked about Human Factors and different error types. You don’t necessarily need to classify human errors into these types, but splitting them out this way helps us think about the different sorts of errors there are. It also moves us beyond the point where a root cause analysis (using DIGR® or another method) stops at ‘human error’. Part of the problem with ‘human error’ as a root cause is that there isn’t much you can do with the conclusion: to err is human, after all, so let’s move on to something else. But people make errors for a reason, and trying to understand why they made the error leads us down a much more fruitful path, towards actions we can implement to try to prevent recurrence. If a pilot makes an error that leads to a near disaster or worse, we don’t just conclude that it was human error and there is nothing we can do about it. In a crash involving a self-driving car, we want to go beyond “human error” as a root cause and understand why the human error might have occurred. As we get more self-driving cars on the road, we want to learn from every incident.

By getting beyond human error and considering the different error types, we can start to identify actions we could implement to try to stop the errors occurring (“corrective actions”). Ideally, we want processes and systems to be easy and intuitive, and the people to be well trained. When people are well trained but the process and/or system is complex, there are likely to be errors from time to time. As W. Edwards Deming once said, “A bad system will beat a good person every time.”

Below are examples of each of the error types described in my last post, together with example corrective actions.

Error Type | Example | Example Corrective Action
Action errors (slips) | Entering data into the wrong field in EDC | Error and sense checks to flag a possible error (see the sketch after this table)
Action errors (lapses) | Forgetting to check the fridge temperature | Checklist that shows when the fridge was last checked
Thinking errors (rule based) | Reading a date written in American format as European (3/8/16 being 8-Mar-2016 rather than 3-Aug-2016) | Use an unambiguous date format such as dd-mmm-yyyy
Thinking errors (knowledge based) | Incorrect use of a scale | Ensure proper training and testing on use of the scale; only those trained can use it
Non-compliance (routine, situational and exceptional) | Not noting down details of the drug used in the Accountability Log due to rushing | Regular checking by staff and consequences for not noting appropriately
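
To make the first corrective action in the table more concrete, here is a minimal sketch of the kind of sense check an EDC system might apply. The field names and plausible ranges are hypothetical, not taken from any particular system.

```python
# Minimal sketch of an EDC-style sense check (hypothetical field names and ranges).
# A value outside the plausible range for its field is flagged for review rather
# than silently accepted - catching, for example, a weight typed into the height field.

PLAUSIBLE_RANGES = {
    "height_cm": (100, 220),   # assumed plausible adult height in centimetres
    "weight_kg": (30, 250),    # assumed plausible adult weight in kilograms
}

def sense_check(field: str, value: float) -> list[str]:
    """Return a list of warnings; an empty list means the value looks plausible."""
    low, high = PLAUSIBLE_RANGES[field]
    if not (low <= value <= high):
        return [f"{field}={value} is outside the plausible range {low}-{high}; please confirm"]
    return []

# Example: 72 cm is implausible as an adult height, so the entry is queried,
# prompting the site to check whether the value belongs in another field.
print(sense_check("height_cm", 72))
```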

These are only examples, and you should be able to think of other possible corrective actions. But which ones would you actually implement? You want the most effective and efficient ones, of course, and you want your actions to be focused on the root cause – or the chain of cause and effect that leads to the problem.

The most effective actions are those that eliminate the problem completely, such as adding an automated calculation of BMI (Body Mass Index) from height and mass rather than expecting staff to calculate it correctly. If it can’t go wrong, it won’t go wrong (the corollary of Murphy’s Law). This is mistake-proofing.
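
As a concrete illustration of mistake-proofing by automation, here is a minimal sketch of a derived BMI calculation (in Python, purely for illustration; the function name and rounding are my own choices, not part of any particular system).

```python
# Minimal sketch of mistake-proofing by automation: BMI is derived from the
# height and mass already captured, so staff never calculate (or mis-calculate) it.

def bmi(height_cm: float, weight_kg: float) -> float:
    """Body Mass Index = mass (kg) / height (m) squared."""
    height_m = height_cm / 100.0
    return round(weight_kg / (height_m ** 2), 1)

# Example: 175 cm and 70 kg gives a BMI of 22.9, computed the same way every time.
print(bmi(175, 70))
```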

The next most effective actions are ones that help people to get it right. Drop-down lists and clear, concise instructions are examples of this, although instructions do have their limitations (as I will discuss in a future post). “No-one goes to work to do a bad job!” (W. Edwards Deming again), so let’s help them do a good job.

The least effective actions are ones that rely on a check catching an error right at the end of the process, for example a nurse checking the expiry date on a vial before administering it. That’s not to say these checks should not be there, but rather that they should be thought of as the “last line of defence”.

Ideally, you also want some sort of check to make sure the revised process is working. This check is an early signal as to whether your actions are effective at fixing the problem.

Got questions or comments? Interested in training options? Contact me.

 

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

“To err is human” – Alexander Pope

Don’t blame me! The corrosive effect of blame

Root cause analysis (RCA) is not always easy. And there is frequently not enough time. So where it is done, it is common for people to take short cuts. The easiest short cuts are:

  1. to assume this problem is the same as one you’ve seen before and that the cause is the same (I mentioned this in a previous post). Of course, you might be right. But it might be worth taking a little extra time to make sure you’ve considered all options. The DIGR® approach to RCA can really help here as it takes everyone through the facts and process in a logical way.
  2. to blame someone (or a department, a site, etc.)

Blame is corrosive. As soon as that game starts being played, everyone clams up. Most people don’t want to open up in that sort of environment because they risk every word they utter being used against them. So once blame comes into the picture you can forget getting to root cause.

To help guard against blame, it’s useful to know a little about the field of Human Factors. This is an area of science focused on designing products, systems, or processes to take proper account of the interaction between them and the people who use them. It is used extensively in the airline industry and has helped them get to their current impressive safety record. The British Health and Safety Executive has a great list of different error types.

This is based on the Human Factors Analysis and Classification System (HFACS). The error types are split into:

Error Type | Example
Action errors (slips) | Turning the wrong switch on or off
Action errors (lapses) | Forgetting to lock a door
Thinking errors (rule based) – where a known rule is misapplied | Ignoring an evacuation alarm because of previous false alarms
Thinking errors (knowledge based) – where lack of prior knowledge leads to a mistake | Using an out-of-date map to plot an unfamiliar route
Non-compliance (routine, situational and exceptional) | Speeding in a car (knowingly ignoring the speed limit because everyone else does)

So how can Human Factors help us? Consider a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of injections of the trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. It might be easiest to blame the nurse administering the vaccine or the pharmacist prescribing it. They should have taken more care and checked the expiry date properly. What could the human errors have been?

They might have forgotten (lapse). Or they might have read the expiry date in European date format when it was written in American date format (rule-based thinking error). Or they might have been rushing and not had time (non-compliance). Of course, we know the error occurred on multiple occasions and by different people as it happened at multiple sites. This suggests a systemic issue and that reminding or retraining staff will only have a limited effect.
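
To see how easily that rule-based thinking error arises, here is a short sketch (in Python, purely for illustration) showing the same string parsed under both conventions, and how an unambiguous dd-mmm-yyyy rendering removes the ambiguity.

```python
# Sketch of the date-format ambiguity: the same string yields two different dates
# depending on whether it is read as American (month/day/year) or European
# (day/month/year). Writing dates as dd-mmm-yyyy removes the ambiguity.

from datetime import datetime

raw = "3/8/16"
as_american = datetime.strptime(raw, "%m/%d/%y")   # 8-Mar-2016
as_european = datetime.strptime(raw, "%d/%m/%y")   # 3-Aug-2016

for label, parsed in [("American", as_american), ("European", as_european)]:
    # %d-%b-%Y prints an unambiguous form such as 08-Mar-2016
    print(f"{label} reading of {raw}: {parsed.strftime('%d-%b-%Y')}")
```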

Maybe it would be better to make sure that expired drug can’t reach the point of being dispensed or administered so that we don’t rely on the final check by the pharmacist and nurse. We still want them to check but do not expect them to find expired vaccine.
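
One way to picture such an upstream control is a simple filter in the inventory or dispensing system that withholds expired vials before they ever reach the pharmacist. This is a hypothetical sketch, not a description of any particular system.

```python
# Hypothetical sketch of an upstream control: the dispensing list is built by
# filtering out any vial whose expiry date has passed, so the pharmacist's and
# nurse's checks become the last line of defence rather than the only one.

from dataclasses import dataclass
from datetime import date

@dataclass
class Vial:
    vial_id: str
    expiry: date

def dispensable(stock: list[Vial], on: date) -> list[Vial]:
    """Return only the vials still in date on the given day."""
    return [v for v in stock if v.expiry >= on]

stock = [
    Vial("V001", date(2017, 3, 31)),
    Vial("V002", date(2017, 1, 15)),  # expired: never offered for dispensing
]
print([v.vial_id for v in dispensable(stock, date(2017, 2, 1))])  # ['V001']
```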

After all, as W. Edwards Deming said, “No-one goes to work to do a bad job!”

In my next post I will talk about the different sorts of actions you can take to try to minimise the chance of human error.

And as an added extra, here’s a link to an astonishing story that emphasises the importance of taking blame out of RCA.

 

Photo: NYPhotographic

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.