Don’t Waste a Good Mistake…Learn From It

Everyone is so busy. There’s not enough time to even think! This seems to be a challenge in many areas of business – we expect more and more from fewer people. Tom DeMarco describes this situation in his book “Slack”, which I have recently re-read. And I think he’s on to something when he quotes “Lister’s Law – People under time pressure don’t think faster.” And of course, that’s right. Put people under time pressure and they will try to cut out wasted time. They can re-prioritize so they spend more time on the task at hand. They can work longer hours. But eventually there is a limit, and so people start to take cognitive short-cuts: “this problem is the same as one I’ve encountered before, and so the solution must be the same”. Of course, that might be the right conclusion, but if you don’t have the time to interrogate it a little further then you run the risk of implementing the wrong solution and even making the problem worse.

One of the reasons I often hear as to why people don’t do root cause analysis is that they don’t have the time. People don’t want to be seen analysing a problem – much better to be taking action. But what if the action is the wrong action and is not based on the root cause? If the action is “re-training” you can be sure no-one has taken the time to really understand why the problem occurred. Having a good method you can rely on is part of the battle (I suggest DIGR® of course). But even knowing how is no good if you simply don’t have the time. Not having the time is ultimately a management issue. If managers asked “why” questions more and encouraged their staff to take time to think, question and get to root cause rather than rushing to a short-term fix, we would have true learning.

If we are not learning from things that go wrong to try to stop it recurring then we have missed an opportunity. If the culture of an organization is for learning and improvement then management must support staff with the right encouragement to understand, and good tools. But above all they must provide the time to really understand an issue, get to root cause and implement actions to try to stop recurrence. And if your manager isn’t providing time and encouraging you in this, challenge them on it – and get them to read this blog!

As Robert Kiyosaki said “Don’t waste a good mistake…learn from it.”

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Is more QC ever the right answer? Part II

In part I of this post, I described how processes that add a QC step can end up being the worst of all worlds – they take longer, cost more and give quality that is the same as (or worse than) a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I would suggest this:

In this process, we have a QC role and the person performing that role takes a risk-based approach to sampling the work, working together with the Specialist to improve the process by revising definitions, training etc. The sampling might be 100% for a Specialist who has not carried out the task previously, but would then reduce to low levels as the Specialist demonstrates competence. The Specialist is now accountable for their work – all outputs come from them. If a high level of errors is found then an escalation process is needed to contain the issue and get to root cause (see previous posts). You would also want to gather data about the typical errors seen during QC and plot them (Pareto charts are ideal for this) to help focus on where to develop the process further.
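This risk-based approach can be sketched as a simple rule: sample everything for a new Specialist, step the rate down as competence is demonstrated, and escalate back to 100% if errors exceed a tolerance. The thresholds and error categories below are invented for illustration – they are not part of DIGR® or any published standard:

```python
from collections import Counter

def sampling_rate(items_completed, error_rate):
    """Illustrative risk-based sampling rule (thresholds invented):
    100% QC for a new Specialist, stepping down as competence is
    demonstrated, and back to 100% if errors exceed tolerance."""
    if error_rate > 0.05:        # above tolerance: contain and escalate
        return 1.0
    if items_completed < 50:     # new to the task: check everything
        return 1.0
    if items_completed < 200:    # competence emerging: sample a quarter
        return 0.25
    return 0.05                  # demonstrated competence: spot checks only

# Tallying the errors found during QC provides the data for a Pareto chart.
errors_found = ["wrong document type", "wrong country", "wrong document type",
                "missing investigator", "wrong document type"]
pareto = Counter(errors_found).most_common()
print(pareto)  # most frequent error category first
```

The point of the tally is that the most frequent error category rises to the top, showing where to focus process development first.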

This may remind you of the move away from 100% Source Document Verification (SDV) at sites. The challenge with a change like this is that the process is not as simple – it requires more “thinking”. What do you do if you find a certain level of errors? This is where the reviewer (or the CRA in the case of SDV) needs a different approach. It can be a challenge to implement properly. But it should actually make the job more interesting.

So, back to the original question: Is more QC ever the answer? Sometimes – but make sure you think through the consequences and look for other options first.

In my next post, I’ll talk about a problem I come across again and again. People don’t seem to have enough time to think! How can you carry out effective root cause analysis or improve processes without the time to think?


Is More QC Ever the Right Answer? Part I

In a previous post, I discussed whether retraining is ever a good answer to an issue. Short answer – NO! So what about that other common one of adding more QC?

An easy corrective action to put in place is to add more QC. Get someone else to check. In reality, this is often a band-aid because you haven’t got to the root cause and are not able to tackle it directly. So you’re relying on catching errors rather than stopping them from happening in the first place. You’re not trying for “right first time” or “quality by design”.

“Two sets of eyes are better than one!” is the common defence of multiple layers of QC. After all, if someone misses an error, someone else might find it. Sounds plausible. And it does make sense for processes that occur infrequently and have unique outputs (like a Clinical Study Report). But for processes that repeat rapidly this approach becomes highly inefficient and ineffective. Consider a process like that below:

Specialist I carries out work in the process – perhaps entering metadata in relation to a scanned document (investigator, country, document type etc). They check their work and modify it if they see errors. Then they pass it on to Specialist II who checks it and modifies it if they see any errors. Then Specialist II passes it on to the next step. Two sets of eyes. What are the problems with this approach?

  1. It takes a long time. The two steps have to be carried out in series i.e. Specialist II can’t QC the same item at the same time as Specialist I. Everything goes through two steps and a backlog forms between the Specialists. This means it takes much longer to get to the output.
  2. It is expensive. A whole process develops around managing the workflow with some items fast-tracked due to impending audit. It takes the time of two people (plus management) to carry out the task. More resources means more money.
  3. The quality is not improved. This may seem odd, but think it through. There is no feedback loop in the process for Specialist I to learn from any errors that escape to Specialist II, so Specialist I continues to let those errors pass. And Specialist II will also make errors – in fact the rework they do might actually add more errors. The two may not even agree on what counts as an error. This is not a learning process. And what if the process is under stress due to lack of resources and tight timelines? With people rushing, do they check properly? Specialist I knows that Specialist II will pick up any errors so doesn’t check thoroughly. And Specialist II knows that Specialist I always checks their work so doesn’t check thoroughly. And so more errors come out than if Specialist II had not been there at all. Having everything go through a second QC as part of the process takes away accountability from the primary worker (Specialist I).
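The “two sets of eyes” arithmetic can be sketched with a toy model. The detection rates below are invented for illustration; the point is only that two complacent checks can let more errors through than one diligent, accountable check:

```python
def escape_rate(p_detect_1, p_detect_2=0.0):
    """Fraction of errors escaping, assuming independent checks with
    the given detection probabilities (a simplifying assumption)."""
    return (1 - p_detect_1) * (1 - p_detect_2)

# Two diligent checkers at 90% detection each: about 1% of errors escape.
diligent_pair = escape_rate(0.90, 0.90)

# Each relaxes because "the other will catch it" (detection falls to 50%):
complacent_pair = escape_rate(0.50, 0.50)   # 25% escape

# One accountable Specialist checking carefully at 90%: 10% escape.
single_diligent = escape_rate(0.90)

print(diligent_pair, complacent_pair, single_diligent)
```

With these (made-up) numbers, the complacent pair lets through two and a half times more errors than a single diligent Specialist – which is the mechanism described in point 3 above.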

So let’s recap. A process like this takes longer, costs more and gives quality that is the same as (or worse than) a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I will propose an approach in my next post. As a hint, it’s risk-based!


I Must Do Better Next Time

I was interviewed recently by LMK Clinical Research Consulting (podcast here). I was intrigued when in the interviewer’s introduction, he said that from reading my blog he knew that I “have a fundamentally positive outlook with how humans interact with systems”. I suppose that’s true but I’d not thought of it that way before. I do often quote W. Edwards Deming: “Nobody comes to work to do a bad job” and “A bad system will beat a good person every time”. The approach is really one of process thinking – it’s not that people don’t matter in processes, they are crucial. But processes should be designed to take account of the variation in how people work. They should be designed around the people using them. There is no point blaming the individual when things go wrong – time to learn and try to stop it going wrong next time. I wrote previously about the dangers of a culture of blame from the perspective of getting to root cause. Blame is corrosive. Most people don’t want to open up in an environment where people are looking for a scapegoat – so your chance of getting to root cause is much less.

Approaching blame in this way has an interesting effect on me. When things go wrong in everyday life, my starting point isn’t to blame myself (or someone else) but rather to think “why did that go wrong?” A simple everyday example…I was purchasing petrol (“gas” in American English) and there were two card readers at the till. The retailer asked me to put my card in – which I did. He immediately said “No – not that one!” So, I took it out and put it in the other one. “That’s pretty confusing having two of them,” I said. To which he replied, “no it’s not!” I can see how it’s not confusing to him because he is using the system every day but to me it was definitely confusing. I don’t think he was particularly interested in my logic on this, so I paid and said “Good-bye”. Of course, I don’t know why he had two card readers out – what was the root cause? But even without knowing the root cause, he certainly could have put a simple correction in place by telling me which card reader to put my card into.

There’s no question, we can all learn from our mistakes and we should take responsibility for them. But perhaps by extending the idea of no blame to ourselves, we can focus on what we can do to improve rather than simply thinking “I must do better next time.”

 


Some of My Recent Posts

A number of people on my subscriber list have told me they thought I had stopped posting. I haven’t – but it seems my posts have not been going out to many of you since about June last year. I hope this post reaches you. I’ve changed some of the settings. In case you missed any of my posts and wanted to take a look here are some of the popular ones since June:

Don’t blame me! The corrosive effect of blame

To Err is Human But Human Error is Not a Root Cause

Don’t waste people’s time on root cause analysis

Not everything that counts can be counted!

Stop Issues Recurring by Retraining?

Get rid of plastic packaging – are you mad?

And do let me know if you received this. Let’s hope I got to root cause…

Stop Issues Recurring by Retraining?

“That issue has happened again! We really need to improve the awareness of our staff – anyone who has not used the right format needs to be retrained. We can’t tolerate sloppy work. People just need to concentrate and do the job right!”

You may recall a previous post about human factors where I looked at why people make errors and the different types of errors. If the error was a slip (a type of action error where someone planned to do the right thing but did the wrong thing) then retraining won’t help. The person already knows what the right thing to do is. Similarly if the error was a lapse (where someone forgot to do it). Of course, with both of these error types, making people aware will help temporarily. But over time, they will likely go back to doing what they were doing before unless some other change is made.

If the error was a rule-based thinking error where the knowledge is there but was misapplied, it is unlikely that retraining will work long term. We would need to understand the situation and why it is that the knowledge was misapplied. If the date is written in American format but read as European (3/8/18 being 8-Mar-2018 rather than 3-Aug-2018) then can we change the date format to be unambiguous in the form dd-mmm-yyyy (03-Aug-2018)?
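The date ambiguity is easy to demonstrate in code. A minimal sketch using Python’s standard library – the same string parses to two different dates depending on which convention the reader applies:

```python
from datetime import datetime

ambiguous = "3/8/18"
as_american = datetime.strptime(ambiguous, "%m/%d/%y")  # intended: 8-Mar-2018
as_european = datetime.strptime(ambiguous, "%d/%m/%y")  # misread: 3-Aug-2018

# The same string yields two different dates depending on the reader:
print(as_american.date())  # 2018-03-08
print(as_european.date())  # 2018-08-03

# Writing in dd-mmm-yyyy removes the ambiguity at the source:
print(as_american.strftime("%d-%b-%Y"))
```

Because dd-mmm-yyyy spells out the month, there is no rule left to misapply – the error type is designed out rather than trained away.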

What if the error is a non-compliance? If someone didn’t carry out the full procedure because they were rushed and they get retrained, do we really think that in the future when they are rushed they are going to do something different? They might do short term but longer term it is unlikely.

For all these errors, retraining or awareness might help short term but they are unlikely to make a difference longer term. To fix the issue longer term, we need to understand better why the error occurred and focus on trying to stop its recurrence by changes to process/systems.

A thinking error that is knowledge-based is different though. If someone made an error because they don’t know what they should be doing then clearly providing training and improving their knowledge should help. But even here, “retraining” is the wrong action. It implies they have already been trained and if so, the question is, why didn’t that training work? Giving them the same training again is likely to fail unless we understand what went wrong the first time. We need to learn from the failure in the training process and fix that.

Of course, this does not mean that training is not important. It is vital. Processes are least likely to have errors when they are designed to be as simple as possible and are run by well-trained people. When there are errors, making sure people know that they can happen is useful and will help short term but it is not a long term fix (corrective action). Longer term fixes need a better understanding of why the error(s) occurred and this is where the individuals running the process can be of vital help. As long as there is a no-blame culture (see previous post) you can work with those doing the work to make improvements and help stop the same errors recurring. Retraining is not the answer and it can actually have a negative impact. We want those doing the work to come forward with errors so we can understand them better, improve the process/system and reduce the likelihood of them happening again. If you came forward acknowledging an error you had made and were then made to retake an hour of on-line training on a topic you already know, how likely would you be to come forward a second time? Retraining can be seen as a punishment.

So, to go back to the post title “Stop Issues Recurring by Retraining?” No, that won’t work. Retraining is never a good corrective action.

What about that other corrective action that comes up again and again – more QC? That’s the subject of a future post.

 


Don’t waste people’s time on root cause analysis

In an earlier post, I described a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. I then described actions you might take without the need for root cause analysis (RCA) such as – review medical condition of the subjects affected, review stability data to try to estimate the risk, ask CRAs to check expiry dates on all vaccine at sites on their next visit, remind all sites of the need to check the expiry date prior to administering the vaccine. So if you were to now go through the time and effort of a DIGR® RCA and you still end up with these and similar actions, why did you bother with the RCA? RCA should lead to actions that tackle the root cause and try to stop the issue recurring – to help you sleep at night. If you or your organization is not going to implement actions based on the RCA then don’t carry out the RCA. A couple of (real) examples from an office environment might help to illustrate the point.

In a coffee area there are two fridges for people to store milk, their lunch etc. One of them has a sign on it. The sign is large and very clear “Do not use”. And yet, if you open the fridge, you will see milk and people’s lunch in it. No-one takes any notice of the notice. But why not? In human factors analysis, the error occurring as people ignore the sign is a routine non-compliance. Most people don’t pay much attention to signs around the office and this is just another sign that no-one takes notice of. Facilities Management occasionally sends out a moaning email that people aren’t to use the fridge but again no-one really takes any notice.

What is interesting is that the sign also contains some root cause analysis. Underneath “Do not use” in small writing it states “Seal is broken and so fridge does not stay cold”. Someone had noticed at some point that the fridge was not as cold as it should be and root cause analysis (RCA) had led to the realisation that a broken seal was the cause. So far, so good. But the action following this was pathetic – putting up a sign telling people not to use it. Indeed, when you think about it, no RCA was needed at all to get to the action of putting up the sign. The RCA was a waste of time if this is all it led to. What should they have done? Replaced the seal perhaps. Or replaced the fridge. Or removed the fridge. But putting a sign up was not good enough.

The second example – a case of regular slips on the hall floors outside the elevators – including one minor concussion. A RCA was carried out and the conclusion was that the slips were due to wet surfaces when the time people left the office coincided with the floors being cleaned. So the solution was to make sure there were more of the yellow signs warning of slips at the time of cleaning. But slips still occurred – because people tended to ignore the signs. A better solution might have been to change the time of the cleaning or to put an anti-slip coating on the floor. There’s no point in spending time on determining the root cause unless you think beyond the root cause to consider options that might really make a difference.

Root cause analysis is not always easy and it can be time consuming. The last thing you want to do is waste the output by not using it properly. Always ask yourself – could I have taken this action before I knew what the root cause was? If so, then you are clearly not using the results of the RCA and it is likely your action on its own will not be enough. Using this approach might help you to determine whether “retraining” is a good corrective action. I will talk more about this in a future post.

Here’s a site I found with a whole series of signs that help us understand one of the reasons signs tend to be ignored. Some of them made me cry with laughter.

 

Photo: Hypotheseyes CC BY-SA 4.0

Text: © 2017 Dorricott MPI Ltd. All rights reserved.


Big Data – Garbage in, garbage out?

Change of plan for this post…I visited the dentist recently. And before the consultation, I was handed an iPad with a form to complete. I was sure I had completed this form before last time – and checking with the receptionist, she said it had to be completed every six months. So I had completed it before. It was a long form asking all sorts of details about medical history, medicines being taken etc. It included questions about lifestyle – how much exercise you get, whether you smoke, how much alcohol you drink etc. It all seemed rather over the top to be completing every six months. It seemed such an inefficient process and prone to error. Every patient completing all these detailed questions (often in a rush). And no way to check what my previous answers were – wouldn’t it be nice if they just pre-filled my previous answers and let me make any adjustments? All a little frustrating really. So I asked the receptionist why all this was needed.

“The government needs it,” was the reply. Really? What on earth do they do with it all, I wondered? I have to admit, that answer made me try a little experiment. I tried to see if the form would submit without me entering anything. It didn’t – it told me I had to sign the form first. So I signed it and sure enough it was accepted. So I handed the iPad back to the receptionist and she thanked me for being so quick. Off I went to my appointment and all was fine. And I felt as though I had struck a very small blow for freedom.

I wonder what does happen to all the data. Does it really go to “the government”? What would they do with it? Is it a case of gathering big data that can then be mined for trends – how the various factors affect dental health maybe? Well, one thing’s for sure, I wouldn’t trust the conclusions given how easy it seems to be to dupe the system. What guarantee is there on the accuracy of any of the data? Seems to me a case of garbage in, garbage out.

As we are all wowed by what Big Data can do and the incredible neural networks and algorithms teams can develop to help us (see previous blog), we do need to think about the source of the Big Data. Where has it come from? Could it be biased (almost certainly)? And in what way? How can we guard against the impact of that bias? There’s been a lot in the news recently about the dangers of bias – for example in Time and the Guardian. If we’re not careful, we can build bias into the algorithms and just continue with the discrimination we already have. Our best defence is scepticism. Just as when, in root cause analysis, an expert is quoted for evidence. As Edward Hodnett says: “Be sceptical of assertions of fact that start, ‘J. Irving Allerdyce, the tax expert, says…’ There are at least ten ways in which these facts may not be valid. (1) Allerdyce may not have made the statement at all. (2) He may have made an error. (3) He may be misquoted. (4) He may have been quoted only in part….”

Being sceptical and asking questions can help us avoid erroneous conclusions. Ask questions like: “how do you know that?”, “do we have evidence for that?” and “could there be bias here?”

Big Data has huge potential. But let’s not be wowed by it so that we don’t question. Be sceptical. Remember, it could be another case of garbage in, garbage out.

Image: Pixabay


To Err is Human But Human Error is Not a Root Cause

In a recent post I talked about Human Factors and different error types. You don’t necessarily need to classify human errors into these types, but splitting them out this way helps us think about the different sorts of errors there are. It also helps us move beyond stopping at ‘human error’ when carrying out root cause analysis (using DIGR® or another method). Part of the problem with having ‘human error’ as a root cause is that there isn’t much you can do with your conclusion. To err is human, after all, so let’s move on to something else. But people make errors for a reason, and trying to understand why they made the error can lead us down a much more fruitful path to actions we can implement to try to prevent recurrence. If a pilot makes an error that leads to a near disaster or worse, we don’t just conclude that it was human error and there is nothing we can do about it. In a crash involving a self-driving car we want to go beyond “human error” as a root cause to understand why the human error might have occurred. As we get more self-driving cars on the road, we want to learn from every incident.

By getting beyond human error and considering different error types, we can start to think of what some actions are that we can implement to try to stop the errors occurring (“corrective actions”). Ideally, we want processes and systems to be easy and intuitive and the people to be well trained. When people are well trained but the process and/or system is complex, there are likely to be errors from time to time. As W. Edwards Deming once said, “A bad system will beat a good person every time.”

Below are examples of each of the error types described in my last post and example corrective actions.

  - Action errors (slips). Example: entering data into the wrong field in EDC. Example corrective action: error and sense checks to flag a possible error.
  - Action errors (lapses). Example: forgetting to check the fridge temperature. Example corrective action: a checklist that shows when the fridge was last checked.
  - Thinking errors (rule-based). Example: reading a date written in American format as European (3/8/16 being 8-Mar-2016 rather than 3-Aug-2016). Example corrective action: use an unambiguous date format such as dd-mmm-yyyy.
  - Thinking errors (knowledge-based). Example: incorrect use of a scale. Example corrective action: ensure proper training and testing on use of the scale; only those trained can use it.
  - Non-compliance (routine, situational and exceptional). Example: not noting down details of the drug used in the Accountability Log due to rushing. Example corrective action: regular checking by staff and consequences for not noting appropriately.

These are examples and you should be able to think of additional possible corrective actions. But then which ones would you actually implement? You want the most effective and efficient ones of course. You want your actions to be focused on the root cause – or the chain of cause and effect that leads to the problem.

The most effective actions are those that eliminate the problem completely such as adding an automated calculation of BMI (Body Mass Index) from height and mass, for example, rather than expecting staff to calculate it correctly. If it can’t go wrong, it won’t go wrong (the corollary of Murphy’s Law). This is mistake-proofing.
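The automated BMI calculation mentioned above can be sketched in a few lines. This is a hedged illustration (the plausibility limits are invented for the example, not clinical reference ranges): compute the value rather than asking staff to do the arithmetic, and reject impossible inputs rather than storing them.

```python
def bmi(height_m, weight_kg):
    """Mistake-proofing sketch: automate the calculation and refuse
    implausible inputs (e.g. height entered in cm rather than m).
    The limits below are invented for illustration."""
    if not 0.5 < height_m < 2.5:
        raise ValueError(f"implausible height: {height_m} m")
    if not 10 < weight_kg < 400:
        raise ValueError(f"implausible weight: {weight_kg} kg")
    return round(weight_kg / height_m ** 2, 1)

print(bmi(1.75, 70))  # 22.9
```

The range checks add a second layer of mistake-proofing: a common slip such as entering height in centimetres is rejected at the point of entry instead of producing a nonsense BMI downstream.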

The next most effective actions are ones that help people to get it right. Drop-down lists and clear, concise instructions are examples of this. Although instructions do have their limitations (as I will discuss in a future post). “No-one goes to work to do a bad job!” (W Edwards Deming again) so let’s help them do a good job.

The least effective actions are ones that rely on a check catching an error right at the end of the process. For example, the nurse checking the expiry date on a vial before administering. That’s not to say these checks should not be there, but rather they should be thought of as the “last line of defence”.

Ideally, you also want some sort of check to make sure the revised process is working. This check is an early signal as to whether your actions are effective at fixing the problem.

Got questions or comments? Interested in training options? Contact me.

 



“To err is human” – Alexander Pope

Don’t blame me! The corrosive effect of blame

Root cause analysis (RCA) is not always easy. And there is frequently not enough time. So where it is done, it is common for people to take short cuts. The easiest short cuts are:

  1. to assume this problem is the same as one you’ve seen before and that the cause is the same (I mentioned this in a previous post). Of course, you might be right. But it might be worth taking a little extra time to make sure you’ve considered all options. The DIGR® approach to RCA can really help here as it takes everyone through the facts and process in a logical way.
  2. to blame someone (or a department, site etc)

Blame is corrosive. As soon as that game starts being played, everyone clams up. Most people don’t want to open up in that sort of environment because they risk every word they utter being used against them. So once blame comes into the picture you can forget getting to root cause.

To help guard against blame, it’s useful to know a little about the field of Human Factors. This is an area of science focused on designing products, systems, or processes to take proper account of the interaction between them and the people who use them. It is used extensively in the airline industry and has helped them get to their current impressive safety record. The British Health and Safety Executive has a great list of different error types.

This is based on the Human Factors Analysis and Classification System (HFACS). The error types are split into:

  - Action errors (slips). Example: turning the wrong switch on or off.
  - Action errors (lapses). Example: forgetting to lock a door.
  - Thinking errors (rule-based) – where a known rule is misapplied. Example: ignoring an evacuation alarm because of previous false alarms.
  - Thinking errors (knowledge-based) – where lack of prior knowledge leads to a mistake. Example: using an out-of-date map to plot an unfamiliar route.
  - Non-compliance (routine, situational and exceptional). Example: speeding in a car (knowingly ignoring the speed limit because everyone else does).

So how can human factors help us? Consider a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. It might be easiest to blame the nurse administering or the pharmacist prescribing. They should have taken more care and checked the expiry date properly. What could the human errors have been?

They might have forgotten (lapse). Or they might have read the expiry date in European date format when it was written in American date format (rule-based thinking error). Or they might have been rushing and not had time (non-compliance). Of course, we know the error occurred on multiple occasions and by different people as it happened at multiple sites. This suggests a systemic issue and that reminding or retraining staff will only have a limited effect.

Maybe it would be better to make sure that expired drug can’t reach the point of being dispensed or administered so that we don’t rely on the final check by the pharmacist and nurse. We still want them to check but do not expect them to find expired vaccine.
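One sketch of that upstream control, with invented vial identifiers and dates: expired vials simply never reach the dispensing list, so the pharmacist’s and nurse’s checks become a last line of defence rather than the only one.

```python
from datetime import date

def dispensable(stock, today):
    """Upstream control sketch: filter expired vials out of the
    dispensing list before anyone can select them."""
    return [vial for vial in stock if vial["expiry"] >= today]

# Hypothetical stock records for illustration:
stock = [
    {"vial": "A123", "expiry": date(2018, 6, 30)},
    {"vial": "B456", "expiry": date(2018, 1, 31)},  # expired
]
print(dispensable(stock, today=date(2018, 3, 1)))  # only vial A123 remains
```

In a real drug supply system this logic would sit in the inventory or IRT system, but the principle is the same: design the process so the error cannot happen, rather than relying on the final human check.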

After all, as W. Edwards Deming said “No-one goes to work to do a bad job!”

In my next post I will talk about the different sorts of actions you can take to try to minimise the chance of human error.

And as an added extra, here’s a link to an astonishing story that emphasises the importance of taking blame out of RCA.

 

Photo: NYPhotographic

