What’s Swiss Cheese got to do with Root Cause Analysis?

“There can be only one true root cause!” Let’s examine this oft-made statement with an example of a root cause analysis. Many patients in a study have been found at database lock to have been mis-stratified – causing difficulties with analysis and potentially invalidating the whole study. We discover that at randomization, the health professional is asked “Is the BMI ≤ 25? Yes/No”. In talking with CRAs and sites, we realise that at a busy site, where English may not be the user’s first language, this question is rather easy to answer incorrectly. If we wanted to make it easier for the health professional to get it right, why not simply ask for the patient’s height and weight? Once those are entered, the IXRS could calculate the BMI and determine whether it is less than or equal to 25. This would be much less likely to lead to error.

So, we determine that the root cause is that “the IXRS was set up without considering how to reduce the likelihood of user error.” We missed an opportunity to prevent the error occurring. That’s definitely actionable. Unfortunately, of course, it’s too late for this study, but we can learn from the error for existing and future studies. We can look at other studies to see how they stratify patients and whether a similar error is likely to occur. We can update the standards for IXRS for future studies. Great!
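As a sketch of that mistake-proofed design (the function names and cut-off are illustrative, not from any real IXRS): collect the raw measurements and let the system derive the stratum.

```python
def bmi(height_m: float, weight_kg: float) -> float:
    """Body Mass Index in kg/m^2, computed from raw measurements."""
    if height_m <= 0 or weight_kg <= 0:
        raise ValueError("height and weight must be positive")
    return weight_kg / height_m ** 2

def stratum(height_m: float, weight_kg: float) -> str:
    """Derive the stratification arm inside the system, rather than
    asking the user to answer a yes/no BMI question themselves."""
    return "BMI <= 25" if bmi(height_m, weight_kg) <= 25 else "BMI > 25"
```

For example, a patient 1.75 m tall weighing 70 kg has a BMI of about 22.9, so the system assigns the “BMI ≤ 25” stratum without the user doing any arithmetic – the opportunity for a mis-answered question is designed out.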

But is there more to it? Were there other actions that might have helped prevent the issue? Why was this not detected earlier? Were there opportunities to save this study? As we investigate further, we find:

  1. During user acceptance testing, this same error occurred but was put down to user error.
  2. There were several occasions during the study where a CRA had noticed that the IXRS question was answered incorrectly. They modified the setting in EDC but were unable to change the stratification as this is set at randomization. No-one had realized that this was a systemic issue – it had been detected at several sites, pointing to a common cause rather than a one-off special cause.

Our one root cause definitely takes us forward. But there is more to learn from this issue. Perhaps there are some other root causes too, such as “the results of user acceptance testing were not evaluated for the potential of user error” and “issues detected by CRAs were not recognised as systemic because there is no standard way of pulling together common issues found at sites.” These could both lead to additional actions that might help to reduce the likelihood of the issue recurring. And notice that actions on these root causes might also help reduce the likelihood of other issues occurring too.
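One standard way of pulling out common issues is simply to aggregate CRA-reported issues by category across sites. A minimal sketch (the data shape and threshold are hypothetical):

```python
from collections import defaultdict

def systemic_candidates(issues, min_sites=3):
    """Flag issue categories reported at several different sites.

    `issues` is a list of (site_id, category) pairs, e.g. pulled from
    CRA monitoring-visit reports; the site threshold is arbitrary.
    """
    sites_by_category = defaultdict(set)
    for site_id, category in issues:
        sites_by_category[category].add(site_id)
    return {cat: sorted(sites)
            for cat, sites in sites_by_category.items()
            if len(sites) >= min_sites}
```

Even a crude roll-up like this would have shown the mis-stratification recurring at multiple sites long before database lock.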

In my experience, root cause analysis rarely leads to one root cause. In a recent training course I was running for the Institute of Clinical Research, one of the delegates reminded me of the “Swiss Cheese” model of root causes. There are typically many hazards, such as a user entering data into an IXRS. These hazards don’t normally end up as issues because we put preventive measures in place (such as standards, user acceptance testing, training). You can think of each of these preventive measures as a slice of Swiss cheese – they prevent many hazards becoming issues but won’t prevent everything. Sometimes, a hazard can get through a hole in the cheese.

We also put detection methods in place (such as source data verification, edit checks, listing review). You can think of each of these as additional slices of cheese which prevent issues growing more significant but won’t catch everything. It’s when the holes in each of the layers of prevention and detection line up that a hazard can become a significant issue that might even lead to the failure of a study.

So, in our example, the IXRS was set up poorly (a prevention layer failed), the user acceptance testing wasn’t reviewed considering user error (another prevention layer failed), and CRA issues were not reviewed systematically (a detection layer failed). All these failures led to the study potentially being lost.

So if, in your root cause analysis, you have only one root cause, maybe it’s time to take another look. Are there maybe other learnings you can gain from the issue? Are there other prevention or detection layers that failed?

Do you need help in root cause analysis? Take a look at DIGR-ACT training. Or give me a call.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Deliver Us From Delivery Errors

I returned home recently to find two packages waiting for me. They had been delivered while I was out. One was something I was expecting. The other was not – it was addressed to someone else, and at a completely different address (except the house number). How did that happen, I wondered? I called the courier company. After I had waited 15 minutes to get through, the representative listened to the problem and was clearly perplexed, as the item had been signed for on the system. Eventually he started “Here’s what I can do for you…” and went on to explain how they could pick it up and deliver it to the right address. Problem solved.

Except that it didn’t really solve anything. It caused me inconvenience (e.g. a 20-minute call) for which no apology ever came. Their customer did not receive the service they paid for (the package would now be late). The package was put at risk – I could have kept it and no-one would have known. There was no effort at trying to understand how the error was made; they seemed to be too busy for that. It has damaged their reputation – I would certainly not use that delivery firm. It was simply seen as a problem to resolve, not an opportunity to improve.

The next day, a neighbour came round to hand over a mis-delivered parcel. You guessed it – the same courier company had delivered a separate package, one that was meant for us, to a neighbour. It’s great our neighbour brought it round. But the company will never hear of that error.

So many learnings from this! If the company was customer-focused, they would really want to understand how such errors occur (by carrying out root cause analysis). And they would want to learn from the problems rather than just resolving each one individually. They should take a systemic approach. They should also recognise that the data they hold on the number of errors (mis-deliveries in this case) is incomplete. Helpful people sort mis-deliveries out for them every day without them even knowing, so when they review data on the scale of the problem they should be aware that their figures are an underestimate. According to a recent UK survey, 20% of people have had a parcel lost during delivery in the last 12 months – this is, after all, a critical error. Any decent company would want to really understand the issue and put systems in place to try to prevent future occurrences. And as for customer service, I can’t believe I didn’t even get a “sorry for the inconvenience.”

To me, this smacks of a culture of cost-cutting and lack of customer focus. Without a culture of continuous improvement, they will lose ground against their competitors. I have dealt with other courier companies and some of them are really on the ball. Let’s hope their management realises they need to change sooner rather than later…

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Stop Issues Recurring by Retraining?

“That issue has happened again! We really need to improve the awareness of our staff – anyone who has not used the right format needs to be retrained. We can’t tolerate sloppy work. People just need to concentrate and do the job right!”

You may recall a previous post about human factors where I looked at why people make errors and the different types of errors. If the error was a slip (a type of action error where someone planned to do the right thing but did the wrong thing) then retraining won’t help. The person already knows what the right thing to do is. Similarly if the error was a lapse (where someone forgot to do it). Of course, with both of these error types, making people aware will help temporarily. But over time, they will likely go back to doing what they were doing before unless some other change is made.

If the error was a rule-based thinking error where the knowledge is there but was misapplied, it is unlikely that retraining will work long term. We would need to understand the situation and why it is that the knowledge was misapplied. If the date is written in American format but read as European (3/8/18 being 8-Mar-2018 rather than 3-Aug-2018) then can we change the date format to be unambiguous in the form dd-mmm-yyyy (03-Aug-2018)?
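To make the ambiguity (and the fix) concrete, here is a small Python sketch; the conversion to dd-mmm-yyyy assumes an English locale:

```python
from datetime import date, datetime

# The same string parses to two different dates depending on the
# convention assumed -- that is the ambiguity:
us = datetime.strptime("3/8/18", "%m/%d/%y").date()  # American: 8-Mar-2018
eu = datetime.strptime("3/8/18", "%d/%m/%y").date()  # European: 3-Aug-2018

def unambiguous(d: date) -> str:
    """Render a date as dd-mmm-yyyy, which no reader can misorder."""
    return d.strftime("%d-%b-%Y")
```

Because the month is spelt out, `unambiguous(eu)` gives "03-Aug-2018" and the rule-based misreading simply cannot occur.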

What if the error is a non-compliance? If someone didn’t carry out the full procedure because they were rushed and they get retrained, do we really think that in the future when they are rushed they are going to do something different? They might do short term but longer term it is unlikely.

For all these errors, retraining or awareness might help short term but they are unlikely to make a difference longer term. To fix the issue longer term, we need to understand better why the error occurred and focus on trying to stop its recurrence by changes to process/systems.

A thinking error that is knowledge-based is different though. If someone made an error because they don’t know what they should be doing then clearly providing training and improving their knowledge should help. But even here, “retraining” is the wrong action. It implies they have already been trained and if so, the question is, why didn’t that training work? Giving them the same training again is likely to fail unless we understand what went wrong the first time. We need to learn from the failure in the training process and fix that.

Of course, this does not mean that training is not important. It is vital. Processes are least likely to have errors when they are designed to be as simple as possible and are run by well-trained people. When there are errors, making sure people know that they can happen is useful and will help short term but it is not a long term fix (corrective action). Longer term fixes need a better understanding of why the error(s) occurred and this is where the individuals running the process can be of vital help. As long as there is a no-blame culture (see previous post) you can work with those doing the work to make improvements and help stop the same errors recurring. Retraining is not the answer and it can actually have a negative impact. We want those doing the work to come forward with errors so we can understand them better, improve the process/system and reduce the likelihood of them happening again. If you came forward acknowledging an error you had made and were then made to retake an hour of on-line training on a topic you already know, how likely would you be to come forward a second time? Retraining can be seen as a punishment.

So, to go back to the post title “Stop Issues Recurring by Retraining?” No, that won’t work. Retraining is never a good corrective action.

What about that other corrective action that comes up again and again – more QC? That’s the subject of a future post.

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Don’t waste people’s time on root cause analysis

In an earlier post, I described a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. I then described actions you might take without the need for root cause analysis (RCA) such as – review medical condition of the subjects affected, review stability data to try to estimate the risk, ask CRAs to check expiry dates on all vaccine at sites on their next visit, remind all sites of the need to check the expiry date prior to administering the vaccine. So if you were to now go through the time and effort of a DIGR® RCA and you still end up with these and similar actions, why did you bother with the RCA? RCA should lead to actions that tackle the root cause and try to stop the issue recurring – to help you sleep at night. If you or your organization is not going to implement actions based on the RCA then don’t carry out the RCA. A couple of (real) examples from an office environment might help to illustrate the point.

In a coffee area there are two fridges for people to store milk, their lunch etc. One of them has a sign on it. The sign is large and very clear “Do not use”. And yet, if you open the fridge, you will see milk and people’s lunch in it. No-one takes any notice of the notice. But why not? In human factors analysis, the error occurring as people ignore the sign is a routine non-compliance. Most people don’t pay much attention to signs around the office and this is just another sign that no-one takes notice of. Facilities Management occasionally sends out a moaning email that people aren’t to use the fridge but again no-one really takes any notice.

What is interesting is that the sign also contains some root cause analysis. Underneath “Do not use” in small writing it states “Seal is broken and so fridge does not stay cold”. Someone had noticed at some point that the temperature was not as cold as it should be and root cause analysis (RCA) had led to the realisation that a broken seal was the cause. So far, so good. But the action following this was pathetic – putting up a sign telling people not to use it. Indeed, when you think about it, no RCA was needed at all to get to the action of putting up the sign. The RCA was a waste of time if this is all it led to. What should they have done? Replaced the seal perhaps. Or replaced the fridge. Or removed the fridge. But putting a sign up was not good enough.

The second example – a case of regular slips on the hall floors outside the elevators – including one minor concussion. A RCA was carried out and the conclusion was that the slips were due to wet surfaces when the time people left the office coincided with the floors being cleaned. So the solution was to make sure there were more of the yellow signs warning of slips at the time of cleaning. But slips still occurred – because people tended to ignore the signs. A better solution might have been to change the time of the cleaning or to put an anti-slip coating on the floor. There’s no point in spending time on determining the root cause unless you think beyond the root cause to consider options that might really make a difference.

Root cause analysis is not always easy and it can be time consuming. The last thing you want to do is waste the output by not using it properly. Always ask yourself – could I have taken this action before I knew what the root cause was? If so, then you are clearly not using the results of the RCA and it is likely your action on its own will not be enough. Using this approach might help you to determine whether “retraining” is a good corrective action. I will talk more about this in a future post.

Here’s a site I found with a whole series of signs that help us understand one of the reasons signs tend to be ignored. Some of them made me cry with laughter.

 

Photo: Hypotheseyes CC BY-SA 4.0

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Big Data – Garbage in, garbage out?

Change of plan for this post… I visited the dentist recently. And before the consultation, I was handed an iPad with a form to complete. I was sure I had completed this form before, last time – and checking with the receptionist, she said it had to be completed every six months. So I had completed it before. It was a long form asking all sorts of details about medical history, medicines being taken etc. It included questions about lifestyle – how much exercise you get, whether you smoke, how much alcohol you drink etc. It all seemed rather over the top to be completing every six months. It seemed such an inefficient process and prone to error: every patient completing all these detailed questions (often in a rush), and no way to check what my previous answers were. Wouldn’t it be nice if they just pre-filled my previous answers and I could make any adjustments? All a little frustrating really. So I asked the receptionist why all this was needed.

“The government needs it,” was the reply. Really? What on earth do they do with it all, I wondered? I have to admit, that answer made me try a little experiment. I tried to see if the form would submit without me entering anything. It didn’t – it told me I had to sign the form first. So I signed it and, sure enough, it was accepted. I handed the iPad back to the receptionist and she thanked me for being so quick. Off I went to my appointment and all was fine. And I felt as though I had struck a very small blow for freedom.

I wonder what does happen to all the data. Does it really go to “the government”? What would they do with it? Is it a case of gathering big data that can then be mined for trends – how the various factors affect dental health maybe? Well, one thing’s for sure, I wouldn’t trust the conclusions given how easy it seems to be to dupe the system. What guarantee is there on the accuracy of any of the data? Seems to me a case of garbage in, garbage out.

As we are all wowed by what Big Data can do and the incredible neural networks and algorithms teams can develop to help us (see previous blog), we do need to think about the source of the Big Data. Where has it come from? Could it be biased (almost certainly)? And in what way? How can we guard against the impact of that bias? There’s been a lot in the news recently about the dangers of bias – for example in Time and the Guardian. If we’re not careful, we can build bias into the algorithms and just continue with the discrimination we already have. Our best defence is scepticism. Just as when, in root cause analysis, an expert is quoted for evidence. As Edward Hodnett says: “Be sceptical of assertions of fact that start, ‘J. Irving Allerdyce, the tax expert, says…’ There are at least ten ways in which these facts may not be valid. (1) Allerdyce may not have made the statement at all. (2) He may have made an error. (3) He may be misquoted. (4) He may have been quoted only in part….”

Being sceptical and asking questions can help us avoid erroneous conclusions. Ask questions like: “how do you know that?”, “do we have evidence for that?” and “could there be bias here?”

Big Data has huge potential. But let’s not be wowed by it so that we don’t question. Be sceptical. Remember, it could be another case of garbage in, garbage out.

Image: Pixabay

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

To Err is Human But Human Error is Not a Root Cause

In a recent post I talked about Human Factors and different error types. You don’t necessarily need to classify human errors into these types, but splitting them out this way helps us think about the different sorts of errors there are. It also helps us move beyond stopping at ‘human error’ when carrying out root cause analysis (using DIGR® or another method). Part of the problem with having ‘human error’ as a root cause is that there isn’t much you can do with the conclusion. To err is human, after all, so let’s move on to something else. But people make errors for a reason, and trying to understand why they made the error can lead us down a much more fruitful path to actions we can implement to try to prevent recurrence. If a pilot makes an error that leads to a near disaster or worse, we don’t just conclude that it was human error and there is nothing we can do about it. In a crash involving a self-driving car, we want to go beyond “human error” as a root cause to understand why the human error might have occurred. As we get more self-driving cars on the road, we want to learn from every incident.

By getting beyond human error and considering different error types, we can start to think of what some actions are that we can implement to try to stop the errors occurring (“corrective actions”). Ideally, we want processes and systems to be easy and intuitive and the people to be well trained. When people are well trained but the process and/or system is complex, there are likely to be errors from time to time. As W. Edwards Deming once said, “A bad system will beat a good person every time.”

Below are examples of each of the error types described in my last post and example corrective actions.

| Error Type | Example | Example Corrective Action |
| --- | --- | --- |
| Action errors (slips) | Entering data into the wrong field in EDC | Error and sense checks to flag a possible error |
| Action errors (lapses) | Forgetting to check fridge temperature | Checklist that shows when the fridge was last checked |
| Thinking errors (rule-based) | Reading a date written in American format as European (3/8/16 being 8-Mar-2016 rather than 3-Aug-2016) | Use an unambiguous date format such as dd-mmm-yyyy |
| Thinking errors (knowledge-based) | Incorrect use of a scale | Ensure proper training and testing on use of the scale; only those trained can use it |
| Non-compliance (routine, situational and exceptional) | Not noting down details of the drug used in the Accountability Log due to rushing | Regular checking by staff and consequences for not noting appropriately |
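The first corrective action above – a sense check that flags a value which is plausible as a keystroke but implausible as data – can be sketched as follows; the field limits are hypothetical:

```python
def sense_check(field: str, value: float, limits: dict) -> list[str]:
    """Flag entries outside a plausible range for the field.
    A warning rather than a hard stop: it catches likely slips
    (e.g. data keyed into the wrong field) for confirmation."""
    lo, hi = limits[field]
    if not lo <= value <= hi:
        return [f"{field} = {value} is outside the expected range "
                f"{lo}-{hi}; please confirm the entry."]
    return []
```

A height of 1.75 keyed into the weight field would trip the check immediately, while a plausible value passes silently.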

These are examples and you should be able to think of additional possible corrective actions. But then which ones would you actually implement? You want the most effective and efficient ones of course. You want your actions to be focused on the root cause – or the chain of cause and effect that leads to the problem.

The most effective actions are those that eliminate the problem completely, such as adding an automated calculation of BMI (Body Mass Index) from height and weight rather than expecting staff to calculate it correctly. If it can’t go wrong, it won’t go wrong (the corollary of Murphy’s Law). This is mistake-proofing.

The next most effective actions are ones that help people to get it right. Drop-down lists and clear, concise instructions are examples of this. Although instructions do have their limitations (as I will discuss in a future post). “No-one goes to work to do a bad job!” (W Edwards Deming again) so let’s help them do a good job.

The least effective actions are ones that rely on a check catching an error right at the end of the process. For example, the nurse checking the expiry date on a vial before administering. That’s not to say these checks should not be there, but rather they should be thought of as the “last line of defence”.

Ideally, you also want some sort of check to make sure the revised process is working. This check is an early signal as to whether your actions are effective at fixing the problem.

Got questions or comments? Interested in training options? Contact me.

 

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

“To err is human” – Alexander Pope

Don’t blame me! The corrosive effect of blame

Root cause analysis (RCA) is not always easy. And there is frequently not enough time. So where it is done, it is common for people to take short cuts. The easiest short cuts are:

  1. to assume this problem is the same as one you’ve seen before and that the cause is the same (I mentioned this in a previous post). Of course, you might be right. But it might be worth taking a little extra time to make sure you’ve considered all options. The DIGR® approach to RCA can really help here as it takes everyone through the facts and process in a logical way.
  2. to blame someone (or a department, site etc)

Blame is corrosive. As soon as that game starts being played, everyone clams up. Most people don’t want to open up in that sort of environment because they risk every word they utter being used against them. So once blame comes into the picture you can forget getting to root cause.

To help guard against blame, it’s useful to know a little about the field of Human Factors. This is an area of science focused on designing products, systems, or processes to take proper account of the interaction between them and the people who use them. It is used extensively in the airline industry and has helped them get to their current impressive safety record. The British Health and Safety Executive has a great list of different error types.

This is based on the Human Factors Analysis and Classification System (HFACS). The error types are split into:

| Error Type | Example |
| --- | --- |
| Action errors (slips) | Turning the wrong switch on or off |
| Action errors (lapses) | Forgetting to lock a door |
| Thinking errors (rule-based) – where a known rule is misapplied | Ignoring an evacuation alarm because of previous false alarms |
| Thinking errors (knowledge-based) – where lack of prior knowledge leads to a mistake | Using an out-of-date map to plot an unfamiliar route |
| Non-compliance (routine, situational and exceptional) | Speeding in a car (knowingly ignoring the speed limit because everyone else does) |

So how can human factors help us? Consider a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. It might be easiest to blame the nurse administering or the pharmacist prescribing: they should have taken more care and checked the expiry date properly. But what could the human errors have been?

They might have forgotten (lapse). Or they might have read the expiry date in European date format when it was written in American date format (rule-based thinking error). Or they might have been rushing and not had time (non-compliance). Of course, we know the error occurred on multiple occasions and by different people as it happened at multiple sites. This suggests a systemic issue and that reminding or retraining staff will only have a limited effect.

Maybe it would be better to make sure that expired drug can’t reach the point of being dispensed or administered so that we don’t rely on the final check by the pharmacist and nurse. We still want them to check but do not expect them to find expired vaccine.
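A minimal sketch of that upstream control – every name here is hypothetical – would be to filter expired (or soon-to-expire) vials out of the dispensable stock before anyone can select them:

```python
from datetime import date, timedelta

def dispensable(vials: dict, today: date, margin_days: int = 0) -> dict:
    """Return only the vials that are still usable, removing expired
    (or nearly expired) stock upstream so the nurse's check becomes
    a last line of defence rather than the only one.

    `vials` maps vial IDs to expiry dates; the safety margin is a
    made-up parameter for illustration."""
    cutoff = today + timedelta(days=margin_days)
    return {vid: expiry for vid, expiry in vials.items() if expiry >= cutoff}
```

With a margin of, say, 90 days, stock nearing expiry is quarantined well before the point of administration instead of relying on a human to spot the date on the vial.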

After all, as W. Edwards Deming said “No-one goes to work to do a bad job!”

In my next post I will talk about the different sorts of actions you can take to try to minimise the chance of human error.

And as an added extra, here’s a link to an astonishing story that emphasises the importance of taking blame out of RCA.

 

Photo: NYPhotographic

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Process Improvement: Let’s Automate Our Processes!

I came across an example of a process in need of improvement recently. Like you, I come across these pretty regularly in everyday life. But this one has an interesting twist…

I was applying for a service via a broker. The broker recommended a company and he was excited because this company had a new process using electronic signatures. They had ‘automated the process’ rather than needing paper forms, snail mail etc. I was intrigued too and so was pleased to give it a go. The email arrived, and it was a little disconcerting because it warned that if I made any error in the electronic signature process it was my fault and might invalidate it. They would not check for accuracy. When I clicked on the link there was a problem because the broker had entered my landline number into the system and not my mobile number. The phone number was needed to send an authentication text. So he attempted to correct that and a new email arrived. When I clicked the link this time it told me that “the envelope is being updated”. I had no idea what envelope it was talking about – a pretty useless error message. I wasn’t feeling great about this process improvement now.

The broker said “Let’s go back to the paper way then.” He emailed me a 16-page form that I had to complete. I had to get it signed by 4 different people in a particular order. It was a pretty challenging form that needed to be completed, scanned and emailed back. I did wonder as I completed it just how many times there must be errors in completion (including, possibly my own). There seemed to be hundreds of opportunities for error. Makes sense, I thought, to implement a process improvement and use a process with electronic signatures – to ‘automate the process’. Where they had failed was clearly in the implementation – they had not trained the broker or given adequate instructions to the end user (me). Error messages using IT jargon were of no help to the end user. It reminded me of an electronic filing system I saw implemented some years ago, where a company decided to ‘automate the process’ of filing. The IT Department was over the moon because they had implemented the system one week ahead of schedule. But no-one was actually using it because they hadn’t been trained, the roll-out had not been properly considered, there was no thought about reinforcing behaviours or monitoring actual use. No change management considerations. A success for IT but a failure for the company!

Anyway, back to the story. After completing the good old paper-based process, I was talking some more with the broker and he said “their quote for you was good but their application process is lousy. Other companies have a much easier way of doing it – for most of them the broker completes the information on-line and then sends a two-page form via email to print, review, sign (once), scan and return. After that a confirmation pack comes through and the consumer has the chance to correct errors at that stage. But it’s all assumed to be right at the start.” These companies had a simple and efficient process and no need to ‘automate the process’ with electronic signatures.

Hang on – why does the company I used need a 16-page form and 4 signatures, I hear you ask? Who knows! They had clearly recognised that their process needed improving but had headed down the route of ‘let’s automate it’. They could have saved themselves an awful lot of the cost of implementing their new improved process if they had talked with the broker about his experience first.

The lesson here is – don’t just take a bad process and try to ‘automate’ it with IT. Start by challenging the process. Why is it there? Does it have to be done that way? There might even be other companies out there who have a slick process already – do you know how your competition solves the problem? Even more intriguingly, perhaps another industry has solved a similar problem in a clever way that you could copy. If you discover that a process is actually unnecessary and you can dramatically simplify it then you’re mistake-proofing the process. Taking out unnecessary steps means they can’t go wrong.

In my next post I will explore the confusion surrounding the term CAPA.

Breaking News – the broker just got back to me to tell me I had got one of the pages wrong on the 16-page form. This is definitely a process in need of improvement!

 

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

Go Step-By-Step to get to Root Cause

In an earlier post, I described my DIGR® method of root cause analysis (RCA):

Define

Is – Is Not

Go Step By Step

Root Cause

In this post, I wanted to look more at Go Step By Step and why it is so powerful.

“If you can’t describe what you’re doing as a process, you don’t know what you’re doing” – a wonderful quote from W. Edwards Deming! And there is a lot of truth to it. In this blog, I’ve been using a hypothetical situation to help illustrate my ideas. Consider the situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. You’ve taken actions to contain the situation for now. And have started using DIGR® to try to get to the root cause. It’s already brought lots of new information out and you’ve got to Go Step By Step. As you start to talk through the process, it becomes clear that not everyone has the same view of what each role in the process should do. A swim-lane process map for how vaccine should be quarantined shows tasks split into roles and helps the team to focus on where the failures are occurring:

In going step-by-step through the process, it becomes clear that the Clinical Research Associates (CRAs) are not all receiving the emails. Nor are they clear what they should do with them when they do receive them. The CRA role here is really a QC role however – the primary process takes place in the other two swimlanes. And it was the primary process that broke down – the email going from the Drug Management System to the Site (step highlighted in red).

So we now have a focus for our efforts to try to stop recurrence. You can probably see ways to redesign the process. That might work for future clinical trials but could lead to undesired effects in the current one. So a series of checks might be needed. For example, sending test emails from the system to confirm receipt by site and CRA or regular checks for bounced emails. Ensuring CRAs know what they should do when they receive an email would also help – perhaps the text in the email can be clearer.
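A sketch of one such check – comparing the notifications the system sent against the receipt confirmations that came back (the data shapes are hypothetical):

```python
def unconfirmed_sites(sent_to: set, confirmed_by: set) -> list:
    """Sites that were sent a quarantine notification but never
    confirmed receipt -- candidates for follow-up before expired
    stock can reach the point of being administered."""
    return sorted(sent_to - confirmed_by)
```

Run regularly, a report like this turns a silent failure (an email that never arrived) into a visible, actionable list.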

By going step-by-step through the process as part of DIGR®, we bring the team back to what they have control of. We have moved away from blaming the pharmacists or the nurses at the two sites. Going down the blame route is never good in RCA as I will discuss in a future post. Reviewing the process as it should be also helps to combat cognitive bias which I’ve mentioned before.

As risk assessment, control and management are more clearly laid out in ICH GCP E6 (R2), process maps can help with risk identification and reduction too. To quote from section 5.0: “The sponsor should identify risks to critical trial processes and data”. Now we’ve discovered a process that is failing and could have significant effects on subject safety. By reviewing process maps of such critical processes, consideration can be given to the identification, prioritisation and control of risks. This might involve tools such as Failure Mode and Effects Analysis (FMEA) and redesign where possible in an effort to mistake-proof the process. This shows one way in which RCA and risk connect – the RCA led us to understand a risk better, and we can then put in controls to reduce the risk (by reducing the likelihood of occurrence). We can even consider how, in future trials, we might modify the process to make similar errors much less likely and so reduce the risk from the start. This is true prevention of error.

In my next post I will talk about how (not) to ‘automate’ a process.

 

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott MPI Ltd.