Why Can’t You Answer My Simple Question?

Often during a Root Cause Analysis session, it’s easy to get lost in the detail. The issues are typically complex and there are many aspects that need to be considered. Something that really doesn’t help is when people seem to be unable to answer a simple question. For example, you might ask “At what point would you consider escalating such an issue?” and you get a response such as “I emphasised the importance of the missing data in the report and follow-up letter.” The person seems to be making a statement about something different and has side-stepped your question. Why might that be?

Of course, it might be simply that they didn’t understand the question. Maybe English isn’t their first language, or the phone line is poor. Or they were distracted by an urgent email coming in. If you think this is the reason, it’s worth asking again – perhaps re-wording and making sure you’re clear.

Or maybe they don’t know the answer but feel they need to answer anyway. A common questioning technique is to ask an open question and then stay silent to draw out a response. People tend not to like silence, and so they fill the gap. An unintended consequence of this might be that they fill the gap with something that doesn’t relate to the question you asked. They may feel embarrassed that they don’t know the answer and feel they should offer something. Listen carefully to the response; if it appears they simply don’t know the answer, you could ask whether anyone else might. Perhaps the person who knows is not at the meeting.

Another possibility is that they are fearful. They might fear the reaction of others. Perhaps procedures weren’t followed and they know they should have been. But admitting it might bring them, or their colleagues, trouble. This is probably more difficult to ascertain. To understand whether this is going on, you’ll need to build a rapport with those involved in the root cause analysis. Can you help them by asking them to think of the factors in Gilbert’s Behavior Engineering Model that support good performance? Was the right information available at the right time to carry out the task? What about appropriate, well-functioning tools and resources? And were those involved properly trained? See if you can get them thinking about how to stop the issue recurring – as they come up with ideas, that might lead to a root cause of the actual issue. For example, if they think the escalation plan could be clearer, is a root cause that the escalation plan was unclear?

As W. Edwards Deming put it, “No-one goes to work to do a bad job!” People want to help improve things for next time. If they don’t seem to be answering your question – what do you think the root cause of that might be? And how can you overcome it?

Do you need help with root cause analysis? Take a look at DIGR-ACT® training. Or give me a call.

 

Please FDA – Retraining is NOT the Answer!

The FDA has recently issued a draft Q&A Guidance Document on “A Risk-Based Approach to Monitoring of Clinical Investigations”. Definitely worth taking a look. There are 8 questions and answers. Two that caught my eye:

Q2. “Should sponsors monitor only risks that are important and likely to occur?”

The answer mentions that sponsors should also “consider monitoring risks that are less likely to occur but could have a significant impact on the investigation quality.” These are the High Impact, Low Probability events that I talked about in this post. The simple model of calculating risk by multiplying Impact and Probability essentially prioritises a High Impact, Low Probability event the same as a Low Impact, High Probability event. But many experts in risk management think these should not be prioritised equally: High Impact, Low Probability events should be prioritised higher. So I think this is a really interesting answer.
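To make that concrete, here is a minimal sketch – the 1-to-5 scales and the impact-squared weighting are my own illustration, not from the guidance – showing why simple multiplication ranks the two kinds of event equally, and one possible adjustment that separates them:

```python
# A minimal sketch: the 1-to-5 scales and the impact-squared weighting
# are my own illustration, not taken from the FDA guidance.
risks = {
    "high impact, low probability": (5, 1),
    "low impact, high probability": (1, 5),
}

for name, (impact, probability) in risks.items():
    simple = impact * probability          # both score 5: ranked equally
    weighted = impact ** 2 * probability   # one possible fix: weight impact more heavily
    print(f"{name}: simple = {simple}, impact-weighted = {weighted}")

# high impact, low probability: simple = 5, impact-weighted = 25
# low impact, high probability: simple = 5, impact-weighted = 5
```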

Q7. “How should sponsors follow up on significant issues identified through monitoring, including communication of such issues?”

One part of the answer here has left me aghast. “…some examples of corrective and preventive actions that may be needed include retraining…” I have helped investigate issues in clinical trials so many times, and run root cause analysis training again and again. I always tell people that retraining is not a corrective action. Corrective actions should be based on the root cause(s). See a previous post on this and the confusing terminology. If you think someone needs retraining, ask yourself “why?” Could it be:

      • They were trained but didn’t follow the training. Why? Could it be that one or more of the Behavior Engineering Model categories was not supported? For example, they didn’t have time, they didn’t have the right tools, or they weren’t provided with regular feedback to tell them how they were doing. If it’s one of these, then focus on that. Retraining will not be effective.
      • They have never received training. Why? Maybe they were absent when the rest of the staff was trained and there was no plan to make sure they caught up later. They don’t need retraining – they were never trained. They need training. And might there be others in this situation? Who else missed the training and needs it now? Maybe at other sites too.
      • There was something missing from the training (which looks increasingly likely to be one possible root cause in the tragic case of the Boeing 737 Max). Then the training needs to be modified. And it’s not about retraining one person or one site on material they had already received – it’s about training everyone on the revised material. Of course, later on, you might want to try to understand why an important component was missing from the training in the first place.

I firmly believe retraining is never the answer. There must be something deeper going on. If your only action is retraining, then you’ve not got to the root cause. I can accept reminding as an immediate action – but it’s not based on a root cause. It is more about providing feedback and will only have a short-term effect. An elephant may never forget, but people do.

Got questions or comments? Interested in training options? Contact me.

 


Beyond Human Error

One of my most frequently viewed posts is on human error. I am intrigued by this. I’ve run training on root cause analysis a number of times, and occasionally someone will question my claim that human error is not a root cause. Of course, it may be in the chain of cause and effect – but why did the error occur? And you can be sure it’s not the first time the error has occurred – so why has it occurred on other occasions? What could be done to make the error less likely to occur? This line of questioning is how we make process improvements and learn from things that go wrong, rather than just blaming someone for making a mistake and “re-training” them.

There is another approach to errors that I rather like. I was introduced to it by SAM Sather of Clinical Pathways. It comes from Gilbert’s Behavior Engineering Model and provides six categories of support that need to be in place for an individual to perform well in a system:

    • Expectations & Feedback – Is there a standard for the work? Is there regular feedback?
    • Tools & Resources – Is there enough time to perform well? Are the right tools in place?
    • Incentives & Disincentives – Are incentives contingent on good performance?
    • Knowledge & Skills – Is there a lack of knowledge or skill for the tasks?
    • Capacity & Readiness – Are people the right match for the tasks?
    • Motives & Preferences – Is there recognition of work well done?

 

Let’s take an example I’ve used a number of times: getting documents into the TMF (Trial Master File). As you consider Gilbert’s Behavior Engineering Model, you might ask:

    • Do those submitting documents know what the quality standard is?
    • Do they have time to perform the task well? Does the system help them to get it right first time?
    • Are there any incentives for performing well?
    • Do they know how to submit documents accurately?
    • Are they detail-oriented and likely to get it right?
    • Does the team celebrate success?
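If it helps to make this systematic, here is a small sketch – entirely my own illustration, not from the original post – of the six categories as a reusable checklist structure, applied to the TMF example above:

```python
# A sketch of my own (not from the original post): Gilbert's six
# Behavior Engineering Model categories as a reusable checklist,
# applied to the TMF document-filing example.
BEM_CHECKLIST = {
    "Expectations & Feedback": "Do those submitting documents know the quality standard?",
    "Tools & Resources": "Do they have time and a system that helps them get it right first time?",
    "Incentives & Disincentives": "Are there any incentives for performing well?",
    "Knowledge & Skills": "Do they know how to submit documents accurately?",
    "Capacity & Readiness": "Are they detail-oriented and likely to get it right?",
    "Motives & Preferences": "Does the team celebrate success?",
}

def unsupported(answers):
    """Return the categories where the supporting condition is missing."""
    return [category for category, in_place in answers.items() if not in_place]

# Example: a process where only Knowledge & Skills is supported --
# people are trained, but nothing else is in place.
answers = {category: False for category in BEM_CHECKLIST}
answers["Knowledge & Skills"] = True

for category in unsupported(answers):
    print(f"Gap: {category} - {BEM_CHECKLIST[category]}")
```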

I have seen TMF systems where the answer to most of those questions is “no”. Is it any wonder that there are rejection rates of 15%, cycle times of many weeks, and TMFs that are never truly “inspection ready”?

After all, “if you always do what you’ve always done, you will always get what you’ve always got”. Time to change approach? Let’s get beyond human error.

Got questions or comments? Interested in training options? Contact me.

 

Text: © 2019 DMPI Ltd. All rights reserved.

DIGR-ACT® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.
