Have You Asked the Regulators?

To quote W. Edwards Deming, “Every system is perfectly designed to give you exactly what you are getting today.” We all know our industry needs radical innovation, and we are seeing it in many places – as you can see when attending DIA. I wonder, though, why innovation seems to be so slow in our industry compared with others.

I was talking to a systems vendor recently about changing the approach to QC for documents going into the TMF. I was taken aback by the comment “Have you asked the regulators about it? I’m not sure what they would think.” Regulation understandably plays a big part in our industry, but have we learned to fear it? If every time someone wants to try something new, the first response is “But what would the regulators think?”, doesn’t that limit innovation and improvement? I’m not arguing for ignoring regulation, of course – it is there for a very important purpose. But does our attitude to it stifle innovation?

When you consider the update to ICH E6 (R2), it is not exactly radical when compared with other industries. Carrying out a formal risk assessment has been standard practice for Health & Safety in factories and workplaces for years. ISO – not a body known for moving swiftly – introduced its risk management standard, ISO 31000, in 2009. The financial sector started developing its approach to risk management in the 1980s (although that didn’t seem to stop the 2008 financial crash!). And, of course, insurance has been based on understanding and quantifying risk for decades before that.

There has always been a level of risk management in clinical trials – but usually rather informal and based on the knowledge and experience of the individuals involved in running the trial. Implementing ICH E6 (R2) brings a more formal approach and encourages lessons learned to be used as part of risk assessment, evaluation and control for other trials.

So, if ICH E6 (R2) is not radical, why did our industry not have a formal and developed approach to risk management beforehand? Could it be this fear of the regulator? Do we have to wait until the regulators tell us it is OK to think the unthinkable (such as not having 100% SDV)?

What do you think? Is our pace of change right? Does fear of regulators limit our horizons?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

 

Is More QC Ever the Right Answer? Part II

In Part I of this post, I described how, by adding a QC step, some processes end up being the worst of all worlds – they take longer, cost more and deliver the same (or worse) quality as a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I would suggest this:

In this process, we have a QC role, and the person performing that role takes a risk-based approach to sampling the work and works together with the Specialist to improve the process by revising definitions, training etc. The sampling might be 100% for a Specialist who has not carried out the task previously, but it would then reduce to low levels as the Specialist demonstrates competence. The Specialist is now accountable for their work – all outputs come from them. If a high level of errors is found, an escalation process is needed to contain the issue and get to root cause (see previous posts). You would also want to gather data about the typical errors found during QC and plot them (Pareto charts are ideal for this) to help focus on where to develop the process further.
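For those who like to see the logic spelled out, here is a minimal sketch of that sampling-and-feedback idea in Python. The rates, thresholds and error categories are hypothetical assumptions chosen purely for illustration – they are not figures from this post or from any regulation.

    import random
    from collections import Counter

    # Hypothetical parameters -- assumptions for illustration only.
    FULL_SAMPLING = 1.0       # 100% QC while a Specialist is new to the task
    REDUCED_SAMPLING = 0.1    # low-level sampling once competence is demonstrated
    CLEAN_RUN_TARGET = 50     # consecutive error-free items before stepping down
    ESCALATION_RATE = 0.05    # observed error rate that triggers escalation

    def sampling_rate(consecutive_clean_items):
        """Risk-based sampling: check everything for a new Specialist,
        then step down once a clean run demonstrates competence."""
        if consecutive_clean_items < CLEAN_RUN_TARGET:
            return FULL_SAMPLING
        return REDUCED_SAMPLING

    def select_for_qc(consecutive_clean_items):
        """Decide whether this item is sampled for QC."""
        return random.random() < sampling_rate(consecutive_clean_items)

    def needs_escalation(errors_found, items_checked):
        """Escalate (contain the issue, get to root cause) if the
        observed error rate is too high."""
        return items_checked > 0 and errors_found / items_checked > ESCALATION_RATE

    # Tally the error categories found during QC; sorting by frequency
    # gives the data for a Pareto chart of where to improve the process.
    error_log = Counter()
    error_log.update(["wrong document type", "wrong country", "wrong document type"])
    pareto_order = error_log.most_common()  # most frequent categories first

The point is not the code but the structure: sampling intensity depends on demonstrated competence, a high error rate triggers escalation, and every error found feeds a Pareto view of where to develop the process next.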

This may remind you of the move away from 100% Source Document Verification (SDV) at sites. The challenge with a change like this is that the process is not as simple – it requires more “thinking”. What do you do if you find a certain level of errors? This is where the reviewer (or the CRA, in the case of SDV) needs a different approach. It can be a challenge to implement properly. But it should actually make the job more interesting.

So, back to the original question: Is more QC ever the answer? Sometimes – but make sure you think through the consequences and look for other options first.

In my next post, I’ll talk about a problem I come across again and again. People don’t seem to have enough time to think! How can you carry out effective root cause analysis or improve processes without the time to think?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Is More QC Ever the Right Answer? Part I

In a previous post, I discussed whether retraining is ever a good answer to an issue. Short answer – NO! So what about that other common one of adding more QC?

An easy corrective action to put in place is to add more QC. Get someone else to check. In reality, this is often a band-aid because you haven’t got to the root cause and are not able to tackle it directly. So you’re relying on catching errors rather than stopping them from happening in the first place. You’re not trying for “right first time” or “quality by design”.

“Two sets of eyes are better than one!” is the common defence of multiple layers of QC. After all, if someone misses an error, someone else might find it. Sounds plausible. And it does make sense for processes that occur infrequently and have unique outputs (like a Clinical Study Report). But for processes that repeat rapidly, this approach becomes highly inefficient and ineffective. Consider a process like the one below:

Specialist I carries out work in the process – perhaps entering metadata in relation to a scanned document (investigator, country, document type, etc.). They check their work and modify it if they see errors. Then they pass it on to Specialist II, who checks it and modifies it if they see any errors. Then Specialist II passes it on to the next step. Two sets of eyes. What are the problems with this approach?

  1. It takes a long time. The two steps have to be carried out in series, i.e. Specialist II can’t QC the same item at the same time as Specialist I. Everything goes through two steps and a backlog forms between the Specialists. This means it takes much longer to get to the output.
  2. It is expensive. A whole process develops around managing the workflow, with some items fast-tracked due to an impending audit. It takes the time of two people (plus management) to carry out the task. More resources mean more money.
  3. The quality is not improved. This may seem odd, but think it through. There is no feedback loop in the process for Specialist I to learn from any errors that escape to Specialist II, so Specialist I continues to let those errors pass. And Specialist II will also make errors – in fact, the rework they do might actually add more errors. The two may not even agree on what counts as an error. This is not a learning process. And what if the process is under stress due to lack of resources and tight timelines? With people rushing, do they check properly? Specialist I knows that Specialist II will pick up any errors, so doesn’t check thoroughly. And Specialist II knows that Specialist I always checks their work, so doesn’t check thoroughly. And so more errors come out than if Specialist II had not been there at all (see the sketch after this list). Having everything go through a second QC as part of the process takes accountability away from the primary worker (Specialist I).
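To see why quality can actually get worse, here is a rough numerical sketch in the same spirit as the Python sketch above. The catch rates are hypothetical assumptions chosen only to illustrate the effect: a lone, accountable Specialist who checks carefully catches 90% of their own errors, while in the two-step process each person relies on the other and catches only 60%.

    # Hypothetical catch rates -- assumptions chosen only to illustrate the effect.
    single_diligent_catch = 0.90   # a lone, accountable Specialist checks carefully
    relaxed_catch = 0.60           # each person assumes the other will catch the errors

    escape_one_careful_check = 1 - single_diligent_catch                   # 10% of errors escape
    escape_two_relaxed_checks = (1 - relaxed_catch) * (1 - relaxed_catch)  # 16% of errors escape

    print(f"One careful check:  {escape_one_careful_check:.0%} of errors escape")
    print(f"Two relaxed checks: {escape_two_relaxed_checks:.0%} of errors escape")

Under these assumed numbers, more errors escape the two relaxed checks than the single careful one – exactly the effect described in point 3.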

So let’s recap. A process like this takes longer, costs more and delivers the same (or worse) quality as a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I will propose an approach in my next post. As a hint, it’s risk-based!

Text: © 2018 Dorricott MPI Ltd. All rights reserved.