Don’t let metrics distract you from the end goal!

We all know the fable of the tortoise and the hare. The tortoise won the race by keeping a steady pace and staying focused on the finish, while the hare rushed and took its eye off the end goal. Metrics can drive the behaviours we want – but they can also drive behaviours that take people’s eyes off the end goal. As is often said, what gets measured gets managed. When metrics are well designed, focused on answering important questions, and paired with targets that make clear to a team what matters, they can really help focus effort. If the target for the rejection rate of documents submitted to the TMF is no greater than 5% but the rate is tracking well above it, the team can focus effort on understanding why. Maybe there are particular errors, such as missing signatures, or a particular document type that is regularly rejected. If the team can get to the root causes, they can implement solutions to improve the process and watch the metric improve. That is good news – metrics can be a great tool for empowering teams: empowering them to understand how the process is performing and where to focus their improvement effort. With an improved, more efficient process and fewer errors, the end goal of a contemporaneous, high-quality, complete TMF is more likely to be achieved.

But what if metrics and their associated targets are used for reward or punishment? We see this happen when metrics are used for personal performance goals. People will focus on those metrics to make sure they meet the targets – at almost any cost! If individuals are told they must meet a target of less than 5% of documents rejected when submitted to the TMF, they will meet it. But they may bend the process and add inefficiency in doing so. For example, they may decide only to submit the documents they know will be accepted and leave the others to be sorted out when they have more time. Or they may avoid submitting documents at all. Or perhaps they might ask a friend to review the documents first. Whatever the approach, it is likely to disrupt the smooth flow of documents into the TMF and cause bottlenecks. And these workarounds happen ‘outside’ the documented process – in what is sometimes termed the ‘hidden factory’. Now the metric is measuring a process whose details we no longer fully know – one that differs from the SOP. The process has not been improved, but made worse. And the more complex process is liable to lead to a TMF that is no longer contemporaneous and may be incomplete. Yet the metric has met its target. The rush to focus on the metric to the exclusion of the end goal has made things worse.

And so, whilst it is good news that the adopted ICH E8 (R1) includes a section (3.3.1) encouraging “the establishment of a culture that supports open dialogue” and critical thinking, it is a shame that the following passage from the draft did not make it into the final version:

“Choose quality measures and performance indicators that are aligned with a proactive approach to design. For example, an overemphasis on minimising the time to first patient enrolled may result in devoting too little time to identifying and preventing errors that matter through careful design.”

There is no mention of performance indicators in the final version, nor of that rather good example of a metric likely to drive the wrong behaviour – time to first patient enrolled. What is the value in racing to get the first patient enrolled if the next patient isn’t enrolled for months? Or if a protocol amendment ends up being delayed, leading to an overall delay in completing the trial? More haste, less speed.

It can be true that what gets measured gets managed – but it will only be managed well when a team is truly empowered to own the metrics, the targets, and the understanding and improvement of the process. We have to move away from command and control to supporting and trusting teams to own their processes and associated metrics, and to make improvements where needed. We have to be brave enough to allow proper planning and risk assessment and control to take place before rushing to get to first patient. Let’s use metrics thoughtfully to help us on the journey and make sure we keep our focus on the end goal.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – openclipart.org

Oh No – Not Another Audit!

It has always intrigued me, this fear of the auditor. Note that I am separating out the auditor from the (regulatory) inspector here. Our industry has had an over-reliance on auditing for quality rather than on building our processes to ensure quality right first time. The Quality Management section of ICH E6 (R2) is a much-needed change in approach. And this is reinforced by the draft ICH E8 (R1): “Quality should rely on good design and its execution rather than overreliance on retrospective document checking, monitoring, auditing or inspection”. The fear of the auditor has led to some very odd approaches.

Trial Master File (TMF) is a case in point. I seem to have become involved with TMF issues and improving TMF processes a number of times in CROs, and more recently have helped facilitate the Metrics Champion Consortium TMF Metrics Work Group. The idea of an inspection-ready TMF at all times comes around fairly often. But to me, that misses the point. An inspection-ready (or audit-ready) TMF is a by-product of the TMF processes working well – not an aim in itself. We should be asking: what is the TMF for? The TMF is there to help in the running of the trial (as well as to document it, so that we can demonstrate that processes, GCP etc. were followed). It should not be an archive gathering dust until an audit or inspection is announced and a mad panic ensues to make sure the TMF is inspection ready. It should be in use all the time – a fundamental source of information for the study team. Used this way, gaps, misfiles etc. will be noticed and corrected on an ongoing basis. If the TMF is being used correctly, there shouldn’t be significant audit findings. Of course, process and monitoring (via metrics) need to be set up around this to make sure it works. This is process thinking.

And then there are those processes that I expect we have all come across. No-one quite understands why there are so many convoluted steps. Then you discover that at some point in the past there was an audit, and to close the audit finding (or CAPA), additional steps were added. No-one knows the point of the additional steps any more, but they are sure they must be needed. One example I have seen was a large quantity of documents being photocopied before being sent to another department. This was done because documents had got lost on one occasion and an audit had discovered this. So now someone spent 20% of their day photocopying documents in case they got lost in transit. Not a good use of time, and not good for the environment. Better to redesign the process with the risks in mind. How often do documents get lost en route? Why? What is the consequence? Are some more critical than others? Etc. Adding the additional step to the process due to an audit finding was the easiest thing to do (like adding a QC step). But it was the least efficient response.

I wonder if part of the issue is that some auditors appear to push their own solutions too hard. The process owner is the person who understands the process best. It is their responsibility to demonstrate they understand the audit findings, to challenge where necessary, and to argue for the actions they think will address the real issues. They should focus on the ‘why’ of the process.

Audit findings can be used to guide you in improving the process to take out risk and make it more efficient. Root cause analysis, of course, can help you with the why for particular parts of the process. And again, understanding the why helps you to determine much better actions to help prevent recurrence of issues.

Audits take time, and we would rather be focusing on the real work. But they also provide a valuable perspective from outside our organisation. We should welcome audits and use the input provided by people who are neutral to our processes to help us think, understand the why and make improvements in quality and efficiency. Let’s welcome the auditor!

 

Image: Pixabay

Text: © 2019 Dorricott MPI Ltd. All rights reserved.

What My Model of eTMF Processing Taught Me (Part II)

In a previous post, I described a model I built for 100% QC of documents as part of an eTMF process. We took a look at the impact of the rejection rate for documents jumping from 10% to 15%. It was not good! So, what happens when an audit is announced and suddenly the number of documents submitted doubles? In the graph below, weeks 5 and 6 had double the number of documents. Look what it does to the inventory and cycle time:

The cycle time has shot up to around 21 days after 20 weeks. The additional documents have simply added to the backlog and that increases the cycle time because we are using First In, First Out.
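A quick back-of-the-envelope check (a sketch using the submission and capacity assumptions from the model described in that previous post) shows why the jump is measured in weeks rather than days:

# Rough First In, First Out queue arithmetic, using the assumptions from the model
# post: ~1,000 submissions and up to ~1,100 QC slots per week, with ~10% rework.
submissions = 1000                           # documents submitted per week
capacity = 1100                              # documents the QC step can handle per week
rework = round(submissions * 0.10)           # rejected documents needing a second pass

weekly_workload = submissions + rework       # ~1,100 documents of QC work every week
spare_capacity = capacity - weekly_workload  # ~0, so nothing left to absorb a spike

extra_docs = 2 * submissions                 # weeks 5 and 6 at double volume
# Under FIFO a new document waits behind everything already queued, so every
# ~1,100 extra documents in the backlog add roughly a week to the cycle time.
extra_days = 7 * extra_docs / capacity
print(f"spare capacity: {spare_capacity} documents/week")
print(f"audit spike adds {extra_docs} documents, roughly {extra_days:.0f} extra days of cycle time")

With essentially no spare capacity, that extra fortnight of work never gets burned down, which is why the cycle time jumps by weeks rather than days and shows no sign of recovering after the audit.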

So what do we learn overall from the model? In a system like this, with 100% QC, it is very easy to turn a potential bottleneck into an actual bottleneck. And when that happens, the inventory and cycle time will quickly shoot upwards unless additional resource is added (e.g. overtime). But, you might ask, do we really care about cycle time? We definitely should: if the study team can’t access documents until they have gone through the QC, those documents are not available for 21 days on average. That’s not going to encourage everyday use of the TMF to review documents (as the regulators expect). And might members of the study team send in duplicates because they can’t see the documents awaiting processing, adding further documents and pushing inventory and cycle time up still further? And this is not a worst-case scenario, as I’m only modelling one TMF here – typically a Central Files group will be managing many TMFs and may be prioritizing one over another (i.e. not First In, First Out). This spreads out the distribution of cycle times and will lead to many more documents that are severely delayed through processing.

“But we need 100% QC of documents because the TMF is important!” I hear you shout. But do you really? As the great W Edwards Deming said, “Inspection is too late. The quality, good or bad, is already in the product.” Let’s get quality built in from the start. You should start by looking at that 15% rejection rate. What on earth is going on to get a rejection rate like that? What are those rejections? Are those carrying out the QC doing so consistently? Do those uploading documents know the criteria? Is there anyone uploading documents who gets it right every time? If so, what is it that they do differently from others?

What if you could get the rejection rate down to less than 1%? At what point would you be comfortable taking a risk-based approach – one that assumes those uploading documents get it right first time? And carrying out a random QC to look for systemic issues that could then be tackled? How much more efficient this would be. See the diagram in this post. And you’d remove that self-imposed bottleneck. You’d get documents in much quicker, at lower cost and with improved quality. ICH E6 (R2) asks us to think of quality not as 100% checking but as concerning ourselves with the errors that matter. Are we brave enough as an industry to apply this to the TMF?

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

What My Model of eTMF Processing Taught Me

On a recent long-haul flight, I got to thinking about the processing of TMF documents. Many organisations and eTMF systems seem to approach TMF documents with the idea that every one must be checked by someone other than the document owner. Sometimes, the document owner doesn’t even upload their own documents but provides them, along with metadata, to someone else to upload and index. And then their work is checked. There are an awful lot of documents in the TMF, and going through multiple steps of QC (or inspection, as W Edwards Deming would call it) seems rather inefficient – see my previous posts. But we are a risk-averse industry: even having been given the guidance to use risk-based approaches in ICH E6 (R2), many organisations still seem to use this check-everything approach.

So what is the implication of 100% QC? I decided I would model it via an Excel spreadsheet. My assumptions are that there are 1000 documents submitted per week. Each document requires one round of QC. The staff in Central Files can process up to 1100 documents per week. I’ve included a random +/-5% to these numbers for each week (real variation is much greater than this I realise). I assume 10% of documents are rejected at QC. And that when rejected, the updated documents are processed the next week. I’ve assumed First In, First Out for processing. My model looks at the inventory at the end of each week and the average cycle time for processing. It looks like this:

It’s looking reasonably well in control. The cycle time hovers around 3 days after 20 weeks which seems pretty good. If you had a process for TMF like this, you’d probably be feeling pretty pleased.

So what happens if the rejection rate is 15% rather than 10%?

Not so good! It’s interesting just how sensitive the system is to the rejection rate. This is clearly not a process in control any more and both inventory and cycle time are heading upwards. After 20 weeks, the average cycle time sits around 10 days.
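If you would like to play with these numbers yourself, below is a minimal sketch of the same kind of model written in Python rather than Excel. It is a simplified reconstruction from the assumptions above (weekly buckets, a fixed rejection fraction, and cycle time estimated from the backlog and weekly capacity), so the absolute values will not match my spreadsheet exactly, but the qualitative behaviour is the same: just about coping at 10%, steadily falling behind at 15%.

import random

def simulate(weeks=20, submissions=1000, capacity=1100,
             rejection_rate=0.10, noise=0.05, seed=1):
    """Week-by-week sketch of a 100% QC step with a FIFO queue."""
    random.seed(seed)
    backlog = 0   # documents waiting for QC
    rework = 0    # rejected documents returning the following week
    for week in range(1, weeks + 1):
        arrived = round(submissions * random.uniform(1 - noise, 1 + noise))
        slots = round(capacity * random.uniform(1 - noise, 1 + noise))
        backlog += arrived + rework              # new submissions plus last week's rejects
        processed = min(slots, backlog)
        backlog -= processed
        rework = round(processed * rejection_rate)   # rejected at QC, redone next week
        inventory = backlog + rework
        # FIFO: a document joining the queue now waits roughly
        # (documents ahead of it) / (weekly QC capacity) weeks
        cycle_time_days = 7 * inventory / capacity
        print(f"week {week:2d}: inventory {inventory:5d}, cycle time ~{cycle_time_days:4.1f} days")

simulate(rejection_rate=0.10)   # roughly keeps up
simulate(rejection_rate=0.15)   # inventory and cycle time climb week after week

The striking thing is how little headroom there is: at a 10% rejection rate the weekly workload is already around 1,100 documents, essentially equal to the QC capacity, so any increase tips the system over.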

Having every document go through a QC like this forms a real constraint on the system – a potential bottleneck in terms of the Theory of Constraints. And it’s really easy to turn this potential bottleneck into a real bottleneck. And a bottleneck in a process leads to regular urgent requests, frustration and burn-out. Sound familiar?

In my next post, I’ll take a look at what happens when an audit is announced and the volume of documents to be processed jumps for a couple of weeks.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

Have You Asked the Regulators?

To quote W Edwards Deming, “Every system is perfectly designed to give you exactly what you are getting today.” We all know our industry needs radical innovation and we are seeing it in many places – as you can see when attending DIA. I wonder why innovation seems to be so slow in our industry compared with others though.

I was talking to a systems vendor recently about changing the approach to QC for documents going into the TMF. I was taken aback by the comment “Have you asked the regulators about it? I’m not sure what they would think.” Regulation understandably plays a big part in our industry, but have we learned to fear it? If, every time someone wants to try something new, the first response is “But what would the regulators think?”, doesn’t that limit innovation and improvement? I’m not arguing for ignoring regulation, of course – it is there for a very important purpose. But does our attitude to it stifle innovation?

When you consider the update to ICH E6 (R2), it is not exactly radical when compared with other industries. Carrying out a formal risk assessment has been standard for Health & Safety in factories and workplaces for years. ISO – not a body known for moving swiftly – introduced its risk management standard, ISO 31000, in 2009. The financial sector started developing its approach to risk management in the 1980s (although that didn’t seem to stop the 2008 financial crash!). And, of course, insurance has been based on understanding and quantifying risk for decades before that.

There has always been a level of risk management in clinical trials – but usually rather informal and based on the knowledge and experience of the individuals involved in running the trial. Implementing ICH E6 (R2) brings a more formal approach and encourages lessons learned to be used as part of risk assessment, evaluation and control for other trials.

So, if ICH E6 (R2) is not radical, why did our industry not have a formal and developed approach to risk management beforehand? Could it be this fear of the regulator? Do we have to wait until the regulators tell us it is OK to think the unthinkable (such as not having 100% SDV)?

What do you think? Is our pace of change right? Does fear of regulators limit our horizons?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

 

Is More QC Ever the Right Answer? Part II

In Part I of this post, I described how some processes have been developed in such a way that they end up being the worst of all worlds by adding a QC step: they take longer, cost more and give quality that is the same as (or worse than) a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I would suggest this:

In this process, we have a QC role. The person performing that role takes a risk-based approach to sampling the work and works together with the Specialist to improve the process by revising definitions, training etc. The sampling might be 100% for a Specialist who has not carried out the task previously, but would then reduce to low levels as the Specialist demonstrates competence. The Specialist is now accountable for their work – all outputs come from them. If a high level of errors is found, then an escalation process is needed to contain the issue and get to root cause (see previous posts). You would also want to gather data about the typical errors seen during QC and plot them (Pareto charts are ideal for this) to help focus on where to develop the process further.
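To make that concrete, here is a minimal sketch of what the sampling logic might look like in code. The numbers are entirely illustrative (the 50-document ‘new starter’ window, the 1% and 5% error-rate thresholds and the sampling fractions are assumptions for the sketch, not figures from any guideline), but they show the principle: the QC effort is driven by demonstrated performance, and the tally of error types is exactly what feeds the Pareto chart.

import random
from collections import Counter

def sampling_rate(docs_completed: int, recent_error_rate: float) -> float:
    """Decide what fraction of a Specialist's output to QC (illustrative thresholds)."""
    if docs_completed < 50:          # new to the task: check everything
        return 1.0
    if recent_error_rate > 0.05:     # high recent error rate: back to 100% checking
        return 1.0
    if recent_error_rate > 0.01:
        return 0.25
    return 0.05                      # demonstrated competence: light-touch sampling

def qc_sample(batch, rate, check):
    """QC a random sample of the batch and tally the error types found."""
    errors = Counter()
    for doc in batch:
        if random.random() < rate:
            error = check(doc)       # returns an error type, or None if the doc is fine
            if error:
                errors[error] += 1
    return errors

# Illustration with a stub checker in which ~4% of documents have a missing signature.
random.seed(0)
stub_check = lambda doc: "missing signature" if random.random() < 0.04 else None
batch = list(range(1000))
rate = sampling_rate(docs_completed=500, recent_error_rate=0.008)
print(f"sampling {rate:.0%} of the batch")
print(qc_sample(batch, rate, stub_check))   # this tally feeds the Pareto chart

If the tally starts filling up, that is the trigger for the escalation process described above: contain the issue, get to root cause, and step the sampling back up until the error rate comes down.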

This may remind you of the move away from 100% Source Document Verification (SDV) at sites. The challenge with a change like this is that the process is not as simple – it requires more “thinking”. What do you do if you find a certain level of errors? This is where the reviewer (or the CRA in the case of SDV) needs a different approach. It can be a challenge to implement properly. But it should actually make the job more interesting.

So, back to the original question: is more QC ever the answer? Sometimes – but make sure you think through the consequences and look for other options first.

In my next post, I’ll talk about a problem I come across again and again. People don’t seem to have enough time to think! How can you carry out effective root cause analysis or improve processes without the time to think?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Is More QC Ever the Right Answer? Part I

In a previous post, I discussed whether retraining is ever a good answer to an issue. Short answer – NO! So what about that other common one of adding more QC?

An easy corrective action to put in place is to add more QC. Get someone else to check. In reality, this is often a band-aid because you haven’t got to the root cause and are not able to tackle it directly. So you’re relying on catching errors rather than stopping them from happening in the first place. You’re not trying for “right first time” or “quality by design”.

“Two sets of eyes are better than one!” is the common defence of multiple layers of QC. After all, if someone misses an error, someone else might find it. Sounds plausible. And it does make sense for processes that occur infrequently and have unique outputs (like a Clinical Study Report). But for processes that repeat rapidly this approach becomes highly inefficient and ineffective. Consider a process like that below:

Specialist I carries out work in the process – perhaps entering metadata relating to a scanned document (investigator, country, document type etc). They check their work and modify it if they see errors. Then they pass it on to Specialist II, who checks it and modifies it if they see any errors. Then Specialist II passes it on to the next step. Two sets of eyes. What are the problems with this approach?

  1. It takes a long time. The two steps have to be carried out in series, i.e. Specialist II can’t QC the same item at the same time as Specialist I. Everything goes through two steps and a backlog forms between the Specialists. This means it takes much longer to get to the output.
  2. It is expensive. A whole process develops around managing the workflow, with some items fast-tracked due to an impending audit. It takes the time of two people (plus management) to carry out the task. More resources mean more money.
  3. The quality is not improved. This may seem odd, but think it through. There is no feedback loop in the process for Specialist I to learn from any errors that escape to Specialist II, so Specialist I continues to let those errors pass. And Specialist II will also make errors – in fact, the rework they do might actually add more errors. They may not agree on what counts as an error. This is not a learning process. And what if the process is under stress due to lack of resources and tight timelines? With people rushing, do they check properly? Specialist I knows that Specialist II will pick up any errors, so doesn’t check thoroughly. And Specialist II knows that Specialist I always checks their work, so doesn’t check thoroughly. And so more errors come out than if Specialist II had not been there at all. Having everything go through a second QC as part of the process takes away accountability from the primary worker (Specialist I).

So let’s recap. A process like this takes longer, costs more and gives quality that is the same as (or worse than) a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I will propose an approach in my next post. As a hint, it’s risk-based!

Text: © 2018 Dorricott MPI Ltd. All rights reserved.