Oh No – Not Another Audit!

It has always intrigued me, this fear of the auditor. Note that I am separating out the auditor from the (regulatory) inspector here. Our industry has had an over-reliance on auditing for quality rather than on building our processes to ensure quality right the first time. The Quality Management section of ICH E6 (R2) is a much-needed change in approach. And this has been reinforced by the draft ICH E8 (R1): “Quality should rely on good design and its execution rather than overreliance on retrospective document checking, monitoring, auditing or inspection”. The fear of the auditor has led to some very odd approaches.

Trial Master File (TMF) is a case in point. I seem to have become involved with TMF issues and improving TMF processes a number of times in CROs, and more recently I have helped facilitate the Metrics Champion Consortium TMF Metrics Work Group. The idea of an inspection-ready TMF at all times comes around fairly often. But to me, that misses the point. An inspection-ready (or audit-ready) TMF is a by-product of the TMF processes working well – not an aim in itself. We should be asking: what is the TMF for? The TMF is there to help in the running of the trial (as well as to document it, to be able to demonstrate that processes, GCP etc. were followed). It should not be an archive gathering dust until an audit or inspection is announced, at which point a mad panic ensues to make sure the TMF is inspection ready. It should be in use all the time – a fundamental source of information for the study team. Used this way, gaps, misfiles etc. will be noticed and corrected on an ongoing basis. If the TMF is being used correctly, there shouldn’t be significant audit findings. Of course, process and monitoring (via metrics) need to be set up around this to make sure it works. This is process thinking.

And then there are those processes that I expect we have all come across. No-one quite understands why there are so many convoluted steps. Then you discover that at some point in the past there was an audit and, to close the audit finding (or CAPA), additional steps were added. No-one knows the point of the additional steps any more, but they are sure they must be needed. One example I have seen was of a large quantity of documents being photocopied prior to sending to another department. This was done because documents had got lost on one occasion and an audit had discovered this. So someone now spent 20% of their day photocopying documents in case they got lost in transit. Not a good use of time, and not good for the environment. Better to redesign the process and then consider the risk. How often do documents get lost en route? Why? What is the consequence? Are some more critical than others? Etc. Adding the additional step to the process due to an audit finding was the easiest thing to do (like adding a QC step). But it was the least efficient response.

I wonder if part of the issue is that some auditors appear to push their own solution too hard. The process owner is the person who understands the process best. It is their responsibility to demonstrate they understand the audit findings, to challenge where necessary, and to argue for the actions they think will address the real issues. They should focus on the ‘why’ of the process.

Audit findings can be used to guide you in improving the process to take out risk and make it more efficient. Root cause analysis, of course, can help you with the why for particular parts of the process. And again, understanding the why helps you to determine much better actions to help prevent recurrence of issues.

Audits take time, and we would rather be focusing on the real work. But they also provide a valuable perspective from outside our organisation. We should welcome audits and use the input provided by people who are neutral to our processes to help us think, understand the why and make improvements in quality and efficiency. Let’s welcome the auditor!

 

Image: Pixabay

Text: © 2019 Dorricott MPI Ltd. All rights reserved.

Hurry Up and Think Critically!

At recent conferences I’ve attended and presented at, the topic of critical thinking has come up. At the MCC Summit, there was consternation that apparently some senior leaders think progress in Artificial Intelligence will negate the need for critical thinking. No-one at the conference agreed with those senior leaders. And at the Institute for Clinical Research “Risky Business Forum”, everyone agreed on the importance of fostering critical thinking skills. We need people to take a step back and think about future issues (risks) rather than just the pressing issues of the day. Most people (except those senior leaders) would agree we need more people developing and using critical thinking skills in their day-to-day work. We need to teach people to think critically and not “spoon-feed” them the answers with checklists. But there’s much more to this than tools and techniques. How great to see, then, in the draft revision of ICH E8: “Create a culture that values and rewards critical thinking and open dialogue about quality and that goes beyond sole reliance on tools and checklists.” And that culture needs to include making sure people have time to think critically.

Think of those Clinical Research Associates on their monitoring visits to sites. At a CRO it’s fairly common to expect them to be 95% utilized. This leaves only 5% of their contracted time for all the other “stuff” – the training, the 1:1s, the departmental meetings, the reading of SOPs and so on. Do people in this situation have time to think? Are they able and willing to take the time to follow up on leads and hunches? As I’ve mentioned previously, root cause analysis needs critical thinking. And it needs time. If you are pressured to come up with results now, you will focus on containing the issue so you can rush on to the next one. You’ll make sure the site staff review their lab reports and mark clinical significance – but you won’t have time to understand why they didn’t do that in the first place. You will not learn the root cause(s) and will not be able to stop the issue from recurring. The opportunity to learn is lost. This is relevant in other areas too, such as risk identification, evaluation and control. With limited time for risk assessment on a study, would you be tempted to start with a list from another study, have a quick look over it and move on to the next task? You would know it wasn’t a good job – but hopefully it was good enough.

Even worse, some organizations, in effect, punish those thinking critically. If you can see a way of improving the process, of reducing the likelihood of a particular issue recurring, what should you do? Some organizations make it a labyrinthine process to make the change. You might have to go off to QA and complete a form requesting a change to an SOP. And hope it gets to the right person – who has time to think about it and consider the change. And how should you know about the form? You should have read the SOP on SOP updates in your 5% of non-utilized time!

Organizations continue to put pressure on employees to work harder and harder. It is unreasonable to expect employees to perform tasks needing critical thinking well without allowing them the time to do so.

Do you and others around you have time to think critically?

 

Text: © 2019 DMPI Ltd. All rights reserved. (With thanks to Steve Young for the post title)

Picture: Rodin – The Thinker (Andrew Horne)

Lack of Formal Documentation – Not a Root Cause

When conducting root cause analysis, “Lack of formal documentation” is a suggested root cause I have often come across. It seems superficially like a good, actionable root cause. Let’s get some formal documentation of our process in place. But, I always ask, “Will the process being formally documented stop the issue from recurring?” What if people don’t follow the formally documented process? What if the existing process is poor and we are simply documenting it? It might help, of course. But it can’t be the only answer. Which means this is not the root cause – or at least it’s not the only root cause.

When reviewing a process, I always start off by asking those in the process what exactly they do and why. They will tell you what really happens. Warts and all. When you send the request but never get a response back. When the form is returned but the signature doesn’t match the name. When someone goes on vacation while their work is in process, and no-one knows what’s been done or what’s next. Then I take a look at the Standard Operating Procedure (SOP), if there is one. It never matches.

So, if we get the SOP to match the actual process, our problems will go away, won’t they? Of course not. You don’t only need a clearly defined process. You need people who know the process and follow it. And you also want the defined process to be good. You want it carefully thought through and the ways it might fail considered. You can then build an effective process – one that is designed to handle the possible failures. And there is a great tool for this – Failure Mode and Effects Analysis (FMEA). Those who are getting used to Risk-Based Quality Management as part of implementing section 5.0 of ICH E6 (R2) will be used to the approach of scoring risks by Likelihood, Impact and Detectability. FMEA takes you through each of the process steps to develop your list of risks and prioritise them prior to modifying the process to make it more robust. This is true preventive action: trying to foresee issues and stop them from ever occurring. If you send a request but don’t get a response back, why might that be? Could the request have gone into spam? Could it have gone to the wrong person? How might you handle it? Etc.
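As a rough illustration of how FMEA-style scoring might be laid out (the failure modes and 1–5 scores below are entirely hypothetical, and organisations use different scales), here is a minimal sketch:

```python
# Minimal FMEA-style scoring sketch (hypothetical failure modes, 1-5 scales).
# RPN = Likelihood x Impact x Detectability; a higher Detectability score here
# means the failure is harder to detect, so a higher RPN means higher priority.

failure_modes = [
    # (description, likelihood, impact, detectability)
    ("Request email lands in the recipient's spam folder", 3, 4, 4),
    ("Form returned with a signature that doesn't match the name", 2, 3, 2),
    ("Work in process lost when the owner goes on vacation", 3, 3, 3),
]

def rpn(likelihood: int, impact: int, detectability: int) -> int:
    """Risk Priority Number: the product of the three scores."""
    return likelihood * impact * detectability

for description, l, i, d in sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True):
    print(f"RPN {rpn(l, i, d):3d}  {description}")
```

The arithmetic is trivial; the value is in the discipline of walking through each process step, asking how it could fail, and prioritising where to strengthen the process.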

Rather than the lack of a formal documented process being a root cause, it’s more likely that there is a lack of a well-designed and consistently applied process. And the action should be to agree the process and then work through how it might fail, in order to develop a robust process. Then document that robust process and make sure it is followed. And, of course, monitor the process for failures so you can continuously improve. Perhaps easier said than done. But better to work on that than to spend time formally documenting a failing process and think you’ve fixed the problem.

Here are more of my blog posts on root cause analysis where I describe a better approach than Five Whys. Got questions or comments? Interested in training options? Contact me.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Image: Standard Operating Procedures – State Dept, Bill Ahrendt

Do Processes Naturally Become More Complex?

I have been taking a fascinating course on language by Professor John McWhorter. One of his themes is that languages naturally become more complex over time. There are many processes that cause this as languages are passed down through the generations and slowly mutate – vowel sounds change and consonants can be added to the ends of words, for example. And meanings are constantly changing too. He discusses the Navajo language, which is phenomenally complex with, incredibly, almost no regular verbs.

It got me wondering whether processes, like languages, have a tendency to get more complex over time too. I think perhaps they do. I remember walking through a process with a Project Research Associate (assistant to the Project Manager) as she explained each of the steps for the green light package used to approve a site for drug shipment. One of the steps was to photocopy all the documents before returning them to the Regulatory Department. These photocopies were then stored in a bulging set of filing cabinets. The documents were often multi-page, double-sided and stapled, and there were many of them – so it took over an hour for each site. I asked what the purpose was, but the Project Research Associate didn’t know. No-one had told her. It was in the Work Instruction, so that’s what she did. The only reason I could think of for this was that at some point in the past, a pack of documents had been lost in transit to the Regulatory Department and fingers of blame were pointed in each direction. So the solution? Add a Cover-Your-Arse step to the process for every pack from then on. More complexity, and the reason lost in time.

I’ve seen the same happen in reaction to an audit finding. A knee-jerk change made to an SOP so that the finding can be responded to. But making the process more complicated. Was it really needed? Was it the most effective change? Not necessarily – but we have to get the auditors off our back!

Technology can also lead to increasing complexity of processes if we’re not careful. That wonderful new piece of technology is to be used for new studies but the previous ones have to continue in the “old” technology. And those working in the process have to cope with processes for both the old and the new. More complexity.

There is a set of languages, though, which are much simpler than most – languages that have somehow shed their complexity. These are creoles. They develop where several languages are brought together and children grow up learning them. The creole ends up as a mush of the different languages but tends to lose much of the complexity along the way.

Perhaps there is an analogy between processes and creoles. Those people joining your organisation from outside – they do things somewhat differently. Maybe by pulling these ideas in and really examining your processes, you can take some of the complexity out and make them easier for people to follow. For true continuous improvement, we need to be open to those outside ideas and not dismiss them with “that’s not the way we do things here!” People coming in with fresh eyes looking at how you do things can be frustrating, but it can also lead to real improvements and perhaps simplification (like getting rid of the photocopying step!)

What do you think? Do processes tend to become more complex over time? How can we counter this trend?

 

Text: © 2019 DMPI Ltd. All rights reserved.

Image: Flag of the Navajo Nation (Himasaram)

What My Model of eTMF Processing Taught Me (Part II)

In a previous post, I described a model I built for 100% QC of documents as part of an eTMF process. We took a look at the impact of the rejection rate for documents jumping from 10% to 15%. It was not good! So, what happens when an audit is announced and suddenly the number of documents submitted doubles? In the graph below, weeks 5 and 6 had double the number of documents. Look what it does to the inventory and cycle time:

The cycle time has shot up to around 21 days after 20 weeks. The additional documents have simply added to the backlog and that increases the cycle time because we are using First In, First Out.

So what do we learn overall from the model? In a system like this, with 100% QC, it is very easy to turn a potential bottleneck into an actual bottleneck. And when that happens, the inventory and cycle time will quickly shoot upwards unless additional resource is added (e.g. overtime). But, you might ask, do we really care about cycle time? We definitely should: if the study team can’t access documents until they have gone through the QC, those documents are unavailable for 21 days on average. That’s not going to encourage everyday use of the TMF to review documents (as the regulators expect). And might members of the study team send in duplicates because they can’t see the documents that are awaiting processing – adding further documents and pushing inventory and cycle time up still further? And this is not a worst-case scenario, as I’m only modelling one TMF here – typically a Central Files group will be managing many TMFs and may be prioritizing one over another (i.e. not First In, First Out). This spreads out the distribution of cycle times and will lead to many more documents that are severely delayed through processing.

“But we need 100% QC of documents because the TMF is important!” I hear you shout. But do you really? As the great W Edwards Deming said, “Inspection is too late. The quality, good or bad, is already in the product.” Let’s get quality built in in the first place. You should start by looking at that 15% rejection rate. What on earth is going on to get a rejection rate like that? What are those rejections? Are those carrying out the QC doing so consistently? Do those uploading documents know the criteria? Is there anyone uploading documents who gets it right every time? If so, what is it that they do differently to others?

What if you could get the rejection rate down to less than 1%? At what point would you be comfortable taking a risk-based approach – one that assumes those uploading documents get it right the first time – and carrying out a random QC to look for systemic issues that could then be tackled? How much more efficient this would be. See the diagram in this post. And you’d remove that self-imposed bottleneck. You’d get documents in much quicker, at lower cost and with improved quality. ICH E6 (R2) asks us to think of quality not as 100% checking but as a focus on the errors that matter. Are we brave enough as an industry to apply this to the TMF?
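For illustration, here is a minimal sketch of what a random-sampling QC could look like; the 5% sample fraction and 1% action threshold are assumptions picked for the example, not recommendations:

```python
import random

def sample_for_qc(document_ids, sample_fraction=0.05, seed=None):
    """Pick a random subset of this week's uploaded documents for QC review."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(document_ids) * sample_fraction))
    return rng.sample(document_ids, sample_size)

def needs_investigation(errors_found, sample_size, threshold=0.01):
    """Flag a possible systemic issue if the observed error rate exceeds the threshold."""
    return sample_size > 0 and (errors_found / sample_size) > threshold

# Example: 1000 documents uploaded this week, 2 errors found in the sample.
this_week = [f"DOC-{i:04d}" for i in range(1000)]
sample = sample_for_qc(this_week, seed=42)
print(f"{len(sample)} documents selected for QC")
print(f"Investigate further? {needs_investigation(errors_found=2, sample_size=len(sample))}")
```

If the flag trips, that is the cue to contain the issue, look for the systemic root cause and, if needed, temporarily widen the sampling – rather than checking every document forever.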

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

What My Model of eTMF Processing Taught Me

On a recent long-haul flight, I got to thinking about the processing of TMF documents. Many organisations and eTMF systems seem to approach TMF documents with the idea that every one must be checked by someone other than the document owner. Sometimes, the document owner doesn’t even upload their own documents but provides them, along with metadata, to someone else to upload and index. And then that work is checked. There are an awful lot of documents in the TMF, and going through multiple steps of QC (or inspection, as W Edwards Deming would call it) seems rather inefficient – see my previous posts. But we are a risk-averse industry – even having been given the guidance in ICH E6 (R2) to use risk-based approaches – and so many organizations seem to use this approach.

So what is the implication of 100% QC? I decided I would model it in an Excel spreadsheet. My assumptions are that there are 1000 documents submitted per week. Each document requires one round of QC. The staff in Central Files can process up to 1100 documents per week. I’ve included a random +/-5% on these numbers for each week (real variation is much greater than this, I realise). I assume 10% of documents are rejected at QC, and that when rejected, the updated documents are processed the next week. I’ve assumed First In, First Out for processing. My model looks at the inventory at the end of each week and the average cycle time for processing. It looks like this:

It’s looking reasonably well in control. The cycle time hovers around 3 days after 20 weeks, which seems pretty good. If you had a TMF process like this, you’d probably be feeling pretty pleased.
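My model was an Excel spreadsheet, but the same logic is easy to sketch in code. The sketch below is not the actual spreadsheet – it uses the assumptions above (1000 submissions and up to 1100 processed per week, ±5% random variation, rejections reworked and re-queued the following week, First In, First Out) plus a simplification that reworked documents pass on their second attempt, so the exact numbers will differ, though the qualitative behaviour is the same:

```python
import random
from collections import deque

def simulate(weeks=20, submissions=1000, capacity=1100,
             reject_rate=0.10, noise=0.05, surge_weeks=(), seed=1):
    """Very simplified weekly model of a 100% QC step on TMF documents.

    Each queue entry is (week the document was first submitted, attempt number).
    Rejected documents are reworked and re-enter the front of the queue the
    following week; as a simplification they pass on the second attempt.
    Processing is First In, First Out.
    """
    rng = random.Random(seed)
    queue = deque()   # documents waiting for QC
    rework = []       # rejected this week, re-queued at the start of next week

    for week in range(1, weeks + 1):
        queue.extendleft(reversed(rework))   # reworked documents go in first
        rework = []

        arrivals = submissions * (2 if week in surge_weeks else 1)
        arrivals = round(arrivals * rng.uniform(1 - noise, 1 + noise))
        queue.extend((week, 1) for _ in range(arrivals))

        this_capacity = round(capacity * rng.uniform(1 - noise, 1 + noise))
        completed = []
        for _ in range(min(this_capacity, len(queue))):
            submitted, attempt = queue.popleft()
            if attempt == 1 and rng.random() < reject_rate:
                rework.append((submitted, 2))
            else:
                completed.append(submitted)

        avg_cycle = (sum(7 * (week - s) for s in completed) / len(completed)
                     if completed else 0.0)
        print(f"week {week:2d}  inventory {len(queue) + len(rework):5d}  "
              f"avg cycle time {avg_cycle:4.1f} days")

simulate(reject_rate=0.10)        # the baseline above: roughly in control
simulate(reject_rate=0.15)        # backlog and cycle time start to climb
simulate(surge_weeks={5, 6})      # audit announced: submissions double for two weeks
```

The last call anticipates the audit scenario I look at in the next post, where submissions double for a couple of weeks.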

So what happens if the rejection rate is 15% rather than 10%?

Not so good! It’s interesting just how sensitive the system is to the rejection rate. This is clearly not a process in control any more and both inventory and cycle time are heading upwards. After 20 weeks, the average cycle time sits around 10 days.

Having every document go through a QC like this forms a real constraint on the system – a potential bottleneck in terms of the Theory of Constraints. And it’s really easy to turn this potential bottleneck into a real bottleneck. And a bottleneck in a process leads to regular urgent requests, frustration and burn-out. Sound familiar?

In my next post, I’ll take a look at what happens when an audit is announced and the volume of documents to be processed jumps for a couple of weeks.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

Have You Asked the Regulators?

To quote W Edwards Deming, “Every system is perfectly designed to give you exactly what you are getting today.” We all know our industry needs radical innovation and we are seeing it in many places – as you can see when attending DIA. I wonder why innovation seems to be so slow in our industry compared with others though.

I was talking to a systems vendor recently about changing the approach to QC for documents going into the TMF. I was taken aback by the comment “Have you asked the regulators about it? I’m not sure what they would think.” Regulation understandably plays a big part in our industry, but have we learned to fear it? If every time someone wants to try something new, the first response is “But what would the regulators think?”, doesn’t that limit innovation and improvement? I’m not arguing for ignoring regulation, of course – it is there for a very important purpose. But does our attitude to it stifle innovation?

When you consider the update to ICH E6 (R2), it is not exactly radical when compared with other industries. Carrying out a formal risk assessment has been standard for Health & Safety in factories and workplaces for years. ISO – not a body known for moving swiftly – introduced its risk management standard, ISO 31000, in 2009. The financial sector started developing its approach to risk management in the 1980s (although that didn’t seem to stop the 2008 financial crash!) And, of course, insurance has been based on understanding and quantifying risk for decades before that.

There has always been a level of risk management in clinical trials – but usually rather informal and based on the knowledge and experience of the individuals involved in running the trial. Implementing ICH E6 (R2) brings a more formal approach and encourages lessons learned to be used as part of risk assessment, evaluation and control for other trials.

So, if ICH E6 (R2) is not radical, why did our industry not have a formal and developed approach to risk management beforehand? Could it be this fear of the regulator? Do we have to wait until the regulators tell us it is OK to think the unthinkable (such as not having 100% SDV)?

What do you think? Is our pace of change right? Does fear of regulators limit our horizons?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

 

Is More QC Ever the Right Answer? Part II

In part I of this post, I described how processes that add a QC step can end up being the worst of all worlds – they take longer, cost more and give the same (or worse) quality than a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I would suggest this:

In this process, we have a QC role, and the person performing that role takes a risk-based approach to sampling the work, working together with the Specialist to improve the process by revising definitions, training etc. The sampling might be 100% for a Specialist who has not carried out the task previously, but would then reduce down to low levels as the Specialist demonstrates competence. The Specialist is now accountable for their work – all outputs come from them. If a high level of errors is found, then an escalation process is needed to contain the issue and get to root cause (see previous posts). You would also want to gather data about the typical errors seen during the QC and plot them (Pareto charts are ideal for this) to help focus on where to develop the process further.
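As an aside, the Pareto view is simple to produce from a log of QC findings. A minimal sketch (the error categories and counts here are invented for illustration):

```python
from collections import Counter

# Hypothetical QC findings logged over a few weeks (one category per rejected item).
findings = (["wrong document type"] * 23 + ["missing signature"] * 11 +
            ["incorrect country"] * 6 + ["poor scan quality"] * 4 +
            ["duplicate upload"] * 2)

counts = Counter(findings)
total = sum(counts.values())

print(f"{'Error category':<22}{'Count':>6}{'Cum %':>8}")
cumulative = 0
for category, count in counts.most_common():
    cumulative += count
    print(f"{category:<22}{count:>6}{100 * cumulative / total:>7.0f}%")
```

Sorted this way, it is immediately obvious which one or two categories account for most of the rejections – and therefore where revising definitions or training is likely to pay off first.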

This may remind you of the move away from 100% Source Document Verification (SDV) at sites. The challenge with a change like this is that the process is not as simple – it requires more “thinking”. What do you do if you find a certain level of errors? This is where the reviewer (or the CRA in the case of SDV) needs a different approach. It can be a challenge to implement properly. But it should actually make the job more interesting.

So, back to the original question: is more QC ever the answer? Sometimes – but make sure you think through the consequences and look for other options first.

In my next post, I’ll talk about a problem I come across again and again. People don’t seem to have enough time to think! How can you carry out effective root cause analysis or improve processes without the time to think?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Is More QC Ever the Right Answer? Part I

In a previous post, I discussed whether retraining is ever a good answer to an issue. Short answer – NO! So what about that other common one of adding more QC?

An easy corrective action to put in place is to add more QC. Get someone else to check. In reality, this is often a band-aid because you haven’t got to the root cause and are not able to tackle it directly. So you’re relying on catching errors rather than stopping them from happening in the first place. You’re not trying for “right first time” or “quality by design”.

“Two sets of eyes are better than one!” is the common defence of multiple layers of QC. After all, if someone misses an error, someone else might find it. Sounds plausible. And it does make sense for processes that occur infrequently and have unique outputs (like a Clinical Study Report). But for processes that repeat rapidly this approach becomes highly inefficient and ineffective. Consider a process like that below:

Specialist I carries out work in the process – perhaps entering metadata in relation to a scanned document (investigator, country, document type etc.). They check their work and modify it if they see errors. Then they pass it on to Specialist II, who checks it and modifies it if they see any errors. Then Specialist II passes it on to the next step. Two sets of eyes. What are the problems with this approach?

  1. It takes a long time. The two steps have to be carried out in series, i.e. Specialist II can’t QC the same item at the same time as Specialist I. Everything goes through two steps and a backlog forms between the Specialists. This means it takes much longer to get to the output.
  2. It is expensive. A whole process develops around managing the workflow, with some items fast-tracked due to an impending audit. It takes the time of two people (plus management) to carry out the task. More resource means more money.
  3. The quality is not improved. This may seem odd, but think it through. There is no feedback loop in the process for Specialist I to learn from any errors that escape to Specialist II, so Specialist I continues to let those errors pass. And Specialist II will also make errors – in fact, the rework they do might actually add more errors. They may not agree on what counts as an error. This is not a learning process. And what if the process is under stress due to lack of resources and tight timelines? With people rushing, do they check properly? Specialist I knows that Specialist II will pick up any errors, so doesn’t check thoroughly. And Specialist II knows that Specialist I always checks their work, so doesn’t check thoroughly. And so more errors come out than if Specialist II had not been there at all. Having everything go through a second QC as part of the process takes away accountability from the primary worker (Specialist I).

So let’s recap. A process like this takes longer, costs more and gives quality the same (or worse) than a one step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”

What might a learning approach with better quality and improved efficiency look like? I will propose an approach in my next post. As a hint, it’s risk-based!

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Stop Issues Recurring by Retraining?

“That issue has happened again! We really need to improve the awareness of our staff – anyone who has not used the right format needs to be retrained. We can’t tolerate sloppy work. People just need to concentrate and do the job right!”

You may recall a previous post about human factors where I looked at why people make errors and the different types of errors. If the error was a slip (a type of action error where someone planned to do the right thing but did the wrong thing) then retraining won’t help. The person already knows what the right thing to do is. Similarly if the error was a lapse (where someone forgot to do it). Of course, with both of these error types, making people aware will help temporarily. But over time, they will likely go back to doing what they were doing before unless some other change is made.

If the error was a rule-based thinking error, where the knowledge is there but was misapplied, it is unlikely that retraining will work long term. We would need to understand the situation and why it is that the knowledge was misapplied. If a date written in American format is read as European (3/8/18 intended as 8-Mar-2018 but read as 3-Aug-2018), then can we change the date format to be unambiguous, in the form dd-mmm-yyyy (08-Mar-2018)?
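As a trivial illustration (a sketch in Python, not a prescription for any particular system), here is the same date rendered both ways:

```python
from datetime import date

d = date(2018, 3, 8)

print(d.strftime("%m/%d/%y"))   # 03/08/18 – ambiguous: 8th of March or 3rd of August?
print(d.strftime("%d-%b-%Y"))   # 08-Mar-2018 – unambiguous dd-mmm-yyyy
```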

What if the error is a non-compliance? If someone didn’t carry out the full procedure because they were rushed and they get retrained, do we really think that in the future when they are rushed they are going to do something different? They might do short term but longer term it is unlikely.

For all these errors, retraining or awareness might help short term but they are unlikely to make a difference longer term. To fix the issue longer term, we need to understand better why the error occurred and focus on trying to stop its recurrence by changes to process/systems.

A thinking error that is knowledge-based is different though. If someone made an error because they don’t know what they should be doing then clearly providing training and improving their knowledge should help. But even here, “retraining” is the wrong action. It implies they have already been trained and if so, the question is, why didn’t that training work? Giving them the same training again is likely to fail unless we understand what went wrong the first time. We need to learn from the failure in the training process and fix that.

Of course, this does not mean that training is not important. It is vital. Processes are least likely to have errors when they are designed to be as simple as possible and are run by well-trained people. When there are errors, making sure people know that they can happen is useful and will help short term but it is not a long term fix (corrective action). Longer term fixes need a better understanding of why the error(s) occurred and this is where the individuals running the process can be of vital help. As long as there is a no-blame culture (see previous post) you can work with those doing the work to make improvements and help stop the same errors recurring. Retraining is not the answer and it can actually have a negative impact. We want those doing the work to come forward with errors so we can understand them better, improve the process/system and reduce the likelihood of them happening again. If you came forward acknowledging an error you had made and were then made to retake an hour of on-line training on a topic you already know, how likely would you be to come forward a second time? Retraining can be seen as a punishment.

So, to go back to the post title, “Stop Issues Recurring by Retraining?” No, that won’t work. Retraining is never a good corrective action.

What about that other corrective action that comes up again and again – more QC? That’s the subject of a future post.

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.