Training for KPIs – Why Bother?

I was facilitating a brainstorm session recently as part of a discussion on the challenges of using Key Performance Indicators (KPIs). People spend a lot of time deciding on their KPIs and often horse-trading over targets. But we were discussing how people actually use the KPIs. After all, KPIs are not an end in themselves. They are there to serve a purpose: to shed light on processes and performance, and to help move people from the subjective to the objective.

The brainstorm raised lots of good ideas such as:

    • The importance of getting senior level buy-in
    • Regular review of the KPIs
    • Rules on actions to take if there are more than a certain number of “red” KPIs
    • The importance of making sure the definitions are clear

But no-one raised the question of training. I found this intriguing. Do people think that once you have a set of KPIs being reported, everyone somehow automatically knows how to use them? I’m not at all convinced that everyone does. Sometimes, teams spend a whole meeting debating the possible reasons why this month’s KPI value is slightly lower than last month’s. They don’t like the idea that perhaps it’s just noise and is unlikely to be worth the time investigating (until we see an actual trend). I’ve been in meetings where most of the KPIs are red and managers throw up their hands because they don’t know what to do. They just hope next month is better. Followers of this blog know that the Pareto Principle would help here. Or maybe the manager gets frustrated and tells the staff the numbers have to get better…which you can always do by playing with definitions rather than actually improving the underlying processes.

There are opportunities to learn more about interpreting data – such as books by Tim Harford, Don Wheeler, and Davis Balestracci; or workshops such as at the upcoming MCC vSummit or even from DMPI – but I wonder whether it’s a case of people not knowing what they don’t know. Interpreting KPI data isn’t easy. It needs critical thinking and careful consideration. We should not simply accept the adage “Lies, damned lies, and statistics!”
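Wheeler’s XmR (individuals) chart is one practical way to tell noise from signal in a monthly KPI. Here is a minimal sketch in Python – the monthly values are invented for illustration, and the limits use the standard 2.66 × average moving range:

```python
# Sketch: separating noise from signal in a monthly KPI using an
# XmR (individuals) chart. The values below are made up.
values = [52, 49, 53, 50, 48, 51, 54, 50, 47, 52]  # monthly KPI values

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: mean +/- 2.66 * average moving range
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

for month, v in enumerate(values, start=1):
    flag = "signal" if (v > upper or v < lower) else "noise"
    print(f"Month {month:2d}: {v}  ({flag})")
```

Only points outside the natural process limits (or sustained runs to one side of the mean) are worth investigating; everything inside them is routine variation, and debating it is a waste of a meeting.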

If people are not trained to use and interpret KPIs, should you bother with collecting and reporting KPIs at all?

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: Tumisu, pixabay

You’re Solving the Wrong Problem!

The basic idea behind continuous process improvement is not difficult. It’s the idea of a cycle – defining the problem, investigating, determining actions to improve, implementing those actions, and then looking again to see if there has been improvement. It’s the Plan-Do-Check-Act cycle of Shewhart and Deming. Or the DMAIC cycle of Six Sigma. It’s a proven approach to continually improving. But it takes time and effort. It takes determination. And it can easily be derailed by those who say “Just get on with it!” Much better to rush into implementation to show you are someone of action rather than someone who suffers from “paralysis by analysis.” But a greater danger is to move into actions without taking time to analyse properly – or even to define the problem. It looks great because you’re taking action. But what if your actions make things worse?

Let’s take the example of HS2 in the UK. This is the UK’s second high-speed railway line. The cost is enormous and keeps going up. Building is underway and billions have been spent already. The debate continues as to whether it is worth all the money. During one of the many consultations, in 2011, I wrote to give my perspective. I had read the proposal and was shocked to see there was no problem defined. Here was an expensive solution without a clear definition of the problem it was designed to resolve. It talked about trains being overcrowded currently. If that was the problem, then was this the best solution? I suggested they take that problem and drill down some more – when are the trains crowded? Where? Why? And so on. Then see if they could come up with solutions. Preferably ones that don’t cost tens of billions. If they are overcrowded during commuting times, I suggested that perhaps people could be given a tax incentive to work from home. Which would have the added advantage of being better for the environment.

Of course, since then, we’ve had the pandemic. And many have been working from home. Trains have not been overcrowded. And many have found they rather like working from home. So while the case for HS2 was flimsy 10 years ago, it’s become transparently thin since then. And because they didn’t spend time defining the problem or analysing it, there is no obvious route to go back and re-evaluate the decision. Given the change of circumstances, is it still the right thing to do? We can’t answer because we don’t know the problem it is trying to solve.

I do find it odd that so many organisations (governments included) rush into implementing changes without taking time to define the problem and analyse it. I suspect motives such as vanity – “let’s implement this new, shiny thing because it’ll make me look good” – and wanting to be seen as someone of action. It is interesting that Taiichi Ohno, creator of the Toyota Production System (the forerunner of Lean), used to get graduates to spend time just watching production. Afterwards, he would ask them what they saw and, if he didn’t think they had observed enough, he would get them to watch some more. Better to pause, observe, reflect, and analyse than to go straight into actions that might actually make things worse.

For process improvement, make sure you understand the problem you’re trying to solve. Solving the wrong problem can be costly and wasteful!

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: pxhere.com

Why won’t they accept the change?

In process improvement, this question comes up again and again.  When Management is convinced of the change, they sometimes view employees who doubt as being ‘change resistant’. These employees are seen as roadblocks to be overcome. As people who just want to keep doing things the way they’ve done them. Of course, there are people like this – but fewer than you’d think in my experience.

Why are people resistant to change after all? Maybe they have good reason. Maybe they can see flaws in the approach – they know the detail of the work after all. Maybe they think there is a better way to do it but no-one’s asked them. I’m simply not convinced that most people are resistant to change. Why is it that so many make New Year’s resolutions to get fit, lose weight, join a dating agency and so on? Because they want change. But they want change when they are in the driving seat. Not to be told what changes are going to happen to them. I spoke with someone once about the introduction of computers to the workplace. She was an experienced secretary and shorthand typist. She arrived at work one day to find her typewriter replaced by a computer. And her boss thought she’d be delighted. She was not. Was she ‘change resistant’? No – but she wanted to be involved in the change rather than have it done to her. So it is when the decision is made to automate a process without involving the people who do the day-to-day work in improving the process first.

One of the trickiest parts of process improvement efforts is not the complex techniques, or the statistics, it is implementing change that sticks. The secret is to involve the people doing the work. Something I have seen time and time again is the enthusiasm and ingenuity of those doing the work to actually improve what they do. People love to take time to understand their work better, with measurements if possible. And to come up with new ways of working. When those ways of working are implemented, they have a much better chance of sticking than the top-down ones from Management. These small teams are a delight to facilitate. With some guidance in process improvement, and the time and support, they can move mountains.

To get effective change, Management should set the direction and then support the employees to work out the best way to get there. This is true empowerment. And change resistance will melt away.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: pxfuel.com

KPIs: What’s not to like?

Many organizations set Key Performance Indicators (KPIs) to monitor their performance against an overall goal or target. Makes sense, surely, to monitor progress with something tangible. And they can be very effective. But there are a lot of pitfalls. And I’m not convinced they work for all circumstances.

A major pitfall in implementing KPIs and targets is an overly top-down approach. Every department / function is told it must have a set of KPIs with targets. After all, this will ensure everyone is accountable. And there will be lots of data showing KPIs against targets for Management to review. When these requests come through, most people just shrug their shoulders and mouth “here we go again,” or something less polite. They put together some KPIs with targets that will be easy to achieve and hope that will keep Management quiet. After a bit of horse-trading, they agree slightly tougher targets and hope for the best.

Or even worse, Management wants to “hold their feet to the fire” and imposes KPIs and targets on each department. They require the cycle time of site activation to be reduced by 20%, or 20% more documents to be processed with the same resource, for example. This leads to much time spent on the definitions – what can be excluded? What should we include? How can we be ingenious and make sure the KPI meets the goal, regardless of the impact on anything else? We can, after all, work people much harder to do more in less time. But the longer-term consequences can be detrimental as burnout leads to sickness and resignations and loss of in-depth knowledge about the work.

This is an exercise in futility. It is disrespectful to the people working in the organization. It is wasting the time, ingenuity, and talent of those doing the work – those creating value in the organization. “The whole notion of targets is flawed. Their use in a hierarchical system engages people’s ingenuity in managing the numbers instead of improving their methods,” according to John Seddon in Freedom from Command & Control.  Rather than understanding the work as a process and trying to improve it, they spend their time being ingenious about KPIs that will keep Management off their backs and making sure they meet the targets at whatever cost. There are plenty of examples of this and I’ve described two in past posts – COVID testing & Windrush.

Much better is for the team that owns the work to use metrics to understand that work. To set their own KPIs and goals based on their deep understanding. And to be supported by Management all the way in putting in the hard graft of process improvement. As W. Edwards Deming said, “There is no instant pudding!” Management should be there to support those doing the work, those adding value. They should set the framework and direction but truly empower their workforce to use metrics and KPIs to understand and improve performance. Longer term, that’s better for everyone.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: KPI Board by Anna Sophie from the Noun Project

Pareto: Focus Your Efforts

For some of my work with the Metrics Champion Consortium, I was looking at MHRA inspection finding categories. MHRA publish reports on their findings – the most recent is for the year 2017-2018. For major findings, 86% are within just 21% of the categories. If this is representative of the industry, then focusing our improvement efforts on the processes associated with those 21% of categories could have a disproportionate impact on findings in the future. This fits the pattern of the Pareto Principle.

The Pareto Principle was proposed by Joseph Juran, a 20th century pioneer of quality improvement. He based it on an observation by the Italian economist Vilfredo Pareto, who noted that 80% of Italy’s land was owned by 20% of the people. The principle is that in any given situation, roughly 80% of the effect is due to 20% of the causes. It seems to work well in many fields, for example:

    • 20% of the most reported software bugs cause 80% of software crashes
    • It is often claimed in business that 80% of the sales comes from 20% of the clients
    • 20% of people account for 80% of all healthcare spending
    • Even in COVID-19, 80% of deaths have occurred among 20% of the population (65 and older)

The principle is sometimes called the 80:20 rule or the law of the vital few because it implies that if you can focus on the 20% and put effort into improving that, you can impact 80% of the results – having a disproportionate effect on the whole. It is regularly discussed in business and I once worked with a company which had the 80:20 rule as one of its guiding principles.

Davis Balestracci’s Data Sanity has a really interesting observation on the power of the Pareto Principle in process improvement. One mode of process improvement is taking the exceptional and trying to understand why it happened and to learn from it. So, if site contracts in one country take much longer than in others to finalise, you can focus on that country to understand why and to improve. Or, of course, you could take the country with the shortest cycle time and try to understand why so you can spread “best practice”. This is the world of root cause analysis (RCA) & CAPA and can be effective in improvement. But what if the approach is over-used – for example, maybe issues are regularly detected in site audits for clinical trials that relate to problems with the process of Informed Consent. If there are many issues, then perhaps it would be better to look at them all rather than take each one individually as its own self-contained issue. In other words, maybe there is a systemic cause that is not related to the individual sites or studies. If you took all the issues (findings) together, you could use the Pareto Principle. It’s likely that 80% of the effects seen are due to a small number of causes. Why not work to find out what they are and implement changes to the whole system that affect those? Then continue to measure over time to see if it’s worked. Isn’t that likely to get better results than lots of independent RCA & CAPA efforts, each of which has only a small part of the picture?

That does bring up the challenge of how you determine when one issue is similar to (or the same as) another. If you categorized all the issues in a consistent way, you’d likely see that around 80% of the observed issues come from 20% of the categories – the Pareto Principle in action. Just as we see from the MHRA. It’d be a good idea to focus process improvement on those 20% of categories.
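As a sketch of what that categorisation step might look like in practice, here is a simple Pareto count in Python – the finding categories and counts are invented for illustration, not MHRA data:

```python
from collections import Counter

# Hypothetical audit findings, each tagged with a category.
findings = (["informed consent"] * 40 + ["source data"] * 25 +
            ["delegation log"] * 15 + ["drug storage"] * 8 +
            ["training records"] * 6 + ["equipment"] * 4 +
            ["other"] * 2)

counts = Counter(findings).most_common()  # sorted by frequency, descending
total = sum(n for _, n in counts)

cumulative = 0
for category, n in counts:
    cumulative += n
    print(f"{category:18s} {n:3d}  cum {100 * cumulative / total:5.1f}%")
```

Sorting by frequency and accumulating the percentages makes the “vital few” categories obvious at a glance – here the top two of seven categories account for 65% of all findings.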

Next time you look to improve a process, make sure you use the Pareto Principle to help focus your efforts so you can have maximum effect.

Tip: Pronounce Pareto as “pah-ray-toh”

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Why do I end up waiting so long?

I visited my local hospital recently for a blood test. The test centre opened at 7am so I aimed to arrive at that time hoping to be seen straight away. There are often queues there. When I arrived, there was already a queue of around 10 people! We all had temperature checks, took a numbered ticket, and sat in the socially distanced waiting room. And I was impressed to see an information board that announced the next ticket number to go for the blood draw and also the average wait time. The average wait time was only a few minutes. As I sat there, the average wait time gradually crept up. In the end, I waited for 25 minutes before I was seen. But the average wait time still showed as only 15 minutes. What was going on?

When you learn French, there is a term “faux amis” (false friends). These are words that are the same as, or similar to, English words but actually mean something different. For example, attendre means to wait for rather than to attend, brasserie is not a type of lingerie but a bar, and pub is an advertisement. Metrics can be rather like this. Superficially, the average wait time in a queue is a really useful metric to know when you start queueing for something. After all, you would expect to wait around the average wait time, wouldn’t you? Time to run a simple Excel model to investigate further! Below you see the arrival times of patients at the hospital. I am highlighted as person 10. After 7am, there was a slow but steady stream of people, so I have them arriving every 5 minutes. I estimated the time for each blood draw to be 3 minutes, and so you can see when each blood draw took place and the wait time for each individual. But look at the average wait time. We don’t know how the hospital defined it exactly, but I’m guessing they took the previous patients that day and calculated the mean wait time – which is what I’ve done here. There are only 9 patients whose actual wait time is within 5 minutes of the average wait time (shown in green). And I’m shown as patient 10 with the longest wait time and the greatest difference from the average wait time. The average wait time is like a faux ami – it appears to tell you one thing but actually tells you something else. There may be value in the metric. But not for those joining a queue.

When I join a queue, I’m interested in how long I might have to wait. You can estimate that by knowing the time to process each person in the queue and multiplying by the number of people in front of you. In this case, the estimate would be 27 minutes for me rather than the few minutes that the average wait time metric told me. I am impressed that the hospital thought to include a metric. But perhaps they need to think more about the purpose of the metric and a better definition. The metric should try to answer the question “How long am I likely to have to wait?”
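The Excel model from the post is easy to reproduce in code. This sketch assumes ten people already waiting at opening, one new arrival every five minutes after that, a single three-minute blood draw at a time, and a display board showing the mean wait of patients already seen (my guess at its definition):

```python
SERVICE = 3  # minutes per blood draw (an estimate)

# Arrivals in minutes after 7:00: ten people queueing at opening,
# then one new arrival every five minutes.
arrivals = [0] * 10 + [5 * k for k in range(1, 9)]

server_free = 0
waits = []
for i, arrive in enumerate(arrivals, start=1):
    start = max(arrive, server_free)  # wait until the server is free
    waits.append(start - arrive)
    server_free = start + SERVICE
    # The board's "average wait": mean wait of patients already seen.
    avg_board = sum(waits[:-1]) / (i - 1) if i > 1 else 0.0
    print(f"Patient {i:2d}: waited {waits[-1]:2d} min "
          f"(board showed {avg_board:4.1f} min)")
```

In this model, patient 10 waits 27 minutes while the board shows an average of only 12 – the faux ami in action. The people-ahead × service-time estimate (9 × 3 = 27 minutes) is far more useful to someone joining the queue.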

Next time I go for a blood test, I’m going to arrive at the more civilised time of 8am and walk straight in!

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – pixy.org CC0 Public Domain.

Is Risk Thinking Going Mainstream?

I sing in a chamber choir – rehearsals though have, of course, been over Zoom in recent months. I’m on the choir committee and we’ve been discussing what we might need to do to get back to singing together in the real world. And the conductor showed us a Risk Assessment that he’d been working on! I was really impressed. It showed different categories to consider for risks such as preparation for rehearsal, attendee behaviour during rehearsals, rehearsal space etc. The risks had been brainstormed. Each was scored for Likelihood and Impact, and these scores were multiplied to determine a Risk Score. Mitigations were listed to try to reduce the high Risk Scores. Then each risk was re-scored assuming the mitigation was implemented – to see whether the score was now acceptable. We went through the risk assessment process and the main mitigation actions we needed to take were:

      1. Maintain social distancing at all times and wear masks when not singing.
      2. Register all attendees for track and trace purposes.
      3. No sharing of music, pencils, water etc. Choir members need to bring their own music.
      4. Rehearsal limited to one hour, then a 15-minute break where the room is ventilated, then continue with rehearsal to prevent unacceptable build-up of aerosols. Ideally, people go outside during break (if not too cold).
      5. Clear instructions to the choir before, during and after. Including making it clear the rehearsal is not risk-free and no-one is obliged to attend.

Which I thought was pretty good.
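The scoring mechanics are simple enough to sketch in a few lines of Python. The risks, the 1–5 scales, and the acceptability threshold below are invented for illustration – they are not the choir’s actual assessment:

```python
# (risk, likelihood, impact, mitigated likelihood, mitigated impact)
risks = [
    ("aerosol build-up during singing", 4, 5, 1, 5),  # breaks + ventilation
    ("shared music and pencils",        3, 4, 1, 4),  # bring your own
    ("crowding on arrival",             3, 3, 1, 3),  # distancing + masks
]

THRESHOLD = 8  # assumed maximum acceptable Risk Score

results = []
for name, lik, imp, m_lik, m_imp in risks:
    before = lik * imp       # Risk Score = Likelihood x Impact
    after = m_lik * m_imp    # re-scored with the mitigation in place
    verdict = "acceptable" if after <= THRESHOLD else "still too high"
    results.append((name, before, after, verdict))
    print(f"{name:32s} {before:2d} -> {after:2d}  ({verdict})")
```

Re-scoring after mitigation is the key step: it shows whether the planned actions actually bring each risk down to an acceptable level, or whether further mitigation is needed.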

It really intrigued me that a small choir would be completing something like this. I helped develop the MCC’s Risk Assessment & Mitigation Management Tool 2.0 and there are interesting similarities – the brainstorming of risks, the use of Likelihood and Impact to provide a Risk Score, the mitigations, and the re-scoring to see if the Risk Score is at an acceptable level.  And there are some differences too – in particular, there is no score for Detectability. I’ve often heard at meetings in the MCC and with other clients how difficult it is in clinical trials to get people really thinking critically for risk assessments. And how challenging the risk assessment can be to complete. I wonder if COVID-19 is helping to bring the concept of risk more into the mainstream (as mentioned in an article in the New Scientist on risk budgeting) and that might make it easier for those involved in clinical trials to think this way too?

Unfortunately, within days of us completing the choir rehearsal risk assessment, the government announced a new national lockdown. Which has stopped us moving forward for now. But we’re ready when restrictions ease. Zoom rehearsals for a while longer!

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Bringing Processes into Focus

I have been leading a process integration from a merger recently. The teams provided their many long SOPs and I tried to make sense of them – but with only minimal success. So, at the first meeting (web-based of course), I said we should map the process at high level (one page) for just one of the organisations. People weren’t convinced there would be a benefit but were willing to humour me. In a two-hour meeting, we mapped the process and were also able to:

  • Mark where the existing SOPs fit in the high-level process – giving a perspective no-one had seen before
  • Highlight differences in processes between the two organisations – in actual process steps, equipment or materials
  • Discuss strengths, weaknesses and opportunities in the processes
  • Agree an action plan for the next steps to move towards harmonisation

Mapping was done using MS PowerPoint. They loved this simple approach that made sure the focus of the integration effort was on the process – after all, to quote W. Edwards Deming, “If you can’t describe what you are doing as a process, you don’t know what you’re doing.” At a subsequent meeting, reviewing another process, one of the participants had actually mapped their process beforehand – and we used that as the starting point.

Process maps are such a powerful tool in helping people focus on what matters – without getting into unnecessary detail. They help people to come to a common perspective and to highlight differences to discuss. We also use them this way at the Metrics Champion Consortium where one of the really important outcomes from mapping is the recognition of different terminology used by different organisations. We can then focus on harmonising the terminology and developing a glossary of terms that we all agree on. This reduces confusion in subsequent discussions.

Process maps are really a great tool. They are useful when complete, but so much more benefit comes from a team of people with different perspectives actually developing them. They help to bring processes into focus. And can even help with root cause analysis. If you don’t use them, perhaps you should!

For those that use process maps, what do you find as the benefits? And the challenges?

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – PublicDomainPictures from Pixabay

Are we seeing a breakthrough in clinical trial efficiency?

I joined my first CRO as an “International Black Belt” in 2005. Having come from a forward-thinking manufacturer who had been implementing six sigma and lean philosophy, I was dumbfounded by what I saw. After the first few weeks, I mentioned to a colleague that most of what seemed to happen in clinical trials was about checking because the process could not be relied on to be right the first time. Manufacturing learned in the 1980s and 1990s that checking (or “inspection” as they call it) is costly, inefficient, and ineffective. This colleague recently repeated this back to me. We’ve all seen examples in clinical trials – TMF documents being checked before sending, checked on receipt, then checked during regular QCs; reports going through endless rounds of review; data queries being raised for items that can have no impact on trial results or patients. When challenged, often the response is that we’ve always done it that way. Or that QA, or the regulators, tell us we have to do it that way. I’ve spent my career in clinical trials trying to get people to focus on the process:

    • What is the purpose?
    • What are the inputs and outputs?
    • What is the most efficient way to get from one to the other?
    • How can we measure the process and use the measurement to continuously improve?
    • What is the perspective of the “customers” of the process?
    • What should we do when a process goes wrong?

And I’ve had a number of successes along the way – the most satisfying of which is when someone has an “Aha!” moment and takes the ideas and runs with them. Mapping a process themselves to highlight where there are opportunities to improve, for example. But I do often wonder why it is so difficult to get the industry to make the significant changes that we all know it needs. Process improvement should not be seen as an optional extra. It is a necessity to stay in business. It seems unfair to blame regulators who have been pushing us along to be process focused – for example with the introduction of Quality Tolerance Limits in GCP in 2016.

COVID-19 has caused so much loss of life and impacted everybody’s lives. It has been hugely to the detriment of the people of the world. And yet, there are some positives too. In clinical trials, suddenly, people are starting to ask “how can we make this change?” rather than “why can’t we make this change?” At meetings in the Metrics Champion Consortium we have heard stories of cycle times that were thought impossible for developing a protocol; of a company that has switched from 100% Source Document Verification to 0% after reviewing evidence of the ineffectiveness of the process; and of companies implementing remote and centralized monitoring in record time. There are some great examples from the COVID-19 RECOVERY trial in the UK. And, at the same time, pharmaceuticals and the associated clinical trials are seen as critical to helping us turn the corner of the pandemic.

Let’s hope this new-found momentum to improve continues in our industry when this pandemic is finally declared over. And we can bring new therapies to patients much quicker in the future – with less cost and with quality and safety as high or even higher than in the past. We are showing what’s possible. Let’s continue to challenge each other on that assumption that because we’ve always done things one way, we have to continue.

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – Gerd Altmann, Needpix.com

When is a test not a test?

First, I hope you are keeping safe in these disorienting times. This is certainly a time none of us will forget.

There have been lots of really interesting examples during this pandemic of the challenge of measurement. We know that science is key to us getting through this with the minimum impact and measurement is fundamental to science. I described a measurement challenge in my last post. Here’s another one that caught my eye. Deceptively simple and yet…

On 2-Apr-2020, the UK Government announced a target of 100,000 COVID-19 tests a day by the end of April. On 30-Apr-2020, they reported 122,347 tests. So they met the target, right? Well, maybe. To quote the great Donald J. Wheeler’s First Principle for Understanding Data “No data have meaning apart from their context”. So, let’s be sceptical for a moment and see if we can understand what these 122,347 counts actually are. Would it be reasonable to include the following in the total?

    • Tests that didn’t take place – but where there was the capacity to run those tests
    • Tests where a sample was taken but has not yet been reported on as positive or negative
    • The number of swabs taken within a test – so a test requiring two swabs which are both analysed counts as two tests
    • Multiple tests on the same patient
    • Test kits that have been sent out by post on that day but have not yet been returned (and may never be returned)

You might think that including some of these is against the spirit of the target of 100,000 COVID-19 tests a day. Of course, it depends what the question is that the measurement is trying to answer. Is it the number of people who have received test results? Or is it the number of tests supplied (whether results are in or not)? In fact, you could probably list many different questions – each that would give different numbers. Reporting from the Government doesn’t go into all this detail so we’re not sure what they include in their count. And we’re not really sure what question they are asking.

And these differences aren’t just academic. The 122,347 tests include 40,369 test kits that were sent on 30-Apr-2020 but had not been returned (yet). And 73,191 individual patients were tested i.e. a significant number of tests were repeat tests on the same patients.
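Putting the post’s figures side by side shows how much the answer depends on the definition. The labels below, and the assumption that subtracting the unreturned postal kits gives “tests actually processed”, are my own illustration:

```python
reported_total   = 122_347  # the headline count for 30-Apr-2020
kits_posted_only = 40_369   # kits posted that day, not yet returned
people_tested    = 73_191   # distinct individuals tested

TARGET = 100_000

# Three defensible definitions of "tests a day", three different answers.
definitions = {
    "headline count (incl. posted kits)": reported_total,
    "tests actually processed":           reported_total - kits_posted_only,
    "individual people tested":           people_tested,
}

for name, count in definitions.items():
    met = "meets" if count >= TARGET else "misses"
    print(f"{name:35s} {count:7,d}  {met} the target")
```

One dataset, three reasonable definitions – and only one of them clears the 100,000 bar.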

So, we should perhaps not take this at face value, and we need to ask a more fundamental question – what is the goal we are trying to achieve? Then we can develop measurements that focus on telling us whether the goal has been achieved. If the goal is to have tests performed for everyone that needs them then a simple count of number of tests is not really much use on its own.

As to whether it is wise to set an arbitrary target for a measurement which seems of limited value? To quote Nicola Stonehouse, professor in molecular virology at the University of Leeds, “In terms of 100,000 as a target, I don’t know where that really came from and whether that was a plucked out of thin air target or whether that was based on any logic.” On 6-May-2020, the UK Government announced a target of 200,000 tests a day by the end of May.

Stay safe.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – The National Guard