Are you asking the right questions?

I wrote recently about the importance of tuning up your KPIs every now and then (#KPITuneUp). When organizations ask me to review their Key Performance Indicators (KPIs), I ask them to provide not just the KPI titles but also the question each KPI is trying to answer. After all, if they are measuring something, there must be a purpose, mustn’t there? Perhaps surprisingly, people are often taken aback that I would want to know this. But if you don’t know why KPIs are being collected and reported, how do you know whether they are doing what you want them to? This is one of the many things I’ve learned from years of working with one of the most knowledgeable people around on clinical trial metrics/KPIs – Linda Sullivan. Linda was a co-founder of the Metrics Champion Consortium (now merged with the Avoca Quality Consortium) and developed the Metric Development Framework, which works really well for developing metric definitions. And for determining a set of metrics or KPIs that measure the things that really matter rather than simply the things that are easy to measure.

I have used this approach often and it can bring real clarity to the determination of which KPIs to use. One sponsor client provided me with lists of proposed KPIs from their preferred CROs. As is so often the case, they were lists of KPI titles without the questions they were aimed at answering. And the lists were largely very different between the CROs even though the same services were being provided. So, I worked with the client to determine the questions that they wanted their KPIs to answer. Then we had discussions with the CROs on which KPIs they had that could help answer those questions. This is a much better place from which to approach the discussion because it automatically focuses you on the purpose of the KPIs rather than on whether one KPI is better than another. And sometimes it highlights that you have questions which are actually rather difficult to answer with KPIs – perhaps because the data is not currently collected. Then you can start with a focus on the KPIs where the data is accessible and look to add additional ones if/when more data becomes accessible.

As an example:

    • Key question: Are investigators being paid on time?
    • Originally proposed KPI: Number of overdue payments at month end
    • Does the proposed KPI help answer the key question? No. Because it counts only overdue payments but doesn’t tell us how many were paid on time.
    • New proposed KPI: Proportion of payments made in the month that were made on time
    • Does this new proposed KPI help answer the key question? Yes. A number near 100% is clearly good whereas a low value is problematic.

In this example, we’ve rejected the originally proposed KPI and come up with a new definition. There is more detail to go into, of course, such as what “on time” really means and how an inaccurate invoice is handled for the KPI calculation. And what should the target be? But the approach focuses us on what the KPI is trying to answer. It’s the key questions you have to agree on first!
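
To make the definition discussion concrete, here is a minimal sketch in Python of how such a KPI might be computed. The data structure and field names are my own illustrative assumptions – real payment systems will differ, and what “on time” means would need to be agreed first:

```python
from datetime import date

def on_time_payment_rate(payments):
    """Proportion of payments made in the period that were made on time.

    `payments` is a list of (due_date, paid_date) pairs for payments
    actually made in the month -- an illustrative structure, not a standard.
    """
    if not payments:
        return None  # no payments made: the KPI is undefined, not 100%
    on_time = sum(1 for due, paid in payments if paid <= due)
    return on_time / len(payments)

# Example: 3 of 4 payments made on or before their due date -> 75%
november_payments = [
    (date(2022, 11, 10), date(2022, 11, 8)),
    (date(2022, 11, 15), date(2022, 11, 15)),
    (date(2022, 11, 20), date(2022, 11, 28)),  # overdue when paid
    (date(2022, 11, 30), date(2022, 11, 29)),
]
print(f"{on_time_payment_rate(november_payments):.0%}")  # 75%
```

Note how the count of overdue payments alone (one, in this made-up example) could not tell you whether performance was good or bad – the proportion can.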

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. If you have the key questions they’re trying to answer, that’ll be a help! #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image: rawpixel CC0 (Public Domain)


Is it time your Vendor/CRO KPIs had a tune up?

As the late, great Michael Hammer once said in The Seven Deadly Sins of Measurement, “…there is a widespread consensus that [companies] measure too much or too little, or the wrong things, and that in any event they don’t use their metrics effectively.” Hammer wrote this in 2007 and I suspect many would think it still rings true today. What are Hammer’s deadly sins?

  1. Vanity – measuring something to make you look good. In a culture of fear, you want to make sure your KPIs are not going to cause a problem. So best to make sure they can’t! If you use KPIs to reward or punish, then you’re likely to have some of these: the KPIs that are always green, such as the percent of key team member handovers with handover meetings. The annualized percent of key staff turnover, though, might not be so green.
  2. Provincialism – sub-optimising by focusing on what matters to you but not on the overall goal. The classic example in clinical trials (which was in the draft of E8 R1 but was removed from the final version) is the race to First Participant In. Race to get the first participant in, but then have a protocol amendment because the protocol was poorly designed in the rush. We should not encourage people to rush to fail.
  3. Narcissism – not measuring from the customer’s perspective. This is why it is important to consider the purpose of the KPI: what is the question you are trying to answer? If you want investigators to be paid on time, then measure the proportion of payments that are made accurately and on time. Don’t measure the average time from payment approved to payment made as a KPI.
  4. Laziness – not giving it enough thought or effort. Selecting the right metrics, defining them well, verifying them, and empowering those using them to get the most value from them all need critical thinking. And critical thinking needs time. It also needs people who know what they are doing. A KPI that is a simple count at month end of overdue actions is an example of this. What is it for? How important are the overdue actions? Maybe they are a tiny fraction of all actions, or maybe they are most of them. Better to measure the proportion of actions being closed on time. This focuses on whether the process is performing as expected.
  5. Pettiness – measuring only a small part of what matters. OK, so there was an average of only 2 findings per site audit in the last quarter. But how many site audits were there? How many of the findings were critical or major? Maybe one of the sites audited had 5 major findings and is the largest recruiting site for the study.
  6. Inanity – measuring things that have a negative impact on behaviour. I have come across examples where trying to drive CRAs to submit Monitoring Visit Reports within 5 days of a monitoring visit led to CRAs submitting blank reports just to meet the timeline. It gets even worse if KPIs are used for reward or punishment – people will go out of their way to make sure they meet the KPI by any means possible. Rather than focus effort on improving the process and being innovative, they will put their effort into making sure the target is met at all costs.
  7. Frivolity – not being serious about measurement. I have seen many organizations do this. They want KPIs because numbers give an illusion of control. Any KPIs will do, as long as they look vaguely reasonable. And people guess at targets. But no time is spent on why the KPIs are needed and how they are to be used. Let alone on training people in the skills needed. Without this, KPIs are a waste of resource and effort.

I think Hammer’s list is a pretty good one and covers many of the problems I’ve seen with KPIs over the years.

How well do your KPIs work between you and your CRO/vendor? Does it take enormous effort to gather them ready for the governance meeting, only for them to get a cursory review before moving to the next topic? Do you really use your KPIs to help achieve the overall goals of the relationship? Have you got the right ones? Do you and your staff know what they mean and how to use them?

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Robert Couse-Baker, PxHere (CC BY 2.0)

Don’t let metrics distract you from the end goal!

We all know the fable of the tortoise and the hare. The tortoise won the race by taking things at a steady pace and keeping sight of the finish, rather than rushing off and taking its eye off the end goal as the hare did. Metrics and how they are used can drive the behaviours we want – but also behaviours that mean people take their eye off the end goal. As is often said, what gets measured gets managed – and we all know metrics can influence behaviour. When metrics are well designed, focused on answering important questions, and backed by targets that make clear to a team what is important, they can really help focus effort. If the rejection rate for documents being submitted to the TMF is set to be no greater than 5% but is tracking well above that, then effort can be focused on trying to understand why. Maybe there are particular errors such as missing signatures, or there is a particular document type that is regularly rejected. If a team can get to the root causes, then they can implement solutions to improve the process and see the metric improve. That is good news – metrics can be used as a great tool to empower teams. Empowering them to understand how the process is performing and where to focus their effort for improvement. With an improved, more efficient process with fewer errors, the end goal of a contemporaneous, high-quality, complete TMF is more likely to be achieved.
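
As a sketch of how a team might look behind such a metric, here is a toy breakdown of rejections by reason. The reason categories and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical TMF submission log: None = accepted, else the rejection reason
submissions = (
    ["missing signature"] * 40
    + ["wrong document type"] * 25
    + ["illegible scan"] * 10
    + [None] * 925
)

rejected = [reason for reason in submissions if reason is not None]
rate = len(rejected) / len(submissions)
print(f"Rejection rate: {rate:.1%} (target: no greater than 5.0%)")  # 7.5%

# A simple Pareto view: which reasons account for most of the rejections?
for reason, count in Counter(rejected).most_common():
    print(f"  {reason}: {count} ({count / len(rejected):.0%})")
```

In this invented example, missing signatures account for over half the rejections – exactly the kind of clue a team needs to find a root cause.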

But what if metrics and their associated targets are used for reward or punishment? We see this happen when metrics are used for personal performance goals. People will focus on those metrics to make sure they meet the targets at almost any cost! If individuals are told they must meet a target of less than 5% for documents rejected when submitted to the TMF, they will meet it. But they may bend the process and add inefficiency in doing so. For example, they may decide only to submit the documents they know are going to be accepted and leave the others to be sorted out when they have more time. Or they may avoid submitting documents at all. Or perhaps they might ask a friend to review the documents first. Whatever the approach, it is likely to disrupt the smooth flow of documents into the TMF by causing bottlenecks. And these workarounds are being done ‘outside’ the documented process – sometimes termed the ‘hidden factory’. Now the measurement is measuring a process of which we no longer know all the details – it is different to the SOP. The process has not been improved, but rather made worse. And the more complex process is liable to lead to a TMF that is no longer contemporaneous and may be incomplete. But the metric has met its target. The rush to focus on the metric to the exclusion of the end goal has made things worse.

And so, whilst it is good news that in the adopted ICH E8 R1, there is a section (3.3.1) encouraging “the establishment of a culture that supports open dialogue” and critical thinking, it is a shame that the following section in the draft did not make it into the final version:

“Choose quality measures and performance indicators that are aligned with a proactive approach to design. For example, an overemphasis on minimising the time to first patient enrolled may result in devoting too little time to identifying and preventing errors that matter through careful design.”

There is no mention of performance indicators in the final version, nor of the rather good example of a metric that is likely to drive the wrong behaviour – time to first patient enrolled. What is the value in racing to get the first patient enrolled if the next patient isn’t enrolled for months? Or if a protocol amendment ends up being delayed, leading to an overall delay in completing the trial? More haste, less speed.

It can be true that what gets measured gets managed – but it will only be managed well when a team is truly empowered to own the metrics, the targets, and the understanding and improvement of the process. We have to move away from command and control to supporting and trusting teams to own their processes and associated metrics, and to make improvements where needed. We have to be brave enough to allow proper planning and risk assessment and control to take place before rushing to get to first patient. Let’s use metrics thoughtfully to help us on the journey and make sure we keep our focus on the end goal.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – openclipart.org

Why Do Metrics Always Lie?

We’ve all come across the phrase “Lies, Damned Lies, & Statistics”, popularised by Mark Twain in the nineteenth century. And we’re used to politicians using metrics and statistics to prove any point they want. See my previous example of COVID test numbers – or “number theatre”, as Professor Sir David Spiegelhalter calls it. His critique, delivered to the UK Parliament, of the metrics used in the UK government’s COVID briefings is sobering reading. We’re right to be sceptical of the metrics we see. But we should avoid moving from scepticism to cynicism. Unfortunately, because we see so many examples of the misuse of metrics, we can end up mistrusting all of them and not believing anything.

Metrics can tell us real truths about the world. Over 150 years ago, Florence Nightingale used metrics to demonstrate that more British soldiers were dying in the Crimean War from disease than from fighting. Her use of data eventually saved thousands of lives. Similarly, Richard Doll and Austin Bradford Hill demonstrated in 1954 the link between smoking and lung cancer. After all, science relies on the use of data and metrics to prove or disprove theories and to progress.

So we should be sceptical when we see metrics being used – we should especially ask who is presenting them and how impartial they might be. We should use our critical thinking skills and not simply accept them at face value. What question is the metric trying to answer? Spiegelhalter and others argue for five principles for trustworthy evidence communication:

    • Inform, not persuade
    • Offer balance but not false balance
    • Disclose uncertainties
    • State evidence quality
    • Pre-empt misinformation

If everyone using metrics followed these principles, then maybe we would no longer be talking about how metrics lie – but rather about the truths they can reveal.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Image by D Miller from Pixabay

Is this a secret way to engage employees?

In circles of people discussing continuous improvement methodologies and approaches, there is often talk about the need to engage employees in these activities. That is of course true; after all, the people doing the day-to-day work are the most likely to know how to delight customers, cut unnecessary expenses, make processes faster, and so on. They understand the details, and when given the right support (time, tools, expert help, encouragement, etc.) they can make huge improvements. So, we need to engage them in the activities, right?

Well, yes. But I think that’s the wrong way round to look at it. Being involved in continuous improvement and making processes better for you, your colleagues, and your customers can be enormously satisfying. I see this again and again in improvement teams. Once a team is truly empowered, they love taking time to think more deeply about the processes they work in and how they can modify them to work better. And actually making a difference. I was reminded of this at the recent MCC summit. There was real excitement in the live discussions on how to understand and use metrics to improve processes. Indeed, I was contacted afterwards by someone who told me it had reignited his passion in this area and brought a real bright spot into what had become a dull job. Involving people in this way also improves process ownership and buy-in for change, and can improve the measurement of processes too.

According to Gallup’s State of the Global Workplace report 2021, only 20% of employees worldwide are engaged in their jobs – meaning that they are emotionally invested in committing their time, talent and energy in adding value to their team and advancing the organization’s initiatives. In Western Europe, engagement is at 11%!

Rather than thinking about how we can get employees engaged in continuous improvement, maybe we could get employees more engaged in their roles by supporting them to be involved in continuous improvement efforts? How can you free up your employees to improve processes and, at the same time, engage them more in their daily work?

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: RobinHiggins, Pixabay

Training for KPIs – Why Bother?

I was facilitating a brainstorm session recently as part of a discussion on the challenges of using Key Performance Indicators (KPIs). People spend a lot of time deciding on their KPIs and often horse-trading over targets. But we were discussing how people actually use the KPIs. After all, KPIs are not an end in themselves. They are there to serve a purpose. To shed light on processes and performance and help move people from the subjective to the objective.

The brainstorm raised lots of good ideas such as:

    • The importance of getting senior level buy-in
    • Regular review of the KPIs
    • Rules on actions to take if there are more than a certain number of “red” KPIs
    • The importance of making sure the definitions are clear

But no-one raised the question of training. I found this intriguing. Do people think that once you have a set of KPIs being reported, everyone somehow automatically knows how to use them? I’m not at all convinced that everyone does. Sometimes, teams spend a whole meeting debating the possible reasons why this month’s KPI value is slightly lower than last month’s. They don’t like the idea that perhaps it’s just noise and is unlikely to be worth the time investigating (until we see an actual trend). I’ve been in meetings where most of the KPIs are red and managers throw up their hands because they don’t know what to do. They just hope next month is better. Followers of this blog will know that the Pareto Principle would help here. Or maybe the manager gets frustrated and tells the staff the numbers have to get better… which you can always do by playing with definitions rather than actually improving the underlying processes.

There are opportunities to learn more about interpreting data – such as books by Tim Harford, Don Wheeler, and Davis Balestracci; or workshops such as at the upcoming MCC vSummit, or even from DMPI – but I wonder whether it’s a case of people not knowing what they don’t know. Interpreting KPI data isn’t easy. It needs critical thinking and careful consideration. We should not simply accept the adage “Lies, damned lies, and statistics!”
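
One of the simplest tools from Wheeler’s books for telling noise from signal is the XmR (individuals) chart. Here is a minimal sketch with made-up monthly values; the 2.66 constant is Wheeler’s standard scaling factor for moving ranges:

```python
def xmr_limits(values):
    """Natural process limits for an XmR chart: mean +/- 2.66 * average
    moving range (per Wheeler). Points outside the limits suggest a signal;
    points inside are indistinguishable from routine noise."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Made-up monthly KPI values, e.g. % of actions closed on time
kpi = [92, 95, 91, 94, 93, 90, 94, 92, 95, 75]
lo, hi = xmr_limits(kpi)
print(f"Natural process limits: {lo:.1f} to {hi:.1f}")
for month, value in enumerate(kpi, start=1):
    flag = "  <-- signal: worth investigating" if not lo <= value <= hi else ""
    print(f"month {month:2d}: {value}{flag}")
```

On this made-up data, the month-to-month wobbles between 90 and 95 are just noise – only the drop to 75 is a signal worth a meeting.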

If people are not trained to use and interpret KPIs, should you bother with collecting and reporting KPIs at all?

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: Tumisu, pixabay

KPIs: What’s not to like?

Many organizations set Key Performance Indicators (KPIs) to monitor their performance against an overall goal or target. Makes sense, surely, to monitor progress with something tangible. And they can be very effective. But there are a lot of pitfalls. And I’m not convinced they work for all circumstances.

A major pitfall in implementing KPIs and targets is an overly top-down approach. Every department or function is told it must have a set of KPIs with targets. After all, this will ensure everyone is accountable. And there will be lots of data showing KPIs against targets for Management to review. When these requests come through, most people just shrug their shoulders and mouth “here we go again,” or something less polite. They put together some KPIs with targets that will be easy to achieve and hope that will keep Management quiet. After a bit of horse-trading, they agree slightly tougher targets and hope for the best.

Or even worse, Management wants to “hold their feet to the fire” and imposes KPIs and targets on each department. They require the cycle time of site activation to be reduced by 20%, or 20% more documents to be processed with the same resource, for example. This leads to much time spent on the definitions – what can be excluded, what should be included, and how we can be ingenious and make sure the KPI meets the target regardless of the impact on anything else. We can, after all, work people much harder to do more in less time. But the longer-term consequences can be detrimental as burnout leads to sickness, resignations, and the loss of in-depth knowledge about the work.

This is an exercise in futility. It is disrespectful to the people working in the organization. It is wasting the time, ingenuity, and talent of those doing the work – those creating value in the organization. “The whole notion of targets is flawed. Their use in a hierarchical system engages people’s ingenuity in managing the numbers instead of improving their methods,” according to John Seddon in Freedom from Command & Control.  Rather than understanding the work as a process and trying to improve it, they spend their time being ingenious about KPIs that will keep Management off their backs and making sure they meet the targets at whatever cost. There are plenty of examples of this and I’ve described two in past posts – COVID testing & Windrush.

Much better is for the team that owns the work to use metrics to understand that work. To set their own KPIs and goals based on their deep understanding. And to be supported by Management all the way in putting in the hard graft of process improvement. As W. Edwards Deming said, “There is no instant pudding!” Management should be there to support those doing the work, those adding value. They should set the framework and direction but truly empower their workforce to use metrics and KPIs to understand and improve performance. Longer term, that’s better for everyone.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: KPI Board by Anna Sophie from the Noun Project

Why do I end up waiting so long?

I visited my local hospital recently for a blood test. The test centre opened at 7am so I aimed to arrive at that time hoping to be seen straight away. There are often queues there. When I arrived, there was already a queue of around 10 people! We all had temperature checks, took a numbered ticket, and sat in the socially distanced waiting room. And I was impressed to see an information board that announced the next ticket number to go for the blood draw and also the average wait time. The average wait time was only a few minutes. As I sat there, the average wait time gradually crept up. In the end, I waited for 25 minutes before I was seen. But the average wait time still showed as only 15 minutes. What was going on?

When you learn French, there is a term “faux amis” (false friends). These are words that are the same as, or similar to, English words but actually mean something different. For example, attendre means to wait for rather than to attend, brasserie is not a type of lingerie but a bar, and pub is an advertisement. Metrics can be rather like this. Superficially, the average wait time in a queue is a really useful metric to know when you start queueing for something. After all, you would expect to wait around the average wait time, wouldn’t you? Time to run a simple Excel model to investigate further! Below you see the arrival times of patients at the hospital. I am highlighted as person 10. After 7am, there was a slow but steady stream of people, so I have them arriving every 5 minutes. I estimated the time for each blood draw to be 3 minutes, and so you can see when each blood draw took place and the wait time for each individual. But look at the average wait time. We don’t know how the hospital defined it exactly, but I’m guessing they took the previous patients that day and calculated the mean wait time – which is what I’ve done here. There are only 9 patients whose actual wait time is within 5 minutes of the average wait time (shown in green). And I’m shown as patient 10, with the longest wait time and the greatest difference from the average wait time. The average wait time is like a faux ami – it appears to tell you one thing but actually tells you something else. There may be value in the metric. But not for those joining a queue.
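
For readers without the spreadsheet, here is a rough reconstruction of that Excel model in Python, using the same assumptions: around 10 people already queueing at opening, one new arrival every 5 minutes after that, and 3 minutes per blood draw. The exact numbers won’t match my visit, but the pattern does:

```python
SERVICE_MINUTES = 3  # estimated time per blood draw

# Arrival times in minutes after 7am: 10 people queueing at opening,
# then a slow but steady stream arriving every 5 minutes.
arrivals = [0] * 10 + [5 * i for i in range(1, 11)]

free_at = 0   # when the phlebotomist is next free
waits = []    # wait times of patients already seen
for patient, arrived in enumerate(arrivals, start=1):
    start = max(arrived, free_at)
    wait = start - arrived
    # Our guess at the board's definition: the mean wait of the
    # patients seen so far that day.
    board = sum(waits) / len(waits) if waits else 0.0
    print(f"patient {patient:2d}: actual wait {wait:2d} min, "
          f"board shows {board:4.1f} min")
    waits.append(wait)
    free_at = start + SERVICE_MINUTES
```

In this reconstruction, patient 10 waits 27 minutes while the board, averaging over the nine people already seen, shows just 12. The board’s number is accurate – it simply answers a different question from the one a new arrival is asking.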

When I join a queue, I’m interested in how long I might have to wait. You can estimate that by taking the time to process each person in the queue and multiplying by the number of people in front of you. In this case, the estimate would be 27 minutes for me rather than the few minutes that the average wait time metric suggested. I am impressed that the hospital thought to include a metric. But perhaps they need to think more about the purpose of the metric and a better definition. The metric should try to answer the question “How long am I likely to have to wait?”

Next time I go for a blood test, I’m going to arrive at the more civilised time of 8am and walk straight in!

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – pixy.org CC0 Public Domain.

When is a test not a test?

First, I hope you are keeping safe in these disorienting times. This is certainly a time none of us will forget.

There have been lots of really interesting examples during this pandemic of the challenge of measurement. We know that science is key to us getting through this with the minimum impact and measurement is fundamental to science. I described a measurement challenge in my last post. Here’s another one that caught my eye. Deceptively simple and yet…

On 2-Apr-2020, the UK Government announced a target of 100,000 COVID-19 tests a day by the end of April. On 30-Apr-2020, they reported 122,347 tests. So they met the target, right? Well, maybe. To quote the great Donald J. Wheeler’s First Principle for Understanding Data “No data have meaning apart from their context”. So, let’s be sceptical for a moment and see if we can understand what these 122,347 counts actually are. Would it be reasonable to include the following in the total?

    • Tests that didn’t take place – but where there was the capacity to run those tests
    • Tests where a sample was taken but has not yet been reported on as positive or negative
    • The number of swabs taken within a test – so a test requiring two swabs which are both analysed counts as two tests
    • Multiple tests on the same patient
    • Test kits that have been sent out by post on that day but have not yet been returned (and may never be returned)

You might think that including some of these is against the spirit of the target of 100,000 COVID-19 tests a day. Of course, it depends what the question is that the measurement is trying to answer. Is it the number of people who have received test results? Or is it the number of tests supplied (whether results are in or not)? In fact, you could probably list many different questions – each that would give different numbers. Reporting from the Government doesn’t go into all this detail so we’re not sure what they include in their count. And we’re not really sure what question they are asking.

And these differences aren’t just academic. The 122,347 tests include 40,369 test kits that were sent on 30-Apr-2020 but had not been returned (yet). And 73,191 individual patients were tested i.e. a significant number of tests were repeat tests on the same patients.
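
Taking the reported figures at face value, a quick back-of-envelope calculation shows how much the definition matters:

```python
reported = 122_347          # "tests" reported for 30-Apr-2020
posted_unreturned = 40_369  # kits mailed that day but not yet returned
individuals = 73_191        # individual patients tested

carried_out = reported - posted_unreturned
print(f"Tests actually carried out: {carried_out:,}")  # 81,978

# Rough ratio, assuming the carried-out tests were on those patients
print(f"Tests per patient tested: {carried_out / individuals:.2f}")  # ~1.12
```

So, depending on the question you think is being answered, the day’s figure is 122,347, about 82,000, or about 73,000 – and only the first of those exceeds the 100,000 target.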

So, we should perhaps not take this at face value, and we need to ask a more fundamental question – what is the goal we are trying to achieve? Then we can develop measurements that focus on telling us whether the goal has been achieved. If the goal is to have tests performed for everyone that needs them then a simple count of number of tests is not really much use on its own.

And is it wise to set an arbitrary target for a measurement which seems of such limited value? To quote Nicola Stonehouse, professor in molecular virology at the University of Leeds, “In terms of 100,000 as a target, I don’t know where that really came from and whether that was a plucked out of thin air target or whether that was based on any logic.” On 6-May-2020, the UK Government announced a target of 200,000 tests a day by the end of May.

Stay safe.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – The National Guard

Metric Challenges With COVID-19

Everyone’s talking about the novel coronavirus, COVID-19. It is genuinely scary. And it’s people’s lives and livelihoods being affected. But with all the numbers flying around, I realised it’s quite a good example of how metrics can be mis-calculated and mislead.

For example, the apparently simple question – what is the mortality rate? – is actually really difficult to answer during an epidemic. We need to determine the numerator and the denominator to estimate it. For the numerator, the number of deaths seems the right place to start. The denominator is a little more challenging, though. Should it be the total population? Clearly not – so let’s take those who are known to be infected. But we know this will not be accurate: not everyone has been tested, some people have very mild symptoms, etc. There is also the challenge of accurate data in such a fast-moving situation. We would need to make sure the data for the numerator and denominator are both as accurate as possible at the same time point.

Once the epidemic has run its course, scientists will be able to determine the actual mortality rate. For example, if tests can be developed to determine population exposure (testing for antibodies to COVID-19), they will be able to make a much better estimate of the mortality rate.

But during the epidemic, there is another challenge with this metric. It actually impacts the numerator. We don’t know whether those who are infected and not yet recovered will die. It can take 2-8 weeks to know the outcome. Some of those infected will sadly die from their infection in the future. And so, the numerator is actually an underestimate.

As we measure processes in clinical trials, we can have similar issues with metrics. If we are trying to use metrics to predict the final drop-out rate from an ongoing trial (patients who discontinue treatment during the trial), dividing the number of drop-outs to-date by the number of patients randomized will be a poor (low) estimate. A patient who has just started treatment will have had little chance to drop out. But a patient who has nearly completed treatment is unlikely to drop out. At the end of the trial, the drop-out rate will be easy to calculate. But during the trial, we need to take account of the amount of time patients have been in treatment. We should weight a patient more if they have completed, or nearly completed treatment. And less if they have just started. We would also want to be sure that the numerator and denominator were accurate at the same time point. If data on drop-outs is delayed then again, our metric will be too low. By considering carefully the way we calculate the metric, we can ensure that we have a leading indicator that helps to predict the final drop-out rate (assuming things stay as is). That might provide an early warning signal so that action can be taken early to reduce a drop-out rate that would otherwise end up invalidating the trial results.
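
As a sketch of one simple way to build that weighting – an illustration of the idea, not a validated estimator – each randomized patient can be weighted by the fraction of the planned treatment period they have completed, with drop-outs (whose outcome is already known) counting in full:

```python
PLANNED_DAYS = 180  # planned treatment duration -- hypothetical

# (days on treatment so far, dropped out?) for each randomized patient;
# made-up mid-trial data for illustration.
patients = [(180, False), (170, True), (90, False), (30, False),
            (120, True), (10, False), (180, False), (60, False)]

dropouts = sum(1 for _, dropped in patients if dropped)

# Naive estimate: drop-outs / randomized. Biased low mid-trial because
# recently randomized patients have had little chance to drop out yet.
naive = dropouts / len(patients)

# Weighted estimate: the denominator is "effective patients at risk".
# Drop-outs count fully; ongoing patients count as the fraction of the
# planned treatment period they have completed.
at_risk = sum(1.0 if dropped else min(days / PLANNED_DAYS, 1.0)
              for days, dropped in patients)
weighted = dropouts / at_risk

print(f"naive: {naive:.1%}, exposure-weighted: {weighted:.1%}")
# At the end of the trial every weight is 1.0 and the two estimates agree.
```

On this made-up data the naive estimate is 25% while the weighted estimate is around 33% – a materially different early warning signal.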

In the meantime, let’s hope the news of this virus starts to improve soon.

Much more detailed analysis of the Case Fatality Rate of COVID-19 is available here.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.