Are you asking the right questions?

I wrote recently about the importance of tuning up your KPIs every now and then (#KPITuneUp). When organizations ask me to review their Key Performance Indicators (KPIs), I ask them to provide the question each KPI is trying to answer as well as the KPI titles. After all, if they are measuring something, there must be a purpose, mustn’t there? Perhaps surprisingly, people are often taken aback that I would want to know this. But if you don’t know why KPIs are being collected and reported, how do you know whether they are doing what you want them to? This is one of the many things I’ve learned from years of working with one of the most knowledgeable people around on clinical trial metrics/KPIs – Linda Sullivan. Linda was a co-founder of the Metrics Champion Consortium (now merged with the Avoca Quality Consortium) and developed the Metric Development Framework, which works really well both for developing metric definitions and for determining a set of metrics or KPIs that measure the things that really matter rather than simply the things that are easy to measure.

I have used this approach often and it can bring real clarity to the determination of which KPIs to use. One sponsor client I worked with provided me with lists of proposed KPIs from their preferred CROs. As is so often the case, they were lists of KPI titles without the questions they were aimed at answering. And the lists were largely very different between the CROs even though the same services were being provided. So, I worked with the client to determine the questions that they wanted their KPIs to answer. Then we had discussions with the CROs on which KPIs they had that could help answer those questions. This is a much better place from which to approach the discussion because it automatically focuses you on the purpose of the KPIs rather than on whether one KPI is better than another. And sometimes it highlights that you have questions which are actually rather difficult to answer with KPIs – perhaps because the data is not currently collected. Then you can start with a focus on the KPIs where the data is accessible and look to add additional ones if/when more data becomes available.

As an example:

    • Key question: Are investigators being paid on time?
    • Originally proposed KPI: Number of overdue payments at month end
    • Does the proposed KPI help answer the key question? No. Because it counts only overdue payments but doesn’t tell us how many were paid on time.
    • New proposed KPI: Proportion of payments made in the month that were made on time
    • Does this new proposed KPI help answer the key question? Yes. A number near 100% is clearly good whereas a low value is problematic.

In this example, we’ve rejected the originally proposed KPI and come up with a new definition. There is more detail to go into, of course, such as what “on time” really means and how an inaccurate invoice is handled for the KPI calculation. And what should the target be? But the approach focuses us on what the KPI is trying to answer. It’s the key questions you have to agree on first!
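To make the arithmetic concrete, the new KPI is just a simple proportion. A minimal sketch in Python, using entirely hypothetical payment data (the variable names and values are illustrative, not from any real system):

```python
# Hypothetical payment records: days relative to the agreed due date.
# Zero or negative = paid on time; positive = paid late.
payments_days_late = [-3, 0, 2, -1, 0, 14, -5, 0]

# KPI: proportion of payments made in the month that were made on time.
on_time = sum(1 for d in payments_days_late if d <= 0)
kpi = on_time / len(payments_days_late) * 100
print(f"On-time payment proportion: {kpi:.0f}%")  # 6 of 8 on time -> 75%
```

Note how this definition forces the very questions raised above: the code has to decide what “on time” means (here, on or before the due date) and what population to count (here, all payments made in the month).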

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. If you have the key questions they’re trying to answer, that’ll be a help! #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image: rawpixel CC0 (Public Domain)

#KPITuneUp

Is it time your Vendor/CRO KPIs had a tune up?

As the late, great Michael Hammer once said in The Seven Deadly Sins of Measurement, “…there is a widespread consensus that [companies] measure too much or too little, or the wrong things, and that in any event they don’t use their metrics effectively.” Hammer wrote this in 2007 and I suspect many would think it still rings true today. What are Hammer’s deadly sins?

  1. Vanity – measuring something to make you look good. In a culture of fear, you want to make sure your KPIs are not going to cause a problem. So best to make sure they can’t! If you use KPIs to reward/punish then you’re likely to have some of these. These are the KPIs that are always green, such as the percent of key team member handovers with handover meetings. Maybe the annualized percent of key staff turnover would not be so green.
  2. Provincialism – sub-optimising by focusing on what matters to you but not the overall goal. The classic example in clinical trials (which was in the draft of E8 R1 but was removed in the final version) is the race to First Participant In. Race to get the first one but then have a protocol amendment because the protocol was poorly designed in the rush. We should not encourage people to rush to fail.
  3. Narcissism – not measuring from the customer’s perspective. This is why it is important to consider the purpose of the KPI, what is the question you are trying to answer? If you want investigators to be paid on time, then measure the proportion of payments that are made accurately and on time. Don’t measure the average time from payment approved to payment made as a KPI.
  4. Laziness – not giving it enough thought or effort. Selecting the right metrics, defining them well, verifying them, and empowering those using them to get the most value from them all require critical thinking. And critical thinking needs time. It also needs people who know what they are doing. A KPI that is a simple count at month end of overdue actions is an example of this. What is it for? How important are the overdue actions? Maybe they are a tiny fraction of all actions or maybe they are most of them. Better to measure the proportion of actions being closed on time. This focuses on whether the process is performing as expected.
  5. Pettiness – measuring only a small part of what matters. OK, so there was an average of only 2 findings per site audit in the last quarter. But how many site audits were there? How many of the findings were critical or major? Maybe one of the sites audited had 5 major findings and is the largest recruiting site for the study.
  6. Inanity – measuring things that have a negative impact on behaviour. I have come across examples of trying to drive CRAs to submit Monitoring Visit Reports within 5 days of a monitoring visit leading to CRAs submitting blank reports so that they meet the timeline. It gets even worse if KPIs are used for reward or punishment – people will go out of their way to make sure they meet the KPI by any means possible. Rather than focus effort on improving the process and being innovative, they will put their effort into making sure the target is met at all costs.
  7. Frivolity – not being serious about measurement. I have seen many organizations do this. They want KPIs because numbers give an illusion of control. Any KPIs will do, as long as they look vaguely reasonable. And people guess at targets. But no time is spent on why KPIs are needed and how they are to be used. Let alone training people on the skills needed. Without this, KPIs are a waste of resource and effort.

I think Hammer’s list is a pretty good one and covers many of the problems I’ve seen with KPIs over the years.

How well do your KPIs work between you and your CRO/vendor? Does it take a huge effort to gather them ready for the governance meeting, only for them to get a cursory review before the next topic? Do you really use your KPIs to help achieve the overall goals of a relationship? Have you got the right ones? Do you and your staff know what they mean and how to use them?

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Robert Couse-Baker, PxHere (CC BY 2.0)

Don’t let metrics distract you from the end goal!

We all know the fable of the tortoise and the hare. The tortoise won the race by taking things at a steady pace and planning for the end rather than rushing and taking their eye off the end goal. Metrics and how they are used can drive the behaviours we want but also behaviours that mean people take their eye off the end goal. As is often said, what gets measured gets managed – and we all know metrics can influence behaviour. When metrics are well-designed and are focused on answering important questions, and there are targets making it clear to a team what is important, they can really help focus efforts. If the rejection rate for documents being submitted to the TMF is set to be no greater than 5% but is tracking well above, then there can be a focus of effort to try and understand why. Maybe there are particular errors such as missing signatures, or there is a particular document type that is regularly rejected. If a team can get to the root causes then they can implement solutions to improve the process and see the metric improve. That is good news – metrics can be used as a great tool to empower teams. Empowering them to understand how the process is performing and where to focus their effort for improvement. With an improved, more efficient process with fewer errors, the end goal of a contemporaneous, high quality, complete TMF is more likely to be achieved.

But what if metrics and their associated targets are used for reward or punishment? We see this happen with metrics when used for personal performance goals. People will focus on those metrics to make sure they meet the targets at almost any cost! If individuals are told they must meet a target of less than 5% for documents rejected when submitted to the TMF, they will meet it. But they may bend the process and add inefficiency in doing so. For example, they may decide only to submit the documents they know are going to be accepted and leave the others to be sorted out when they have more time. Or they may avoid submitting documents at all. Or perhaps they might ask a friend to review the documents first. Whatever the approach, it is likely to disrupt the smooth flow of documents into the TMF by causing bottlenecks. And these workarounds are carried out ‘outside’ the documented process – sometimes termed the ‘hidden factory’. Now we are measuring a process of which we no longer know all the details – it is different to the SOP. The process has not been improved, but rather made worse. And the more complex process is liable to lead to a TMF that is no longer contemporaneous and may be incomplete. But the metric has met its target. The rush to focus on the metric to the exclusion of the end goal has made things worse.

And so, whilst it is good news that in the adopted ICH E8 R1, there is a section (3.3.1) encouraging “the establishment of a culture that supports open dialogue” and critical thinking, it is a shame that the following section in the draft did not make it into the final version:

“Choose quality measures and performance indicators that are aligned with a proactive approach to design. For example, an overemphasis on minimising the time to first patient enrolled may result in devoting too little time to identifying and preventing errors that matter through careful design.”

There is no mention of performance indicators in the final version or the rather good example of a metric that is likely to drive the wrong behaviour – time to first patient enrolled. What is the value in racing to get the first patient enrolled if the next patient isn’t enrolled for months? Or a protocol amendment ends up being delayed leading to an overall delay in completing the trial? More haste, less speed.

It can be true that what gets measured gets managed – but it will only be managed well when a team is truly empowered to own the metrics, the targets, and the understanding and improvement of the process. We have to move away from command and control to supporting and trusting teams to own their processes and associated metrics, and to make improvements where needed. We have to be brave enough to allow proper planning and risk assessment and control to take place before rushing to get to first patient. Let’s use metrics thoughtfully to help us on the journey and make sure we keep our focus on the end goal.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – openclipart.org

Training for KPIs – Why Bother?

I was facilitating a brainstorm session recently as part of a discussion on the challenges of using Key Performance Indicators (KPIs). People spend a lot of time deciding on their KPIs and often horse-trading over targets. But we were discussing how people actually use the KPIs. After all, KPIs are not an end in themselves. They are there to serve a purpose. To shed light on processes and performance and help move people from the subjective to the objective.

The brainstorm raised lots of good ideas such as:

    • The importance of getting senior level buy-in
    • Regular review of the KPIs
    • Rules on actions to take if there are more than a certain number of “red” KPIs
    • The importance of making sure the definitions are clear

But no-one raised the question of training. I found this intriguing. Do people think that once you have a set of KPIs being reported, everyone somehow automatically knows how to use them? I’m not at all convinced that everyone does know. Sometimes, teams spend a whole meeting debating the possible reasons why this month’s KPI value is slightly lower than last month’s. They don’t like the idea that perhaps it’s just noise and is unlikely to be worth the time investigating (until we see an actual trend). I’ve been in meetings where most of the KPIs are red and managers throw up their hands because they don’t know what to do. They just hope next month is better. Followers of this blog know that the Pareto Principle would help here. Or maybe the manager gets frustrated and tells the staff the numbers have to get better…which you can always do by playing with definitions rather than actually improving the underlying processes.
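One common way to separate noise from signal is the XmR process-behaviour chart popularised by Don Wheeler: if the latest value sits within natural process limits derived from the month-to-month variation, it is probably just noise. A minimal sketch, using hypothetical monthly KPI values:

```python
# Hypothetical monthly KPI values (e.g. % of documents accepted on time).
values = [92, 95, 91, 94, 93, 96, 92, 94, 93, 90]

mean = sum(values) / len(values)

# Average moving range: the typical month-to-month change.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# XmR natural process limits: mean +/- 2.66 * average moving range
# (2.66 is the standard XmR chart constant).
lower, upper = mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

latest = values[-1]
verdict = "noise - no action" if lower <= latest <= upper else "signal - investigate"
print(f"Limits: {lower:.1f} to {upper:.1f}; latest value {latest}: {verdict}")
```

Here the latest dip falls well within the limits, so a team trained in this thinking would not burn a meeting hunting for a cause that is probably just routine variation.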

There are opportunities to learn more about interpreting data – such as books by Tim Harford, Don Wheeler, Davis Balestracci; or workshops such as at the upcoming MCC vSummit or even from DMPI – but I wonder whether it’s a case of people not knowing what they don’t know? Interpreting KPI data isn’t easy. It needs critical thinking and careful consideration. We should not accept the adage “Lies, damned lies, and statistics!”

If people are not trained to use and interpret KPIs, should you bother with collecting and reporting KPIs at all?

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: Tumisu, pixabay

KPIs: What’s not to like?

Many organizations set Key Performance Indicators (KPIs) to monitor their performance against an overall goal or target. Makes sense, surely, to monitor progress with something tangible. And they can be very effective. But there are a lot of pitfalls. And I’m not convinced they work for all circumstances.

A major pitfall in implementing KPIs and targets is an overly top-down approach. Every department / function is told it must have a set of KPIs with targets. After all, this will ensure everyone is accountable. And there will be lots of data showing KPIs against targets for Management to review. When these requests come through, most people just shrug their shoulders and mouth “here we go again,” or something less polite. They put together some KPIs with targets that will be easy to achieve and hope that will keep Management quiet. After a bit of horse-trading, they agree slightly tougher targets and hope for the best.

Or even worse, Management wants to “hold their feet to the fire” and they impose KPIs and targets on each department. They require the cycle time of site activation to be reduced by 20%, or 20% more documents to be processed with the same resource, for example. This leads to much time spent on the definitions: what can we exclude, what should we include? How can we be ingenious and make sure the KPI meets the goal, regardless of the impact on anything else? We can, after all, work people much harder to do more in less time. But the longer-term consequences can be detrimental as burnout leads to sicknesses and resignations and loss of in-depth knowledge about the work.

This is an exercise in futility. It is disrespectful to the people working in the organization. It is wasting the time, ingenuity, and talent of those doing the work – those creating value in the organization. “The whole notion of targets is flawed. Their use in a hierarchical system engages people’s ingenuity in managing the numbers instead of improving their methods,” according to John Seddon in Freedom from Command & Control.  Rather than understanding the work as a process and trying to improve it, they spend their time being ingenious about KPIs that will keep Management off their backs and making sure they meet the targets at whatever cost. There are plenty of examples of this and I’ve described two in past posts – COVID testing & Windrush.

Much better is for the team that owns the work to use metrics to understand that work. To set their own KPIs and goals based on their deep understanding. And to be supported by Management all the way in putting in the hard graft of process improvement. As W. Edwards Deming said, “There is no instant pudding!” Management should be there to support those doing the work, those adding value. They should set the framework and direction but truly empower their workforce to use metrics and KPIs to understand and improve performance. Longer term, that’s better for everyone.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: KPI Board by Anna Sophie from the Noun Project