Not everything that counts can be counted!

It’s coming to that time of year again – performance appraisals! Do you know anyone who likes them? When I first became a line manager, I was lucky: I had actually received training on the importance of performance appraisals and how to run them. The time came and I was ready. Except I wasn’t ready for the wave of negativity. Employees came into my office and slumped into the chair, ready for what they clearly considered a pointless annual appraisal. I tried my best, using all the techniques I had been taught. I got feedback from peers; I considered strengths, weaknesses and opportunities. I spent hours putting the text together for the appraisal and further hours working out the SMART goals for the next year. But to no avail. Perhaps I wasn’t doing it right. I tried other ways, but somehow, over many cycles of these with different employers and employees, I have never managed to work out the right formula.

It was the same with my own appraisals – I never felt they really added much to what I already knew. And my sense of disappointment and unfairness at getting a “Meets Expectations” one year, when I thought I had achieved so much, lasted a long time.

Could it be that the annual appraisal process itself is not fit for purpose in the modern world of work? In fact, perhaps it never was. So much of what we do is teamwork that it is actually quite tricky to separate out the individual contribution. And things change so quickly in organizations: what seems important when goals are set may be irrelevant even two months later. As for SMART goals, there are many critiques of these and I have really been left questioning their value. They encourage you to set targets you know you’re going to achieve rather than challenging ones you might fail at – because the challenging ones will bring your performance rating down. I’ve been in organizations that spent months going round and round trying to agree SMART objectives for the year, only getting there by May (when the appraisal year had started in January, five months earlier!). And there were way too many goals – goals that employees only looked at when the manager reminded them.

As to the actual rating process, I have never come across one that worked well. Many seemed based on the 5-point scale (1=Unacceptable Performance, 2=Needs Improvement, 3=Meets Expectations, 4=Exceeds Expectations, 5=Exceptional Performance). As the resident data nerd in one organization, I was given all the ratings data to crunch to see what the distribution looked like. You can probably guess: 0.2% got a 1 rating, 0.8% got a 2 rating, and 0.2% got a 5 rating. In other words, 98.8% of people got a 3 or a 4. Managers tend to choose the middle ratings because it is easier – explaining the extreme values to employees and superiors is hard work.
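A crunch like that takes only a few lines. Here is a minimal sketch – the counts below are made up to mirror the percentages quoted above, not the organization’s actual data:

```python
from collections import Counter

# Hypothetical ratings from an HR export (illustrative only --
# chosen so the percentages match the distribution described above)
ratings = [1] * 2 + [2] * 8 + [3] * 600 + [4] * 388 + [5] * 2  # 1000 employees

counts = Counter(ratings)
total = len(ratings)
for rating in sorted(counts):
    pct = 100 * counts[rating] / total
    print(f"Rating {rating}: {pct:.1f}%")

# How bunched-up is the middle?
middle = 100 * (counts[3] + counts[4]) / total
print(f"Middle ratings (3 or 4): {middle:.1f}%")  # -> 98.8%
```

If your own distribution looks like this, the rating scale is not really discriminating between performance levels at all.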

Of course, good managers do much better: they manage performance on an ongoing basis, and for them the annual performance appraisal becomes little more than an administrative burden. Isn’t it about time we got rid of this failed approach and had managers managing performance with their employees continuously? They could talk about strengths, weaknesses, opportunities and ways to grow without worrying about comparing individuals using numbers.

With employees and managers hating the process of annual performance appraisals, isn’t it about time we ditched them in favour of a continuous assessment approach and an ongoing focus on goals – for both the employee and organization? If you’re wondering how that might work, take a look here.

A phrase often wrongly attributed to Einstein, but actually thought to be from the sociologist William Bruce Cameron, should give us pause for thought when using ratings for annual performance appraisals: “not everything that can be counted counts, and not everything that counts can be counted”.

Want to learn more about using KPIs correctly? Drop me a line! Or take a look at the training opportunities.

 

Picture: Rizkyharis  CC BY-SA 4.0

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

Don’t waste people’s time on root cause analysis

In an earlier post, I described a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of injections of the trial vaccine have actually been administered after the expiry date of the vials, and this has happened at several sites. I then described actions you might take without the need for root cause analysis (RCA), such as: review the medical condition of the subjects affected, review stability data to try to estimate the risk, ask CRAs to check expiry dates on all vaccine at sites on their next visit, and remind all sites of the need to check the expiry date prior to administering the vaccine. So if you were now to go through the time and effort of a DIGR® RCA and you still ended up with these and similar actions, why did you bother with the RCA? RCA should lead to actions that tackle the root cause and try to stop the issue recurring – to help you sleep at night. If you or your organization is not going to implement actions based on the RCA, then don’t carry out the RCA. A couple of (real) examples from an office environment might help to illustrate the point.

In a coffee area there are two fridges for people to store milk, their lunch and so on. One of them has a sign on it. The sign is large and very clear: “Do not use”. And yet, if you open the fridge, you will see milk and people’s lunches in it. No-one takes any notice of the notice. But why not? In human factors analysis, the error occurring as people ignore the sign is a routine non-compliance. Most people don’t pay much attention to signs around the office, and this is just another sign that no-one takes notice of. Facilities Management occasionally sends out a moaning email telling people not to use the fridge, but again no-one really takes any notice.

What is interesting is that the sign also contains some root cause analysis. Underneath “Do not use” in small writing it states “Seal is broken and so fridge does not stay cold”. Someone had noticed at some point that the temperature was not as cold as it should be and root cause analysis (RCA) had led to the realisation that a broken seal was the cause. So far, so good. But the action following this was pathetic – putting up a sign telling people not to use it. Indeed, when you think about it, no RCA was needed at all to get to the action of putting up the sign. The RCA was a waste of time if this is all it led to. What should they have done? Replaced the seal perhaps. Or replaced the fridge. Or removed the fridge. But putting a sign up was not good enough.

The second example is a case of regular slips on the hall floors outside the elevators – including one minor concussion. An RCA was carried out, and the conclusion was that the slips were due to wet surfaces: the time people left the office coincided with the floors being cleaned. So the solution was to make sure there were more of the yellow signs warning of slips at cleaning time. But slips still occurred – because people tended to ignore the signs. A better solution might have been to change the time of the cleaning or to put an anti-slip coating on the floor. There’s no point in spending time determining the root cause unless you think beyond the root cause to consider options that might really make a difference.

Root cause analysis is not always easy and it can be time-consuming. The last thing you want to do is waste the output by not using it properly. Always ask yourself: could I have taken this action before I knew what the root cause was? If so, then you are clearly not using the results of the RCA, and it is likely your action on its own will not be enough. Using this approach might help you to determine whether “retraining” is a good corrective action. I will talk more about this in a future post.

Here’s a site I found with a whole series of signs that help us understand one of the reasons signs tend to be ignored. Some of them made me cry with laughter.

 

Photo: Hypotheseyes CC BY-SA 4.0


DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Where’s My Luggage?

On a recent flight, I had a transfer in Dublin. My arriving flight was delayed as there weren’t enough available stands at the airport. I made it to my connecting flight but evidently my hold luggage did not. Have you ever been there? Standing by the baggage reclaim, watching the bags come out. Slowly they are collected by their owners, who disappear off, until you are left watching the one or two unclaimed bags go round and round – and yours is not among them. Not great.

The process of finding my luggage and delivering it home the next day was actually all pretty efficient. I filled in a form, my details were entered in the system and then I got regular updates via email and text on what was happening. The delivery company called me 30 minutes before arriving at my house to check I was in. But it was still frustrating not having my luggage for 24 hours. It got me thinking…

How often does this happen? Apparently, on average, less than 1% of bags are lost. Given the number of bags flown, though, that’s still a lot – and it explains why the process of locating and delivering them seems so well refined, with specific systems to track and communicate. But what is the risk on specific journeys and transfers? When I booked the flight, the airline had recommended the relatively short transfer time in Dublin. My guess is that luggage missing the connecting flight on the schedule I was on is not that unusual – it only needs a delay of 30 minutes or more, and it seems your luggage is likely to miss the transfer. A 30-minute delay is not unusual, as we all know.

This is a process failure and it has a direct cost: the administration (forms, personnel entering data into a system, a help line, labelling), the IT (a specific system with customer access) and the transport (from the airport to my home). I would guess at US$200 minimum. This must easily wipe out the profit on the sale of my ticket (cost US$600). So this gives some idea of the frequency – it cannot be so high as to negate all the profit from selling tickets. It must come down to a cost-benefit analysis by the airline. Perhaps luggage misses this particular connecting flight 5% of the time, and they accept the direct cost. The benefit is that the shorter transfer time is preferred by customers and makes the overall travel time less. So far so good.
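The back-of-envelope sum above can be made explicit. Using the illustrative figures from this post (an assumed 5% miss rate and an estimated US$200 handling cost – both guesses, not airline data), the expected cost per ticket is small next to the fare:

```python
miss_rate = 0.05       # assumed probability luggage misses this connection
handling_cost = 200.0  # estimated direct cost of locating and delivering a bag (US$)
ticket_price = 600.0   # ticket price from the example (US$)

# Expected delayed-luggage cost built into each ticket sold
expected_cost = miss_rate * handling_cost
print(f"Expected cost per ticket: ${expected_cost:.2f}")  # -> $10.00
print(f"Share of ticket price: {100 * expected_cost / ticket_price:.1f}%")
```

At roughly $10 a ticket, it is easy to see why the airline simply absorbs the cost rather than lengthening the transfer time.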

But what about the cost of the 24 hours I spent without my luggage? That’s not factored into the cost-benefit analysis, I’m sure, because it’s not a cost the airline can quantify. Is my frustration enough to make me decide not to fly with that airline again? I heard recently of someone whose holiday was completely messed up by delayed luggage. They had travelled to a country planning to hire a car and drive to a neighbouring country the next day. But the airline said they could only deliver the delayed luggage within the country of arrival – and it would take 48 hours. The direct cost to the airline was fairly small, but the impact on the customer was significant.

So how about this for an idea. We’re in the information age and the data on delayed luggage must already be captured. When I go to book a flight with a short transfer time in future, I’d like to know the likelihood (based on past data) of my luggage not making the transfer. Instead of the airline being the only one to carry out the cost-benefit, I want in on the decision too – but based on data. If the risk looks small then I might decide to take it. As we all have our own tolerance for risk, we might make different decisions. But at least we are more in control that way rather than leaving it all to the airline. That would be empowerment.
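The estimate the booking process would need is simple to produce. A sketch, using a hypothetical record of past journeys on the same routing (each entry is an invented flight delay in minutes and whether the luggage made the transfer):

```python
# Hypothetical historical records for one routing and transfer time
history = [
    (5, True), (0, True), (45, False), (10, True),
    (35, False), (0, True), (20, True), (60, False),
]

# Fraction of past journeys where the luggage missed the connection
missed = sum(1 for _, made_it in history if not made_it)
risk = missed / len(history)
print(f"Estimated risk of luggage missing this transfer: {risk:.0%}")
```

With real data behind it, a figure like this shown at booking time would let each traveller apply their own risk tolerance.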

We can’t ensure everything always goes right. But we can use past performance to estimate risk and take our own decisions accordingly.

 

Photo : Kenneth Lu  license


Process Improvement: Let’s Automate Our Processes!

I came across an example of a process in need of improvement recently. Like you, I come across these pretty regularly in everyday life. But this one has an interesting twist…

I was applying for a service via a broker. The broker recommended a company, and he was excited because this company had a new process using electronic signatures. They had ‘automated the process’ rather than needing paper forms, snail mail etc. I was intrigued too and so was pleased to give it a go. The email that arrived was a little disconcerting: it warned that if I made any error in the electronic signature process, it was my fault and might invalidate the application. They would not check for accuracy. When I clicked on the link, there was a problem: the broker had entered my landline number into the system rather than my mobile number, and the phone number was needed to send an authentication text. So he attempted to correct that, and a new email arrived. When I clicked the link this time, it told me that “the envelope is being updated”. I had no idea what envelope it was talking about – a pretty useless error message. I wasn’t feeling great about this process improvement by now.

The broker said, “Let’s go back to the paper way then.” He emailed me a 16-page form that I had to complete and get signed by 4 different people in a particular order. It was a pretty challenging form that needed to be completed, scanned and emailed back. As I completed it, I did wonder just how many times there must be errors in completion (including, possibly, my own). There seemed to be hundreds of opportunities for error. It makes sense, I thought, to implement a process improvement and use electronic signatures – to ‘automate the process’. Where they had failed was clearly in the implementation: they had not trained the broker or given adequate instructions to the end user (me), and error messages using IT jargon were of no help.

It reminded me of an electronic filing system I saw implemented some years ago, where a company decided to ‘automate the process’ of filing. The IT Department was over the moon because they had implemented the system one week ahead of schedule. But no-one was actually using it: they hadn’t been trained, the roll-out had not been properly considered, and there was no thought about reinforcing behaviours or monitoring actual use. No change management considerations. A success for IT but a failure for the company!

Anyway, back to the story. After completing the good old paper-based process, I was talking some more with the broker and he said: “Their quote for you was good, but their application process is lousy. Other companies have a much easier way of doing it – for most of them, the broker completes the information on-line and then sends a two-page form via email to print, review, sign (once), scan and return. After that, a confirmation pack comes through and the consumer has the chance to correct errors at that stage. But it’s all assumed to be right at the start.” These companies had a simple and efficient process and no need to ‘automate the process’ with electronic signatures.

Hang on – why does the company I used need a 16-page form and 4 signatures, I hear you ask? Who knows! They had clearly recognised that their process needed improving but had headed down the route of ‘let’s automate it’. They could have saved themselves an awful lot of the cost of implementing their new improved process if they had talked to the broker about his experience first.

The lesson here is: don’t just take a bad process and try to ‘automate’ it with IT. Start by challenging the process. Why is it there? Does it have to be done that way? There might even be other companies out there with a slick process already – do you know how your competition solves the problem? Even more intriguingly, perhaps another industry has solved a similar problem in a clever way that you could copy. And if you discover that parts of a process are actually unnecessary and you can dramatically simplify it, then you’re mistake-proofing the process: taking out unnecessary steps means they can’t go wrong.

In my next post I will explore the confusion surrounding the term CAPA.

Breaking News – the broker just got back to me to tell me I had got one of the pages wrong on the 16-page form. This is definitely a process in need of improvement!

 
