Is the risk of modifying RBQM in GCP worth it?

At SCOPE Europe in Barcelona earlier this month, I took the opportunity to talk with people about the proposed changes to section 5.0 of ICH E6 on Quality Management. Most people seemed as confused as I was by some of the proposed changes. It’s great that we get an opportunity to review and comment on the proposal before it is made final. But it is guesswork trying to work out why some of the changes have been proposed.

ICH E6 R2 was adopted in 2016, and section 5.0 was one of the biggest changes to GCP in twenty years. Since then, organizations have been working on their adoption of it with much success. Predefined Quality Tolerance Limits (QTLs) are one area that has received much discussion in industry and been written about extensively. And I have listened to and personally led many discussions on the challenges of implementation (including through the long-running Cyntegrity mindsON RBQM series of workshops, which is nearing episode twenty this year!). So much time and effort has gone into implementing section 5.0, and much of it remains intact in the proposed revision, E6 R3. And there are some sensible changes being proposed.

But there are also some proposed changes that appear minor but might have quite an impact. I wonder whether the risk of making the change is actually worth the hoped-for benefit. An example of such a proposed change is the removal of the words “against existing risk controls” from section 5.0.3 – “The sponsor should evaluate the identified risks, against existing risk controls […]” We don’t know why these four words are proposed to be dropped in the revised guidance. But I believe dropping them could cause confusion. After all, if you don’t consider existing risk controls when evaluating a risk, then that risk will likely be evaluated as very high. For example, there may be an identified risk such as “If there are too many unevaluable lab samples then it may not be possible to draw a statistically valid conclusion on the primary endpoint.” Collection and analysis of lab samples is a normal activity in clinical trials, and there are plenty of existing risk controls: provision of dedicated lab kits, clear instructions, training, qualified personnel, specialised couriers, central labs, etc. If that risk is evaluated assuming none of the existing risk controls is in place, then I am sure it will come out as a high risk that should be controlled further. But maybe the existing risk controls are enough to bring the risk to an acceptable level without further risk controls. And there may be other risks that are more important to spend time and resource controlling.
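
To make that concrete, here is a minimal scoring sketch – my own illustration with made-up 1–5 scales and numbers, not anything from the guidance:

```python
# A minimal sketch (illustrative only, not from ICH E6) of why scoring a risk
# without considering its existing controls inflates the result.
def risk_score(likelihood: int, impact: int, detectability: int) -> int:
    """Simple 1-5 scales, higher = worse; score = likelihood x impact x detectability."""
    return likelihood * impact * detectability

# "Too many unevaluable lab samples" scored as if no controls existed
raw_score = risk_score(likelihood=4, impact=5, detectability=4)        # 80 - looks critical

# The same risk scored against existing controls (dedicated kits, training,
# couriers, central lab): likelihood and detectability are already much lower
residual_score = risk_score(likelihood=2, impact=5, detectability=2)   # 20 - may be acceptable

print(raw_score, residual_score)
```

Evaluated without its existing controls, the lab-sample risk looks like one of the worst on the register; evaluated against them, it may already be at an acceptable level, freeing effort for risks that genuinely need further controls.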

We don’t know why the removal of these four words has been proposed, and there may be very sound reasons for it. As someone experienced in helping organizations implement RBQM, and as an educator and trainer, however, the rationale is not clear to me. And I worry that a seemingly simple change like this may actually cause more industry confusion. It may divert time and resources from the real work of risk management into process, system, and SOP updates. It may delay still further some of the laggards in implementing Risk-Based Quality Management (RBQM). Delaying implementation is bad for everyone, but particularly for patients. They can end up on trials where risks are higher than they need to be, and they may not get access to new drugs as quickly because trials fail operationally (their risks not having been properly controlled).

So my question is, is the risk of modifying RBQM in GCP worth it?

The deadline for comments on the draft of ICH E6 R3 has now passed. The guidance is currently planned for adoption in October 2024. I’ll be presenting my thoughts on the proposed changes at SCOPE in Florida in February.

Text: © 2023 Dorricott Metrics & Process Improvement Ltd. All rights reserved.

Picture: Neil Watkins

Enough is enough! Can’t we just accept the risk?

I attended SCOPE Europe 2022 in Barcelona recently. And there were some fascinating presentations and discussions in the RBQM track. One that really got me thinking was Anna Grudnicka’s on risk acceptance. When risks are identified and evaluated as part of RBQM, the focus of the team should move to how to reduce the overall risk to trial participants and to the ability to draw accurate conclusions from the trial. Typically, the team takes each risk, starting with those that score highest, and decides how to reduce the scores. To reduce the risk scores (“control the risk”), they can try to make the risk less likely to occur, to reduce the impact if it does occur (a contingency), or to improve the detection of the risk (with a KRI, for example). It is unusual for there to be no existing controls for a risk. Clinical trials are not new, after all, and we already have SOPs, training, systems, monitoring, data review, etc. There are many ways we try to control existing risks. In her presentation, Anna was making the point that sometimes it may be the right thing to actually accept a risk without adding further controls. She described how at AstraZeneca they can estimate the programming cost for an additional Key Risk Indicator (a detection method) and use this to help decide whether or not to implement the additional risk control.

Indeed, the decision on whether to add further controls is always a balance. What is the potential cost of those controls? And what is the potential benefit? Thinking of a non-clinical trial example, there are many level crossings in the UK. This is where a train line crosses a road at the same level. Some of these level crossings have no gates – only flashing lights. A better control would be to have gates that stop vehicles going onto the track as a train approaches. But even better would be a bridge. But, of course, these all have different costs and it isn’t practical to have a bridge to replace every level crossing. So most level crossings have barriers. But for less well-used crossings, where the likelihood of collision is lower, the flashing light version is considered to be enough and the risk is accepted. The balance of cost and benefit means the additional cost of barriers is not considered worth it for the potential gain.

So, when deciding whether to add further controls, you should consider the cost of those controls and the potential benefits. Neither side of the equation may be that easy to determine – but I suspect the cost is the easier of the two. We could estimate the cost of additional training or programming and monitoring of a new KRI. But how do we determine the benefit of the additional control? In the absence of data, this is always going to be a judgement.
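
To make the balance concrete, here is a minimal sketch of the kind of comparison involved – every figure below is an illustrative placeholder, not data from Anna’s presentation or from any trial:

```python
# Illustrative cost/benefit sketch for adding one extra risk control (e.g. a new KRI).
def expected_loss(probability: float, impact_cost: float) -> float:
    """Expected cost of the risk materialising over the life of the trial."""
    return probability * impact_cost

p_without_control = 0.30   # judged likelihood of the issue with only existing controls
p_with_control = 0.10      # judged likelihood if the extra KRI is added
impact = 200_000           # estimated cost if the issue occurs (rework, delay, lost data)
control_cost = 15_000      # programming plus ongoing review of the additional KRI

benefit = expected_loss(p_without_control, impact) - expected_loss(p_with_control, impact)
print(f"Expected benefit: {benefit:,.0f} vs control cost: {control_cost:,.0f}")
if benefit > control_cost:
    print("The extra control looks worth it.")
else:
    print("Accepting the risk, with its existing controls, may be the better choice.")
```

The hard part, as noted above, is the benefit side: the probabilities and the impact are judgements, not measurements.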

The important thing to remember is that not all risks on your risk register need to have additional controls. Make sure the controls you add are as cost-effective as possible and meet the goal of reducing the overall risk to trial participants and the ability to draw accurate conclusions from the trial.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – © Walter Baxter CC2.0

Don’t let metrics distract you from the end goal!

We all know the fable of the tortoise and the hare. The tortoise won the race by taking things at a steady pace and keeping the end in sight, while the hare rushed off and took its eye off the goal. Metrics, and how they are used, can drive the behaviours we want – but also behaviours that mean people take their eye off the end goal. As is often said, what gets measured gets managed – and we all know metrics can influence behaviour. When metrics are well designed, focused on answering important questions, and accompanied by targets that make clear to a team what is important, they can really help focus efforts. If the target rejection rate for documents submitted to the TMF is no greater than 5% but the rate is tracking well above that, effort can be focused on understanding why. Maybe there are particular errors, such as missing signatures, or a particular document type that is regularly rejected. If a team can get to the root causes, they can implement solutions to improve the process and see the metric improve. That is good news – metrics can be a great tool to empower teams: empowering them to understand how the process is performing and where to focus their effort for improvement. With an improved, more efficient process with fewer errors, the end goal of a contemporaneous, high-quality, complete TMF is more likely to be achieved.
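
As a minimal illustration (the reasons and counts below are invented), simply breaking the rejections down by reason is often enough to show a team where to focus:

```python
from collections import Counter

# Made-up QC log for illustration - in practice this would come straight from
# the eTMF system's rejection records.
rejections = (["missing signature"] * 42 + ["wrong document type"] * 25 +
              ["poor scan quality"] * 9 + ["wrong study or site"] * 4)
documents_submitted = 900

rejection_rate = len(rejections) / documents_submitted
print(f"Rejection rate: {rejection_rate:.1%} (target: no more than 5%)")
for reason, count in Counter(rejections).most_common():
    print(f"  {reason}: {count} ({count / len(rejections):.0%} of rejections)")
```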

But what if metrics and their associated targets are used for reward or punishment? We see this happen when metrics are used for personal performance goals. People will focus on those metrics to make sure they meet the targets – at almost any cost! If individuals are told they must meet a target of less than 5% for documents rejected when submitted to the TMF, they will meet it. But they may bend the process and add inefficiency in doing so. For example, they may decide only to submit the documents they know will be accepted and leave the others to be sorted out when they have more time. Or they may avoid submitting documents at all. Or perhaps they might ask a friend to review the documents first. Whatever the approach, it is likely to disrupt the smooth flow of documents into the TMF by causing bottlenecks. And these workarounds happen ‘outside’ the documented process – sometimes termed the ‘hidden factory’. Now the metric is measuring a process whose details we no longer fully know – it is different from the SOP. The process has not been improved, but rather made worse. And the more complex process is liable to lead to a TMF that is no longer contemporaneous and may be incomplete. But the metric has met its target. The rush to focus on the metric to the exclusion of the end goal has made things worse.

And so, whilst it is good news that in the adopted ICH E8 R1, there is a section (3.3.1) encouraging “the establishment of a culture that supports open dialogue” and critical thinking, it is a shame that the following section in the draft did not make it into the final version:

“Choose quality measures and performance indicators that are aligned with a proactive approach to design. For example, an overemphasis on minimising the time to first patient enrolled may result in devoting too little time to identifying and preventing errors that matter through careful design.”

There is no mention of performance indicators in the final version, nor of the rather good example of a metric that is likely to drive the wrong behaviour – time to first patient enrolled. What is the value in racing to get the first patient enrolled if the next patient isn’t enrolled for months? Or if a protocol amendment ends up being delayed, leading to an overall delay in completing the trial? More haste, less speed.

It can be true that what gets measured gets managed – but it will only be managed well when a team is truly empowered to own the metrics, the targets, and the understanding and improvement of the process. We have to move away from command and control to supporting and trusting teams to own their processes and associated metrics, and to make improvements where needed. We have to be brave enough to allow proper planning and risk assessment and control to take place before rushing to get to first patient. Let’s use metrics thoughtfully to help us on the journey and make sure we keep our focus on the end goal.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – openclipart.org

And Now For Some Good News

It feels as though we need a good news story at the moment. And I was reading recently about the incredible success of the human papillomavirus (HPV) vaccine. It really is yet another amazing example of the power of science. HPV is a large group of viruses that are common in humans but normally do not cause any problems. A small number of them, though, can lead to cancers and are deemed “high risk”. Harald zur Hausen isolated HPV strains in cervical cancer tumours back in the 1980s and theorised that the cancer was caused by HPV. This was subsequently proved right: in fact, we now think 99.7% of cervical cancers are caused by persistent HPV infection. This understanding, along with vaccine technology, led to the development of these amazing vaccines, which are, incredibly, as much as 99% effective against the high-risk virus strains. And the results speak for themselves, as you can see in the graphic above. It shows the percentage of women diagnosed with cervical cancer at age 20, by year of birth, and how the numbers have dropped dramatically as vaccination rates have increased. zur Hausen won the Nobel Prize in Medicine for this fundamental work, which has had such an impact on human health.

What particularly intrigued me about this story is that here in the UK there has been public concern that the frequency of testing for cervical cancer (via the “smear test”) is being reduced – in Wales specifically. The concern is that this is about reducing the cost of the screening programme. The reduction in frequency from 3 to 5 years is scientifically supported, however, because the test has changed. In the past, the test involved taking a smear and then looking for cancerous cells through a microscope. This test had various problems. First, the smear may not have come from a cancerous part of the cervix. Second, because it relies on a human looking through a microscope, an early-stage cancerous cell might be missed.

The new test, though, looks for the high-risk HPV strains. If HPV is present, it will be throughout the cervix and so will be detected regardless of where the sample is taken from. And it doesn’t involve a human looking through a microscope. But there is an added, huge benefit. Detecting a high-risk HPV strain doesn’t mean there is cancer – it is a risk factor. And so further screening can take place if this test is positive. This means that cancer can be detected at an earlier stage. Because the new test is so much better, and detects earlier, there is more time to act. Cervical cancer normally develops slowly.

In Risk-Based Quality Management (RBQM) in clinical trials, we identify risks, evaluate them, and then try to reduce the highest risks to the success of the trial (in terms of patient safety and the integrity of the trial results). One way to reduce a risk is to put a measurement in place. People I work with often struggle with how to assess the effectiveness of a risk measurement, but I think this cervical cancer testing gives an excellent example. The existing test (with the microscope) can detect the early stages of cancer. But the newer test detects the risk of a cancer – it acts earlier in the cancer’s development cycle, and so gives more time to act. And because of that, the test frequency is being reduced. The best measurements for risk provide plenty of time to take action in order to reduce the impact – in this case, cervical cancer.

This example also demonstrates another point. That understanding the process (the cause and effect) means that you can control the process better. In this case by both eliminating the cause (via the HPV vaccine) and improving the measurement of the risk of cancer (via the test for high risk HPV strains). Process improvement always starts with process understanding.

Vaccines have been in our minds rather more than usual over the last couple of years. It is sobering to think of the number of lives they have saved since their discovery in 1796 by Edward Jenner.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Vaccine Knowledge Project https://vk.ovg.ox.ac.uk/vk/hpv-vaccine

Are we seeing a breakthrough in clinical trial efficiency?

I joined my first CRO as an “International Black Belt” in 2005. Having come from a forward-thinking manufacturer that had been implementing Six Sigma and lean philosophy, I was dumbfounded by what I saw. After the first few weeks, I mentioned to a colleague that most of what seemed to happen in clinical trials was about checking, because the process could not be relied on to be right the first time. Manufacturing learned in the 1980s and 1990s that checking (or “inspection”, as they call it) is costly, inefficient, and ineffective. This colleague recently repeated this back to me. We’ve all seen the examples in clinical trials – TMF documents being checked before sending, checked on receipt, then checked during regular QCs; reports going through endless rounds of review; data queries being raised for items that can have no impact on trial results or patients. When challenged, often the response is that we’ve always done it that way. Or that QA, or the regulators, tell us we have to do it that way. I’ve spent my career in clinical trials trying to get people to focus on the process:

    • What is the purpose?
    • What are the inputs and outputs?
    • What is the most efficient way to get from one to the other?
    • How can we measure the process and use the measurement to continuously improve?
    • What is the perspective of the “customers” of the process?
    • What should we do when a process goes wrong?

And I’ve had a number of successes along the way – the most satisfying of which is when someone has an “Aha!” moment and takes the ideas and runs with them. Mapping a process themselves to highlight where there are opportunities to improve, for example. But I do often wonder why it is so difficult to get the industry to make the significant changes that we all know it needs. Process improvement should not be seen as an optional extra. It is a necessity to stay in business. It seems unfair to blame regulators who have been pushing us along to be process focused – for example with the introduction of Quality Tolerance Limits in GCP in 2016.

COVID-19 has caused so much loss of life and impacted everybody’s lives. It has been hugely to the detriment of the people of the world. And yet, there are some positives too. In clinical trials, suddenly, people are starting to ask “how can we make this change?” rather than “why can’t we make this change?” At meetings of the Metrics Champion Consortium we have heard stories of cycle times for developing a protocol that were thought impossible, for example; of a company that switched from 100% Source Document Verification to 0% after reviewing evidence of the ineffectiveness of the process; and of companies implementing remote and centralized monitoring in record time. There are some great examples from the COVID-19 RECOVERY trial in the UK. And, at the same time, pharmaceuticals and the associated clinical trials are seen as critical to helping us turn the corner of the pandemic.

Let’s hope this new-found momentum to improve continues in our industry when this pandemic is finally declared over. Then we can bring new therapies to patients much more quickly in the future – at lower cost and with quality and safety as high as or higher than in the past. We are showing what’s possible. Let’s continue to challenge each other on the assumption that because we’ve always done things one way, we have to continue doing them that way.

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – Gerd Altmann, Needpix.com

Oh No – Not Another Audit!

It has always intrigued me, this fear of the auditor. Note that I am separating the auditor from the (regulatory) inspector here. Our industry has had an over-reliance on auditing for quality rather than on building our processes to ensure quality right the first time. The Quality Management section of ICH E6 (R2) is a much-needed change in approach. And this has been reinforced by the draft ICH E8 (R1): “Quality should rely on good design and its execution rather than overreliance on retrospective document checking, monitoring, auditing or inspection”. The fear of the auditor has led to some very odd approaches.

The Trial Master File (TMF) is a case in point. I seem to have become involved with TMF issues and improving TMF processes a number of times in CROs, and more recently have helped facilitate the Metrics Champion Consortium TMF Metrics Work Group. The idea of an inspection-ready TMF at all times comes around fairly often. But to me, that misses the point. An inspection-ready (or audit-ready) TMF is a by-product of the TMF processes working well – not an aim in itself. We should be asking: what is the TMF for? The TMF is there to help in the running of the trial (as well as to document it, so that it can be demonstrated that processes, GCP, etc. were followed). It should not be an archive gathering dust until an audit or inspection is announced, when a mad panic ensues to make sure the TMF is inspection-ready. It should be in use all the time – a fundamental source of information for the study team. Used this way, gaps, misfiles, etc. will be noticed and corrected on an ongoing basis. If the TMF is being used correctly, there shouldn’t be significant audit findings. Of course, process and monitoring (via metrics) need to be set up around this to make sure it works. This is process thinking.

And then there are those processes that I expect we have all come across. No-one quite understands why there are so many convoluted steps. Then you discover that at some point in the past there was an audit, and to close the audit finding (or CAPA), additional steps were added. No-one knows the point of the additional steps any more, but they are sure they must be needed. One example I have seen was a large quantity of documents being photocopied before being sent to another department. This was done because documents had got lost on one occasion and an audit had discovered this. So now someone spent 20% of their day photocopying documents in case they got lost in transit. Not a good use of time, and not good for the environment. Better to redesign the process and then consider the risk. How often do documents get lost en route? Why? What is the consequence? Are some more critical than others? Etc. Adding the additional step to the process in response to an audit finding was the easiest thing to do (like adding a QC step). But it was the least efficient response.

I wonder if part of the issue is that some auditors appear to push their own solution too hard. The process owner is the person who understands the process best. It is their responsibility to demonstrate they understand the audit findings, to challenge where necessary, and to argue for the actions they think will address the real issues. They should focus on the ‘why’ of the process.

Audit findings can be used to guide you in improving the process to take out risk and make it more efficient. Root cause analysis, of course, can help you with the why for particular parts of the process. And again, understanding the why helps you to determine much better actions to help prevent recurrence of issues.

Audits take time, and we would rather be focusing on the real work. But they also provide a valuable perspective from outside our organisation. We should welcome audits and use the input provided by people who are neutral to our processes to help us think, understand the why and make improvements in quality and efficiency. Let’s welcome the auditor!

 

Image: Pixabay

Text: © 2019 Dorricott MPI Ltd. All rights reserved.

Hurry Up and Think Critically!

At recent conferences I’ve attended and presented at, the topic of critical thinking has come up. At the MCC Summit, there was consternation that apparently some senior leaders think the progress in Artificial Intelligence will negate the need for critical thinking. No-one at the conference agreed with those senior leaders. And at the Institute for Clinical Research “Risky Business Forum”, everyone agreed on the importance of fostering critical thinking skills. We need people to take a step back and think about future issues (risks) rather than just the pressing issues of the day. Most people (except those senior leaders) would agree we need more people developing and using critical thinking skills in their day-to-day work. We need to teach people to think critically and not “spoon-feed” them the answers with checklists. But there’s much more to this than tools and techniques. How great to see, then, in the draft revision of ICH E8: “Create a culture that values and rewards critical thinking and open dialogue about quality and that goes beyond sole reliance on tools and checklists.” And that culture needs to include making sure people have time to think critically.

Think of those Clinical Research Associates on their monitoring visits to sites. At a CRO it’s fairly common to expect them to be 95% utilized. This leaves only 5% of their contracted time for all the other “stuff” – the training, the 1:1s, the departmental meetings, the reading of SOPs, etc. Do people in this situation have time to think? Are they able and willing to take the time to follow up on leads and hunches? As I’ve mentioned previously, root cause analysis needs critical thinking. And it needs time. If you are pressured to come up with results now, you will focus on containing the issue so you can rush on to the next one. You’ll make sure the site staff review their lab reports and mark clinical significance – but you won’t have time to understand why they didn’t do that in the first place. You will not learn the root cause(s) and will not be able to stop the issue from recurring. The opportunity to learn is lost. This is relevant in other areas too, such as risk identification, evaluation and control. With limited time for risk assessment on a study, would you be tempted to start with a list from another study, have a quick look over it, and move on quickly to the next task? You would know it wasn’t a good job – but hopefully it was good enough.

Even worse, some organizations, in effect, punish those thinking critically. If you can see a way of improving the process, of reducing the likelihood of a particular issue recurring, what should you do? Some organizations make it a labyrinthine process to make the change. You might have to go off to QA and complete a form requesting a change to an SOP. And hope it gets to the right person – who has time to think about it and consider the change. And how should you know about the form? You should have read the SOP on SOP updates in your 5% of non-utilized time!

Organizations continue to put pressure on employees to work harder and harder. It is unreasonable to expect employees to perform tasks needing critical thinking well without allowing them the time to do so.

Do you and others around you have time to think critically?

 

Text: © 2019 DMPI Ltd. All rights reserved. (With thanks to Steve Young for the post title)

Picture: Rodin – The Thinker (Andrew Horne)

What My Model of eTMF Processing Taught Me (Part II)

In a previous post, I described a model I built for 100% QC of documents as part of an eTMF process. We took a look at the impact of the rejection rate for documents jumping from 10% to 15%. It was not good! So, what happens when an audit is announced and suddenly the number of documents submitted doubles? In the graph below, weeks 5 and 6 had double the number of documents. Look what it does to the inventory and cycle time:

The cycle time has shot up to around 21 days after 20 weeks. The additional documents have simply added to the backlog and that increases the cycle time because we are using First In, First Out.
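
The arithmetic behind this is simple enough to sketch – using the assumptions of the model (roughly 1,000 new documents and 1,100 documents of QC capacity per week, with around 10% rejected and reworked), not real TMF data:

```python
# Rough, back-of-envelope arithmetic only, based on the model's assumptions.
capacity = 1100
normal_load = 1000 * 1.10        # ~1,100 documents of QC work in a normal week
audit_load = 2000 * 1.10         # ~2,200 documents of QC work in each doubled week

backlog_added = (audit_load - capacity) * 2      # two doubled weeks -> ~2,200 documents
spare_capacity = capacity - normal_load          # ~0: nothing left over to clear it with
extra_delay_days = backlog_added / capacity * 7  # ~14 extra days of queueing
print(backlog_added, spare_capacity, extra_delay_days)
```

Two doubled weeks leave a backlog of roughly 2,200 documents that the team has essentially no spare capacity to clear, which is where most of the extra two-plus weeks of cycle time comes from.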

So what do we learn overall from the model? In a system like this, with 100% QC, it is very easy to turn a potential bottleneck into an actual bottleneck. And when that happens, the inventory and cycle time will quickly shoot upwards unless additional resource is added (e.g. overtime). But, you might ask, do we really care about cycle time? We definitely should: if the study team can’t access documents until they have gone through the QC, those documents are not available for 21 days on average. That’s not going to encourage everyday use of the TMF to review documents (as the regulators expect). And might members of the study team send in duplicates because they can’t see the documents that are awaiting processing – adding further documents and pushing inventory and cycle time up still further? And this is not a worst-case scenario, as I’m only modelling one TMF here – typically a Central Files group will be managing many TMFs and may be prioritizing one over another (i.e. not First In, First Out). This spreads out the distribution of cycle times and will lead to many more documents that are severely delayed in processing.

“But we need 100% QC of documents because the TMF is important!” I hear you shout. But do you really? As the great W Edwards Deming said, “Inspection is too late. The quality, good or bad, is already in the product.” Let’s get quality built in in the first place. You should start by looking at that 15% rejection rate. What on earth is going on to get a rejection rate like that? What are those rejections? Are those carrying out the QC doing so consistently? Do those uploading documents know the criteria? Is there anyone uploading documents who gets it right every time? If so, what is it that they do differently to others?

What if you could get the rejection rate down to less than 1%? At what point would you be comfortable taking a risk-based approach – one that assumes those uploading documents do it right the first time – and carrying out a random QC to look for systemic issues that could then be tackled? How much more efficient this would be. See the diagram in this post. And you’d remove that self-imposed bottleneck. You’d get documents in much quicker, costing less and with improved quality. ICH E6 (R2) asks us to think of quality not as 100% checking but as concerning ourselves with the errors that matter. Are we brave enough as an industry to apply this to the TMF?
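
For what it’s worth, the mechanics of the risk-based alternative are not complicated. Here is a minimal sketch – my own illustration, not a validated sampling plan – of a random QC that looks for systemic issues rather than checking every document:

```python
import random

def sample_qc(documents, check, sample_fraction=0.05, alert_threshold=0.01, seed=None):
    """QC a random sample of submissions; flag if the sampled error rate
    suggests a systemic issue worth investigating."""
    rng = random.Random(seed)
    n = max(1, int(len(documents) * sample_fraction))
    sample = rng.sample(documents, n)
    error_rate = sum(1 for doc in sample if not check(doc)) / n
    return error_rate, error_rate > alert_threshold

# Toy usage: documents as dicts, with "check" verifying whichever criteria matter.
docs = [{"id": i, "signed": i % 150 != 0} for i in range(1000)]   # a few missing signatures
rate, investigate = sample_qc(docs, check=lambda d: d["signed"], sample_fraction=0.10, seed=1)
print(f"Sampled error rate: {rate:.1%}; investigate further: {investigate}")
```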

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

What My Model of eTMF Processing Taught Me

On a recent long-haul flight, I got to thinking about the processing of TMF documents. Many organisations and eTMF systems seem to approach TMF documents with the idea that every one must be checked by someone other than the document owner. Sometimes, the document owner doesn’t even upload their own documents but provides them, along with metadata, to someone else to upload and index. And then their work is checked. There are an awful lot of documents in the TMF, and going through multiple steps of QC (or inspection, as W Edwards Deming would call it) seems rather inefficient – see my previous posts. But we are a risk-averse industry – even having been given guidance to use risk-based approaches in ICH E6 (R2) – and so many organizations seem to stick with this approach.

So what is the implication of 100% QC? I decided I would model it in an Excel spreadsheet. My assumptions are that there are 1000 documents submitted per week. Each document requires one round of QC. The staff in Central Files can process up to 1100 documents per week. I’ve included a random +/-5% on these numbers for each week (real variation is much greater than this, I realise). I assume 10% of documents are rejected at QC, and that when rejected, the updated documents are processed the next week. I’ve assumed First In, First Out for processing. My model looks at the inventory at the end of each week and the average cycle time for processing. It looks like this:

It’s looking reasonably well in control. The cycle time hovers around 3 days after 20 weeks which seems pretty good. If you had a process for TMF like this, you’d probably be feeling pretty pleased.

So what happens if the rejection rate is 15% rather than 10%?

Not so good! It’s interesting just how sensitive the system is to the rejection rate. This is clearly not a process in control any more and both inventory and cycle time are heading upwards. After 20 weeks, the average cycle time sits around 10 days.
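
For anyone who wants to experiment, here is a rough Python sketch of the same kind of week-by-week model. The original was an Excel spreadsheet, so the exact figures will differ, but the behaviour is the same: broadly in control at a 10% rejection rate, inventory and cycle time climbing at 15%.

```python
import random

def simulate(weeks=20, weekly_submissions=1000, weekly_capacity=1100,
             rejection_rate=0.10, noise=0.05, seed=1):
    """Week-by-week FIFO model of 100% QC of TMF documents.

    Rejected documents are reworked and re-enter processing the following week,
    keeping their original submission week so that cycle time reflects total
    elapsed time. Returns end-of-week inventory and the average cycle time
    (in days) of the documents accepted each week.
    """
    rng = random.Random(seed)
    queue = []      # FIFO list of original submission weeks
    rework = []     # documents rejected this week, back in the queue next week
    inventory, avg_cycle_days = [], []

    for week in range(1, weeks + 1):
        queue = rework + queue          # reworked documents are handled first
        rework = []
        new_docs = round(weekly_submissions * rng.uniform(1 - noise, 1 + noise))
        queue.extend([week] * new_docs)

        capacity = round(weekly_capacity * rng.uniform(1 - noise, 1 + noise))
        processed, queue = queue[:capacity], queue[capacity:]

        accepted = []
        for submitted in processed:
            if rng.random() < rejection_rate:
                rework.append(submitted)
            else:                        # +3.5 days is a crude mid-week adjustment
                accepted.append((week - submitted) * 7 + 3.5)

        inventory.append(len(queue) + len(rework))
        avg_cycle_days.append(sum(accepted) / len(accepted) if accepted else float("nan"))
    return inventory, avg_cycle_days

# Base case versus the 15% rejection scenario
for rate in (0.10, 0.15):
    inv, cycle = simulate(rejection_rate=rate)
    print(f"rejection {rate:.0%}: week-20 inventory {inv[-1]}, cycle time ~{cycle[-1]:.1f} days")
```

The reason the system is so sensitive is simple: at a 10% rejection rate the rework pushes the effective workload to almost exactly the 1,100-document capacity, while at 15% it exceeds it, so the backlog (and, with FIFO, the cycle time) can only grow.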

Having every document go through a QC like this forms a real constraint on the system – a potential bottleneck in terms of the Theory of Constraints. And it’s really easy to turn this potential bottleneck into a real bottleneck. And a bottleneck in a process leads to regular urgent requests, frustration and burn-out. Sound familiar?

In my next post, I’ll take a look at what happens when an audit is announced and the volume of documents to be processed jumps for a couple of weeks.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

Wearables, Virtual Trials, Yes. But What About the Basics?

I was lucky enough to be presenting at and attending the SCOPE Europe conference recently. It started with some fascinating presentations and discussion on wearables and virtual trials. We all know technology is moving fast, and some of the potential impacts in clinical trials are phenomenal. There was also a presentation by an extraordinary woman – Victoria Abbott-Fleming. Having been diagnosed with Complex Regional Pain Syndrome, and having found it difficult to obtain information from her health professionals in the NHS, she started her own charity for sufferers of the condition (Burning Nights CRPS). Talking with Victoria and her husband, it was shocking to hear of the daily challenges and prejudices she encounters through insensitive actions and comments because she is young and confined to a wheelchair. On top of this she has taken on an activist role, trying to cajole the NHS and government into providing the support she and others like her need.

Victoria was presenting on the challenges patients face in getting onto a clinical trial. And it really makes you wonder how we can improve patient access. Often it is a real challenge to find out about, understand and access clinical trials. Victoria herself has wanted to go on a clinical trial for 15 years but has not managed it – if you’re not being treated by a physician who participates in clinical trials, your opportunities are limited. She has discovered clinical trials, but too late to actually participate. When we talk about being patient-centred, this should be a clear concern. TJ Sharpe also speaks powerfully on this topic from a patient perspective.

Of course, wearables and virtual trials might hold some of the answers to including more patients in clinical trials, but you can’t help thinking something is wrong at a basic level if we can’t match up patients desperately wanting to participate in a clinical trial with trials that are actually available.

The charity Victoria founded: Burning Nights CRPS.

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.