Co-creation, Convincing Evidence and Avoiding ‘Pilotitis’: Insights on the Applications of AI in Heart Failure

The Link
By: Guest contributor, Mon Oct 17 2022

The use of artificial intelligence (AI) in medicine is an ever-growing field, spanning from diagnostics to treatment regimens. But how is this impacting the field of heart failure? In this blog, we look at the key insights from our webinar: AI in heart failure.

Digital technologies and artificial intelligence are on the rise in healthcare and are increasingly used by those living with heart failure. There are already technologies available supporting more personalised and timely shared decision-making, earlier identification of problems, and an improved experience of care. However, AI still has much to prove in a clinical setting. In a field that is understandably cautious about adopting technologies without considerable evidence, how can AI be applied in a way that supports clinicians and patients, without creating additional burdens?

Our recent webinar on AI in heart failure brought together researchers in AI, biomedical engineering, heart failure, and pharmacology for a discussion that highlighted both the potential of AI and the challenges of implementing it in a clinical setting. Below, we pull out the key themes.

Collaboration, co-creation, and interdisciplinary work

While the webinar’s topics ranged widely, an idea that was raised in almost every area was the need for interdisciplinary work and co-creation with stakeholders.

Professor Dean Ho, Department Head of Biomedical Engineering at the National University of Singapore, said he wanted viewers to take away the idea of ‘interdisciplinary empathy’ – in other words, engineers, clinicians, nurses, social scientists, and others understanding how each other’s disciplines work and collaborating to develop new AI technologies that meet specific needs. All panellists considered this idea of co-creation vital to ensuring the adoption of new technologies.

Professor Ho emphasised the need to be patient-centric, doctor-centric, or pharmacy-centric, explaining that the people who will be using the technology must be involved from the start, both in identifying the need and in co-creating the solution. Professor Martin Cowie, Consultant Cardiologist at Royal Brompton Hospital and King's College London, agreed with this approach.

“Engineers often get super excited – [saying] ‘we can have 14 continuous streams of data from your implantable device and we can set alarms so that you can be told whenever your patient has a brief burst of atrial fibrillation’. But that is the last thing doctors want – to have 1,000 patients monitored remotely, all these alerts and alarms pinging off. So what do they do? They reach for the off button and they solve the problem.”

This led to the idea that AI should be applied where it makes sense, not just ‘because we can’. Professor Cowie emphasised that the technology should be there to help doctors make better decisions, not just more decisions.

The risk of ‘pilotitis’

The panel came back several times to the issue of so-called ‘pilotitis’ where digital healthcare technologies become stuck in the ‘pilot stage’ of development. Two of the panellists, Professor Dean Ho and Dr Matthias Egermark, Inaugural Executive Visiting Fellow at the National University of Singapore, have written a paper on the topic.

The paper identifies three key challenges to the adoption of digital technologies in healthcare, all of which were also raised in the discussion of AI in heart failure:

  • The current overreliance on big data approaches in clinical decision support systems and other areas of medical AI.
  • Insufficient scope and scale of clinical evidence generation across digital health technologies. 
  • Lack of economic incentives and funding structures for clinical implementation of evidence-based digital health.

In particular, the panellists discussed at length the challenges of identifying evidence-based solutions and convincing regulators that a new technology is acceptable for use in medicine.

“I think the problem for cardiology is its guidelines, which are very, very influential in changing cardiologist practice,” said Professor Cowie. “And they are really geared towards randomized trials. That doesn't fit [with digital technologies and AI] so you then find the guidelines are silent on this digital innovation or very lukewarm.”

There was also reference to the fact that many clinicians view AI as a ‘black box’. In other words, because they don’t know exactly how it works, they find the technology hard to trust. However, Professor Cowie felt this was the wrong outlook, arguing that clinicians need to be open-minded in their approach to AI.

Dr Egermark agreed, but also emphasised the need for clinicians to ‘be convinced by the evidence for the pragmatic use of it’.

Convincing clinicians and patients

Throughout the discussion, panellists returned to the idea that convincing clinicians and patients of the usefulness of a particular AI technology was one of the biggest hurdles to overcome. One of the solutions discussed, as outlined above, was the interdisciplinary development of technologies and co-creation. But ultimately, a lot came back to the idea that new AI technologies need to add genuine value for clinicians and meet unmet needs, rather than address a problem that doctors ‘didn’t even know existed’.

An example the panel returned to several times was the use of AI to improve the utility of implantable loop recorders (small devices that record the electrical activity of a patient’s heart). Loop recorders have been around for some time, but were known for generating many false-positive alerts and a lot of signal noise. However, applying machine learning algorithms to analyse the output of loop recorders has made them considerably more useful, and adoption of this particular AI technology has been rapid.
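The webinar did not go into technical detail here, but as a purely illustrative sketch of the general idea, the Python snippet below trains a toy classifier on synthetic heartbeat-interval features and uses it to decide whether a detected episode should trigger an alert or be suppressed as a likely false positive. The feature set, synthetic data, and alert threshold are all hypothetical and are not taken from any real loop-recorder system.

```python
# Illustrative only: a toy classifier deciding whether a loop-recorder
# "atrial fibrillation" detection is worth alerting a clinician about.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def episode_features(rr_intervals_ms):
    """Summarise an episode's R-R intervals (ms) as three simple features:
    mean interval, variability, and the fraction of beat-to-beat changes > 50 ms."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.abs(np.diff(rr))
    return [rr.mean(), rr.std(), (diffs > 50).mean()]

# Synthetic training set: 100 "true AF" episodes (irregular intervals)
# and 100 "false positive" episodes (regular rhythm plus noise).
true_af = [episode_features(rng.normal(700, 150, 60)) for _ in range(100)]
noise   = [episode_features(rng.normal(850, 30, 60)) for _ in range(100)]
X = np.array(true_af + noise)
y = np.array([1] * 100 + [0] * 100)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Score a new detected episode and only alert above a (hypothetical) threshold.
new_episode = episode_features(rng.normal(860, 25, 60))
p_af = model.predict_proba(np.array([new_episode]))[0, 1]
print("Alert clinician" if p_af > 0.8 else f"Suppress alert (p={p_af:.2f})")
```

In practice, production systems use far richer signal processing and validated models; the point of the sketch is simply that a trained classifier sitting between the raw detections and the clinician can filter out noise rather than forwarding every alert.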

Another example, raised by Professor Cowie, was that of ultrasonography. Until very recently, this could only be carried out by ultrasonographers with years of training – of whom there are too few for the number of patients requiring assessment. However, with AI now optimising ultrasound images from simpler devices, Professor Cowie believes it would be possible for people with less training to carry out initial assessments, leaving experienced ultrasonographers to focus on complex cases.

Ultimately, panellists were in agreement that while convincing clinicians and patients to adopt new technologies is difficult, AI will be part of a “toolbox” of solutions to support heart failure diagnosis and treatment in the future.

Professor Cowie very simply summed this up, quoting Dr Keith Horvath of the Association of American Medical Colleges: “AI is not going to replace physicians, but physicians who use AI are going to replace physicians who don’t.” 

More on AI in medicine and beyond

You can also find out more about how we support hospitals and other healthcare settings in keeping staff reliably informed about the latest and most effective advances in healthcare on our Corporate & Health pages.



Don't miss the latest news and blogs: subscribe to our Librarian Alerts today!


Author: Guest contributor

Guest Contributors for THE LINK include Springer Nature staff and authors, industry experts, society partners, and many others. If you are interested in being a Guest Contributor, please contact us via email.