

Image recognition artificial intelligence (AI) is here to stay and has the potential to transform medical diagnostics by prioritising urgent cases and identifying disease at an early stage. As the technology has developed, it has become clear that the ability to identify a broad range of pathologies is important, which means the algorithms need training on ever-increasing numbers of patients.

Detecting multiple pathologies requires detailed reporting on thousands, even millions, of images: a huge amount of data needs to be collected in order for programmes to ‘learn’ and improve their functionality.

At the same time, there are questions around how diagnostic AI and clinicians should work together. Who is liable for a wrong diagnosis? How do medical indemnity providers assess liability risk? How is the use of these tools going to be regulated?

What about patient data? Data protection laws insist that patient data is anonymised, but the risk of a breach releasing identifiable data is always there, along with the civil litigation that accompanies such events.

The UK/EU data protection regime treats health-related data as a “special category” - in other words, a more rigorous set of rules is applied to the way that data is collected, managed and stored.

Patients must be informed of what their data is being used for and how long it will be kept. Patients may also withdraw consent and even insist that their data is destroyed. In reality, the risks remain the same; it is only the volume of data that is changing.

The reputational damage that a data leak could cause is very real, however, and the fallout from a material breach could cripple a tech company at a critical stage of its research.

To mitigate the risk, AI algorithms are given access only to the medical imaging data; patient details, including condition and outcome, remain unknown during the analytical process, which further limits the technology's application.
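The kind of data minimisation described here can be sketched in a few lines. This is a minimal illustration only, not a production de-identification pipeline: the field names and the salted-hash linking scheme are assumptions for the example, and real imaging systems work on standardised formats such as DICOM under far more rigorous controls.

```python
import hashlib

# Hypothetical identifying fields, for illustration only; real imaging
# records (e.g. DICOM) carry identifiers in standardised metadata tags.
IDENTIFYING_FIELDS = {"name", "nhs_number", "date_of_birth", "address"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Strip direct identifiers, replacing the patient identifier with a
    salted one-way hash so scans from the same patient can still be
    linked without revealing who the patient is."""
    token = hashlib.sha256((salt + record["nhs_number"]).encode()).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["patient_token"] = token
    return cleaned

record = {
    "name": "Jane Doe",
    "nhs_number": "000 000 0000",
    "date_of_birth": "1970-01-01",
    "address": "1 Example Street",
    "image_path": "scan_0001.png",
    "modality": "CT",
}

clean = pseudonymise(record, salt="per-study-secret")
# The algorithm now sees only the image reference, the modality and an
# opaque token - none of the direct identifiers survive.
```

The salt here stands in for a per-study secret held separately from the imaging data, so that the token cannot be reversed by anyone who only holds the training set.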

Ultimately, AI should be helping doctors decide on treatment strategies, but to do that the algorithm will need much more access to patient health records, clinical trial results and drug data. That means a regulatory framework and a clearer pathway through patient consent, privacy laws and data security.

If developers go down this route (and it is almost inevitable that they will), more thought needs to be given to the nature of patient consent. How will patients respond to being part of a study, or to receiving an AI-generated diagnosis rather than a human one? They are entitled to know whether the AI is safe and effective, or still at an early stage in its development. In the absence of that, the risk of legal recourse, if the outcome is not as they would wish, is pretty much guaranteed.

We don't need AI to tell us that!