The use and potential of artificial intelligence (AI) in eyecare is being hampered by a lack of access to quality clinical registry data, according to a team of Australian researchers.
AI is set to revolutionise the way doctors and patients interact with ophthalmic healthcare, with several published diagnostic AI models already boasting performance on par with eye specialists in the detection of diabetic retinopathy, macular degeneration, and glaucoma.
However, to be useful in the clinic, AI requires large volumes of real patient data to produce accurate predictions.
Researchers from the Save Sight Institute, The University of Sydney and Sydney Eye Hospital have written a systematic literature review – Artificial Intelligence and Ophthalmic Clinical Registries – published in the latest issue of The American Journal of Ophthalmology.
They identified clinical databases such as the Save Sight Registries as “one such treasure trove of high-volume training data that could power AI models of the future”.
But that potential was yet to be fully realised because of “limited applications of deep learning algorithms to clinical registry data” and a “lack of standardised validation methodology and heterogeneity of performance outcome reporting”.
That suggested “the application of AI to clinical registries is still in its infancy constrained by the poor accessibility of registry data and reflecting the need for a standardisation of methodology and greater involvement of domain experts in the future development of clinically deployable AI”.
As part of their research, Dr Luke Tran, Dr Himal Kandel, Dr Daliya Sari, Dr Christopher Chiu, and Professor Stephanie Watson OAM FARVO reviewed 23 original research articles comprising 14 unique registries and 46 individual applications of AI algorithms to registry data. There was a wide distribution of registries by condition captured and scope of coverage.
They found that most studies included some measure of internal validation, but no papers justified their choice of internal validation methodology and only six conducted any external validation.
“Through our literature search, we observed a distinctive lack of deep learning models being applied to clinical registry data and a wide variation in methods and validation measures suggesting that the potential of clinical registries to train AI models has not been fully realised,” says lead author, Dr Tran.
The researchers identified several barriers to rectifying this: the cost of storing large amounts of data; the difficulty of balancing the breadth of data collected with incentives that sustain its collection over time; the strict regulation of clinical registry data; and the lack of organisational infrastructure to promote collaboration between medical and machine learning experts.
“Considering these challenges, registries remain a largely untapped source of real-world data,” the report said.
It also concluded that “valuable research hours have been spent refining the methodology and creating models that may not have any prospects for deployment to real clinical settings”.
The researchers hope that these findings will help clinicians understand the current state of AI applied to ophthalmic clinical registries, the opportunities and barriers associated with mobilising registry data, and the need for early involvement of machine learning experts in the development of clinically deployable AI.