Australia's Leading Ophthalmic Magazine Since 1975


Decoding artificial intelligence in eyecare

30/08/2019 By Myles Hume
Artificial intelligence is poised to have a transformative effect on eyecare, but also raises a host of legal and ethical issues. MYLES HUME examines how the sector is carefully embracing a new technological era.

The proliferation of artificial intelligence (AI) in healthcare is a concept many are considering with equal measures of caution and enthusiasm.

This is particularly relevant to Australian eyecare – one of the few sectors at the vanguard of AI-assisted care – where systems with the inbuilt ability to continuously improve are beginning to prompt a sea-change in screening and diagnostic capabilities.

AI software can in some cases now detect and grade common ocular diseases such as diabetic retinopathy with an accuracy of 97%, and systems are now sufficiently sophisticated to ascertain the gender and age of patients through the analysis of retinal images – a task not even the most qualified human observer is capable of.

While AI’s medical diagnostic capabilities fit firmly in the realm of preventative healthcare, the technology offers other practical solutions that could bring about significant cost and time savings by eliminating much of the manual work involved in optometry and ophthalmology.

As AI’s role in healthcare continues to progress, expand and solidify, the spotlight has unsurprisingly begun to focus on developing a new regulatory environment that can appropriately control its integration.

On the surface, tightened regulation will ensure AI-assisted healthcare adopts safe and ethical practices to protect patients. However, its potential value for AI developers and adopters is perhaps understated; clear rules and guidelines significantly reduce risk exposure, can help secure a competitive advantage on the world stage and if mistakes happen, ensure adverse events are subjected to fair scrutiny and due process.

For legislators, the challenge is multi-faceted. How does one regulate the ever-moving target that is AI? And, more importantly, how does one strike the balance between safeguarding the Australian public, while creating an environment that allows developers the freedom to explore and compete towards providing the best possible healthcare?

The state of play

Debate to address ethical and legal uncertainties surrounding AI-assisted healthcare is already under way in Australia, with regulators recently announcing their intent to reshape and tighten outdated laws for what they term ‘software as a medical device (SaMD)’.

The Therapeutic Goods Administration (TGA) in February published a consultation paper titled: Regulation of software, including Software as a Medical Device (SaMD), which outlines sweeping regulatory reforms for the segment that could ultimately impact new and existing products.

Currently, SaMD is regulated by the TGA under the existing medical device framework, which was adopted in 2002. However, the agency has acknowledged that these laws are no longer adequate.

For instance, AI or machine learning software – the type of technology the TGA insists should undergo ongoing performance monitoring – is typically only classified as a Class I medical device under current regulations. This is the lowest classification, meaning such products are not subject to third-party oversight of their design, performance or development before, or while, they are included on the Australian Register of Therapeutic Goods (ARTG).

The paper identifies that, under current regulations, if a medical device is driven or influenced by software, then the software has the same classification as the medical device. This rule does not capture software that is not associated with a medical device.

Further highlighting the issue, the TGA stated: “The classification rules currently only consider the possible harm caused by a physical interaction of a medical device and a human. Software as a Medical Device does not have this direct physical interaction. The risks posed by software are more along the lines of incorrect calculation, incorrect diagnosis, or an incorrect decision for a treatment modality, which may subsequently cause great harm. For this reason, the classification of software under the current framework is often not in accordance with the level of risk it poses.”

In its consultation paper, the government regulator proposed strengthened classification rules that would see new and existing products subjected to third-party oversight, ensuring their classification is commensurate with the potential harm they could cause to patients.

"If it’s done the right way and embedded in the health system, this could lead to better clinical outcomes for patients"
Peter Van Wijngaarden, Centre for Eye Research Australia

“Existing SaMD products that are currently included in the ARTG as Class I would need to be re-classified. This may result in a requirement for their sponsors or manufacturers to hold additional evidence to remain on the ARTG. This re-classification of existing SaMD products would be subject to a transition period when the new regulations are introduced.”

The SaMD consultation is closely related to another government-backed report titled: Artificial Intelligence: Australia’s Ethics Framework. It is a discussion paper produced by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), which aims to focus the discussion around how AI should be used and developed in Australia.

Elsewhere, debate in US healthcare has reached a crossroads that Australian legislators will undoubtedly watch as they attempt to tackle the specific challenge of AI’s deep learning capabilities.

The US Food and Drug Administration (FDA) approved its first AI software program in April last year, called the IDx-DR, a milestone the agency itself called “a harbinger of progress”.

IDx-DR uses an algorithm to analyse images captured by the Topcon NW400 non-mydriatic retinal camera to detect signs of diabetic retinopathy. A doctor uploads the images to a cloud server, where the AI-powered program determines whether or not a patient needs to be referred for further assessment.

However, to obtain FDA approval, the technology must be “locked”, thwarting the algorithm’s ability to continually learn each time it is exposed to new data. Instead, if the system is to be improved, these algorithms are modified by the manufacturer at intervals, which includes controlled “training” with new data, followed by manual verification and validation of the updated algorithm before being frozen again.
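The locked-algorithm lifecycle described above – retrain offline, verify against held-out data, then freeze before deployment – can be sketched roughly as follows. Everything here (the toy threshold "model", the data, the acceptance criterion, the class names) is an illustrative assumption, not the FDA's or IDx-DR's actual process:

```python
def train(images_and_labels):
    """Toy 'training': pick the score threshold that best separates
    referable from non-referable cases. Stands in for real ML training."""
    best_threshold, best_accuracy = 0.5, 0.0
    for threshold in [i / 100 for i in range(1, 100)]:
        correct = sum((score >= threshold) == label
                      for score, label in images_and_labels)
        accuracy = correct / len(images_and_labels)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

def validate(threshold, held_out):
    """Manual verification step: accuracy on data the model never saw."""
    correct = sum((score >= threshold) == label for score, label in held_out)
    return correct / len(held_out)

class LockedModel:
    """Once deployed, the threshold is frozen: no learning in the field."""
    def __init__(self, threshold):
        self.threshold = threshold

    def refer(self, score):
        return score >= self.threshold

# Manufacturer-side update cycle: retrain, validate, freeze, deploy.
training_data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
held_out_data = [(0.85, True), (0.15, False)]

candidate = train(training_data)
if validate(candidate, held_out_data) >= 0.95:   # acceptance criterion
    deployed = LockedModel(candidate)            # locked until next cycle
```

The point of the structure is that `deployed` never updates itself from clinic data; any improvement re-enters at `train` and must clear `validate` before a new locked version ships.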

The FDA acknowledges there is a great deal of promise beyond locked algorithms “that’s ripe for application in the healthcare space”, which requires careful oversight to ensure the benefits of these advanced technologies outweigh the risks to patients.

As such, the FDA in April released a White Paper as it explores a framework that would allow for modifications to algorithms to be made from real-world learning and adaptation.

Meanwhile, within Australia’s eyecare sector, RANZCO has also acknowledged the arrival of AI, with CEO Dr David Andrews believing it will inevitably form part of future ophthalmology training. AI is also a key issue for the college’s newly established Future of Ophthalmology Taskforce to consider, placing it on an even footing with other emerging issues such as robotic surgery.

“RANZCO recognises that artificial intelligence is having and will continue to have an impact on the field of ophthalmology and we expect that this will mean ever more effective and efficient treatment for patients. As the use of AI in service delivery develops and it becomes one of the many tools used commonly by ophthalmologists, it will also develop as an element of the RANZCO training program,” he said.

Solving the AI conundrum

Few are better credentialed to analyse the ethical and legal issues AI presents for Australian eyecare than Associate Professor Peter van Wijngaarden, deputy director of the Centre for Eye Research Australia (CERA).

A member of the RANZCO Future of Ophthalmology Taskforce, van Wijngaarden has helped develop the College’s soon-to-be-released AI White Paper, and is currently coordinating a survey of RANZCO fellows, as well as members of the dermatology and radiology colleges, to better understand AI awareness and perception in order to help shape education and training.

He also contributed to national consultations on mobile health applications and the Australian ethics framework on AI.

In a broader context, van Wijngaarden sees the potential for enhanced access to timely eyecare as one of AI’s most significant benefits. An example of this is a collaborative study between CERA and Lions Outback Vision that will equip 10 outlying primary care services in Western Australia with AI screening systems.

Topcon NW400

If successful, this could provide evidence to support an expanded role for non-eyecare health professionals in screening for eye diseases.

Additionally, he says AI has the potential to reduce the lag that exists between the generation of evidence in clinical studies and its eventual implementation in mainstream clinical practice.

“We are fortunate in ophthalmology that our imaging modalities are very standardised. Increasingly imaging hardware, such as OCT, is being miniaturised, made cheaper and more widely available,” he said.

“There is this huge promise that everyone will have access to convenient, timely and accurate diagnoses which, if it’s done the right way and embedded in the health system, could lead to better clinical outcomes for patients. That’s the real promise.”

In addressing the key ethical issues associated with AI, van Wijngaarden first identifies the murky issue of data ownership and consent. This becomes particularly pertinent when large technology companies begin to dominate and monetise AI-assisted eyecare.

“There are some ethical challenges relating to data ownership and the value that resides in that data. AI systems are contingent upon the data that’s used to develop them, so careful attention should be paid to data use permissions and consent,” he says. “This is particularly the case when data may be used to generate value for companies.”

In highlighting other legal and ethical concerns, van Wijngaarden says AI programs are incredibly complex with many components and parameters contributing to the final determination, meaning that systems are perceived to operate as “black boxes”. Naturally, clinicians are sceptical and demand a basis for decisions made by these systems. They also want assurance that decisions are free from bias.

“This can be tackled in a number of ways, for instance there is a growing awareness of the need to train diagnostic systems on retinal images from a host of different cultural and ethnic groups, different genders and different ages to ensure that the systems perform well in a broad context,” he says.

In relation to clinician acceptance, van Wijngaarden says developers have begun to incorporate ‘smart approaches’, such as heat-maps that demonstrate the areas of an image that were most contributory to the final classification.

“If the heat-map flags areas of pathology that we as human clinicians recognise, then that can help to win trust. I think that these developments are important in enabling human oversight of AI system performance,” he says, “and tools like this can be leveraged to accelerate and enhance the training of eye health specialists”.
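One simple way such heat-maps can be produced is occlusion sensitivity: mask each region of the image in turn and measure how much the classifier's output drops. This is an illustrative assumption rather than the specific method any vendor mentioned here uses (production systems often rely on gradient-based saliency instead), and the "classifier" below is a toy stand-in:

```python
def classify(image):
    """Toy stand-in for a DR classifier: score is the fraction of
    bright ('lesion-like') pixels in the image."""
    pixels = [p for row in image for p in row]
    return sum(p > 0.5 for p in pixels) / len(pixels)

def occlusion_heatmap(image, patch=2):
    """Black out each patch in turn; the drop in score is that patch's
    contribution to the classifier's decision."""
    baseline = classify(image)
    rows, cols = len(image), len(image[0])
    heatmap = [[0.0] * (cols // patch) for _ in range(rows // patch)]
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            occluded = [row[:] for row in image]   # copy the image
            for rr in range(r, min(r + patch, rows)):
                for cc in range(c, min(c + patch, cols)):
                    occluded[rr][cc] = 0.0         # black out one patch
            heatmap[r // patch][c // patch] = baseline - classify(occluded)
    return heatmap

# 4x4 'retina' with a bright lesion confined to the top-left 2x2 patch
image = [[0.9, 0.9, 0.1, 0.1],
         [0.9, 0.9, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1]]
heat = occlusion_heatmap(image)
# The top-left cell of `heat` dominates: occluding that patch is the only
# change that hurts the score, so it drove the classification.
```

A clinician can then check whether the hot patch coincides with recognisable pathology, which is exactly the trust-building role van Wijngaarden describes.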

“This is also about winning hearts and minds of clinicians. There’s a great deal of trust that needs to be developed. Clinicians are not going to adopt these systems in a meaningful way if they don’t believe in them, or if they have difficulty understanding the basis for those decisions.”


Dehumanising healthcare?

AI system transparency, accuracy and reliability also feed into another key issue: to what extent should AI be influential in healthcare?

van Wijngaarden says there are some “ill-founded concerns” that AI will replace clinicians but, by the same token, suggests serious debate is needed around setting the limits of AI assistance.

“It’s going to be very much in the hands of the regulators and professional bodies as to the extent to which AI is embraced,” he says.

“Does incorporating AI potentially dehumanise healthcare? What is the role for AI to enhance healthcare without reducing that really important human element of the doctor–patient relationship?

“I think we are already seeing an increasing shift from a doctor–patient relationship to a doctor–system relationship, so I think this really demands a much broader debate for the community around where the lines should be drawn.”

Interestingly, the Google AI research group, which has developed its own diabetic retinopathy AI screening system, recently investigated this specific issue. In its study – published in March – the company found a combined diagnosis between a physician and AI technology was more effective than either one or the other on its own when examining ocular diseases.

The researchers examined whether the algorithm could do more than simply diagnose disease, and developed a new computer-assisted system that could ‘explain’ the algorithm’s diagnosis.

According to the study’s authors, previous attempts to utilise computer-assisted diagnosis demonstrated some eyecare providers were overly reliant on the equipment, leading them to repeat the machine’s errors. Others were under-reliant or ignored the equipment’s recommendations and missed accurate predictions. The Google AI research team believed that some of these pitfalls might be avoided if the computer provided an explanation for its predictions.

In testing the theory, the researchers developed two types of assistance to help physicians read the algorithm’s predictions: Grades, a set of five scores representing the strength of evidence for the algorithm’s prediction; and Grades + heatmap, an improved grading system that adds a heatmap measuring the contribution of each pixel in the image to the algorithm’s prediction.

Ophthalmologists were asked to read each image once under one of three conditions: unassisted, grades only, and grades + heatmap. Both types of assistance improved physicians’ diagnostic accuracy as well as their confidence in the diagnosis; however, the degree of improvement depended on the physician’s level of expertise.

Google AI's retina scans with grades


Of equal importance is the issue of accountability. What should be done when a misdiagnosis occurs? Who is liable? And what process should be applied to resolve legal and ethical issues?

The Royal Australian and New Zealand College of Radiologists, which is also grappling with an AI transformation, sought to answer this in a recently released set of AI ethical principles, currently out for consultation amongst its members.

It highlights that liability for patient care principally rests with the responsible medical practitioner, but it may not always be so simplistic.

“However, given the multiple potential applications of machine learning (ML) systems and AI tools in the patient journey, there may be instances where liability is shared between: the medical practitioner caring for the patient; hospital or practice management who took the decision to deploy the systems or tools; and the company which developed the ML system or AI tool,” the paper reads.

“The potential for shared liability needs to be identified and recorded upfront when researching or implementing ML systems or AI tools.”

Once AI establishes itself as a mainstream healthcare tool, van Wijngaarden believes it will be important to avoid knee-jerk reactions to adverse events, akin to that seen in the trial of autonomous cars by ridesharing company Uber.

Driverless cars have been proven to be safer than human-driven vehicles over many millions of on-road test hours; however, Uber’s trial was brought to an abrupt end after one of its cars hit and killed a pedestrian in Arizona last year.

“Are we more likely to forgive a machine or a human? I think at the moment we are more likely to forgive a human and that has been part of the current hold up in the wide-scale implementation of autonomous driving,” van Wijngaarden says.

“We like to think that we, as humans, are less error-prone than we are. There’s already bias in our understanding of human error, so any consideration of machine error needs to have an objective appraisal of the actual probability of human error.”

Looking ahead

"One of the tools we have developed to help address this challenge in our diabetic retinopathy AI system, is a quality control system which alerts the screeners if one of the images they take is not of sufficient quality"
Yogi Kanagasingam, CSIRO

Developing an up-to-date regulatory framework and oversight for deep learning AI in healthcare is essential, van Wijngaarden says, but the challenge lies in creating a system that is neither too loose nor too restrictive.

The TGA’s proposals for AI medical device reform were open for public consultation, which closed on 31 March.

“The TGA received 41 submissions that, whilst indicating broad support for the proposed regulatory changes set out by the TGA, identified a range of other considerations and issues to be addressed. Submissions have not yet been published,” a spokesperson told Insight, adding that consultation feedback has been presented to the government for consideration.

Feedback from the submissions and the issues identified in the submissions were also presented by the TGA in a Digital Devices Webinar on 20 June, and discussed at a workshop on 24 June.

van Wijngaarden says the Australian Government could learn from the current work in the US, referring to the FDA’s review of its rules that presently only allow approval for ‘locked’ AI systems.

“There is a very clear need for regulation, and that becomes all the more important when there is the autonomous provision of diagnostic or management advice, but how do you technically do that?

“I think in the first instance we need to set agreed standards that are based on broad consultation, but there’s also a challenge of getting the regulatory level right. You don’t want to stifle health-saving innovation at the expense of overly restrictive regulation. How do we think about regulating deep learning systems without compromising their performance and, in a way, ‘dumbing down’ an intelligent technology?

“There’s a balance that needs to be struck, because many of these technologies have the potential to deliver huge benefits for health. We need to get this right.”

Homegrown AI blazes trail

The global race to develop and commercialise AI systems for eyecare has firm roots in Australia, with one of the most advanced homegrown innovations being the DR Grader.

A system that combines AI and rule-based technology to detect and grade diabetic retinopathy by analysing high-resolution images, DR Grader was invented by a small group of scientists at Canberra-based CSIRO, co-led by Professor Yogi Kanagasingam.

Silicon Valley technology company TeleMedC acquired the licence for the trailblazing software and initiated commercialisation efforts in 2017, remarkably beating other multinational giants such as Google AI and IBM.

Retina with signs of diabetic retinopathy

DR Grader works across a range of high-quality fundus cameras, including EyeScan – another Kanagasingam invention – and has achieved a sensitivity of 97% and a specificity of 92%, reportedly the highest in the market.
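Those two figures have precise definitions: sensitivity is the share of genuinely diseased eyes the system flags, and specificity is the share of genuinely healthy eyes it correctly clears. A minimal sketch, using made-up counts rather than TeleMedC's actual validation data:

```python
def sensitivity(true_pos, false_neg):
    """Share of genuinely diseased eyes the system correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Share of genuinely healthy eyes the system correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening run: 100 diseased eyes, 100 healthy eyes
print(sensitivity(true_pos=97, false_neg=3))    # 0.97 -> 97% sensitivity
print(specificity(true_neg=92, false_pos=8))    # 0.92 -> 92% specificity
```

The trade-off matters clinically: high sensitivity keeps missed disease rare, while high specificity keeps unnecessary referrals down.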

The screening can be conducted by a GP, with a determination made within two minutes.

Kanagasingam, who is now aiming to develop AI systems for glaucoma and age-related macular degeneration, told Insight correct data input, such as retinal images, was a vital component in maintaining the software’s quality.

This was particularly important when lesser-trained health professionals were using the technology, or cataracts compromised image quality.

“One of the tools we have developed to help address this challenge in our diabetic retinopathy AI system, is a quality control system which alerts the screeners if one of the images they take is not of sufficient quality,” he said.
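The checks inside DR Grader are not public, but the kind of quality gate Kanagasingam describes can be sketched with an assumed brightness-and-contrast test on the captured image (thresholds and function names here are illustrative, not the product's):

```python
def image_quality_ok(pixels, min_brightness=0.2, min_contrast=0.1):
    """pixels: flat list of grayscale values in [0, 1]."""
    mean_brightness = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)      # crude contrast proxy
    return mean_brightness >= min_brightness and spread >= min_contrast

def screen(pixels, grade):
    """Run the quality gate before handing the image to the grader."""
    if not image_quality_ok(pixels):
        return "Image quality too low - please retake the photograph"
    return grade(pixels)

good_capture = [0.1, 0.4, 0.7, 0.5]        # bright enough, good spread
dark_capture = [0.05, 0.06, 0.05, 0.04]    # underexposed: gets flagged

result = screen(dark_capture, grade=lambda p: "no referable retinopathy")
```

Gating the grader this way is what lets lesser-trained screeners be prompted to retake a photograph on the spot, rather than sending an ungradeable image downstream.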

According to TeleMedC, DR Grader is the first such product in the Asia Pacific region to receive regulatory certifications for: ISO 13485 (worldwide), TGA (Australia, Class I) and HSA (Singapore). TeleMedC has lodged submissions for CE (European Union) and FDA (USA) approvals.

Prior to its commercialisation, the software was first put to work in a GP clinic in Western Australia (WA), with bold plans of a roll out into 20 further GP clinics in WA, extending into other Australian states and eventually Singapore.

However, TeleMedC has since failed to gain significant traction for DR Grader in Australia – largely due to state and federal government apathy, according to CEO Mr Para Segaram, who says the healthcare system has some way to go before fully understanding and appreciating the technology.

“Unless we have a broad and sustainable scheme in place on the part of the different states and/or the Australian Federal Government to support this initiative, it is very hard to implement this at a large scale in Australia,” he said.

“[However] we have secured the deployment of our DR Grader software in 120 clinics in Singapore alone – with ongoing negotiations with other South East Asian countries to see this implemented in the near future. Singapore has acted swiftly on their interest in seeing this AI technology implemented in their country.

This will serve not only as a test for South East Asia but for the rest of the world.”

