Translating the Eye's Secret Language


The practice of ophthalmology stretches back millennia. We find the first records of medical eye treatments as early as 1550 BC, nearly 3,600 years ago. But it was the invention of the ophthalmoscope in the 19th century that crystallized what had been “primarily speculative” into concrete understanding, precipitating a wave of new diagnostics and treatments for ocular diseases.

As modern medicine’s timeline unfolds, we see bigger and better technologies dominating the research landscape, and among them AI looms like a city skyscraper. But while AI has stretched the dimensions of every medical domain, in ophthalmology it has added an entirely new one: scientists are now peering through the eye as a window into the health of the whole body.

Ophthalmology in Crisis: Why Global Eye Care Needs a Radical Shift

To understand how AI is reshaping ophthalmology, we first need to consider what’s driving these two fields together.

Since the COVID-19 pandemic, healthcare systems around the world have teetered ever closer to crisis, with staff shortages, ageing populations, and the rising prevalence of non-communicable diseases mostly to blame.

In the UK, demand is especially high: a recent report from the British Medical Association suggests that the NHS is experiencing some of the most severe pressures in its 75-year history.

Crucially, ophthalmology is one of the most adversely affected departments: in 2017, it overtook trauma and orthopedics as the busiest NHS outpatient specialty. And according to a 2022 census from the Royal College of Ophthalmologists, 75% of NHS ophthalmology departments do not have enough staff to meet current demand.

And concerns are rising. A 2024 survey revealed that 70% of ophthalmologists felt more worried about the overstretched workforce’s impact on patient care than they had the previous year. Perhaps more importantly, only 25% felt their department could meet local healthcare demand.

But it’s not just the UK. The ophthalmology crisis has swept across the globe—with it, darkening storm clouds gather overhead. A 2024 forecast by Dr Sean Berkowitz and colleagues predicted a “sizable shortage of ophthalmology supply relative to demand by the year 2035,” landing ophthalmology just shy of the bottom of the worldwide leaderboard for workforce adequacy.

The repercussions? Waitlists are growing longer while patients’ vision continues to deteriorate. As of December 2024, two-thirds of patients were waiting more than four months to receive specialist eyecare, while nearly a quarter had been waiting over a year.

But these waitlists aren’t just “inconvenient”; they take a pronounced toll on both physical and mental health, two interconnected forces that feed into each other. Of those waiting, 70% reported worsening eyesight, and 54% said the wait had interfered with their ability to work. Nearly 70% agreed that their mental health had been affected.

Charting the Way Forward

These figures are not just statistics; they’re alarms blaring against the backdrop of a collapsing workforce, with experts urging “immediate action” as the system nears breaking point. What that action should be, however, remains a mystery, its voice muted among the cries of concern.

For example, we might assume that, owing to the current shortage, one strategy would simply be to “fill in the gaps” with more ophthalmologists. But experts are doubtful this alone would meet the demand.

Dr Thiago GS Martins highlights this in a commentary published in the British Journal of Ophthalmology, noting that the problem requires “more complex solutions.” He points to a fundamental issue: eye care is an uneven playing field, with stark differences in both quality and access across countries. And because ophthalmology often relies on high-tech, expensive equipment, recruiting more clinicians without the necessary infrastructure is, in many cases, futile.

Martins, like others, suggests that AI could help meet both the digital and workforce demands. With automated tools already licensed for skin cancer detection, and more AI technologies in line for FDA approval, this is well within the realms of possibility.

Pivoting Towards an AI-Powered Future

This is the idea that struck Professor Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital, London, in 2016.

He had watched Google’s AI, AlphaGo, take the media by storm after winning four out of five games against one of the world’s top Go champions—a landmark achievement in a game considered far more complex for computers than chess. This got Pearse thinking: What if Google could help solve the ophthalmology crisis with AI?

But there was a challenge: Building an accurate model for ophthalmology would require data—and lots of it.

That’s where Moorfields comes in.

According to Scimago, the hospital ranks as the leading ophthalmology institution globally, reflecting not only the clinical care it delivers but also the scale of the data it collects. In 2023–2024, Moorfields saw nearly 800,000 patients; given that most of these visits would have involved some form of eye scan, that amounts to hundreds of thousands of medical images in a single year.

Moorfields Eye Hospital's main building. Taken by Fin Fahey.

It’s this huge reservoir of data that Pearse wanted to channel into something useful: an AI model to support rapid and easy eye disease detection. And, after floating the idea to Google, the two struck up a partnership in May that same year: Moorfields Eye Hospital would provide access to anonymized data (patient eye scans), and together with Google they would develop a model to detect signs of ocular disease from these scans.

In 2018, a seminal paper published in Nature Medicine marked the first breakthrough to emerge from this collaboration. Using ~15,000 anonymized eye scans, the team developed an AI that could make the correct referral decision across more than 50 eye diseases 94% of the time, or “just as well as your doctor,” as the press excitedly claimed.

Then, two years later, the partnership bore its second research paper: “Predicting conversion to wet age-related macular degeneration using deep learning.” This time, they wanted to predict when patients who already had wet age-related macular degeneration (AMD) in one eye would start to develop it in their other, “good” eye, within a six-month window.

Finding and treating wet AMD early is essential; without timely intervention, patients can lose their sight in a matter of weeks or even days. But because of the workforce pressures, backlogs seriously threaten doctors’ ability to step in at the right time.

And, at a time when machine learning in medicine was still in its infancy, the research investigated whether AI was a feasible solution.

In the first paper, the group found that their model could indeed match human expert predictions, with an error rate of only 5.5%. The results magnetized the media, with outlets reporting that the AI “did not miss a single urgent case” and that it could perform as well as two of the world’s leading ophthalmologists.

As for the second research paper, the model correctly identified the warning signs of wet AMD in 80% of patients, performing better than five out of six experts.

Reena Chopra, Honorary Research Fellow at Moorfields Eye Hospital, said:

“We found that the ophthalmologists and optometrists in our study had some intuition into which eyes will progress to wet AMD. The AI was able to outperform them, indicating there are signals within OCT scans that only the AI can detect. This unlocks new areas of research into a disease where there are still many unanswered questions about how it develops.”

Introducing INSIGHT

Following the Moorfields timeline, we reach a key milestone in 2019 with the launch of the INSIGHT eye hub, which has grown to become the world’s largest ophthalmic bio-resource. INSIGHT is an NHS-led initiative based at Moorfields that aims to help transform eye care by making health data accessible to the research community, from healthcare professionals to life sciences start-ups to policy makers and pharma companies.

At its core, INSIGHT curates and stores anonymized eye scans from two different NHS Trusts: a vast wealth of data that serves as fertile ground for growing more powerful and inclusive AI models in ophthalmology. By drawing from institutions like Moorfields, one of the world’s leading eye-care centers, the team hopes digital innovations stemming from the hub will perform equally well across all populations. After all, a larger dataset increases the chances that different groups are represented.

This commitment laid the groundwork for one of INSIGHT’s most pivotal contributions. In 2023, Pearse Keane’s research group, spanning UCL and Moorfields, launched their AI foundation model, “RETFound,” designed to detect signs of disease from two common ophthalmic imaging techniques: optical coherence tomography (OCT) scans and color fundus photography (CFP).

As the world’s first foundation model for eye health, RETFound marked a key milestone for AI in ophthalmology. And, by allowing access to the pre-trained model, Moorfields enabled other research groups to fine-tune the AI for specific settings or for identifying particular conditions (a sketch of what that looks like follows below). For example, a paper published in NPJ Digital Medicine adapted RETFound for a Shanghai-based community, further training the model on over 17,000 images from the local population.
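To make “fine-tuning” concrete, here is a minimal sketch of how a pre-trained vision-transformer backbone, such as RETFound’s, can be adapted to a downstream classification task in PyTorch. The checkpoint filename, class count, and data are hypothetical placeholders, not RETFound’s actual release artifacts.

```python
import torch
import torch.nn as nn
import timm  # RETFound's encoder is a vision transformer; timm offers compatible backbones

# Build a ViT backbone with a fresh two-class head (e.g. disease vs. healthy)
model = timm.create_model("vit_large_patch16_224", num_classes=2)

# Hypothetical checkpoint path; uncomment if you have pre-trained weights on disk
# state = torch.load("retfound_weights.pth", map_location="cpu")
# model.load_state_dict(state, strict=False)  # strict=False: the new head has no pre-trained weights

# Stand-in data loader of random "scans"; replace with real labeled images
dataset = torch.utils.data.TensorDataset(
    torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,)))
train_loader = torch.utils.data.DataLoader(dataset, batch_size=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR preserves pre-trained features
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The key design choice is the small learning rate: fine-tuning gently nudges the pre-trained features toward the new task rather than relearning them from scratch.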

In a more recent study published in Ophthalmology Science, researchers fine-tuned RETFound to detect choroidal melanoma, a rare but aggressive cancer attacking the vascular layer of the eye. While this is usually picked up by imaging techniques such as CFP, OCT, and ultrasonography, it is difficult to differentiate choroidal melanoma from nevi, benign clusters of pigmented cells that appear as ‘freckles’ inside the eye.

RETFound: Zooming in on the Framework

RETFound is a self-supervised model trained on 1.6 million images, over 100 times more data than was used in the first research paper from Moorfields and Google DeepMind. Unlike traditional supervised models, which rely on labor-intensive, manually annotated scans, RETFound learns to spot patterns without labels, effectively “teaching itself” from raw data. This self-supervised approach offers a major advantage: it dramatically reduces the time and effort needed to prepare training datasets, letting the model explore the data without prior assumptions.

As for its framework, RETFound adopts a specific configuration of masked autoencoders, an approach known as masked image modeling. Masked autoencoders ‘hide’ random values in the dataset, forcing the model to fill in the gaps before comparing its predictions with the original data. As the learning process continues, these predictions become more and more accurate.

Masked image modeling uses the same principle, but with images. Random portions of the image are hidden as the AI iteratively learns to reinsert the missing pieces. By doing this, RETFound can understand the patterns—things like textures or shadows—that make up the features of an eye scan, rather than just memorizing and recreating the small details.

Visualization of masked image modeling, where a masked autoencoder is forced to predict hidden portions of an image.

Think of it like studying for an exam: A student who purely memorizes the course content might struggle to answer a question in an unfamiliar scenario. However, a student who took the time to understand the context would be able to adapt their knowledge to help them solve the problem.
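For readers who want to see this objective in code, below is a minimal, self-contained sketch of masked image modeling. It assumes a toy MLP in place of RETFound’s actual vision-transformer encoder and decoder: random patches are hidden, the network predicts them, and the loss is scored only on the hidden patches.

```python
import torch
import torch.nn as nn

def patchify(img, p=16):
    """Split a (B, C, H, W) image batch into flattened (B, N, C*p*p) patches."""
    B, C, H, W = img.shape
    patches = img.unfold(2, p, p).unfold(3, p, p)        # (B, C, H/p, W/p, p, p)
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

imgs = torch.rand(4, 3, 224, 224)                        # stand-in "eye scans"
patches = patchify(imgs)                                 # (4, 196, 768)
B, N, D = patches.shape

mask = torch.rand(B, N) < 0.75                           # hide ~75% of patches, as in MAE
visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out the hidden patches

# Toy stand-in for the encoder/decoder: predict all patches from the visible ones
model = nn.Sequential(nn.Linear(D, 512), nn.GELU(), nn.Linear(512, D))
recon = model(visible)

loss = ((recon - patches) ** 2)[mask].mean()             # MSE on the hidden patches only
loss.backward()                                          # one self-supervised learning step
print(f"reconstruction loss: {loss.item():.4f}")
```

Because the loss only counts the hidden patches, the model cannot succeed by copying its input; it has to learn what retinal structures look like well enough to predict the parts it never saw.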

Global Reach: How AI is Transforming Eye Care in the Most Remote Corners

Zooming out from RETFound, we can trace INSIGHT’s impact across the globe to Australia. In the west of the country lies the Pilbara, a remote region almost double the size of the UK with less than 1% of its population. As a result, access to healthcare is extremely limited.

To make matters worse, the Indigenous population living in these remote areas has a much greater risk of diabetic retinopathy, a disease in which high blood sugar levels cause progressive damage to the blood vessels at the back of the eye. If left untreated, this leads to serious vision problems, including blindness. In fact, in many countries, diabetic retinopathy is the leading cause of blindness in the working-age population.

And the disease poses an even greater challenge to Indigenous Australians: not only is diabetic retinopathy over five times more common in this group, but, because many live in isolated areas like the Pilbara, they don’t have the same opportunity to monitor and treat it. It’s this double burden that threatens an entire population’s eye health.

Australian non-profit organization Lions Outback Vision set out to tackle the issue in 2024. Collaborating with Topcon and Google, and with support from INSIGHT’s data hub and UCL, the organization developed a breakthrough medical service: a van, fully equipped with an AI system and an OCT machine, that can detect on the spot diabetic retinopathy that might otherwise go unnoticed for months.

Lions Outback Vision van, equipped with an OCT machine and AI system. Image taken from UCL press release.

Alexandra Black, research communications lead at INSIGHT, told us: “[The Pilbara] is absolutely vast, and people are spread out across this great distance, meaning they don’t have the same access to regular healthcare. This technology is transformative for these people—and it’s really helping Indigenous Australians.”

Beyond Speed: Detecting the Undetectable

But it’s not just about identifying signs of eye disease faster—AI can go one step further: detecting things that humans can’t actually see.

For example, in 2024, a study in PNAS Nexus revealed that AI could help clinicians tell male and female retinas apart, something humans cannot easily do. In fact, according to the paper, expert ophthalmologists were no better at identifying the sex than non-experts, with both guessing correctly only 50% of the time. But exposing a convolutional neural network (CNN) to these scans revealed some very subtle differences: male retinas, for example, tend to have a darker ring around the optic disc and more blood vessels than female retinas.

The researchers then trained clinicians on these distinguishing characteristics, after which the clinicians correctly identified the sex 66% of the time. Although the group acknowledges that this is “far away” from 100% accuracy, they highlight that the findings “showcase an opportunity for biomarker discovery through CNN applications,” building up clinical toolkits with new diagnostic options.

By “peering under the hood” of the CNN, the researchers discovered, for the first time, the features that set male and female eyes apart, purifying what was once murky knowledge into a clearer body of water that scientists can now explore. One common way of peering under the hood is sketched below.
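To illustrate one thing “peering under the hood” can mean in practice, here is a hedged sketch of a gradient-based saliency map, a common CNN interpretability technique that highlights which input pixels most influence a prediction. The network and image are stand-ins; the study’s authors may well have used a different attribution method.

```python
import torch
import torchvision.models as models

# Stand-in CNN; imagine it has been trained on labeled retinal photographs
model = models.resnet18(weights=None)
model.eval()

# Placeholder fundus image; requires_grad lets gradients reach the pixels
scan = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(scan)[0].max()  # logit of the highest-scoring class
score.backward()              # backpropagate that score to the input pixels

# Per-pixel importance: largest absolute gradient across color channels
saliency = scan.grad.abs().max(dim=1).values  # shape: (1, 224, 224)

# Bright regions of `saliency` (e.g. around the optic disc) mark the
# features the network leans on, which researchers can then inspect.
```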

This positions AI as a key player in the future of ophthalmology: it’s not just about detecting diseases quicker, but accessing newfound knowledge—ultimately, deepening our understanding of the eye beyond what was thought possible.

The Eye’s Transparency Makes it a Powerful Multi-Organ Diagnostic Tool

We now rotate the ophthalmological kaleidoscope to the field of oculomics. Long regarded as a window to the soul, the eye is now revealing its deeper potential as a window to the whole body.

A wide range of metabolic and neurological diseases alter the eye’s biological makeup. Parkinson’s disease, for instance, appears to lower the retina’s blood vessel density; vessels in the back of the eye narrow and balloon with cardiovascular disease; and yellow deposits around the eyelids are a warning sign of high cholesterol.

Owing to the eye’s unique transparency, clinicians can monitor these changes through a simple, non-invasive scan that a local optician might perform, allowing them to “observe the body’s internal mechanisms directly.”

What’s more, the eye might uncover insights that are nearly impossible to obtain through traditional diagnostics. A 2025 review—published in Progress in Retinal and Eye Research—underscores this, noting the retina visualizes two critical systems—the microvasculature and the nervous system—that are “largely inaccessible elsewhere in the body.”

Although the roots of oculomics stretch back to the mid-19th century, AI has brought a transformative wave to the field. Where once only trained ophthalmologists could interpret retinal images, AI models can now sift through millions of scans at scale, spotting correlations that humans might completely miss.

Core Advancements in Oculomics:

  1. From the tech giants: Google’s AI to detect cardiovascular disease. In 2018, Google published research on an AI algorithm trained to predict cardiovascular risk from 284,335 patients’ retinal scans.

    And, despite its early place on AI’s ophthalmology timeline, the results were surprising: the AI could distinguish between smokers and non-smokers 71% of the time, a much higher accuracy than human experts, whose guesses were right only about half the time. Although the work now dates back seven years, and Google stressed that it was purely for research purposes, it marked a “revolutionary” approach to non-traditional disease forecasting.
  2. Insitro and INSIGHT Collaboration. Announced at this year’s WIRED Health event, Insitro, a drug discovery company, has partnered with INSIGHT to build a foundation model that can detect signs of neurodegeneration from retinal eye scans.

    The goal is to understand the mechanisms behind dementia, pinpointing new biomarkers that could scaffold the development of new therapies.
  3. Vitazi. This American company started its journey with a “teleretinal” system that detects diabetic retinopathy in under 45 seconds and, critically, with 95% accuracy. Amid the landscape of diabetes care, they root the tool in a pressing problem: 90% of vision loss is preventable with screening, yet 50% of patients miss out on their annual check-ups.

    But Vitazi have since turned their attention to broader health, expanding into the field of oculomics. In fact, they describe themselves as “the first Oculomics company scanning for systemic diseases using retinal biomarkers, starting with the damaging impact of advanced diabetes.” In February this year, Vitazi partnered with Eye Associates of New Mexico, the state’s largest ophthalmology network, to access its huge ophthalmology data pool. With a bigger dataset, they hope to refine their OculoInsights platform, an AI tool for systemic disease detection that uses the same retinal scans that diagnose diabetic retinopathy.
  4. Eyetelligence. An Australian AI-powered ophthalmic screening company. Eyetelligence’s software analyzes retinal fundus photos to identify common eye diseases, including diabetic retinopathy, but can also uncover signals of broader microvascular health. As its co-founder noted, algorithmic retinal analysis can reveal “any disease that affects the microvascular system.”

    The company’s FDA- and CE-cleared AI tools are used in optometry chains, enabling faster detection of vision-threatening eye conditions and, by extension, insight into systemic vascular diseases.
  5. Heart Eye Diagnostics. This company’s AI doctor, “Dr.Noon,” splits into two branches, “eye” and “heart,” covering ocular and cardiovascular disease risk respectively. Dr.Noon Eye diagnoses eye diseases like glaucoma, cataracts, and diabetic retinopathy with 96% accuracy.

    Perhaps more interestingly, Dr.Noon Heart can pick out signs of high blood pressure, arterial plaques and stiffness, and coronary artery disease, all of which can lead to heart attacks and strokes if not carefully managed. According to the company, Dr.Noon Heart outperforms heart CT scans in its detection rates, while being much quicker and cheaper.

    The platform is also being investigated for estimating kidney function—something closely linked with both heart disease and immune function.

What The Eye Can Tell Us About Lifespan

From a research perspective, oculomics almost seems to have limitless potential for health monitoring.

We can find one of the most fascinating, if somewhat unnerving, applications tucked away in The Lancet Healthy Longevity’s vaults: a 2024 paper investigating how well an AI model, “RetiPhenoAge,” could predict life expectancy from retinal images.

Chronological age, the number of candles you put on your birthday cake, is the measure most of us use to describe how old we are, but it might not actually be a good measure of ageing. A much better indicator is biological age: how old your body seems based on molecular and physiological markers. Shaped by genetics, lifestyle, and environment, biological age might reveal that someone who is chronologically 40 more closely resembles a 50-year-old, ageing much faster than expected.

With global populations living longer than ever—acquiring more age-related diseases like diabetes and cancer—we need a better gauge of ageing than the calendar alone. This, coupled with the pull towards preventative health, makes biological age a valuable judge of what steps need to be taken—and when—to keep an individual healthy.

Quantitatively speaking, biological age can be measured using telomere length, the “protective caps” on the ends of chromosomes that shorten over time, or through epigenetic clocks, which examine chemical tags on DNA that control gene expression and tend to change in predictable ways as we age. But getting this information is technically demanding and expensive, and not feasible for routine clinical practice, so scientists only really get a sense of biological age in research settings. (A toy version of the epigenetic-clock idea is sketched below.)
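At its core, an epigenetic clock is a weighted sum of DNA-methylation levels at selected CpG sites. The weights, intercept, and sites below are invented for illustration; published clocks such as Horvath’s fit hundreds of such weights with penalized regression.

```python
import numpy as np

# Hypothetical per-site weights and intercept (illustrative only)
cpg_weights = np.array([12.4, -8.1, 5.7])
intercept = 35.0

# Methylation "beta values" (fraction of cells methylated, 0-1) at those sites
methylation = np.array([0.62, 0.31, 0.78])

# Predicted biological age = intercept + weighted sum of methylation levels
biological_age = intercept + cpg_weights @ methylation
print(f"Predicted biological age: {biological_age:.1f} years")  # ~44.6
```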

This invites the question: is there a better way of judging someone’s true age? The answer might be gazing into our eyes—literally.

Visualization of an eye as a biological clock. Image created with Flux1.1 Pro Ultra.

Researchers supported by the National Medical Research Council, Singapore, trained an AI on eye scans from over 34,000 participants in UK Biobank, one of the largest health databases in the world, tracking over 500,000 individuals across the UK. And not only that: the model was also given a host of clinical information, such as markers of immune, liver, and renal function, as well as inflammation and energy metabolism.

They tested its ability to predict health outcomes over the following decade: specifically, an individual’s risk of dying, of suffering a serious health event like cancer or a heart attack, or of developing a chronic disease. To do this, they needed a retrospective approach: they already knew the patients’ outcomes, but only let the model “see” their data from 10 years prior. That way, they could easily tell whether it had made the right predictions. (A toy version of this set-up is sketched below.)
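To make the retrospective set-up concrete, here is a minimal sketch on synthetic data: a simple risk model is fitted to a stand-in baseline biomarker (an invented “retinal age gap”), then scored against the 10-year outcomes that, in this framing, are already known. Everything here is illustrative; RetiPhenoAge itself is a deep-learning model, not a logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Stand-in baseline biomarker: predicted "retinal age" minus true age
retinal_age_gap = rng.normal(0, 5, n)

# Simulate the already-known 10-year outcomes, loosely linked to the biomarker
p_event = 1 / (1 + np.exp(-(retinal_age_gap - 4)))
died_within_10y = rng.random(n) < p_event

# Fit on one group of patients, evaluate predictions on held-out patients
X_train, X_test, y_train, y_test = train_test_split(
    retinal_age_gap.reshape(-1, 1), died_within_10y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"10-year mortality AUC on held-out patients: {auc:.2f}")
```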

And the results were surprising. When tested against other biological ageing markers—like telomere length, or even previous AI models—RetiPhenoAge came out on top.

It was better at flagging individuals at higher risk of dying, developing heart disease, or other chronic conditions in the years that followed. In fact, for all outcomes except cancer, RetiPhenoAge offered stronger risk prediction than existing methods.

Unlike DNA methylation or blood-based tests, it’s also faster, cheaper, and entirely non-invasive, relying only on a quick eye scan. That makes it a compelling candidate for large-scale public health screening, especially in ageing populations.

Of course, this is preliminary research that is in no way ready for clinical application. But if AI models like this ever become widely available, they will raise a difficult question: How much data is too much? Should we really know someone’s risk of serious illness or death over a decade in advance, especially if early intervention is not enough to change the outcome?

Concluding Remarks

Nonetheless, these advancements highlight bigger questions for AI in ophthalmology: Is it just a time-saving tool, detecting blindness-causing diseases faster than clinicians? Or, could AI reshape an organ that scientists once “speculated” about into a multi-dimensional sensor—not just assessing the health of the body, but actually forecasting its future over the next decade?