Deep Learning Is Reaching Its Limits. Here’s What’s Coming.
AI is reaching a limit. But it’s not a ceiling that can be broken with just bigger and better systems. To truly reach the next stage of AI, Dr Vaishak Belle says we need a much bolder shift.
After a decade of breathtaking breakthroughs, from AlphaFold mapping proteins to ChatGPT writing code, many experts say deep learning is starting to show its cracks. The next revolution, they argue, won’t come from more data or larger models, but from a new way of reasoning: neuro-symbolic AI.
The radical burst of deep learning
There’s no doubt that deep learning has been instrumental for the life sciences industry.
Take AlphaFold. Google DeepMind developed the first version in 2018, and the program went on to win its creators a Nobel Prize. It uses a protein’s amino acid sequence to predict its structure in 3D space, allowing scientists to visualize protein folding without the traditional, time-consuming lab experiments.
From a pharmaceutical perspective, deep learning is “trimming the fat” around drug discovery, especially for start-ups. In 2023, Insilico Medicine pushed its AI-designed drug candidate into Phase II clinical trials as a potential treatment for idiopathic pulmonary fibrosis, a chronic lung disease in which built-up scar tissue makes it increasingly difficult to breathe.
The company reportedly used generative AI for “every single step” of the drug discovery process: finding a molecular target, generating compounds that would interact with said target, and predicting the outcomes of clinical trials.
Although “efficiency” has become something of a buzzword in AI-era health reporting, the “e-word” gains here were remarkable: what would’ve cost $400 million and taken as long as six years to complete was slashed to roughly a tenth of the cost and a third of the timeframe.
… But is it the saving grace?
Even so, a Financial Times article published in September asks a very good question: where are all the AI drugs? If AI is putting scientists light-years ahead, why are we not seeing the results in clinics?
One reason is the amount of time that goes into creating the models in the first place. Training these models requires a lot of know-how. Experts have to make choices about how the model is built, how the data is prepared and cleaned, and how to design rules that guide the model’s learning. And, oftentimes, they won’t document every choice along the way. This happened with AlphaFold: either as a design choice or to protect intellectual property, its creators left knowledge gaps in the building process. Without an “instruction manual,” other developers have to start from scratch when building similar models.
Even then, clinical trials still take years to conduct. After all, it could be dangerous for patients to take a drug “straight from the computer” before doctors are absolutely certain that it’s safe for the target population. Even if AI can already predict the trial outcomes, it’s likely that at least some of the predictions are slightly ‘off’, perhaps because of things like bias or AI misalignment.
Another reason is slightly less obvious: according to some experts, deep learning has reached its limits. To an extent, the term risks becoming technobabble, used to make a product sound more sci-fi than it actually is. It might even be thought of as a disco-ball term, its glittery appeal hiding its limitations.
AI knowledge is shallow
Dr Vaishak Belle, Director of the Bayes Centre at the University of Edinburgh, explains that deep learning ‘knowledge,’ particularly in the drug discovery context, is quite superficial.
“What often happens is scientists have some way to capture these drug interactions: they have all of the data from some experimental setup that they’ve done. They feed these to the models and they get a prediction out of it.”
But predicting is distinct from understanding. The model might learn that targeting a molecule with one type of compound is effective, but it can’t “think up” a new compound that would target a specific genetic mutation, or the protein it makes, for optimum treatment. It’s regurgitating what scientists already know, just much quicker.
“All it’s really doing is putting together pieces that it has seen,” says Belle. “It couldn’t produce a masterpiece from scratch; it’s not smart enough to come up with new imaginary things that no one has ever thought of.”
It might, at some point, produce something excellent. But only by chance. Belle ties it to the saying “give a monkey a typewriter, and it will eventually produce a play by Shakespeare,” but qualifies that “there would have to be an infinite amount of time, and a very magical, coincidental set of things for that to happen.”
AI distinctions and their limitations
Belle points, instead, to a different strategy that merges both the “old and the new” in the field of artificial intelligence: neuro-symbolic AI.
To understand this, it helps to look back at what came before deep learning. In AI’s early days, scientists were more interested in symbolic AI, which uses established rules, a “knowledge base,” to calculate outcomes, rather than learning features from data.
Say, for example, clinicians know that disease X presents as a rash and vomiting. Symbolic AI would say “IF the patient has a rash AND is vomiting THEN they might have disease X,” reasoning like a human might, but it’s limited to what the clinicians already know.
Machine learning, on the other hand, doesn’t start with “if-then” rules. It learns statistical relationships from examples, mapping inputs to outputs. Given enough patient data, a model might infer that rashes and vomiting frequently co-occur in disease X, but it doesn’t reason that vomiting follows from inflammation or infection; it simply notices that the two symptoms appear together.
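To make the contrast concrete, here is a minimal sketch in Python; the disease, symptoms, and patient data are entirely invented for illustration. The symbolic half encodes the clinicians’ rule directly, while the machine-learning half only fits a statistical mapping from symptoms to diagnoses.

```python
# Hypothetical illustration: disease X, its symptoms, and the patient data are invented.
from sklearn.linear_model import LogisticRegression

# Symbolic AI: a hand-written rule supplied by clinicians.
def symbolic_diagnosis(has_rash: bool, is_vomiting: bool) -> str:
    # IF the patient has a rash AND is vomiting THEN they might have disease X.
    if has_rash and is_vomiting:
        return "possible disease X"
    return "no rule matched"

# Machine learning: no rules, just a mapping learned from labelled examples.
# Each row is a patient: [has_rash, is_vomiting]; label 1 means disease X.
X = [[1, 1], [1, 1], [1, 1], [0, 1], [1, 0], [0, 0], [0, 0]]
y = [1, 1, 1, 0, 0, 0, 0]
model = LogisticRegression().fit(X, y)

print(symbolic_diagnosis(True, True))       # rule fires: "possible disease X"
print(model.predict_proba([[1, 1]])[0][1])  # learned probability of disease X
```

The learned model can score new patients, but nothing in it explains why rash and vomiting go together; the rule explains why, but only covers what clinicians have already written down.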
The best of both worlds
Neuro-symbolic AI combines the strengths of machine learning and symbolic AI. It picks out patterns from big datasets, just like deep learning, but represents that data using a knowledge base, just like symbolic AI. This means it might be able to pick out that a rash and vomiting indicate disease X, but also use its knowledge base to reason that both symptoms stem from the same immune response.
From a technical angle, this means using representations of data, such as a graph structure, that help the system understand underlying patterns.
“It helps us distinguish between causation and correlation,” says Belle. “We can then try to work out how certain genes lead to particular kinds of characteristics.”
It’s like adding a layer of context-awareness to a model; instead of just assigning a ‘+’ for correlation or ‘-’ for the opposite, neuro-symbolic AI manipulates the data within a real-world “bubble.”
It can also handle hierarchies of information, zooming in on microscopic data and drawing links to the “bigger picture.” For example, it may help connect molecular changes to the diseases they produce.
“It gets us closer to the way someone who studies medicine might understand the relationship between genes and proteins and diseases,” says Belle.
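One way to picture that graph structure is as a small knowledge graph, in which curated biological relationships (a gene encodes a protein, a protein drives a disease) sit alongside statistical associations surfaced by a learned model. The sketch below, using the networkx library, is a toy assumption rather than real biology or any particular system.

```python
# Toy knowledge graph: the genes, proteins, and diseases are invented placeholders.
import networkx as nx

kg = nx.DiGraph()

# Curated, symbolic knowledge: typed edges with explicit meaning.
kg.add_edge("GENE_A", "PROTEIN_A", relation="encodes")
kg.add_edge("PROTEIN_A", "DISEASE_X", relation="drives")

# A statistical association surfaced by a learned model (correlation only).
kg.add_edge("GENE_B", "DISEASE_X", relation="correlates_with", weight=0.72)

def mechanistic_path(graph, gene, disease):
    """Return a gene-to-disease path backed by mechanistic edges, if one exists."""
    for path in nx.all_simple_paths(graph, gene, disease):
        edges = zip(path, path[1:])
        if all(graph[u][v]["relation"] in {"encodes", "drives"} for u, v in edges):
            return path
    return None

print(mechanistic_path(kg, "GENE_A", "DISEASE_X"))  # ['GENE_A', 'PROTEIN_A', 'DISEASE_X']
print(mechanistic_path(kg, "GENE_B", "DISEASE_X"))  # None: only a correlation, no mechanism
```

The point is not the particular library but the shape of the data: because relationships carry explicit types, the system can say that one gene-to-disease link is mechanistic while the other is merely a co-occurrence.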
To illustrate this kind of thinking, Belle refers to an all-too-familiar yet frustrating scenario for many: “If your headphones aren’t working, you’re not necessarily going to rely on memory to fix them if this has never happened to you,” he says.
“But you understand, on some principle, that they connect to your phone using bluetooth and that going into those settings is probably a good starting point. So you’re going to try and solve it based on this working understanding you have already… That’s not a data thing. It lies in the context.”
Contextualization is one neuro-symbolic approach, but these systems can take other forms too; for example, “constrained” predictions. In this scenario, a model is taught to follow certain laws so that it doesn’t produce impossible outcomes. In drug discovery, this might mean accounting for steric hindrance between two chemical groups, where they physically can’t fit together in 3D space under normal conditions.
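As a rough sketch of what a constrained prediction step could look like, the example below filters candidates from a pretend generator against a hard feasibility rule before they are scored at all. The generator, the distance measure, and the 2 Å threshold are stand-ins, not real chemistry.

```python
# Illustrative only: the "generator" and the steric constraint are stand-ins, not real chemistry.
import random

def generate_candidates(n):
    # Pretend neural generator: proposes candidates with a distance between two bulky groups.
    return [{"id": i, "group_distance_angstrom": random.uniform(0.5, 5.0)} for i in range(n)]

def satisfies_constraints(candidate):
    # Symbolic constraint: reject geometries where the bulky groups would clash.
    MIN_DISTANCE = 2.0  # assumed threshold for this toy example
    return candidate["group_distance_angstrom"] >= MIN_DISTANCE

candidates = generate_candidates(100)
feasible = [c for c in candidates if satisfies_constraints(c)]

print(f"{len(feasible)} of {len(candidates)} candidates pass the steric-feasibility check")
```

In practice, constraints can also be built into the generator itself rather than applied as a filter afterwards, but the principle is the same: the rules rule out impossible outputs that the data alone would not.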
Blurred lines are adding complexity
However, it’s not always clear whether an AI system is neuro-symbolic or not. Belle cautions that, in the last five years, there has been “some confusion” around what this manipulation and reasoning actually means – especially for LLMs. He notes how developers are taking “traces” of this reasoning and feeding them into chatbots. “It looks like they’re thinking, but they’re still really relying on retrieval to get the right answer,” he says. “That’s why we still see very silly mistakes from these models.”
He adds that “you won’t necessarily know when it’s right or when it’s wrong,” which could very well be the case for highly complicated scientific tasks: ones that require a huge amount of expertise to spot when things go awry.
An example from a few years back highlights this: “Someone had asked ChatGPT a question, ‘Who’s the daughter of Anna’s mother?’ and the chatbot came back saying it couldn’t reveal any personal information,” Belle says.
The answer, of course, is Anna. But Belle explains that “as the output of these models comes out, it’s passing through a series of gates. Some of those gates stop the model in its tracks.”
Lacking context-awareness, the model stumbles at the ‘don’t give out personal information’ gate; it doesn’t see the nuance of the question.
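A toy version of such a gate makes the failure easy to see; the keyword list and the wording below are invented for illustration and are not how any real chatbot is built.

```python
# Invented example of a blunt safety gate with no context-awareness.
def naive_privacy_gate(question: str, draft_answer: str) -> str:
    # Blocks anything that mentions family or contact details, regardless of context.
    blocked_keywords = {"daughter", "mother", "address", "phone"}
    if any(word in question.lower() for word in blocked_keywords):
        return "I can't share personal information."
    return draft_answer

print(naive_privacy_gate("Who is the daughter of Anna's mother?", "Anna"))
# -> "I can't share personal information." (the riddle is harmless, but the gate can't tell)
```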
Neuro-symbolic AI isn’t as new as you think
While neuro-symbolic AI may seem even more tech-heavy than deep learning, it’s not as new a concept as it sounds. Neuro-symbolic approaches have loosely existed since the 1990s, when a series of workshops was set up to bolster researchers’ knowledge of the area. But the term didn’t gain much traction until the likes of Gary Marcus, and Garcez and Lamb, published articles in 2020, cementing “neuro-symbolic AI” as a crucial means of cracking the restrictions around classical machine and deep learning.
Then, in 2024, it really took off. DeepMind launched AlphaGeometry, which Google describes as a “neuro-symbolic system made up of a neural language model and a symbolic deduction engine, working together to find proofs for complex geometry theorems.” A year later, adoption widened further, marked by Amazon incorporating neuro-symbolic AI into its Vulcan warehouse robot and Rufus shopping assistant.
Do we even need it?
These developments signal evolution in the AI landscape, yes, but they simultaneously pose an important question: why should we bother at all?
Deep learning tools are already alleviating NHS workloads. Ambient voice technology, for one, reduces time spent on clinical documentation by ~52% and saves 8% on appointment times, according to a 2025 study from Great Ormond Street Hospital across multiple NHS sites. And AI has already shown it can discover new drugs lightning-fast.
But the point isn’t to replace LLMs, or deep learning, Belle says. “All it’s providing is additional structures to not just control the output, but also contextualize the input, so you have better responses.”
Neuro-symbolic AI doesn’t automatically solve the problem of telling correlation from causation. But it lets us build in causal knowledge, so we can distinguish chance occurrences from those that are truly cause-and-effect.
At the core of all this is the idea that for AI to truly understand the world more like we do, it needs a sense of how things exist and relate to one another. Neuro-symbolic AI is a step toward building that foundation.
Belle concludes: “I’m not saying neuro-symbolic is the only solution. It may not be, because ultimately it still relies on the neural network element. But it could certainly help us get correct and certifiable answers, so we’re not forced to work with hallucinations and not know what these models are producing.”
For now, it's less about replacing neural networks, and more about teaching them to "think straight."