FastGlioma: AI Can Detect Brain Tumor Infiltration in Under 10 Seconds

December 3, 2024

Scientists at the University of Michigan (MI, U.S.) developed “FastGlioma”, a deep learning visual foundation model that can detect brain tumor cells in under 10 seconds.

The AI was trained using 11,462 whole-slide images (split into 4 million smaller patches) to look for infiltrating tumor cells from freshly resected samples.

The group hopes that this foundational tool could be implemented during surgery to easily identify residual tumor cells, helping surgeons to completely remove brain tumors from patients.

Brain Tumors Are Tricky to Get Rid of

Because few anticancer drugs can cross the blood-brain barrier, effectively treating brain tumor patients is a notorious challenge.

While other cancer areas move towards an age of precision medicine, charting the treatment pathway for brain tumor patients still heavily relies on surgical resection, sometimes supplemented with radiotherapy.

Nonetheless, modern advancements have radically improved surgical outcomes for brain tumor patients. For example, intraoperative magnetic resonance imaging (MRI) or computed tomography (CT), alongside fluorescence-guided techniques that use tumor-labeling dyes, help surgeons visualize tumor margins, clearly delineating where tumor cells reside.

However, even with these advanced imaging techniques, it is easy for surgeons to miss some leftover malignant cells. Though there may only be a few left behind, any lingering cancer cells could quickly proliferate (multiply) and potentially mutate, causing the tumor to grow back and, more concerningly, become resistant to further treatment.

The speed of MRI and CT analysis is also limited, since the scans must be interpreted manually – because of this, information from these scans is not readily available during the operation itself. In addition, some tumor cells can appear almost indistinguishable from normal cells, since there may be only subtle changes to their morphology, i.e., physical appearance.

In this way, AI could provide a valuable solution, offering an intraoperative approach that rapidly analyzes patient samples and detects any remaining tumor cells. This, in turn, could enable doctors to remove tumors completely, significantly reducing the risk of recurrence.

Putting the AI in BrAIn Tumor Detection

With this in mind, a team of scientists (directed by senior author Todd Hollon) at the University of Michigan sought the help of AI, ultimately developing a model that can rapidly and accurately identify tumor cells at the sub-micrometer scale: FastGlioma.

FastGlioma analyzes images taken from patient samples using a technique known as stimulated Raman histology (SRH) – essentially, SRH leverages the natural molecular vibrations of chemical bonds, such as those in proteins and lipids within cells and organelles, to generate detailed molecular signatures that are translated into high-resolution images.

Because tumor cells are more metabolically active, they have higher levels of proteins and nucleic acids, allowing SRH to distinguish between healthy and cancerous cells.

Crucially, SRH provides scientists with almost instantaneous imaging at a very high resolution without the need for stains, labels or dyes that slow down the process.

The team utilized 11,462 SRH images from 3,000 patients to train the AI to identify key features associated with brain tumors. By leveraging this large and diverse dataset, they created a foundational model – a versatile framework that can undergo additional specialized training for different contexts.

The Nitty-Gritty Science

So, how exactly does the AI work, and how did they go about building it?

The researchers based their model on a vision transformer architecture, a type of deep learning technique that breaks images into small patches (known as tokens) and looks at the relationships between them.

More specifically, they developed a hierarchical self-supervised vision transformer model.

But what does that actually mean?

The structure can be broken down into some core components:

Whole-slide images are first split into subsets of images - patches, also known as tokens. In vision transformer models, these patches are normally flattened from 2D structures into 1D vectors, so that the model processes these sections as if they were sequences of text.
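To make this concrete, here is a minimal sketch of standard vision-transformer tokenization – splitting an image into non-overlapping patches and flattening each 2D patch into a 1D vector. This is an illustrative toy example (the function name, toy image, and patch size are our own), not FastGlioma's actual preprocessing code.

```python
import numpy as np

def tokenize_image(image: np.ndarray, patch_size: int) -> np.ndarray:
    """Split a square 2D image into non-overlapping patches and
    flatten each patch into a 1D token vector."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    tokens = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patch = image[i:i + patch_size, j:j + patch_size]
            tokens.append(patch.flatten())  # 2D patch -> 1D vector
    return np.stack(tokens)  # shape: (num_patches, patch_size**2)

# A toy 8x8 "slide" cut into 4x4 patches yields 4 tokens of length 16.
image = np.arange(64, dtype=float).reshape(8, 8)
tokens = tokenize_image(image, patch_size=4)
print(tokens.shape)  # (4, 16)
```

The transformer then treats these token vectors the way a language model treats a sequence of words – which is exactly why flattening becomes a problem at whole-slide scale, as described next.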

However, the whole-slide images FastGlioma works with are too large to flatten this way – doing so would require a huge amount of computational memory.

To overcome this, the researchers used a hierarchical discriminative tokenization approach. In this paradigm, patches are related (positive pairs) if they come from the same slide or patient and are considered unrelated (negative pairs) if their origins are not connected.

The model discriminates between the data at three hierarchical levels: patch-level discrimination, slide-level discrimination and patient-level discrimination.

This hierarchical approach generates tokenized patches, helping to break down large whole-slide images into manageable units that the transformer can process.
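The pairing logic behind this hierarchy can be sketched in a few lines. The data structure and function names below are hypothetical stand-ins chosen for illustration; they simply encode the rule described above – patches sharing a slide or a patient form positive pairs, everything else is negative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Patch:
    patch_id: str
    slide_id: str
    patient_id: str

def pair_label(a: Patch, b: Patch) -> str:
    """Assign a hierarchical relationship to a pair of patches:
    same slide -> slide-level positive; same patient (different
    slide) -> patient-level positive; otherwise negative."""
    if a.slide_id == b.slide_id:
        return "positive (slide-level)"
    if a.patient_id == b.patient_id:
        return "positive (patient-level)"
    return "negative"

p1 = Patch("p1", "slideA", "pt1")
p2 = Patch("p2", "slideA", "pt1")  # same slide
p3 = Patch("p3", "slideB", "pt1")  # same patient, different slide
p4 = Patch("p4", "slideC", "pt2")  # unrelated
print(pair_label(p1, p2))  # positive (slide-level)
print(pair_label(p1, p3))  # positive (patient-level)
print(pair_label(p1, p4))  # negative
```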

Next, these tokens are split, cropped and masked. In the first stage (splitting), tokenized patches are split into two mutually exclusive sets - i.e., the sets do not overlap.

In the cropping stage, the model zooms in and out of patches, introducing spatial variation so that the AI can understand relationships between different regions of a slide.

Finally, these crops undergo masking, where between 10-80% of the patches in each crop are randomly hidden (masked), encouraging the machine to infer missing information.
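The masking step in particular is easy to illustrate. Below is a minimal sketch (our own toy implementation, not the paper's code) that randomly hides a chosen fraction of token vectors – within the 10–80% range mentioned above – by zeroing them out, which is what forces the model to infer the missing content from context.

```python
import numpy as np

def mask_tokens(tokens: np.ndarray, mask_ratio: float,
                rng: np.random.Generator):
    """Randomly hide a fraction of token vectors by zeroing them.
    Returns the masked copy and a boolean mask marking hidden rows."""
    n = len(tokens)
    num_masked = int(round(n * mask_ratio))
    idx = rng.choice(n, size=num_masked, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    masked = tokens.copy()
    masked[mask] = 0.0  # hidden tokens carry no information
    return masked, mask

rng = np.random.default_rng(0)
tokens = np.ones((10, 16))  # 10 toy tokens of length 16
masked, mask = mask_tokens(tokens, mask_ratio=0.5, rng=rng)
print(mask.sum())  # 5 of 10 tokens are hidden
```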

These different transformations generate two unique ‘views’ of the data, which are then passed to a Siamese architecture for self-supervised learning.

In this stage, the machine compares the two views of the same whole-slide image and learns to detect patterns that can ultimately help identify tumor cells.
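The core idea of a Siamese comparison can be sketched with a simple similarity-based objective. This is a deliberately simplified stand-in (cosine similarity between two branch embeddings), not FastGlioma's actual loss function: the point is only that the loss shrinks as the embeddings of the two views of the same slide are pulled together.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def siamese_loss(view1_emb: np.ndarray, view2_emb: np.ndarray) -> float:
    """Toy self-supervised objective: the loss falls as the two
    branches' embeddings of the same slide become more similar."""
    return 1.0 - cosine_similarity(view1_emb, view2_emb)

# Identical view embeddings -> zero loss; orthogonal ones -> loss of 1.
v = np.array([1.0, 0.0])
print(siamese_loss(v, v))                     # 0.0
print(siamese_loss(v, np.array([0.0, 1.0])))  # 1.0
```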

FastGlioma’s Striking Performance Could Guide the Future of Surgery

The results of the study were impressive: FastGlioma identified residual tumor tissue with 92% accuracy when using full-resolution SRH images (available in 100 seconds). Moreover, even when lower-resolution images (obtainable in only 10 seconds) were used, the model still achieved 90% accuracy.

The speed and accuracy of FastGlioma have exciting implications for surgical workflows; by detecting tumor infiltration in freshly resected samples, the model could help surgeons achieve complete resection in brain tumor patients more reliably.

This has the potential to reduce cancer recurrence rates significantly. More broadly, such a foundational model could be adapted to different patient populations worldwide, or perhaps trained to detect other tumor types – expanding the potential applications of FastGlioma.