New AI Tool Makes Medical Imaging Process 90% More Efficient

When doctors analyze a medical scan of an organ or area in the body, each part of the image has to be assigned an anatomical label. If the brain is under scrutiny, for instance, its different parts have to be labeled as such, pixel by pixel: cerebral cortex, brain stem, cerebellum and so on. The process, called medical image segmentation, guides diagnosis, surgery planning and research.

In the days before artificial intelligence (AI) and machine learning (ML), clinicians performed this crucial yet painstaking and time-consuming task by hand. Over the past decade, U-Nets - a type of AI architecture specifically designed for medical image segmentation - have been the go-to instead. However, U-Nets require large amounts of data and computational resources to train.

"For large and/or 3D images, these demands are costly," said Kushal Vyas, a Rice electrical and computer engineering doctoral student and first author on a paper presented at the International Conference on Medical Image Computing and Computer Assisted Intervention, or MICCAI, the leading conference in the field. "In this study, we proposed MetaSeg, a completely new way of performing image segmentation."

In experiments using 2D and 3D brain magnetic resonance imaging (MRI) data, MetaSeg was shown to achieve the same segmentation performance as U-Nets while needing 90% fewer parameters - the key variables AI/ML models derive from training data and use to identify patterns and make predictions.

The study, titled "Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation," won the best paper award at MICCAI, selected from a pool of over 1,000 accepted submissions.

"Instead of U-Nets, MetaSeg leverages implicit neural representations - a neural network framework that has hitherto not been thought useful or explored for image segmentation," Vyas said.

An implicit neural representation (INR) is a neural network that encodes a medical image as a mathematical function, mapping the coordinates of each pixel in a 2D image, or each voxel in a 3D one, to its signal value (color, brightness, etc.).
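The coordinate-to-signal idea can be sketched in a few lines of NumPy. This is a toy illustration, not the architecture from the MetaSeg paper: the two-layer network, the sinusoidal coordinate features and all sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, n_freq=8):
    """Map (x, y) coordinates in [0, 1]^2 to sinusoidal features,
    a common trick that helps coordinate networks fit fine detail."""
    freqs = 2.0 ** np.arange(n_freq)           # geometric frequency ladder
    proj = coords[:, :, None] * freqs          # (N, 2, n_freq)
    proj = proj.reshape(coords.shape[0], -1)   # (N, 2 * n_freq)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

class TinyINR:
    """f_theta: pixel coordinate -> signal value (e.g. MRI intensity).
    The weights theta ARE the image: querying the net reproduces it."""
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.normal(0, 0.5, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords):
        h = np.tanh(fourier_features(coords) @ self.w1 + self.b1)
        return h @ self.w2 + self.b2           # predicted intensity per pixel

# Query the INR at every pixel of a 4x4 grid: no pixel array is stored,
# the network itself is the (here untrained, random) image.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (16, 2)
inr = TinyINR(in_dim=2 * 2 * 8)   # 2 coords x (sin, cos) x 8 frequencies
intensities = inr(coords)         # (16, 1): one signal value per pixel
```

Fitting an INR to a real image would mean optimizing the weights so the predicted intensities match the observed pixels, which is exactly the "one network per signal" specificity the next paragraph describes.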

While INRs offer a very detailed yet compact way to represent information, they are also highly specific, meaning they typically only work well for the single signal or image they were trained on: an INR trained on one brain MRI cannot generalize rules about what different parts of the brain look like, so if provided with an image of a different brain, it would typically falter.

"INRs have been used in the computer vision and medical imaging communities for tasks such as 3D scene reconstruction and signal compression, which only require modeling one signal at a time," Vyas said. "However, it was not obvious before MetaSeg how to use them for tasks such as segmentation, which require learning patterns over many signals."

To make INRs useful for medical image segmentation, the researchers taught them to predict both the signal values and the segmentation labels for a given image. To do so, they used meta-learning - literally, "learning to learn" - an AI training strategy that helps models rapidly adapt to new information.

"We prime the INR model parameters in such a way so that they are further optimized on an unseen image at test time, which enables the model to decode the image features into accurate labels," Vyas said.

This special training allows the INRs to not only quickly adjust themselves to match the pixels or voxels of a previously unseen medical image but to then also decode its labels, instantly predicting where the outlines for different anatomical regions should go.
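The test-time procedure described above can be sketched as follows. This is a heavily simplified stand-in, not the MetaSeg method: a linear "backbone" with two heads replaces the real INR, the "meta-learned" initialization is just random weights, and the toy image is random data. It only illustrates the flow: fit the signal head to the new image's pixels by gradient descent, then read segmentation labels from the label head that shares the adapted features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "unseen image": N pixel coordinates with observed intensities.
N, in_dim, hidden, n_classes = 64, 2, 16, 3
coords = rng.uniform(0, 1, (N, in_dim))
pixels = rng.uniform(0, 1, (N, 1))

# Stand-in for the meta-learned initialization (random in this sketch).
W_shared = rng.normal(0, 0.1, (in_dim, hidden))   # shared feature layer
w_sig = rng.normal(0, 0.1, (hidden, 1))           # signal (intensity) head
W_lab = rng.normal(0, 0.1, (hidden, n_classes))   # segmentation label head

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Test-time adaptation: fit ONLY the pixels (self-supervised), so no
# ground-truth labels for the new image are ever needed.
losses, lr = [], 0.1
for step in range(100):
    h = coords @ W_shared            # shared features per pixel
    pred = h @ w_sig                 # reconstructed intensities
    err = pred - pixels
    losses.append(mse(pred, pixels))
    # Manual gradient descent on the reconstruction loss.
    grad_w_sig = h.T @ err / N
    grad_W_shared = coords.T @ (err @ w_sig.T) / N
    w_sig -= lr * grad_w_sig
    W_shared -= lr * grad_W_shared

# After fitting the pixels, the label head decodes the adapted features
# into one anatomical class per pixel.
h = coords @ W_shared
labels = np.argmax(h @ W_lab, axis=1)   # shape (N,): a label per pixel
```

The key design point mirrored here is that adaptation is driven purely by reconstructing the image itself; the label head rides along on the shared features, which is what a good meta-learned initialization is meant to make possible.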

"MetaSeg offers a fresh, scalable perspective to the field of medical image segmentation that has been dominated for a decade by U-Nets," said Guha Balakrishnan, assistant professor of electrical and computer engineering at Rice and a member of the university’s Ken Kennedy Institute. "Our research results promise to make medical image segmentation far more cost-effective while delivering top performance."

Vyas K, Veeraraghavan A, Balakrishnan G. Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation. In: Gee JC, et al. (eds) Medical Image Computing and Computer Assisted Intervention - MICCAI 2025. Lecture Notes in Computer Science, vol 15962. Springer, Cham; 2025. doi: 10.1007/978-3-032-04947-6_19
