A Smartphone's Camera and Flash Could Help People Measure Blood Oxygen Levels at Home

First, pause and take a deep breath.

When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transport throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people maintain an oxygen saturation of at least 95% almost all the time.

Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.

In a clinic, doctors monitor oxygen saturation using pulse oximeters - those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.

In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.

The technique involves participants placing their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
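As a rough illustration of what "correctly predicted whether the subject had low blood oxygen levels" means, the sketch below thresholds both the reference and predicted saturation values at the 90% cutoff mentioned earlier and measures how often they agree. This is a minimal sketch for readers; the function name, arrays, and example numbers are illustrative and not taken from the study.

```python
import numpy as np

def low_spo2_accuracy(true_spo2, pred_spo2, threshold=90.0):
    """Fraction of readings where prediction and reference agree on low vs. normal SpO2."""
    true_low = np.asarray(true_spo2) < threshold
    pred_low = np.asarray(pred_spo2) < threshold
    return float(np.mean(true_low == pred_low))

# Illustrative values only: 4 of 5 readings agree, i.e. 80% accuracy.
print(low_spo2_accuracy([88, 95, 97, 85, 92], [89, 96, 91, 93, 90]))
```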

The team published these results Sept. 19 in npj Digital Medicine.

"Other smartphone apps that do this were developed by asking people to hold their breath. But people get very uncomfortable and have to breathe after a minute or so, and that's before their blood-oxygen levels have gone down far enough to represent the full range of clinically relevant data," said co-lead author Jason Hoffman, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "With our test, we're able to gather 15 minutes of data from each subject. Our data shows that smartphones could work well right in the critical threshold range."

Another benefit of measuring blood oxygen levels on a smartphone is that almost everyone has one.

"This way you could have multiple measurements with your own device at either no cost or low cost," said co-author Dr. Matthew Thompson, professor of family medicine in the UW School of Medicine. "In an ideal world, this information could be seamlessly transmitted to a doctor's office. This would be really beneficial for telemedicine appointments or for triage nurses to be able to quickly determine whether patients need to go to the emergency department or if they can continue to rest at home and make an appointment with their primary care provider later."

The team recruited six participants ranging in age from 20 to 34. Three identified as female, three identified as male. One participant identified as being African American, while the rest identified as being Caucasian.

To gather data to train and test the algorithm, the researchers had each participant wear a standard pulse oximeter on one finger and place another finger on the same hand over a smartphone's camera and flash. Each participant wore this same setup on both hands simultaneously.

"The camera is recording a video: Every time your heart beats, fresh blood flows through the part illuminated by the flash," said senior author Edward Wang, who started this project as a UW doctoral student studying electrical and computer engineering and is now an assistant professor at UC San Diego's Design Lab and the Department of Electrical and Computer Engineering.

"The camera records how much that blood absorbs the light from the flash in each of the three color channels it measures: red, green and blue," said Wang, who also directs the UC San Diego DigiHealth Lab. "Then we can feed those intensity measurements into our deep-learning model."

Each participant breathed in a controlled mixture of oxygen and nitrogen to slowly reduce oxygen levels. The process took about 15 minutes. Across all six participants, the team acquired more than 10,000 blood oxygen level readings between 61% and 100%.

The researchers used data from four of the participants to train a deep learning algorithm to pull out the blood oxygen levels. The remainder of the data was used to validate the method and then test it to see how well it performed on new subjects.
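A hedged sketch of that subject-wise split follows: train on four participants and hold the others out entirely, so the model is always evaluated on people it has never seen. The array names and shapes are assumptions for illustration, not the released code.

```python
import numpy as np

def split_by_subject(windows, spo2, subject_ids, train_subjects):
    """windows: (N, T, 3) intensity windows; spo2: (N,) labels; subject_ids: (N,)."""
    train = np.isin(subject_ids, train_subjects)
    return (windows[train], spo2[train]), (windows[~train], spo2[~train])

# Example: subjects 0-3 train the model; subjects 4 and 5 are never seen during training.
# (train_x, train_y), (held_out_x, held_out_y) = split_by_subject(X, y, ids, [0, 1, 2, 3])
```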

"Smartphone light can get scattered by all these other components in your finger, which means there's a lot of noise in the data that we're looking at," said co-lead author Varun Viswanath, a UW alumnus who is now a doctoral student advised by Wang at UC San Diego. "Deep learning is a really helpful technique here because it can see these really complex and nuanced features and helps you find patterns that you wouldn't otherwise be able to see."

The team hopes to continue this research by testing the algorithm on more people.

"One of our subjects had thick calluses on their fingers, which made it harder for our algorithm to accurately determine their blood oxygen levels," Hoffman said. "If we were to expand this study to more subjects, we would likely see more people with calluses and more people with different skin tones. Then we could potentially have an algorithm with enough complexity to be able to better model all these differences."

But, the researchers said, this is a good first step toward developing biomedical devices that are aided by machine learning.

"It's so important to do a study like this," Wang said. "Traditional medical devices go through rigorous testing. But computer science research is still just starting to dig its teeth into using machine learning for biomedical device development and we're all still learning. By forcing ourselves to be rigorous, we're forcing ourselves to learn how to do things right."

Hoffman JS, Viswanath VK, Tian C, et al.
Smartphone camera oximetry in an induced hypoxemia study.
npj Digit Med. 2022;5:146. doi: 10.1038/s41746-022-00665-y
