The same devices used to take selfies and type tweets are being repurposed and commercialized to provide quick access to the information needed to monitor a patient’s health. Heart rate can be measured by pressing your fingertip against the phone’s camera lens. A microphone placed next to the bed can screen for sleep apnea. Even speakers are tapped to monitor breathing using sonar technology.
In the best of this new world, data is transmitted remotely to medical professionals for the convenience and comfort of patients or, in some cases, to support clinicians without expensive hardware.
But experts say the use of smartphones as a diagnostic tool is still a work in progress. While doctors and their patients have had some real-world success in using cell phones as medical devices, the overall potential remains unrealized and uncertain.
Smartphones are equipped with sensors capable of monitoring a patient’s vital signs. They can help assess people for concussions, watch for atrial fibrillation, and conduct mental health screenings, to name just a few uses for emerging apps.
Companies and researchers eager to find medical applications for smartphone technology are taking advantage of modern phones’ built-in cameras and light sensors; microphones; accelerometers, which detect body movement; gyroscopes; and even speakers. The apps then use artificial intelligence software to analyze the collected images and sounds, creating an easy link between patient and doctor. The more than 350,000 digital health products available in app stores, according to a report by Grand View Research, attest to the market’s size and earning potential.
“It’s very difficult to put a device in a patient’s home or hospital, but everyone is just walking around with a phone that has a network connection,” said Dr. Andrew Gostine, CEO of sensor networking company Artisight. Most Americans own a smartphone, including more than 60 percent of those 65 and older, up from 13 percent a decade ago. The covid-19 pandemic has also made people more comfortable with virtual care.
Some of these products have sought FDA approval to be marketed as medical devices. That way, if patients have to pay to use the software, health insurers are more likely to cover at least some of the costs. Other products are designated as exempt from this regulatory process and fall into the same clinical classification as Band-Aids. But the way the agency handles artificial intelligence and machine learning-based medical devices is still being tweaked to reflect the software’s adaptability.
Ensuring accuracy and clinical validation is critical to gaining support from healthcare providers. Dr. Eugene Yang, a professor of medicine at the University of Washington, said many tools still need fine-tuning. Currently, Yang is testing non-contact measurements of blood pressure, heart rate and oxygen saturation collected remotely through footage of a patient’s face captured by a Zoom camera.
Judging these new technologies is difficult because they rely on algorithms built with machine learning and artificial intelligence to collect data, rather than the physical tools hospitals typically use. So researchers cannot make “like-for-like comparisons” against medical industry standards, Yang said. Failure to establish that kind of assurance undermines the technology’s ultimate goals of reducing costs and expanding access, since physicians still have to verify the results.
“False positives and false negatives lead to more testing and more costs to the healthcare system,” he said.
Big tech companies like Google have invested heavily in researching the technology to meet the needs of clinicians and home caregivers, as well as consumers. Currently, in the Google Fit app, users can check their heart rate by placing their finger on the rear camera lens, or use the front camera to track their breathing rate.
“If you take the sensor out of a phone and out of a clinical device, they’re probably the same thing,” said Shwetak Patel, Google’s director of health technologies and a professor of electrical and computer engineering at the University of Washington.
Google’s research uses machine learning and computer vision, an area of artificial intelligence based on information from visual inputs such as videos or images. So, instead of using a blood pressure cuff, the algorithm could account for slight visual changes in the body, for example, as a proxy and biosignal of a patient’s blood pressure, Patel said.
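The visual changes described here are the basis of remote photoplethysmography: with each heartbeat, blood volume in the skin rises and falls, subtly modulating how much light the skin reflects, so averaging pixel brightness over a patch of skin across video frames yields a pulse waveform. The sketch below illustrates the core idea on a synthetic signal; a real pipeline would add face tracking, bandpass filtering, and motion rejection, and the numbers here are purely illustrative:

```python
import math

def estimate_heart_rate(brightness, fps):
    """Estimate heart rate (beats/min) from a per-frame brightness signal.

    brightness: mean pixel intensity of the skin region in each video frame
    (e.g., a fingertip over the lens, or a patch of the face).
    fps: video frame rate in frames per second.
    """
    # Detrend: subtract the mean so the pulsatile component oscillates around 0.
    mean = sum(brightness) / len(brightness)
    signal = [b - mean for b in brightness]
    # Count upward zero crossings; each one marks a cardiac cycle.
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
    duration_s = len(brightness) / fps
    return crossings / duration_s * 60.0

# Synthetic demo: a 1.2 Hz pulse (72 bpm) sampled at 30 frames/s for 10 s.
fps = 30
frames = [100 + 2.0 * math.sin(2 * math.pi * 1.2 * t / fps + 1.0)
          for t in range(fps * 10)]
print(round(estimate_heart_rate(frames, fps)))  # → 72
```

Zero-crossing counting stands in here for the spectral or peak-detection methods production systems actually use; the point is only that a camera’s brightness trace can serve as a proxy biosignal.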
Google is also studying the effectiveness of built-in microphones to detect heartbeats and murmurs, and the effectiveness of using cameras to protect vision by screening for diabetic eye disease, according to information released by Google last year.
The tech giant recently acquired Sound Life Sciences, a Seattle startup with an FDA-approved sonar technology application. It uses the smart device’s speaker to bounce inaudible pulses off the patient’s body to recognize movement and monitor breathing.
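Active sonar of this kind generally works by emitting a near-ultrasonic tone burst from the speaker and cross-correlating the microphone capture against the emitted pulse to find the echo’s delay, which maps to distance; chest movement during breathing then appears as small periodic changes in that delay. Below is a simplified sketch on simulated audio (no real audio I/O; the 18 kHz pulse, 48 kHz sample rate, and delay values are illustrative assumptions, not details of Sound Life Sciences’ system):

```python
import math

def cross_correlate_delay(pulse, recording):
    """Return the sample offset where the emitted pulse best matches the echo."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recording) - len(pulse) + 1):
        score = sum(p * recording[lag + i] for i, p in enumerate(pulse))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

RATE = 48_000           # audio samples per second
SPEED_OF_SOUND = 343.0  # meters per second

# An 18 kHz tone burst -- near the top of human hearing, effectively inaudible.
pulse = [math.sin(2 * math.pi * 18_000 * t / RATE) for t in range(96)]

# Simulated microphone capture: silence, then a faint echo of the pulse.
echo_delay = 280  # samples between emission and the echo's arrival
recording = [0.0] * echo_delay + [0.3 * s for s in pulse] + [0.0] * 100

lag = cross_correlate_delay(pulse, recording)
distance_m = lag / RATE * SPEED_OF_SOUND / 2  # sound travels out and back
print(lag, round(distance_m, 2))  # → 280 1.0
```

Tracking how that echo delay oscillates over successive pulses is what turns a speaker and microphone into a contactless breathing monitor.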
Israel-based Binah.ai is another company that uses smartphone cameras to calculate vital signs. Its software looks at the area around the eye, where the skin is slightly thinner, and analyzes light bouncing from blood vessels back into the lens. Company spokeswoman Mona Popilian-Yona said the company is wrapping up a U.S. clinical trial and is marketing its health app directly to insurance companies and other health companies.
The apps’ reach extends even to specialties like optometry and mental health:
Canary Speech uses the same underlying technology as Amazon’s Alexa to analyze patients’ voices, captured through a microphone, for signs of mental health conditions. The company’s chief executive, Henry O’Connell, said the software can be integrated into telehealth appointments, letting clinicians screen for anxiety and depression using a library of vocal biomarkers and predictive analytics.
Last year, Australia-based ResApp Health won FDA approval for an iPhone app that screens for moderate-to-severe obstructive sleep apnea by listening to breathing and snoring. SleepCheckRx, which requires a prescription, is minimally invasive compared to sleep studies currently used to diagnose sleep apnea. These can cost thousands of dollars and require a series of tests.
Brightlamp’s Reflex app is a clinical decision support tool to help manage concussions and vision rehabilitation, among other things. Using an iPad or iPhone’s camera, the mobile app measures how a person’s pupils respond to changes in light. Analyzed by machine learning, the images provide practitioners with data points for assessing patients. Brightlamp is sold directly to healthcare providers and used in more than 230 clinics. Clinicians pay a standard annual fee of $400 per account, a cost that is not currently covered by insurance. The Department of Defense is conducting a clinical trial using Reflex.
In some cases, such as with the Reflex app, the data is processed directly on the phone rather than in the cloud, said Brightlamp CEO Kurtis Sluss. Keeping everything on the device sidesteps privacy concerns: streaming the data elsewhere would require patient consent.
But algorithms need to be trained and tested by collecting large amounts of data, which is an ongoing process.
For example, researchers have found that some computer vision applications, such as heart rate or blood pressure monitoring, can be less accurate for people with darker skin. Research into better solutions is ongoing.
Small algorithmic glitches can also produce false positives that scare patients and deter widespread adoption. For example, Apple’s new car-crash detection feature, available on both the latest iPhone and Apple Watch, has been set off by roller coasters, automatically dialing 911.
“We’re not there yet,” Yang said. “That’s the bottom line.”