Self-driving cars are poised to disrupt the transportation industry, and a similar revolution is underway in health care. This effort seeks to engineer a new generation of “smart” medical equipment that melds scanning with interpretation and monitoring with treatment, and that merges data from disparate devices into a common, readily interpretable stream. The goal: to help clinicians better monitor patients’ health in real time and evaluate an ever-increasing stack of clinical information, thereby enabling the most informed decisions possible.
A key frontier for designing and deploying such intelligent systems is the intensive care unit (ICU), where clinicians treat patients with complicated, life-threatening conditions. Each year, some six million patients are cared for in ICUs across the U.S. These units are filled with an array of life-saving equipment, including monitors, pumps, ventilators, and drug infusers. Typically, these devices do not talk to each other, yet they churn out data at an impressive rate, producing thousands of data points a day for a single patient.
Moreover, these machines contain their own suite of alarms, generating a cacophony of beeps and blares that alert doctors and nurses to a crisis — sometimes. False alarms are commonplace in ICUs and elsewhere in hospitals, and alarm fatigue, where clinicians become desensitized to the incessant sounds, is a widespread problem that jeopardizes patient safety. In 2014, The Joint Commission established a new patient safety goal centered on improving alarm systems and reducing alarm fatigue, making innovation in this area a priority for hospitals nationwide.
Now, teams across the country are harnessing artificial intelligence to create new tools that enhance patient monitoring in ICUs and reduce information overload for clinicians. For example, researchers in states like California, Massachusetts, and Minnesota are developing systems that harmonize, integrate, and display patient data from diverse types of clinical sensors, forming a kind of digital dashboard that can be viewed at the bedside of ICU patients. Other groups, including one in New Jersey, are using machine learning to make individual devices smarter — like ventilators that can sense when their pacing is off or when complications such as pneumonia are brewing, and pumps that can monitor how much fluid has been infused into a patient and whether the flow rate needs to be changed. With the rise of such smart systems comes the capacity for enhanced prediction — such as signaling a life-threatening event, like an abnormal heart rhythm, before it happens. For critically ill patients, that advance warning could be vital.
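The prediction idea described above can be illustrated in miniature. The sketch below is purely hypothetical (the function name, thresholds, and window size are illustrative assumptions, not drawn from any of the systems mentioned): it scores a sliding window of heart-rate readings and flags a sharply rising trend before any single reading crosses an alarm limit.

```python
from statistics import mean

def early_warning(heart_rates, window=5, trend_threshold=2.0, hr_limit=130):
    """Flag a warning if the recent heart-rate trend is rising sharply,
    even before any single reading crosses the alarm limit.
    heart_rates: beats-per-minute samples, oldest first. All thresholds
    here are hypothetical, chosen only for illustration."""
    if len(heart_rates) < 2 * window:
        return False  # not enough history to judge a trend
    recent = mean(heart_rates[-window:])
    earlier = mean(heart_rates[-2 * window:-window])
    rising_fast = (recent - earlier) > trend_threshold * window
    near_limit = recent > 0.85 * hr_limit
    return rising_fast and near_limit

# A steadily climbing heart rate triggers the warning before 130 bpm is reached.
climbing = [88, 90, 93, 96, 100, 104, 109, 114, 119, 124]
steady = [88, 89, 88, 90, 89, 88, 90, 89, 88, 90]
print(early_warning(climbing))  # True: rising trend approaching the limit
print(early_warning(steady))    # False: stable vitals, no warning
```

Real systems learn such patterns from data across many signals rather than using fixed thresholds, but the payoff is the same: a warning raised by the trajectory of the data, not by a single out-of-range reading.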
In a similar vein, efforts are also underway to make other clinical tools and technologies smarter. For instance, MRI scans can be lengthy procedures that require patients to spend as long as an hour lying motionless in a narrow enclosure. Although the end result — exquisitely detailed images of the body’s inner structures — is often instrumental in planning downstream clinical care, the patient experience can be unpleasant. But what if these machines could be made more intelligent — able to acquire images more quickly, for example, or to end scans once lesions or other items of interest have been visualized? These questions are now being explored using machine learning techniques by groups across the U.S. and the world. The end results could help reduce scan time by as much as two-thirds.
CT leaves room for improvement, too. With CT scans, a major concern is the level of radiation that patients receive, with higher quality images typically requiring higher doses of radiation. But with artificial intelligence, researchers in places like California, Massachusetts, Pennsylvania, and the U.K. are devising ways to enhance CT image quality algorithmically rather than with radiation. Although the radiation dose from a single CT scan poses minimal risk, the cumulative effect of multiple scans over time may warrant concern, as may exposures in young children, who are more sensitive to radiation. The results of recent AI-based studies suggest that one day it will be possible to generate diagnostic-quality CT scans with radiation doses that are orders of magnitude lower than the levels currently used — similar to the radiation exposure on a flight from Boston to London.
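The core idea of trading computation for radiation can be sketched with a toy example. The simple mean filter below is not any research group's actual method (real AI approaches use learned models that preserve fine detail far better), but it shows the principle: noise from a simulated low-dose image can be reduced in software rather than with a higher dose.

```python
import random

def mean_filter(image):
    """Smooth a 2D image (list of lists) with a 3x3 mean filter,
    a crude stand-in for the learned denoisers used in AI research."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Average the pixel with its in-bounds neighbors.
            patch = [image[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(patch) / len(patch)
    return out

def mse(a, b):
    """Mean squared error between two images of the same size."""
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (len(a) * len(a[0]))

random.seed(0)
clean = [[50.0] * 16 for _ in range(16)]  # a flat synthetic "phantom" image
noisy = [[v + random.gauss(0, 10) for v in row] for row in clean]  # simulated low-dose noise
denoised = mean_filter(noisy)
print(mse(clean, noisy) > mse(clean, denoised))  # True: filtering recovers quality
```

Averaging blurs real anatomy along with the noise, which is exactly why researchers turn to trained neural networks that learn to distinguish the two.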