In medicine, an image on a computer screen is more than just a picture; it comprises millions, even billions, of data points that can be systematically scrutinized and mined for connections to health and disease. Understanding how these data points vary within and between images — and patients — stretches the limits of human cognition. Medical images are acquired by clinicians across the medical spectrum, in fields including radiology, pathology, dermatology, ophthalmology, and some surgical specialties. Now, artificial intelligence, or AI, is transforming how these digital images are analyzed and interpreted in various spheres of health care. With advances in computer vision and machine learning technology, a new era of automated disease detection is dawning, providing clinicians with tools to more rapidly — and in some cases, more accurately — diagnose, characterize, and predict the course of disease.
This revolution is expanding to encompass an increasingly important source of clinical images: smartphones. Although smartphone photos are generally less standardized and more variable in quality than conventional clinical images, a growing array of AI-based tools seeks to harness these photos for a variety of medical purposes, including preventive care, chronic disease monitoring, and the delivery of expert care to underserved areas. Indeed, this area is likely to grow substantially: by 2021, there will be some six billion smartphone subscribers worldwide.
One promising area of research that seeks to harness this near-ubiquitous source of images involves the diagnosis of rare genetic disorders in young children. Since a significant fraction of these conditions include craniofacial abnormalities, a team of researchers in the U.K. recently developed facial-recognition software that can analyze ordinary photos for signs of developmental disease. The tool analyzes discrete features of the jaw, mouth, nose, eyes, and brow and then compares them to a database of over 90 disorders. These data are then used to construct a list of possible diagnoses.
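The core matching step can be pictured as comparing a vector of photo-derived facial measurements against reference profiles for each disorder and ranking the disorders by similarity. The sketch below illustrates that idea only; the disorder names, feature values, and use of simple Euclidean distance are all invented for illustration and are not the U.K. team's actual method.

```python
import math

# Hypothetical reference database: each disorder is summarized by an average
# vector of facial measurements (e.g., jaw width, eye spacing, brow height).
# All names and numbers here are illustrative placeholders.
DISORDER_PROFILES = {
    "Disorder A": [0.42, 0.31, 0.55],
    "Disorder B": [0.40, 0.29, 0.60],
    "Disorder C": [0.70, 0.50, 0.20],
}

def rank_diagnoses(patient_features):
    """Return disorders sorted from most to least similar to the patient's
    photo-derived feature vector, using Euclidean distance as a stand-in
    similarity measure."""
    def distance(profile):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(patient_features, profile)))
    return sorted(DISORDER_PROFILES, key=lambda name: distance(DISORDER_PROFILES[name]))

print(rank_diagnoses([0.41, 0.30, 0.57]))
```

In practice, a ranked list like this would be one input to a clinician's workup rather than a diagnosis in itself.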
The power of this approach lies not only in its ability to aid in diagnosis — indeed, the majority of children with a genetic disorder never receive a definitive diagnosis — but also in revealing clusters of unrelated patients who suffer from the same condition, especially if the cause of their illness is unknown. These individuals could be candidates for genome sequencing, thereby helping to unearth the genetic underpinnings of their disorder.
Another exciting application of AI to smartphone images emerged last year from researchers in Palo Alto, California. The team developed an algorithm that was trained using more than 1 million images to detect skin cancer. Each year, there are more than 5 million new cases of skin cancer in the U.S. For melanoma, the deadliest form of the disease, early diagnosis is crucial. If caught early, the 5-year survival rate is over 99 percent, yet that figure plunges to around 14 percent if the disease goes undiagnosed until its most advanced stages.
By analyzing photographs of moles and other skin lesions — including photos taken with everyday devices, such as smartphones — the Palo Alto team’s software was able to distinguish benign skin lesions from malignant ones, including melanomas. Notably, the method’s diagnostic accuracy was comparable to that of 21 board-certified dermatologists. If deployed widely on mobile devices, this skin cancer detection method could help dramatically expand access to diagnostic expertise in dermatology.
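At its simplest, such a classifier maps a lesion image to a probability of malignancy. The team's actual system learns its features directly from over a million images with deep learning; the sketch below is a greatly simplified stand-in that scores hand-picked "ABCD"-style features (asymmetry, border irregularity, color variation, diameter) with a fixed logistic model. Every weight and feature value is invented for illustration.

```python
import math

# Illustrative logistic scoring of hand-crafted lesion features.
# Weights and bias are made up; a real system would learn them from data.
WEIGHTS = {"asymmetry": 2.0, "border": 1.5, "color": 1.8, "diameter_mm": 0.3}
BIAS = -4.0

def malignancy_probability(lesion):
    """Combine feature values into a single 0-1 risk score via a sigmoid."""
    score = BIAS + sum(WEIGHTS[k] * lesion[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

benign = {"asymmetry": 0.1, "border": 0.2, "color": 0.3, "diameter_mm": 3.0}
suspicious = {"asymmetry": 0.9, "border": 0.8, "color": 0.9, "diameter_mm": 8.0}
print(malignancy_probability(benign))      # low score
print(malignancy_probability(suspicious))  # high score
```

The deep-learning approach replaces the hand-picked features and fixed weights here with representations learned end to end from labeled images, which is what allows it to approach specialist-level accuracy.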
Additional efforts are underway to put low-cost health care tools into the hands of those who need them. For example, scientists in Israel and the U.S. are using computer vision and machine learning to develop an inexpensive, rapid test for cervical cancer. In rural health care clinics in Africa and other low-resource regions, women typically lack access to first-line cervical cancer screening methods, such as Pap and HPV tests. And yet, of the nearly 300,000 women who die each year of cervical cancer, the majority live in low-resource countries.
The researchers’ approach harnesses a decades-old discovery that acetic acid — essentially, vinegar — turns precancerous lesions white when applied to the cervix. Although clinicians can visualize these lesions directly, accurate diagnoses require significant expertise and training. Armed with a small device that clips on to a smartphone to capture photos of the cervix — and AI tools that are then applied to those photos — the team is now working to overcome this challenge and broaden the reach of early detection efforts in cervical cancer.
For more information about Dr. Shafiee’s research, please contact Partners HealthCare Innovation.