[Photo: radiological images on a computer screen. Credit: sfam_photo / Shutterstock.]

Stanford Medicine Scope - June 22, 2016 - by Jennifer Huber

Specialized electronic circuits called graphics processing units, or GPUs, are at the heart of modern mobile phones, personal computers and gaming consoles. By running multiple GPUs in concert, researchers can solve previously elusive image processing problems. For example, Google and Facebook have both developed extremely accurate facial recognition software using these techniques.

GPUs are also crucial to radiologists, because they can rapidly process large medical imaging datasets from CT, MRI, ultrasound and even conventional x-rays.

Now some radiology groups and technology companies are combining multiple GPUs with artificial intelligence (AI) algorithms to help improve radiology care. Simply put, an AI program performs tasks that normally require human intelligence. In this case, AI algorithms can be trained to recognize and interpret subtle differences in medical images.

Stanford researchers have used machine learning for many years to analyze medical images, computationally extracting features that predict something about the patient, much as a radiologist would. The use of deep learning algorithms, a newer class of artificial intelligence techniques, is more recent, however. Sandy Napel, PhD, a professor of radiology, explains:

These deep learning paradigms are a deeply layered set of connections, not unlike the human brain, that are trained by giving them a massive amount of data with known truth. They basically iterate on the strength of the connections until they are able to predict the known truth very accurately.
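
In code, the training loop Napel describes boils down to something like the toy sketch below: the "connections" are weights that are nudged, step by step, until the model's predictions match the known labels. This is a minimal single-layer illustration in plain NumPy, not Stanford's actual model; the synthetic data stands in for real images.

```python
# Toy sketch of iterative training: adjust connection strengths (weights)
# until predictions match the known truth. Synthetic data, not real images.
import numpy as np

rng = np.random.default_rng(0)

# 200 toy "images" with 16 features each, labeled 0 or 1.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)  # the "known truth"

w = np.zeros(16)  # one layer of connection strengths
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ w + b)            # current predictions
    grad_w = X.T @ (p - y) / len(y)   # how each connection should change
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # strengthen or weaken connections
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2%}")
```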

“You can give it 10,000 images of colon cancer. It will find the common features across those images automatically,” said Garry Choy, MD, a staff radiologist and assistant chief medical information officer at Mass General Hospital, in a recent Diagnostic Imaging article. “If there are large data sets, it can teach itself what to look for.”

A major challenge is that AI algorithms may require thousands of annotated radiology images for training. So Stanford researchers are creating a database containing millions of de-identified radiology studies, including billions of images, totaling about a half million gigabytes (roughly half a petabyte). Each study in the database is associated with the de-identified report that was created by the radiologist when the images were originally used for patient care.
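
Conceptually, each record in such a database pairs a study's images with its de-identified report. The sketch below is a hypothetical schema for one record; the field names are illustrative assumptions, not Stanford's actual design.

```python
# Hypothetical record pairing a de-identified study with its report.
from dataclasses import dataclass, field

@dataclass
class DeidentifiedStudy:
    study_id: str                               # random ID replacing patient identifiers
    modality: str                               # e.g. "CT", "MRI", "US", "XR"
    image_paths: list[str] = field(default_factory=list)
    report_text: str = ""                       # the de-identified radiologist's report

study = DeidentifiedStudy(
    study_id="a3f9c2",
    modality="CT",
    image_paths=["studies/a3f9c2/slice_001.dcm"],
    report_text="Findings: 8 mm nodule in the right lung.",
)
```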

“To enable our deep learning research, we are also applying machine learning methods to our large database of narrative radiology reports,” said Curtis Langlotz, MD, PhD, a Stanford professor of radiology and biomedical informatics. “We use natural language processing methods to extract discrete concepts, such as anatomy and pathology, from the radiology reports. This discrete data can then be used to train AI systems to recognize the abnormalities shown on the images themselves.”
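
A minimal sketch of that labeling idea follows: scan a report's narrative text for terms from controlled anatomy and pathology vocabularies and emit them as discrete labels for the paired images. The term lists, the crude negation check, and the sample report are all illustrative assumptions; production NLP pipelines are far more sophisticated.

```python
# Toy concept extraction: mine a narrative report for discrete anatomy
# and pathology terms to use as training labels. Illustrative only.
import re

ANATOMY = {"lung", "liver", "brain", "colon"}
PATHOLOGY = {"nodule", "hemorrhage", "polyp", "mass"}

def extract_concepts(report: str) -> dict:
    words = set(re.findall(r"[a-z]+", report.lower()))
    return {
        "anatomy": sorted(words & ANATOMY),
        "pathology": sorted(words & PATHOLOGY),
        # Crude negation flag; real systems handle negation far more carefully.
        "negated": "no " in report.lower(),
    }

report = "Findings: 8 mm nodule in the right lung. No hemorrhage."
print(extract_concepts(report))
# {'anatomy': ['lung'], 'pathology': ['hemorrhage', 'nodule'], 'negated': True}
```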

Potential applications include using AI systems to help radiologists more quickly identify intracranial hemorrhages or more effectively detect malignant lung nodules. Deep learning systems are also being developed to perform triage: looking through all incoming cases and moving the most critical ones to the top of the radiologist's work queue, as in the sketch below.
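
As a rough sketch of how such a triage queue might work, the example below keeps the highest-scoring studies at the front of the worklist. The criticality scores are stand-ins for the output of a trained model, and the class and study names are hypothetical.

```python
# Toy triage worklist: a priority queue that surfaces the most critical
# studies first. Scores stand in for a trained model's output.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker preserves arrival order

class Worklist:
    def __init__(self):
        self._heap = []

    def add(self, study_id: str, criticality: float):
        # heapq is a min-heap, so negate the score to pop highest first.
        heapq.heappush(self._heap, (-criticality, next(_counter), study_id))

    def next_study(self) -> str:
        return heapq.heappop(self._heap)[2]

wl = Worklist()
wl.add("CT-head-1042", criticality=0.97)   # suspected intracranial bleed
wl.add("XR-chest-0031", criticality=0.12)  # routine follow-up
wl.add("CT-chest-0788", criticality=0.64)  # possible lung nodule
print(wl.next_study())  # -> CT-head-1042
```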

However, these potential clinical applications have not been clearly validated yet, according to Langlotz:

We’re cautious about automated detection of abnormalities like lung nodules and colon polyps. Even with high sensitivity, these systems can distract radiologists with numerous false positives. And radiology images are significantly more complex than photos from the web or even other medical images. Few deep learning results of clinical relevance have been published or peer-reviewed yet.

Researchers say the goal is to improve patient care and workflow, not replace doctors with intelligent computers.

“Reading about these advances in the news, and seeing demonstrations at meetings, some radiologists have become concerned that their jobs are at risk,” said Langlotz. “I disagree. Instead, radiologists will benefit from even more sophisticated electronic tools that focus on assistance with repetitive tasks, rare conditions, or meticulous exhaustive search — things that most humans aren’t very good at anyway.”

Here’s Napel:

At the end of the day, what matters to physicians is whether or not they can trust the information a diagnostic device, whether it be based in AI or something else, gives them. It doesn’t matter whether the opinion comes from a human or a machine. … Some day we may believe in the accuracy of these deep learning algorithms, when given the right kind of data, to create useful information for patient management. We’re just not there yet.

Originally published at Stanford Medicine Scope Blog