Investigators developed a deep learning algorithm to provide rapid diagnosis of clinical head CT-scan images. The goal? To help triage and prioritize urgent neurological events, potentially accelerating time to diagnosis and care in clinical settings.

Scientists at the Icahn School of Medicine at Mount Sinai in New York have taught a computer to read head computed tomography (CT) images for signs of serious brain events. While the program was less accurate than a neuroradiologist at correctly diagnosing a serious event, it flagged serious cases far faster than is humanly possible, helping clinicians identify critical cases sooner, according to a study published August 13 in Nature Medicine.

The study authors, led by Eric K. Oermann, MD, a neurosurgeon and mathematician by training, said that the machine-learning algorithm could help neuroradiologists better triage the most serious cases. Once these cases are flagged, the neurologist can quickly diagnose and treat those patients who need immediate attention.

"The computer is faster at identifying critical events, which means the neurologists can start treatment earlier. Decreasing time to treatment can improve outcomes," said Dr. Oermann.

Dr. Oermann began studying deep learning about a decade ago while working on his undergraduate degree in mathematics. One of his early interests was convolution in higher spatial dimensions, as in 3D convolutional neural networks (3D-CNNs), a technology similar to that used for supervised classification of 3D models and light detection and ranging (lidar) data.
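To make the idea concrete, here is a minimal, illustrative sketch of the core operation in a 3D-CNN: sliding a small 3D kernel through a volume (such as a stack of CT slices) to produce a feature map. This naive NumPy implementation is not from the study; the names and shapes are illustrative assumptions.

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation) over a volume.

    volume: 3D array, e.g. a stack of CT slices (depth, height, width).
    kernel: small 3D filter that is slid through the volume.
    """
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Weighted sum over one 3D neighborhood of the volume
                out[z, y, x] = np.sum(volume[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

# Toy example: a 3x3x3 averaging kernel applied to a synthetic 8x8x8 "scan"
volume = np.random.rand(8, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0
features = conv3d(volume, kernel)
print(features.shape)  # (6, 6, 6)
```

In practice a 3D-CNN stacks many such learned kernels with nonlinearities and pooling, and frameworks compute the convolutions far more efficiently than these explicit loops.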

Three years ago, he and a colleague saw a string of clinical cases where they felt that the outcome would have been better had the imaging results been brought to their attention earlier. It got them wondering whether a 3D-CNN algorithm could help classify CT images of acute neurologic events. (Dr. Oermann has always been interested in number crunching and even took a break during his neurosurgery residency to do a fellowship at Google.)

Could they teach a computer to identify critical problems? They began their investigation with tens of thousands of radiological medical records and discovered that the language radiologists use is highly structured. They compared the language in the reports with British novels, Amazon reviews, and Reuters news stories and found that, from a linguistic standpoint, the radiology reports were actually much simpler than the standard fare on Amazon, Reuters, or in novels. They taught the computer to read the reports, and then it was time to see how the program did with the scans. Would there be a match?
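One crude way to quantify this kind of corpus comparison is with simple lexical statistics, such as the ratio of unique words to total words. The snippet below is a toy sketch, not the study's method, and the sample texts are hypothetical stand-ins for the corpora compared.

```python
from collections import Counter

def vocab_stats(text):
    """Crude lexical measures: unique-word ratio and average word length."""
    words = text.lower().split()
    counts = Counter(words)
    unique_ratio = len(counts) / len(words)
    avg_len = sum(len(w) for w in words) / len(words)
    return unique_ratio, avg_len

# Hypothetical snippets standing in for a radiology report and a novel
report = "no acute intracranial hemorrhage no acute infarct no mass effect"
novel = "it was the best of times it was the worst of times it was the age of wisdom"

print(vocab_stats(report))
print(vocab_stats(novel))
```

Templated report language tends to reuse a small, fixed vocabulary, which is part of what made the reports easier for a model to learn from than general prose; real analyses would use more robust measures (for example, entropy or perplexity under a language model) on full corpora.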
