In Conversation With... David Gruen, MD

January 31, 2020 

Editor’s note: David R. Gruen, MD, MBA, FACR is the Chief Medical Officer, Imaging at IBM Watson Health and is a thought leader and content expert for artificial intelligence in medical imaging. We spoke with him about the role artificial intelligence can play in healthcare diagnostics and the potential for reducing diagnostic errors.

Dr. Gruen serves as a Clinical Assessor and commission member for the American College of Radiology. He is also a site visitor and inspector for accreditation on behalf of the National Accreditation Program for Breast Centers.

Dr. Kendall Hall: How did you become interested in using artificial intelligence in healthcare diagnostics?

Dr. David Gruen: I have worked in a varied assortment of healthcare environments, from private practice to hospital-based practice, and the one common denominator has been that radiologists can’t keep up. The amount of material that’s put in front of radiologists on a daily basis is, quite frankly, unsustainable. When you start to look at the numbers, an estimated 800 million imaging studies are done every year in the United States, which is about 90-95 billion individual images. If you divide that by the 31,000 radiologists, we are looking at one image every two seconds, 40 hours a week, all day every day. Radiologists are supposed to be doctors and provide patient-centric care. As a breast radiologist, I give all my patients the results of their biopsies, I discuss the implications when they’re diagnosed with breast cancer, and I provide emotional support to them and their families. All of that is non-interpretive time and all radiologists have this as part of their patient services. None of that is included in the one image every two seconds, 40 hours a week statistic.
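
For readers who want to check the arithmetic behind those figures, here is a rough back-of-envelope sketch in Python. The 50-week working year and the midpoint of the 90-95 billion image estimate are our assumptions, not Dr. Gruen's.

```python
# Rough back-of-envelope check of the workload figures quoted above.
# Assumptions (ours, not from the interview): a 50-week working year,
# and the midpoint of the 90-95 billion image estimate.
IMAGES_PER_YEAR = 92_500_000_000   # midpoint of the quoted 90-95 billion
RADIOLOGISTS = 31_000              # quoted estimate

images_per_radiologist = IMAGES_PER_YEAR / RADIOLOGISTS
working_seconds = 40 * 60 * 60 * 50  # 40 hours/week, 50 weeks/year (assumed)

seconds_per_image = working_seconds / images_per_radiologist
print(f"{images_per_radiologist:,.0f} images per radiologist per year")
print(f"~{seconds_per_image:.1f} seconds available per image")
# Roughly one image every 2-2.5 seconds, consistent with the figure quoted.
```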

It is no wonder that the miss-rate for really good radiologists is still estimated at 3-5%. For me, it’s far more remarkable that it is only 3-5%. However, 3-5% of 800 million scans a year… that’s a huge number.

A large number of radiologists are burned out. We’re in an environment where we can’t succeed. There is a mismatch between the amount of work we have to do and the rate and quality at which we’re expected to do it. There is very low satisfaction in the field, a shortage of existing radiologists, and decreasing numbers of individuals entering the field.

Personally, I was one of those burned out physicians – I couldn’t do more and yet, no matter how many days, hours or weeks I worked, I couldn’t get it done at the level which I wanted to. So, that’s where I started to look for opportunities that could bring my imaging background and my desire to make our field better, to add value.

KH: It sounds like your experience brought you to the ideal point where there is new technology with untapped potential, and the field is reaching the point where it could use that technology to address its challenges.

DG: That’s right. We’re at a critical point of physician burnout and radiologists are right smack in the middle of that. We are also at a critical point in imaging where the volume is just massive. It would be one thing if there was not enough work for radiologists, but that’s not the case.

As we switch from a volume to a value-based paradigm, radiologist burnout becomes even more important. The change in paradigm is going to encourage radiologists to slow down and take more time to ensure they make fewer mistakes, but it’s not going to make the volume of work go away.

KH: Let’s talk about the technology - how do you see AI fitting into radiology practice and perhaps alleviating physician burnout?

DG: I think AI is a really broad concept. AI has been around for a while in the breast imaging space. We know that CAD [computer-aided diagnosis] came out along with digital mammography in the early 2000s to flag differences in density and shape to identify potential breast cancers. The AI we are doing now with mammography to tag breast cancer is not that. It uses much deeper convolutional neural networks and serious machine learning that we didn’t have the ability to do 20 years ago. Now, there are many places for AI in the care pathway. For example, in breast imaging, is the technologist getting enough coverage of the pectoral muscle, the inframammary fold, and the axillary tail so that the patient is getting the best possible exam? If the patient had the exam in a screening environment and has since left, we’re unlikely to call them back in for another scan if the quality of the image is slightly less than we would like. If, on the other hand, at the time of the exam AI can analyze the image and alert the radiology technologists that an image does not meet certain parameters before the patient is even out of the room, they can repeat the image and ensure that we have good, actionable data in real time.
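
To make that point-of-capture check concrete, here is a minimal sketch that assumes a model has already scored each positioning criterion. The class, field names, and the 0.8 threshold are illustrative and are not drawn from any specific product.

```python
# Minimal sketch of a point-of-capture quality check. Assumes a model has
# already scored each positioning criterion; names and thresholds are
# illustrative, not from any specific product.
from dataclasses import dataclass

@dataclass
class PositioningScores:
    pectoral_muscle_coverage: float   # 0-1 confidence that coverage is adequate
    inframammary_fold_visible: float
    axillary_tail_included: float

def needs_repeat(scores: PositioningScores, threshold: float = 0.8) -> list[str]:
    """Return the criteria that fall below threshold so the technologist
    can repeat the view before the patient leaves the room."""
    return [name for name, value in vars(scores).items() if value < threshold]

# Example: flag a view whose axillary tail coverage scored low.
flags = needs_repeat(PositioningScores(0.95, 0.91, 0.62))
print(flags)  # ['axillary_tail_included']
```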

At the point of service, AI has a lot of applications. For risk-assessment triage, we are very early in this phase of development. How do we incorporate the whole body of knowledge about a patient to make sure that medicine is personalized? Does the patient need mammography starting at 30, 40, or 50 years old? Do they need one every year? Every other year? Do they need an MRI? Do we need to do an ultrasound? What about their biopsy history? What about their family history? Their genomic history? We are using deep learning to study patterns of mammograms to predict breast cancer risk. We can then combine all of this information and data to provide patients with their real risk of breast cancer, not just a very superficial risk that we may get from the NCI [National Cancer Institute] website.
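
As a purely structural illustration of how image-derived features and patient history might be fed to a single risk model, here is a short sketch. The model interface, feature names, and merging step are placeholders of our own; this is not a validated clinical tool or any particular vendor's method.

```python
# Sketch of combining deep-learning features from the mammogram with
# structured history into one risk estimate. The model interface and
# feature names are placeholders, not a validated clinical tool.
from typing import Protocol

class RiskModel(Protocol):
    def predict_proba(self, features: dict[str, float]) -> float: ...

def personalized_risk(mammogram_features: dict[str, float],
                      history: dict[str, float],
                      model: RiskModel) -> float:
    """Merge image-derived features with structured history (family history,
    prior biopsies, genomics) and ask the model for a single risk estimate."""
    features = {**mammogram_features, **history}
    return model.predict_proba(features)
```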

All of that is even before radiologists have looked at the images. We are hoping to get to a point where we will have really good AI helping us look at the images. And then on the back-end, we can compare the image taken to the actual report to make sure that there are no discrepancies.

For example, a patient goes to the emergency room and has a CAT scan for appendicitis, and the radiologist correctly identifies the appendicitis. However, they don’t see the one incidental pulmonary nodule at the base because they are really busy, and a year later the patient has metastatic lung cancer.

What if there is some deep learning AI that can perform a second check on those images? Wouldn’t that be great? Let’s go a step further. The radiologist sees the nodule and lets the ED [emergency department] doc know. The ED doc is incredibly busy, forgets to put it on a work list, or puts it on a work list that doesn’t get tracked to the community doc, and a year later the patient has advanced lung cancer with metastatic adenopathy. Where did it fall through the cracks? That image finding wasn’t on the problem list. However, there are some AI systems that could have checked on the back-end that nothing was missed. What companies like IBM believe, and what I really believe, is that there are some things that doctors are really good at. We are really good at synthesizing complex issues, we are good at providing emotional support, and we are good at sensitivity and communication. AI is good at pattern recognition, not fatiguing, and not overlooking binary things that humans might, like is there a pulmonary nodule or not?
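
A minimal sketch of that kind of back-end safety net might look like the following. The keyword matching is a stand-in for the natural-language processing a real system would use, and the function and field names are ours.

```python
# Sketch of a back-end reconciliation check: compare findings an AI model
# detected on the images with findings mentioned in the signed report, and
# flag anything unaccounted for. Names and matching logic are illustrative.
def unreported_findings(ai_findings: list[str], report_text: str) -> list[str]:
    """Return AI-detected findings that never appear in the radiology report."""
    report = report_text.lower()
    return [f for f in ai_findings if f.lower() not in report]

report = "Acute appendicitis. No free air. Lung bases clear."
flags = unreported_findings(["appendicitis", "pulmonary nodule"], report)
print(flags)  # ['pulmonary nodule'] -> route to a safety-net worklist for follow-up
```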

KH: One of the points you raised earlier was the sheer volume of work expected of radiologists, burnout, and how that can result in missed or delayed diagnosis. What we have talked about so far addresses potentially missed diagnoses, but what you have described doesn’t directly help with the volume issue. What about the use of AI as an adjunct for screening purposes? 

DG: I think that is an area of potential and a future role for AI. Are we at a point yet where studies don’t need to be looked at by a physician? Not yet. Certainly in our current environment radiologists need to sign off on any reports. However, a large portion of the radiology workload is X-rays, predominantly chest X-rays. Many of those X-rays are normal. If a computer can help us either triage abnormal studies or filter out normal studies, and flag the more complex scans for the radiologists, then we’ve added value. I do see that as a future application.
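
One simple way to picture that triage is a worklist ordered by a model's abnormality score, so suspicious studies are read first and likely-normal chest X-rays sink to the bottom. The scores, names, and structure below are illustrative; every study is still read and signed by a radiologist.

```python
# Sketch of AI-assisted worklist triage: studies a model scores as likely
# abnormal float to the top of the reading queue. Scores are illustrative.
from typing import NamedTuple

class Study(NamedTuple):
    accession: str
    abnormality_score: float  # hypothetical model output in [0, 1]

def triaged_worklist(studies: list[Study]) -> list[Study]:
    """Order the queue so the most suspicious studies are read first."""
    return sorted(studies, key=lambda s: s.abnormality_score, reverse=True)

worklist = triaged_worklist([
    Study("CXR-001", 0.03),
    Study("CXR-002", 0.87),
    Study("CXR-003", 0.15),
])
print([s.accession for s in worklist])  # ['CXR-002', 'CXR-003', 'CXR-001']
```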

There are also ethical decisions that need to be made. I think that as a society we are not ready to rely only on computers to read our studies. On the other hand, if the miss rate among radiologists is 3-5% and the computers’ miss rate is half a percent, would we as a society be ready to rely on computers? This is a philosophical question and I’m not sure of the answer. Currently there is no system that can read a chest X-ray without a physician; the same goes for a mammogram or a CT scan, etc. But there are FDA-approved systems that can identify bleeding in the brain and bring it to a radiologist’s attention faster. So we are getting there.

KH: Given the crossover between imaging and pathology, do you see pathology as another way to use the same technology? Or, are there other areas where this technology can be used?

DG: For the average clinical oncologist, for example, it is inconceivable that he or she can keep up with the literature on the latest advances and guidelines. Nor can they optimally customize the best treatment for each patient based on their co-morbidities, age, backgrounds, genomic profile, molecular/biological testing, etc. So I think in the oncology space there is an opportunity to use big data and AI to provide value to the clinician and better care to the patients. 

KH: How do we establish the evidence base for using AI?

DG: At IBM, a lot of resources are put towards establishing the evidence for our AI tools. The truth and the proof are in the pathology. For example, in the breast and lung cancer world we are comparing our imaging results using AI to the pathology, and determining whether the results are accurate, if the technology works, and if we can rely upon it.

For the subset of AI technologies that are getting FDA approval, the benchmark is that the technology can’t be worse than a human doing the same job and must be at least as good as a human to receive approval. That is a tenet that patients should know.

KH: What are some of the risks associated with using AI or increasingly relying on it? Do we even know yet? For example, EHRs have the potential to improve patient safety, but with the introduction of EHRs come new challenges.

DG: We are in the infancy of AI in medicine, and one ethical question is: if a computer misses a diagnosis, who is responsible? Or the opposite: what if a doctor chooses not to use available AI and something gets missed? What if with deep learning the AI learns something wrong and then applies that mistake to other patients? The question becomes how do we test, on an ongoing basis, the accuracy of AI, particularly when we get to the point where AI will be the primary (and perhaps only) interpreter of some studies? For example, we know that 30% of brain scans are normal, and AI could identify that a neuroradiologist doesn’t need to review those scans. But then what is the process for checking that and making sure the AI is doing a good job?
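
One hedged sketch of that ongoing check: periodically have radiologists re-read a random sample of studies the AI called normal and measure how often the call is overturned. The sample size, data shapes, and names below are assumptions for illustration only.

```python
# Sketch of ongoing performance monitoring: audit a random sample of the
# AI's "normal" calls against human re-reads. Numbers and names are illustrative.
import random

def audit_miss_rate(ai_normal_ids: list[str],
                    human_reread: dict[str, bool],  # study id -> abnormal on re-read?
                    sample_size: int = 100) -> float:
    """Estimate how often the AI's 'normal' calls are overturned on audit."""
    sample = random.sample(ai_normal_ids, min(sample_size, len(ai_normal_ids)))
    if not sample:
        return 0.0
    misses = sum(1 for study_id in sample if human_reread.get(study_id, False))
    return misses / len(sample)
```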

Another question is what is the benchmark we are comparing it to? Are we comparing the error rate of AI to that of the subspecialty fellowship trained neuroradiologist? Or are we comparing it with the general radiologist, who is one of three in a group practicing without fellowship training? The neuroradiologist may be better than AI, but is AI better than the general radiologist in a rural facility reading 250 cases overnight, including a large number of brain CTs? These are all really difficult questions. These are the philosophical questions that we have to address in parallel with the technology being developed.

KH: Where are we with radiologists using AI in their practice?

DG: I actually find it interesting how few practices have adopted AI. As I travel the globe, I find that radiology practices are actually quite technologically challenged with systems that don’t talk to each other. Adopting AI could make our lives easier, better. I think that radiologists are still reluctant and have the misperception that AI is going to put us out of business. This is a serious misconception. AI is not going to be reading an MRI of the abdomen instead of a radiologist any time soon. AI may get better at identifying key findings and addressing yes/no binary decisions, but it’s not going to replace a radiologist. That’s such an important message that radiologists and the general public need to understand.

KH: Do you think AI is a passing trend that is going to be too overwhelming to adopt in the long-term?

DG: I don’t think so. I think the fact that the FDA is issuing approvals on products is proof of that. A couple years ago, the RSNA [Radiological Society of North America] President, to paraphrase, said that she envisioned a time in the future when a radiologist would have all the necessary information on hand about a patient’s history (structured, unstructured, laboratory, genomic, surgery, pathology, medical, social, etc.) at their fingertips to be able to accurately interpret the study in a timely manner and to do the best job possible. That was her vision a couple of years ago. Well guess what? There is now a product in use in the United States that does just that, called Patient Synopsis. It synthesizes contextual data from the medical record in structured and unstructured formats, pulls it together, and puts it in one place for the radiologists to quickly review. When they see a patient with a hip fracture, they can pull up that three years ago the patient had metastatic breast cancer and this might be a pathologic fracture. Or when a patient’s CT or MRI is delivered, they can pull up the labs and say, wait a minute, this patient has abnormal liver function tests, maybe I should take a look again at the shape of the liver and make sure that there’s not cirrhosis. All the information they need is in front of them. That is all to say I don’t think AI is going away. This is here. This is now.
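
To give a rough feel for that kind of context surfacing, here is a toy sketch that pulls record entries relevant to the exam being read. The keyword matching is a stand-in for the natural-language processing a real product would use; it is not how Patient Synopsis actually works, and the exam types and keywords are ours.

```python
# Toy sketch of surfacing record entries relevant to the exam at hand.
# Keyword matching stands in for real NLP; mappings are illustrative only.
RELEVANT_HISTORY = {
    "hip x-ray": ["metastatic", "breast cancer", "osteoporosis"],
    "ct abdomen": ["liver function", "cirrhosis", "hepatitis"],
}

def surface_context(exam_type: str, record_entries: list[str]) -> list[str]:
    """Return record entries containing keywords relevant to this exam type."""
    keywords = RELEVANT_HISTORY.get(exam_type.lower(), [])
    return [entry for entry in record_entries
            if any(k in entry.lower() for k in keywords)]

record = ["2017: metastatic breast cancer, treated", "2019: abnormal liver function tests"]
print(surface_context("Hip X-ray", record))  # ['2017: metastatic breast cancer, treated']
```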

KH: How do you see AI expanding from imaging into diagnosis? I was thinking of the differential diagnosis generators; some of them now can take the information and sort based on likelihood.

DG: If you are reading a chest X-ray and there are multiple nodules, how cool would it be to know that the patient lived in the San Joaquin Valley, and this could be coccidioidomycosis? Important structured and unstructured data right at your fingertips, and that is the first thing that pops up. We’re not there yet, but we’re not far off from that.

When practicing in the ED, the amount of information that would benefit you when you’re taking care of patients in the acute setting is massive, but what’s at your fingertips is negligible. There is just such a wide gap, to the point where you don’t even know what the patient had done across the street or, if you’re in a hospital, you don’t know if they had a CT scan at the outpatient imaging center last week. All things that would make your life easier. Really basic stuff. Ultimately, hospitals will be forced to harmonize this data. If a patient had a CT scan for a kidney stone two days ago across the street and the hospital repeats the test because it didn’t have that information, it is not going to be reimbursed. This is where value and quality collide in the best interest of patient care. However, everything we do has to be data driven. If the science doesn’t support added value or quality to patients and physicians, then we shouldn’t do it.
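
A minimal sketch of the duplicate-imaging check implied here might look like the following: at order entry, look back over recent exams (ideally across sites) and warn before repeating a study that was just done elsewhere. The 30-day lookback window, exam codes, and record format are assumptions for illustration.

```python
# Sketch of a duplicate-imaging check at order entry. The lookback window,
# exam codes, and record format are assumptions, not a specific system's logic.
from datetime import date, timedelta

def recent_duplicate(order_code: str,
                     order_date: date,
                     prior_exams: list[tuple[str, date]],
                     lookback_days: int = 30) -> bool:
    """True if the same exam code appears within the lookback window."""
    cutoff = order_date - timedelta(days=lookback_days)
    return any(code == order_code and exam_date >= cutoff
               for code, exam_date in prior_exams)

# Example: a CT for a kidney stone done two days earlier at the facility across the street.
priors = [("CT-ABD-PELVIS", date(2020, 1, 29))]
print(recent_duplicate("CT-ABD-PELVIS", date(2020, 1, 31), priors))  # True
```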

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.