Artificial Intelligence in Medicine 3

So I follow what is happening in startups because I find it fascinating.

One of the things I keep coming across is “machine learning” and “artificial intelligence” and their potential to “disrupt” medicine.

A recent article talked about this phenomenon in general: “Artificial Intelligence Software Is Booming. But Why Now?” – New York Times

Here are 90+ Artificial Intelligence Startups in Healthcare

So… why now?

Well, a lot has happened since the early days of AI. Algorithms have become very sophisticated, and this, combined with exponential increases in computing power, has improved machine learning significantly. Additionally, there has been plenty of hype in the press, from Deep Blue beating Kasparov back in 1997 to Watson beating human champions on Jeopardy! in 2011.

Then, this year, in March 2016, Google DeepMind’s AlphaGo defeated Lee Sedol in Go, 4-1. Why is this important? It’s just another game, right?

Well, chess has more rules and a finite number of moves for any given situation. Go, however, has fewer rules and a near-infinite number of moves for any situation. For this reason, prior to 2015, computer Go programs were only able to reach amateur dan levels. So then… what changed? The concept of “deep learning” was applied. Since it is not possible to map out every possible situation, you instead provide the computer with a ton of data from prior games to draw conclusions from.

So what’s all the hype about?

If you enable a machine to learn, and you give it enough data, it will learn much faster than a human. Unlike a human, a computer does not get tired.

Also, unlike humans, failure does not deter a computer from continually trying to find the correct answer and learning from each attempt.

Wait… so what are you saying? Artificial Intelligence is a thing?

Of course it’s a thing. It’s everywhere. Your computer and your phone already have some degree of artificial intelligence. Your favorite social media outlets and their advertisements also have some degree of artificial intelligence built in.

“Sensei “liked” these 3 posts, which are 67.34% related to this other post, so he may also like this one. I will show it to him.”
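The mechanism behind that kind of decision can be sketched in a few lines. This is a toy illustration with made-up post names and similarity numbers, not any actual platform’s algorithm:

```python
# Toy sketch of similarity-based content filtering (illustration only).
# Post names and similarity scores are invented for this example.

# Hypothetical precomputed similarity between a candidate post and
# each of the 3 posts the user already "liked".
similarity_to_candidate = {"post_a": 0.70, "post_b": 0.65, "post_c": 0.67}

# Average similarity across the liked posts.
score = sum(similarity_to_candidate.values()) / len(similarity_to_candidate)

# Show the candidate post only if it clears some threshold.
show_it = score > 0.5
```

The real systems are far more elaborate, but the basic idea of scoring a candidate against past behavior and thresholding the result is the same.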

Ok, so then should I just accept that Skynet will eventually take over my job?


…? So what gives then?

Ok, the following is simply my opinion on artificial intelligence from my specific point of view as a radiologist. I may be completely wrong, but like I said, it’s just an opinion. No one can predict the future.

Everyone seems to think that, of the entire field of medicine, radiology is the most ripe for this kind of disruption. All of diagnostic radiology can be reduced to data in DICOM format and analyzed, so it should be the easiest to disrupt, correct? It’s just data.

I understand why people think that way:

The majority of the population, and even many doctors, have no idea how radiologists actually look at and interpret images. 

I think if you asked people what the most “objective” specialty was, they would probably say radiology or pathology. “All they do is look at images” is the reply you would most likely get. This mindset is understandable because people forget the infinite variability in “what is normal”. The fundamental flaw in how people think about radiology is the assumption that it is objective. It’s not.

Additionally, this process is difficult to explain and even more difficult to teach. Many of us radiologists don’t know why we look at things a certain way and probably cannot articulate why we do it that way. Many radiology residents start their R1 year completely lost, feeling like they are drowning. Then something clicks about 80-90% of the way through R1 year, and you begin to understand. I used to tell my junior residents this all the time: just get through your rotations in first year. By the end of the year you will be a different person.

It’s kind of like that point in The Matrix where Morpheus says “He’s beginning to believe” in reference to Neo.

Continuing on:

At any point in time, you can ask a radiologist, “What are you looking at right now?” or “What are you thinking right now?” Their answer would be much longer than you would expect.

“This adrenal gland is slightly lobulated, but I don’t see a discrete nodule. However, I am trying to decide whether it hits my threshold for considering adrenal hyperplasia or not. Hmm… I think it’s ok.”

This thought process would get one line in the report: “Adrenal glands: Normal.”

Also, there is the subject of “creating a story”: Imaging findings are not a diagnosis.

The jump from the finding of “lymphadenopathy” to “lymphoma” is plausible.

However, the jump from the finding of “adrenal hemorrhage” to “Waterhouse-Friderichsen syndrome (WFS)” is probably much harder to make with accuracy.

Additionally, when the radiologist is faced with multiple abnormalities, there is a decision to be made. Are all of these abnormalities part of the same entity, or only some of them? Are there two or more disease processes occurring synchronously? Are you a “lumper” or a “splitter”?

Has there been any recent discussion about this?

There was a recent post on AuntMinnie regarding AI in radiology – Will AI soon put radiologists out of a job?

I’m not sure I really like being called “wasted protoplasm” by some unnamed startup CEO. However, I would agree with Eliot Siegel on his particular predictions. The most important point he makes is paraphrased here:

Siegel also repeated his offer that he would go anywhere in the world to wash the car of anybody who can show him an algorithm that can identify the adrenal gland as well as he can teach a fifth grader to do in 10 minutes.  

He then goes on to say, “I’ve been repeating that [challenge] for 15 years, and I still haven’t washed any cars yet. If a computer in 2016 can’t even find the adrenal glands, then I’m not sure how they’re going to replace me as a radiologist, or anybody else.” (emphasis mine)

I think another challenge would be to define a “normal colon”. Colonic loops can differ along a pretty significant spectrum of “normal” based on the presence or absence of oral contrast and/or stool.

Now, now, I can already hear all the protests:

We just need more data! There is deep learning here! When it happens it won’t be incremental, it will be an explosion.

You may be right. Perhaps tomorrow a new breakthrough will occur which enables computers to make radiological diagnoses with 90% accuracy.

The questions you need to ask yourself are:

Is that good enough?

Is it 95%? 99%? Or does it have to be 100%? Or does it simply have to be better than a radiologist? Which radiologist? A first-year attending? A seasoned attending with 30 years’ experience in that subspecialty?

What is good enough?

I don’t know. I guess it remains to be seen what the public will accept. However, it should be known that even radiologists don’t agree with each other 100% of the time. Even the best radiologists in the same subspecialty could look at the same case with the same information and disagree with one another… or both be wrong.

Let’s look at an individual example:

You read 1000 mammo screeners and call them all negative without looking at them. You miss the 5 real cancers in those 1000 patients. Your accuracy is still 99.5%.

In reality, you would probably read 1000 mammo screeners, recall ~10% (100 patients), do maybe 10 biopsies, and find 4 cancers, while still missing 1. Your accuracy is 99.9%.
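The arithmetic above is worth spelling out, because it shows why raw accuracy is the wrong yardstick for screening. A minimal sketch using the numbers from the two scenarios:

```python
# Numbers from the hypothetical screening scenarios above (not real data).
total_patients = 1000
true_cancers = 5

# Scenario 1: call every study negative without looking.
missed_1 = true_cancers  # all 5 cancers missed
accuracy_1 = (total_patients - missed_1) / total_patients     # 0.995
sensitivity_1 = (true_cancers - missed_1) / true_cancers      # 0.0

# Scenario 2: recall ~10%, biopsy 10, find 4 of the 5 cancers.
missed_2 = 1
accuracy_2 = (total_patients - missed_2) / total_patients     # 0.999
sensitivity_2 = (true_cancers - missed_2) / true_cancers      # 0.8
```

The two strategies differ by only 0.4 percentage points of accuracy, but by 80 points of sensitivity, which is why “what error rate is acceptable” depends entirely on which metric you ask about.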

So then, for mammograms, what error rate is acceptable for AI? 

I would recommend reading the remainder of that article which quotes Eliot Siegel a great deal. I find that his thoughts about artificial intelligence in radiology are sound.

Another point he brings up is:

“What’s more, assuming this was all available and somehow integrated, it would need to go through the U.S. Food and Drug Administration (FDA) approval process. It could take up to decades to get approval for each and every one of those algorithms.” (emphasis mine)

I think if we’ve learned anything from Theranos, it’s that the “jump off a cliff and assemble a plane on the way down” adage for tech startups doesn’t necessarily work for healthcare. When people’s lives and health are at stake, there is no room for error.

However, I think the most basic question to ask is:

Would you accept a diagnosis from a computer without any human oversight? What would it take for you to accept it?

For me, and probably most doctors, the answer is no. Even if the diagnosis were always 100% correct, I would still want human oversight.

Making the correct diagnosis, by itself, is not enough for me.

Another concern is:

What is “too much information”?

I think we may underestimate a radiologist’s ability to separate important information from unimportant information. Ask any radiologist: the clinical history provided can often lead you down the wrong path. For a program designed to utilize as much information as possible from an EMR, such as age, sex, medical history, lab values, chief complaint, and presentation, there are multiple variables which are likely irrelevant or are even red herrings. This data may skew the judgement of the computer.

Another, somewhat controversial question is:

Can you teach it to make a judgement call?

If the patient is 110 years old with a terminal disease, and you see a 12 mm hypodensity in the right kidney which measures 21 Hounsfield units (indeterminate), would you recommend a CT and MRI to evaluate it? There are no guidelines for this kind of judgement call. Would the computer choose a cutoff based on age, then? When is “too old”, 80? 90?

Would the computer then just print out “Recommend renal mass protocol CT and/or MRI to evaluate.”? This would effectively place the primary team in the awkward position of evaluating a lesion which is likely benign, and which, even if not benign, would not be the cause of the patient’s death.
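To make the problem concrete, here is a deliberately naive sketch of what a hard-coded version of that judgement call might look like. This is my own illustration, not clinical logic, real guidelines, or any actual product; the arbitrary cutoffs are precisely the problem:

```python
# Deliberately naive rule-based follow-up "recommender" (illustration only).
# The hard-coded thresholds are the point of the critique, not real medicine.
def recommend_followup(age_years: int, lesion_mm: float, hounsfield_units: float) -> str:
    # Treat a >= 10 mm lesion above simple-fluid attenuation as indeterminate.
    indeterminate = lesion_mm >= 10 and hounsfield_units > 20
    if not indeterminate:
        return "No follow-up needed"
    # Where should an age cutoff go? 80? 90? There is no guideline for this.
    if age_years >= 90:
        return "Defer to clinical judgement"
    return "Recommend renal mass protocol CT and/or MRI to evaluate"
```

For the 110-year-old patient above, everything hinges on a cutoff somebody had to pick arbitrarily, which is exactly the kind of judgement that resists being reduced to rules.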

The real elephant in the room, however, is:

Who do I sue? (if something goes wrong) The company who made the software? The hospital? Both of them?

Will companies provide malpractice insurance for a computer? How would you depose them for trial? How would it defend itself?

This is a particularly difficult hurdle to overcome in my opinion.

How do you prevent people from suing? Would it require a government mandate that Software X is the new gold standard and cannot be sued? Where would all the malpractice lawyers go then?

For these reasons, I believe artificial intelligence in radiology will serve as an adjunct to radiology in the very near future.

It remains to be seen how much of an adjunct or whether it becomes “necessary” for all radiologists to use. However, I doubt that AI will replace radiologists in the near future, if ever.


AI is happening and evolving right now.

How it will actually impact healthcare remains to be seen.

In my opinion, it is a near certainty that AI will be an adjunct to radiology in my lifetime.

However, I, like Eliot Siegel doubt it will replace radiologists in the near future, if ever.

Note: I wouldn’t mind being proved wrong though.


Apologies for the late post… for some reason my auto-post at noon PST didn’t work, so I had to post it at ~3:15 PM PST.


Agree? Disagree? Questions, Comments and Suggestions are welcome.

You don’t need to fill out your email address; just write your name or nickname.
