Wednesday, June 19, 2024

What We Learned from Google’s Healthcare AI Mixed Results


Now a lot of you, like me, are watching Westworld and are likely thinking, whew, humanity is good for at least another year, but that’s not where I’m going with this. Google developed an AI to help with diagnosis at scale, and it was impressively accurate in the lab but not so much in actual practice in Thailand; in fact, the deployment at times slowed diagnosis rather than speeding it up.

But this was a field test, and initial field tests often fail. This failure, however, wasn’t due to a problem with the AI. It was due to the nature of the deployment, and the problems are fixable, and far easier to fix than it would be to create a new AI.

I think this also showcases why so many AI projects fail. What Thailand found is what a lot of IT folks find: if you don’t understand the limitations and strengths of the AI you are deploying, that lack of knowledge will cause the project to fail.

Let’s talk about deploying AIs correctly this week, and why the lessons Thailand, and Google, took from this trial will help ensure the success of future deployments.

Understanding the Limited Power of a Current-Generation AI

While science fiction programs like Westworld imply AIs that can replace humans, we are one to three decades away from that kind of capability. What we have today are focused AIs that are designed, and trained, to do one thing exceedingly well.

In the case of the Google project, that one thing is determining, from a high-quality image, whether a patient with diabetes shows signs of diabetic retinopathy. In the lab, the AI, which was developed by Google Health, demonstrated it could identify signs of diabetic retinopathy in 10 minutes with 90% accuracy.

This capability is vital for Thailand, where clinics are struggling to care for around 4.5 million patients with only 200 retinal specialists. In practice, though, 20% of the images were rejected, patient screening dropped to 5 per hour, and some people had to wait days for a diagnosis.

The causes were twofold. The AI tool is cloud-based, and the tests were often done in remote areas with poor internet connections, making uploads of high-quality images problematic. In addition, overworked nurses were often unable to capture images of high enough quality for the system to scan, which is what produced the extremely high rejection rate.

This failure showed that the AI required high-speed internet and a way to take high-quality pictures of eyes in order to function, neither of which should be a surprise given how the system was trained. But, for some reason, those requirements weren’t met, and that is what caused the system to fail.

Now, the system was designed to be used by people who weren’t well trained, but the apparent assumption was that the lack of training would be in medicine, not in photography. The tool did require that the nurses operating it have adequate cameras and the training to take the necessary high-quality pictures, and it should never have been deployed in areas with limited bandwidth.
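Requirements like these can be enforced in software rather than left to chance. As a minimal sketch, a deployment could run a preflight check before attempting a cloud upload, telling the nurse to retake a too-small photo or to queue the upload when the link is too slow. The threshold values and function names below are illustrative assumptions, not details of Google’s actual system.

```python
# Hypothetical preflight check for a cloud-based screening tool:
# verify image size and link bandwidth before attempting an upload.
# Thresholds are illustrative assumptions, not real system values.

MIN_RESOLUTION = (1600, 1600)   # assumed minimum pixels for a usable retinal image
MIN_BANDWIDTH_MBPS = 2.0        # assumed minimum uplink for a timely upload

def preflight_ok(image_width, image_height, bandwidth_mbps):
    """Return (ok, reasons) so the operator knows what to fix before uploading."""
    reasons = []
    if image_width < MIN_RESOLUTION[0] or image_height < MIN_RESOLUTION[1]:
        reasons.append("image resolution too low; retake the photo")
    if bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        reasons.append("bandwidth too low; queue the image for later upload")
    return (not reasons, reasons)

# Example: a sharp 2048x2048 image on a 5 Mbps link passes the check.
print(preflight_ok(2048, 2048, 5.0))    # (True, [])
# A small image on a slow link fails for both reasons.
print(preflight_ok(800, 800, 0.5))
```

A check like this doesn’t make the AI any smarter, but it keeps the system inside the operating envelope it was trained for, which is exactly the lesson of the Thailand trial.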

There was one nurse who was well trained and who had adequate bandwidth, and she screened 1,000 patients with great success. It is interesting to note that those patients were fine with a machine making the diagnosis.

Wrapping Up: AI Deployment Isn’t Multiple-Choice

This partially failed trial is a showcase for why you must understand the design parameters of an AI before deployment. In this case, if you can’t properly train the nurses, can’t supply them with proper photographic equipment, or have limited bandwidth where you want the system to function, you need a different solution, because this one will fail.

If you saw the movie Ford v Ferrari, you may recall that one of the reasons the Ford GT won the race was that the Ferrari driver pushed his car beyond the physical limits set for it and blew the engine. Or, if you were a fan of Clint Eastwood’s Dirty Harry, one of his sayings was “a man’s got to know his limitations.”

If you understand and stay within the design parameters of the AI, you are likely to be successful; if you don’t, you’ll probably feel a bit like that Ferrari driver and maybe even hear Dirty Harry’s voice mocking you. And, my friends, that wouldn’t be a good thing.
