AI was going to save us from the pandemic but it failed for a simple reason

It turns out that AI is not great at everything – yet. Early last year, we all thought that, somehow, it would save the world from the pandemic.

But it didn’t.

In certain areas, of course, AI was useful. Sifting and re-sifting endless piles of data and test results to help produce a vaccine was a good result. So was the unheard-of level of collaboration and co-operation (and a little light-hearted hacking).

Yet many medical experts thought that AI could help in other areas.

Laure Wynants of Maastricht University in the Netherlands studies predictive tools in medicine. She was shocked by how badly AI had performed in helping doctors work out how to triage and treat patients. This was early in the pandemic, when everyone was reeling, everything was new, and the one thing we knew was that we had to keep our health services from breaking.

The AI community leapt into action, excited by the prospect (and the boost the field would get) if it worked. Hundreds of predictive tools were developed; Wynants and her colleagues reviewed fully 232 of them.

None of them proved fit for clinical use, dashing hopes that a clear success would win people over in the increasingly divisive debate about whether AI is a force for good or ill.

It also turns out that AI failed for the reason specialists keep coming back to.

Data. Or what Derek Driggs at Cambridge University calls ‘Frankenstein data’.

If you train AI on multiple merged data sets, you are training it on ‘muddy’ data, full of duplicates and inconsistent formats. And in some cases, researchers trained a model on a data set and then evaluated it on that very same data. That made the models look far more accurate than they really were, producing seriously misleading results.
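
To see why that matters, here is a minimal sketch in Python with scikit-learn (purely illustrative, not any of the actual COVID models): a classifier scored on its own training data looks near-perfect even when the data contains no signal at all, while scoring on held-out data reveals the truth.

```python
# Minimal sketch of the train/test leakage problem described above.
# Illustrative only: random features, random labels, no real signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))     # 500 "patients", 50 noise features
y = rng.integers(0, 2, size=500)   # random labels: honest accuracy is ~50%

# Leaky evaluation: train and score on the same data.
leaky = RandomForestClassifier(random_state=0).fit(X, y)
print("accuracy on training data:", accuracy_score(y, leaky.predict(X)))  # ~1.0

# Honest evaluation: score on data the model has never seen.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
honest = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy on held-out data:", accuracy_score(y_te, honest.predict(X_te)))  # ~0.5
```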

One data set was a collection of chest scans from children who did not have COVID, and it was used to supply the non-COVID examples when teaching AI tools what the disease looked like. Instead of learning to spot COVID, the models learnt to spot children.
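
That failure mode is easy to reproduce in miniature. In the hypothetical sketch below (every name and number is made up for the example), a confounding feature standing in for ‘is a child’ perfectly separates the classes in the training data, so the model latches onto it; on new patients, where the confound no longer tracks the disease, the predictions follow the confound rather than the illness.

```python
# Toy illustration of shortcut learning: the model learns the confound.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Training set: every COVID case is an adult, every control is a child,
# so 'age' separates the classes perfectly even though it is not the disease.
covid = rng.integers(0, 2, size=n)
age = covid.copy()                               # confound equals the label
symptom = covid + rng.normal(scale=2.0, size=n)  # weak real disease signal
model = LogisticRegression(max_iter=1000).fit(np.column_stack([age, symptom]), covid)

# Deployment: age is now independent of the disease.
covid_new = rng.integers(0, 2, size=n)
age_new = rng.integers(0, 2, size=n)
symptom_new = covid_new + rng.normal(scale=2.0, size=n)
pred = model.predict(np.column_stack([age_new, symptom_new]))

print("agreement with the disease:", (pred == covid_new).mean())  # near chance
print("agreement with age:", (pred == age_new).mean())            # high
```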

Too often, we are told that, whatever the question, AI is the answer, but we must be very careful not to overhype it. Instead, the community should learn from this experience.

The AI community will recover, and healthcare already has more AI startups than any other sector, according to an NVIDIA blog post.

Let us hope they go forward with one fundamental lesson in their minds. As people with experience in data will tell you, any transformation or implementation of new technology must start with a complete understanding of the data that is being managed. Without that, AI cannot operate, transformations will fail, and companies will suffer.

1 Comment

  1. Cato Rasmussen: There are certain things in the universe that cannot be other than what they are. These must be studied and thought about in order to be understood, so that we can treat them when something goes wrong. There are also things in the universe that can be other than what they are. We must not use the approaches, methods and tools of the former to treat the latter. This is one of the reasons why AI and machine learning are not the answer to everything. Far from it, in fact.
