Artificial Intelligence (AI) is popping up everywhere, at least in discussions. Intelligent systems are used in many places, and they are getting smarter. But the real bottleneck is not the intelligence, the ‘brains’, of these systems; it’s that AI also needs ‘hands’ to get things done.
AI has become a very popular keyword over the last five years. Most company management groups and boards want to see some AI development in their organizations. Unfortunately, expectations and actual use cases are not always in line. And the biggest problem is not whether the machine learning (ML) or AI models are smart enough to analyze data, handle tasks and make decisions.
Let’s take a simplified AI task. A system collects data, analyzes it, draws conclusions, makes decisions and sends the results on for operative use. If a whole system is built around AI, like a self-driving car, the capability to analyze the data and make decisions can be the bottleneck. But most systems are different.
We can take another example utilizing AI – automating insurance claim processing. We have the same phases, but data and interactions with other systems are much more complex:
- A policyholder fills out a claim, probably on a web form, though in some cases it can still be a paper form. They also attach other documents, e.g. receipts, a report of an offence or a medical report. Getting all of this into a digital format may require OCR (Optical Character Recognition) and NLP (Natural Language Processing).
- The insurance company collects data from other sources, for example the person’s insurance history from a national database, credit rating data, criminal records, and data from similar incidents: any data that helps verify that the information in the claim makes sense, is consistent with other sources, falls within a statistical margin of expected behavior, and is not fraudulent.
- Then the system analyzes the data and makes a decision. The decision can be to pay a certain sum, not pay, or send the case onward for further investigation.
- When the decision has been reached, the system must send a letter or email to the policyholder, store the decision and all documents, start the payment process, and inform third parties (for example, the national insurance database, health care provider, other parties to the incident, or the police).
- Finally, the policyholder might not be happy with the decision and can appeal, which triggers a new process.
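The phases above can be sketched as a small pipeline. Every function, field name and decision rule below is illustrative, not taken from any real claims system:

```python
# Hypothetical sketch of the claim-processing pipeline described above.

def digitize(document):
    # Stand-in for OCR/NLP: in this sketch the text is already machine-readable.
    return document.strip().lower()

def enrich(claim):
    # Stand-in for lookups against external sources (insurance history,
    # credit rating, criminal records, ...).
    external = {"prior_claims": 2, "fraud_flag": False}
    return {**claim, **external}

def decide(claim):
    # Toy decision rules standing in for the real analytics/ML step.
    if claim["fraud_flag"]:
        return "investigate"
    if claim["amount"] <= 1000 and claim["prior_claims"] < 5:
        return "pay"
    return "investigate"

def act(claim, decision):
    # The 'hands': notify the policyholder, archive documents, trigger payment.
    actions = [f"email policyholder: {decision}", "archive documents"]
    if decision == "pay":
        actions.append(f"start payment of {claim['amount']}")
    return actions

claim = enrich({"amount": 450, "text": digitize("  Bicycle stolen  ")})
decision = decide(claim)
print(act(claim, decision))
```

Note how little of the sketch is the decision itself: most of it is moving data in and pushing actions out, which is exactly the point of the example.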
In this example, the data analytics and decision-making are a small part of the overall process flow. There are many other parts: collecting data from several sources, formatting it, entering decision data into other systems and triggering actions in them. What makes this even more complex is that the data typically arrives in many different formats, and part of the information is missing or inaccurate (just think of the claim form the policyholder fills in, plus the attachments). Even a “null” value needs explicit handling: “null” is not “zero”, and depending on the data set it may or may not carry meaning. Many handlers are needed.
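The null-versus-zero point deserves one concrete handler. A minimal sketch (the field name is made up for illustration):

```python
# Illustrative handler for "null" values: None (unknown) is not 0 (a known zero).
def handle_deductible(value):
    if value is None:
        # Missing data must be resolved explicitly, e.g. flagged for lookup,
        # never silently treated as zero.
        return "unknown: needs manual lookup"
    return f"deductible is {value}"

print(handle_deductible(0))     # an explicit zero is valid data
print(handle_deductible(None))  # a null is a gap that needs a handler
```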
One of my companies implemented this kind of system several years ago. Although it was a digitally advanced insurance company in a digitally advanced environment (Scandinavia), there was still a lot of work to do. A typical rule of thumb in the data business is that 60% to 80% of the work is pre-processing the data. That is the reality when you implement AI in an enterprise with many existing systems, some of them quite old-fashioned: just think of SAP, NetSuite and links to banking systems.
We can also consider a more modern case: getting data from several wearable devices (Apple Watch, Fitbit, Withings, Garmin, Oura, etc.) into one place, in a format you could build ML/AI solutions on top of. Even collecting all that data is not as simple as you would think, despite all the talk about open APIs. APIs are still not universal, and while an API will be structured, the quality of the data can vary from one source to another.
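To see why even structured APIs create work, consider two vendors reporting the same measurement in different shapes. The payloads and field names below are invented for illustration and do not match any real vendor API:

```python
# Hypothetical payloads: two wearable vendors report the same resting heart
# rate with different structures and naming conventions.
vendor_a = {"restingHeartRate": 52, "date": "2024-05-01"}
vendor_b = {"metrics": {"rhr": {"value": 52}}, "day": "2024-05-01"}

def normalize(source, payload):
    # One small adapter per source: this is the unglamorous 'hands' work.
    if source == "vendor_a":
        return {"date": payload["date"],
                "resting_hr": payload["restingHeartRate"]}
    if source == "vendor_b":
        return {"date": payload["day"],
                "resting_hr": payload["metrics"]["rhr"]["value"]}
    raise ValueError(f"no adapter for source {source!r}")

records = [normalize("vendor_a", vendor_a), normalize("vendor_b", vendor_b)]
print(records)
```

Every new source means another adapter, and the adapters, not the model, are where most of the effort goes.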
A term I have started to like is ‘AI hands’: solutions for collecting data from many old and new systems, formatting it in one place, and then getting the processing results into operative use in other systems. Companies often forget or ignore the development of ‘hands’ because it is fancier to talk about the latest innovations for the ‘brains’. As always, great thinking is rarely enough; you must first collect and organize information, and then get things done based on your thoughts.
In practice, these ‘hands’ are like software robots (RPA) that can work with different systems and devices. They include additional software components (e.g. OCR, NLP, data cleaning, APIs) to collect data and trigger actions (e.g. sending emails, starting a payment, starting a delivery). Another useful tool is the webhook, which can trigger background tasks, for example in a serverless environment, such as verifying data or running NLP. All of this means the capability to work with a vast number of different systems and formats.
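The webhook pattern can be sketched in a few lines: validate the incoming payload quickly, queue the slow work for a background worker, and answer immediately. The payload shape and task names below are illustrative assumptions:

```python
import json
import queue

# A stand-in for a task queue; in production this might be a message broker
# or a serverless function trigger.
background_tasks = queue.Queue()

def handle_webhook(raw_body: bytes) -> int:
    """Validate the payload and return an HTTP status code; the slow work
    (data verification, NLP) runs asynchronously in the background."""
    try:
        event = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # malformed payload
    if "claim_id" not in event:
        return 422  # valid JSON but missing the field we need
    # Hand off to a worker instead of blocking the caller.
    background_tasks.put(("verify_documents", event["claim_id"]))
    return 202  # accepted for background processing

status = handle_webhook(b'{"claim_id": "C-1042"}')
```

The key design choice is returning 202 (accepted) right away, so the calling system never waits on the heavy processing.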
Open source is often the best way to support many kinds of needs, from small and rare systems to major ones. There are so many data formats, and so much unformatted data, that no company can cover them all in a proprietary system; here, open source is the only realistic option. Both ‘hands’ and ‘brains’ should be built on commonly used, widely available programming languages (e.g. Python) that make it easy to get ‘brains’ and ‘hands’ working together with open source components.
To get more AI and ML into use, we need more and better ‘hands’ for AI. Management groups must invest in these capabilities if they want to implement and utilize AI. The same goes for consumer services: someone must offer solutions where the data is available in a usable format and there are tools to put the results to real use. In last year’s Gartner Hype Cycle, many AI solutions were at the hype peak. AI ‘hands’ are needed to turn that hype into productivity.