We have seen many AI developments recently, from deep learning to ChatGPT. At the same time, user experience (UX) is widely recognized as one of the most important aspects of building products and services. But how intertwined are the design of AI and UX?
Not as much as it should be. And that’s not good for the end-users expected to use these products.
ChatGPT has gotten a lot of publicity and interest recently. One of its strengths is its conversational user interface. Anyone can easily ask questions and get answers in plain English. There are concerns about the reliability of the information it offers, because it draws all of it from the internet. But at its core, ChatGPT is a generic tool based on generic information that requires only easy, plain language to operate.
This is different from many other AI solutions, which are typically tools that enable professionals to build a solution for a specific need. That means you first need data, training tools, and professionals before anything an end user can actually utilize gets built.
The problem is that the end-user tool often isn’t actually based on the end user’s needs – it’s based on what’s feasible or easy to build with the tools available on the market. That means it’s not an optimal or complete solution to the user’s need – it just offers the functions that are easy to implement with certain tools. Sometimes this results in a very confusing UX: the AI handles some tasks with the data it has, while users must still take care of the remaining tasks and everything that falls outside that data.
Tool-oriented approach to AI
This matters for many systems, from professional expert systems to software and products to consumers. Let’s look at a couple of simple examples.
First, let’s look at RPA and intelligent automation tools. They are used to automate simple office tasks. According to the marketing slides, these systems make office work more effective and meaningful, and employees are happier when they can avoid boring, time-consuming routines.
In practice, however, these solutions are very much based on the “tool-oriented” approach – i.e., they offer a toolset to automate tasks. But this doesn’t guarantee that the automation of certain tasks will offer an excellent user experience. Typically, automation has all kinds of problems.
In accounting automation, for example, some entries may be missing or booked to the wrong accounts. The user ends up spending a lot of time reviewing and correcting entries and fixing the automation itself. In practice, automation can produce more errors than manual work, where an accountant handles all transactions systematically. A professional accountant has a lot of background information and experience, such as knowing how a similar invoice or transaction would be booked differently in different situations.
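One common way to keep the accountant in the loop is to let automation book only what it is confident about and route everything else to human review. The following is a minimal, hypothetical sketch of that pattern – the keyword rules, account names, and confidence threshold are all illustrative assumptions, not a real system:

```python
# Hypothetical human-in-the-loop accounting automation sketch:
# book a transaction automatically only when the classifier is confident,
# otherwise queue it for an accountant's review instead of guessing.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real system would tune this


def classify_account(description):
    """Toy stand-in for an AI classifier: returns (account, confidence)."""
    rules = {"taxi": ("travel_expenses", 0.95), "laptop": ("it_equipment", 0.97)}
    for keyword, result in rules.items():
        if keyword in description.lower():
            return result
    return ("unknown", 0.2)  # low confidence for anything unrecognized


def process(transactions):
    booked, review_queue = [], []
    for tx in transactions:
        account, confidence = classify_account(tx)
        if confidence >= CONFIDENCE_THRESHOLD:
            booked.append((tx, account))
        else:
            review_queue.append(tx)  # the accountant stays in the loop
    return booked, review_queue


booked, review_queue = process(["Taxi to airport", "Office party catering"])
# "Taxi to airport" is booked automatically; "Office party catering"
# lands in the review queue instead of being booked to a wrong account.
```

The point is not the toy rules but the UX decision: the system is designed around the accountant's review workflow rather than around maximizing the share of automated bookings.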
Personalization and optimization of machine functions is another example. We all know, for example, that streaming and music services want to recommend content and personalize our service front page. Likewise, a machine may optimize a user interface or make suggestions based on what you have done earlier.
The problem is that this is still based on very limited data. For example:
- Netflix only recommends movies based on your watching history on its platform; it cannot know your interests generally or what you have watched elsewhere.
- Your car may recommend that you take a break after two hours of driving, but it doesn’t know how tired you actually are.
- Your running shoes may recommend taking faster and shorter steps after comparing all runners’ average running style data, but they don’t know your target and energy level for your run that day, or even why you are running.
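The Netflix case above can be sketched in a few lines. This is a deliberately simplified, hypothetical recommender with made-up genre data: it ranks genres purely from on-platform logs, so interests the user pursues elsewhere contribute nothing to the result:

```python
# Minimal sketch (hypothetical data) of why platform-only history limits
# recommendations: the service scores genres purely from its own logs.

from collections import Counter

on_platform_history = ["sci-fi", "sci-fi", "comedy"]     # what the platform sees
off_platform_interests = ["documentary", "documentary"]  # invisible to it


def recommend(history):
    # rank genres by how often the user watched them *on this platform*
    counts = Counter(history)
    return [genre for genre, _ in counts.most_common()]


print(recommend(on_platform_history))
# "documentary" never appears in the ranking, even though it dominates the
# user's actual interests, because that data lives outside the platform.
```

The same structure applies to the car and the running shoes: the model can only optimize over the signals it receives, and the signals that matter most to the user are often not among them.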
Relevant data needed for a good AI UX
Those are simple examples of basic tasks. The more automation and AI we want to implement for specific needs, the more important it becomes that a system is based on the user’s needs, can offer a good user experience, and has enough data to truly solve those needs. Simply sub-optimizing something based on inadequate data and average user cases isn’t enough.
Of course, we can always say that no system is perfect and no data is complete. Therefore, we must ask what is relevant to offer a good UX.
If we consider the examples above:
- An automated accounting system should learn from an accountant’s work how they actually book a large number of transactions, and how auditors may have asked to correct them.
- Movie recommendations should be based on data from all movies, TV shows, and even books you have watched and read, including what time you consumed them and your stress level at the time.
- Your car should know how tired you are, how well you slept last night, and your capability to concentrate.
- Your running shoes should know your targets for running, your running style, heart rate, tiredness, and how long you plan to run today.
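To make the running-shoe example concrete, here is a hypothetical sketch of how personal context could adjust a generic recommendation. All the numbers, parameter names, and adjustment rules are illustrative assumptions, not a real product's logic:

```python
# Hypothetical sketch: a cadence suggestion that blends a population average
# with the runner's own context (goal, fatigue), instead of relying on
# average-runner data alone. All values are illustrative assumptions.

POPULATION_AVG_CADENCE = 175  # steps/min, assumed average across runners


def suggest_cadence(goal, fatigue_level):
    """goal: 'recovery' or 'race'; fatigue_level: 0.0 (fresh) .. 1.0 (exhausted)."""
    cadence = POPULATION_AVG_CADENCE
    if goal == "recovery":
        cadence -= 10  # an easy run doesn't need race-pace turnover
    elif goal == "race":
        cadence += 5
    cadence -= int(10 * fatigue_level)  # back off when the runner is tired
    return cadence


print(suggest_cadence("recovery", 0.8))  # 157: context lowers the generic target
```

Even this trivial blend produces a different recommendation for a tired runner on a recovery day than for a fresh runner racing – something a purely population-average system cannot do.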
As with many other solutions that use personal data, there are also privacy questions. Consumer-facing companies must be aware that collecting and processing sensitive user data through IoT devices and AI systems is very risky. That’s why these solutions should also be based on models where users control their own data and can use it in these solutions without having to share it with anyone.
A proper AI UX starts with the user
Nowadays, many tools, recommendations, and machines are designed with a technology-first mindset. Some technologies have been developed and then put into use without first determining how much value they really offer to the end user.
More and more data is available all the time. AI is coming to many products and services, although the inevitable hype makes it hard sometimes to know what is real AI and what is marketing talk. But to really offer value to users, the AI solution design must be based more on customer needs.
ChatGPT shows us the value of making AI products easy to use. But it also relies on generic public data from the internet. Most AI products are based on more specific use cases and more specialized data. They can’t just offer generic results like ChatGPT does – their results must be very accurate and reliable.
It takes a lot of work to design an AI product that offers a proper UX. The starting point really must be understanding how to solve a user’s needs properly, what relevant data is needed, and what the full solution requires. Put simply, it’s not enough to say your product has AI in it – that means nothing if the UX is poor and offers no real value to the customer.