When we look back on the various ‘G’s, 3G and smartphones were a major paradigm shift from the old-fashioned phone calls and feature phones of the 2G era to mobile internet and apps that really took off when 4G arrived. But the last fifteen years have been a relatively stable era for mobile services – the networks are faster and the smartphones have more processing power and bigger screens, but the business model has remained basically the same since that big paradigm shift from voice calls to mobile broadband data, streaming and apps.
But now, with 5G being rolled out and 6G already in the pipeline and expected to arrive around 2030, it seems likely we will see another major paradigm shift in mobile services.
There are a number of ideas about what that paradigm shift will look like, but a common thread is that the black rectangles we use today won’t be our primary interface with mobile services. Companies like Meta, Google and Microsoft are working on augmented reality headsets and similar concepts to replace handsets. Nokia CEO Pekka Lundmark goes a step further, commenting at the recent World Economic Forum in Davos that by the time 6G mobile networks begin launching at the end of the decade, instead of smartphones, we’ll be using smart glasses and other wearables, and some devices “will be built directly into our bodies.”
The user interface is a fundamental part of mobile services, of course – but it is not the only important part. Many other essential or even more critical components can also change what kind of devices and services we use in the future.
The current model won’t work in the future
The most typical way to use mobile services now is to run an app locally on the handset, which also supplies the user interface. Typically, the apps are not fully independent but linked to the app provider’s backend, which means a significant part of the data often resides in that backend as well.
This model isn’t really suitable for the mobile services of the future when we look at the changes coming down the road. Here are some of the changes we can expect.
First, data is playing an increasingly central role in mobile services. We have more sensors to collect data, which we can also combine with external data sources. Many sensors won’t be in the phone: there will be wearable sensors in clothing, in shoes, on the skin and perhaps in headset-type devices. The critical question is how to combine and utilize all that data, which means each sensor cannot just feed its data independently to its own app.
Second, when users generate and utilize a lot of data, a new model for storage and processing is needed. Today, each application keeps its own data, or the user separately shares data from one service to another. As we know, this kind of data sharing creates many privacy concerns, mainly because it is very difficult for the user to track.
Third, the user will have several UIs for their mobile services, e.g. through augmented reality, virtual reality, voice and maybe something we don’t even know yet. But somehow, the user should be able to control all services through these UIs – most probably, this won’t work if all apps are independent and have their own user interface.
Fourth, the user will need one simple way to manage all this. When there are many connected devices, it is unclear with the current architecture who controls all devices and applications. Now the mobile phone is a kind of ‘base station’ for user control. However, is it going to stay like that? And is it really the optimal way to use all devices and data?
Fifth, combining and utilizing all the data and services across every device a user owns will be incredibly complex. This probably means new intelligent layers will be needed in the data and application ecosystem to combine things and create a user-level view of all data, apps and devices. This would change the platform concept for mobile services from the current situation (where apps and wearable devices are essentially independent, with no shared platform model) to a new approach where everything each user has could be combined.
The separation of front ends (i.e. user interfaces) from backends will also likely go further than the current status quo, in which the backends are almost synonymous with a mobile phone OS or a centralized server operated by a service provider.
In the future, we may well see more ‘headless’ applications that operate in the backend instead of only in the user interface at the front end. These could be more intelligent AI applications for users that send actions and suggestions to wherever the user is (for instance, the user’s phone, laptop, Siri/Alexa assistant, home devices or car dashboard, etc.).
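As a rough sketch of the idea, a ‘headless’ application could be a backend process with no UI of its own that dispatches suggestions to whichever front end the user is currently on. All class and method names below are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    priority: int

class HeadlessApp:
    """Backend-only app: no UI of its own, just logic that emits suggestions."""
    def __init__(self):
        self.frontends = {}  # front-end name -> callable that renders a Suggestion

    def register_frontend(self, name, render):
        self.frontends[name] = render

    def dispatch(self, suggestion, active):
        # Send the suggestion only to the front end the user is currently using.
        if active not in self.frontends:
            raise KeyError(f"no such front end: {active}")
        return self.frontends[active](suggestion)

# Hypothetical front ends: a car dashboard and a voice assistant.
app = HeadlessApp()
app.register_frontend("car_dashboard", lambda s: f"[DASH] {s.text}")
app.register_frontend("voice", lambda s: f"Speaking: {s.text}")

msg = app.dispatch(Suggestion("Leave now to beat traffic", priority=1), active="voice")
```

The same backend logic could just as well target a phone notification or an AR overlay; only the registered renderer changes.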
Running apps only on the user’s device will have limitations. This is why the cloud is becoming more important to mobile services. But as computing capacity in the cloud increases, we can see a scenario where apps run in the user’s own backend environment, for example a personal cloud, rather than in a cloud controlled by the app provider.
Transitioning towards user-centric mobile services
All five points above mean that the architecture for these future devices and mobile services must differ from today’s model. One scenario is that we would have platforms with layers that combine data from many devices and other sources, with applications built on top of them. Those applications could then be used from several devices with different UIs. For example, when you go for a run, sensors in your clothes, shoes and skin measure your movement and biometrics – a smart layer can combine your heart rate, step length, breathing, blood glucose and other metrics at specific moments. This makes it possible to create different kinds of apps that analyze your run and give you real-time advice via different UIs – voice, AR/VR, haptic pulses or virtual charts.
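The running example can be sketched as a small fusion layer that merges time-stamped readings from independent sensors into one user-level record any app or UI can consume. The metric names and structure below are illustrative assumptions, not a real product API.

```python
from collections import defaultdict

class SensorFusionLayer:
    """Combines time-stamped readings from independent sensors into one user-level view."""
    def __init__(self):
        self.readings = defaultdict(dict)  # timestamp -> {metric: value}

    def ingest(self, timestamp, metric, value):
        # Each sensor pushes into the shared layer instead of its own app silo.
        self.readings[timestamp][metric] = value

    def snapshot(self, timestamp):
        # One unified record for a moment in time, consumable by any UI.
        return dict(self.readings.get(timestamp, {}))

layer = SensorFusionLayer()
t = 120  # seconds into the run
layer.ingest(t, "heart_rate", 152)         # chest strap
layer.ingest(t, "step_length_m", 1.1)      # shoe sensor
layer.ingest(t, "blood_glucose_mmol", 5.4) # skin patch

combined = layer.snapshot(t)
```

A voice UI could then read `combined` aloud while an AR UI charts it – both drawing on the same fused view rather than three separate sensor apps.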
The critical question is where all these extra layers and user data are stored. Most probably, there will be personal cloud solutions for consumers, where consumer data and at least part of the apps are kept. These will likely be more decentralized than current cloud models in two ways: 1) a user-held data model, i.e. each user can own and control their data and application environment themselves, and 2) data and processing near the user – in other words, a kind of edge computing model that offers lower latency and better availability.
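To make the user-held data model concrete, here is a minimal sketch, under the assumption that apps come to the user’s store and read only what the user has explicitly granted; the names are hypothetical.

```python
class PersonalDataStore:
    """User-held data: the user owns the store and grants apps scoped read access."""
    def __init__(self):
        self._data = {}    # key -> value, kept near the user (personal cloud / edge)
        self._grants = {}  # app name -> set of keys it may read

    def put(self, key, value):
        self._data[key] = value

    def grant(self, app, keys):
        self._grants.setdefault(app, set()).update(keys)

    def revoke(self, app):
        # The user can withdraw access at any time.
        self._grants.pop(app, None)

    def read(self, app, key):
        # Apps query the user's store; data never leaves for the provider's servers.
        if key not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {key}")
        return self._data[key]

store = PersonalDataStore()
store.put("resting_heart_rate", 58)
store.grant("coach_app", {"resting_heart_rate"})
value = store.read("coach_app", "resting_heart_rate")  # allowed while granted
store.revoke("coach_app")
# A further store.read("coach_app", ...) would now raise PermissionError.
```

Because the grant table lives with the user rather than with each app provider, tracking who can see what – the privacy problem noted earlier – becomes a single local question.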
We are moving to a new era in which we’ll have lots of sensors, new devices to control functions, and much more data. We don’t know all the answers yet, but we can be pretty sure that the current model of mobile phones and apps is not the best fit for that era. Just as enterprises want control of all their data and services and build extra layers to manage them, consumers will face the same needs as they accumulate more devices, data and services to manage their daily lives. We will see a shift toward something more like a platform model for personal mobile services, but one more decentralized than the current cloud and mobile services architecture.