Q&A: Disrupting your OSS with analytics, automation and AI

Mounir Ladki, president and CTO of Mycom OSI. Photo courtesy of TM Forum

When people talk about disruptive technologies, they don’t usually think of the backend. But that’s where some of the more crucial disruption needs to happen – and is happening. In this exclusive Q&A, Mounir Ladki, president and CTO of Mycom OSI, explains how analytics, AI and automation can take the OSS to the next level, the challenges of integration and why service assurance will be crucial for IoT services.

Disruptive.Asia: The OSS space arguably needs disrupting – how is Mycom OSI contributing to that effort?

Mounir Ladki: We are playing a disruptive role in the OSS space. We have built technologies that are cloud-native, with an open-source, open architecture, REST APIs, and a lot of analytics and automation. And now we're helping our customers play a constructive role in the ecosystem.

How so?

For example, in the IoT space there are a lot of problems very similar to what we see in the telecoms space, in that services need to be assured. Take the smart energy domain: the energy company wants to put sensors in homes so that, as a service provider, it can collect data, take it back to the cloud and remotely optimize your thermostat settings – the idea is to give you the hot water, heating and lighting you need, while at the same time reducing your energy bill by 50%. But if that entire chain is not working, if the service is not assured continuously, you might end up without hot water.

Then there are the analytics that are required. Because we're now collecting data, you need a brain – the analytics software – to process all the information. For example, if energy consumption is high because one of the appliances is consuming more than it should, you can detect that and propose solutions.

The point is that if you want to assure a smart energy service, making sure the connectivity is working properly is only one aspect. You have to assure that the entire chain is working, and then put an intelligence layer on top of that: analytics. These are value-added services that CSPs can offer to smart energy companies. I don't think Singtel or some other operator is going to become a smart energy service provider or a healthcare provider or a smart transport provider – they will not be everything. But they can leverage platforms such as ours to be a critical managed services provider to those other verticals and then share part of the benefit with them.
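To make the appliance example above concrete, here is a minimal sketch of the kind of anomaly check an analytics layer might run over per-appliance consumption data. The readings, window size and threshold are illustrative assumptions for this article, not MYCOM OSI's actual analytics.

```python
# Minimal sketch (not MYCOM OSI's analytics): flag an appliance whose
# consumption jumps well above its own trailing baseline, using a rolling
# z-score. Readings, window and threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_abnormal_consumption(readings_kwh, window=24, z_threshold=3.0):
    """Return indices of readings far above the trailing baseline."""
    anomalies = []
    for i in range(window, len(readings_kwh)):
        baseline = readings_kwh[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings_kwh[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical hourly readings for one appliance: steady, then a jump.
hourly = [0.4 + 0.02 * (i % 5) for i in range(48)] + [1.8, 1.9, 2.0]
print(flag_abnormal_consumption(hourly))  # indices of the abnormal hours
```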

You mentioned automation – where does that come in?

Our customers want to offer a digital experience, but this is incompatible with the latency introduced by the human factor. If you look at any network operation center (NOC) or service operations center (SOC), you see people looking at screens, trying to identify problems and looking into them. But when you have a connected car experience, which requires minimal latency, the human factor can no longer be in the middle of the chain.

So we have introduced extreme automation, where monitoring of the data, detection of the problem, identification of the solution, fixing of the problem in the network, the follow-up measurements and so on are all done automatically. The idea here is zero-touch operation and maintenance.

Tomorrow's NOCs and SOCs for DSPs in the digital world should have no human presence, or very little. They should all be algorithms, software, cognitive platforms, robots – it's not going to be a physical operator sitting there.
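As an illustration of that zero-touch loop – monitor, detect, remediate, verify – here is a hedged skeleton in Python. The KPI names, thresholds and remediation step are assumptions made for the sketch; this is not MYCOM OSI's platform.

```python
# Illustrative closed-loop skeleton (not MYCOM OSI's platform): monitor ->
# detect -> remediate -> verify, with no human in the loop. KPI names,
# thresholds and the remediation step are assumptions for this sketch.
import random
import time

def collect_kpis():
    """Stand-in for streaming KPI collection from the network."""
    return {"packet_loss_pct": random.uniform(0.0, 5.0)}

def detect_problem(kpis, threshold=2.0):
    return kpis["packet_loss_pct"] > threshold

def remediate():
    """Stand-in for an automated fix, e.g. rerouting traffic."""
    print("applying automated remediation ...")

def closed_loop(cycles=5, interval_s=1.0):
    for _ in range(cycles):
        kpis = collect_kpis()
        if detect_problem(kpis):
            remediate()
            # Verify by re-measuring rather than assuming the fix worked.
            if detect_problem(collect_kpis()):
                print("remediation did not clear the fault; escalating")
        time.sleep(interval_s)

closed_loop()
```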

How easy is this to integrate into existing processes?

It can be difficult sometimes, because operators are very traditional or very resistant to the idea of automation. So there are two important things here. One is to choose a pilot project where you implement it in a specific context – IoT, for example, because this is a new use case, so people will be a lot more open to it.

The second thing is to drive it from the top. If you talk to the people in the SOC and you tell them you're going to automate their job, naturally they're not going to like that. But on the management side, people understand it better and they see the value. So with a pilot project, we can prove that it works and prove the value, and then we can roll it out at a larger scale. But this has to go hand-in-hand with a broader cultural shift – that evolution needs to happen.

But that's the big challenge, isn't it – getting top management buy-in to drive that cultural shift?

This is a big challenge. When I talk to some of my customers – the CTOs – they tell me that their biggest challenge is that they've been doing the same thing for 15, 20 or 25 years, using technology that was made in the 90s, so they're not really at ease with the cloud, open software, machine learning, artificial intelligence, open APIs and so on. It is a big problem.

There's another problem: agility. People are not used to delivering something new every two weeks. At the same time, when you test and introduce new services in the telco world, there is still a high demand for quality and reliability. This has to be taken into consideration as well – you have to be disruptive and you have to be agile, but at the same time you have to remember that this is part of an infrastructure business and it should be able to manage itself.

On the analytics front, are you working on artificial intelligence as well?

Absolutely. Right now we've started work on machine learning, and the first use case is fault management – where the system detects whether there's an incident, be it minor or critical.

Ever since fault management systems were invented, their mission has always been reactive. You look for faults, detect them, and process them very quickly to get to the root cause of the problem and figure out how to fix it. With machine learning, you can become predictive, and predict faults before they occur.

Netflix is doing this now. They use machine learning to detect the probability of faults in certain nodes – maybe they find one that has a high probability of failure, and they reroute the traffic away from it so as to minimize the chances of their service being impacted. And they proactively change whatever needs changing.

We apply this to fault management systems to make them proactive and help them predict what is going to go wrong, then fix it before it can even impact the customer.
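A hedged sketch of the predictive idea described above: train a simple classifier on per-node telemetry and act on nodes whose predicted failure risk is high, much like the Netflix example. The features, labels and thresholds are synthetic assumptions, and scikit-learn is assumed to be available; this is not the vendor's actual model.

```python
# Hedged sketch of predictive fault management, not the vendor's model:
# train a classifier on synthetic per-node telemetry to estimate the
# probability that a node fails soon. Assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features per node: [cpu_load, temperature_c, error_rate]
X = rng.uniform([0.1, 30.0, 0.0], [1.0, 90.0, 0.2], size=(500, 3))
# Toy label: nodes running hot with high error rates tend to fail.
y = ((X[:, 1] > 75.0) & (X[:, 2] > 0.1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score live nodes and proactively drain traffic away from risky ones,
# much like the Netflix example above. Node data here is hypothetical.
live_nodes = {"node-a": [0.9, 85.0, 0.18], "node-b": [0.3, 45.0, 0.01]}
for name, features in live_nodes.items():
    p_fail = model.predict_proba([features])[0, 1]
    action = "drain traffic proactively" if p_fail > 0.5 else "no action"
    print(f"{name}: predicted failure risk {p_fail:.2f} -> {action}")
```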

What other kinds of machine-learning use cases are you looking at?

You can also apply this to predicting consumption of digital content – what kind of content people will consume, when and where – so that you are ready for it and can provide the right network resources and capacity to respond to demand at that moment, maximizing the efficiency of your capex investments. At the same time, it helps you deliver a better customer experience.

We are part of the 5G Innovation Centre at the University of Surrey in the UK, where they have been experimenting with machine learning to predict digital media player consumption. The accuracy was over 90%. That's a sample of what we can achieve with this technology – the ability to predict with such accuracy what people will do. There is so much more you can do – not only in terms of selling data, but being really proactive and prescriptive in your approach and being much better prepared for what people want.
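As a rough illustration of forecasting content demand for capacity planning, here is a seasonal-naive baseline in Python. The hourly figures and the method are illustrative assumptions only – nothing here reflects the 5GIC experiment or its 90% accuracy.

```python
# Minimal sketch of demand forecasting for capacity planning: a
# seasonal-naive baseline that predicts each hour of tomorrow as the
# average of the same hour on previous days. Data and method are
# illustrative assumptions only.
def seasonal_forecast(hourly_demand_gbps, season=24):
    """Forecast the next 'season' hours from the per-hour historical average."""
    days = len(hourly_demand_gbps) // season
    return [
        sum(hourly_demand_gbps[d * season + hour] for d in range(days)) / days
        for hour in range(season)
    ]

# Hypothetical traffic: two days of hourly demand with an evening peak.
history = [2.0 + (8.0 if 19 <= h % 24 <= 22 else 0.0) for h in range(48)]
tomorrow = seasonal_forecast(history)
print(f"predicted evening peak: {max(tomorrow):.1f} Gbps -> provision ahead of it")
```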
