Escalating demand for data centers over the last few years has created a dilemma for service providers looking to build more of them – where to put them? You want them as close to your users as possible, but there are only so many spaces in urban centers where you can build. The need to move DCs to the suburbs spurred interest in data center interconnect (DCI) solutions, which link DCs with enough capacity that distance becomes irrelevant.
On the sidelines of Capacity Asia last week, Disruptive.Asia editor in chief John C Tanner caught up with Anup Changaroth, senior director of Business Development & Solutions at Ciena, to talk about DCI’s latest challenge – interconnecting DCs internationally over subsea cables. Perhaps inevitably, AI also came up.
Disruptive.Asia: Initially the discussion about DCI was about connecting metro and suburban data centers – how did it end up becoming a long-haul issue involving subsea cables?
Anup Changaroth: You’re right, the original premise was that you couldn’t build data centers in the middle of cities – you had to go outside, where costs were lower. But you still had to serve the areas where the populations were, so you needed massive DCI capacity to keep the data centers close to the edge, where the customers are. Now look at the ASEAN market: Singapore is a digital hub, with a lot of inbound traffic from Malaysia, Indonesia and so on – all of which drives huge amounts of DCI traffic. So DCI is now much broader than just metro.
How does that change things in terms of planning, performance issues, etc?
The latency effect of longer distances has an impact. Look at the content players, like Google – they are replicating data center content every four hours, which in turn drives capacity demand. You have to maintain that synchronization of content, otherwise you will not deliver what the customer demands. And there are some things you cannot change – light takes a certain amount of time to travel the distances involved, which means there is always latency.
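The physics behind that last point can be sketched with a back-of-envelope calculation. This is purely illustrative – the refractive index figure is a typical assumption for silica single-mode fiber, not a value from the interview:

```python
# Rough propagation-delay estimate over a fiber route.
# Assumption: light in silica fiber travels at roughly c / 1.468 (~204,000 km/s).
C_VACUUM_KM_S = 299_792.458        # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.468     # typical for silica single-mode fiber (assumed)

def one_way_latency_ms(route_km: float) -> float:
    """Propagation delay over a fiber route, ignoring equipment and routing delay."""
    speed_in_fiber_km_s = C_VACUUM_KM_S / FIBER_REFRACTIVE_INDEX
    return route_km / speed_in_fiber_km_s * 1000

# A ~1,000 km subsea route costs roughly 4.9 ms each way, ~9.8 ms round trip –
# a floor no amount of equipment upgrades can remove.
print(round(one_way_latency_ms(1000), 1))  # → 4.9
```

This is why DC-to-DC synchronization over long subsea routes has a hard latency floor regardless of link capacity.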
But it also depends on what customers need in terms of latency performance. Take high-frequency trading on stock exchanges: two years ago, firms wanted the lowest latency and the fastest links, so that their trades were microseconds faster than competitors’. That’s still relevant, but now, because of machine trading, that speed can trigger false spikes, so they are deliberately adding latency into the system to make false spikes less likely to cause problems.
Sounds like something where you’d want automation to play a role – what are you up to along those lines?
We have been looking at how you automate these kinds of things. For example, when we look at how to put more wavelengths on the fiber, the traditional way of planning capacity was to look at capacity requirements year by year and add wavelengths over time. And you factor in issues such as fiber cuts – you might assume five cuts a year, each causing a certain amount of capacity loss, and each splice adding ‘X’ dB of loss. So you plan for the fiber’s end of life.
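The traditional planning approach he describes can be expressed as a simple margin budget. The figures below are illustrative assumptions in the spirit of the example, not Ciena planning values:

```python
# Hedged sketch of traditional end-of-life link-budget planning: assume a
# fixed number of fiber cuts per year, each repair adding splice loss, and
# reserve that optical margin up front for the whole design life.
# All numbers are illustrative assumptions, not vendor figures.
def end_of_life_margin_db(cuts_per_year: float,
                          splices_per_cut: int,
                          loss_per_splice_db: float,
                          design_life_years: int) -> float:
    """Total optical margin to reserve for repair splices over the link's life."""
    return cuts_per_year * splices_per_cut * loss_per_splice_db * design_life_years

# e.g. 5 cuts/year, 2 splices per repair, 0.1 dB per splice, 20-year life:
print(end_of_life_margin_db(5, 2, 0.1, 20))  # → 20.0 (dB of reserved margin)
```

Reserving that worst-case margin on day one is exactly the conservatism that real-time margin monitoring, as described next, is meant to reclaim.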
But that takes a lot of manual time and effort. Our WaveLogic AI chipset takes a lot of that effort away from humans. It constantly monitors the network, becoming self-aware and intelligent. The system margin is proactively monitored all the time, in real time, so it can tell you about capacity requirements, spikes and problems, and help you generate and manage revenue. We are improving it all the time, and you will see some announcements in early 2018.
The “AI” part of the WaveLogic AI chip – is that mainly just about clever automation, or is there an actual machine learning component as well?
We base it on big data analytics. We now have a ‘network health predictor’, based on performance monitoring, that can predict breaks or failures and alert the customer to what needs attention now. For example, if fiber runs along train tracks, each passing train affects the fiber through vibration. The AI engine can help predict when this will happen – at 4.00 pm every day, for example. And if you combine historical data with the network health predictor, you can really begin to see patterns and areas where attention is needed – based on where and when cards failed, for example – and so predict when things are going to fail.
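The train-track example boils down to spotting recurring time-of-day patterns in historical degradation events. A toy sketch of that idea, assuming a simple event log bucketed by hour (Ciena’s actual predictor is of course far more sophisticated):

```python
# Toy illustration of recurring-pattern detection: bucket historical
# degradation events by hour of day and flag hours where they cluster.
# Hypothetical helper, not part of any Ciena product.
from collections import Counter

def recurring_hours(event_hours: list[int], min_occurrences: int = 3) -> list[int]:
    """Hours of day (0-23) in which degradation events repeatedly cluster."""
    counts = Counter(event_hours)
    return sorted(hour for hour, n in counts.items() if n >= min_occurrences)

# Vibration-related dips logged mostly around 16:00 stand out from the noise:
print(recurring_hours([16, 16, 9, 16, 16, 3, 16]))  # → [16]
```

The same bucketing idea extends to card failures by location and date, which is the historical-data correlation he mentions.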
How easy is it to install that?
Good question. There are questions about whether you need Ciena hardware, of course, but it is all based on new DevOps platforms. It uses an SDN control platform and is based on APIs, so it doesn’t take long – it’s pretty straightforward.
How does it work in a multi-vendor environment?
The application is designed as a multi-vendor product. We are fully supportive of that environment. All of the information is presented through a range of open APIs. I don’t know about our competitors, but our customers can use their own software.