OSS tend to be very technical and transactional in nature. For example, a critical alarm happens, so we have to coordinate remedial actions as soon as possible. Or a new customer has requested service, so we have to coordinate the workforce to implement tasks in both the physical and logical/virtual worlds.
When you spend so much of your time solving transactional/tactical problems, you tend to think in a transactional/tactical way. You can even see that in OSS product designs. They’ve been designed for personas who solve transactional problems (e.g. alarms, activations, etc.). That’s important. It’s the coal-face that gets stuff done.
But who funds OSS projects? Are their personas thinking at a tactical level? Perhaps, but I suspect not on a full-time basis. Their thoughts might dive to a tactical level when there are outages or poor performance, but they’ll tend to be thinking more about strategy, risk mitigation and efficiency if/when they can get out of the tactical distractions.
Do our OSS meet project sponsor needs? Do our OSS provide functionality that helps manage strategy, risk and efficiency? Reports and dashboards can certainly help. But do reports and dashboards inspire sponsors enough to invest millions? Could sponsors rightly ask, “I’m spending money, but what’s in it for me?”
What if we tasked our product teams to think in terms of business objectives instead of transactions? The objectives may include rolled-up transaction-based data and other metrics of course. But traditional metrics and activities are just a means to an end.
You’re probably thinking that there’s no way you can retrofit “objective design” into products that were designed years ago with transactions in mind. You’d be completely correct in most cases. So what’s the solution if you don’t have retrofit control over your products? Well, there’s a class of OSS products that I refer to as being “the data bridge.”
Our typical OSS tools help manage transactions (handling alarms, activating customer services, etc.). They’re generally not so great at (directly) managing objectives such as:
- Sign up an extra 50,000 customers along the new Southern network corridor this month;
- Optimise allocation of our $10M capital budget to improve average attainable speeds by 20% this financial year;
- Achieve 5% revenue growth in Q3;
- Reduce truck rolls by 10% in the next 6 months;
- Optimally manage the many factors that contribute to churn, reducing churn risk by 7% by next March.
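To make the contrast concrete, here’s a minimal sketch of how an objective like the first one above might be represented as a quantitative, time-bound target rather than a stream of transactions. All names and figures are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Objective:
    """A quantitative, time-bound business objective (hypothetical model)."""
    name: str
    target: float     # e.g. 50,000 activations
    baseline: float   # value when the objective was set
    current: float    # latest rolled-up value from transactional feeds
    deadline: date

    def progress(self) -> float:
        """Fraction of the target achieved so far (0.0 to 1.0)."""
        span = self.target - self.baseline
        return (self.current - self.baseline) / span if span else 1.0

    def on_track(self, today: date, start: date) -> bool:
        """Compare actual progress against linear expected progress."""
        total = (self.deadline - start).days
        expected = (today - start).days / total if total else 1.0
        return self.progress() >= expected

# Example: the 50,000-customer objective, part-way through the month
obj = Objective("Southern corridor sign-ups", target=50_000,
                baseline=0, current=30_000, deadline=date(2024, 6, 30))
print(round(obj.progress(), 2))                               # 0.6
print(obj.on_track(today=date(2024, 6, 15), start=date(2024, 6, 1)))  # True
```

The point of the sketch is that the objective itself is the first-class entity; the transactions merely update its `current` field.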
We provide tools to activate the extra 50,000 customers. We also provide reports/dashboards that visualise the number of activations. But we don’t tend to include the tools to manage the ongoing modelling and option analysis needed to meet key objectives. Objectives are generally quantitative and tied to time, cost, etc., and possibly locations/regions. They’re often really difficult to model and have multiple inputs. Managing to them requires data that changes daily (or potentially even more often – think of how a single missed truck-roll ripples out through re-calculation of optimal workforce allocation). That requires:
- Access to data feeds from multiple sources (e.g. existing OSS, BSS and other sources like data lakes);
- Near real-time data sets (or at least streaming or regularly updating data feeds);
- An ability to quickly prepare and compare options (data modelling, possibly using machine learning algorithms);
- Advanced visualisations (by geography, time, budget drawdown and any graph types you can think of);
- Flexibility in what can be visualised and how it’s presented;
- Methods for delivering closed-loop feedback to optimise towards the objectives (e.g. RPA);
- An ability to manage the many different transaction-based levers (e.g. parallel project activities, field workforce allocations, etc.) that contribute to rolled-up objectives/targets.
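The requirements above can be sketched end-to-end in miniature. The following is a hypothetical illustration (all feed names, figures and thresholds are invented) of rolling up transaction feeds from multiple sources into objective-level metrics and emitting a crude closed-loop adjustment signal:

```python
# Hypothetical transaction feeds from different sources (OSS, BSS, data lake).
# In practice these would be streaming or regularly refreshed data sets.
feeds = {
    "oss_activations":  [120, 135, 110],   # daily completed activations
    "bss_orders":       [150, 160, 140],   # daily orders taken
    "lake_truck_rolls": [40, 42, 38],      # daily field visits
}

def rolled_up(feed_name: str) -> int:
    """Roll a transaction feed up into a single cumulative metric."""
    return sum(feeds[feed_name])

def activation_backlog() -> int:
    """Orders taken but not yet activated -- one lever we can pull."""
    return rolled_up("bss_orders") - rolled_up("oss_activations")

def closed_loop_signal(target_daily_rate: float) -> str:
    """Crude closed-loop feedback: compare the actual activation rate to
    the rate the objective implies, and suggest a workforce adjustment."""
    days = len(feeds["oss_activations"])
    actual_rate = rolled_up("oss_activations") / days
    if actual_rate < target_daily_rate:
        return "increase field workforce allocation"
    return "hold current allocation"

# Objective: 50,000 sign-ups in a 30-day month implies ~1,667 per day.
print(activation_backlog())             # 85
print(closed_loop_signal(50_000 / 30))  # increase field workforce allocation
```

A real data-bridge product would of course replace the dictionary with live feeds, the single metric with multi-factor models, and the if-statement with proper optimisation, but the shape – feeds in, rolled-up objective out, adjustment signal back down – is the same.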
You can see why I refer to this as a data bridge product, right? I figure that it sits above all other data sources and provides the management bridge across them all.