Network KPIs and Quality of Experience (QoE) do not always mirror each other.
Communications Service Providers’ (CSPs) networks are complex systems. Across the control and user planes, hundreds of metrics can be collected to evaluate a multitude of performance dimensions. Fittingly, network engineers know that no single metric tells the whole story by itself. When asked which network KPIs are the most critical to keeping end users satisfied, their answer is inevitably ‘it depends’.
Move away from the network teams, though, and that understanding often starts to fade. It is not unusual, for example, for customer experience managers to develop a single-minded obsession with 3 popular network KPIs:
- Latency;
- Packet loss;
- Throughput.
The need to improve those 3 network KPIs frequently becomes a rallying cry for business teams. They are partly right: most digital experiences are negatively impacted when those metrics falter. However, not all experiences are impacted equally: subscriber perceptions are shaped differently depending on the applications they use. As a result, indiscriminate calls to improve those 3 network KPIs generally lead to misplaced (and suboptimal) allocations of effort.
In one analysis, Niometrics investigated the implied impact of each of those KPIs on the Quality of Experience (QoE) enjoyed by mobile data users when engaging with different popular apps.
True QoE can only be established by collecting primary subscriber feedback. In its absence, we adopted session duration as a proxy for perceived QoE. Our hypothesis was that if you do not suffer noticeable hiccups, you will watch videos, make calls and play games for longer intervals. On the other hand, if your browsing experience turns negative, you will cut it short and switch your attention to something else (thereby reducing your session durations).
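As a minimal sketch of how such a proxy might be computed, consider aggregating per-app session records into a median duration per application. The data, field names and apps below are purely illustrative, not Niometrics' actual pipeline; in practice the records would come from network-side session logs.

```python
from statistics import median

# Hypothetical session records as (app, duration_in_seconds) pairs.
# Illustrative values only; real inputs would come from flow/session logs.
sessions = [
    ("youtube", 620), ("youtube", 480), ("youtube", 900),
    ("instagram", 150), ("instagram", 210), ("instagram", 95),
]

def median_duration_by_app(sessions):
    """Median session duration per app, used as a crude QoE proxy."""
    by_app = {}
    for app, duration in sessions:
        by_app.setdefault(app, []).append(duration)
    return {app: median(durations) for app, durations in by_app.items()}

print(median_duration_by_app(sessions))
# e.g. {'youtube': 620, 'instagram': 150}
```

The median (rather than the mean) is used here so that a few abnormally long sessions do not dominate the proxy.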
Sure enough, increased latencies, increased packet losses and lower throughputs all produced, in one way or another, shorter session durations. But, as expected, different applications demonstrated different sensitivities to the deterioration of each network KPI. For example:
- The same latency increases had a higher shortening impact on Instagram session durations than on YouTube session durations;
- Packet loss deteriorations proved more damaging to the duration of WhatsApp messaging sessions than to the duration of sessions from other apps;
- Throughput declines did not cause meaningful reductions in Instagram session durations. At least, not until they crossed a threshold that, for YouTube, would have already pushed away even the most persevering of watchers.
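The per-app sensitivities above can be pictured as the slope of session duration against a deteriorating KPI, estimated separately for each application. The following sketch fits a simple least-squares slope to hypothetical latency measurements (the numbers are invented for illustration, not Niometrics' data): a more negative slope means session durations shrink faster as the KPI worsens.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical observations: latency (ms) vs. median session duration (s).
latency = [50, 100, 150, 200]
youtube_dur = [900, 860, 830, 800]    # shallow decline as latency grows
instagram_dur = [300, 240, 190, 130]  # steeper decline as latency grows

youtube_slope = slope(latency, youtube_dur)
instagram_slope = slope(latency, instagram_dur)

# A more negative slope indicates higher sensitivity to latency,
# matching the observation that Instagram sessions shorten faster.
print(youtube_slope, instagram_slope)
```

Repeating this per KPI and per app yields a small sensitivity matrix, which is one simple way to make "different applications respond differently" concrete and comparable.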
All those observations make intuitive sense. They demonstrate expected behaviours when considering the nature of the usage that each application imposes on the network.
But how does empirical proof from network analytics help? And, most importantly, how can this knowledge be leveraged?
“Different digital experiences call for different network KPIs.”
Empirical evidence makes it easier for network engineers to build surgical cases for KPI prioritisation. And, in possession of the knowledge that different digital experiences call for different network performances, business teams can demand network improvements that make real sense. They can, for example, avoid fixating on high throughputs everywhere when specific locations have a majority usage of WhatsApp messaging (for which latency and packet loss may prove much more critical).
By matching the right medicine to the right symptoms, and by using network analytics to understand that different applications respond in unique ways to different network KPIs, CSPs can better allocate their time and resources, neither over- nor under-spending on their network improvement investments.