Maximum performance and fragmented architecture do not always go together.
Keeping an entire technology stack under direct control is no easy feat. It requires excellence across a wide range of software engineering domains and comes with heavy ramp-up work. Make-or-buy decisions thus pit developing everything in-house against opening up to (and coordinating with) external components developed by third parties.
At Niometrics, we have chosen the former: for our ambitions, and for our clients’ requirements, the advantages of retaining end-to-end ownership of our network analytics technology have far outweighed the challenges that it imposes.
Our end-to-end technology has helped us avoid the integration pitfalls that Communications Service Providers (CSPs) often face with fragmented, multi-vendor, generic legacy tools. By developing a monolithic solution block, from NCORE (our DPI engine) all the way up to our visualisation workspaces, we have eliminated the need for clients to work on time-consuming integrations. With a full stack ready to roll from day one, deployment typically becomes a plug-and-play affair.
“Retaining end-to-end control over our technology stack contributes to an optimised, big-data-with-small-footprint performance.”
While part of that hardware ‘sweating’ stems from the higher efficiency delivered by each separate module of our stack, another significant portion of those gains derives precisely from how tightly those modules are built together. The seamless chain of data extraction, exchange, mediation and visualisation that our solutions deliver would have been difficult to achieve without the structural forethought that an integrated stack demands.
Moreover, maintaining control over our entire stack has offered both our engineering and design teams a degree of flexibility that, for some aspects of our solutions, is invaluable. Take, for example, our visualisation layer (NIO UX/UI): it comprises custom-crafted workspaces designed for different user groups to access, analyse and act on the data that matters most to them. More often than not, those solutions require information to be displayed in ways that can only ‘get the job done’ when designed without the shackles of standard visualisation tools. Being able to deploy the best design for the exploration of the information at hand, free of the idiosyncratic limitations of external tools, guarantees an intuitive, informative interface across all of our solutions.
Finally, a proprietary stack drastically simplifies our clients’ lives: they can rely on a single point of support and enjoy a continuous flow of software upgrades that retain high-performance interoperability no matter what.
Naturally, our choice of deeper stack control has posed its challenges as well. Our development teams must, by definition, possess more holistic skills, covering larger chunks of the spectrum between back-end and front-end technologies. Everyone is expected to see beyond the boundaries of the areas they directly work on. In an age when companies and universities tend to polish engineering talent within narrow bands of highly specialised skills, such comprehensive expertise requirements can prove harder to recruit for.
Ultimately, a verticalised software stack forces engineers to cover larger conceptual spaces. Because cross-modular performance considerations become both a must and a possibility, engineers are expected to systematically assess the suitability of their approaches, and how those approaches will ripple across the rest of the stack.
For Niometrics, that added complexity has paid off. Full-stack ownership has played a central role in the high performance, stability and finer-grained outputs that our network analytics solutions deliver. As such, it is a piece of our model that has served us very well thus far.