When forces collide, human ingenuity comes to the rescue.
The digital revolution is at a crossroads: on the one hand, it still carries huge potential to keep transforming and improving our lives. On the other, it can only do so by consuming ever-larger volumes of human- and machine-generated data, pushing perilously against the limits of data privacy.
The tension between data innovation and data protection has commanded much attention in recent decades. It has spawned global organisations such as the IAPP (International Association of Privacy Professionals, founded in 2000), conferences, professional service offerings and, naturally, regulations.
Luckily, humans have a knack for accommodating conflicting ambitions and circumventing constraints. It lies at the root of our problem-solving nature. When faced with the twin aspirations of advancing data intelligence and protecting data subjects, academics and practitioners alike ended up creating a brand-new knowledge field: privacy engineering.
“Challenges to building privacy-sensitive software are not only technical. They are also mental and economic.”
Privacy engineering aims to provide methodologies, tools and techniques that enable systems to deliver acceptable levels of privacy. It designs and deploys innovative means for personal data to be explored with no harm to the individuals to whom it belongs, marrying the best of both worlds.
Because data privacy has implications that span the entire design of service propositions, privacy engineering involves software development, cybersecurity, human-computer interaction and legal know-how. Not surprisingly, privacy engineers must be versed in a broad range of skills. They are expected to connect all the dots holistically, from the outset, for any new data-centric product development.
Of all the areas influenced by stringent data privacy requirements, software development is perhaps the most heavily taxed. It is bombarded by requests to aggregate and de-identify data while still making advanced statistical analyses possible. In a world constantly demanding the ingestion and integration of an exploding number of data sources, each to be examined and then processed in ongoing operations, that is not trivial.
Differential privacy (the practice of introducing calibrated randomness into the results of queries on underlying confidential data), quasi-identifiers (pieces of information that are not in themselves unique identifiers, but which can be combined with other quasi-identifiers to create one) and encrypted identifiers are just a few of the many mechanisms available to help developers out.
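To make the first of these mechanisms concrete, here is a minimal sketch of differential privacy applied to a counting query, using the classic Laplace mechanism. The dataset, function names and epsilon value are all hypothetical, chosen for illustration; real deployments would rely on a vetted library rather than hand-rolled noise.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via the inverse-CDF method."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count.

    A counting query changes by at most 1 when one individual is added or
    removed (sensitivity 1), so adding Laplace noise with scale 1/epsilon
    yields an epsilon-differentially-private answer.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: report roughly how many ages exceed 40
# without revealing whether any single individual is in the data.
ages = [23, 45, 31, 67, 52, 38, 41, 29]
noisy_answer = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

The key design choice is the epsilon parameter: a smaller epsilon means larger noise and stronger privacy, at the cost of less accurate answers, which is exactly the innovation-versus-protection trade-off this field exists to manage.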
Challenges to building privacy-sensitive software are not only technical, though. They are also mental and economic. Embedding data protection elements at every stage of a data processing pipeline consumes development time – it requires more, and more complex, coding. It also makes software hungrier for hardware resources. And, perhaps most vexing of all, it asks software engineers to find excitement where most of them just see boring new features. Contributing to enhancing data privacy capabilities is not exactly considered ‘cool’ – not yet.
That shall change. Not breaking the law, for starters, should be a big enough motive to overcome resistance. It will not be the main one, however: proactively offering ‘safe’ digital products is quickly becoming a core source of competitive advantage. People have long been aware of the benefits of sharing their personal data; more recently, they have grown conscious of its risks too. If governments do not punish lax companies, markets will.