Unauthorised access, data leaks, and data misuse might seem part and parcel of today’s internet-driven, connected world, but they don’t have to be…

After all, privacy is paramount when it comes to data sharing. Privacy and cybersecurity experts have long warned about just how many entities have access to individuals’ sensitive information. This is where PETs (privacy-enhancing technologies) come in.

According to a report published by the WEF (World Economic Forum) and Frontiers, PETs are among 2024’s top ten emerging technologies. As the name suggests, PETs, also known as privacy-preserving or data-protection technologies, are essentially practices, techniques, and tools designed to protect individuals’ privacy. In this article, we’ll look at what PETs encompass, how they work, and why they’re so essential to today’s digital toolkit.

Why Are PETs Important?

In the current climate, data breaches cost enterprises an average of USD 4.45 million per incident. With 94% of users expecting companies to protect their data, organisations are continuously seeking ways to harness data without compromising privacy.

As regulations tighten, PETs have become a critical solution for maintaining trust and ensuring compliance by safeguarding personal data during transmission, processing, and storage. They include methods like access control, anonymisation, and encryption, as well as solutions like confidential computing, synthetic data generation, and differential privacy.

In doing so, they help individuals and organisations maintain control over their data while mitigating privacy risks in today’s increasingly data-centric world.

Types Of PETs

There are essentially five major emerging technologies that are considered PETs in the truest sense of the term: differential privacy, federated learning, secure multi-party computation, AI-generated synthetic data, and homomorphic encryption. Organisations that handle sensitive user data, like financial institutions, are already using PETs to accelerate ML (machine learning) and AI development and to share data across and outside the enterprise network. In fact, most organisations end up using a combination of PETs to cover all of their data use cases.

  1. Homomorphic encryption: One of the best-known PETs, this allows third parties to process data in its encrypted form. Put simply, the party doing the processing never sees the original data, which makes the technique especially promising for detecting fraud and money laundering. It isn’t without limitations, though: it’s of little help when the analyst has no prior knowledge of the dataset, its functionality is restricted, and it’s incredibly compute-intensive. (A Paillier-based sketch follows this list.)
  2. AI-generated synthetic data: One of the most versatile PETs, synthetic data generators are trained on real data. Once trained, a generator can produce datasets of any size that are statistically faithful to the original, and because no synthetic record corresponds to a real individual, re-identification becomes extremely difficult. ML, AI, advanced analytics, and data anonymisation are the biggest users of this technology, but it isn’t suitable where re-identification needs to remain possible. (A deliberately simplified Gaussian-sampling sketch follows this list.)
  3. Secure multi-party computation: This cryptographic technique lets multiple parties jointly compute over their combined data while keeping each party’s input encrypted. As with homomorphic encryption, the goal is to keep the data private throughout the computation. Fraud detection, distributed signatures, and key management are among its most popular use cases, but all parties must coordinate their timing, as the processing is typically synchronous. (A secret-sharing sketch follows this list.)
  4. Federated learning: In this form of machine learning, the data stays on users’ devices rather than being fed into a central model; multiple copies of the model are trained locally, and only the resulting model updates are sent back to improve the central model, which makes the approach indispensable for IoT applications. Because training happens on edge devices like smartphones, no raw data needs to be shared during model training. Strictly speaking, though, it doesn’t protect privacy on its own, as edge models can also be attacked, so it’s usually combined with another PET to be effective. (A federated-averaging sketch follows this list.)
  5. Differential privacy: This is a mathematical definition of privacy rather than a PET in itself. It quantifies the privacy leakage that occurs when analysing a differentially private database, a measure called the ‘epsilon value’: the higher the epsilon, the greater the potential privacy leakage. Since determining the real-world epsilon is still a challenge, it simply provides a mathematical upper bound on potential leakage. Setting epsilon accurately is therefore critical: it should be low enough to protect privacy yet high enough for the results to stay useful. Differential privacy is usually combined with another PET, like federated learning, for greater effect. (A Laplace-mechanism sketch follows this list.)
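
To make homomorphic encryption concrete, here is a minimal sketch using the open-source python-paillier library (`phe`), which supports additive homomorphism; the library choice and the transaction values are illustrative, not drawn from this article:

```python
# A minimal sketch of partially homomorphic encryption with the
# python-paillier library (pip install phe). Paillier ciphertexts can be
# added together and scaled by plaintext constants, so an untrusted
# party can compute on data it can never read. Values are made up.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

transactions = [120.50, 75.25, 310.00]             # sensitive values
encrypted = [public_key.encrypt(t) for t in transactions]

# A third party sums the ciphertexts and scales the result by a
# plaintext constant -- without the private key or the raw data.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_doubled = encrypted_total * 2

# Only the data owner, who holds the private key, can decrypt.
print(private_key.decrypt(encrypted_total))        # 505.75
print(private_key.decrypt(encrypted_doubled))      # 1011.5
```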
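
For synthetic data, production-grade generators are deep generative models, but a deliberately simplified Gaussian version illustrates the principle: learn the statistics of the real data, then sample fresh records from them. The “real” dataset below is itself randomly generated for illustration:

```python
# A grossly simplified synthetic data generator: fit a multivariate
# Gaussian to the real data, then sample new records that share its
# statistics without copying any individual row.
import numpy as np

rng = np.random.default_rng(seed=0)
# Stand-in "real" dataset: two columns, e.g. age and income.
real = rng.normal(loc=[35.0, 52_000.0], scale=[8.0, 9_000.0], size=(1_000, 2))

# "Train" the generator: estimate the mean and covariance of the data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample as many synthetic records as we like; none matches a real row.
synthetic = rng.multivariate_normal(mean, cov, size=5_000)

print(real.mean(axis=0), synthetic.mean(axis=0))   # statistically close
```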
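
The secret-sharing sketch below shows the building block behind many secure multi-party computation protocols: three hypothetical parties split private salaries into random shares so that only the total can be reconstructed. Real protocols add networking, protections against malicious parties, and multiplication, all omitted here:

```python
# A toy sketch of additive secret sharing. Each party splits its private
# value into random shares that sum to it modulo a large prime; summing
# everyone's shares reveals only the total, never any individual input.
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n random shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

salaries = [68_000, 91_000, 57_000]   # each party's private input
all_shares = [share(s, n_parties=3) for s in salaries]

# Party i locally adds the i-th share of every input...
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]

# ...and only the combined partial sums reveal the total.
total = sum(partial_sums) % MODULUS
print(total)   # 216000 -- no individual salary was ever disclosed
```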
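
The federated-averaging sketch simulates the pattern described above: each “device” trains on data that never leaves it, and only the learned weights travel to the server for aggregation. The linear model and the local datasets are stand-ins:

```python
# A bare-bones sketch of federated averaging (FedAvg): simulated devices
# fit a linear model locally, and the server averages their weights,
# weighted by dataset size. Raw data never leaves a device.
import numpy as np

rng = np.random.default_rng(seed=1)
true_w = np.array([2.0, -1.0])

def local_train(n_samples: int) -> np.ndarray:
    """Fit least-squares weights on one device's private data."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w   # only these weights leave the device

# Each device trains locally on its own data.
sizes = np.array([200, 500, 300])
device_weights = [local_train(n) for n in sizes]

# The server aggregates by weighted-averaging the model updates.
global_w = np.average(device_weights, axis=0, weights=sizes)
print(global_w)   # close to [2.0, -1.0]
```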
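
Finally, the Laplace-mechanism sketch shows how epsilon governs the trade-off between privacy and utility for a simple count query; the dataset and the epsilon values are illustrative:

```python
# A minimal sketch of the Laplace mechanism for differential privacy:
# a count query's true answer is perturbed with noise scaled to
# sensitivity / epsilon. Lower epsilon -> more noise, stronger privacy.
import numpy as np

rng = np.random.default_rng(seed=2)
ages = rng.integers(18, 90, size=10_000)   # stand-in for sensitive records

def dp_count(data, predicate, epsilon: float) -> float:
    """Differentially private count; a count query has sensitivity 1."""
    true_count = int(np.sum(predicate(data)))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

over_65 = lambda d: d >= 65
print(dp_count(ages, over_65, epsilon=0.1))   # very noisy (strong privacy)
print(dp_count(ages, over_65, epsilon=5.0))   # closer to the true count
```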

To Sum Up

By 2034, the global PETs market is expected to be worth nearly USD 28.4 billion, up from USD 3.17 billion in 2024, a CAGR (compound annual growth rate) of 24.5% between 2025 and 2034. With data privacy concerns at an all-time high and awareness of data collection and usage growing, people are demanding more control over and protection of their data.

PETs put privacy into practice, offering a way to balance the protection of sensitive data with the benefits of data utilisation, thus promoting responsible data practices.

Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.
