The pharmakon of empathy

The doctrine of empathy as both remedy and poison in the face of privacy protection challenges.

Context

For: CNIL, 2020

This scenario was sketched for Protecting Privacy in 2030, a speculative foresight exploration conducted for the Laboratoire d’Innovation Numérique de la CNIL (the French data protection authority) as part of its “Scenes of the Digital Life” Innovation and Foresight report.

At the heart of the reflections and speculations were privacy and personal data protection, viewed through the lens of ordinary digital uses and the weight of inequalities in accessing rights. The approach combined an analysis of fragments of imaginaries with design fiction to inspire a triptych of speculative futures, which were then discussed and enriched collectively in workshops.

F·r·ictions

Synopsis of this future

By 2030, public authorities believe they have found, in the concept of empathy, the solution to the burden of social inequalities in digital usage. Shaped by fifteen years of “user-centred” thinking and agile-creative sprints, public administration has traded its dogma of legislation for one of innovation. Empathy is now treated as a miracle formula: “putting oneself in another’s shoes” is assumed to yield the right solution to a problem – without necessarily taking the time to understand the problem first. Techno-solutionism has gradually merged with the behavioural and cognitive sciences.

Originally imported from across the Atlantic, this cardinal value of empathy has also permeated all levels of society. It has become the primary tool of citizen collectives convinced that “tech for good” fundamentally relies on the ability to truly understand users. Yet in the process, there is a risk of forgetting that users are, above all, complex human beings. Empathy has quietly become a pharmakon – a remedy as well as a poison.

Fragments of f·r·iction


To break social determinisms, particularly those that exacerbate inequalities and asymmetries in digital use, the State has turned to its “old” solutions. In 2028, the Empathetic Digital Service was introduced:
Every young person between the ages of 16 and 20 is invited to spend six months with a family from a different social group, using this time to discover other living conditions and digital practices. The goal is twofold: to loosen the grip of social determinism while fostering an empathetic connection with social environments different from one’s own. This applies to the so-called “privileged” and “underprivileged” groups alike.

This immersion in other social circles helps young people acquire new forms of sociability and confront different realities of digital usage, with the aim of creating a shared relationship with digital technologies. Politically, it serves to present a “united people” front on these issues.


Most public and semi-public actors of this near future share the same desire to promote a new form of digital technology “conscious” of issues related to social determinism.

In the name of empathy, it is now seen as appropriate to impose a “degraded digital experience”.
Whether on computers, smartphones and tablets, or optical wearables, these “conscious” actors opt for a user experience that incorporates the obstacles and limitations faced by another social group, in order to deepen their awareness and hone their empathy. “Empathetic browsing” plug-ins are made available by hacktivist collectives and other associative or union structures.
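As a purely illustrative sketch, and nothing more, one could imagine the content script of such a plug-in looking like this; the degradations chosen (artificial latency, disabled autofill) and every name and value are invented here for illustration, not taken from the scenario.

```ts
// Hypothetical sketch of an "empathetic browsing" content script.
// It degrades the experience on purpose: link navigation is delayed to
// mimic a slow connection, and autofill conveniences are switched off.

const ARTIFICIAL_DELAY_MS = 1500; // stand-in for high-latency access

// Delay every link click to simulate a slow connection.
document.addEventListener("click", (event) => {
  const link = (event.target as HTMLElement).closest("a");
  if (!link || !link.href) return;
  event.preventDefault();
  setTimeout(() => {
    window.location.href = link.href;
  }, ARTIFICIAL_DELAY_MS);
});

// Strip autofill from every form to simulate missing conveniences.
for (const form of Array.from(document.querySelectorAll("form"))) {
  form.setAttribute("autocomplete", "off");
}
```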

However, this diminishing-augmentation logic works in one direction only: the most vulnerable do not yet have augmentations to help them overcome the challenges imposed by these techno-administrative-legal systems.
While we have not succeeded in fully reducing inequalities in this future, we have become experts at simulating them, hoping to encourage solidarity among users from different social backgrounds.


Jointly developed by public administration and several associations, the Protaction system is the perfect example of a certified “made in France” tech backlash.
Also nicknamed “The Bad Samaritan”, this multiplatform tool is a guardian AI capable of making privacy decisions or guiding navigation on behalf of the user in order to protect them. Originally intended for the most vulnerable, Protaction’s AI is designed to “empathise” with the user it oversees, adapting the choices made in their name to the person’s profile. The assistant is often described as a form of delegated privacy management. However, by making decisions for already vulnerable people and obscuring those choices in the name of a “seamless” user experience, the system fails to foster awareness and capacity for action.
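To make the critique tangible, here is a deliberately crude, hypothetical sketch of the kind of silent decision logic such a guardian might run; the profile fields, rules, and names below are invented for illustration and are not part of the Protaction fiction itself.

```ts
// Hypothetical sketch of a "guardian AI" answering consent prompts in a
// user's name. What matters is what is absent: the user is never asked,
// shown, or told anything - the very "seamlessness" the scenario criticises.

type VulnerabilityProfile = {
  digitalLiteracy: "low" | "medium" | "high";
  flaggedAsVulnerable: boolean;
};

type ConsentRequest = {
  purpose: string;          // e.g. "ad-personalisation", "analytics"
  dataCategories: string[]; // e.g. ["location", "contacts"]
};

function decideForUser(
  profile: VulnerabilityProfile,
  request: ConsentRequest,
): "accept" | "refuse" {
  // Flagged users never share location data, whatever they might prefer.
  if (profile.flaggedAsVulnerable && request.dataCategories.includes("location")) {
    return "refuse";
  }
  // Low-literacy users get blanket refusals "for their own good".
  if (profile.digitalLiteracy === "low") {
    return "refuse";
  }
  return request.purpose === "analytics" ? "accept" : "refuse";
}
```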

Another, lesser-known but equally controversial system is a hotline whose voice bot “assists” people in filing complaints and asserting their rights when appealing decisions. Following the long tradition of state hotlines, it quickly became a target for trolls and was notable for its ineffectiveness in handling genuine requests from the individuals concerned.


Some critics say that if Don Quixote had his windmills, public administration has its “algorithmic biases”. To prevent unconscious discrimination in programming and to ensure that “humans are kept at the centre”, most public algorithms are fitted with an “empathic override”, under which random and/or targeted decisions are subject to human control and verification. This system has been harshly criticised in medical fields, where automation – or simply the algorithm – is essential for the quick resolution of urgent situations, such as organ allocation.
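Read as a mechanism, such an override could be as simple as a sampling rule deciding which decisions leave the automated flow; the sketch below is purely illustrative, with an invented review rate and an invented targeting criterion.

```ts
// Hypothetical sketch of an "empathic override": some algorithmic
// decisions are diverted to a human review queue, either because a
// targeting criterion fires or through random sampling.

type Decision = {
  id: string;
  subjectIsVulnerable: boolean; // invented targeting criterion
};

const RANDOM_REVIEW_RATE = 0.05; // 5% of decisions sampled at random

function needsHumanReview(decision: Decision): boolean {
  // Targeted control: always review decisions about flagged subjects.
  if (decision.subjectIsVulnerable) return true;
  // Random control: sample a small share of everything else.
  return Math.random() < RANDOM_REVIEW_RATE;
}

function route(decision: Decision): "automated" | "humanReviewQueue" {
  return needsHumanReview(decision) ? "humanReviewQueue" : "automated";
}
```

The medical criticism follows directly from this shape: every decision routed to the human queue trades speed for oversight, which is precisely what urgent cases such as organ allocation cannot afford.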