Why anonymity is our biggest enemy online
The ability to trace our actions online back to us as individuals will shape our behaviour.
Why is it that most of us fill out our tax returns properly? Or scan every item at the self-checkout, even when no one is watching?
You could say we follow our moral compass as humans, but that is only half the truth. The other half is that we do these things because there are rules with consequences if we fail to comply. These rules exist because we collectively chose them through democratic processes: elected representatives create laws, and authorities enforce them.
When you walk through a supermarket, cameras record your movements. They create an audit trail. The footage follows you from aisle to aisle. But crucially, that footage alone cannot identify you. You remain pseudonymous. Your face is captured, but your identity is not known.
If you steal something, that footage becomes evidence. But to link you to your identity, you would need to identify yourself (show an ID), or authorities would need to obtain your identity through official channels. The audit trail exists. The ability to identify you exists. But the link between the two requires a deliberate step.
This is how accountability works offline. You are tracked, but not identified. Private, but accountable.
Privacy, Anonymity, and Pseudonymity
Before we move on to how this works online, there is an important distinction: privacy, anonymity, and pseudonymity are not the same.
Privacy means certain information about you is not publicly accessible. Your medical records, your bank statements, your conversations at home.
Anonymity means you cannot be recognised or identified at all. No trail exists. No link to any identity is possible.
Pseudonymity sits in the middle. You can be recognised. You have a reputation. Your actions are tracked and audited. But your identity remains private until you choose to reveal it, or until a legitimate process (like a court order) links your pseudonym to your real identity.
In our day-to-day offline life, we are pseudonymous. Your face acts as a pseudonym. People recognise you, but they do not know your name, address, or official identity unless you share it. A shop clerk sees your face every week but does not know who you are. You become identified when you choose to identify yourself, perhaps by showing ID or using a bank card.
The key point: a pseudonym is always tied to a fixed identity. You cannot create unlimited pseudonyms without accountability. Each pseudonym corresponds to a single real entity. This creates reputation, consequence, and responsibility.
Online: No Audit Trail, No Accountability
Creating fake content online has always been easy. All anyone had to do was create an email address and social media accounts under a fake name, register a domain name that sounds official, and design a website that looks like a news outlet. Without any ability to trace who created the content, the creation and spread of misinformation was trivial.
But today’s synthetic content is far more sophisticated. AI can generate photorealistic deepfake videos in seconds. Large language models write convincing fake articles at scale, complete with citations, quotes from “experts,” and official-sounding language. Voice cloning technology can impersonate anyone with just a few seconds of audio. Synthetic media is now often indistinguishable from reality.
We are not just dealing with fake articles anymore. We are dealing with fake people, fake videos, fake audio. Entire synthetic realities created by anonymous actors with zero accountability.
The problem is not just “fake news.” It is inauthentic content at scale: AI-generated, synthetic, fabricated. Content designed to mislead, manipulate, or deceive. And the creators face no consequences because they cannot be traced.
The Erosion of Online Accountability
When you publish content under your own name, there is more at stake. Your reputation. Your relationships. Your career. Both public accountability and legal accountability create consequences for behaviour. But neither works without the ability to identify the person responsible.
For a while, influencers seemed like a viable model. They are not anonymous. Their reputation is their business. Brand partnerships worth millions are at stake. Platforms can demonetise them. Audiences can turn on them instantly. If they break the law, authorities can pursue them.
But this model is eroding. Synthetic influencers are now emerging with realistic faces, voices, and video content. There is no real person behind them. No reputation at stake. No accountability. An army of AI-generated personas can push any agenda, and nobody can be held responsible. It is the next evolution of troll armies: faces that move, voices that speak, clips that look indistinguishable from real people.
A vast amount of content online is still created by anonymous accounts. A deepfake video can rack up 10 million views in two hours, spread across platforms, influence elections or stock markets. The creator? Unknown. The consequences? None.
Online, you can create chaos and walk away. Offline, you cannot steal a candy bar without an audit trail following you.
Current Approaches Are Not Working
Since misinformation became a mainstream concern, we have attempted to solve it in several ways.
Algorithms for detecting and removing fake content have improved. X (formerly Twitter) introduced Community Notes, a crowd-sourced fact-checking system where users collectively add context to posts. Meta partners with third-party fact-checkers. These efforts show promise.
But AI-generated content now outpaces detection. Every improvement in moderation algorithms is outstripped by even faster improvements in generation. Content moderation teams at major platforms remove millions of posts daily and still cannot keep up.
Regulations have advanced significantly. The EU Digital Services Act (DSA), enforced since 2024, requires platforms to moderate illegal content and be transparent about their algorithms. Platforms can be fined up to 6% of global revenue for non-compliance. The EU Digital Markets Act (DMA) designates gatekeeper platforms and imposes obligations on them. Multiple countries have passed content moderation laws.
This is real progress. But even with the DSA forcing platforms to moderate, they are playing whack-a-mole. The core issue remains: platforms moderate symptoms, not the source. Anonymous authors still face no accountability.
Financial sanctions against platforms have increased. The EU has issued multi-billion euro fines against Meta, Google, and others for various violations. But fines do not change behaviour fast enough, and wealthy tech companies can absorb them.
AI detection tools are the newest attempt. Services claim to detect AI-generated text, deepfakes, and synthetic media. But this is an arms race, and detection is losing. The same AI companies building detection tools are also building better generation tools.
A key reason these solutions are ineffective is that they do not address the underlying issue: creators of inauthentic content have no accountability because they can remain anonymous online.
A Protocol for Accountable Online Communication
What if we could create a system that, like offline activities, leaves an audit trail for all online activities, without endangering privacy or creating surveillance infrastructure?
Not a platform. A protocol. One that can be embedded in any type of online communication: social media like Instagram and TikTok, messaging apps like WhatsApp and Telegram, email services, forums, any digital platform where content is created and shared. A universal layer for accountability and trustworthiness.
The technical foundations already exist. The W3C Verifiable Credentials standard, published in 2025, provides cryptographically secure, privacy-respecting credentials that can be verified by anyone without contacting the issuer. These credentials support selective disclosure: you can prove you are over 18 without revealing your birthdate, or prove you are a resident of a country without revealing your address. You share only what is necessary, nothing more.
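To make selective disclosure concrete, here is a minimal Python sketch of the underlying idea, using salted hash commitments (the same principle behind SD-JWT style credentials). The attribute names and the `commit` helper are illustrative, not part of the W3C specification, and a real issuer would cryptographically sign the digests rather than merely hand them over:

```python
import hashlib
import os

def commit(attribute: str, value: str) -> tuple[str, str]:
    """Create a salted hash commitment for one credential attribute."""
    salt = os.urandom(16).hex()
    digest = hashlib.sha256(f"{salt}|{attribute}|{value}".encode()).hexdigest()
    return salt, digest

# Issuer: commits to every attribute and signs only the digests.
attributes = {"name": "Alice", "birth_year": "1990", "country": "NL"}
commitments = {attr: commit(attr, val) for attr, val in attributes.items()}
signed_digests = {attr: d for attr, (salt, d) in commitments.items()}

# Holder: discloses only 'country', keeping every other value and salt secret.
disclosed = {"country": ("NL", commitments["country"][0])}

# Verifier: recomputes the hash for the disclosed attribute only.
for attr, (value, salt) in disclosed.items():
    recomputed = hashlib.sha256(f"{salt}|{attr}|{value}".encode()).hexdigest()
    assert recomputed == signed_digests[attr]  # matches the signed commitment
```

Because the verifier only ever sees the salted hash of the undisclosed attributes, nothing about the holder's name or birth year leaks, yet the disclosed attribute is provably the one the issuer vouched for.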
Here is how it works:
Every action you take online (posting, commenting, sharing, messaging) is logged in a way that maintains your pseudonymity. You remain unidentifiable to the public and to platforms. The audit trail exists, but it is not connected to your identity by default.
When content becomes the subject of a criminal investigation, a structured process begins. Authorities obtain a court order for that specific piece of content. The audit trail is then queried across an index of KYC (Know Your Customer) providers, whether in a single country or globally. One of these providers can respond: “I have that audit trail on file, which means I have the identity documents connected to this content.” With the court order in hand, that specific provider discloses the identity.
No single entity holds all the data. No platform knows who you are. The identity information is distributed across independent KYC providers, and only the provider that onboarded you can make the connection. And they can only do so with judicial authorisation for a specific piece of content.
Critically, this requires separate authorisation for each piece of content. Authorities cannot ask, “Who is this person and what have they posted in the last five years?” They must specify: “This particular post is under investigation. We need the identity of the author.”
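The division of roles described above can be sketched in a few lines of Python. The `KYCProvider` and `CourtOrder` classes below are hypothetical simplifications (a real provider would verify a signed court order and hold actual identity documents), but they show the key access pattern: disclosure is scoped to exactly one content hash, and there is no call that enumerates everything a pseudonym has done:

```python
import hashlib
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class CourtOrder:
    content_hash: str  # the authorisation covers exactly one piece of content

class KYCProvider:
    """Holds the only link between pseudonyms and real identities."""

    def __init__(self):
        self._identities = {}   # pseudonym -> real identity
        self._audit_index = {}  # content hash -> pseudonym

    def onboard(self, real_identity: str) -> str:
        pseudonym = secrets.token_hex(8)  # random, unlinkable to the identity
        self._identities[pseudonym] = real_identity
        return pseudonym

    def record(self, pseudonym: str, content: str) -> str:
        content_hash = hashlib.sha256(content.encode()).hexdigest()
        self._audit_index[content_hash] = pseudonym
        return content_hash

    def disclose(self, order: CourtOrder) -> str:
        # Disclosure works only for the single item named in the order;
        # deliberately, no method exists to list a pseudonym's full history.
        pseudonym = self._audit_index[order.content_hash]
        return self._identities[pseudonym]
```

Platforms only ever see the pseudonym and the content hashes; the identity lookup lives solely with the provider that onboarded the user.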
This process can be automated with appropriate safeguards. When certain thresholds are met (verified harm, scale of distribution, legal complaint), the system can trigger a data release order through judicial oversight. Human judges, accountable to democratic institutions, make these decisions. Not Big Tech content moderators with commercial incentives.
This is not proactive surveillance. It is a retroactive investigation with democratic oversight. Decentralised identity storage. Division of powers. Checks and balances.
Compare this to a thief being caught in a supermarket: the police can obtain the thief's identity for that specific incident, but they cannot ask which supermarkets the thief has been in over the last five years, what items they bought, or who they spoke to. Each investigation requires a separate, specific authorisation.
This is Trias Politica in the digital age. Separation of powers. Checks and balances. Judicial oversight. The same principles that protect us offline apply online. Accountability flows from democratically elected institutions, not from private company policies.
In an age of AI-generated everything, the question is not “is this content real?” but “who created it?” Accountability does not stop synthetic content from being created. But it makes creators think twice before hitting publish.
Who Manages This System?
Currently, Big Tech companies like Google, Meta, and X manage our digital identities and activities. They have commercial motives: advertising, engagement, growth. This is problematic because information can be manipulated, deleted, or used for purposes we never agreed to. Their content moderation decisions are not accountable to any democratic process.
The answer is a distributed protocol that nobody owns but everybody can verify. A shared, public, immutable audit trail maintained by many independent parties. No single company can manipulate it for profit. No single government can censor it unilaterally. The architecture itself prevents the concentration of power.
The identity layer can be provided by existing infrastructure: the EUDI Wallet and the network of KYC providers across Europe. The audit trail layer requires a separate, immutable ledger where all online actions are recorded pseudonymously.
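A minimal sketch of what such an immutable, publicly verifiable audit trail looks like, assuming a simple hash chain rather than any particular ledger product: each entry commits to its predecessor, so tampering with any past record breaks verification for everyone:

```python
import hashlib
import json

class AuditLedger:
    """Append-only ledger: each entry's hash commits to the previous entry."""

    def __init__(self):
        self.entries = []

    def append(self, pseudonym: str, action: str, content_hash: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"pseudonym": pseudonym, "action": action,
                "content_hash": content_hash, "prev": prev}
        entry_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "entry_hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Anyone can recompute the chain; any mutation is detected."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("pseudonym", "action", "content_hash", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True
```

In a real deployment the chain would be replicated across independent parties, which is what prevents any single company or government from rewriting it.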
This is not hypothetical. The technology exists. At mintBlue, we facilitate the integration of platforms and applications into this distributed protocol. We provide the infrastructure layer that enables any online service to connect to a shared audit trail without building it themselves. The EUDI Wallet handles identity. The distributed ledger handles accountability. Together, they create the architecture for trustworthy online communication.
The protocol creates the accountability we need without creating the surveillance infrastructure we fear.
Europe’s Digitalisation Opportunity
Europe is uniquely positioned to lead this transition. And it is already building the infrastructure.
The EU has already built the regulatory foundation. The Digital Services Act and Digital Markets Act establish accountability requirements for platforms. The Digital Product Passport mandates traceability for physical goods. EUDR requires supply chain auditability. The AI Act demands transparency for AI systems. Traceability, auditability, and accountability are already mandated across multiple domains.
More importantly, the EU is deploying the technical infrastructure. The European Digital Identity Wallet (EUDI Wallet), mandated by eIDAS 2.0, launches across all Member States by the end of 2026. Every EU citizen will have access to a digital wallet that stores verifiable credentials: identity documents, diplomas, professional licenses, and age verification. The wallet is built on W3C Verifiable Credentials and supports pseudonymous authentication by design.
The EUDI Wallet is a key puzzle piece in what this article proposes. Users can authenticate to platforms using pseudonyms unique to each service, preventing cross-platform tracking. They can prove attributes selectively, sharing only what is necessary. And, when legally required, their identity can be verified by the KYC providers that issued their credentials, with appropriate judicial oversight.
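Per-service pseudonyms can be sketched with a keyed hash. This is an illustrative construction, not the EUDI Wallet's actual derivation scheme; the point is that the identifier is stable for one service but unlinkable across services:

```python
import hmac
import hashlib

def service_pseudonym(wallet_secret: bytes, service_id: str) -> str:
    """Derive a pseudonym unique to one service.

    The same user receives a different, unlinkable identifier on every
    platform, so services cannot correlate accounts with each other,
    yet each service always sees the same pseudonym for the same user.
    """
    return hmac.new(wallet_secret, service_id.encode(),
                    hashlib.sha256).hexdigest()[:24]

# In practice the secret would live in the wallet's secure hardware.
secret = b"example-wallet-secret"
p_a = service_pseudonym(secret, "platform-a.example")
p_b = service_pseudonym(secret, "platform-b.example")
assert p_a != p_b                                              # unlinkable
assert p_a == service_pseudonym(secret, "platform-a.example")  # stable
```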
By November 2027, large online platforms must accept the EUDI Wallet for authentication upon user request. The infrastructure for accountable, pseudonymous online identity is not hypothetical. It is being built right now.
A protocol for accountable online communication fits naturally into this regulatory landscape. The infrastructure requirements for sustainable supply chains and for trustworthy online content are fundamentally the same: immutable audit trails with privacy-preserving verification.
Europe can pioneer an accountable, level-playing-field online. Tools for legislators to govern digital spaces. Enforcement of the rule of law online without infringing on privacy.
Unifying the Online with the Offline World
The offline world has accountability built in. Your face is your pseudonym. Your movements create audit trails. You can be traced if you break the law. Privacy exists: your medical records, your conversations, and your home life remain private. But pseudonymity, not anonymity, is the default.
Distributed protocols enable the same model to run online. Your actions are traceable yet pseudonymous. Authorities accountable to democratic institutions can link your online identity to your real identity when authorised by a court order. Just like they can identify you from security footage offline.
We do not need to choose between privacy and surveillance. We need the same balance online that we have offline.
Five years ago, I wrote the first version of this piece. The problem has only gotten worse. AI made synthetic content exponentially easier to create and harder to detect. Regulations tried to catch up, but they are still chasing symptoms.
The solution has not changed. We need accountability. And accountability requires identity: not surveillance, but pseudonymous identity with judicial oversight, accountable to the democratic process.
The question is: do we build it before the next deepfake election? Or after?
Originally published January 2021. Updated January 2026.




