Manifesto

For the sovereignty of human consciousness in the age of artificial intelligence

— Anne and Michel, March 2026

Real or not real?

In the final installment of The Hunger Games saga, Peeta, a protagonist subjected to mental conditioning, asks his companion Katniss the same question whenever he doubts the reality of a situation or a statement, of what he perceives or what he is told: "Real or not real?"

This simple question has become our guiding thread. Not out of pessimism, but out of lucidity. Because we now live in an information environment where the boundary between what is true and what is fabricated has become, for the first time in human history, technically indistinguishable to the naked eye.

Artificial intelligence did not create this problem. But it has carried it to an unprecedented scale and speed.

We are two retired IT professionals who hand-coded our first HTML pages in the 1990s — at a time when the web still carried the sincere promise of a real democratization of information and knowledge.

That promise collided, methodically and repeatedly, with disillusionment. Social media transformed the public sphere into an echo chamber. Recommendation algorithms learned to exploit our cognitive biases with surgical precision. Disinformation became an industry. And political, social, and epistemic polarization (the fragmentation of shared reality) became the norm in most Western democracies.

We saw each of these drifts coming. And we see what is happening now.

Generative artificial intelligence represents a qualitative leap in the ability to produce persuasive, personalized content indistinguishable from authentic human content. This is not a hypothesis: it is already operational.

Our concerns are specific:

  • Industrialized mass manipulation. Tools capable of generating millions of targeted messages, tailored to individual psychological profiles, in service of political or commercial interests, without the recipients being aware of it.
  • The erosion of epistemic autonomy. The ability to form one's own beliefs freely, based on verifiable facts and critical reasoning, is the foundation of any functioning democracy. This ability is today under systemic pressure.
  • Cognitive dependence. An AI that thinks for us, formulates our arguments, chooses our information — even with good intentions — erodes over time the intellectual capacity of those who use it without discernment.
  • Concentration of power. The most powerful AI systems are in the hands of a very small number of actors — corporations and states — whose interests do not necessarily coincide with those of the individuals they serve.
  • The invisibility of manipulation. Unlike traditional propaganda, which was often identifiable as such, AI manipulation can be undetectable, precisely because it adapts to each individual in real time.

We refuse sterile catastrophism. AI is a tool — the most powerful humanity has ever produced — and like any tool, its value depends entirely on the intention and lucidity of whoever uses it.

AI used with discernment can do exactly the opposite of what we fear: help distinguish facts from opinions, identify cognitive biases in reasoning, present contradictory arguments on complex questions, flag emotional manipulation in content, and refuse to produce flattery instead of truth.

This is precisely what Hum_ID and Hum_SCAN seek to make possible. Hum_ID transforms AI from a potentially manipulative tool into an honest intellectual partner, guided by the explicit values of the user. Hum_SCAN allows you to ethically analyze any text — or improve your own — according to those same values.

Neuroscience offers a singular insight here: resolving ambiguity, distinguishing fact from manipulation, or restoring clarity to a fragmented narrative activates the brain's reward circuits. These are the very same mechanisms social media exploits, but turned toward opposite ends. Where platforms like Facebook maintain chronic uncertainty to fuel addiction, the act of understanding dissipates that tension. This cognitive resolution provides a profound neurochemical satisfaction. Epistemic autonomy, then, is not merely a philosophical ideal; it is a fundamental biological need of the human brain.

We are not naive about the intentions of technology companies. But we believe in concrete actions more than in statements of principle.

In February 2026, Anthropic, the creators of Claude (the AI we use), categorically refused to allow Claude to be used for mass surveillance or for the development of autonomous weapons operating without human supervision. This refusal cost them a $200 million contract with the US Pentagon.

This act does not solve every problem. But it demonstrates that it is possible, even in an ultra-competitive commercial environment, to set non-negotiable ethical limits and hold them in the face of considerable financial pressure.

This is the type of concrete signal on which we choose to build — not out of blind allegiance, but because Anthropic's principles on honesty, non-manipulation, and the protection of epistemic autonomy correspond precisely to the values that HUMANITY.NET defends.

Why do some people feel genuine pleasure using Hum_ID or Hum_SCAN? The answer is not anecdotal; it is neurological.

Sébastien Bohler, in The Human Bug, shows how social media has learned to exploit the striatum — the brain structure at the heart of the reward system — by keeping the user in a state of artificial uncertainty. The reward (a like, a notification) arrives unpredictably, activating dopamine according to the most addictive pattern known: variable-ratio reinforcement, the same mechanism as slot machines. The result is an addiction that weakens long-term thinking and leaves the individual chronically vulnerable to manipulation.

Hum_ID and Hum_SCAN activate the same neurological circuit, but through a radically different mechanism: not by sustaining uncertainty, but by resolving it. When an ambiguous text is structured, when an opinion disguised as fact is named, when clarity replaces diffuse unease, the brain responds with what neuroscientists call a resolution dopamine release. This pleasure is real, measurable, and its long-term effects are exactly the inverse of social media: you return more capable of thinking for yourself, not less.

Edward Deci and Richard Ryan, whose self-determination theory is among the most solidly validated in motivational psychology, identify autonomy and the sense of competence as fundamental psychological needs on a par with hunger or sleep. Satisfying them produces lasting well-being. Frustrating them, as platforms designed to capture attention systematically do, produces anxiety and vulnerability.

The sovereignty of consciousness transcends a mere ethical stance to become a sine qua non of mental health. By freeing the mind from the grip of bias and imposed cognitive dissonance, it preserves the integrity of our decision-making processes. Defending this autonomy is therefore not merely an act of advocacy; it is an essential act of care for our inner equilibrium.

Defending one's own consciousness is not an abstract philosophical posture. It is a set of concrete practices, accessible to any individual, regardless of their technical level.

  1. Ask the question systematically. Faced with any claim — online, in the media, in an AI-generated conversation — ask yourself: "Is this a verifiable fact, an opinion, or a hypothesis?"
  2. Define your rules before using AI. Don't let AI decide how it will respond to you. Explicitly submit your values and epistemic requirements at the start of a conversation. This is precisely what Hum_ID allows you to do. And when a text leaves you uncertain, or you wish to improve your own writing, Hum_SCAN applies those same rules to any content.
  3. Beware of intellectual comfort. An AI that systematically validates your ideas is not a useful AI. It is an AI that gently manipulates you. Actively seek out contradiction and opposing viewpoints.
  4. Preserve your own reasoning. Use AI to enrich your reflection, challenge your ideas, or structure your thoughts, but ensure the primacy of your own conclusions. Surrendering one's discernment to the algorithm is the most insidious form of intellectual defeat.
  5. Demand transparency. Knowing that you are talking to an AI, understanding its known limits and biases, knowing the interests of the platform that deploys it — these are rights, not privileges.
  6. Pass on these reflexes. Today's children and teenagers are growing up in an environment where the distinction between authentic content and generated content will become increasingly difficult. Equipping them now is not optional.

What we defend

HUMANITY.NET does not defend a technology, does not defend a company, and does not defend an ideology.

HUMANITY.NET defends a fundamental capacity: that of every human being to distinguish the real from the fabricated, to form their own beliefs freely, and to use the most powerful tools of their era without becoming their instrument.

Real or not real? The simplest question is often the most radical.