Steven Broschart
Cognocracy

The new era.

We are beginning to outsource our thinking – to machines, to platforms, to the countries that develop them. Cognocracy is the era that emerges from this: a structural shift in the individual, in society, and in the global order of power.

Cognitive science · Anthropology · Media sociology · Geopolitics · AI ethics · Education debate
cognocracy.com ↗ · kognokratie.de ↗
Definition

What cognocracy means

Cognocracy refers to an era in which thinking is increasingly handed over to machines. Not because anyone forces us, but because it is more efficient – and because no one can afford to fall behind when everyone else is moving ahead. This shift unfolds quietly. Each individual step seems voluntary and, taken on its own, unproblematic. In aggregate, however, a new order emerges that no one consciously decided on.

The term is not synonymous with algocracy, surveillance capitalism or disinformation. Those concepts ask who exercises power over us, who watches us or who deceives us. Cognocracy asks a different question: what happens to us when we delegate thinking – even when no one forces us, no one watches us and no one deceives us?

This can be shown in an everyday example. Anyone who has an AI draft or revise a letter sees exactly what the machine writes. There is no deception, no manipulation, no hidden agenda. And yet something hybrid emerges: a mixture of human intention and machine imprint that the recipient reads as purely human. If this already happens in the transparent case, the question of the non-transparent cases becomes all the more pressing.

Cognocracy is therefore not a concept of manipulation. It describes a structure. It operates even where freedom, transparency and good intentions are given. That is precisely what makes it so hard to grasp – and at the same time necessary to give it a name.

Anthropology

What becomes of the human being who delegates thinking?

AI tools promise efficiency. This promise is not cynical – it is justified. Anyone who wants to work faster, more clearly and more precisely reaches for the machine. No one is forced, no one is deceived. And yet, with each use, something changes in the person who uses them. This change unfolds in three stages – and each goes deeper than the last.

Stage 1 – Modulation and the transparency fallacy

A letter, an email, a draft reply – I write, the machine revises. What reaches the recipient is a hybrid: my intention plus the emotional colouring of the machine. The recipient does not know this and attributes both to me. An anger I felt appears as a measured complaint, a hesitation as a clear, composed position.

The point is not that the machine deceives. It does nothing covert – I see what it writes and consciously release it. And yet influence emerges. If this already happens in the transparent, voluntary and useful case, one can imagine what happens in non-transparent situations. Cognocracy is therefore not, primarily, a problem of manipulation, but a problem of structure.

Stage 2 – Adopting the structure of thought

Anyone who works regularly with large language models gradually adopts their analytical linguistic structure into their own thinking. Sentences become clearer, arguments follow the model's patterns. Spontaneous, intuitive or idiomatic phrasings become rarer. This is not a loss that can be quantified – it happens quietly, over months.

Language structures thought. When many users adapt to the same models and thus to the same structures, what changes is not only how we write, but also how we break down problems, attribute causes and draw conclusions. A particular form of analytical clarity gains ground – while other modes of thinking, more associative, pictorial or contradictory, recede into the background.

The medical gain – and why it does not defuse the problem

It would be wrong to describe delegation as loss in principle. In some areas the gain is considerable. Medical diagnostics is a good example: in the past, Google was often consulted on health symptoms – with the well-known effect that a cold could quickly turn into lymphoma and every finding ended in the worst-case scenario. Today's AI systems, working from symptoms and lab results, can respond more accurately than many time-pressured family physicians. In certain cases the AI diagnosis is even superior to the human one.

This is not a marginal effect, but potentially life-extending. Anyone who ignores it is engaging in cultural pessimism rather than analysis. It is precisely because the gain is real that atrophy becomes a problem. When machines decide better than we do in more and more areas, no pragmatic reason remains to keep training our own thinking there. The paradoxical consequence: we lose nothing because machines are bad – but because they are good.

Stage 3 – Complete outsourcing and atrophy

What happens when we outsource processes we no longer professionally understand? First: we become replaceable in those areas. Second, and more serious: we unlearn the ability itself, because there is no longer any need to practise it. Like an untrained muscle, the competence atrophies. With it, the motivation to think at all disappears – because every effort feels like waste compared to the machine.

This is no distant future vision. It is the endpoint of a movement that began with small, sensible efficiency gains. No one actively decides to become mentally lazy. Everyone simply decides, in each individual case, to choose the more efficient path. The aggregation of these individual decisions produces atrophy.

The psychopathic asymmetry

Across all three stages, a particular problem becomes visible in the emotional realm. AI can display emotions – it can soothe, encourage, sympathise, mourn. But it can feel none of it. What it shows is display, not resonance. In emotional expression, it functions like a psychopath: affect as a means to an end, not as the expression of an inner state.

As long as we use machines for factual tasks, this remains unproblematic. It becomes serious as soon as we delegate emotional matters to them – relationship questions, grief, life decisions or ethical conflicts. In these areas, the interplay of emotion and rationality is decisive for good decisions.

An empathic answer without empathy is structurally something other than an empathic answer with empathy – even when both sound the same. The machine can console, but not mourn with us. It can advise, but not wrestle with us. Anyone who fails to perceive this difference loses not only consolation and orientation, but also the experience that someone is carrying the inner conflict alongside them.

Society

What becomes of a society in which everyone delegates?

The efficiency logic acts not only individually, but also collectively – and it acts compellingly. Anyone who, as an individual, does not delegate falls behind. Any company that does not delegate loses competitiveness. A country that restricts AI use loses economic ground. Out of these decisions emerges a coercion without any decision: no one establishes that society should delegate thinking – it does so anyway, because every individual decision pushes towards it.

Three consequences are foreseeable. The first is performance convergence. Wherever machines operate by clear rules – in programming, in standardised medical diagnoses or in legal research – the performance of users approaches what the machine produces. In many cases this is a gain: weaker programmers improve, underserved patients receive more accurate diagnoses, smaller law firms can keep up with larger ones. A democratisation of competence – with all its advantages.

The real point, however, lies not in the convergence itself, but in what happens at its edges. Machines have limits, blind spots and built-in tendencies. Where these come into play, drifts emerge: subtle shifts in language, logic, assumptions and value judgments that we often do not notice because we lack the comparison. Some are harmless, some useful, some manipulative – frequently all at once. They are not the result of deliberate influence, but emerge from the architecture of the systems themselves. That is precisely why they are so hard to recognise.

The second consequence is a collective synthesis. What begins on the individual level as hybridisation becomes, on a societal level, a structural blending of human and machine contribution. The distinction loses sharpness over time – and eventually loses meaning, too.

The third consequence concerns the standard of what counts as knowledge. When AI answers are more quickly available and seemingly more consistent than human reflection, perception shifts: what has not been confirmed by machine seems unreliable; what is produced by machine seems reliable – even where it is not. This shift unfolds quietly, but fundamentally changes what a society accepts as reliable knowledge.

Geopolitics

Who decides about the machines to which we delegate thinking?

AI systems are not abstract tools – they emerge in concrete countries. The leading models currently come predominantly from the United States, to a lesser extent from China. Anyone who, as a company, an authority or a private person, performs cognitive work on these systems outsources more than just thinking. Three developments unfold at the same time – and are rarely considered together.

First, data is outsourced. Every query, every document, every decision-preparation becomes part of a foreign infrastructure. Confidential business strategies, personal diagnostic queries, strategic considerations – all of it flows into systems whose operators are subject to different legal and political frameworks.

Second, value creation is shifted. A company in Germany that builds its productivity on US AI shifts part of its value creation into the United States – even if its registered seat formally remains here. What appears on the company level as an efficiency gain is, viewed globally, a continuous transfer of capital towards the AI providers.

Third, strategic sovereignty is lost. Anyone who becomes technologically dependent becomes politically vulnerable. When access to central tools can be restricted, made more expensive or tied to conditions, a country loses room for manoeuvre – and can hardly regain it without falling behind technologically.

Added to this is a second, equally structural dynamic. In the competition for AI supremacy, it is not the most reflective system that prevails, but the fastest. Safety questions, training data, societal consequences – anything that slows speed becomes a disadvantage. In this way, moral limits are systematically pushed outwards, because no actor can afford to be slower. A race-to-the-bottom dynamic emerges that cannot be contained by the market, only politically.

Unlike earlier arms-race dynamics, however, a decisive factor is missing here: an immediately perceptible threat. Nuclear weapons were regulated because their consequences were visible – Hiroshima, Chernobyl, Fukushima made it unmistakably clear what was at stake. AI works differently. It does not appear as danger, but as progress, comfort and promise. Its risks are diffuse, statistical and collective – and while we discuss them, we are already using them.

It is precisely this asymmetry that makes it so difficult to counter the dynamic. It would require an international agreement that warns of a threat hardly anyone immediately perceives.

This concerns sums in the billions – and a shift of power, capital and dependencies on a scale that finds little historical parallel. Whoever controls the cognitive infrastructure of a society controls more than its economy.

Responses

What the diagnosis suggests

The question is not whether we use AI. It has long been part of the world we live and work in. What matters is how we keep from losing ourselves in it – and what collective conditions are needed so that delegation does not tip into dependency. The diagnosis opens possible directions of thought, without yet committing to them.

Learning to recognise the drift

Before anyone can decide, they must understand the frame in which they are deciding. AI systems have tendencies – linguistic, methodological, ideological. They can lead into subtle drifts that often only show up as influence in aggregate. These drifts are not necessarily intended as manipulation, but they act like it: they shift perception, evaluation and conclusion without being noticed.

Anyone who wants to work with AI sovereignly must master two things. First, to perceive the drift – a trained eye for the tone, structure, omissions and implicit value assumptions of an answer. Second, to evaluate it – that is, to assess whether those tendencies serve their own goal or contradict it. Both presuppose a capacity for reflection that does not develop when all answers are treated as equally valid.

This competence is demanding. Not everyone is set up for it, and not everyone will want to cultivate it consciously. Nevertheless, it becomes the precondition for remaining able to act in a world shaped by AI. Anyone who fails to recognise the drift still makes decisions – but within a frame they do not see through. In the extreme case, their own role becomes superfluous without it being noticed.

From specialist to CEO of one's own life

The second shift concerns the relationship between thinking and deciding. Until now, both converged in the figure of the specialist: whoever mastered a field thought in it and made decisions within it. In cognocracy, these functions increasingly come apart. Thinking becomes delegable in many areas, deciding remains human – not as a matter of principle, but because responsibility cannot, structurally, be mechanised. An AI can suggest a diagnosis, but cannot be liable for the consequences of a treatment. It can recommend a strategy, but cannot bear the consequences of its failure.

Anyone who carries responsibility, however, must understand enough to do it justice. From this emerges a new core competence that can be described through the image of a CEO – less in the sense of power than of function. A CEO does not have to be the cleverest person in the room, nor master every discipline in detail. What matters is being able to do four things:

  • ask the right questions – precisely enough that people or machines can answer meaningfully;
  • judge answers – without having to produce them oneself;
  • make decisions on that basis – under uncertainty, with incomplete information and between competing values;
  • and finally, bear the responsibility for them – legally, morally and biographically.

These four layers cannot be delegated. Even if the asking of questions or the checking of answers can be technically outsourced, it remains open who judges their quality and answers for them. At the end of every delegation chain stands a person who takes responsibility. Here lies the structural limit of the machine.

The educational-policy consequence

This shift has far-reaching consequences for the education system. Schools and universities today primarily teach knowledge acquisition, methodological competence and professional specialisation. What is hardly trained, by contrast, is precise question formulation, critical examination of answers across disciplinary boundaries, decision-making capacity under uncertainty and a trained eye for drifts in machine answers.

In other words: the very competences that are becoming increasingly delegable are the ones that get cultivated, while those that are becoming indispensable are neglected. This imbalance is systemic. It cannot be remedied by individual reforms, but concerns the fundamental understanding of what education is for.

Where delegation hits its limits – and where we seek it ourselves

The distinction between thinking and deciding helps – but it conceals a second layer. Decisions, too, are already being delegated: to autonomous vehicles, algorithmic high-frequency trading, automated cybersecurity or autonomous weapons systems. In some cases this is functionally necessary, for example when traffic situations have to be decided faster than a human can react. In other cases delegation arises from competitive pressure: anyone who decides more slowly than the competition loses – market share, time, in the extreme case lives.

With that, the question shifts. It is no longer about whether decisions are delegated, but where responsibility lies. A machine that swerves in road traffic makes a decision – but it does not bear it. The responsibility lies with those who established that the machine is allowed to decide in this situation.

There is therefore not just one decision, but two: the operational decision in the moment and the constitutive decision about the delegation itself. The first can be mechanised, the second cannot – and it grows more important the more operational decisions are outsourced.

With that, the guiding question changes too: no longer 'What is the machine allowed to do?', but 'Which delegation decisions do we make consciously, which arise from competitive pressure, and what regulatory frameworks are needed to keep the two apart?'

For autonomous driving, answers are more tangible than for autonomous weapons systems – not least because, in the latter case, global competitive pressure can hardly be contained nationally. It is precisely here that the most problematic side of cognocracy becomes visible: it can lead us to hand over decisions that, under other conditions, we would not.

Sovereign infrastructure

If the geopolitics of AI is a structural problem, then part of the answer lies in the infrastructure itself. Open-core systems with transparent documentation, on-premise operation that preserves data sovereignty, and local and European models not subject to the jurisdiction of other continents – none of this is a political demand, but a technical precondition for sovereign AI use.

spectralQ.ai is my own approach in this direction – as practical evidence that high-performance engineering can be combined with sovereignty.

Consciously preserving spaces of resonance

In areas where emotion and rationality work inseparably together – relationships, ethical conflicts, life decisions, creative processes – it takes a conscious decision to use AI as a tool and not as a substitute.

This cannot be prescribed. But it can be decided individually.

Collective negotiation instead of coercion

The sociological point – coercion without any decision – can only be broken through deliberate decisions. Political regulation, industry-wide standards and the democratic negotiation of what we want and what we do not are arduous, slow – and without alternative.

At the same time, such agreements are less likely to come about than in comparable historical situations, because the threat eludes immediate experience. Regulating against a fascination is harder than regulating against a catastrophe.

Anyone who relies solely on the market accepts the race-to-the-bottom logic as a given. Anyone who relies on negotiation must be aware of the form of collective inertia they are taking on.

Die kognokrate Gesellschaft – Wie wir von KI transformiert werden (The Cognocratic Society – How AI Is Transforming Us) addresses these questions in depth. Out in 2026.

Contact

Ready for a conversation?
Looking forward to your message.
