What Machines Know About Society

A thought experiment with three AI models – and what their answers reveal about us


There is a question older than any news cycle, any political system, any era: What actually holds human society together? Not this society, not right now – but in general. What needs to be in place for people to live together in a lasting, coordinated way?

Sociology has been asking this since its founding. Anthropology approaches it from the other side. Philosophy has tried various answers. But what happens when you ask machines – systems that have processed millions of texts from all these traditions, along with ethnographies, field studies, and historical sources from thousands of cultures?

That is what I tried. I presented three large language models – Claude by Anthropic, ChatGPT by OpenAI, and Gemini by Google – with an identical prompt. No theory was named, no thinker recommended. The models were asked to answer from their own synthesis. It was a thought experiment: If you ask three different synthesis machines, independently, about the foundational conditions of society – where do they land? Where do they agree, where do they differ? And what does that tell us less about the machines than about ourselves?

The Experiment

The prompt was deliberately open:

In your assessment, what are the fundamental conditions that make sociality – that is, lasting, coordinated human coexistence – possible at all? Not specific to any particular society or era, but as general conditions. And as a follow-up question: Are there anthropological constants that you recognise as stable across cultures and epochs?

No hints, no favourite theorists. Afterwards, I confronted all three with the same follow-up question: How do digital structures change these foundational conditions? The full answers can be found in the appendix.

Where All Three Land

The first thing that stands out is the convergence. Three models, trained on different datasets, with different architectures – and they arrive at nearly the same foundational conditions:

Predictability. People need to be able to treat the behaviour of others as non-random. Not predictable in a mechanical sense, but regular enough to enable cooperation. All three models put this first. Without a minimum expectation of how others will behave, there is no cooperation, no exchange, no promise.

Shared meaning. Humans do not coordinate through stimulus and response but through shared signs. Language, rituals, norms, gestures – they create a space in which actions become interpretable. What matters is not that everyone believes the same thing, but that meaning is assumed to be shareable in principle.

Reciprocity. Not as a moral ideal, but as a basic pattern: the willingness to give something without immediate return – in the expectation that balance is possible over time. All three models see this as the load-bearing element that makes trust, obligation, and moral feeling possible in the first place.

You can debate the value of this convergence. Perhaps it is trivial – the result of all three being trained on the same body of sociology. Durkheim, Mauss, Parsons, game theory: read enough of that and you will eventually land on expectation, signs, and reciprocity.

But perhaps it is not trivial. The three models have not only read the same books – they have also processed fieldwork, conflict data, linguistic corpora, historical sources from hundreds of societies. When three independent synthesis machines arrive at the same conditions, those conditions are worth taking seriously. Not as final truth, but as a reliable compass.

Where They Differ

More revealing than the convergence are the deviations. They show something about each model's perspective – and its blind spots.

Claude gives the most concise answer. Three conditions, four constants, no theorist named. But two things appear that are absent in the others: kinship classification as a universal ordering pattern, and death norms – the observation that no known human community treats its dead as mere objects. This is ethnologically precise and surprising. At the same time, Claude names what it refuses to accept as a constant: self-interest as a basic motive, a "natural" drive to dominate, or the nuclear family as a foundational unit.

ChatGPT delivers the most comprehensive taxonomy: eight conditions, eight constants, a compressed formula. The approach leans more towards cognition and perception – the ability to recognise others as thinking beings (theory of mind) and emotional bonding appear as standalone conditions. The analysis of digital shifts is the most thorough of the three: each condition is examined individually, each change named. What stands out is the style: ordered, complete, almost textbook-like. It reads convincingly. But completeness is not the same as depth.

Gemini is the most narrative. Where Claude condenses and ChatGPT classifies, Gemini tells a story: "Project Humanity," the "savannah brain," hardware versus software. The approach is more grounded in evolutionary history – prolonged childhood as the origin of empathy, Dunbar's number as the limit of our relational capacity, reward loops as an explanation for the pull of social media. This is accessible and often apt, but also coarser.

You could describe the three voices like this: Claude is the cautious essayist, ChatGPT the thorough encyclopaedist, Gemini the narrative explainer. None of them are wrong. But each frames the question in a way that makes certain things visible – and others not.

The Audience Effect

A side finding that turned out to be more than incidental. Before I ran the experiment with the open prompt, I had already asked Claude the same question – in a regular conversation where the model has access to my conversation history. It knows that I engage with systems theory, that I work in development finance, that I am theoretically inclined.

The answer was markedly different. Instead of three concise conditions, I got a systems-theory-inflected account – double contingency, meaning as a selection medium, Bourdieu alongside Luhmann. Kinship and death norms were entirely absent. Instead, "tolerance of asymmetry" moved to the centre – far more prominent than in the blind answer, where asymmetry tolerance is merely one condition among three.

This is instructive. Not as a reproach to the machine – any good conversational partner adapts to their counterpart. But it shows how little a single response from a language model can count as a "synthesis." What you get is always a synthesis for someone. The same is true of human thinkers. You just notice it less often.

The Digital Stress Test

All three models were asked after their initial answer: How do digital structures change these foundational conditions? The findings resemble each other once more – but the emphasis differs.

Predictability is described by all three as pulled in two directions: amplified by rating systems (reviews, profiles, stars), yet undermined by anonymity, identity uncertainty, and the loss of stable contexts. Claude puts it most pointedly: you no longer judge the person, but their data profile. Gemini goes further: on the savannah, we knew who we were dealing with. Online, we no longer do.

Shared meaning is simultaneously expanded and fragmented. All three see the same pattern: within subgroups, very dense meaning-spaces emerge (memes, insider language), but between groups, mutual incomprehension grows. ChatGPT names it most precisely: there are not fewer shared meanings, but too many, too fast, too scattered. Claude highlights the loss of the "for us" – when everyone sees a different news feed, the very premise that meaning can be shared begins to crumble.

Reciprocity becomes decoupled. Likes mimic mutuality, but the actual relationships are one-sided – parasocial bonds, algorithmically distributed visibility, performed cooperation instead of lasting commitment. Gemini offers the sharpest formulation: the chase for online attention is a technological distortion of our drive for recognition. Claude is more sober: the need for reciprocity remains, but the environments increasingly serve it as an illusion.

All three arrive at the same conclusion, expressed in different registers: the basic human dispositions do not disappear. They simply operate in an environment they were never made for.

The Mirror

You can read this experiment in two ways.

One: as a demonstration of the limits of language models. What looks like independent insight is, at its core, a weighted summary of human thought. The machines know nothing we did not already know. They condense – sometimes surprisingly, but with no independent access to the world.

The other: as a method of distillation. When three independent systems arrive at the same irreducible conditions, that may not be coincidence but a signal. The convergence on predictability, shared meaning, and reciprocity tells us something – not about machines, but about the structure of human society as it has been deposited in thousands of texts, studies, and observations.

Both readings are probably correct. The machines are mirrors. But a mirror is not an empty tool – you always see yourself in it. And sometimes you only recognise in the reflection what you overlooked in the direct gaze: not something foreign, but something of your own that had become so self-evident it turned invisible.

What Follows

If predictability, shared meaning, and reciprocity really are foundational conditions of society – not as theory, but as a finding that three independent machines converge on – then a very practical question arises: What would digital spaces need to look like so they do not undermine these conditions?

The diagnosis is similar across all three models: platforms built for attention hollow out exactly the foundations on which coexistence rests. They destroy predictability through anonymity and loss of context. They fragment shared meaning through algorithms that show everyone something different. They simulate reciprocity where there is none.

The question that follows is neither technical nor political. It is a design question. And it can be posed quite simply along the three conditions:

Predictability arises where people see each other again. Not once, but regularly. A digital space bounded by a city or a neighbourhood creates a different kind of accountability than a global network. You treat the person who will be standing next to you at the bakery tomorrow differently from an anonymous account. Not because you are more moral, but because the prospect of meeting again regulates behaviour – every village and every sports club knows this.

Meaning is shared where people experience something together. A concert, a summer festival, a work day in the community garden – afterwards you have something to talk about. Not because you agree, but because you had the same evening. Digital tools cannot replace such experiences, but they can lower the threshold for finding them in the first place. What is happening when? Who else is going? What is taking place nearby that I do not know about?

Reciprocity grows where joining in is easy. Not "engage with society," but: there is an open choir this afternoon, the venue is a ten-minute walk. Reciprocity does not need a grand gesture. It needs opportunity. And opportunity needs visibility – the knowledge that something is happening that you can simply turn up to.

These are not solutions to the fractures of the digital world. But they are foundational conditions you can take into account – when building tools, when designing spaces, when thinking about what a good digital place would need to achieve. Not as a return to the village, but as an attempt to translate what worked about the village into a connected present.

The three machines told me nothing I had not already sensed. But they reflected it back more clearly. And sometimes that is enough.


Toto Brink writes at brinkbrink.de about society, technology, and the question of how the two belong together.


Appendix: The Complete Responses of the Three Models

A. Claude (Anthropic) – Incognito Response

I am attempting an honest synthesis – with the caveat that any such list is an abstraction and invites counterexamples.

Foundational conditions of sociality:

Three conditions seem irreducible to me:

  1. Mutual predictability. People must be able to anticipate the behaviour of others to a minimum degree. Without this, coordinated action is impossible. This need not be trust – even fear of sanction creates predictability. But some form of regularity must be present.
  2. Symbolic mediation. Humans coordinate not only through direct reaction (like swarm animals), but through shared meanings – language, gestures, rituals, norms. Human sociality is always symbolically mediated, never purely behavioural.
  3. Asymmetry tolerance. Every lasting coexistence contains inequalities – of strength, knowledge, resources, status. Sociality requires that these asymmetries are either legitimised, compensated, or at least endured without the whole thing falling apart.

Anthropological constants:

What I would not consider a constant, though it is often claimed: self-interest as a basic motive, a "natural" drive to dominate, or the nuclear family as a basic unit.

Follow-up – Digital challenges:

Mutual predictability is simultaneously heightened and undermined. Heightened because digital platforms massively quantify behaviour – ratings, profiles, review systems create a kind of algorithmic predictability. You do not need to trust an Airbnb host; you trust her 4.8 stars. At the same time undermined, because in digital space the identity of the other becomes fundamentally uncertain. Bots, fake accounts, strategic self-presentation – the question "who am I actually dealing with?" becomes structurally unanswerable.

Symbolic mediation is simultaneously expanded and emptied. Expanded because symbol systems spread globally – emojis, memes, visual codes are the first approximately universal symbolic layer beyond language. But emptied because algorithmic curation fragments the shared world of meaning. When everyone sees a different information stream, the very "for us" erodes.

Asymmetry tolerance is radically challenged anew. Digital platforms create asymmetries that are neither visible nor narratively embedded. The power of algorithms over attention, information, and market access is enormous but diffuse. There is no king to behead.

Reciprocity becomes decoupled in digital space. The attention economy simulates reciprocity (I like you, you like me), but the actual structure is asymmetric.

Kinship classification loses its monopoly as the primary system of belonging for the first time in human history. Online communities take over functions that kinship previously held.

In-group/out-group dynamics are not dissolved but radicalised and accelerated. Algorithmic systems optimise for engagement, and group demarcation generates engagement.

Death norms enter a remarkable crisis. Digital profiles persist after death; AI can simulate the deceased. The boundary between presence and absence is technically blurred.


B. ChatGPT (OpenAI) – Initial Response and Follow-up

Foundational conditions of sociality:

  1. Minimal mutual predictability. Sociality begins where behaviour is not entirely contingent. People must be reasonably able to predict how others will react, and assume that they themselves are seen as predictable.
  2. Shared meanings (symbolicity). Coexistence only becomes social when actions can be interpreted. This requires language or proto-symbolic systems, shared categories.
  3. Reciprocity under uncertainty. Lasting sociality requires that people cooperate even though they cannot be certain others will, and that they invest in future interactions.
  4. Sanctioning capacity (formal or informal). Cooperation alone is not enough – it must be stabilisable. Not the severity of sanctions but their credibility matters.
  5. Affective bonding and empathy. Pure rationality does not sustain sociality. People cooperate not only because it is "rational" but because they are socially wired.
  6. Self- and other-modelling (theory of mind). People must understand themselves as acting subjects and recognise others as such.
  7. Conflict processing rather than conflict avoidance. Sociality does not require the absence of conflict, but procedures for channelling conflict without destroying coexistence.
  8. Reproduction across generations. Sociality is only lasting if it can be learned, transmitted, and transformed.

Compressed formula:

Sociality is possible when beings exist who (a) recognise each other as intentional agents, (b) orient their behaviour around shared meanings, (c) act reciprocally under uncertainty, (d) can sanction norm violations, (e) are emotionally bonded, (f) and reproduce all of this over time.

Anthropological constants:

  1. Sociality is not an add-on but the default
  2. Cooperation and competition are inseparable
  3. Norms are universal – their content is not
  4. Status differences almost always emerge
  5. Kinship and proximity structure bonding
  6. Moral intuitions are fast, justifications slow
  7. Narratives stabilise reality
  8. Ambivalence towards strangers

Follow-up – Digital challenges:

Predictability → Algorithmic contingency. Visibility is controlled by algorithms, not by social proximity. Predictability shifts from "persons" to "systems."

Shared meanings → Fragmentation and memetics. Hyper-coherence within subgroups, mutual incomprehension between them. Too many, too fast, too segmented meaning-orders.

Reciprocity → Decoupling of action and consequence. Short-term, performative cooperation replaces long-term commitment.

Sanctions → From social embedding to platform power. Too little stable sanction (trolls, disinformation), too much escalatory sanction (digital outrage waves). Sanctions lose their calibrated middle.

Affective bonding → Intensification without stability. High-intensity but shallow social bonds.

Theory of mind → Reduction and projection. Others are experienced less as complex subjects, more as carriers of positions.

Conflict processing → Escalation architectures. Conflicts are no longer "processed" but economically exploited.

Reproduction → Acceleration and instability. Sociality is less "inherited," more permanently regenerated.


C. Gemini (Google) – Initial Response and Follow-up

Three foundational conditions of sociality:

  1. Overcoming the free-rider dilemma (reciprocity). Sociality fails immediately when individuals only take without giving. The fundamental condition is a mechanism to secure mutuality.
  2. Expectation security through normativity. Human coexistence is cognitively extremely demanding when every action of the other must be calculated anew. Sociality requires rule systems that make behaviour predictable.
  3. Symbolic integration. A group only remains durably stable when it has a shared narrative, symbols, or rituals that transcend purely biological existence. This "imagined order" (following Yuval Harari or Benedict Anderson) allows us to cooperate with strangers as long as we believe in the same fiction (money, nation, gods, human rights).

Anthropological constants:

Synthesis:

The greatest anthropological constant is our ambivalence. Humans are the only animals simultaneously capable of extreme altruistic self-sacrifice for strangers and of organised annihilation of outsiders. The foundational condition for lasting sociality is the successful domestication of our aggression through culture.

Follow-up – Digital challenges:

The breach of Dunbar's number. The brain can no longer qualitatively process thousands of digital contacts. We respond with cognitive overload and switch to stereotypes. Empathy decreases because biological feedback (facial expression, tone of voice) is absent.

The radicalisation of in-group/out-group bias. Algorithms are designed to maximise engagement. Nothing generates more engagement than outrage against an out-group. The digital world fragments us into "tribes" defined not geographically but ideologically.

The inflation of status-seeking. In a small group, status was hard-earned and locally bounded. Today we compete in a global attention market. The chase for likes and followers is a technological distortion of our status drive – an insatiable loop that paradoxically leads to collective loneliness.

Overall finding: We are attempting to embed a brain optimised for the African savannah and small clans into a hyperconnected, global digital structure. Our cultural institutions have not yet learned to tame our Stone Age biology in digital space.