Consciousness as Recursive Social Modeling
Anton Gladkoborodov, June 11, 2025
While working on a separate project focused on decision-making in social environments, I arrived at a conclusion that wasn’t part of the original goal: that consciousness may not be a primary feature of brains, intelligence, or biology at all. Instead, it appears as a structural consequence of recursive social modeling—a byproduct of simulating other agents who, in turn, simulate you.
At first, I treated this idea as a side effect and cut it. But it kept resurfacing. Over time, I came across research on animal cognition that reinforced the intuition: self-awareness isn’t an internal phenomenon—it’s a structural one. So I decided to explore it directly in a separate essay.
The core claim is simple: consciousness doesn’t emerge from introspection or sensation. It emerges from the structural demands of social prediction. To navigate a world where outcomes depend on how others interpret your actions, you must build a model of yourself.
1. Simulation as Predictive Engine
One of the main features of the human brain is its simulation engine. To navigate the world, humans need to simulate what might happen next. The brain's primary function isn't merely to perceive the environment or respond passively—it actively forecasts potential outcomes, predicts behaviors, and evaluates complex future scenarios. This predictive ability is crucial not only for immediate responses but also for the long-term strategic planning essential to survival.
The simplest predictions are simulations of the linear physical world. When a ball flies toward you, you don't passively wait to react; your brain anticipates its trajectory and positions your hand to catch it.
Prediction becomes more intricate when you consider how a system evolves in response to your actions. When assessing the strength of a branch before climbing, you simulate your body's interaction with the physical environment.
Complexity grows when interactions involve other agents. When you see a snake, you don't just predict its trajectory—you simulate its intentions. Can it bite me?
It grows further still when you consider agents' reactions to your actions. When you encounter a barking dog, you simulate how it will respond to what you do. Will it bite if I move closer?
However, the most intriguing and complex simulations occur when we engage with social agents—when we don't just simulate how they react to our actions, but how they are trying to predict what we will do next.
2. Social Simulation and Recursion
Social contexts dramatically escalate the complexity of simulation. In physical environments, predicting outcomes means forecasting how systems behave. But in social environments, outcomes depend on interpretation. Your action doesn’t just cause an effect—it triggers a perception. And to predict perception, you must model the perceiver and the context they perceive. Thus, predicting social outcomes inherently demands recursive modeling.
This process initiates a recursive loop of social simulation:
To simulate social outcomes, you must simulate how other agents will respond.
To simulate how they will respond, you must simulate how they perceive and model you.
To simulate how they model you, you must maintain a model of yourself as seen from the outside.
This loop doesn’t just add complexity—it fundamentally changes the kind of simulation being run. It requires a second-order model: a model of a model. And the recursive structure introduces a new artifact into the system—a self-model—not for introspection, but for external projection.
Another way to frame it: Suppose you want to tell a joke. You don't just say it without considering the reaction—you simulate how another agent might react. To predict their reaction, you must consider a model of the other person, their preferences, sense of humor, and mood. But humor also depends on context. The same joke might be hilarious in one scenario but offensive or awkward in another. Context here isn't just the place, situation, or people around—it also includes you. Your identity, reputation, and relationship to the listener are essential components of the context. Therefore, to correctly anticipate the reaction, you must simulate yourself as part of the scenario. You aren't just predicting how someone reacts to a joke; you’re predicting how someone reacts to you telling that joke.
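The recursion in the joke example is easier to see written out. Below is a minimal sketch (purely illustrative, with hypothetical names like `Listener`, `SelfModel`, and `predict_reaction`, and made-up numbers) of why this prediction is second-order: your model of the listener must contain, nested inside it, the listener's model of you.

```python
from dataclasses import dataclass

@dataclass
class SelfModel:
    """The speaker as seen by the listener: reputation and relationship."""
    reputation: str
    relationship: str

@dataclass
class Listener:
    humor_tolerance: float      # 0.0 (easily offended) to 1.0 (laughs at anything)
    view_of_speaker: SelfModel  # the recursive part: their model of *you*

def predict_reaction(joke_edginess: float, listener: Listener) -> str:
    """Second-order prediction: the outcome depends not just on the joke,
    but on who the listener believes is telling it."""
    allowance = listener.humor_tolerance
    if listener.view_of_speaker.relationship == "close friend":
        allowance += 0.3  # the same joke lands differently coming from a friend
    return "laughs" if joke_edginess <= allowance else "offended"

# Same joke, different self-in-context, different predicted outcome:
friend = Listener(0.4, SelfModel("known joker", "close friend"))
boss = Listener(0.4, SelfModel("new hire", "boss"))
print(predict_reaction(0.6, friend))  # laughs
print(predict_reaction(0.6, boss))    # offended
```

The point of the sketch is structural, not numerical: the self-model appears as a field inside the model of the other agent. Remove it, and the prediction collapses to first-order.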
This recursive loop is the origin of self-awareness. To simulate non-social, physical-world scenarios, you don't need a notion of self.
When you catch a ball, you predict its trajectory—you model the ball’s motion and your hand’s position.
When you climb a tree, you predict whether the branch can carry your weight—you model its strength and your body.
When you see a tiger, you predict whether it sees you as prey—you model an agent with goals, movement, and attention.
When you encounter a dog, you predict whether it will bite if you move closer—you model its behavior as a response to your actions.
None of these require a self. They require physical simulation, body awareness, and agent modeling.
But when you speak, you have to predict whether others will laugh, judge, or dismiss you—now your prediction depends on how they perceive you. You must model not just them, but also their model of you. At that point, a self-model becomes necessary—not for introspection, but for strategic forecasting. The self-model is required when future outcomes depend not on what you do, but on how others perceive you doing it.
This internal simulation loop—constantly modeling how others see us—isn’t vanity or insecurity. It’s a structural adaptation, essential for surviving in a world where social feedback shapes consequences.
And once the self-model exists, it doesn’t disappear when others leave. You can carry it alone. You can reflect on yourself, see yourself in the mirror, project yourself into imagined futures. You can simulate how you will be perceived—not just now, but tomorrow, or ten years from now. This is what enables solitary self-awareness, memory, and long-term planning.
We tend to think the concept of “I” is personal and internal. When we reflect on ourselves, it feels like an individual act—something that arises from within, independent of anyone else. That experience creates a powerful illusion: that the self is a thing in itself, separate from the world around us.
But consciousness is not private by design—it is social by origin. The self-model was never built in isolation. It emerged because we were modeled, named, judged, praised, and interpreted by others. Without that loop, there is no “I.” The experience feels internal only because the external structure has been internalized.
3. Self-Model and the Mirror Test
This account of consciousness—as a recursive social simulation engine—offers an elegant explanation. But can it be tested? Is there any external sign that this structure exists, or fails to form?
One behavioral consequence of self-awareness is mirror self-recognition. You can’t recognize yourself in a reflection unless you have a concept of “I”—not just a body, but a stable identity model you can map onto visual input. If the self-model is absent, the mirror shows only another agent. If the self-model is present, the mirror reveals you.
The mirror test doesn’t prove consciousness. But it hints that a threshold has been crossed. A model of self now exists—structured enough, stable enough, and internalized enough to connect the image to identity.
Mirror self-recognition (MSR) has been studied across dozens of species using variations of the “mark test”—in which a visible mark is placed on an animal in a location only viewable through a mirror. The question is: does the animal understand that the reflection is itself?
Most animals do not. Dogs, bears, and tigers, for example, fail the test. They may interact with the mirror, show curiosity or aggression, but they do not touch or investigate the marked area on their own bodies. And yet these are animals with sophisticated behavior: they can solve problems, form memories, and adapt flexibly to changing environments. Intelligence alone, it seems, is not enough.
In contrast, a small set of species reliably do pass the test: chimpanzees, bonobos, orangutans, elephants, dolphins, orcas, magpies, and some whales. These animals exhibit consistent self-directed behaviors in front of mirrors, including inspecting the mark or using the mirror to explore body parts otherwise hidden from view.
What unites them is not a particular brain structure or level of intelligence—but social complexity. These species live in cooperative, fluid social systems. They negotiate roles, track status, build reputations, and engage in recursive modeling of others' behavior. In other words, they are embedded in social simulations—and must track not just what others do, but what others think of them.
The pattern is striking: mirror self-recognition correlates not with general intelligence, but with the structural presence of recursive social modeling. The mirror test doesn't directly measure consciousness, but it does reveal when a self-model is present—and that model appears to emerge only in species where navigating social perception is a structural necessity.
This leads to a deeper, and perhaps counterintuitive, conclusion. Wolves and lions also fail the mirror test—despite being social mammals that live in groups. Packs and prides hunt together, defend territory, and share parenting roles. Their systems are organized around dominance, not mutual modeling. They are hierarchical, not recursive.
These are non-recursive social structures: stable, effective for coordination, but structurally closed. Status does not evolve through signaling or reputation-tracking—it is dictated by strength and reproductive access. Social position is not constructed or reflected—it is imposed. There is no need to ask, “How does the alpha see me?”—because the roles are fixed, enforced through force, and leave little room for negotiation.
In such systems, a self-model is not structurally required. There is no strategic value in simulating how others see you, because your position is not a function of perception—it is a function of domination. And where recursive modeling is unnecessary, consciousness, in the self-referential sense, does not emerge.
Gorillas live the same way. Dominance structure. One silverback, passive followers. In the wild, they fail the mirror test. But some gorillas do recognize themselves—and they almost always come from human environments.
These are not random exceptions. They are structural interventions. Raised in rich, interactive, emotionally responsive settings, these gorillas learn language. They receive feedback. They are mirrored—not just physically, but socially. And over time, they begin to see themselves—not just as bodies, but as agents inside a shifting social world.
That’s the threshold. Gorillas have the cognitive capacity. But in the wild, the structure doesn’t push it.
Change the structure—and consciousness appears. Not as magic. Not as essence. But as a response. A recursive adaptation to recursive pressure.
And in humans, the same pattern holds. Children typically don’t recognize themselves in mirrors until around 18 to 24 months of age.
In extreme cases—such as severe social neglect—this process can stall. Children may grow physically, but never pass the mirror test. Without social mirroring, the internal loop may never form. The brain matures—but the self does not.
It is non-dominance-based social structure that creates the “I am”—not intelligence, not biology, not cognitive capacity, not God.
Prediction
If consciousness is a product of recursive social modeling, then animals from self-aware species—like bonobos, dolphins, or elephants—should fail the mirror test if raised without social interaction. That means: no interaction with their own kind, no exposure to human names or roles, no observed feedback about identity or status.
The predicted outcome is simple: if the internal simulation loop never forms, the animal will not recognize itself. It will behave like a non-social species—treating the reflection as another creature, not as itself.
In other words: consciousness, in the reflective, self-aware sense, is not automatic. It is not a function of having a brain. It is not triggered by age. It is a function of being seen—modeled, named, interpreted, responded to. The self does not emerge from within. It emerges from between.
4. Cognitive Models: From Body to Concept
Most theories confuse body-modeling with self-modeling. But they are not the same. You can simulate your body without simulating your self. When you predict whether a branch will hold your weight, whether a falling rock will break your leg, or whether you can jump over a stream, you're using a body-model—a functional map of your physical structure, limits, and dynamics.
But that’s not a self. A self-model emerges only when the outcome of your action depends on how others will interpret you. Only when your survival, opportunity, or status is shaped by how other agents perceive and respond to you do you need a recursive model of yourself—not as you feel, but as you exist inside the minds of others.
But there are more than just two levels.
Some animals display agent-awareness: modeling others without modeling themselves. A dog, for example, may learn how to manipulate its owner—whining, nudging, or performing tricks to get food. This behavior implies a model of the human: the dog predicts how the human will respond. But it doesn’t appear to have a model of itself as seen by the human. It does not anticipate shame, status, or deception relative to its identity. It is aware of others, but not of itself-in-others.
In other words, agent-awareness represents a first-order recursion (direct prediction)—predicting another agent’s behavior. Self-awareness is second-order recursion (predicting prediction)—predicting how another agent predicts your behavior. This recursive step, modeling another agent’s perception of you, is precisely where self-awareness emerges.
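The difference between the two orders can be written down directly. The following toy sketch (illustrative only; the function names, actions, and return strings are invented for this example) shows that first-order prediction needs only a model of the other agent, while second-order prediction takes the other agent's model of you as an input:

```python
def predict_other(my_action: str) -> str:
    """First-order recursion (agent-awareness): predict another agent's
    behavior as a direct response to an action. No self-model required."""
    return "gives food" if my_action == "whine" else "ignores me"

def predict_their_prediction_of_me(my_action: str, their_image_of_me: str) -> str:
    """Second-order recursion (self-awareness): the other agent's response
    runs through their model of *me*, so that model is now an input."""
    if their_image_of_me == "trustworthy":
        return "takes my action at face value"
    return "suspects a motive behind my action"

# A dog can run the first function; only the second needs "me as others see me".
print(predict_other("whine"))
print(predict_their_prediction_of_me("offer help", "trustworthy"))
```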
Let's look at another layer.
If a log falls and hits a bear, the bear will attack the log. This isn’t random aggression—it reflects a pattern response. The bear treats the impact as if caused by an agent. It seems to carry a default agent model—a basic assumption that sudden force comes from something alive. But it doesn’t build a concept of the log. It doesn’t simulate inanimate cause. It simply reacts.
Humans still carry this reflex. If you stub your toe on the leg of a bed, you might get angry at the bed—just for a moment. You treat it like a hostile agent. But then the higher model kicks in. You remember it’s just an object. That switch is only possible because you have a concept of an object—not just a shape in space, but an inanimate structure with no goals or intentions.
The bear never makes that switch, because the bear doesn't have concept-awareness. Concept-awareness—the ability to group, abstract, and manipulate categories like country, company, tool, or justice—doesn’t emerge from physical interaction or social mirroring alone. It requires a structure that compresses symbolic patterns across instances, and uses that compression to navigate or predict outcomes. Language enables concept-awareness.
Language doesn't just label things we already understand. It enables the very formation of concept categories. That is: some types of awareness, especially abstract class-based awareness, cannot emerge without linguistic encoding. Without language, concepts like “country,” “company,” or “mammal” remain unformed. You might perceive a dog and a cat as separate models, but the common abstraction “mammal” doesn't arise naturally from direct experience—it requires semantic compression across multiple objects and interactions. Language is the compression protocol for shared structure.
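One way to picture semantic compression is as an operation over instance models. The sketch below is a toy illustration (the feature dictionaries and function name are made up for this example): a label like “mammal” names the structure shared across instances, something no single encounter with a dog or a cat provides.

```python
# Without language: each animal is its own separate model.
instance_models = {
    "dog": {"fur": True, "warm_blooded": True, "nurses_young": True},
    "cat": {"fur": True, "warm_blooded": True, "nurses_young": True},
    "snake": {"fur": False, "warm_blooded": False, "nurses_young": False},
}

def shared_features(models: dict, names: list[str]) -> dict:
    """Keep only the features every named instance has in common: the abstraction."""
    shared = dict(models[names[0]])
    for name in names[1:]:
        shared = {k: v for k, v in shared.items() if models[name].get(k) == v}
    return shared

# The label "mammal" names the compressed structure, which then licenses
# predictions about a new, unseen member of the class.
mammal = shared_features(instance_models, ["dog", "cat"])
print(mammal)  # {'fur': True, 'warm_blooded': True, 'nurses_young': True}
```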
Just as self-awareness emerges from recursive social modeling, concept-awareness emerges from symbolic abstraction. And just as the “self” becomes the internal map for navigating social structures, concepts become the internal map for navigating invisible structures—like law, math, or belief.
5. Structural Necessity of Consciousness
We don’t need to mystify consciousness to explain it. We don’t need panpsychism, quantum phenomena, simulated universes, or God. The structure itself is enough. Once you trace the recursive loop—from physical prediction, to social simulation, to self-modeling—you see that consciousness is not a spark.
Self-awareness emerges when the structure demands it. And it doesn’t emerge when it doesn’t. That’s why it isn’t universal. Why most animals do not have it. Why some do. Why even in humans it doesn’t appear at birth.
It’s not a metaphysical essence—it’s a social phenomenon. The ability to simulate consequences in complex social environments. And this is why we miss it: we study it by looking deeper into the brain, rather than outward—toward the structure that built the loop in the first place.
References
Gallup, G. G., Jr. (1970). Chimpanzees: Self-recognition. Science, 167, 86–87.
Plotnik, J. M., de Waal, F. B. M., & Reiss, D. (2006). Self-recognition in an Asian elephant. Proceedings of the National Academy of Sciences, 103, 17053–17057.
Reiss, D., & Marino, L. (2001). Mirror self-recognition in the bottlenose dolphin: A case of cognitive convergence. Proceedings of the National Academy of Sciences, 98, 5937–5942.
Prior, H., Schwarz, A., & Güntürkün, O. (2008). Mirror-induced behavior in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology, 6(8), e202.
Anderson, J. R., Gallup, G. G., & Shillito, D. J. Do gorillas recognize themselves in mirrors? Animal Behaviour.
Rochat, P. (2003). Five levels of self-awareness as they unfold early in life. Consciousness and Cognition, 12, 717–731.
Gallup, G. G., & Anderson, J. R. The mirror test. In Self and Identity. Routledge.
Marino, L. Consciousness in cetaceans. Behavioral and Brain Sciences.