The Responsibility of Deterministic Systems
It is really challenging to argue for free will. Every step forward in scientific understanding reinforces the notion that free will likely doesn’t exist. Physics, biology, neuroscience, and even the social sciences all point in the same direction: people are not in control of their actions.
Free-will skepticism is no longer an abstract philosophical stance—it is shaping real-world debates about crime, justice, and accountability, and its influence is beginning to appear in policy discussions. Does a thief deserve punishment if he has no control over his actions, desires, or real-life obstacles like poverty? Robert Sapolsky contends that because human behavior is entirely shaped by biology and environment, concepts like “punishment” and “desert” lose their traditional meaning. Sam Harris argues that recognizing the illusion of free will should fundamentally transform how we assign blame and approach moral responsibility.
Such arguments have their merits, but I want to show that a lack of free will does not undermine responsibility. The question seems paradoxical: how can we retain accountability without relying on control over our actions? I will argue that responsibility functions as a mechanism within a deterministic system rather than as a purely moral construct.
Hamartic Determinism
By definition, determinism rules out accountability since the future and the past are already predetermined. Before we can even discuss these topics, I need to introduce a framework of hamartic determinism that gives us room to breathe.
The evolution of a physical system with given initial conditions is governed entirely by the laws of physics. For example, a ball on a slope will inevitably roll downhill due to the gravitational potential. If you reset the configuration, the ball will roll downhill again. Such a system—the ball, the slope, and the gravitational field—is fully deterministic.
But physics also tells us that, at a fundamental level, the world is probabilistic. Quantum effects introduce elements of uncertainty that are not fully deterministic—random, if you will. Randomness does not imply free will; like determinism, it operates beyond our control. Instead, randomness breaks the strict cause-and-effect chain, introducing uncertainty into the process.
To describe this, let’s define a deterministic process that contains a small chance for error as Hamartic Determinism. The term comes from the ancient Greek word ἁμαρτία (hamartia), meaning "error," "mistake," or "flaw." In this sense, hamartic determinism can be thought of as determinism with a built-in error or flaw. In modern Greek, it also means "sin," which is funny in the context of free will and moral responsibility.
Why is a system still mostly deterministic even when randomness is introduced? While quantum events are random, their effects 'average out' at larger scales, making their influence negligible. This is why, on a macro level, systems remain effectively deterministic. Consider atomic clocks, the most accurate clocks ever built. They rely on quantum effects, yet they function deterministically at the macro level. Cesium atomic clocks measure the vibrations of cesium atoms as their electrons transition between two energy states. While the individual transitions are probabilistic (governed by quantum mechanics), the aggregate behavior of a large number of atoms averages out these uncertainties, making the clock's operation highly predictable at the macroscopic level. This demonstrates that hamartic deterministic systems can incorporate randomness without losing their overall predictability.
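To make the averaging intuition concrete, here is a minimal Python sketch (the transition probability is purely illustrative, not real cesium physics): each simulated 'atom' transitions at random, yet the aggregate fraction of transitions becomes effectively deterministic as the number of atoms grows.

```python
import random

# Illustrative only: each "atom" makes a probabilistic transition, but the
# aggregate signal of many atoms is almost perfectly predictable.
TRANSITION_PROBABILITY = 0.3  # invented per-tick chance of a transition

def fraction_transitioned(num_atoms: int) -> float:
    """Fraction of atoms that happen to transition in one tick."""
    transitions = sum(random.random() < TRANSITION_PROBABILITY for _ in range(num_atoms))
    return transitions / num_atoms

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} atoms -> observed fraction {fraction_transitioned(n):.4f} "
          f"(expected {TRANSITION_PROBABILITY})")

# With 10 atoms the result jumps around; with a million atoms it is effectively
# deterministic: a hamartic deterministic system at the macro level.
```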
Hamartic determinism represents an important departure from traditional determinism. Determinism suggests that every event in the future and in the past is fixed. If we had complete knowledge of a system’s current state and the “equation of everything,” we could, in theory, predict or reconstruct every event across time. All events are already there; the future and the past are written in stone. Hamartic determinism, on the other hand, introduces the concept of error—small uncertainties arising from probabilistic elements. While we can still predict general outcomes with almost 100% confidence, these errors mean we can’t recreate or foresee every detail with absolute precision. The future, therefore, is not entirely predetermined.
Extropic Systems
Let's consider the ball-slope system. An external event or observer can reconfigure the physical system, for instance by stopping the ball. This reconfiguration alters the system temporarily. If the external influence is removed, the system returns to its original behavior.
Now, consider another system where reconfigurations persist and create recurring future behavior. For example, arranging a spring and a door in such a way that the door closes automatically after being pushed. This modification creates a repeatable response to an input—pushing the door. Even if the external influence that introduced the 'arrangement' is removed, the system retains its new behavior. This type of reconfiguration resembles learning—the system 'learns' to consistently respond to future events based on past interventions.
Take a supermarket door as another example. It can be programmed to open whenever a large object approaches. This programming reconfigures the system’s future behavior, enabling consistent reactions to specific environmental inputs. The door doesn’t think or reflect—it simply reacts according to its programming. In other words, we can say the door 'learns' to react to the presence of a large object.
In this context, learning is a stable reconfiguration of a system that ensures that specific inputs produce consistent and distinct outcomes. It does not require consciousness or intention—only that the system be reconfigured in a way that consistently influences its future reactions. The system doesn't 'learn' in the cognitive sense but is altered to behave differently. In that sense, learning is simply a type of reconfiguration of a physical system.
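As a toy illustration of such a persistent reconfiguration, here is a minimal Python sketch (the class and its methods are invented for this example): a door that, after a one-time intervention, keeps producing a new response to the same input.

```python
class Door:
    """A toy extropic system: its response to a push depends on its configuration."""

    def __init__(self):
        self.has_spring = False  # initial configuration
        self.is_open = False

    def attach_spring(self):
        """A one-time external intervention; the change persists afterwards."""
        self.has_spring = True

    def push(self) -> str:
        """The input. The outcome depends on the current configuration."""
        self.is_open = True
        if self.has_spring:
            self.is_open = False  # the spring closes the door again
            return "opens, then closes itself"
        return "stays open"

door = Door()
print(door.push())     # stays open
door.attach_spring()   # the reconfiguration ("learning") happens once
print(door.push())     # opens, then closes itself
print(door.push())     # the new behavior persists without further intervention
```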
An extropic system is a physical system that, when fed by external energy (input), consistently produces distinct outcomes. The term comes from "exo" (Greek ἔξω, éxō, meaning external) and "tropic" (from trophḗ, meaning feeding), reflecting the system’s dependence on external energy to produce its reactions.
Most extropic systems we know are man-made, like staplers, scissors, or taps. But many also occur naturally in physics, chemistry, and biology. For instance, consider an electron in an atom. When light of a specific wavelength hits the electron, it absorbs the energy, jumps to a higher energy level, then returns to its original state, emitting light. This behavior is structured and repeatable—each time the same wavelength of light strikes the electron, the same sequence of events follows.
An extropic system can be created through learning, forming consistent and distinct outcomes without requiring ongoing oversight. It can arise from either a conscious agent or a random event. For example, a falling rock might unintentionally alter a system’s state, creating a lasting modification that shapes how it responds to future inputs. Even an electron in an atom requires an initial configuration; it must be bound to a nucleus to form a stable atom.
Complex Extropic Systems
Simple examples of extropic systems include a bicycle bell, a door lock, a light bulb, or a transistor. These systems are designed to respond distinctively to specific inputs. But extropic systems are not limited to simple physical configurations. They can also be highly complex: a CPU with billions of transistors, a living cell, a brain, a society, or even an entire biological ecosystem all function as extropic systems, maintaining distinct outcomes by processing external inputs.
Complex extropic systems emerge when multiple simpler systems are arranged so that the output of one becomes the input for another. This structure enables the collective system to develop new, emergent properties that don’t exist in its individual parts. As a result, the whole becomes more than just the sum of its components.
Consider a CPU as an example. Each transistor in the system functions as a basic switch, turning "on" or "off" in response to an input. On its own, a transistor is a simple extropic system, producing a simple outcome. However, when billions of transistors are interconnected, their interactions create a system capable of processing complex inputs and producing sophisticated outputs.
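A short Python sketch can illustrate this kind of composition (a simplified model, not how a real CPU is built): each function below is a trivial switch-level system, yet wiring them together yields a capability none of them has on its own, the ability to add bits.

```python
def nand(a: int, b: int) -> int:
    """A single 'switch'-level extropic system."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """Built only from NANDs: the output of one switch feeds the next."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a: int, b: int) -> int:
    return nand(nand(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """An emergent capability of the composition: (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): "addition" emerges from simple switches
```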
While we can map and understand the processes in a CPU, systems can scale to far greater levels of complexity. For instance, consider AI, particularly Large Language Models (LLMs). These systems rely on billions of low-level switches in GPUs and CPUs, yet their emergent behavior goes beyond individual operations. At a higher level, this complexity produces emergent capabilities, such as the ability to 'think' or reason, that cannot be easily reduced to a specific chain of software operations or GPU switches.
In such systems, the interaction of simple components creates outcomes that seem independent of their low-level mechanisms. While the system remains deterministic and follows physical laws, its complexity makes it difficult to predict or trace specific results back to individual components. We don't know exactly how an LLM generates each response, but we know that LLMs can be reconfigured (learn) to produce consistent and distinct outcomes in response to specific inputs, making them extropic systems.
Humans as Extropic Systems
Humans are physical systems that can be reconfigured to react to specific inputs in certain ways. We don’t need to understand every detail of how a brain forms new neural connections to recognize that a child who touches a hot surface will avoid similar surfaces in the future. This behavior is evidence that a system can learn to produce consistent and distinct outcomes when fed specific inputs. Just as we accept that simpler systems can 'learn' to react to inputs, we can extend this definition to humans—humans are extropic systems.
While this may seem reductive, the justification lies in outcomes rather than internal mechanics. Despite vastly different levels of complexity, the defining characteristic of an extropic system, the ability to produce consistent and distinct outcomes when fed specific inputs, remains the same across all of these systems.
Modern compatibilists argue that the complexity of the human brain gives rise to emergent properties like free will and a sense of agency, operating within deterministic constraints. While this view is coherent and might ultimately be true, it cannot be conclusively derived from current knowledge. That’s why I focus on a different question: How does responsibility function if we treat humans as deterministic, extropic systems in the absence of free will? Jumping ahead, my argument does not contradict compatibilism—it reframes responsibility in terms of deterministic processes, regardless of whether free will exists.
Ok, let's move forward—imagine someone throws a ball at my head. When it hits, my brain reacts and forms new neural connections, reconfiguring my system to dodge or catch the ball next time. These processes obey the laws of physics and are deterministic, with small probabilistic variations. This experience modifies me as a system. Much like the programmed supermarket door, I have 'learned' to react to certain environmental inputs.
As a hamartic deterministic system, I did not choose to be hit by the ball or to learn from it—the event and the subsequent change were outside my control. These two events reconfigured me as a system, and my new state will lead me to react differently to similar events in the future.
Nothing new—we’ve always known that humans can learn from life experiences, random events, or other people. For instance, someone can teach another person to drive a car. With enough effort, almost anyone can be taught how to drive. For some, it may take more time and effort; for others, less. But most people can be trained to drive. Of course, some individuals cannot learn due to physical or mental conditions—they may be too young, too old, have a fear of driving, or be blind. Typically, we can identify the conditions that make certain learning impossible. But in general, people who do not face these obstacles can learn how to drive.
By training, a person learns how to react to specific events. These reactions form complex chains of "doors with springs" in the brain, creating consistent, repeatable responses to inputs. Each "door" acts independently, outside our conscious control, but together they produce predictable outcomes. The complexity of the system doesn’t guarantee that every individual will respond in exactly the same way to a given situation, but the overall pattern of behavior is similar. For instance, most drivers will stop if the car in front of them stops. If someone drives too fast and fails to stop, that failure will trigger a learning process—next time, they are less likely to make the same mistake. In this way, even if the system doesn’t learn properly the first time, a bad outcome reinforces learning and contributes to the shaping of consistent reactions.
So basically, we can think of humans as universal "doors with springs," capable of being reconfigured to react to countless inputs—even ones they haven’t been specifically trained for. Every interaction with information or events leads to new change—learning. From a purely physical standpoint, this universality makes no fundamental difference; learning is simply a hamartic deterministic process unfolding in our brains. The key logical conclusion is that humans, like a door with a spring, can be modified to exhibit consistent outcomes in response to specific inputs.
Learning Mechanics
There are multiple ways to modify a physical system to achieve a configuration that produces distinct outcomes in response to specific inputs. The simplest modifications can be done manually by arranging physical objects, possibly even on an atomic scale. As we move to higher-order systems, the modifications become more abstract and complex. For example, programming doesn’t physically rearrange transistors in a CPU but instead defines logical commands that generate specific outputs for given inputs. For even higher-order systems, reconfiguration becomes even more abstract—such as training Large Language Models (LLMs) through supervised learning, fine-tuning, or retraining.
Different methods—such as manual arrangement, programming, or training—aimed at creating or modifying an extropic system can be referred to as learning mechanics. In this sense, the way we reconfigure a system—whether through direct manipulation, logical programming, or abstract training—represents a specific learning mechanic.
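As a rough illustration (a hypothetical supermarket-door rule with invented numbers), the sketch below shows the same extropic configuration reached by two different learning mechanics: a manual arrangement, where the rule is set directly, and a training-style mechanic, where the rule is inferred from examples.

```python
# 1) Manual arrangement: the rule is fixed directly by an external designer.
def door_opens_manual(object_size: float) -> bool:
    return object_size > 50.0  # threshold chosen by hand

# 2) Training: the threshold is inferred from labeled examples (the feedback).
examples = [(30.0, False), (45.0, False), (60.0, True), (80.0, True)]

def fit_threshold(data: list[tuple[float, bool]]) -> float:
    """A crude 'training' step: place the threshold between the two classes."""
    smallest_open = min(size for size, opens in data if opens)
    largest_closed = max(size for size, opens in data if not opens)
    return (smallest_open + largest_closed) / 2

THRESHOLD = fit_threshold(examples)

def door_opens_trained(object_size: float) -> bool:
    return object_size > THRESHOLD

print(door_opens_manual(70.0), door_opens_trained(70.0))  # True True
```

Either way, the end state is the same kind of reconfiguration: a persistent rule that maps specific inputs to consistent outcomes.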
Responsibility of Extropic Systems
Consequences are a learning mechanic. When an action leads to a specific outcome, that outcome serves as feedback, reconfiguring the system to react differently in the future. But humans don’t always need to experience consequences directly to understand them. Through prior learning, we can imagine consequences before they occur.
For instance, if I have a cat, I don’t need to wait for it to starve to death to understand the outcome of neglecting to feed it. My understanding of the world and prior experiences allow me to anticipate the consequences of my inaction. This ability to connect current actions to potential outcomes, even far into the future, is what we call responsibility.
Responsibility, as the internalization of imagined consequences, acts as a learning mechanic. It functions as imaginary feedback embedded within the system, reconfiguring behavior to align with expected outcomes. In the case of feeding the cat, my imagined consequences drive me to act to prevent harmful outcomes. Responsibility ensures that my behavior is shaped not only by direct experience but also by my ability to project and evaluate future scenarios.
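Here is a hedged Python sketch of responsibility as imagined consequences (the internal 'model' and its values are invented for the cat example): the agent never experiences the bad outcome; the predicted outcome acts as feedback before anything happens and reconfigures the choice.

```python
def predicted_outcome(action: str) -> int:
    """An internal model built from prior learning: higher is better."""
    model = {
        "feed the cat": +10,   # imagined: a healthy cat
        "skip feeding": -100,  # imagined: a starving cat
    }
    return model[action]

def choose(actions: list[str]) -> str:
    # Imagined consequences serve as feedback *before* any real consequence occurs.
    return max(actions, key=predicted_outcome)

print(choose(["feed the cat", "skip feeding"]))  # -> "feed the cat"
```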
Responsibility is traditionally understood as accountability. But in the context of an extropic system, a person is not inherently accountable for their actions in a moral or philosophical sense. Instead, responsibility functions as a learning mechanic, connecting actions to possible outcomes and reshaping the system’s behavior over time. Returning to the spring-door analogy, responsibility is like the 'arrangement' of a spring—an input that modifies the system to produce consistent and distinct responses.
So, yes, even if people do not possess free will, they can still be responsible for their actions—not as moral agents, but as systems shaped by learning mechanics to act.
I want to clarify that my argument is not meant to be seen as strictly instrumental. I deliberately set aside the moral aspect from my logic to focus on how moral constructs can be understood from the standpoint of a hamartic deterministic extropic system. Morality functions as a sophisticated learning mechanic that shapes extropic systems, but the question of what constitutes moral or immoral is entirely separate from understanding its mechanism of action. Just as understanding how a car's engine works doesn't tell us where we should drive, recognizing morality as a learning mechanic doesn't tell us what we should consider moral. I do not mean to dismiss the importance of moral questions; they still hold and are essential for shaping how we navigate the world.
My argument is simply that we cannot disregard moral frameworks and practices as irrelevant or redundant within a worldview that denies free will. It would be a mistake to assume that if a person does not have free will, they cannot be held responsible for their actions. On the contrary, a person is still responsible for their actions, even in the absence of free will.
Extensions of Learning Mechanics
As mentioned before, responsibility is just one example of a learning mechanic. Morality itself can be viewed through the same lens. Let’s start with a simpler example:
Punishment is just one form of consequence and, therefore, a learning mechanic. Within an extropic system, it is only one of many tools for learning and behavior modification. We can break punishment into two distinct learning mechanics:
Potential punishment functions as motivation to learn, connecting actions to possible consequences.
Real punishment operates as a feedback loop, reinforcing learning through actual consequences. It’s like a car crash—a real event that forces the system to learn.
I am not here to discuss the merits, drawbacks, or effectiveness of punishment—my only point is that punishment itself is a learning mechanic, one that attempts to arrange the "springs to doors" in the brain.
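Still, a minimal sketch can show the mechanism itself (the numbers are illustrative and not a claim about how brains store preferences): a real consequence acts as negative feedback that reconfigures which action the system produces for the same situation.

```python
# Initial configuration: how strongly the system favors each reaction.
preferences = {"speed through": 5.0, "slow down": 4.0}

def act() -> str:
    """The system's response to the input 'the car ahead stops'."""
    return max(preferences, key=preferences.get)

def punish(action: str, severity: float) -> None:
    """Feedback from an actual consequence; the change persists."""
    preferences[action] -= severity

print(act())                  # "speed through"
punish("speed through", 3.0)  # a real consequence, e.g. a crash or a fine
print(act())                  # "slow down": the system has been reconfigured
```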
But learning mechanics extend beyond rewards and consequences. Concepts of morality and immorality can also be viewed as tools for shaping human behavior. In general, we tend to think of learning as education rather than as the reconfiguration of a system. This is why learning mechanics might be seen as ways to acquire specific skills or knowledge. But if we view learning as the reconfiguration of a human as a physical system, it becomes clear that most social frameworks and institutions are fundamentally designed to facilitate this process.
Social norms, religion, law, morality, and even language are large-scale learning mechanics. These societal and cultural frameworks function as feedback loops that shape our behaviors. Just as a physical system is reconfigured by external inputs, humans are influenced by exposure to social norms, religious doctrines, legal systems, and moral codes.
Morality itself functions as a sophisticated learning mechanic, shaping human behavior through abstract principles rather than immediate consequences. Moral codes, whether rooted in religion, philosophy, or societal norms, provide a framework for connecting actions to long-term or imagined outcomes, such as maintaining social harmony or avoiding guilt.
For example, the moral principle "treat others as you wish to be treated" operates as an internalized rule, guiding behavior even in the absence of external feedback. Morality reconfigures extropic systems by embedding abstract but consistent patterns of behavior, enabling individuals to navigate complex social environments.
Responsibility of Non-Human Extropic Systems
If a person can be responsible for their actions as an extropic system, can the same argument apply to broader learning systems, both simple (like a door with a spring) and complex (like AI and Large Language Models)? What about natural phenomena or animals?
The key argument here is that responsibility functions as a learning mechanic—a way for systems to be reconfigured by imaginary consequences. Humans can be modified through responsibility. But the same does not hold true for systems that lack the capacity to learn (reconfigure) through a connection between actions and potential outcomes.
For example, a door with a spring can only be altered by physically rearranging its components. Responsibility, as a learning mechanic, has no impact on its function.
Similarly, LLMs rely on external reconfiguration—such as supervised learning, fine-tuning, or retraining—to adjust their behavior. Their learning mechanisms are not built to connect current actions to imagined future consequences. They predict statistical outcomes, generating the most probable next token based on training data, but they do not anticipate the long-term consequences of their outputs.
So neither a door nor an LLM can be considered 'responsible' for its actions. Responsibility only applies to systems where a potential outcome serves as a mechanism for consistent and meaningful reconfiguration—a criterion these systems do not meet.
This principle can be easily seen when we look at non-extropic systems. Natural disasters like hurricanes or earthquakes cause immense harm, but we do not hold them 'responsible' for their actions. These phenomena follow physical laws without any intent or capacity to be influenced by potential outcomes. Similarly, a tree that falls and damages property operates without any ability to be 'taught' or influenced, making responsibility irrelevant in such cases.
On the other hand, animals like dogs are different. They are extropic systems that can be partially responsible for their actions. They don’t understand morality, but they learn from consequences. If your dog eats your shoe and you scold it, the next time the dog eats your shoe, it might act guilty—not because it reflects deeply, but because it connects action with potential outcome.
But we also recognize the limits of a dog's ability to learn. While a dog can connect immediate actions to immediate outcomes, it cannot project consequences far into the future. That's why a dog can't be fully responsible, but you can train it to act responsibly in certain situations.
Similarly, with humans, we acknowledge certain conditions where responsibility does not apply. For instance, we do not hold a toddler responsible if they break something, as their capacity for learning and understanding consequences is still developing. In both cases, the system’s limitations determine the scope of responsibility that can reasonably be attributed to them.
Responsibility applies to an extropic system only if the system can learn to connect current actions to potential outcomes in the future.
Let’s consider a self-driving car. If the car hits a pedestrian, is it responsible? The AI system in the car predicts possible future events and adjusts its actions accordingly. So technically the car 'connects current actions to potential outcomes in the future'. But the car does not internalize consequences beyond immediate safety outcomes—it does not anticipate blame, legal repercussions, or moral judgment. The car is programmed to minimize risk, but if an accident occurs, it does not learn from the event in real time. The car is not responsible.
This means the system must be capable of internalizing the connection between current actions and potential outcomes in a meaningful way and reconfiguring itself accordingly. For example:
Humans meet this criterion because moral, social, and experiential feedback consistently help them connect actions to potential outcomes, provided they have the conditions to learn (e.g., no significant cognitive or physical impairments).
Dogs partially meet this criterion because they can learn to associate specific actions with immediate outcomes, but their ability to process complex or long-term connections is limited.
A door with a spring fails to meet this criterion because its learning mechanism does not process responsibility and is limited to external reprogramming.
This rule provides a clear framework for determining when a system can be considered 'responsible' for its actions, distinguishing between systems that can be influenced meaningfully by responsibility and those that cannot.
Responsibility of Multi-Agent Extropic Systems
The framework of responsibility becomes complex when applied to multi-agent extropic systems, such as teams, institutions, or societies. In such systems, each agent (e.g., an individual human) can be responsible for their actions, but the responsibility of the collective is not as straightforward. This raises challenging questions: Can a multi-agent extropic system, as a whole, be held responsible for its actions? How does responsibility operate in a system where individual accountability interacts with collective behavior?
Responsibility in multi-agent systems acts on each agent separately, shaping their individual behavior. These individual responsibilities accumulate to produce the collective behavior of the group. Even when all agents feel a sense of responsibility, the group as a whole may struggle to act cohesively. Coordination problems, conflicting interests, and systemic inertia frequently impede meaningful collective action, highlighting a disconnect between individual and group responsibility. Responsibility, as a learning mechanism that drives individual learning, does not directly translate to the group level.
This creates an apparent paradox: while the multi-agent system as a whole cannot be responsible—since responsibility as a learning tool acts on individuals—each agent is responsible not only for their own actions but also for the group’s behavior, even if they lack direct control over the group’s decisions. This is because an individual’s responsibility for the group shapes their behavior within it.
For example, if a group decides to take harmful action, an individual agent who feels responsible might attempt to intervene or oppose the decision. Responsibility shapes their behavior within the group, potentially influencing the collective outcome. Conversely, if the individual does not feel responsible, their inaction reinforces the group’s trajectory.
Responsibility in multi-agent systems emerges as a distributed property. It acts on each agent separately but cumulatively shapes the system as a whole. While the group itself cannot "learn" or "be responsible" in the same way an individual agent can, the collective behavior is still indirectly shaped by the responsibility-driven actions of its members.
This brings us to the concept of leadership, where a single individual takes responsibility for guiding the actions of the group. At first glance, it may seem that the leader is solely responsible for the group, leaving individual agents free from accountability. However, the presence of a leader does not fundamentally alter the responsibility of individual agents.
In groups with leaders, individual agents often have less direct power to influence the group’s actions. Yet their sense of responsibility for the group continues to shape their behavior. This responsibility manifests in various ways: agents might attempt to influence the leader, advocate for change within the group, or take independent actions that alter the group’s trajectory.
Leadership, then, does not eliminate distributed responsibility; it redistributes power while leaving responsibility intact. Responsibility remains a learning mechanism that drives individual actions, even in systems where power is centralized.
What about responsibility for the past actions of a group? Can an individual be responsible for actions taken by a group before they were born or before they joined it? It seems illogical, as the individual had no direct involvement. How can a system be reconfigured by a learning mechanism if the system itself did not exist at the time? Yet, just as responsibility for a group’s actions can alter the behavior of individual agents within it, responsibility for past actions can lead to reconfiguration in the individual system. It functions as a learning tool that connects past mistakes, injustices, or successes to future expectations and behaviors.
Responsibility for past actions therefore acts as a feedback mechanism that influences present and future behavior. As with responsibility for a group’s actions, it does not apply retroactively to actions that have already occurred; instead, it serves as a mechanism to guide and influence future actions, creating conditions that shape how individuals behave going forward. In this sense, individuals are responsible for the past actions of the group.
Responsibility as a learning mechanism cannot be directly applied to multi-agent extropic systems, but it remains in force for the individual agents within them. Still, it is a bit more complicated than that: multi-agent systems come in different forms, varying in size, hierarchical structure, and internal dynamics, and the interplay of internal responsibility within them makes the question of responsibility in multi-agent systems far more complex and highlights the need for further exploration.
Conclusion
This exploration of extropic systems in a hamartic deterministic framework demonstrates that responsibility is not dependent on the existence of free will. Instead, it emerges naturally as a mechanism within extropic systems—a critical part of how these systems reconfigure (learn). By understanding responsibility as a form of learning mechanics, we can preserve its essential role in shaping consistent and meaningful outcomes, even in a deterministic framework.
Responsibility is not about assigning blame in a moral sense; it is the process by which humans, as physical systems, learn. This learning mechanism ensures that systems—whether individuals, groups, or societies—can change and improve over time. But even blame in a moral sense acts as a learning mechanic.
Anton Gladkoborodov,
January 15, 2025