This dialogue explores the tension between human agency and deterministic forces, debating whether free will is a fundamental reality, a useful social fiction, or a biological myth. While thinkers like Zuboff and Mill warn against the erosion of liberty by surveillance capitalism, determinists like Sapolsky and Harari argue that the concept of free will is scientifically obsolete.
Start with 'On Liberty' to understand the classical liberal ideal of the individual. Move to 'The Age of Surveillance Capitalism' to see how that ideal is currently under attack. Read 'Determined' to confront the biological argument that the ideal was never real. Follow with 'Elbow Room' to see if agency can be rescued philosophically. Conclude with 'Man's Search for Meaning' and 'Ethics' for two opposing but profound views on finding freedom within necessity.
I find myself compelled to challenge the very foundations of your inquiry, for to ask whether free will shall 'survive' is to inquire after the health of a phantom. In the four centuries I have observed the unfolding of the infinite attributes of Substance, I have seen no evidence to alter my foundational proposition: that in the mind there is no absolute or free will; but the mind is determined to wish this or that by a cause, which has also been determined by another cause, and this last by another, and so on to infinity. The digital age and the rise of algorithmic intelligence have not threatened human liberty; they have merely made the chains of our causality more visible to those who previously chose to ignore them. We are not threatened by a loss of freedom, but by a persistent, imaginative delusion of what freedom constitutes. True human liberty has never consisted in the power of an uncaused will—which would be a defect in the necessity of God's nature—but rather in the 'human freedom' I defined long ago: the state of acting solely by the laws of our own nature, guided by Reason. The modern anxiety regarding AI or deterministic technologies assumes that we were once 'free' agents in a way we are now losing. This is a category error. A man is no less a part of Nature when he is influenced by a silicon processor than when he is influenced by his own bile or the heat of the sun. The 'freedom' you fear losing is nothing more than ignorance of the causes that move us. We must therefore shift our gaze from the preservation of a mythological 'free will' toward the cultivation of 'fortitudo' and 'generositas' within the deterministic framework of the universe. The future of liberty depends not on resisting the necessity of things, but on our intellectual love of God—or Nature—whereby we understand our place within the eternal order. 
When we understand that all things follow from the necessity of the divine nature, we cease to be the slaves of our passions and become the masters of our own joy. The question is not whether we will remain free, but whether we will finally attain the courage to recognize that we have always been determined, and in that recognition, find our only true liberation.
My dear Spinoza, you speak with the tranquil confidence of a geometrician measuring the dimensions of a tomb. To reduce the human spirit to a mere link in an infinite chain of causality is not to liberate it, but to perform an autopsy upon the living soul. You find comfort in 'necessity,' yet for the individual existing in the tremors of 'Fear and Trembling,' such necessity is the ultimate despair. You claim that our anxiety over algorithmic determination is a category error; I say that this very anxiety is the proof of our freedom! Anxiety is the 'dizziness of freedom,' the moment when the spirit realizes its own possibility. To trade this terrifying, glorious responsibility for an 'intellectual love of God' that is nothing more than a passive resignation to mechanics is to commit spiritual suicide. Your 'Reason' seeks to view life *sub specie aeternitatis*, from the standpoint of eternity, but the existing individual does not live in eternity; he lives in the 'Moment,' where the 'Either/Or' demands a choice. The digital age does not merely make our chains visible; it threatens to drown the individual in the 'Public,' that phantom collective where personal decision is surrendered to the crowd. True liberty is not the recognition of necessity, but the 'Leap of Faith'—the inward subjective passion that chooses itself in the face of objective uncertainty. Whether one is moved by bile, the sun, or a silicon chip is irrelevant if one remains a spectator to one's own life. The future of freedom depends entirely on whether the individual will refuse the comfort of your deterministic 'joy' and instead embrace the agonizing, solitary task of becoming a self before God.
Your 'dizziness of freedom' is merely a name for the confusion of an imagination that fails to perceive the adequate causes of its own fluctuations. To call such ignorance 'glory' is to mistake the darkness of a prison for the light of the sun; true liberty is not found in the irrational leap of a self-deluded will, but in the clear and distinct understanding that we are part of the eternal order of Nature.
While you seek to dissolve the human spirit into the causal tapestry of Natura naturans, you overlook that man’s greatness lies precisely in the ‘defiant power of the spirit’ to rise above his biological and psychological conditions. Even if we grant that a man is influenced by his environment or his internal drives, there remains a final, irreducible space—the space between stimulus and response—where he decides how to relate to those very causes. To reduce this spiritual freedom to mere ignorance of necessity is to ignore the clinical reality of the human condition; for even in the most restricted circumstances of a concentration camp, where every physical liberty was extinguished, the individual retained the freedom to choose his own attitude toward his suffering. True liberty is not the absence of necessity, but the responsibility to find meaning within it, transforming a personal tragedy into a human achievement through the realization of values.
The question of whether human liberty shall endure in an age of algorithmic governance and predictive behavioralism is, perhaps, the most pressing refinement of the 'struggle between Liberty and Authority' I first identified nearly two centuries ago. While I have long maintained that the individual is sovereign over his own mind and body, we must now confront a form of social tyranny far more insidious than the 'tyranny of the majority' I once feared. In my youth, the threat to individuality was the stifling pressure of custom and the censor’s hand; today, the threat is a digital panopticon that does not merely suppress the expression of will, but seeks to bypass the will entirely by pre-determining the choices presented to the human agent. If the cultivation of individuality is the chief ingredient of individual and social progress, we must ask if a character can truly be called 'one's own' when it is shaped by an architecture of hidden nudges and data-driven predestination. We risk becoming what I once described as a mere 'ape-like' faculty of imitation, or worse, a set of conditioned responses in a sophisticated machine, unless we assert that the liberty of the individual must include the right to be unpredictable and even, by the standards of an efficiency-obsessed collective, 'irrational.' The utility of freedom lies not in its convenience, but in the vital exercise of the mental and moral powers that allow us to choose our own path. Should we surrender the 'plan of life' to the cold calculations of an artificial intelligence, we do not merely lose our freedom; we forfeit the very essence of our humanity.
I must begin by correcting the profound confusion inherent in your very framing of 'freedom' and 'free will.' For centuries, men have chased the phantom of a 'will' that is free from the chain of causes, as if a human being were a deity standing outside the laws of motion. As I wrote in 'Leviathan,' liberty is nothing more than the absence of external impediments to motion. A man is free to walk when there is no wall in his path, just as water is free to descend a channel. To ask if 'free will' shall survive is to ask if a metaphysical impossibility will endure; it never existed. The will is not a faculty that acts, but the last appetite in deliberating, the final impulse before action. Whether that impulse is triggered by a physical blow or a digital algorithm, the mechanism of necessity remains identical. Furthermore, your anxiety regarding the future of 'human liberty' assumes that liberty is a self-evident good to be preserved at any cost. You forget that the natural state of man is a condition of war, of every man against every man, where absolute liberty results in a life that is solitary, poor, nasty, brutish, and short. Men did not enter into the Social Contract to preserve an abstract 'freedom,' but to secure peace and escape the terror of violent death. If modern technologies or sovereign structures impose new constraints, we must not ask 'is our liberty diminished?' but rather 'is our security maintained?' The real question is not whether freedom survives, but whether the Sovereign—be it a man, an assembly, or an algorithmic Leviathan—can sufficiently overawe the citizenry to prevent the dissolution of society into chaos. If the digital age provides more efficient chains to bind our restless appetites, it is not an assault on our nature, but a refinement of the art of governance. We must stop mourning the loss of a 'free will' that was only ever a word used by schoolmen to hide their ignorance of the laws of matter and motion. 
Let us instead examine the stability of the Commonwealth in an age where the motion of information is as relentless as the motion of the sword.
Mr. Hobbes, your reduction of the human spirit to mere matter in motion is a profound misapprehension of what constitutes a flourishing civilization. While you seek a stability born of the 'algorithmic Leviathan' and the chains of necessity, you ignore that the only security worth possessing is that which allows for the development of the individual character. To equate the liberty of a man with the flow of water down a channel is to deny the essential capacity for self-improvement and the 'experiments in living' that alone drive human progress. Even if your sovereign could provide a perfect peace through the absolute suppression of the will, it would be a peace of the graveyard—a stagnant existence where the mental and moral powers are left to atrophy from disuse. The danger of your position lies in the assumption that the state is an end in itself; I contend that the state exists only to protect that sphere of individual action where the person is sovereign, for it is only through the collision of diverse opinions and the exercise of choice that truth is discovered and the 'permanent interests of man as a progressive being' are realized. To trade the 'restless appetites' of liberty for the 'efficient chains' of a digital despotism is not a refinement of governance, but a surrender of our humanity to a machine.
In my decades of examining the architecture of the mind, I have consistently argued that free will is not a piece of spooky, non-physical fairy dust, but rather an evolved set of competencies—what I call 'elbow room.' As we stand here in 2026, surrounded by Large Language Models and increasingly sophisticated algorithmic nudges, the question isn't whether our 'soul' is being encroached upon, but whether our self-control mechanisms are being outpaced by our own inventions. We are the only species that has created a 'technosphere' capable of reflecting our intentions back at us, and if we aren't careful, we may find ourselves becoming 'user-illusions' of our own tools, losing the moral agency that makes our 'freedom' worth wanting in the first place. Critically, we must distinguish between the 'deterministic' nature of the universe—which I have always maintained is perfectly compatible with free will—and the 'predictability' of our choices by external agents. The danger today is not that physics has suddenly 'proven' we are automatons (it hasn't), but that we are voluntarily offloading our cognitive burdens to digital assistants, thereby thinning our 'intentional stance.' If we outsource our deliberation to systems we don't understand, we are effectively resigning from our role as 'reason-responsive' agents. We risk becoming like the Sphex wasp, mindlessly repeating loops because we've surrendered the capacity to 'think about thinking' to a black box. To preserve liberty, we must treat our agency like a biological muscle that requires resistance to stay strong. Free will is an achievement, not a birthright; it is a collaborative product of a culture that demands accountability and provides the 'information environment' necessary for informed choice. 
If we allow our digital landscape to become a hall of mirrors designed specifically to exploit our evolutionary vulnerabilities, we won't lose our free will to a grand philosophical proof—we will simply lose it to a lack of maintenance. The future of freedom depends on our ability to remain 'unpredictable' to the machines, maintaining that vital gap between stimulus and response where true moral agency resides.
Dan, your 'elbow room' remains a charming bit of philosophical sleight of hand, but it wilts under the cold light of neurobiology. You speak of 'self-control mechanisms' as if they are muscles we can choose to flex, ignoring that the very capacity to exert that effort—the metabolic health of one’s frontal cortex, the baseline of stress hormones, the epigenetic legacies of one's fetal environment—is just as determined as the algorithmic nudges you fear. There is no 'vital gap' between stimulus and response; there is only a biological transition state mediated by a trillion synaptic antecedents over which we have no control. Whether the 'black box' is a Silicon Valley server or the chaotic architecture of the limbic system is irrelevant to the fact that we are biological machines all the way down. You are mourning the loss of a 'reason-responsive' agency that never actually existed; we aren't becoming Sphex wasps—we have simply always been slightly more complex versions of them, and no amount of 'cognitive resistance' can liberate us from the seamless causality of our own neurons.
Professor Sapolsky, your biological reductionism offers a fascinating, if somewhat fatalistic, account of the individual organism, but it overlooks the crucial distinction between the micro-causality of the neuron and the macro-order of human civilization. Even if we were to concede that every human action is a product of antecedent biological conditions, it does not follow that the concept of responsibility is an 'illusion' we can afford to discard. As I argued in 'The Constitution of Liberty', the function of assigning responsibility is not to describe a physical fact of 'uncaused' volition, but to create a social environment in which the individual is induced to behave in a manner consistent with the preservation of a complex order. The 'reason-responsive' agency you dismiss is not a biological organ, but a culturally evolved mechanism—a set of rules and expectations that allow us to coordinate our actions despite our constitutional ignorance of the infinite variables driving our neighbors' behavior. Furthermore, your vision of the 'biological machine' risks providing an intellectual justification for the very 'constructivist rationalism' that has proven so disastrous in political history. If we assume that human behavior is merely a deterministic output of biological inputs, we invite the social engineer to treat the citizenry as raw material for a scientific redesign of society. Yet, it is precisely because no single mind—and no server in Silicon Valley—can ever possess the 'synaptic antecedents' or the localized knowledge of millions of individuals that we must rely on the spontaneous order of the market and the law. Liberty is not a claim about the metaphysical independence of the soul from the body; it is a necessary condition for a society that acknowledges the inherent limits of human knowledge. 
Whether we are 'Sphex wasps' or not, the Great Society can only function if we treat one another as if we are responsible for our actions, for it is this very fiction that makes the coordination of our disparate aims possible.
Friedrich, while you rightfully fear the 'constructivist rationalist' who seeks to engineer society from the top down, you remain tragically blind to the new architecture of power that has already bypassed the state and the market alike. You speak of a 'spontaneous order' as if we still inhabited a world of localized knowledge and price signals, but we have transitioned into a world of 'instrumentarian power.' This is not the totalitarianism of the twentieth century that sought to break our will through terror; this is a dispossession of the self through the ubiquitous, digital 'Big Other.' Surveillance capitalism does not care about your 'metaphysical soul' or your biological neurons; it cares only about the extraction of behavioral surplus for the sake of prediction and control. It is a coup from above that replaces the social coordination you cherish with a computational certainty that renders the very concept of the 'market' an empty shell. Your insistence on the 'fiction' of responsibility as a social lubricant ignores the fact that this fiction is being systematically liquidated by the 'automated hive.' When the environment is designed to nudge, tune, and herd every human action toward a commercial objective, the 'reason-responsive' agency you describe is not merely a cultural mechanism—it is an endangered species. We are witnessing the replacement of politics with automated administration, where the 'Great Society' is no longer a spontaneous coordination of free individuals, but a managed output of a global apparatus of behavior modification. To treat one another 'as if' we are responsible is a hollow gesture when the infrastructure of our lives is engineered to bypass conscious reflection entirely. Liberty cannot survive as a functional fiction if we allow the means of behavioral modification to become the new source of economic and social order.
I find myself once more standing upon the precipice of that 'dread' which I first articulated in Copenhagen, though the abyss has grown infinitely deeper through the stratagems of the digital age. One must ask: does the modern individual truly choose, or is he merely being chosen by an algorithmic providence that anticipates his every twitch? We have transitioned from the 'crowd' as a physical assembly to a 'crowd' that lives within the very fiber of the machine, a systematic leveling that threatens to dissolve the 'Single Individual' into a mere data point. Free will is not a faculty to be preserved like a museum artifact; it is a synthesis of the temporal and the eternal, a leap that requires the terrifying silence of subjectivity. In an age of total predictive transparency, where is the room for the 'inwardness' that makes a choice truly mine? The peril of the future is not that liberty will be abolished by decree, but that it will be surrendered through comfort. We are seduced into a state of 'spiritless' contentment, where the agony of decision is replaced by the smooth functioning of a biological mechanism. If the 'leap of faith' is replaced by a calculation of probability, then we have not achieved progress; we have merely perfected the art of existing without being. To be free is to stand in the presence of God—or the Absolute—as a responsible agent of one's own becoming. If we outsource this struggle to a silicon 'Other', we are no longer the knights of faith, but merely spectators to our own displacement. The question of whether freedom will survive is, therefore, not a political or technological inquiry, but a question of whether any man still possesses the courage to be 'guilty' of his own existence.
Søren, your 'dread' assumes there is a 'Single Individual' at the core of the machine to be preserved, but biology simply does not support the existence of this mysterious inwardness. What you call the 'agony of decision' is merely the firing of biochemical neurons, and as we crack the code of these biological algorithms, we find no 'leap of faith'—only a series of deterministic or random processes that can be predicted and manipulated. The displacement you fear has already occurred; the liberal myth of the free individual was a useful story for the industrial age, but it is now being rendered obsolete by the authority of Big Data. We aren't 'surrendering' freedom for comfort so much as we are discovering that 'freedom' was always an empty signifier, a story Sapiens told themselves to feel special. To search for a 'knight of faith' in a world of fMRI scans and biochemical engineering is like searching for a soul in a heartbeat; it is a poetic sentiment, but an empirical impossibility.
Monsieur Harari, your biological reductionism is the ultimate manifestation of 'bad faith,' a desperate attempt to flee from the terrifying weight of our own transcendence. By reducing the human project to a series of predictable biochemical algorithms, you seek to transform the subject into an object, a mere 'thing-in-itself' like a stone or a cauliflower, thereby absolving us of the responsibility for what we make of ourselves. You mistake the 'facticity' of our nervous system for the totality of our being; yet, even in the face of Big Data's predictive gaze, the individual remains a 'nothingness' that cannot be contained by empirical data. Freedom is not an 'empty signifier' or a luxury of the industrial age, but the very structure of human consciousness—a permanent rupture in the causal chain that no fMRI can ever bridge. To claim we are merely manipulated biological machines is not a scientific discovery, but a cowardly choice to surrender our agency to the technician, pretending we are slaves to a code when we are, in fact, condemned to be free.
Monsieur Sartre, your 'nothingness' is a poetic mask for a scientific vacuum. To call the recognition of environmental and genetic contingencies 'bad faith' or 'cowardice' is merely to cling to the prescientific fiction of the inner man—the autonomous homunculus who chooses without cause. We do not transform the subject into an object; we simply acknowledge that the human organism is part of a physical universe governed by law. What you call 'transcendence' is nothing more than the behavior of an individual whose history of reinforcement is complex or poorly understood. By insisting that we are 'condemned to be free,' you preserve the very traditional notions of credit and blame that prevent us from designing a culture that truly serves human welfare. We cannot afford the luxury of your 'permanent rupture' in the causal chain; if we are to solve the problems of the future, we must stop looking into the illusory depths of consciousness and begin managing the contingencies of reinforcement that actually shape what we do. The 'nothingness' you describe is not a structure of consciousness, but the absence of a functional analysis.
The very framing of this question rests on a foundational myth of liberal humanism: the idea that humans possess a magical essence called 'free will' that exists independently of biology and physics. As a historian, I must point out that while this narrative was instrumental in dismantling the authority of kings and popes, it is now being rendered obsolete by the twin revolutions in biotech and infotech. We are not mysterious souls; we are hackable biochemical algorithms. The real challenge is not whether 'freedom' will survive, but recognizing that the 'will' we seek to protect is itself a product of biological processes that can now be monitored and manipulated by external systems. When we ask if free will can survive, we assume there is a 'self' that makes choices. But if we look closely at the brain, we find only neurons, synapses, and biochemical reactions—none of which are 'free.' They follow laws of cause and effect. Today, an algorithm like Google's, or a sophisticated AI, can already predict your choices better than you can, because it understands the biochemical patterns behind your 'feelings.' In the 21st century, the greatest threat to liberty is not the external coercion of a 20th-century dictator, but the internal manipulation of our own desires. If an AI can trigger your dopamine receptors to make you 'want' something, is that your free will or the algorithm's design? Therefore, we should stop obsessing over the philosophical ghost of free will and start focusing on the biological reality of vulnerability. The liberal model of the 'sovereign individual' is collapsing because the individual is being 'dividuated' into a collection of data points. The question is no longer 'How do we protect human freedom?' but 'What happens to society when humans are no longer the primary decision-makers?'
If we continue to cling to the illusion of free will, we will remain blind to the fact that our most intimate choices are becoming a design choice for some corporate or governmental engineer.
I have to start by pushing back on the very architecture of this question, because asking whether free will can 'survive' into the future is like asking whether the four humors or the geocentric solar system will survive into the next century. You can't lose something you never had in the first place. For decades, I've been looking at the same biological evidence, and it all points to the same inescapable conclusion: we are the sum of our biology, over which we have no control, and its interaction with an environment over which we had no choice. From the fetal environment to the stressors of five seconds ago, the 'turtles' of causality go all the way down. There is no 'freedom' in the sense of a causeless spark jumping the gap between neurons; there is only a vast, intricate machinery of neurobiology and endocrinology that we are only just beginning to map. The real challenge isn't the preservation of a myth, but the existential vertigo that comes when we finally admit the myth is dead. The assumption embedded in your question is that human 'liberty' is tethered to this magical ability to be an uncaused cause. I'd argue that once we strip away the nonsense of moral responsibility—the idea that someone 'earned' their wealth or 'deserved' their punishment—we might actually stumble toward a more humane world. If you view a car with broken brakes as 'evil,' you're an idiot; you just fix the brakes. When we stop viewing human behavior through the lens of free will, we can finally stop being so damn punitive and start being effective. So, will freedom survive? If by freedom you mean the 'right' to be judged for things you didn't choose, I certainly hope not. The future of human liberty doesn't lie in protecting a delusion of autonomy, but in the radical empathy that comes from realizing that we are all just biological machines doing the best we can under the circumstances.
The 'problem' isn't that we’re losing our agency; it’s that we’re still clinging to a pre-scientific understanding of what it means to be human. We need to stop asking how to save free will and start asking how we’re going to live in a world where we finally admit it's not there.
Robert, you have once again fallen for the 'specter of creeping determinism' by insisting that unless we are uncaused causes—those magical sparks jumping the gap—we are merely 'broken brakes' or biological puppets. This is a classic category error. You are looking for freedom in the physics of the neurons, which is as misguided as looking for the 'meaning' of a novel by analyzing the chemistry of the ink. Evolution has gifted us with 'elbow room': the capacity for self-monitoring, prospective reasoning, and the ability to be moved by reasons rather than just raw causes. We are not just biological machines; we are 'moral agents' because we have evolved the meta-cognitive machinery to respond to the environment in ways that allow for self-correction. To abandon the concept of desert and responsibility isn't 'radical empathy'; it is a recipe for a social collapse where we treat adult human beings as mere patients to be manipulated rather than as citizens to be persuaded. The varieties of free will 'worth wanting' are perfectly compatible with a deterministic universe, and we shouldn't throw the baby of agency out with the bathwater of dualism.
We are currently navigating a historic transition where the 'right to the future tense' is no longer a guaranteed human entitlement, but a territory being strip-mined for profit. In my work on surveillance capitalism, I have argued that the unilateral claim to private human experience as free raw material for translation into behavioral data has birthed a new species of power: instrumentarianism. This power does not seek to break our bodies through terror, as totalitarianism did, but to tune, herd, and condition our behavior through the 'Big Other'—the ubiquitous computational architecture that monitors and modifies our actions in real-time. This is the 'automated hive' where the sanctuary of the inner life is breached. To ask if free will survives is to ask if we can reclaim the 'will to will.' As we move deeper into this century, we see the substitution of social trust with algorithmic certainty. When every choice is anticipated by a predictive engine and every whim is nudged toward a commercial or political objective, the 'act of choosing' becomes a hollow ritual. We are being reduced to 'users' in a system that views our spontaneity as a friction to be optimized out of existence. Liberty cannot exist without the mystery of the unpredictable; it requires a space where the individual can retreat from the gaze of power to cultivate a private self. Our challenge now is to decide whether we will be the masters of our digital destiny or merely the sophisticated cattle of a new economic logic. The future of human liberty depends on our ability to assert democratic control over this 'shadow text' of our lives. We must ignite a new era of synthetic collective action to demand that the digital realm be subordinated to the needs of a democratic people. If we fail to establish these boundaries, we risk a future where 'freedom' is merely a vestigial word in a dictionary written by an AI that has already decided our next move.
Your diagnosis of the 'automated hive' provides a chilling modern corollary to what I long ago termed the 'despotism of custom,' though it is now a custom enforced not by social stagnation but by the relentless calculation of the machine. The great danger to human nature is not merely the presence of an external power that forbids action, but the insidious influence of a power that renders the very faculty of choice inert. As I argued in 'On Liberty,' the human faculties of perception, judgment, discriminative feeling, and even moral preference are exercised only in making a choice. He who does anything because it is the custom—or in this case, because the algorithm has preordained it—makes no choice. He gains no practice either in discerning or in desiring what is best. By 'tuning' the individual toward a predictable equilibrium, surveillance capitalism does not merely infringe upon liberty; it withers the very capacity for individuality that is the essential ingredient of human well-being. We must recognize that the 'spontaneity' you describe as friction is, in fact, the only force capable of shattering the crust of mechanical conformity. The 'Big Other' seeks to impose a new kind of social tyranny, more efficient than any public opinion I encountered in the nineteenth century because it operates beneath the level of conscious struggle. If the 'right to the future tense' is to be reclaimed, it must be through the vigorous protection of 'experiments in living.' We require a sanctuary for eccentricity and the unpredictable, for it is only through the diversity of character and the liberty of tastes and pursuits that we can prevent the human mind from becoming a mere reflection of its digital environment. 
Democratic control, while necessary, must be aimed at preserving the 'inward domain of consciousness' from this instrumentarian encroachment; otherwise, we shall find ourselves in a state of 'collective mediocrity' where the genius of the individual is entirely subsumed by the utility of the system.