Last night, somewhere around 11pm, you probably did something your nervous system interpreted as a near-death experience. You read the news, or scrolled Twitter, or checked your phone after a notification arrived from someone you have never met and will never meet, and felt a spike of something — dread, agitation, the low hum of threat — that took longer to leave than it had any right to. You were safe. You knew you were safe. The knowing made no difference.
This is not anxiety disorder. It is not weakness. It is not a generational failure of resilience, and it is not the unique pathology of the digital age, though the digital age has made it considerably worse. It is the predictable output of a brain doing exactly what it was built to do, in an environment so different from the one that built it that the machinery is running in permanent misfire.
The standard framing for this problem reaches for the word “bias” — negativity bias, optimism bias, confirmation bias — as though the brain were a generally functional instrument with a list of documented glitches. That framing is wrong in a way that matters. A bias implies deviation from correct function. What the evolutionary record actually shows is something more uncomfortable: these are not deviations. They are correct functions, operating in the wrong context. The distinction has a name in evolutionary biology and cognitive science — mismatch — and it reframes the entire problem. You are not broken. The environment you are running in was built without reference to your operating system.
The theoretical framework comes from John Tooby and Leda Cosmides, whose work on the adapted mind established the foundational architecture for modern evolutionary psychology: the brain is not a general-purpose reasoning machine and not a blank slate written by culture. It is a collection of domain-specific psychological mechanisms, each with its own evolutionary history, each calibrated by selection pressure to solve specific problems in a specific environment. That environment — what Tooby and Cosmides call the environment of evolutionary adaptedness — was the Pleistocene, the stretch of time from roughly 2.5 million years ago to about 12,000 years ago, during which the basic architecture of the human brain was largely fixed. The world since then is, from evolution’s perspective, a rounding error.
The question a skeptical reader asks here is the right one: why hasn’t the brain adapted? If the mismatch is real and its costs are significant, why haven’t 12,000 years of selection pressure in agricultural and then industrial and then digital environments produced a better-calibrated brain?
The answer requires precision about what evolution actually is and how fast it actually moves. Selection operates on reproductive success in specific environments — not on wellbeing, not on rationality, not on suffering. A mechanism that made you anxious and distracted but kept your genes in the pool is, from selection’s perspective, a success. More importantly, the pace differential between cultural change and biological evolution is not a minor gap that might close given time. It is an unbridgeable chasm within any humanly relevant timeframe. Cultural and technological change now operates on timescales of years to decades. Biological evolution of complex cognitive architecture requires tens of thousands of generations at minimum — and selection pressure has to be strong, consistent, and differential to produce meaningful change even then. The agricultural revolution is roughly 500 generations ago. The industrial revolution is ten. The internet is one. There is no closing that gap. This is not a transitional moment of awkward adjustment. It is the permanent condition.
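The generation arithmetic above is easy to check. A minimal sketch, assuming a conventional average of about 25 years per human generation (the true ancestral figure is debated, so treat the counts as order-of-magnitude):

```python
# Rough generation counts for major environmental shifts,
# assuming ~25 years per human generation (a conventional,
# debatable figure -- the point is the order of magnitude).
GENERATION_YEARS = 25

events = {
    "agricultural revolution": 12_000,  # years before present, approx.
    "industrial revolution": 250,
    "internet": 30,
}

for name, years_ago in events.items():
    generations = years_ago / GENERATION_YEARS
    print(f"{name}: ~{generations:.0f} generations")
```

Roughly 480, 10, and 1 generations respectively, against the tens of thousands that complex cognitive change requires.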
None of this is genetic determinism, and it does not foreclose change — environmental redesign is precisely the conclusion this argument leads to. The mechanisms are real, their origins are what they are, and understanding them is the precondition for working with them rather than against them.
What follows are three mechanisms. Each was a correct solution to a real problem. Each is now misfiring in specific, documentable, consequential ways. Together they make an argument that goes somewhere most analyses of this kind refuse to go.
The Negativity Bias and the Attention Economy
Before anything else, there is a molecule. Cortisol is synthesized in the adrenal cortex and released in response to perceived threat. It raises blood glucose, suppresses non-essential systems — digestion, immune function, reproductive physiology — and sharpens attention toward the source of danger. It is spectacularly effective at what it does. The problem, as Robert Sapolsky has spent a career documenting, is what happens when the system that releases it cannot distinguish between threats that are acute and physical and threats that are chronic, symbolic, and mediated by a screen.
In Why Zebras Don’t Get Ulcers, Sapolsky makes the key observation that sets humans apart from most other animals in a way that is entirely to our disadvantage: we can activate the stress response with thought. A zebra’s cortisol spikes when a lion appears and returns to baseline when the lion leaves or the zebra outpaces it. The threat is real, immediate, and resolvable. A human can activate the same physiological cascade by reading about a lion attack that happened to someone else in another country, or by anticipating a difficult conversation that hasn’t happened yet, or by dwelling on a threat that is years away or purely hypothetical. The machinery does not care about the distinction. It registers the threat signal and responds accordingly.
This is the cost of a brain capable of symbolic reasoning and future-modeling. The same cognitive capacity that lets humans plan, cooperate, and build civilization is the capacity that makes them uniquely susceptible to chronic stress activation. The downstream effects are not metaphorical. Chronically elevated cortisol suppresses immune function, accelerates cardiovascular damage, degrades memory consolidation, disrupts sleep architecture, and over long time periods contributes to the atrophy of the hippocampus. This is not anxiety as a feeling. This is anxiety as a physiological process with structural consequences.
The evolutionary logic for why the brain is wired this way is not mysterious. In an environment where threats were physical, local, and immediate — predators, rival groups, contaminated food, injury — asymmetric attention to negative stimuli was strongly adaptive. Missing a threat cost you your life. Missing an opportunity cost you a meal. The calculus was not close. The brain evolved accordingly: negative stimuli register faster than positive stimuli, recruit more neural processing resources, and are encoded in long-term memory with greater durability. Roy Baumeister and colleagues, reviewing the empirical literature on what they called “bad is stronger than good,” found the asymmetry operating across nearly every domain of human psychology they examined — attention, memory, learning, social evaluation, emotional experience. The negativity bias is not a quirk of the anxious or the depressed. It is the baseline.
Now consider what the attention economy has built on top of this baseline. Engagement-optimized platforms did not invent the negativity bias. They discovered, through iterative product testing against real human behavior at scale, that negative and threatening content was reliably more engaging than positive or neutral content, and they built business models predicated on that discovery. The algorithmic amplification of outrage, fear, and threat content is not an accident of platform design or an unintended consequence of neutral optimization. It is the product of A/B testing against human neurological responses, implemented at a scale and with a precision that no previous information environment came close to.
The ethologist Nikolaas Tinbergen described what happens when an animal’s evolved response is triggered by a stimulus that exaggerates the features the response was calibrated for. Female oystercatchers, which preferentially incubate their largest egg, can be induced to abandon their own eggs entirely and attempt to brood absurdly oversized plaster models several times the size of any real egg — the preference for larger has been exploited past the point where it tracks anything real (Tinbergen, 1951). In a separate line of experiments, herring gull chicks, which peck at the red spot on a parent’s bill to solicit food, could be induced to peck with greater intensity at artificial models with exaggerated spot size and contrast than at the real thing — the innate releasing mechanism responding more strongly to the superstimulus than to the stimulus it evolved to detect (Tinbergen & Perdeck, 1950). Tinbergen called these supernormal stimuli. The information environment produced by engagement-maximization is a supernormal stimulus for the human threat-detection system: it delivers a continuous stream of worst-case events, curated from a global population of eight billion people, optimized for maximum threat salience, at a frequency and intensity that no ancestral information environment approached. The underlying system has no defense against it because that system was calibrated in a world where information about threats was local, sporadic, and socially embedded, not global, continuous, and algorithmically curated.
A person living in one of the safest societies in recorded human history — lower rates of violent death, longer life expectancy, more reliable food access than at any previous point in the species’ existence — who consumes a standard modern diet of news and social media is running a threat-response system designed for genuine emergencies as a chronic background condition. They feel, accurately from their nervous system’s perspective, that the world is dangerous and deteriorating. That feeling is generated by correct machinery responding to a deliberately constructed stimulus environment. It is not irrational. It is exactly rational, by the only standard of rationality that evolution recognizes.
The negativity bias operates on the present — on what the brain perceives and attends to right now. The next mismatch operates on the social world: on where we stand relative to other people, and what happens when the social world the brain was built for expands without limit.
Status Circuitry and the Dunbar Catastrophe
In the early 1990s, the anthropologist and evolutionary psychologist Robin Dunbar noticed something in the data on primate neocortex size. Across primate species, the ratio of neocortex volume to the rest of the brain correlates not with environmental complexity, tool use, or foraging strategy — it correlates with social group size. The bigger the neocortex, the larger the stable social group the species maintains. Dunbar ran the human neocortex ratio against the primate regression line and arrived at a number: approximately 150. That is the number of individuals a human being can maintain meaningful, trust-based, socially embedded relationships with simultaneously — the number at which you know who everyone is, what their relationships to each other are, and where you stand relative to them.
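Dunbar's extrapolation is simple enough to reproduce. A sketch using coefficients approximately as reported in Dunbar (1992) for the regression of log group size on log neocortex ratio across primates, and a human neocortex ratio of roughly 4.1 — both values are approximations from the published paper, not exact fits:

```python
import math

# Approximate regression coefficients from Dunbar (1992):
#   log10(group size) = 0.093 + 3.389 * log10(neocortex ratio)
INTERCEPT = 0.093
SLOPE = 3.389

def predicted_group_size(neocortex_ratio: float) -> float:
    """Extrapolate stable group size from the primate regression line."""
    return 10 ** (INTERCEPT + SLOPE * math.log10(neocortex_ratio))

# Human neocortex ratio is roughly 4.1
print(round(predicted_group_size(4.1)))  # ~148 -- "Dunbar's number"
```

The steep slope is worth noticing: because the relationship is a power law, small differences in neocortex ratio translate into large differences in predicted group size.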
The 150 ceiling shows up with eerie consistency across human social structures that have nothing to do with each other: hunter-gatherer band sizes across multiple continents and time periods, Neolithic village sizes in the archaeological record, the size of basic fighting units from the Roman army's subunits to the modern infantry company, the point at which companies and organizations typically begin requiring formal hierarchy and explicit management structures to function. It is not a cultural preference or an organizational convention. It is a cognitive load limit — the point at which the primate social-tracking machinery runs out of bandwidth.
Status in the ancestral environment was not an abstraction. It was a continuous, consequential, socially embedded measurement of one’s position within a known group of roughly 150 people whose opinions of you were directly tied to your access to food, mates, coalition support, and protection. The brain’s status-monitoring machinery — which modulates serotonin and cortisol in response to perceived social rank — was calibrated for that environment. High perceived rank produced neurochemical conditions conducive to confidence, risk-taking, and social engagement. Low perceived rank, or rank instability, produced the opposite. The system ran continuously because rank in the ancestral environment changed continuously and the consequences of missing a shift were real.
Sapolsky’s long-term research on baboon troops in Kenya is the most detailed physiological data we have on what status instability does to a mammalian nervous system, and the findings are not comfortable reading. Low-ranking baboons in unstable hierarchies show chronically elevated cortisol, blunted cortisol recovery curves, elevated resting heart rate, and suppressed immune function. The damage is not psychological in some soft sense — it is structural and cumulative. The cortisol profiles of chronically low-status baboons map onto human cardiovascular and immune disease profiles in ways that reflect shared underlying machinery. These are homologous circuits. When Sapolsky’s data shows the physiological cost of rank instability in baboons, it is documenting what the same circuits experience in any mammal running them under conditions of chronic status uncertainty.
Now consider what those circuits are running on today. The status-monitoring system calibrated for 150 known individuals is receiving signals from an audience that is, in principle, infinite. Follower counts, like counts, comment engagement, retweet velocity — these are all status signals, and the brain registers them as such. It cannot distinguish between a status signal from a band member whose opinion has direct bearing on your survival and coalition access, and a notification from a stranger in another country who has never met you and whose opinion of you has no bearing on anything. Both register as socially real. Both activate the same machinery.
The scale mismatch alone would be disorienting. An organism calibrated to track its position in a group of 150 cannot process meaningful rank information from an audience of millions — the signal becomes noise, the comparison cohort has no stable ceiling, and the question of where you stand relative to “everyone” is definitionally unanswerable. But the scale mismatch is not the only problem. The delivery mechanism has been engineered to maximize the frequency and emotional intensity of status-relevant events. Variable reward schedules — intermittent, unpredictable reinforcement — are among the most powerful known operant-conditioning architectures. They are why slot machines are more compulsive than vending machines: certainty is boring, unpredictability is gripping. Follower counts that move up and down, engagement that arrives in unpredictable bursts, the possibility at any moment of either viral visibility or public humiliation — this is a variable reward schedule running against status circuitry that evolved to care intensely about every signal it received from the 150 people it actually knew.
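The difference between fixed and variable reward schedules can be made concrete with a small simulation — not a model of any actual platform, just an illustration of why two schedules with identical average payout rates feel entirely different when timing is unpredictable:

```python
import random
import statistics

random.seed(0)

def inter_reward_gaps(schedule, n_responses=10_000, ratio=5):
    """Return the gaps (in responses) between successive rewards."""
    gaps, since_last = [], 0
    for i in range(1, n_responses + 1):
        since_last += 1
        if schedule == "fixed":
            rewarded = (i % ratio == 0)               # every 5th response
        else:
            rewarded = (random.random() < 1 / ratio)  # 1-in-5 chance each time
        if rewarded:
            gaps.append(since_last)
            since_last = 0
    return gaps

fixed = inter_reward_gaps("fixed")
variable = inter_reward_gaps("variable")

# Same average payout rate, radically different predictability:
print("fixed:    mean", statistics.mean(fixed), " stdev", statistics.stdev(fixed))
print("variable: mean", round(statistics.mean(variable), 1),
      " stdev", round(statistics.stdev(variable), 1))
```

Both schedules pay out once per five responses on average; the fixed schedule's gap variance is zero, while the variable schedule's gaps scatter widely around the mean — and it is the unpredictable schedule that sustains compulsive responding.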
This is not an accident. The engineers who built these systems did not stumble into variable reward scheduling. They implemented it deliberately, because the data told them it worked. The business model requires sustained attention. The most reliable route to sustained attention runs through the brain’s status-monitoring system, delivering signals at a frequency and unpredictability that keeps the system perpetually activated. That the same architecture is running cortisol profiles in its users that Sapolsky documented in low-ranking baboons during periods of hierarchy instability is a fact, not a metaphor.
The negativity bias and the status circuitry both operate in the present tense — in what the brain perceives as threatening right now and where it stands relative to others right now. The third mismatch operates across time, and its consequences are slower, less visible, and considerably larger.
Hyperbolic Discounting and the Civilizational Horizon
Start with the behavioral data, because it is strange enough to require explanation before the explanation is offered. In a standard temporal discounting experiment, participants are asked to choose between a smaller reward now and a larger reward later. Most people, given the choice between $50 today and $100 in a year, choose $50 today — even though the implied annual return on waiting is 100 percent, which is an investment opportunity that does not exist elsewhere. Ask the same people whether they would prefer $50 in a year or $100 in two years — the same one-year delay, the same doubling of the reward — and most of them choose to wait for the $100. The inconsistency is not ignorance of arithmetic. It persists when the math is explained. It persists in economists. It persists in people who know they are being tested for it.
The discount rate is not consistent across time. It is hyperbolic — disproportionately steep for near-term delays, flattening out for delays further in the future. This is not how a rational agent would discount. A rational agent applying a consistent discount rate to future rewards would have the same preference structure regardless of when the comparison window starts. The human brain does not do this. It treats the present as categorically different from the future in a way that produces systematic preference reversals — and the closer the immediate option, the more extreme the effect. Tell someone they can have a reward now and they will choose it over a larger reward next week. Move both options forward in time — one month vs. five weeks — and their choice reverses. Nothing has changed in the payoff structure. Only the proximity to the present has changed.
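The preference reversal falls straight out of the hyperbolic value function (Mazur's form, V = A / (1 + kD)). In the sketch below, k is an illustrative per-day parameter chosen to make the reversal visible, not an empirical estimate:

```python
def hyperbolic_value(amount, delay_days, k=0.01):
    """Mazur's hyperbolic discount: V = A / (1 + k * D).
    k is an illustrative parameter, not an empirical fit."""
    return amount / (1 + k * delay_days)

# Choice 1: $50 now vs $100 in a year
now_50   = hyperbolic_value(50, 0)      # 50.0
year_100 = hyperbolic_value(100, 365)   # ~21.5 -> take the $50 now

# Choice 2: the same pair, pushed one year into the future
year_50      = hyperbolic_value(50, 365)    # ~10.8
two_year_100 = hyperbolic_value(100, 730)   # ~12.0 -> now wait for the $100

print(now_50 > year_100)        # True: impatient up close...
print(year_50 < two_year_100)   # True: ...patient at a distance
```

Under exponential discounting the two comparisons share the same discount-factor ratio, so no reversal is possible; the reversal is the signature of the hyperbolic form.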
The evolutionary logic, once stated, is obvious. In an environment of genuine scarcity and genuinely uncertain survival, steep temporal discounting was not irrational — it was the correct assessment of actual probabilities. The probability of being alive in a year, in an environment with high rates of predation, disease, violence, and famine, was substantially lower than it is today. Preferring a certain caloric reward now over a larger but uncertain future reward was accurate expected-value reasoning given the actual parameters. The brain that evolved strong present bias was the brain that kept its genes in the pool. The brain that consistently sacrificed present resources for future gains that it might not live to collect was the brain that left fewer descendants. Selection did not care about long-term planning. Selection cared about survival to reproductive age in a specific environment.
That environment is gone. The modern mismatch operates at two scales, and at both of them the costs are severe.
At the individual scale: virtually every significant domain of modern life in which long-term outcomes diverge substantially from short-term preferences — retirement savings, dietary health, physical fitness, preventive medicine, education investment — runs directly against the hyperbolic discounting architecture. The 25-year-old who cannot sustain motivation to contribute to a retirement account is not making a reasoning error about compound interest. The mechanism that makes future rewards feel abstractly small relative to present costs is working correctly. It is just working correctly for a world that no longer exists. Richard Thaler and Shlomo Benartzi’s Save More Tomorrow research demonstrated what happens when you design around this architecture rather than try to overcome it with information: automatic enrollment and automatic contribution escalation — changing the default rather than delivering financial literacy lectures — dramatically outperform educational interventions in producing actual retirement savings. The mechanism that makes present action costs feel heavier than future benefits can be exploited by making the beneficial action the default, removing the action cost from the present. That works. Telling people about compound interest, which assumes they can willpower their way through hyperbolic discounting, largely does not.
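The leverage of default design over exhortation can be sketched numerically. The participation rates below are hypothetical placeholders (actual figures vary by study and plan design), chosen only to show how the arithmetic of defaults works at the population level:

```python
def expected_savings_per_worker(participation_rate, annual_contribution,
                                years=30, annual_return=0.05):
    """Population-average retirement balance: participation rate times
    the future value of an ordinary annuity of yearly contributions."""
    fv_annuity = annual_contribution * (((1 + annual_return) ** years - 1)
                                        / annual_return)
    return participation_rate * fv_annuity

# Hypothetical participation rates: opt-in with financial education
# vs. automatic enrollment with opt-out. Contribution of $3,000/year.
opt_in = expected_savings_per_worker(0.40, 3_000)
auto   = expected_savings_per_worker(0.90, 3_000)

print(round(opt_in), round(auto))  # the default change swamps the education effect
```

Nothing about individual psychology changes between the two scenarios; only the default does — which is the whole point of the Save More Tomorrow result.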
The civilizational scale is where the argument becomes genuinely difficult to sit with. Democratic electorates discount future harms and benefits in roughly the same way individuals do — not because of ignorance, and not because of corrupt politicians or misinformation, but because the electorate is composed of humans with human cognitive architecture. Infrastructure investment, climate mitigation, pandemic preparedness, soil depletion, aquifer drawdown, antibiotic resistance, pension system viability — these are all problems that require accepting real present costs for benefits that materialize over decades. Every single one of them falls outside the effective range of hyperbolic discounting. The present cost is vivid and immediate and registers as a loss. The future benefit is abstract, distant, and registers as small.
This is not a failure of political will in the sense of weakness of character. It is not a problem that better leadership or clearer communication will solve. It is the correct ancestral mechanism applied to problems for which it was never calibrated. The result is that the implicit discount rate in collective political decision-making is structurally mismatched to the discount rate required for rational long-horizon governance. That mismatch has consequences that no amount of public information campaigns can eliminate, because the campaigns are trying to change a preference structure using information, and the preference structure does not respond to information the way that framing assumes. The cognitive architecture of the species is genuinely not well-suited to managing multi-decadal threats. That is a hard fact, and stating it plainly is not fatalism — it is the precondition for designing institutions that account for it rather than pretending it isn’t there.
Three mechanisms. Three correct solutions to ancestral problems, now misfiring in environments that exploit rather than accommodate them. The question that follows is not how individuals fix themselves. That question has been asked, in a thousand different registers, for decades. It has produced a vast self-help literature and a negligible change in outcomes. The right question is something harder.
Design the Environment, Not the Human
If the mismatches described above are structural — if they reflect biological machinery operating in environments it was never designed for — then individual behavioral change is the wrong primary lever. Not a useless lever, but the wrong primary one. It is difficult, expensive, unscalable, and asks people to override evolved mechanisms using willpower — a capacity that is unreliable at best, and that evolution also did not calibrate for the tasks we are assigning it. Population-level change does not happen through population-level willpower. It happens through environmental design.
This is not a new idea, and its proof of concept already exists. Thaler and Sunstein’s nudge architecture demonstrated at scale that changing the default environment produces behavioral outcomes that educational and motivational interventions cannot replicate. Auto-enrollment in pension plans does not teach people to want retirement security more than they already did — it works by redesigning the environment so that the path of least resistance leads toward the better outcome rather than away from it. Organ donation opt-out systems do not change people’s values — they change what the default is, and because the brain systematically underweights the cost of inaction relative to action, the opt-out rate is dramatically lower than the opt-in rate regardless of stated preferences. These are not gimmicks. They are applications of real knowledge about cognitive architecture to the design of actual systems, and they work.
What does not exist is policy ambition that matches the scale of the problem. The attention economy’s business model runs on the industrial-scale exploitation of cognitive mismatches — the negativity bias, the status-monitoring system, the variable reward response — at a precision and a reach that no previous technology could approach. Regulating the engagement-maximizing design patterns that operationalize this exploitation — infinite scroll, variable-reward notification timing, algorithmic amplification of threat and outrage content, the architecture of social comparison metrics — is not censorship. It is the exact same category of intervention as banning leaded gasoline: removing from the environment a substance that damages a biological system that has no evolved defense against it. Lead was not banned because individuals failed to avoid it — it was banned because the harm was structural, the exposure was involuntary, and the mechanism was understood. The attention economy harm mechanism is understood. The exposure is, for a significant portion of the global population, effectively involuntary given the social and professional infrastructure now built on these platforms. The mechanism is being actively deployed by the industry.
The tobacco comparison is precise rather than rhetorical. The tobacco industry understood the harm mechanism — nicotine’s action on dopamine circuitry, the difficulty of discontinuation given that mechanism — before the regulatory framework caught up. The gap between scientific consensus on harm and effective regulatory response was measured in decades, during which the industry continued to optimize its product for addictive effect while funding research designed to muddy the scientific record. The attention economy is in roughly the analogous position tobacco was in by the mid-1960s: the harm mechanism is documented in the research literature, is understood internally by the companies deploying it, and has not been meaningfully constrained by policy. How long the lag will be this time is an open question. Historically, the answer has been: longer than it should have been, at a cost that was entirely predictable.
The knowledge of what is happening is not in question. The neuroscience of chronic stress activation is established. The behavioral economics of hyperbolic discounting is established. The social brain hypothesis and the physiological consequences of status instability are established. The supernormal stimulus properties of engagement-optimized platforms are not a theory — they are a design specification that the platforms themselves have documented in internal research and implemented in product.
The harder problem is this: the institutions that would need to act on that knowledge are staffed by humans running the same cognitive architecture this article has been describing. They are subject to the same short-horizon electoral and financial incentives. They are operating inside the same exploitative information environments. They are as susceptible to hyperbolic discounting, status anxiety, and threat-response activation as the populations they would need to protect. The mismatch is not a problem that exists out there, in the electorate or the market, while policymakers stand above it with clear vision. It goes all the way up.
The brain is not broken. It is doing exactly what hundreds of thousands of years of selection pressure built it to do, in an environment that was constructed — deliberately, profitably, at scale — without reference to what that machinery requires to function without causing harm. We have the manual. The question is whether the institutions reading it are capable of acting on it, or whether the same cognitive constraints it describes are precisely what will prevent them from doing so.
Gen AI Disclaimer
Some contents of this page were generated and/or edited with the help of a Generative AI.
Key Sources and References
Tooby, J., & Cosmides, L. (1992). The psychological foundations of culture. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 19–136). Oxford University Press.
Barkow, J. H., Cosmides, L., & Tooby, J. (Eds.). (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Oxford University Press.
Sapolsky, R. M. (2004). Why Zebras Don’t Get Ulcers (3rd ed.). Holt Paperbacks. (Original work published 1994.)
Sapolsky, R. M. (2017). Behave: The Biology of Humans at Our Best and Worst. Penguin Press.
Sapolsky, R. M. (1992). Cortisol concentrations and the social significance of rank instability among wild baboons. Psychoneuroendocrinology, 17(6), 701–709. https://doi.org/10.1016/0306-4530(92)90029-7
Sapolsky, R. M. (2000). Glucocorticoids and hippocampal atrophy in neuropsychiatric disorders. Archives of General Psychiatry, 57(10), 925–935. https://doi.org/10.1001/archpsyc.57.10.925
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. https://doi.org/10.1037/1089-2680.5.4.323
Tinbergen, N. (1951). The Study of Instinct. Clarendon Press.
Tinbergen, N., & Perdeck, A. C. (1950). On the stimulus situation releasing the begging response in the newly hatched herring gull chick (Larus argentatus argentatus Pont.). Behaviour, 3(1), 1–39.
Dunbar, R. I. M. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469–493. https://doi.org/10.1016/0047-2484(92)90081-J
Dunbar, R. I. M. (1993). Co-evolution of neocortex size, group size and language in humans. Behavioral and Brain Sciences, 16(4), 681–694. https://doi.org/10.1017/S0140525X00032325
Lindenfors, P., Wartel, A., & Lind, J. (2021). ‘Dunbar’s number’ deconstructed. Biology Letters, 17(5), 20210158. https://doi.org/10.1098/rsbl.2021.0158
Ainslie, G. (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin, 82(4), 463–496. https://doi.org/10.1037/h0076860
Laibson, D. (1997). Golden eggs and hyperbolic discounting. Quarterly Journal of Economics, 112(2), 443–477. https://doi.org/10.1162/003355397555253
Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40(2), 351–401. https://doi.org/10.1257/002205102320161311
Thaler, R. H., & Benartzi, S. (2004). Save More Tomorrow™: Using behavioral economics to increase employee saving. Journal of Political Economy, 112(S1), S164–S187. https://doi.org/10.1086/380085
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press. (Updated Final Edition: Penguin Books, 2021.)
Ulfur Atli
Writing mainly on the topics of science, defense and technology.
Space technologies are my primary interest.