The Distributed Chain

Where the Heirs of Bernays, Lippmann, and Creel Are Working Right Now

The Question of Succession

A companion paper to this one, The Chain of Custody, traced the documented lineage of psychological manipulation techniques in American media from Joseph Pulitzer’s circulation wars through Edward Bernays’s consent engineering, Ernest Dichter’s motivational research, B.J. Fogg’s Persuasive Technology Lab at Stanford, and into the algorithmic optimization engines that now curate every feed on every screen. The chain had names. The handoffs had dates. The target—the human amygdala—never changed.

That paper ended in the present tense, with the observation that the chain continues. This paper asks the next question: who is holding it? If Bernays was the operational architect of mass persuasion, if Lippmann was its intellectual theorist, and if George Creel and the Committee on Public Information represented its institutionalization within the state—then who occupies those roles now? Where are they? What are they publishing? Who do they work for? And what are they building?

The answer is more unsettling than a single name. The chain did not produce a successor. It branched. The roles that Bernays, Lippmann, and Creel performed as individuals have been distributed across institutions, industries, and algorithms. The modern apparatus of mass persuasion does not have a face. It has an org chart—and the org chart spans governments, universities, platforms, consulting firms, and venture capital portfolios. What follows is a field guide to the heirs.

The Heirs of Creel: The Nudge State

George Creel’s Committee on Public Information was a wartime instrument: a federal propaganda bureau with seventy-five thousand volunteer speakers, a poster division, a film division, and a daily newspaper for editors. It ran for two years and was dismantled after the armistice. The modern equivalent is permanent, operates in peacetime, and exists in over four hundred government units worldwide.

The intellectual architect of this infrastructure is Cass Sunstein, the Robert Walmsley University Professor at Harvard Law School and, by citation count, the most referenced legal scholar in the United States. In 2008, Sunstein and University of Chicago economist Richard Thaler published Nudge: Improving Decisions About Health, Wealth, and Happiness, which argued that human decision-making is systematically irrational and that institutions can and should redesign “choice architectures”—the environments in which people make decisions—to steer behavior toward outcomes deemed beneficial by the architects. The book sold over two million copies and gave rise to a global movement.

Sunstein did not merely theorize. In 2009, President Barack Obama appointed him Administrator of the Office of Information and Regulatory Affairs, the executive branch’s regulatory review body—a position that gave him direct influence over the design of every federal regulation, form, and communication that touches American citizens. He later served as Senior Counselor to the Secretary of Homeland Security under President Biden and received the Distinguished Public Service Medal, the Department’s highest civilian honor, in 2024. His most recent book, Look Again: The Power of Noticing What Was Always There, co-authored with Tali Sharot, extends the behavioral framework into the psychology of habituation and attention.

Read Sunstein’s language carefully and you will hear Bernays rewritten for the academy. Bernays called it “the engineering of consent.” Sunstein calls it “choice architecture” and “libertarian paternalism.” The semantic distance is considerable. The functional distance is not. Both men argue that the public is systematically irrational, that the irrational public must be guided by experts, and that the guidance should be designed to feel like freedom. 

Bernays was more honest about the power dynamics. He wrote openly about invisible government and the manipulation of organized habits. Sunstein wraps the same project in the language of welfare optimization and consumer protection. The CPI’s Four Minute Men delivered scripted emotional appeals in movie theaters. Sunstein’s nudge units redesign the default options on government enrollment forms so that citizens are automatically opted into programs they might not have chosen if asked. The mechanism is gentler. The presumption is identical: the architect knows better than the citizen what the citizen should want.

In the United Kingdom, David Halpern has directed the Behavioural Insights Team—the original “Nudge Unit”—since its founding at the British Cabinet Office in 2010 under Prime Minister David Cameron. The unit has since been partially privatized and advises governments on multiple continents. By 2024, the OECD’s Behavioural Insights Network coordinated over two hundred such units globally. Canada, Australia, Germany, Japan, the World Bank, the United Nations, and the European Commission all operate behavioral intervention programs. The CPI was an emergency instrument. The nudge state is permanent infrastructure.

The most consequential application is in public health communication. The United States Centers for Disease Control and Prevention devotes a substantial portion of its discretionary budget to behavioral-science-driven messaging campaigns. These are labeled not as persuasion programs but as “Strategic Communication and Stakeholder Engagement” or “Vaccine Confidence Initiatives.” The language is clinical. The mechanism is emotional activation calibrated by behavioral research—fear appeals, social norm framing, default-option design, and the strategic deployment of trusted messengers. Whether this constitutes responsible public health practice or state-sponsored behavioral manipulation depends on whether you trust the architects more than Lippmann trusted the public. The CPI sold Liberty Bonds. The CDC sells compliance. The difference is the product. The technique is Creel’s.

The Heirs of Bernays: The Playbook Writers

Bernays published his methods. That was his most consequential act—more consequential than any individual campaign—because it meant the techniques could be learned, replicated, and scaled by anyone who read the books. Crystallizing Public Opinion in 1923. Propaganda in 1928. The playbook was open-source before the term existed.

The first heir to note is the man who trained the others. B.J. Fogg, who founded Stanford’s Persuasive Technology Lab in 1998 and whose students went on to co-found Instagram, launch the Center for Humane Technology, and staff the growth teams at every major platform, is still at Stanford. But the lab has been renamed. It is now the Behavior Design Lab. The word “persuasive” has been removed from the title. The lab’s stated mission has shifted from studying how computers change what people think and do to helping people create positive habits in their own lives. 

Fogg’s 2020 bestseller, Tiny Habits, is a self-help book about building small behavioral changes—a far cry from the 2003 textbook that taught a generation of engineers how to design interfaces that exploit psychological triggers. The lab’s website now encourages anyone studying persuasive technologies to review its early contributions on ethics. The pivot is significant. Bernays renamed “propaganda” as “public relations” when the first term acquired a negative connotation after the First World War. Fogg renamed “persuasive technology” as “behavior design” as the first term acquired a negative connotation after The Social Dilemma. The technique persists under a new label. The graduates are already in the field.

The modern Bernays is Nir Eyal, and the parallel is almost too precise. Eyal holds an MBA from Stanford, taught at the Stanford Graduate School of Business and the Hasso Plattner Institute of Design, worked in the video gaming and advertising industries, and in 2014 published Hooked: How to Build Habit-Forming Products—Silicon Valley’s operational manual for engineering compulsive user behavior. The book lays out what Eyal calls the “Hook Model”: a four-phase cycle of trigger, action, variable reward, and investment, designed to create habits that bring users back without the company needing to spend on advertising or aggressive messaging. The book has sold over a million copies in more than thirty languages. Eyal consults for Fortune 500 companies and invests in habit-forming startups including Eventbrite, Canva, and Kahoot.
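The four-phase cycle can be sketched as a loop, purely for illustration. Everything below is hypothetical—the function names, the reward values, and the simulated user state are invented for this sketch and are not drawn from Eyal’s book or any real product.

```python
import random

def hook_cycle(user_state, rewards=("new likes", "new comment", "nothing")):
    """One pass through the trigger -> action -> variable reward -> investment cycle."""
    # 1. Trigger: an external cue (a notification) at first; once the user has
    #    invested, an internal cue (boredom, anxiety) suffices.
    trigger = "notification" if user_state["investment"] == 0 else "internal cue"
    # 2. Action: the simplest behavior performed in anticipation of a reward.
    user_state["opens"] += 1
    # 3. Variable reward: the unpredictable payoff, the slot-machine element.
    reward = random.choice(rewards)
    # 4. Investment: the user puts something in (a post, a follow), which loads
    #    the next trigger and raises the cost of leaving.
    if reward != "nothing":
        user_state["investment"] += 1
    return trigger, reward

user = {"opens": 0, "investment": 0}
for _ in range(5):
    hook_cycle(user)
print(user["opens"])  # the simulated user has opened the app 5 times
```

The point of the sketch is the shape of the loop: each cycle ends by setting up the next one, which is why the model produces habits rather than single transactions.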

The candor is Bernaysian. Eyal does not disguise what the Hook Model does. He describes it as exploiting “a vulnerability in human psychology”—a phrase that Facebook’s founding president, Sean Parker, would later use to describe Facebook itself. Like Bernays, Eyal presents the techniques as morally neutral instruments. Like Bernays, he offers an ethics chapter that reads as an appendix rather than a constraint. And like Bernays, he then published a second book arguing against the very behavior his first book taught people to engineer. Indistractable: How to Control Your Attention and Choose Your Life appeared in 2019—a guide to resisting the addictive products that Hooked taught people to build. Bernays sold the cigarettes and then consulted on public health campaigns. The pattern persists.

His new book, Beyond Belief, scheduled for March 2026, covers how beliefs are formed, held, and changed. The trajectory from engineering habits to engineering beliefs is the trajectory from Bernays to Lippmann, collapsed into a single author’s bibliography.

Robert Cialdini, professor emeritus of psychology at Arizona State University, occupies a parallel position. His 1984 book Influence: The Psychology of Persuasion identified six principles of compliance—reciprocity, commitment and consistency, social proof, authority, liking, and scarcity—and a seventh, unity, was added in a 2021 revision. These principles are now embedded in the engagement architecture of every major platform, taught in every marketing curriculum, and deployed by every growth team in Silicon Valley. Cialdini is the Dichter of the digital age: the man who translated the psychology of persuasion into a checklist that any practitioner could apply. The checklist is more rigorous than Dichter’s depth interviews, more replicable, and infinitely more scalable. If Eyal is the modern Bernays, Cialdini is the modern Dichter—the researcher who provided the empirical toolkit that the operators deploy.

The Heirs of Bernays: The Platform Confessors

The most damning evidence for the chain’s continuity comes not from critics but from the builders themselves. In November 2017, within weeks of each other, two former Facebook executives delivered public confessions that read like depositions.

Sean Parker, Facebook’s founding president, told an Axios event that the platform was designed from the beginning to answer a single question: “How do we consume as much of your time and conscious attention as possible?” He described the like-and-comment system as a “social-validation feedback loop” that delivers intermittent dopamine rewards—the same variable reinforcement schedule that makes slot machines addictive. Then he said the sentence that belongs in the permanent record of the chain: “The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.”

Days later, Chamath Palihapitiya, Facebook’s former Vice President of User Growth from 2007 to 2011, told a Stanford Graduate School of Business audience that he felt “tremendous guilt” for his role. “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he said. “No civil discourse, no cooperation; misinformation, mistruth.” He revealed that he does not use social media and does not allow his children to use it. He told the Stanford students in the room—future Silicon Valley operators, many of them—that they were “being programmed” and that their Stanford credentials made them more susceptible, not less: “Don’t think, ‘Oh yeah, not me, I’m at Stanford.’ You’re probably the most likely to fall for it.”

These are not critics speaking from outside the system. These are the Bernays figures of the twenty-first century, recanting. Parker designed the dopamine trap. Palihapitiya scaled it globally. Both walked away. Both described the mechanism in clinical terms—variable reinforcement, dopamine feedback loops, exploitation of psychological vulnerability—that Bernays would have recognized instantly, even if the vocabulary had changed. And both admitted the critical fact that separates the modern chain from the historical one: they knew. Bernays could plausibly claim that the long-term consequences of his techniques were unforeseen. Parker and Palihapitiya cannot. They did it, in Parker’s words, “consciously.”

The people who did not recant—who are still building—are harder to name, because they are inside the platforms. The growth engineering teams at Meta, TikTok, YouTube, and X are the institutional successors to Bernays. They do not publish books. They ship code. The engagement-optimization algorithms they build are the automated Bernays: systems that discover, test, and deploy psychological manipulation at a speed no human propagandist could match. They have no public faces. They have quarterly metrics.

The Heirs of Bernays: The Political Operators

Cambridge Analytica collapsed in 2018 after investigations in multiple countries revealed that it had harvested data from eighty-seven million Facebook profiles to target psychologically tailored political advertising during the 2016 U.S. presidential election and the Brexit referendum. Its CEO, Alexander Nix, was suspended after undercover footage captured him discussing the use of honey traps and fake news campaigns. The British Parliamentary investigation concluded that the company’s relentless targeting played “to the fears and the prejudices of people, in order to alter their voting plans” and constituted a “democratic crisis.”

Cambridge Analytica is gone. Its infrastructure is not. The Custom Audiences system at Meta—the exact tool Cambridge Analytica used to upload voter files and match them to platform user profiles—still functions in 2026. The platform’s response to the scandal was not to dismantle the targeting architecture but to restrict third-party API access while keeping the matching algorithm intact for advertisers who use Meta’s own interface. The architecture was not removed. It was internalized.

The next generation of political operators is not a single firm. It is an ecosystem of AI-driven microtargeting capabilities embedded in the platforms themselves. According to an October 2025 investigation by the American Prospect, campaigns preparing for the 2026 U.S. midterm elections are using large language models to generate thousands of unique, personalized political advertisements that are automatically tested and optimized by algorithmic feedback loops. 

A 2024 study published in PNAS confirmed that AI-generated microtargeted political messages can be persuasive, and that targeting by even a single demographic variable is sufficient to yield a measurable advantage over generic messaging. A companion PNAS study noted that computer-based personality judgments derived from as few as three hundred Facebook likes can be more accurate than those made by a person’s own spouse. The bottleneck that limited Cambridge Analytica—human strategists designing and interpreting each campaign—has been removed. The 2026 midterms will be the first major American election in which AI-generated persuasion operates at scale without human editorial intervention at the message level.

The implications extend beyond any single election cycle. The platforms have every financial incentive to make the targeting more effective, not less. More effective targeting means campaigns spend more on advertising. More advertising spending means more platform revenue. The system is self-reinforcing: the better the manipulation works, the more money flows to the manipulators, and the more money they have to invest in making the manipulation better. Cambridge Analytica was a startup with limited capital operating on borrowed API access. The 2026 operations run on the platforms’ own infrastructure, with the platforms’ own optimization engines, funded by the campaigns’ own budgets. The middleman has been eliminated. The platform is the propagandist.

Behind the platforms, Palantir Technologies—the data analytics firm co-founded by Peter Thiel—connects to the chain through government contracts, proximity to the Cambridge Analytica network, and its capacity to integrate disparate data sources into behavioral models. In the United Kingdom, Faculty AI, formerly known as ASI Data Science, reportedly employed several former Cambridge Analytica staff members and provided data infrastructure for the Vote Leave campaign’s targeting operation. The personnel circulate between firms. The techniques transmit. The chain does not require a single company. It requires a labor market of people who know how to build the systems.

The Heirs of Lippmann: The Theorists of Manufactured Reality

Walter Lippmann’s contribution was not operational but conceptual: the argument that the public operates on “pictures in their heads”—manufactured representations that bear only approximate relationships to the world they describe. Lippmann understood that the press does not mirror reality. It constructs the mental environment in which citizens form opinions. The modern Lippmanns are the scholars who have extended this insight into the algorithmic age, mapping how reality is now constructed not by editors but by engagement-optimization systems.

Renée DiResta is the most operationally significant figure in this category. A former CIA intern, Wall Street quantitative trader, venture capitalist, and startup founder, she became the Technical Research Manager at the Stanford Internet Observatory, where she led the investigation into the Russian Internet Research Agency’s multi-year campaign to manipulate American society through social media. She delivered findings to the bipartisan leadership of the Senate Select Committee on Intelligence and advised Congress, the State Department, and dozens of academic and civic organizations. Her phrase “freedom of speech is not freedom of reach”—co-authored with Aza Raskin, the inventor of infinite scroll—captures the Lippmann insight for the platform era: the issue is not who is allowed to speak but whose speech the algorithm chooses to amplify.

In June 2024, DiResta’s contract at Stanford was not renewed. The Stanford Internet Observatory was effectively dismantled after sustained political pressure from Republican lawmakers who accused it of colluding with the government to censor conservative voices. House Judiciary Committee Chairman Jim Jordan posted “Free speech wins again!” on the day the closure was reported. DiResta moved to Georgetown University’s McCourt School of Public Policy. The observatory that studied how reality is manufactured was itself destroyed by a manufactured narrative about censorship. Lippmann would have recognized the mechanism instantly.

Shoshana Zuboff, professor emerita at Harvard Business School, published The Age of Surveillance Capitalism in 2019, coining the term that now defines the business model of the dominant technology platforms. Zuboff’s thesis extends Lippmann into the economic sphere: the platforms do not merely construct “pictures in their heads” but extract behavioral data to build predictive models that increasingly function as behavioral modification instruments. She calls this “instrumentarian power”—the capacity to shape behavior at scale through the architecture of digital environments. Where Lippmann’s manufactured reality was constructed by editors choosing which stories to print, Zuboff’s is constructed by algorithms optimizing for engagement metrics that serve as proxies for neurochemical arousal. The “pictures in their heads” are now personalized, dynamically updated, and selected by machines that have learned what each individual nervous system responds to most intensely.

Tim Wu, professor at Columbia Law School, occupies the space between Lippmann and Creel. His 2016 book The Attention Merchants traced the full lineage from the penny press through broadcast television to the digital platform, documenting how each medium monetized human attention through the same core transaction: free content in exchange for the viewer’s time, resold to advertisers. Wu also coined the concept of net neutrality, served in the Biden White House, and has argued that the attention merchants’ business model is not merely exploitative but structurally incompatible with democratic self-governance. Like Lippmann, he maps the system. Unlike Lippmann, he argues that the system should be dismantled rather than managed by a more enlightened elite.

The Branch Point: Why the Chain Distributed

The historical chain ran through individuals. Pulitzer to Creel to Bernays to Dichter to Fogg. The modern chain runs through systems. Why?

The answer is scale. When Bernays engineered the “Torches of Freedom” campaign in 1929, he needed to coordinate a few dozen debutantes, a photographer, and a sympathetic press. The campaign reached millions, but it required a human orchestrator at every step. When Cambridge Analytica targeted psychologically tailored advertisements during the 2016 election, it needed a team of data scientists, a voter file, and API access to Facebook. The operation drew on the harvested profiles of eighty-seven million Americans, but it still required human strategists to design the messages and interpret the data.

The 2026 operations require neither. The large language model generates the messages. The platform’s engagement algorithm tests them against live audiences. The feedback loop optimizes in real time. The human operator uploads a voter file and defines a desired outcome. The machine does the rest. The chain has been automated, and automation distributes the function across the system rather than concentrating it in an individual. There is no single Bernays to identify, confront, or hold accountable. There is an architecture.
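The generate-test-optimize loop just described can be sketched with a simple epsilon-greedy bandit standing in for the platform’s optimizer. Everything here is simulated and hypothetical—the engagement rates, the function names, and the use of a bandit algorithm are illustrative assumptions, not a description of any platform’s actual system.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def simulated_engagement(variant_quality):
    """Stand-in for a live audience: engage with probability = quality."""
    return 1 if random.random() < variant_quality else 0

def optimize(variant_qualities, rounds=10_000, epsilon=0.1):
    """Shift exposure toward whichever message variant engages most."""
    n = len(variant_qualities)
    shows = [0] * n
    wins = [0] * n
    for _ in range(rounds):
        if random.random() < epsilon:
            i = random.randrange(n)  # explore: try a random variant
        else:                        # exploit: serve the current best performer
            i = max(range(n), key=lambda j: wins[j] / shows[j] if shows[j] else 0)
        shows[i] += 1
        wins[i] += simulated_engagement(variant_qualities[i])
    return shows

# Three simulated ad variants with hidden engagement rates.
exposure = optimize([0.02, 0.05, 0.11])
# The loop concentrates exposure on the most engaging variant,
# with no human deciding which message "won".
```

The design point is that the human operator specifies only the inputs and the objective; the allocation of attention to each message is an emergent property of the feedback loop.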

This is the most significant change in the chain’s 126-year history. The techniques that Pulitzer discovered through competition, Bernays formalized through theory, Dichter tested through depth interviews, and Fogg taught through coursework are now embedded in code that runs without human supervision. The persuasion is continuous. The optimization is automatic. The accountability is distributed to the point of diffusion. When a newspaper published a sensational headline, an editor’s name was on the masthead. When Bernays engineered a campaign, his firm took the credit. When Cambridge Analytica targeted voters, its executives could be subpoenaed. When an algorithm selects the content most likely to activate a user’s amygdala and hold their attention for another thirty seconds, no individual made the decision. The system made the decision. The system was designed by thousands of engineers implementing specifications written by hundreds of product managers interpreting strategies set by dozens of executives pursuing a single metric: engagement. The chain is everywhere and nowhere. That is why it persists.

The Watchers and the Watched

A pattern emerges from the map. The operational heirs—the Sunsteins, Eyals, and platform growth teams—are thriving. They have budgets, institutional support, and expanding mandates. The theoretical heirs—the DiRestas, Zuboffs, and Wus—are being marginalized. DiResta’s research lab was shut down under political pressure. Zuboff retired from Harvard. Wu left the White House. The Center for Humane Technology, founded by Tristan Harris and Aza Raskin, continues to operate but has shifted focus from social media harms to AI governance, acknowledging that the social media fight was lost. The Stanford Internet Observatory’s Election Integrity Partnership, which monitored misinformation in real time during the 2020 and 2022 elections, no longer exists.

The asymmetry is structural, not accidental. The operators generate revenue. The theorists generate friction. In a system optimized for engagement, the people who study the system’s harms are a cost center. The people who build the system are a profit center. The market resolves this asymmetry in the obvious direction. Vance Packard published The Hidden Persuaders in 1957 and advertising spending continued to climb. Tim Wu published The Attention Merchants in 2016 and screen time continued to increase. DiResta documented Russian manipulation of American social media and the lab that documented it was defunded. The pattern is consistent across seventy years: exposure does not stop the system. Exposure is metabolized by the system. The alarm is sounded. The architecture absorbs it.

The most recent data point is the most telling. In his August 2025 interview with the Hoover Institution, Sunstein noted that demand for behavioral economists in the private sector is higher than it has ever been. Silicon Valley, Saudi Arabia, Germany, France, Italy—all are competing for professionals trained in the science of behavior modification. The supply of people who know how to manipulate human attention and decision-making is increasing to meet demand. The supply of people who study the consequences of that manipulation is decreasing under political and institutional pressure. The ratio is moving in one direction.

What the Distribution Reveals

The distributed chain has no single point of failure and no single point of accountability. That is its power and its danger. When the chain ran through individuals—Bernays, Dichter, Ogilvy—it could be named, critiqued, and at least theoretically regulated. When the chain runs through algorithms, nudge units, platform architectures, and AI-generated microtargeting systems, the naming becomes harder, the critique more diffuse, and the regulation perpetually one step behind the technology.

But the distribution also reveals something the historical chain obscured: the universality of the target. Bernays targeted consumers. Creel targeted citizens. Dichter targeted the unconscious. Sunstein targets the irrational decision-maker. The algorithm targets the nervous system directly, without needing to theorize about what it is targeting. They are all targeting the same thing. They have always been targeting the same thing. The human organism—evolved to detect threats, crave social validation, seek novelty, avoid cognitive effort, and respond to emotional activation faster than it can evaluate it—is the constant in a 126-year equation. The variables are the delivery systems, the institutional structures, and the language used to describe what is being done.

Bernays called it the engineering of consent. Sunstein calls it choice architecture. Eyal calls it habit formation. Facebook’s growth team called it user engagement. The algorithm calls it nothing at all. It has no name for what it does. It simply measures which stimulus produces the longest session and serves more of it. The removal of language from the process—the replacement of human intention with machine optimization—is the final evolution of the chain. The system no longer needs to justify itself because it no longer needs a justifier. It runs.

The question for the citizen is the same question it has been since 1898, when a headline about the USS Maine sent a nation to war. It is the question Lippmann posed in 1922, when he asked whether the public could distinguish the pictures in their heads from the world those pictures claimed to represent. It is the question Packard posed in 1957, Wu posed in 2016, and Harris posed to the United States Senate in 2019 and 2021. The question has never been answered.

Who decides what you are afraid of?

Because someone—or something—always does. And the answer, for the first time in the chain’s 126-year history, may be: nobody. Not in the sense that nobody is responsible, but in the sense that the decision is now made by a system so distributed that responsibility dissolves before it can be assigned. Bernays could be confronted. Creel could be disbanded. Dichter could be exposed. Even Cambridge Analytica could be shut down. But the engagement algorithm cannot be confronted because it has no address, no office, no public face. It is not a person. It is not even a single program. It is a property of the architecture—a behavioral tendency built into the infrastructure of every platform that monetizes attention. To dismantle it would require dismantling the business model of the information economy. No government has attempted this. No regulator has proposed it. The chain has achieved what no individual link ever could: it has become the environment.

The chain has names. The names have changed. The function has not. And the heirs are not hiding. They are publishing books, advising governments, shipping code, and optimizing for engagement. They are doing it in the open. Just like Bernays did.

The difference is that Bernays worked alone, and the distributed chain works everywhere, all the time, on every screen, in every pocket. It has no off switch because it was never designed to have one. It has no conscience because conscience is not a metric that can be optimized. And it has no natural end because the nervous system it targets will not evolve fast enough to outrun a system that adapts in real time.

The only asymmetric advantage the citizen retains is the one the chain cannot automate: the decision to look up from the screen and recognize that what is being done to you has a history, that the history has been documented, and that the documentation is itself an act of resistance. Not because knowledge stops the system. It does not. Packard proved that in 1957. But because knowledge is the precondition for every other form of resistance that might.

The chain is distributed. The witness does not have to be.

RESONANCE

Sources, evidence, and the evidentiary chain

Cialdini RB (1984; rev. 2021). Influence: The Psychology of Persuasion. Harper Business. Summary: Identifies six (now seven) principles of compliance—reciprocity, commitment, social proof, authority, liking, scarcity, unity—that are embedded in the engagement architecture of every major platform and taught in every marketing curriculum.

Confessore N (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html Summary: Comprehensive reporting on Cambridge Analytica’s harvest of 87 million Facebook profiles for psychologically targeted political advertising, including the British Parliamentary finding that it constituted a “democratic crisis.”

DiResta R (2024). Invisible Rulers: The People Who Turn Lies into Reality. Crown. Summary: Maps the mechanics of modern information warfare, narrative manipulation across social networks, and the role of algorithmic amplification in constructing manufactured reality—extending Lippmann’s framework to the platform age.

Eyal N (2014). Hooked: How to Build Habit-Forming Products. Portfolio/Penguin. Summary: Silicon Valley’s operational manual for engineering compulsive user behavior. The Hook Model—trigger, action, variable reward, investment—is the Bernays playbook translated into product design. Over one million copies sold.

Eyal N (2019). Indistractable: How to Control Your Attention and Choose Your Life. BenBella Books. Summary: The same author who taught companies to build addictive products then wrote the guide to resisting them—replicating Bernays’s pattern of selling both the cigarettes and the filter.

Fogg BJ (2003). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann. Summary: Foundational textbook of captology. Fogg later rebranded the Stanford Persuasive Technology Lab as the Behavior Design Lab—mirroring Bernays’s renaming of propaganda as public relations when the first term acquired negative connotation.

Halpern D (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. WH Allen. Summary: Account of the UK Behavioural Insights Team’s founding in 2010, its methods, and its expansion from the British Cabinet Office to a global advisory practice. The institutional Creel of the behavioral age.

Lewis P (2017). “Our Minds Can Be Hijacked”: The Tech Insiders Who Fear a Smartphone Dystopia. The Guardian. https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia Summary: Profiles Tristan Harris, Aza Raskin, and other former tech insiders who describe the persuasive design techniques used to exploit human psychology, confirming that the mechanisms were understood consciously by their creators.

Palihapitiya C (2017). Money as an Instrument of Change. Stanford Graduate School of Business, November 2017. Summary: The recorded public confession in which Facebook’s former VP of User Growth stated: “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works.” He does not use social media and does not allow his children to use it.

Parker S (2017). Interview with Mike Allen. Axios, November 9, 2017. https://www.axios.com/2017/12/15/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792 Summary: Facebook’s founding president stating the platform was designed to exploit “a vulnerability in human psychology” and that the creators “understood this consciously. And we did it anyway.”

Sanders NE, Schneier B (2025). AI Is Changing How Politics Is Practiced in America. The American Prospect. https://prospect.org/2025/10/10/ai-artificial-intelligence-campaigns-midterms/ Summary: Investigation of AI-driven political advertising in the 2026 midterm cycle, documenting the use of large language models to generate personalized campaign messaging at scale without human editorial intervention.

Tappin BM, et al. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus 3(2). doi:10.1093/pnasnexus/pgae035. Summary: Peer-reviewed study confirming that AI-generated microtargeted political messages can be persuasive, and that computer-based personality judgments from 300 Facebook likes exceed spousal accuracy.

Thaler RH, Sunstein CR (2008; rev. 2021). Nudge: Improving Decisions About Health, Wealth, and Happiness. Penguin. Summary: The foundational text of choice architecture and libertarian paternalism, generating over 400 nudge units in governments worldwide. Sunstein served as OIRA Administrator under Obama and as Senior Counselor at DHS under Biden.

Zuboff S (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. Summary: Coined “surveillance capitalism” and “instrumentarian power”—the capacity to shape behavior at scale through digital architecture. Extends Lippmann’s manufactured reality into the economic sphere of behavioral futures markets.

The Chain of Custody

How Techniques of Psychological Manipulation Transmit Across Generations in American Media

The Handoff

There is a story we like to tell about the manipulation of the American mind. In this story, each generation’s media discovers independently that fear sells, that emotion outperforms reason, and that human attention, once captured, can be converted into profit or political power. The story is comforting because it implies that the manipulation is accidental—an emergent property of free markets and human nature, reinvented from scratch each time technology changes the delivery mechanism.

The story is wrong.

The techniques of mass psychological manipulation in American media were not independently invented in each era. They were transmitted through a documented chain of individuals and institutions, each generation refining and scaling the methods of the last. The chain has names. The handoffs have dates. The target—the human amygdala—has never changed. What changed was the delivery system: from the broadsheet to the broadcast to the algorithm. What never changed was the playbook. And the playbook was passed, hand to hand, from the newsrooms of 1890s New York to the server farms of twenty-first-century Menlo Park.

A necessary caveat before the evidence. To trace a chain of transmission is not to allege a conspiracy. Conspiracies require coordination and concealment. What follows requires neither. Each link in the chain operated openly, published books, gave lectures, trained students, and took clients. The chain is visible to anyone who reads the primary sources in chronological order. That almost no one does—that each generation imagines it invented its own predicament—is itself a testament to how effectively the techniques work. The manipulated mind does not know it is being manipulated. Neither, apparently, does the manipulated era.

The Laboratory: Pulitzer, Hearst, and the Discovery of Activation

The chain begins in the 1890s, in the circulation wars between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. The techniques they pioneered—scare headlines in oversized type, lavish illustrations, faked interviews, pseudoscience paraded as expertise, and theatrical sympathy with the underdog—were catalogued by journalism historian Frank Luther Mott, whose five defining characteristics of yellow journalism remain the standard taxonomy. Every one of those characteristics is an emotional accelerant. Not one requires the reader to think. They require the reader to feel.

The business model was simple and transformative: activate the reader’s threat-detection circuitry, sell the activation to advertisers, and ensure that tomorrow’s edition promises resolution that never arrives. The Spanish-American War of 1898 was the proof of concept—a conflict partially manufactured by headline pressure, demonstrating that sufficiently sustained emotional activation could move not only individual purchasing decisions but national policy. Pulitzer and Hearst did not theorize this. They stumbled into it through competition. But they built the laboratory in which every subsequent practitioner would conduct experiments.

The Federal Prototype: The Committee on Public Information

The first institutional handoff occurred in April 1917, when President Woodrow Wilson established the Committee on Public Information under the directorship of George Creel. The CPI was the United States government’s first systematic propaganda bureau—a wartime machine tasked with manufacturing consent for American entry into the Great War. Creel, a former investigative journalist who understood the mechanics of mass persuasion from the inside, recruited journalists, artists, filmmakers, and academics to staff an operation that would touch virtually every channel of American communication.

The CPI’s most remarkable instrument was the Four Minute Men: seventy-five thousand volunteer speakers who delivered scripted talks in movie theaters during reel changes, in churches, in lodge halls, and at public gatherings across the country. The scripts were drafted centrally, updated weekly, and designed to compress maximum emotional impact into the four minutes available before the next reel loaded. The topics followed a deliberate sequence: first, the threat—German atrocities, submarine warfare, the danger to American shores. Then the call to action—buy Liberty Bonds, conserve food, report suspicious behavior. The structure was pure yellow journalism translated into speech: activate the threat response, then direct the activated body toward a specific behavior. The CPI also produced posters, films, press releases, and a daily newspaper for editors. It was a total-spectrum persuasion operation, and it worked. Liberty Bond sales exceeded targets. Enlistment surged. The American public, which had been broadly isolationist in 1916, supported the war by 1917.

The CPI did not invent its techniques. It borrowed them directly from the Pulitzer-Hearst playbook: emotional activation, oversimplified narratives, visual shock, and relentless repetition. What the CPI added was scale, intentionality, and a feedback loop. For the first time, the techniques of mass emotional manipulation were deployed by a government, with a budget, under centralized direction, with measurable objectives, and with the ability to adjust the message based on results. The lesson was not lost on the young men who served in the bureau.

Two of those young men would become the most consequential figures in the history of American persuasion. Edward Bernays and Walter Lippmann both served on the CPI. Both witnessed firsthand what happened when the techniques of yellow journalism were professionalized, funded, and pointed at a specific target. Both left the CPI with the same recognition: that what could be done for a nation at war could be done for organizations and people in a nation at peace. Bernays said exactly this in his 1965 autobiography. Lippmann arrived at the same conclusion through a different lens. The CPI was the handoff point. Everything that follows traces back to it.

The Architects: Bernays and Lippmann

Edward Bernays was Sigmund Freud’s nephew twice over—his mother was Freud’s sister, his father was the brother of Freud’s wife. This was not incidental to his career. Bernays explicitly adapted his uncle’s theories about unconscious desire and irrational motivation to the practice of what he initially called propaganda and later rebranded as public relations. He published Crystallizing Public Opinion in 1923 and the more audacious Propaganda in 1928, in which he declared that the conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society.

His client list reads like a catalog of twentieth-century American power: General Electric, Procter & Gamble, the American Tobacco Company, CBS, United Fruit, and President Calvin Coolidge. His most famous campaign—the 1929 “Torches of Freedom” action, in which he arranged for debutantes to smoke Lucky Strikes during the Easter Sunday Parade in New York, framing cigarettes as symbols of women’s liberation—demonstrated a principle that would define the next century of American persuasion: the product is irrelevant; what you sell is the emotion. He did not sell cigarettes. He sold rebellion, identity, and freedom. The cigarettes were delivery vehicles.

Bernays lived to 103 and died in 1995—long enough to see his techniques automated by machines he could not have imagined. But his most consequential legacy may have been unintentional. Joseph Goebbels confirmed reading Bernays’s work by 1933. Bernays learned this from a Hearst newspaper foreign correspondent, and he recorded the discovery in his autobiography with evident discomfort. The toolbox he built had no lock on it.

Walter Lippmann, who also served on the CPI, took a parallel and equally consequential path. His 1922 book Public Opinion theorized what Bernays practiced. Lippmann argued that the public operates not on reality but on “pictures in their heads”—manufactured representations that bear only an approximate relationship to the world they purport to describe. The press, Lippmann argued, does not mirror reality. It constructs the mental environment in which citizens form opinions and make decisions. Lippmann provided the intellectual framework; Bernays provided the operational manual. Together, they were twin architects of the consent-manufacturing apparatus that would define the American twentieth century.

The Freud of Madison Avenue

The next link in the chain arrived from Vienna, carrying the same Freudian toolkit but a different target. Ernest Dichter, born in 1907, trained as a psychoanalyst, fled the Nazis, and arrived in the United States in the late 1930s. By 1946 he had founded the Institute for Motivational Research in Croton-on-Hudson, New York, and by the mid-1950s he had earned the title “the Freud of Madison Avenue.”

Dichter’s innovation was to apply the Bernays approach—Freudian psychology deployed for commercial purposes—not to public relations but to advertising specifically. The connection between the two men was not personal mentorship but shared intellectual DNA: both drew directly from Freud, both treated the public as a collection of unconscious drives to be decoded and redirected, and scholars at the Hagley Museum and elsewhere have documented the parallel trajectories in detail. Where Bernays had manufactured public consent for political and corporate clients, Dichter probed the unconscious desires of individual consumers. He conducted depth interviews, uncovering why people bought what they bought—and the reasons were almost never the ones they stated. He discovered that soap was experienced as an erotic ritual, that convertibles represented mistress fantasies, and that cake mixes sold better when they required the cook to add a real egg, satisfying an unconscious need to nurture. He created Esso’s “Put a Tiger in Your Tank” campaign, linking gasoline to virility.

By the late 1950s, nearly three-quarters of the largest advertising firms in America were using what the industry called “depth techniques”—methods inspired by psychoanalysis to access the irrational desires beneath purchasing decisions. Advertising spending in the United States had exploded from two billion dollars in 1939 to nearly twelve billion by the mid-1950s. The Bernays playbook had been industrialized.

Vance Packard blew the whistle in 1957 with The Hidden Persuaders, which attacked Dichter and the motivation researchers for manipulating consumers and invading their psychological privacy. Packard compared Dichter’s gothic mansion research institute to the surveillance apparatus of George Orwell’s Big Brother. The book became a bestseller. The public was alarmed. And nothing changed. Advertising spending continued to climb. The techniques were refined, not abandoned. The whistle was blown. Nobody stopped running.

David Ogilvy, who founded Ogilvy & Mather in 1948 and would be crowned the “Father of Advertising” by Time magazine in 1962, acknowledged the lineage explicitly. In Confessions of an Advertising Man, Ogilvy wrote that he followed Edward Bernays’s advice on matters of professional strategy. Ogilvy had also worked for George Gallup’s Audience Research Institute—importing the scientific polling methods that the CPI had pioneered in cruder form—and during the Second World War he served in British Intelligence, where he analyzed propaganda and applied the Gallup technique to matters of diplomacy and security. Ogilvy carried the techniques from wartime intelligence to Madison Avenue as directly as Bernays had carried them from the CPI to public relations.

The Revolution That Wasn’t: Bernbach and the Selling of Identity

The advertising industry’s so-called Creative Revolution of the 1960s is often presented as a break from the manipulative traditions of the Dichter era. Bill Bernbach, who co-founded Doyle Dane Bernbach in 1949, is remembered as the visionary who replaced the heavy-handed depth techniques with wit, honesty, and respect for the consumer’s intelligence. His landmark 1959 Volkswagen campaign—“Think Small”—was a masterpiece of visual minimalism and sardonic understatement. Advertising Age later named it the greatest advertising campaign of the twentieth century.

But look more carefully at what the Creative Revolution actually changed. Bernbach did not stop selling emotion. He refined the emotional sale. The earlier generation had sold aspiration: bigger, shinier, more expensive, as proof of social status. Bernbach sold identity: smaller, simpler, smarter, as proof of character. The Volkswagen Beetle became the car for people who were too sophisticated to need a big car. Avis became the rental company for people who appreciated the underdog. The psychological mechanism was identical—the consumer purchases not a product but an image of themselves—but the Creative Revolution upgraded the sophistication of the appeal. The crude Freudian symbolism of Dichter gave way to a subtler, more culturally attuned manipulation. The target was still the same: the gap between who you are and who you want to be.

Bernbach himself wrote a letter to his agency’s management that, read carefully, reveals he understood the continuity. He acknowledged the technicians of advertising who knew all the rules—the Dichter school—but argued that advertising is fundamentally persuasion, and persuasion is not a science but an art. This is not a rejection of manipulation. It is a claim of superior craftsmanship. The Creative Revolution was a refinement, not a repudiation. The chain continued.

The Broadcast Multiplier

A note on the medium that carried the chain from print to screen. Television did not originate the techniques of emotional manipulation—it inherited them—but it did something the newspaper could never do. It delivered the activation into the living room, in moving images, with sound, in real time, and it did so to tens of millions of people simultaneously. The print headline activated the amygdala through language. The television broadcast activated it through the full sensory apparatus: the footage of the body bag, the burning village, the weeping mother, the mushroom cloud. The viewer could not skim. Could not look away as easily as turning the page. The image arrived unbidden and stayed.

The advertising industry adapted instantly. The thirty-second spot became the dominant unit of commercial persuasion by the 1960s, and it drew on every technique in the existing chain. Dichter’s depth research informed the creative strategy. Bernbach’s identity-selling informed the tone. Bernays’s principle of selling the emotion rather than the product became the foundation of brand advertising. By the mid-1960s, NBC and CBS were locked in a prime-time ratings war as fierce as the Pulitzer-Hearst circulation battles, and for the same structural reason: the network that captured the most attention could charge the most for advertising. The commodity had not changed. The delivery mechanism had.

Television also introduced a feature that would prove critical to the chain’s next evolution: passivity. The newspaper required the reader to pick it up, unfold it, and move their eyes across the page. The television required only that the viewer not leave the room. The remote control, introduced widely in the 1950s, gave viewers the ability to change channels but not to stop the flow. The default state was reception. The broadcast came to you. You had to act to stop it. This inversion—from active seeking to passive receiving—was the prototype for the infinite scroll that would arrive half a century later. The chain was learning that the most effective manipulation is the kind that requires no effort from the manipulated.

The Inversion: Herbert Simon and the Naming of the Prize

In 1971, at a Johns Hopkins University colloquium, an economist and cognitive scientist named Herbert A. Simon delivered a paper titled “Designing Organizations for an Information-Rich World.” Seven years later, Simon would win the Nobel Prize in Economics for his research on decision-making within organizations—but the 1971 paper, written before that recognition, contained a passage that would become the foundational text of the attention economy: “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.”

Simon’s contribution was not operational. He built no campaigns, sold no products, manipulated no public. His contribution was taxonomic. He named the commodity that Pulitzer, Hearst, Creel, Bernays, Lippmann, Dichter, Ogilvy, and Bernbach had been trading for seventy years without quite articulating what it was. They had all been in the attention business. They had all been harvesting the same finite cognitive resource and reselling it. Simon’s paper provided the intellectual framework that connected the nineteenth-century newspaper circulation war to the twentieth-century advertising industry to whatever was coming next.

What was coming next would not arrive for another quarter century. But when it did, it would arrive with Simon’s insight baked into its architecture. The engineers who built the platforms that now harvest human attention at industrial scale did not stumble into the attention economy by accident. They were trained in it. They had a syllabus.

The Syllabus: Stanford’s Persuasive Technology Lab

In 1998, a behavioral scientist named B.J. Fogg founded the Stanford Persuasive Technology Lab—later renamed the Behavior Design Lab—to study how computers could be designed to change what people think and do. Fogg coined the term “captology”: the study of computers as persuasive technologies. In 2003, he published the foundational textbook, Persuasive Technology: Using Computers to Change What We Think and Do. The title is not ambiguous. It is a declaration of purpose.

Fogg’s lab became a finishing school for Silicon Valley’s most consequential designers. His students were assigned readings drawn from decades of research into psychological manipulation—the same body of knowledge that ran from Bernays through Dichter to the motivation researchers of Madison Avenue. They were taught to identify the triggers, motivations, and abilities that govern human behavior, and to design interfaces that exploit those factors systematically. The lab’s influence was not theoretical. It was operational. In 2007, Fogg co-taught a Stanford course on building Facebook applications in which seventy-five students designed persuasive apps that collectively amassed millions of users in ten weeks. Fogg described the moment to the New York Times with a phrase that belongs in the permanent record: it was, he said, “a period of time when you could walk in and collect gold.”

The gold was not money. The gold was attention. And the prospectors had been trained.

Among Fogg’s students: Mike Krieger, who co-founded Instagram. Among those who took courses in Fogg’s lab: Tristan Harris, a magician’s son who had been fascinated since childhood by how easily human perception could be shaped. Harris later interned at Apple, then launched a startup called Apture, which Google acquired in 2011, bringing Harris into the company as a product manager. At Google, Harris was given the title of Design Ethicist—a role that, in retrospect, reads like a system’s immune response to its own pathology.

The Machine That Runs Itself

What Silicon Valley automated was not a new process. It was the entire Bernays lineage, compressed into code and running at a speed and scale that no human editor, propagandist, or advertising executive could have achieved.

Consider the architecture. The newspaper headline of 1900 was handcrafted by an editor who understood, intuitively, that fear and outrage sold papers. Bernays formalized the intuition into theory. Dichter tested the theory in depth interviews and sold the findings to corporations. Ogilvy and Bernbach refined the creative execution. Simon named the underlying commodity. Fogg taught a generation of engineers how to design interfaces that harvested that commodity through behavioral triggers. And the algorithm—the engagement-optimization engine that now curates every feed, every recommendation, every notification on every screen—completed the automation. The algorithm does not need to understand Bernays or Freud or Dichter. It does not need to understand anything. It simply measures which stimuli produce the longest engagement, feeds those stimuli to the user, and iterates. It is an amygdala-activation machine that has been stripped of every human mediating intelligence—every editor’s judgment, every creative director’s taste, every propagandist’s strategic objective—and reduced to a single function: maximize time on screen.
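The loop described above—measure which stimuli hold attention longest, serve more of them, iterate—can be sketched in a few lines. This is a toy illustration under stated assumptions, not any platform’s actual code: the content categories, dwell times, and exploration rate are all hypothetical, chosen only to show that the mechanism needs no model of psychology at all.

```python
import random

# Toy engagement-optimization loop. It knows nothing about Freud,
# Bernays, or the amygdala; it only measures which stimuli hold
# attention longest and serves more of them.

# Hypothetical average dwell times (seconds) by content category.
# In this toy world, outrage holds attention longest.
TRUE_DWELL = {"outrage": 45.0, "fear": 38.0, "cute": 20.0, "news": 12.0}

def simulate_dwell(category, rng):
    """Noisy observation of how long a user lingers on one item."""
    return max(0.0, rng.gauss(TRUE_DWELL[category], 5.0))

def run_feed(steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    totals = {c: 0.0 for c in TRUE_DWELL}  # cumulative dwell per category
    shown = {c: 0 for c in TRUE_DWELL}     # times each category was served
    for _ in range(steps):
        if rng.random() < epsilon or min(shown.values()) == 0:
            choice = rng.choice(list(TRUE_DWELL))  # occasionally explore
        else:
            # Exploit: serve whatever has held attention longest on average.
            choice = max(totals, key=lambda c: totals[c] / shown[c])
        totals[choice] += simulate_dwell(choice, rng)
        shown[choice] += 1
    return shown

counts = run_feed()
print(max(counts, key=counts.get))
```

Run long enough, the loop converges on whichever category maximizes time on screen—without ever representing, or needing, a reason why.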

The engagement metrics that drive the algorithm are, as Tim Wu argued in his 2016 book The Attention Merchants, behavioral proxies for neurochemical arousal. A click is a cortisol spike, measured. A share is an emotional activation, quantified. A scroll is a dopamine hit, harvested. Wu traced the business model from Benjamin Day’s penny press in the 1830s through every subsequent medium—radio, television, the internet—and demonstrated that the core transaction has never changed: free diversion in exchange for a moment of your attention, sold in turn to the highest-bidding advertiser. The New York Times Book Review called Wu’s work a Hidden Persuaders for the twenty-first century. The comparison was precise. Wu is to the algorithmic era what Packard was to the Madison Avenue era: a chronicler of techniques that the public will find alarming and then accommodate.

And then there is the infinite scroll. Invented in 2006 by Aza Raskin, an interface designer who would go on to serve as creative lead for Firefox at Mozilla, the infinite scroll eliminated the natural stopping cue—the bottom of the page, the end of the article, the moment when the reader might set down the paper and go outside. Raskin designed it to improve the user experience by removing friction. What it removed was agency. The scroll has no floor. The feed has no end. The amygdala has no exit. Raskin later estimated that his invention wastes two hundred thousand human lifetimes per day. He did not say this with pride.
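The design change is small enough to show in code. A sketch of the contrast—hypothetical function names, not Raskin’s implementation: a paged feed runs out and requires a deliberate act to continue, while an infinite feed simply never stops yielding.

```python
import itertools

# Paged feed: a natural stopping cue. The page ends, and continuing
# requires a deliberate act (requesting the next page).
def paged_feed(items, page_size=10, page=0):
    start = page * page_size
    return items[start:start + page_size]  # finite; has a floor

# Infinite feed: the stopping cue is gone. The iterator never ends;
# content keeps arriving for as long as the user keeps scrolling.
def infinite_feed(items):
    return itertools.cycle(items)  # no floor, no end

posts = [f"post-{i}" for i in range(25)]

last_page = paged_feed(posts, page_size=10, page=2)   # 5 items, then nothing
scroll = infinite_feed(posts)
first_hundred = list(itertools.islice(scroll, 100))   # could be 100, or 10,000
```

The paged version ends because `items[start:start + page_size]` eventually returns an empty list; the infinite version cannot end by construction. That one-line difference is the removed stopping cue.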

Here the chain delivers its cruelest irony. Aza Raskin is the son of Jef Raskin, the human-computer interface expert who conceived and initiated the Macintosh project at Apple in the late 1970s. Jef Raskin dedicated his career to the principle of “cognetics”—the ergonomics of the mind—and believed that technology should amplify human capabilities rather than exploit them. His son invented the single most effective mechanism for exploiting them. The father built the tool. The son built the trap. The chain does not require malice. It does not even require awareness. It requires only that each generation inherit the previous generation’s tools and discover, under competitive pressure, what those tools can really do.

The Reckoning

In February 2013, Tristan Harris—by then a Design Ethicist at Google—wrote a 141-slide presentation titled “A Call to Minimize Distraction & Respect Users’ Attention.” He shared it with ten colleagues. It spread organically to thousands of Google employees. The deck argued that the technology industry was engaged in a race to capture human attention that was degrading the capacity of individuals and societies to function. Harris urged Google, Apple, and Facebook to recognize the enormous responsibility that came with designing interfaces used by billions of people.

The presentation went viral inside Google. Harris was given the Design Ethicist title. Nothing else changed. He left Google in December 2015.

In 2018, Harris joined forces with Aza Raskin—the inventor of infinite scroll—and Randima Fernando to found the Center for Humane Technology. A student of the persuaders and the creator of the most addictive delivery mechanism in the history of digital media had, together, decided to try to undo what they had helped build. Harris coined the phrase “human downgrading” to describe the interconnected system of harms—addiction, distraction, isolation, polarization, misinformation—that he argued were not bugs in the system but features of a business model optimized for engagement at any cost.

In 2019, Harris testified before the United States Senate at a hearing titled “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms.” He returned in 2021 to testify before the Senate Judiciary Subcommittee on Privacy, Technology and the Law. In 2020, he was the primary subject of the Netflix documentary The Social Dilemma, which reached over one hundred million viewers in one hundred and ninety countries. The Atlantic called Harris “the closest thing Silicon Valley has to a conscience.”

The fact that Silicon Valley’s conscience is a single person tells you something about the ratio of exploitation to self-awareness in the industry. The fact that his co-founder is the man who invented the mechanism of exploitation tells you something about the chain. It does not end cleanly. It loops. The people who inherit the tools and discover what those tools can do sometimes become the people who try to stop what those tools are doing. But they build the organizations to stop it using the same techniques—viral presentations, emotional appeals, media appearances designed to capture attention—because those are the only techniques that work at scale. The chain does not break. It doubles back on itself.

The Counterargument and the Evidence

A fair objection to the chain-of-custody thesis is that these techniques were not transmitted so much as independently rediscovered. Human psychology is universal. Fear sells. Emotion outperforms reason. Attention is finite. Any sufficiently competitive information market will discover these facts on its own, without needing a lineage from Pulitzer to Bernays to Fogg.

The objection is worth taking seriously, and it is half right. The underlying psychology is universal, and some degree of convergent discovery is inevitable. But the historical record shows something more specific than convergent evolution. It shows named individuals reading named books, citing named predecessors, studying at named institutions, and working for named organizations that were themselves staffed by alumni of earlier named organizations. Bernays served on the CPI and explicitly described applying its wartime techniques to peacetime commerce. Dichter applied Freudian psychoanalysis to consumer behavior and was linked by multiple scholars to Bernays through their shared theoretical starting point in Freud. Ogilvy read Bernays and followed his advice. Fogg trained students in persuasive technology. Those students built Instagram and then co-founded the organization trying to dismantle the attention economy. Harris studied under Fogg at Stanford, then worked at Google, then testified before Congress.

This is not convergent evolution. This is a chain of custody with receipts.

The distinction matters because the response to convergent evolution is resignation—if the exploitation of human attention is inevitable, then nothing can be done. The response to a chain of transmission is intervention: identify the links, name the handoffs, and make the inheritance visible. A system that operates in the dark cannot be held accountable. A system whose lineage is documented can.

What the Chain Reveals

The chain of custody, fully assembled, runs as follows. Pulitzer and Hearst discovered that emotional activation is a commercial engine. The Committee on Public Information professionalized and scaled those techniques for wartime propaganda. Bernays carried the CPI’s methods into peacetime commerce and provided the theoretical framework of consent engineering. Lippmann provided the complementary intellectual architecture of manufactured reality. Dichter imported the Freudian toolkit into advertising and demonstrated that consumer behavior could be shaped by accessing unconscious desires. Television multiplied the sensory bandwidth of the delivery system and introduced the passivity that would define every subsequent medium. Ogilvy and Bernbach refined the creative execution, selling not products but identities and emotions. Packard and Wu documented the system and were absorbed by it. Simon named the underlying commodity. Fogg taught a generation of engineers how to design interfaces that harvest it. And the algorithm completed the automation, stripping the process of every human mediating intelligence and reducing it to a function: maximize engagement, maximize time on screen, maximize the harvest of the single most valuable commodity in the information economy.

Every link in the chain operated openly. Every handoff is documented. Every technique was refined, not invented. And the target—the human nervous system, shaped over millions of years of evolution to prioritize threat, crave social validation, and pursue novelty—was never consulted about its participation.

The deepest lesson of the chain is not about technology or media or advertising. It is about time. The chain has been operating for 126 years. It has survived two world wars, the invention of radio, the invention of television, the invention of the internet, and the invention of the smartphone. It has survived muckraking exposés, congressional hearings, bestselling books, and Emmy-winning documentaries. It has survived because each generation believes it is encountering the problem for the first time. Each generation reaches for the smartphone—or the newspaper, or the television, or the radio—and imagines it is making a free choice.

The chain suggests otherwise. The choice was engineered, a long time ago, by people who published books about engineering it. The techniques were transmitted. The handoffs have dates. And the system continues to run, not because it is hidden, but because exposure has never been sufficient to stop it. Packard exposed it in 1957. Wu exposed it in 2016. Harris testified about it in 2019 and 2021. The documentary reached a hundred million people. The scroll continues.

Perhaps the final link in the chain will be different. Perhaps the documentation of the chain itself—the naming of every link, the dating of every handoff—will provide what the headline never offered and the algorithm was designed to withhold: agency. The recognition that you are not a consumer of information but a target of a system that has been refining itself for longer than you have been alive.

The chain has no natural end. But it can have a witness.

RESONANCE

Sources, evidence, and the evidentiary chain

Bernays EL (1923). Crystallizing Public Opinion. Boni and Liveright. Summary: Bernays’s first major work theorizing the practice of public relations as a systematic discipline. Establishes the intellectual framework for consent engineering drawn from Freudian psychology and crowd theory.

Bernays EL (1928). Propaganda. Horace Liveright. Summary: The foundational text declaring that “the conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society.” The operational manual that Goebbels confirmed reading by 1933.

Bernays EL (1965). Biography of an Idea: Memoirs of Public Relations Counsel. Simon and Schuster. Summary: Bernays’s autobiography, containing the explicit statement that wartime CPI techniques could be applied to peacetime commerce—the documented handoff point from government propaganda to commercial public relations.

Bernays EL and Garner W (2020). Propaganda: A Master Spin Doctor Convinces the World That Dogsh*t Tastes Better Than Candy. Adagio. Summary: William Garner’s 21st-century edit of Bernays’s classic book.

Curtis A (2002). The Century of the Self. BBC. Summary: Four-part BBC documentary tracing Bernays’s influence from the CPI through the consumer economy, with primary-source interviews confirming the chain from Freud to Bernays to Madison Avenue.

DiResta R, Raskin A (2022). Freedom of Speech Is Not Freedom of Reach. Wired. Summary: Co-authored by the inventor of infinite scroll and the Stanford Internet Observatory’s research manager, articulating the Lippmann insight for the platform era: algorithmic amplification, not content creation, is the mechanism of modern propaganda.

Fogg BJ (2003). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann. Summary: The foundational textbook of captology—the study of computers as persuasive technologies—published by the founder of the Stanford Persuasive Technology Lab whose students co-founded Instagram and the Center for Humane Technology.

Harris T (2013). A Call to Minimize Distraction and Respect Users’ Attention. Internal Google presentation. Summary: The 141-slide deck that went viral among Google employees, arguing that the technology industry was engaged in a race to capture human attention that degraded individual and societal capacity. Harris left Google in December 2015.

Lippmann W (1922). Public Opinion. Harcourt, Brace. Summary: Theorized that the public operates on “pictures in their heads”—manufactured representations of reality. The intellectual framework complementing Bernays’s operational manual. Both men served on the CPI.

Mott FL (1941). American Journalism: A History of Newspapers in the United States Through 250 Years. Macmillan. Summary: Foundational taxonomy of yellow journalism’s five defining characteristics, establishing the Pulitzer–Hearst circulation wars as the laboratory for all subsequent mass persuasion techniques.

Ogilvy D (1963). Confessions of an Advertising Man. Atheneum. Summary: Ogilvy acknowledged following Bernays’s advice on professional strategy. Ogilvy also worked for George Gallup’s Audience Research Institute and served in British Intelligence during WWII, carrying techniques from wartime to Madison Avenue.

Packard V (1957). The Hidden Persuaders. David McKay Company. Summary: The bestselling exposé of Dichter and motivation research that alarmed the public and changed nothing. Advertising spending continued to climb. This paper uses Packard as evidence that exposure does not stop the system.

Raskin A (2019). I Invented the Infinite Scroll. I’m Sorry. BBC. https://www.bbc.com/news/technology-44640959 Summary: Aza Raskin, son of Macintosh creator Jef Raskin, describing how he invented the infinite scroll in 2006 and estimating it wastes 200,000 human lifetimes per day. Co-founded the Center for Humane Technology with Tristan Harris.

Samuel LR (2010). Freud on Madison Avenue: Motivation Research and Subliminal Advertising in America. University of Pennsylvania Press. Summary: Scholarly account of how Freudian psychoanalytic techniques were transmitted from European émigrés to Madison Avenue, with Dichter as the central figure linking Bernays’s PR framework to postwar advertising.

Simon HA (1971). Designing Organizations for an Information-Rich World. In Greenberger M (ed.), Computers, Communications, and the Public Interest, pp. 37–52. Johns Hopkins Press. Summary: The paper that named the underlying commodity: “A wealth of information creates a poverty of attention.” Simon won the Nobel Prize in Economics in 1978, seven years after this publication.

Tye L (1998). The Father of Spin: Edward L. Bernays and the Birth of Public Relations. Crown. Summary: Full-length biography confirming Bernays’s CPI service, his adaptation of Freudian psychology to commercial persuasion, Goebbels’s reading of his work, and the Torches of Freedom campaign.

Wu T (2016). The Attention Merchants: The Epic Scramble to Get Inside Our Heads. Alfred A. Knopf. Summary: Traces the business model from Benjamin Day’s penny press to digital platforms: free diversion in exchange for attention, resold to advertisers. The New York Times Book Review called it a Hidden Persuaders for the twenty-first century.