The Distributed Chain

Where the Heirs of Bernays, Lippmann, and Creel Are Working Right Now

The Question of Succession

A companion paper to this one, The Chain of Custody, traced the documented lineage of psychological manipulation techniques in American media from Joseph Pulitzer’s circulation wars through Edward Bernays’s consent engineering, Ernest Dichter’s motivational research, B.J. Fogg’s Persuasive Technology Lab at Stanford, and into the algorithmic optimization engines that now curate every feed on every screen. The chain had names. The handoffs had dates. The target—the human amygdala—never changed.

That paper ended in the present tense, with the observation that the chain continues. This paper asks the next question: who is holding it? If Bernays was the operational architect of mass persuasion, if Lippmann was its intellectual theorist, and if George Creel and the Committee on Public Information represented its institutionalization within the state—then who occupies those roles now? Where are they? What are they publishing? Who do they work for? And what are they building?

The answer is more unsettling than a single name. The chain did not produce a successor. It branched. The roles that Bernays, Lippmann, and Creel performed as individuals have been distributed across institutions, industries, and algorithms. The modern apparatus of mass persuasion does not have a face. It has an org chart—and the org chart spans governments, universities, platforms, consulting firms, and venture capital portfolios. What follows is a field guide to the heirs.

The Heirs of Creel: The Nudge State

George Creel’s Committee on Public Information was a wartime instrument: a federal propaganda bureau with seventy-five thousand volunteer speakers, a poster division, a film division, and a daily newspaper for editors. It ran for two years and was dismantled after the armistice. The modern equivalent is permanent, operates in peacetime, and exists in over four hundred government units worldwide.

The intellectual architect of this infrastructure is Cass Sunstein, the Robert Walmsley University Professor at Harvard Law School and, by citation count, the most referenced legal scholar in the United States. In 2008, University of Chicago economist Richard Thaler and Sunstein published Nudge: Improving Decisions About Health, Wealth, and Happiness, which argued that human decision-making is systematically irrational and that institutions can and should redesign “choice architectures”—the environments in which people make decisions—to steer behavior toward outcomes deemed beneficial by the architects. The book sold over two million copies and gave rise to a global movement.

Sunstein did not merely theorize. In 2009, President Barack Obama appointed him Administrator of the Office of Information and Regulatory Affairs, the executive branch’s regulatory review body—a position that gave him direct influence over the design of every federal regulation, form, and communication that touches American citizens. He later served as Senior Counselor to the Secretary of Homeland Security under President Biden and received the Distinguished Public Service Medal, the Department’s highest civilian honor, in 2024. His most recent book, Look Again: The Power of Noticing What Was Always There, co-authored with Tali Sharot, extends the behavioral framework into the psychology of habituation and attention.

Read Sunstein’s language carefully and you will hear Bernays rewritten for the academy. Bernays called it “the engineering of consent.” Sunstein calls it “choice architecture” and “libertarian paternalism.” The semantic distance is considerable. The functional distance is not. Both men argue that the public is systematically irrational, that the irrational public must be guided by experts, and that the guidance should be designed to feel like freedom. 

Bernays was more honest about the power dynamics. He wrote openly about invisible government and the manipulation of organized habits. Sunstein wraps the same project in the language of welfare optimization and consumer protection. The CPI’s Four Minute Men delivered scripted emotional appeals in movie theaters. Sunstein’s nudge units redesign the default options on government enrollment forms so that citizens are automatically opted into programs they might not have chosen if asked. The mechanism is gentler. The presumption is identical: the architect knows better than the citizen what the citizen should want.
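The power of the default is simple enough to sketch. The toy model below is my own illustration, not any agency's actual enrollment data: the same population, with the same 20 percent willingness to override whatever the form presents, produces wildly different "choices" depending on which box the architect pre-checks. The switch probability and population size are assumptions for illustration only.

```python
import random

# Illustrative sketch of default-option design ("choice architecture").
# Same citizens, same preferences, two different defaults. The 20%
# override rate is an assumed figure, not an empirical one.

def enrollment_rate(default_enrolled: bool, switch_prob: float,
                    n: int, rng: random.Random) -> float:
    enrolled = 0
    for _ in range(n):
        acts = rng.random() < switch_prob   # citizen bothers to change the default
        is_enrolled = default_enrolled != acts  # status flips only if they act
        enrolled += is_enrolled
    return enrolled / n

rng = random.Random(0)
opt_in = enrollment_rate(default_enrolled=False, switch_prob=0.2, n=10_000, rng=rng)
opt_out = enrollment_rate(default_enrolled=True, switch_prob=0.2, n=10_000, rng=rng)
print(f"opt-in: {opt_in:.0%}, opt-out: {opt_out:.0%}")
```

Nothing about the citizens changed between the two runs; only the architect's pre-selection did. That gap is the entire empirical case for the nudge state.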

In the United Kingdom, David Halpern led the Behavioural Insights Team—the original “Nudge Unit”—from its founding at the British Cabinet Office in 2010 under Prime Minister David Cameron. The unit has since been partially privatized and advises governments on multiple continents. By 2024, the OECD’s Behavioural Insights Network coordinated over two hundred such units globally. Canada, Australia, Germany, Japan, the World Bank, the United Nations, and the European Commission all operate behavioral intervention programs. The CPI was an emergency instrument. The nudge state is permanent infrastructure.

The most consequential application is in public health communication. The United States Centers for Disease Control and Prevention devotes a substantial portion of its discretionary budget to behavioral-science-driven messaging campaigns. These are labeled not as persuasion programs but as “Strategic Communication and Stakeholder Engagement” or “Vaccine Confidence Initiatives.” The language is clinical. The mechanism is emotional activation calibrated by behavioral research—fear appeals, social norm framing, default-option design, and the strategic deployment of trusted messengers. Whether this constitutes responsible public health practice or state-sponsored behavioral manipulation depends on whether you trust the architects more than Lippmann trusted the public. The CPI sold Liberty Bonds. The CDC sells compliance. The difference is the product. The technique is Creel’s.

The Heirs of Bernays: The Playbook Writers

Bernays published his methods. That was his most consequential act—more consequential than any individual campaign—because it meant the techniques could be learned, replicated, and scaled by anyone who read the books. Crystallizing Public Opinion in 1923. Propaganda in 1928. The playbook was open-source before the term existed.

The first heir to note is the man who trained the others. B.J. Fogg, who founded Stanford’s Persuasive Technology Lab in 1998 and whose students went on to co-found Instagram, launch the Center for Humane Technology, and staff the growth teams at every major platform, is still at Stanford. But the lab has been renamed. It is now the Behavior Design Lab. The word “persuasive” has been removed from the title. The lab’s stated mission has shifted from studying how computers change what people think and do to helping people create positive habits in their own lives. 

Fogg’s 2020 bestseller, Tiny Habits, is a self-help book about building small behavioral changes—a far cry from the 2003 textbook that taught a generation of engineers how to design interfaces that exploit psychological triggers. The lab’s website now encourages anyone studying persuasive technologies to review its early contributions on ethics. The pivot is significant. Bernays renamed “propaganda” as “public relations” when the first term acquired a negative connotation after the Second World War. Fogg renamed “persuasive technology” as “behavior design” when the first term acquired a negative connotation after The Social Dilemma. The technique persists under a new label. The graduates are already in the field.

The modern Bernays is Nir Eyal, and the parallel is almost too precise. Eyal holds an MBA from Stanford, taught at the Stanford Graduate School of Business and the Hasso Plattner Institute of Design, worked in the video gaming and advertising industries, and in 2014 published Hooked: How to Build Habit-Forming Products—Silicon Valley’s operational manual for engineering compulsive user behavior. The book lays out what Eyal calls the “Hook Model”: a four-phase cycle of trigger, action, variable reward, and investment, designed to create habits that bring users back without the company needing to spend on advertising or aggressive messaging. The book has sold over a million copies in more than thirty languages. Eyal consults for Fortune 500 companies and invests in habit-forming startups including Eventbrite, Canva, and Kahoot.
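The Hook Model's four phases can be sketched as a loop. The toy model below is my own illustration of the cycle Eyal describes, not code from any real product: the phase names come from the book, while the probabilities and the `investment` counter are invented for demonstration. The point the sketch makes is structural—each completed cycle raises the odds that the next trigger fires, which is what "habit-forming" means mechanically.

```python
import random

# A minimal sketch of the Hook Model cycle: trigger -> action ->
# variable reward -> investment. All numeric parameters are
# illustrative assumptions, not measurements from any platform.

def run_hook_cycle(user_investment: float, rng: random.Random) -> float:
    """One pass through the cycle; returns the user's updated investment
    (stored value: history, followers, content) which makes the next
    trigger more likely to fire internally."""
    # Trigger: external at first; internal once investment builds a habit.
    trigger_fires = rng.random() < 0.3 + min(user_investment, 0.6)
    if not trigger_fires:
        return user_investment

    # Action, then variable reward: intermittent, slot-machine style.
    rewarded = rng.random() < 0.5
    if rewarded:
        # Investment: the user puts something in, deepening the hook.
        user_investment += 0.1
    return user_investment

rng = random.Random(42)
investment = 0.0
for _ in range(100):
    investment = run_hook_cycle(investment, rng)
print(f"investment after 100 cycles: {investment:.1f}")
```

Note that nothing in the loop ever decreases `investment`. The cycle is a ratchet by design, which is precisely why Eyal's companies do not need to spend on advertising to bring users back.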

The candor is Bernaysian. Eyal does not disguise what the Hook Model does. He describes it as exploiting “a vulnerability in human psychology”—a phrase that Facebook’s founding president, Sean Parker, would later use to describe Facebook itself. Like Bernays, Eyal presents the techniques as morally neutral instruments. Like Bernays, he offers an ethics chapter that reads as an appendix rather than a constraint. And like Bernays, he then published a second book arguing against the very behavior his first book taught people to engineer. Indistractable: How to Control Your Attention and Choose Your Life appeared in 2019—a guide to resisting the addictive products that Hooked taught people to build. Bernays sold the cigarettes and then consulted on public health campaigns. The pattern persists.

His new book, Beyond Belief, scheduled for March 2026, covers how beliefs are formed, held, and changed. The trajectory from engineering habits to engineering beliefs is the trajectory from Bernays to Lippmann, collapsed into a single author’s bibliography.

Robert Cialdini, professor emeritus of psychology at Arizona State University, occupies a parallel position. His 1984 book Influence: The Psychology of Persuasion identified six principles of compliance—reciprocity, commitment and consistency, social proof, authority, liking, and scarcity—to which a seventh, unity, was added in the 2021 revision. These principles are now embedded in the engagement architecture of every major platform, taught in every marketing curriculum, and deployed by every growth team in Silicon Valley. Cialdini is the Dichter of the digital age: the man who translated the psychology of persuasion into a checklist that any practitioner could apply. The checklist is more rigorous than Dichter’s depth interviews, more replicable, and infinitely more scalable. If Eyal is the modern Bernays, Cialdini is the modern Dichter—the researcher who provided the empirical toolkit that the operators deploy.

The Heirs of Bernays: The Platform Confessors

The most damning evidence for the chain’s continuity comes not from critics but from the builders themselves. In November 2017, within weeks of each other, two former Facebook executives delivered public confessions that read like depositions.

Sean Parker, Facebook’s founding president, told an Axios event that the platform was designed from the beginning to answer a single question: “How do we consume as much of your time and conscious attention as possible?” He described the like-and-comment system as a “social-validation feedback loop” that delivers intermittent dopamine rewards—the same variable reinforcement schedule that makes slot machines addictive. Then he said the sentence that belongs in the permanent record of the chain: “The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.”
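The slot-machine comparison is not rhetorical; it is quantitative. The sketch below, with payout schedules I invented for illustration, contrasts a fixed reward schedule with a variable-ratio one at the same average payout rate. The fixed schedule's reward intervals are perfectly predictable; the variable schedule's are not, and that unpredictability is what makes compulsive checking rational from the nervous system's point of view—there is never a safe moment to stop.

```python
import random
import statistics

# Fixed vs. variable-ratio reinforcement at the same average payout
# rate (1 reward per 5 actions). Schedules are illustrative only.

def reward_intervals(schedule, n_rewards: int, rng: random.Random):
    """Count the actions between successive rewards under a schedule."""
    intervals, count = [], 0
    while len(intervals) < n_rewards:
        count += 1
        if schedule(count, rng):
            intervals.append(count)
            count = 0
    return intervals

fixed = lambda count, rng: count == 5             # every 5th action pays out
variable = lambda count, rng: rng.random() < 0.2  # 1-in-5 on average

rng = random.Random(7)
fixed_iv = reward_intervals(fixed, 200, rng)
var_iv = reward_intervals(variable, 200, rng)

# Same mean payout, radically different predictability.
print("fixed spread:", statistics.pstdev(fixed_iv))
print("variable mean:", round(statistics.mean(var_iv), 1),
      "spread:", round(statistics.pstdev(var_iv), 1))
```

The like counter is the variable schedule: sometimes a post gets ten notifications, sometimes none, and the checking behavior persists through the droughts exactly as it does at the casino.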

Days later, Chamath Palihapitiya, Facebook’s former Vice President of User Growth from 2007 to 2011, told a Stanford Graduate School of Business audience that he felt “tremendous guilt” for his role. “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he said. “No civil discourse, no cooperation; misinformation, mistruth.” He revealed that he does not use social media and does not allow his children to use it. He told the Stanford students in the room—future Silicon Valley operators, many of them—that they were “being programmed” and that their Stanford credentials made them more susceptible, not less: “Don’t think, ‘Oh yeah, not me, I’m at Stanford.’ You’re probably the most likely to fall for it.”

These are not critics speaking from outside the system. These are the Bernays figures of the twenty-first century, recanting. Parker designed the dopamine trap. Palihapitiya scaled it globally. Both walked away. Both described the mechanism in clinical terms—variable reinforcement, dopamine feedback loops, exploitation of psychological vulnerability—that Bernays would have recognized instantly, even if the vocabulary had changed. And both admitted the critical fact that separates the modern chain from the historical one: they knew. Bernays could plausibly claim that the long-term consequences of his techniques were unforeseen. Parker and Palihapitiya cannot. They did it, in Parker’s words, “consciously.”

The people who did not recant—who are still building—are harder to name, because they are inside the platforms. The growth engineering teams at Meta, TikTok, YouTube, and X are the institutional successors to Bernays. They do not publish books. They ship code. The engagement-optimization algorithms they build are the automated Bernays: systems that discover, test, and deploy psychological manipulation at a speed no human propagandist could match. They have no public faces. They have quarterly metrics.

The Heirs of Bernays: The Political Operators

Cambridge Analytica collapsed in 2018 after investigations in multiple countries revealed that it had harvested data from eighty-seven million Facebook profiles to target psychologically tailored political advertising during the 2016 U.S. presidential election and the Brexit referendum. Its CEO, Alexander Nix, was suspended after undercover footage captured him discussing the use of honey traps and fake news campaigns. The British Parliamentary investigation concluded that the company’s relentless targeting played “to the fears and the prejudices of people, in order to alter their voting plans” and constituted a “democratic crisis.”

Cambridge Analytica is gone. Its infrastructure is not. The Custom Audiences system at Meta—the exact tool Cambridge Analytica used to upload voter files and match them to platform user profiles—still functions in 2026. The platform’s response to the scandal was not to dismantle the targeting architecture but to restrict third-party API access while keeping the matching algorithm intact for advertisers who use Meta’s own interface. The architecture was not removed. It was internalized.

The next generation of political operators is not a single firm. It is an ecosystem of AI-driven microtargeting capabilities embedded in the platforms themselves. According to an October 2025 investigation by The American Prospect, campaigns preparing for the 2026 U.S. midterm elections are using large language models to generate thousands of unique, personalized political advertisements that are automatically tested and optimized by algorithmic feedback loops.

A 2024 study published in PNAS Nexus found that AI-generated microtargeted political messages can be persuasive, and that targeting by even a single demographic variable is sufficient to yield a measurable advantage over generic messaging. An earlier PNAS study found that computer-based personality judgments derived from as few as three hundred Facebook likes can be more accurate than those made by a person’s own spouse. The bottleneck that limited Cambridge Analytica—human strategists designing and interpreting each campaign—has been removed. The 2026 midterms will be the first major American election in which AI-generated persuasion operates at scale without human editorial intervention at the message level.

The implications extend beyond any single election cycle. The platforms have every financial incentive to make the targeting more effective, not less. More effective targeting means campaigns spend more on advertising. More advertising spending means more platform revenue. The system is self-reinforcing: the better the manipulation works, the more money flows to the manipulators, and the more money they have to invest in making the manipulation better. Cambridge Analytica was a startup with limited capital operating on borrowed API access. The 2026 operations run on the platforms’ own infrastructure, with the platforms’ own optimization engines, funded by the campaigns’ own budgets. The middleman has been eliminated. The platform is the propagandist.

Behind the platforms, Palantir Technologies—the data analytics firm co-founded by Peter Thiel—connects to the chain through government contracts, proximity to the Cambridge Analytica network, and its capacity to integrate disparate data sources into behavioral models. In the United Kingdom, Faculty AI, formerly known as ASI Data Science, reportedly employed several former Cambridge Analytica staff members and provided data infrastructure for the Vote Leave campaign’s targeting operation. The personnel circulate between firms. The techniques transmit. The chain does not require a single company. It requires a labor market of people who know how to build the systems.

The Heirs of Lippmann: The Theorists of Manufactured Reality

Walter Lippmann’s contribution was not operational but conceptual: the argument that the public operates on “pictures in their heads”—manufactured representations that bear only approximate relationships to the world they describe. Lippmann understood that the press does not mirror reality. It constructs the mental environment in which citizens form opinions. The modern Lippmanns are the scholars who have extended this insight into the algorithmic age, mapping how reality is now constructed not by editors but by engagement-optimization systems.

Renée DiResta is the most operationally significant figure in this category. A former CIA intern, Wall Street quantitative trader, venture capitalist, and startup founder, she became the Technical Research Manager at the Stanford Internet Observatory, where she led the investigation into the Russian Internet Research Agency’s multi-year campaign to manipulate American society through social media. She delivered findings to the bipartisan leadership of the Senate Select Committee on Intelligence and advised Congress, the State Department, and dozens of academic and civic organizations. Her phrase “freedom of speech is not freedom of reach”—co-authored with Aza Raskin, the inventor of infinite scroll—captures the Lippmann insight for the platform era: the issue is not who is allowed to speak but whose speech the algorithm chooses to amplify.

In June 2024, DiResta’s contract at Stanford was not renewed. The Stanford Internet Observatory was effectively dismantled after sustained political pressure from Republican lawmakers who accused it of colluding with the government to censor conservative voices. House Judiciary Committee Chairman Jim Jordan posted “Free speech wins again!” on the day the closure was reported. DiResta moved to Georgetown University’s McCourt School of Public Policy. The observatory that studied how reality is manufactured was itself destroyed by a manufactured narrative about censorship. Lippmann would have recognized the mechanism instantly.

Shoshana Zuboff, professor emerita at Harvard Business School, published The Age of Surveillance Capitalism in 2019, coining the term that now defines the business model of the dominant technology platforms. Zuboff’s thesis extends Lippmann into the economic sphere: the platforms do not merely construct “pictures in their heads” but extract behavioral data to build predictive models that increasingly function as behavioral modification instruments. She calls this “instrumentarian power”—the capacity to shape behavior at scale through the architecture of digital environments. Where Lippmann’s manufactured reality was constructed by editors choosing which stories to print, Zuboff’s is constructed by algorithms optimizing for engagement metrics that serve as proxies for neurochemical arousal. The “pictures in their heads” are now personalized, dynamically updated, and selected by machines that have learned what each individual nervous system responds to most intensely.

Tim Wu, professor at Columbia Law School, occupies the space between Lippmann and Creel. His 2016 book The Attention Merchants traced the full lineage from the penny press through broadcast television to the digital platform, documenting how each medium monetized human attention through the same core transaction: free content in exchange for the viewer’s time, resold to advertisers. Wu also coined the term “net neutrality,” served in the Biden White House, and has argued that the attention merchants’ business model is not merely exploitative but structurally incompatible with democratic self-governance. Like Lippmann, he maps the system. Unlike Lippmann, he argues that the system should be dismantled rather than managed by a more enlightened elite.

The Branch Point: Why the Chain Distributed

The historical chain ran through individuals. Pulitzer to Creel to Bernays to Dichter to Fogg. The modern chain runs through systems. Why?

The answer is scale. When Bernays engineered the “Torches of Freedom” campaign in 1929, he needed to coordinate a few dozen debutantes, a photographer, and a sympathetic press. The campaign reached millions, but it required a human orchestrator at every step. When Cambridge Analytica targeted psychologically tailored advertisements during the 2016 election, it needed a team of data scientists, a voter file, and API access to Facebook. The harvested data covered eighty-seven million Americans, but the operation still required human strategists to design the messages and interpret the data.

The 2026 operations require neither. The large language model generates the messages. The platform’s engagement algorithm tests them against live audiences. The feedback loop optimizes in real time. The human operator uploads a voter file and defines a desired outcome. The machine does the rest. The chain has been automated, and automation distributes the function across the system rather than concentrating it in an individual. There is no single Bernays to identify, confront, or hold accountable. There is an architecture.
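The loop just described—generate variants, test against live audiences, shift budget toward whatever performs—is, at its core, a bandit-style optimization. The sketch below is my own minimal illustration of that structure, not any platform's actual system: the message labels stand in for LLM-generated ad variants, the response rates are invented, and the epsilon-greedy rule is one simple stand-in for far more sophisticated production optimizers.

```python
import random

# A toy epsilon-greedy loop: impressions flow toward whichever message
# variant engages a simulated audience most. All names, rates, and the
# 10% exploration rate are illustrative assumptions.

def optimize_messages(true_rates: dict[str, float], rounds: int,
                      rng: random.Random) -> dict[str, int]:
    shown = {m: 0 for m in true_rates}
    clicks = {m: 0 for m in true_rates}
    for _ in range(rounds):
        if rng.random() < 0.1:   # explore: occasionally try a random variant
            msg = rng.choice(list(true_rates))
        else:                    # exploit: show the current best performer
            msg = max(true_rates,
                      key=lambda m: clicks[m] / shown[m] if shown[m] else 1.0)
        shown[msg] += 1
        clicks[msg] += rng.random() < true_rates[msg]  # simulated engagement
    return shown

rng = random.Random(1)
rates = {"fear_appeal": 0.12, "social_proof": 0.04, "policy_detail": 0.01}
impressions = optimize_messages(rates, rounds=5_000, rng=rng)
print(max(impressions, key=impressions.get))
```

Note what the loop optimizes for: the click rate, nothing else. If fear appeals outperform policy detail—and in this toy audience they do by construction—the budget migrates there automatically, with no human ever deciding that fear is the strategy.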

This is the most significant change in the chain’s 126-year history. The techniques that Pulitzer discovered through competition, Bernays formalized through theory, Dichter tested through depth interviews, and Fogg taught through coursework are now embedded in code that runs without human supervision. The persuasion is continuous. The optimization is automatic. The accountability is distributed to the point of diffusion. When a newspaper published a sensational headline, an editor’s name was on the masthead. When Bernays engineered a campaign, his firm took the credit. When Cambridge Analytica targeted voters, its executives could be subpoenaed. When an algorithm selects the content most likely to activate a user’s amygdala and hold their attention for another thirty seconds, no individual made the decision. The system made the decision. The system was designed by thousands of engineers implementing specifications written by hundreds of product managers interpreting strategies set by dozens of executives pursuing a single metric: engagement. The chain is everywhere and nowhere. That is why it persists.

The Watchers and the Watched

A pattern emerges from the map. The operational heirs—the Sunsteins, Eyals, and platform growth teams—are thriving. They have budgets, institutional support, and expanding mandates. The theoretical heirs—the DiRestas, Zuboffs, and Wus—are being marginalized. DiResta’s research lab was shut down under political pressure. Zuboff retired from Harvard. Wu left the White House. The Center for Humane Technology, founded by Tristan Harris and Aza Raskin, continues to operate but has shifted focus from social media harms to AI governance, acknowledging that the social media fight was lost. The Stanford Internet Observatory’s Election Integrity Partnership, which monitored misinformation in real time during the 2020 and 2022 elections, no longer exists.

The asymmetry is structural, not accidental. The operators generate revenue. The theorists generate friction. In a system optimized for engagement, the people who study the system’s harms are a cost center. The people who build the system are a profit center. The market resolves this asymmetry in the obvious direction. Vance Packard published The Hidden Persuaders in 1957 and advertising spending continued to climb. Tim Wu published The Attention Merchants in 2016 and screen time continued to increase. DiResta documented Russian manipulation of American social media and the lab that documented it was defunded. The pattern is consistent across seventy years: exposure does not stop the system. Exposure is metabolized by the system. The alarm is sounded. The architecture absorbs it.

The most recent data point is the most telling. In his August 2025 interview with the Hoover Institution, Sunstein noted that demand for behavioral economists in the private sector is higher than it has ever been. Silicon Valley, Saudi Arabia, Germany, France, Italy—all are competing for professionals trained in the science of behavior modification. The supply of people who know how to manipulate human attention and decision-making is increasing to meet demand. The supply of people who study the consequences of that manipulation is decreasing under political and institutional pressure. The ratio is moving in one direction.

What the Distribution Reveals

The distributed chain has no single point of failure and no single point of accountability. That is its power and its danger. When the chain ran through individuals—Bernays, Dichter, Ogilvy—it could be named, critiqued, and at least theoretically regulated. When the chain runs through algorithms, nudge units, platform architectures, and AI-generated microtargeting systems, the naming becomes harder, the critique more diffuse, and the regulation perpetually one step behind the technology.

But the distribution also reveals something the historical chain obscured: the universality of the target. Bernays targeted consumers. Creel targeted citizens. Dichter targeted the unconscious. Sunstein targets the irrational decision-maker. The algorithm targets the nervous system directly, without needing to theorize about what it is targeting. They are all targeting the same thing. They have always been targeting the same thing. The human organism—evolved to detect threats, crave social validation, seek novelty, avoid cognitive effort, and respond to emotional activation faster than it can evaluate it—is the constant in a 126-year equation. The variables are the delivery systems, the institutional structures, and the language used to describe what is being done.

Bernays called it the engineering of consent. Sunstein calls it choice architecture. Eyal calls it habit formation. Facebook’s growth team called it user engagement. The algorithm calls it nothing at all. It has no name for what it does. It simply measures which stimulus produces the longest session and serves more of it. The removal of language from the process—the replacement of human intention with machine optimization—is the final evolution of the chain. The system no longer needs to justify itself because it no longer needs a justifier. It runs.

The question for the citizen is the same question it has been since 1898, when a headline about the USS Maine sent a nation to war. It is the question Lippmann posed in 1922, when he asked whether the public could distinguish the pictures in their heads from the world those pictures claimed to represent. It is the question Packard posed in 1957, Wu posed in 2016, and Harris posed to the United States Senate in 2019 and 2021. The question has never been answered.

Who decides what you are afraid of?

Because someone—or something—always does. And the answer, for the first time in the chain’s 126-year history, may be: nobody. Not in the sense that nobody is responsible, but in the sense that the decision is now made by a system so distributed that responsibility dissolves before it can be assigned. Bernays could be confronted. Creel could be disbanded. Dichter could be exposed. Even Cambridge Analytica could be shut down. But the engagement algorithm cannot be confronted because it has no address, no office, no public face. It is not a person. It is not even a single program. It is a property of the architecture—a behavioral tendency built into the infrastructure of every platform that monetizes attention. To dismantle it would require dismantling the business model of the information economy. No government has attempted this. No regulator has proposed it. The chain has achieved what no individual link ever could: it has become the environment.

The chain has names. The names have changed. The function has not. And the heirs are not hiding. They are publishing books, advising governments, shipping code, and optimizing for engagement. They are doing it in the open. Just like Bernays did.

The difference is that Bernays worked alone, and the distributed chain works everywhere, all the time, on every screen, in every pocket. It has no off switch because it was never designed to have one. It has no conscience because conscience is not a metric that can be optimized. And it has no natural end because the nervous system it targets will not evolve fast enough to outrun a system that adapts in real time.

The only asymmetric advantage the citizen retains is the one the chain cannot automate: the decision to look up from the screen and recognize that what is being done to you has a history, that the history has been documented, and that the documentation is itself an act of resistance. Not because knowledge stops the system. It does not. Packard proved that in 1957. But because knowledge is the precondition for every other form of resistance that might.

The chain is distributed. The witness does not have to be.

RESONANCE

Sources, evidence, and the evidentiary chain

Cialdini RB (1984; rev. 2021). Influence: The Psychology of Persuasion. Harper Business. Summary: Identifies six (now seven) principles of compliance—reciprocity, commitment, social proof, authority, liking, scarcity, unity—that are embedded in the engagement architecture of every major platform and taught in every marketing curriculum.

Confessore N (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html Summary: Comprehensive reporting on Cambridge Analytica’s harvest of 87 million Facebook profiles for psychologically targeted political advertising, including the British Parliamentary finding that it constituted a “democratic crisis.”

DiResta R (2024). Invisible Rulers: The People Who Turn Lies into Reality. Crown. Summary: Maps the mechanics of modern information warfare, narrative manipulation across social networks, and the role of algorithmic amplification in constructing manufactured reality—extending Lippmann’s framework to the platform age.

Eyal N (2014). Hooked: How to Build Habit-Forming Products. Portfolio/Penguin. Summary: Silicon Valley’s operational manual for engineering compulsive user behavior. The Hook Model—trigger, action, variable reward, investment—is the Bernays playbook translated into product design. Over one million copies sold.

Eyal N (2019). Indistractable: How to Control Your Attention and Choose Your Life. BenBella Books. Summary: The same author who taught companies to build addictive products then wrote the guide to resisting them—replicating Bernays’s pattern of selling both the cigarettes and the filter.

Fogg BJ (2003). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann. Summary: Foundational textbook of captology. Fogg later rebranded the Stanford Persuasive Technology Lab as the Behavior Design Lab—mirroring Bernays’s renaming of propaganda as public relations when the first term acquired negative connotation.

Halpern D (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. WH Allen. Summary: Account of the UK Behavioural Insights Team’s founding in 2010, its methods, and its expansion from British Cabinet Office to global advisory practice. The institutional Creel of the behavioral age.

Lewis P (2017). “Our Minds Can Be Hijacked”: The Tech Insiders Who Fear a Smartphone Dystopia. The Guardian. https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia Summary: Profiles Tristan Harris, Aza Raskin, and other former tech insiders who describe the persuasive design techniques used to exploit human psychology, confirming that the mechanisms were understood consciously by their creators.

Palihapitiya C (2017). Money as an Instrument of Change. Stanford Graduate School of Business, November 2017. Summary: The recorded public confession in which Facebook’s former VP of User Growth stated: “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works.” He does not use social media and does not allow his children to use it.

Parker S (2017). Interview with Mike Allen. Axios, November 9, 2017. https://www.axios.com/2017/12/15/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792 Summary: Facebook’s founding president stating the platform was designed to exploit “a vulnerability in human psychology” and that the creators “understood this consciously. And we did it anyway.”

Sanders NE, Schneier B (2025). AI Is Changing How Politics Is Practiced in America. The American Prospect. https://prospect.org/2025/10/10/ai-artificial-intelligence-campaigns-midterms/ Summary: Investigation of AI-driven political advertising in the 2026 midterm cycle, documenting the use of large language models to generate personalized campaign messaging at scale without human editorial intervention.

Thaler RH, Sunstein CR (2008; rev. 2021). Nudge: Improving Decisions About Health, Wealth, and Happiness. Penguin. Summary: The foundational text of choice architecture and libertarian paternalism, generating over 400 nudge units in governments worldwide. Sunstein served as OIRA Administrator under Obama and as Senior Counselor at DHS under Biden.

Tappin BM, et al. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus 3(2). doi:10.1093/pnasnexus/pgae035. Summary: Peer-reviewed study finding that AI-generated microtargeted political messages can be persuasive, and that even a single targeting variable yields a measurable advantage over generic messaging.

Zuboff S (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. Summary: Coined “surveillance capitalism” and “instrumentarian power”—the capacity to shape behavior at scale through digital architecture. Extends Lippmann’s manufactured reality into the economic sphere of behavioral futures markets.