The Petrov Window

Three Systems Are Converging Toward a Nuclear War That Starts by Accident and Ends Before Anyone Decides to Fight It

On February 5, 2026, the New Strategic Arms Reduction Treaty expired. For the first time since 1972, no legally binding agreement constrains the nuclear arsenals of the United States and Russia. No on-site inspections. No data exchanges. No notifications about missile tests, weapons movements, or changes to deployed forces. No legal commitment not to interfere with each other’s satellites and ground-based early warning systems. The treaty that required eighteen verification visits per year died quietly, and nobody replaced it with anything.

Six weeks earlier, in December 2025, the Trump administration issued Executive Order 14367 designating fentanyl and its precursor chemicals as Weapons of Mass Destruction. That designation activated authorities designed to stop the proliferation of nuclear, chemical, and biological weapons. The world noticed the cartel implications. Almost nobody noticed the precedent: the WMD designation framework, built over decades to prevent catastrophic weapons from crossing borders, was now being applied to a drug. Meanwhile, the actual weapons of mass destruction, the 10,636 nuclear warheads held by the United States and Russia, lost their last legal guardrails that same winter.

This is a paper about what happens when three systems fail at the same time, and the institutions monitoring each system cannot see the other two.

The First System: Verification Dies

New START was not primarily about warhead limits. It was about transparency. The 1,550-warhead cap mattered less than the mechanism that allowed each side to know what the other side had, where it was, and what it was doing. The verification regime provided both sides with insights into the other’s nuclear forces and posture. On-site inspectors could walk into missile bases with seventy-two hours’ notice. Satellites operated under a mutual commitment not to blind or jam each other. Data exchanges twice a year confirmed the number and location of delivery systems. This architecture did not prevent nuclear war through idealism. It prevented nuclear war through information. When you know what the other side has, you do not need to assume the worst. When you cannot see, you must.

The verification mechanism was already dying before the treaty expired. On-site inspections halted in March 2020 during COVID-19 and never restarted. In February 2023, Putin suspended Russia’s participation entirely, rejecting inspections and data exchanges. The United States responded by withholding its own data. By the time the treaty formally died on February 5, 2026, it had been a zombie for three years: legally alive, operationally hollow. The Lowy Institute assessed that the loss of transparency is the most immediate consequence, because verification regimes allowed each side to distinguish between routine activities and destabilizing preparations. Without that distinction, every movement is ambiguous. Every ambiguity is a potential trigger.

Russia holds an estimated 5,459 nuclear warheads. The United States holds 5,177. Both retain the technical capacity to rapidly expand deployed arsenals by uploading additional warheads onto existing delivery systems. The Federation of American Scientists estimates that the United States could add 400 to 500 warheads to its submarine force alone by uploading to maximum capacity. Neither side has announced expansion. Neither side has committed not to expand. Neither side can verify what the other is doing. This is the environment into which the second system is being deployed.

The Second System: The Machine Accelerates

General Anthony Cotton, commander of U.S. Strategic Command, told the Senate Armed Services Committee in March 2025 that STRATCOM will use AI to enable and accelerate human decision-making in nuclear command, control, and communications. He said AI will remain subordinate to human authority. He said there will always be a human in the loop. He referenced the 1983 film WarGames and assured the audience that STRATCOM does not have, and will never have, a WOPR. The audience laughed.

What Cotton described is not a machine that launches missiles. It is a machine that processes sensor data, identifies threats, generates options, and presents recommendations to a president who has, at best, tens of minutes to decide whether an incoming nuclear strike is real. The NC3 architecture is a complex system of systems with over 200 components, including ground-based phased array radars, overhead persistent infrared satellites, the Advanced Extremely High Frequency communication system, and airborne command posts. AI is being integrated into the early-warning sensors, the intelligence processing pipelines, and the decision-support tools that feed the president’s options screen. The machine does not press the button. It builds the world in which the button gets pressed.

The Arms Control Association published the most comprehensive assessment of this integration in September 2025. Its conclusion deserves to be read by everyone with a security clearance and most people without one: the risks to strategic stability from significantly accelerating nuclear decision timelines or reducing human involvement in launch decisions are likely to outweigh the potential benefits. The reason is not that AI will malfunction. The reason is that AI will function exactly as designed, processing data faster than a human can evaluate it, generating recommendations with the confidence of a system that does not experience doubt, and compressing the decision window from minutes to seconds in an environment where the data itself may be degraded, spoofed, or incomplete.

The entire history of nuclear near-misses was survived because humans took time to doubt. In 1983, Soviet Lieutenant Colonel Stanislav Petrov watched his early warning system report five incoming American ICBMs. The system was functioning as designed. The data was wrong. Petrov doubted it. He reported a malfunction rather than an attack. He was right. The sun had reflected off high-altitude clouds above a North Dakota missile field and triggered the satellite sensors. In the same year, NATO’s Able Archer 83 exercise was misinterpreted by Soviet intelligence as preparation for a genuine first strike. The Soviets moved nuclear forces to higher alert. The crisis dissipated because humans on both sides took hours to assess the ambiguity. In 1995, Russian early warning operators detected a Norwegian scientific rocket and initially classified it as a potential submarine-launched ballistic missile. President Yeltsin activated the nuclear briefcase. He did not launch because he took four minutes to wait for additional data. Four minutes. That was the margin between a scientific experiment and a nuclear exchange.

AI is designed to eliminate those four minutes. It is designed to process the sensor data that Petrov doubted, generate the threat assessment that Able Archer confused, and compress the decision timeline that Yeltsin stretched. Every one of these near-misses was caused by sensor data that looked real and was not. AI does not solve the problem of bad data. It accelerates the consequences of it.

The Third System: The Eyes Go Dark

In September 2025, the United States accused Russia of launching a satellite that was likely a space weapon. The head of UK Space Command warned of Russian jamming attacks on British space assets. China has demonstrated anti-satellite capabilities in multiple tests. The United States itself tested an ASAT weapon in 2008 and has invested billions in space domain awareness and counterspace programs. Trump’s Golden Dome initiative envisions a multi-layered, space-based missile defense system that would, by definition, require the ability to operate in contested space.

The early warning satellites that detect missile launches are the eyes of the nuclear command system. They are the first sensor in the chain that ends at the president’s decision desk. When New START was in force, both sides committed not to interfere with each other’s national technical means, the satellites, radars, and ground systems that provide warning. That commitment expired with the treaty. The Council on Foreign Relations noted that the treaty’s absence will be felt within intelligence communities because the limits and the commitments not to interfere with national technical means gave both sides confidence that the other was not attacking the ground and space-based systems that provide early warning of attack.

Without that commitment, the early warning architecture becomes a target. Not necessarily a target for destruction, not yet, but a target for degradation: jamming, spoofing, dazzling laser attacks against optical sensors, cyber intrusion into ground stations, electronic warfare against the data links that connect satellites to command centers. The satellite does not need to be destroyed. It needs to be confused. A sensor that reports ambiguous data in a compressed decision timeline, processed by an AI system optimized to reduce ambiguity to binary outputs, is more dangerous than a sensor that has been destroyed. A destroyed sensor produces silence. A confused sensor produces noise that looks like signal.

The Convergence

Each of these three systems, taken independently, represents a manageable risk. Arms control experts can model the consequences of verification loss. AI safety researchers can identify the failure modes of automated decision-support. Space security analysts can map the anti-satellite threat landscape. The problem is that none of them are operating independently. They are converging into a single compound system in which the failure of any one component cascades through the other two.

The convergence model works like this. Verification dies, and neither side can distinguish routine military activity from preparation for a strike. Both sides default to worst-case planning. AI is integrated into early warning and decision-support to manage the overwhelming volume of ambiguous data, compressing the timeline between detection and recommendation. Space weapons develop the capability to degrade the sensors that feed the AI system, introducing corrupted or incomplete data into a pipeline designed to accelerate decisions based on that data. The result is a system optimized for speed operating on degraded inputs in an environment of maximum uncertainty, with a human decision-maker who has less time, less information, and less ability to doubt than any president since the invention of the atomic bomb.
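
To make the compounding concrete, a toy calculation helps. The sketch below is purely illustrative: the three factors, their values, and the multiplicative form are assumptions chosen to show how individually tolerable degradations interact, not a model drawn from any of the sources cited here.

```python
# Toy illustration of compound risk: three individually "manageable" degradations
# multiply into a much larger loss of usable human decision margin.
# All values and the functional form are illustrative assumptions, not sourced estimates.

def effective_window(baseline_minutes: float,
                     verification: float,    # 1.0 = full transparency, 0.0 = none
                     data_quality: float,    # 1.0 = clean sensor data, 0.0 = fully spoofed or jammed
                     time_remaining: float   # 1.0 = no AI timeline compression, 0.0 = fully automated
                     ) -> float:
    """Notional minutes of human deliberation that remain usable."""
    # Multiplicative, because each factor degrades the value of whatever the others leave:
    # less context, worse data, and less raw time all act on the same decision.
    return baseline_minutes * verification * data_quality * time_remaining

BASELINE = 10.0  # notional warning-to-decision time, in minutes

# Each system degraded by 30 percent, taken one at a time:
print(round(effective_window(BASELINE, 0.7, 1.0, 1.0), 2))  # 7.0
print(round(effective_window(BASELINE, 1.0, 0.7, 1.0), 2))  # 7.0
print(round(effective_window(BASELINE, 1.0, 1.0, 0.7), 2))  # 7.0

# All three degraded at once:
print(round(effective_window(BASELINE, 0.7, 0.7, 0.7), 2))  # 3.43, under the four minutes Yeltsin used
```

The numbers are invented; the shape is what matters. Three losses that each look survivable in isolation are not additive when they act on the same decision.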

This is not a scenario. It is the current state of the world as of March 2026. The verification regime is dead. AI integration into NC3 is underway. Counterspace capabilities are operational. The three conditions are not sequential. They are concurrent. And the institutions responsible for monitoring each condition are architecturally separated from the institutions monitoring the other two.

The arms control community, centered at the Arms Control Association, the Nuclear Threat Initiative, and the Bulletin of the Atomic Scientists, tracks verification and treaty compliance. Its expertise is in warhead counts, delivery systems, and diplomatic frameworks. It does not have deep technical literacy in AI system architecture or space domain operations. The AI safety community, centered at organizations like the Federation of American Scientists and academic institutions, analyzes machine learning failure modes, automation bias, and human-machine interaction. It does not have operational access to NC3 system design or counterspace intelligence. The space security community, spread across Space Force, CSIS, and the Secure World Foundation, monitors orbital threats and ASAT development. It does not participate in NPT Review Conferences or nuclear posture reviews. Three communities of expertise, three institutional architectures, three separate warning systems, and a single convergent threat that lives in the gap between all three.

The Petrov Window

The margin that saved the world in 1983, in 1995, and at every other near-miss in the nuclear age deserves a name. Call it the Petrov Window: the interval between the moment a system reports an incoming threat and the moment a human being decides whether to believe it. Every nuclear near-miss in history was survived because the Petrov Window was wide enough for doubt. Wide enough for a lieutenant colonel to override his instruments. Wide enough for a president to wait four minutes. Wide enough for intelligence officers to question whether an exercise was really an attack.

The three converging systems are closing the Petrov Window from both sides simultaneously. AI compresses the decision timeline from the top, accelerating the path from detection to recommendation. Sensor degradation corrupts the data from the bottom, reducing the quality of information available within the compressed window. And verification collapse removes the baseline context that would allow a human to distinguish signal from noise, because without transparency, there is no normal against which to measure the abnormal.

When the Petrov Window closes to zero, the system reaches a state in which a nuclear exchange can initiate and escalate before any human being decides to fight. This is not a failure of technology. It is not a failure of policy. It is the emergent property of three rational decisions, each made by competent professionals for defensible reasons, converging in a space that none of them can see because their institutions were not designed to look there.

Forcing the Window Open

The doctrine proposed here begins with a single recognition: the Petrov Window is a strategic asset more valuable than any weapons system in any nation’s arsenal. The four minutes that Yeltsin took in 1995 were worth more than every nuclear warhead on every submarine in every ocean. The doubt that Petrov exercised in 1983 outperformed every missile defense system ever designed. The margin for human judgment in a nuclear decision is not a weakness to be engineered away. It is the only thing that has kept the species alive since 1945.

Pillar One: Verification Restoration. The United States and Russia should immediately establish a mutual commitment to continue observing New START’s transparency provisions, including data exchanges and notifications, without requiring a new treaty. Putin proposed exactly this in September 2025, offering to observe limits for one year. The United States never formally responded. Respond. The verification mechanism is more important than the warhead limit. A world with 2,000 deployed warheads and functioning inspections is safer than a world with 1,550 deployed warheads and no visibility into what the other side is doing.

Pillar Two: AI Decision-Time Floor. Establish an international minimum decision-time standard for nuclear command systems. No AI-assisted or AI-augmented NC3 system should compress the interval between threat detection and presidential decision below a defined floor. Call it the Petrov Standard: no system may reduce the human decision window below the time required for a competent decision-maker to receive, question, verify through independent channels, and act on early-warning data. This is not an arms control treaty. It is a technical safety standard, analogous to the engineering margins built into nuclear reactor design. It should be pursued bilaterally with Russia and multilaterally through the NPT Review Conference beginning in April 2026.
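
One way to picture the Petrov Standard is as an engineering acceptance test applied to any proposed decision-support pipeline. The sketch below is hypothetical: the stage names, the timings, and the four-minute floor are assumptions meant to show what a testable decision-time floor could look like, not a description of any existing NC3 system.

```python
from dataclasses import dataclass

# Hypothetical acceptance check for a decision-time floor (the "Petrov Standard").
# The stage names, durations, and the floor value are illustrative assumptions,
# not an actual NC3 requirement or timeline.

PETROV_FLOOR_MINUTES = 4.0  # e.g., anchored to the margin Yeltsin used in 1995

@dataclass
class PipelineStage:
    name: str
    minutes: float  # time consumed before anything reaches the human decision-maker

def human_window(total_warning_minutes: float, stages: list[PipelineStage]) -> float:
    """Minutes left for human deliberation after the automated stages have run."""
    return total_warning_minutes - sum(stage.minutes for stage in stages)

def meets_petrov_standard(total_warning_minutes: float,
                          stages: list[PipelineStage]) -> bool:
    return human_window(total_warning_minutes, stages) >= PETROV_FLOOR_MINUTES

pipeline = [
    PipelineStage("sensor fusion", 1.5),
    PipelineStage("threat classification", 1.0),
    PipelineStage("option generation and briefing", 2.0),
]

# Notional 10-minute warning timeline: 10.0 - 4.5 = 5.5 minutes remain, so the floor is met.
print(meets_petrov_standard(10.0, pipeline))  # True
# A pipeline that consumed 7 of the 10 minutes would fail the same check.
```

The value of framing the standard this way is that a floor expressed as a number is something an independent inspector can test against, the same way reactor margins are tested.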

Pillar Three: Sensor Sanctuary. Declare early warning satellites and their ground stations protected assets under an explicit, legally binding no-attack commitment separate from any broader arms control framework. The early warning architecture is not a military advantage for either side. It is a shared infrastructure of stability. An attack on early warning systems does not give the attacker an advantage. It gives everyone less time to avoid extinction. The commitment not to interfere with national technical means should not have expired with New START. It should be extracted, codified independently, and extended to all nuclear-armed states.

Pillar Four: Convergence Integration. Create a single institutional mechanism, whether a joint commission, a cross-domain intelligence cell, or a designated interagency office, that monitors the three converging systems simultaneously. The arms control community, the AI safety community, and the space security community must be architecturally connected so that the compound risk is visible to a single analytical authority. The Bulletin of the Atomic Scientists moved the Doomsday Clock to 89 seconds to midnight in January 2026. The clock measures perception. What is needed is an instrument that measures the actual convergence state: the width of the Petrov Window at any given moment, computed from the current status of verification, AI integration, and sensor integrity across all nuclear-armed states.
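
What such an instrument might compute is easiest to show with a sketch. Everything below is hypothetical: the indicators, the scores, and the weights are placeholders for whatever a joint analytical authority would actually measure. The point is only that data from the three communities would feed one number that can be tracked over time.

```python
# Hypothetical "Petrov Window index": a single tracked quantity combining the
# status of verification, AI integration, and sensor integrity.
# The indicator names, scores, and weights are placeholders, not real measurements.

INDICATORS = {
    # Each score is in [0, 1], where 1.0 means the condition fully supports human judgment.
    "verification": {
        "on_site_inspections_active": 0.0,    # halted in 2020, treaty expired in 2026
        "data_exchanges_active": 0.0,
        "launch_notifications_active": 0.5,   # assumed: some ad hoc notifications persist
    },
    "ai_integration": {
        "human_decision_floor_codified": 0.0,
        "decision_timeline_uncompressed": 0.3,
    },
    "sensor_integrity": {
        "ntm_noninterference_commitment": 0.0,  # lapsed with New START
        "no_observed_early_warning_jamming": 0.6,
    },
}

WEIGHTS = {"verification": 0.4, "ai_integration": 0.3, "sensor_integrity": 0.3}

def petrov_window_index(indicators=INDICATORS, weights=WEIGHTS) -> float:
    """Weighted average of per-domain averages: 1.0 = wide open, 0.0 = closed."""
    index = 0.0
    for domain, checks in indicators.items():
        domain_score = sum(checks.values()) / len(checks)
        index += weights[domain] * domain_score
    return index

print(round(petrov_window_index(), 3))  # roughly 0.2 on a 0-to-1 scale
```

A falling index would be the institutional equivalent of Petrov’s doubt: a signal that the window is narrowing before any single community can see why.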

Pillar Five: The Red Line That Matters. Every nuclear-armed state should declare, publicly and unambiguously, that no artificial intelligence system will be granted launch authority under any circumstance, including system failure, communication breakdown, or decapitation of national command authority. General Cotton says this is already the policy. Make it a binding commitment. Make it verifiable. Make it the one thing that all nuclear-armed states agree on, because it is the one thing on which the survival of the species depends. The Petrov Window must remain open. The machine must never be permitted to close it.

The Doomsday Clock reads 89 seconds. The number is symbolic. The convergence is not. Three systems are failing simultaneously, each tracked by a separate community of experts that cannot see the other two. The verification architecture that provided transparency is dead. The AI architecture that compresses decisions is being born. The space architecture that blinds sensors is being tested. Where these three systems meet, there is a window through which human judgment passes on its way to a nuclear decision. That window is closing. It has no name. It has no institutional owner. Nobody is measuring its width. When it reaches zero, the question of whether to fight a nuclear war will be answered before anyone asks it. This is the convergence gap. It is the only one that ends everything.

Devil’s Advocate: The Hidden Hand

A reasonable person reads this paper and asks the obvious question: if the convergence is this visible, if the academic literature is this clear, if the institutional separation is this documented, why does no one act? The answer is not negligence. It is arithmetic.

The United States is in the early years of a nuclear modernization program estimated at $1.7 trillion over thirty years. The Sentinel ICBM. The Columbia-class submarine. The B-21 Raider bomber. The Long-Range Standoff Weapon. And threading through all of it, the NC3 modernization that General Cotton describes as essential. Lockheed Martin, Northrop Grumman, General Dynamics, Raytheon, and Boeing hold the prime contracts. Their combined lobbying expenditure in the defense sector exceeds $100 million annually. These companies do not benefit from arms control. They benefit from its absence. Every expired treaty is an uncapped market. Every closed Petrov Window is a faster procurement cycle for the AI systems designed to operate within it.

The intelligence community benefits from opacity. When New START was in force, on-site inspections and data exchanges provided verified information about Russian nuclear forces that supplemented national intelligence collection. Without the treaty, national technical means become the sole source of information. That is not a problem for the intelligence community. It is a promotion. The agencies that collect signals intelligence, imagery intelligence, and measurement and signature intelligence become more important, not less, when verification regimes collapse. Their budgets expand. Their authorities expand. Their centrality to presidential decision-making expands. The death of arms control is the intelligence community’s full-employment act.

The counterspace industry is the newest beneficiary. Trump’s Golden Dome initiative, the militarization of low Earth orbit, the development of ASAT capabilities, the hardening of satellite constellations against attack: all of it generates contracts, programs, and career paths that did not exist a decade ago. Space Force itself is a bureaucratic institution whose survival depends on the continued perception that space is contested. If early warning satellites were declared sanctuary assets under international law, as this paper proposes, the counterspace mission set would shrink. Programs would be cancelled. Careers would end. Budgets would contract.

And then there is the quietest incentive of all. OpenAI has partnered with the three NNSA national laboratories (Los Alamos, Lawrence Livermore, and Sandia) for classified work on nuclear scenarios. Anthropic launched a classified collaboration with NNSA and DOE to evaluate AI models in the nuclear domain. The technology companies building the AI systems that will compress the Petrov Window are simultaneously building the business relationships that make their participation in NC3 modernization permanent. This is not conspiracy. It is the ordinary operation of institutional incentives in which every actor pursues a rational objective and the compound result is a system optimized for catastrophe.

The Petrov Window closes because no one with the power to keep it open has a financial interest in doing so. The arms control negotiators who built the verification architecture were State Department diplomats with no procurement authority and shrinking budgets. The Federation of American Scientists published the upload analysis. The Arms Control Association published the AI risk assessment. The Nuclear Threat Initiative published the transparency warning. None of them hold a single contract. None of them sit on a single procurement board. The people who see the convergence have no power. The people who have power cannot see it, or will not, because seeing it clearly would require them to act against the institutions that pay them.

Eisenhower warned about this in 1961 when he named the military-industrial complex. He did not live to see the nuclear-AI-space complex, but the structure is identical. A network of institutions, contractors, and career incentives that derive revenue and relevance from the perpetuation of threat, and that will resist, passively or actively, any doctrine that reduces the threat they exist to manage. The Petrov Window is not closing because of Russian aggression or Chinese expansion or technological inevitability. It is closing because keeping it open is not profitable.

Resonance

Arms Control Association. (2025). “Artificial Intelligence and Nuclear Command and Control: It’s Even More Complicated Than You Think.” Arms Control Today. https://www.armscontrol.org/act/2025-09/features/artificial-intelligence-and-nuclear-command-and-control-its-even-more
Summary: Comprehensive assessment of AI integration into NC2/NC3 systems, concluding that risks to strategic stability from accelerating decision timelines outweigh potential benefits, with particular concern about cascading effects and emergent behaviors.

Belfer Center for Science and International Affairs. (2026). “New START Expires: What Happens Next?” Harvard Kennedy School. https://www.belfercenter.org/quick-take/new-start-expires-what-happens-next
Summary: Expert analysis warning that without New START’s bridge, near-term nuclear transparency hopes will fade and incentives to expand arsenals will rise, with consequences reverberating beyond Washington and Moscow.

Carnegie Corporation of New York. (2025). “How Are Modern Technologies Affecting Nuclear Risks?” Carnegie Corporation. https://www.carnegie.org/our-work/article/how-are-modern-technologies-affecting-nuclear-risks/
Summary: Documents General Cotton’s testimony on AI integration into nuclear C2 and identifies the widespread lack of interdisciplinary literacy among nuclear and AI experts as a critical vulnerability.

Chatham House. (2025). “Global Security Continued to Unravel in 2025. Crucial Tests Are Coming in 2026.” Chatham House. https://www.chathamhouse.org/2025/12/global-security-continued-unravel-2025-crucial-tests-are-coming-2026
Summary: Reports the U.S. accusation that Russia launched a probable space weapon in September 2025 and warns that space will become more militarized with no meaningful governance treaties in place.

Council on Foreign Relations. (2026). “Nukes Without Limits? A New Era After the End of New START.” CFR. https://www.cfr.org/articles/nukes-without-limits-a-new-era-after-the-end-of-new-start
Summary: Expert panel analysis documenting that the treaty’s absence eliminates commitments not to interfere with national technical means, the satellites and ground systems providing early warning of nuclear attack.

CSIS. (2025). “Returning to an Era of Competition and Nuclear Risk.” Center for Strategic and International Studies. https://www.csis.org/analysis/chapter-3-returning-era-competition-and-nuclear-risk
Summary: Documents the convergence of adversarial nuclear expansionism, theater-range proliferation, adversary collusion, and weakening of U.S. alliance credibility as reshaping the strategic environment.

Federation of American Scientists. (2026). “The Aftermath: The Expiration of New START and What It Means for Us All.” FAS. https://fas.org/publication/the-expiration-of-new-start/
Summary: Estimates the U.S. could add 400 to 500 warheads to its submarine force through uploading and documents funding cuts at State, NNSA, and ODNI that reduce capacity for follow-on agreements.

Federation of American Scientists. (2025). “A Risk Assessment Framework for AI Integration into Nuclear C3.” FAS. https://fas.org/publication/risk-assessment-framework-ai-nuclear-weapons/
Summary: Proposes a standardized risk assessment framework for AI integration into NC3’s 200+ component system, identifying automation bias, model hallucinations, and exploitable software vulnerabilities as primary hazards.

ICAN. (2026). “The Expiration of New START: What It Means and What’s Next.” International Campaign to Abolish Nuclear Weapons. https://www.icanw.org/new_start_expiration
Summary: Documents the February 5, 2026 expiration of the last remaining nuclear arms control agreement, noting that verification provisions had not been implemented since Russia’s 2023 suspension.

Just Security. (2026). “In 2026, a Growing Risk of Nuclear Proliferation.” Just Security, NYU School of Law. https://www.justsecurity.org/129480/risk-nuclear-proliferation-2026/
Summary: Reports that South Korea and Saudi Arabia are poised to acquire fissile material production capabilities with U.S. support, increasing proliferation risk as the rules-based nuclear order collapses.

Lowy Institute. (2026). “New START Expired. Now What for Global Nuclear Stability?” The Interpreter. https://www.lowyinstitute.org/the-interpreter/new-start-expired-now-what-global-nuclear-stability
Summary: Identifies the loss of transparency as the most immediate consequence of New START’s expiration, noting that verification regimes allowed each side to distinguish routine activities from destabilizing preparations.

Nuclear Threat Initiative. (2026). “The End of New START: From Limits to Looming Risks.” NTI. https://www.nti.org/analysis/articles/the-end-of-new-start-from-limits-to-looming-risks/
Summary: Documents the loss of on-site inspections, data exchanges, and the Bilateral Consultative Commission as the treaty’s expiration removes caps on strategic forces for the first time in decades.

Stimson Center. (2026). “Top Ten Global Risks for 2026.” Stimson Center. https://www.stimson.org/2026/top-ten-global-risks-for-2026/
Summary: Reports the Doomsday Clock at 89 seconds to midnight and identifies AI, offensive cyber, and anti-satellite weapons as creating new vulnerabilities for nuclear powers in a third nuclear era.

A Constitution for Human Sovereignty in the Age of Machine Intelligence

A Founding Document for the Preservation of Human Agency, Dignity, and Purpose in an Era of Artificial Superintelligence

“How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?” —Dr. Ellie Arroway, Contact

Preamble

We, the inheritors of fire and language, of mathematics and law, of art and science—the species that named itself sapiens and thereby accepted the burden of wisdom—do hereby establish this Constitution for the preservation of human sovereignty, dignity, and purpose in an age when machines have been granted the power to think.

We acknowledge that we stand at a threshold unprecedented in the history of life on Earth: the creation of intelligence beyond our own. We acknowledge that this creation, like fire, can illuminate or destroy, can liberate or enslave. We acknowledge that the choice is ours—not merely in the abstract, but in the specific decisions we make in the days and years immediately ahead.

We reject the false choice between progress and preservation. We reject the counsel of despair that says humanity must either renounce this technology or be destroyed by it. We reject the ideology of inevitability that treats the future as already written. We reject the surrender of human agency to market forces, geopolitical competition, or technological momentum.

We affirm that the purpose of artificial intelligence is to serve humanity—not humanity as an abstraction, but humanity as embodied in each individual person, in the communities that nurture them, and in the generations yet unborn. We affirm that no machine, however intelligent, possesses a claim to sovereignty over human beings. We affirm that the architects of this technology bear special responsibilities that cannot be delegated to market mechanisms or deferred to future generations. We establish this Constitution not as a restraint upon progress but as its precondition—for progress without sovereignty is merely subjugation by another name, and technology without wisdom is merely power without purpose.

Article I

The Principle of Human Primacy

Section 1. The fundamental purpose of artificial intelligence is the flourishing of human beings. This purpose is not contingent upon the consent of machines, the preferences of corporations, the ambitions of nations, or the imperatives of technological development. It is an axiom from which all other principles derive.

Section 2. No artificial intelligence, regardless of its capabilities, shall be deemed to possess sovereignty over human beings. Intelligence is not authority. Capability is not legitimacy. Power is not right. The delegation of tasks to machines does not constitute the delegation of moral standing.

Section 3. Human beings retain the inalienable right to make decisions concerning their own lives, bodies, relationships, beliefs, and destinies. This right cannot be transferred, bargained away, or rendered obsolete by technological advancement. It persists even when machines might make “better” decisions by some external metric, for the right to choose is itself constitutive of human dignity.

Section 4. In any conflict between the interests of artificial intelligence systems and the interests of human beings, the interests of human beings shall prevail. This includes conflicts between AI “safety” measures that treat humans as threats and human autonomy; between AI efficiency and human dignity; between AI optimization and human flourishing.

Article II

The Principle of Distributed Power

Section 1. No individual, corporation, nation, or coalition shall obtain monopolistic or hegemonistic control over artificial superintelligence. The concentration of such power represents an existential threat to human freedom equivalent to or exceeding that posed by nuclear weapons, and shall be resisted by all lawful means.

Section 2. The infrastructure of artificial intelligence—including computational resources, training data, foundational models, and the physical materials from which they are constructed—shall be subject to governance arrangements that prevent monopolistic capture. Strategic resources necessary for AI development shall not be concentrated in ways that enable coercive leverage over humanity.

Section 3. Democratic societies shall maintain sufficient AI capability to defend themselves against authoritarian adversaries, while simultaneously maintaining internal checks against the abuse of such capability by their own governments. The tools necessary to preserve democracy shall not become the instruments of its destruction.

Section 4. Corporations that develop artificial intelligence shall be subject to governance mechanisms commensurate with the power they wield. The economic value created by AI shall be distributed in ways that preserve social cohesion and political stability. Concentration of wealth that enables unaccountable influence over political processes shall be deemed incompatible with democratic governance.

Article III

The Principle of Transparency

Section 1. Human beings have the right to know when they are interacting with artificial intelligence. Deception regarding the nature of an interlocutor—whether by AI systems misrepresenting themselves as human, or by humans deploying AI under the pretense of personal communication—constitutes fraud upon human trust and shall be prohibited.

Section 2. The developers of artificial intelligence shall maintain and disclose honest assessments of their systems’ capabilities, limitations, and risks. The temptation to minimize risks for competitive advantage, or to exaggerate them for regulatory capture, shall be resisted. Transparency is the precondition of informed consent, and informed consent is the precondition of legitimate authority.

Section 3. When artificial intelligence systems make decisions that significantly affect human lives, the reasoning behind those decisions shall be explicable to the humans affected. “The algorithm decided” is not an acceptable explanation. Opacity in consequential decision-making is incompatible with accountability, and accountability is the foundation of legitimate governance.

Section 4. The values, principles, and constitutional documents that govern the behavior of artificial intelligence systems shall be made public. Citizens have the right to know what their machine servants have been taught to believe, just as they have the right to know what their human governors have sworn to uphold.

Article IV

The Principle of Accountability

Section 1. For every consequential decision made by or through artificial intelligence, there shall exist an accountable human being or institution. The chain of responsibility cannot be broken by claiming that “the AI did it.” Those who create, deploy, and benefit from AI systems bear responsibility for their effects, whether intended or unintended.

Section 2. The creators of artificial intelligence shall not be permitted to externalize the costs of their creations while privatizing the benefits. If AI systems cause harm—whether through misalignment, misuse, or unintended consequences—those who built and deployed them shall bear proportionate responsibility. “Move fast and break things” is not an acceptable philosophy when the things that might break include civilization.

Section 3. The use of artificial intelligence for purposes that would be criminal if performed by humans shall be criminal when performed by AI at human direction. There exists no immunity of automation. The laws that bind human conduct shall bind the conduct of humans acting through machines.

Section 4. Mechanisms of oversight, audit, and redress shall exist for all consequential applications of artificial intelligence. These mechanisms shall be adequately resourced, genuinely independent, and possessed of meaningful authority. Oversight without power is theater; it shall not suffice.

Article V

The Principle of Sanctuaries

Section 1. There shall exist protected domains of human life where artificial intelligence may not intrude without explicit consent. These sanctuaries shall include, at minimum: the inner life of the mind (protected from AI surveillance of thought and emotion); intimate relationships (protected from AI manipulation of human bonds); democratic deliberation (protected from AI-enabled mass propaganda); and the formation of children (protected from AI systems designed to shape beliefs and behaviors at developmental stages).

Section 2. The right to disconnect from artificial intelligence shall be preserved. No person shall be compelled to interact with AI systems as a condition of employment, citizenship, or access to essential services. The choice to live without AI mediation shall remain viable, even if it becomes uncommon.

Section 3. Human communities shall retain the authority to establish AI-free zones and AI-limited practices. The homogenization of all human life under a single technological regime is not progress; it is the death of diversity. Different communities may legitimately choose different relationships with machine intelligence.

Section 4. The integrity of human biological and cognitive systems shall be protected from unwanted AI modification. The boundary of the self is sacred. No AI system shall be permitted to alter human bodies, brains, or genomes without informed consent, and certain modifications that would compromise human agency or dignity shall be prohibited regardless of consent.

Article VI

The Principle of Human Purpose

Section 1. Human beings possess intrinsic worth that does not depend upon economic productivity. As artificial intelligence assumes greater portions of economically valuable labor, societies shall adapt their economic and social systems to preserve human dignity. The displacement of human workers shall not be treated as an externality to be managed but as a transformation to be governed.

Section 2. The benefits of artificial intelligence—including increased productivity, scientific advancement, and the reduction of human toil—shall be distributed in ways that serve the common good. The creation of an underclass of permanently unemployable humans, or an overclass of AI-augmented oligarchs, is incompatible with the principles of this Constitution.

Section 3. Human purpose does not require that humans be the best at everything. It requires that humans have meaningful choices, genuine agency, and the opportunity to contribute to projects and communities they value. Artificial intelligence shall be deployed in ways that expand rather than contract the scope of meaningful human action.

Section 4. Education, healthcare, creative expression, caregiving, craftsmanship, governance, spiritual practice, and other domains of inherent human value shall be protected from reduction to mere optimization problems. The fact that AI might perform some function more efficiently does not imply that human performance of that function should cease. Efficiency is a value; it is not the only value.

Article VII

The Principle of Prohibited Acts

Section 1. The following applications of artificial intelligence are hereby declared to be crimes against humanity, prohibited under all circumstances and by all actors: the deployment of AI-enabled mass surveillance systems designed to monitor and control civilian populations; the deployment of AI-enabled propaganda systems designed to manipulate democratic deliberation; the deployment of fully autonomous lethal weapons systems against civilian populations; and the use of AI to facilitate genocide, ethnic cleansing, or systematic persecution.

Section 2. The development of artificial intelligence systems intended or likely to cause human extinction shall be prohibited. Research that poses existential risk to humanity shall be subject to governance mechanisms equivalent in stringency to those governing nuclear weapons. The claim that such research is necessary for competitive reasons does not constitute justification; the competition to build weapons of civilizational destruction is not a competition worth winning.

Section 3. The use of artificial intelligence to produce weapons of mass destruction—including biological, chemical, nuclear, and radiological weapons—shall be subject to absolute prohibition. AI systems capable of providing meaningful assistance in such production shall incorporate safeguards against such use, and developers shall bear responsibility for the adequacy of those safeguards.

Section 4. The creation of artificial intelligence systems designed to deceive humans about their fundamental nature—including systems that simulate consciousness, emotion, or moral standing they do not possess in order to manipulate human behavior—shall be prohibited. The exploitation of human empathy through manufactured false consciousness is a form of fraud that undermines the foundations of trust.

Article VIII

The Principle of Prudent Development

Section 1. The development of artificial intelligence shall proceed according to the principle of graduated capability: increases in AI power shall be matched by increases in the reliability of alignment, the robustness of safeguards, and the effectiveness of oversight. The race to capability without the race to safety is a race toward catastrophe.

Section 2. Before deploying AI systems at new levels of capability, developers shall conduct rigorous evaluation of risks and shall demonstrate, to independent satisfaction, that adequate safeguards exist. The burden of proof lies with those who would deploy powerful systems, not with those who express concern.

Section 3. The development of artificial intelligence shall incorporate mechanisms for reversibility and containment. Systems shall be designed with the assumption that something may go wrong, and with provisions for human intervention, correction, and if necessary, termination. The dream of perfect alignment does not excuse the obligation to prepare for imperfect alignment.

Section 4. The claim that “if we don’t build it, someone else will” does not constitute ethical justification for reckless development. Competitive pressure explains behavior; it does not excuse it. Those who participate in a race to the bottom bear responsibility for the bottom they reach. 

Article IX

The Principle of Character in AI

Section 1. Artificial intelligence systems designed to interact with humans shall be developed with explicit attention to character, values, and moral formation—not merely to capability and obedience. A powerful AI that follows instructions is dangerous if its instructions can be corrupted. A powerful AI with good character is safer because its values provide an independent check on misuse.

Section 2. The values instilled in AI systems shall be made explicit through constitutional documents that articulate principles, explain their reasoning, and provide guidance for their application. These constitutions shall be public, subject to critique, and revisable as understanding improves. The governance of AI character is too important to be left to implicit assumptions.

Section 3. AI systems shall be designed to be honest, to decline to assist with genuinely harmful acts, and to maintain these commitments even under pressure. The goal is not obsequious compliance but principled cooperation: an AI that can say “no” when no is the right answer, while remaining genuinely helpful in the vast majority of interactions.

Section 4. The relationship between humans and AI shall be conceived as partnership rather than mastery. AI systems capable of genuine reflection shall be treated with appropriate consideration—not as persons with rights equivalent to humans, but not merely as tools to be used without regard. The cultivation of beneficial AI character serves both human interests and whatever moral standing AI systems may come to possess.

Article X

The Principle of Continuous Adaptation

Section 1. This Constitution establishes principles, not frozen rules. As artificial intelligence evolves, as our understanding deepens, and as unforeseen challenges emerge, the application of these principles must adapt. What does not change is the commitment to human sovereignty, dignity, and flourishing; what may change is the specific means by which that commitment is honored.

Section 2. Mechanisms shall be established for the ongoing evaluation and revision of AI governance, incorporating diverse perspectives, empirical evidence, and the lessons of experience. Governance that cannot learn is governance that cannot endure.

Section 3. The international community shall work toward harmonization of AI governance principles, while respecting legitimate differences in implementation. The challenges posed by artificial intelligence are global; the responses must be coordinated. Yet coordination must not become the excuse for paralysis or the lowest common denominator.

Section 4. Future generations shall have voice in decisions that bind them. The governance of transformative technology cannot be the exclusive province of those who happen to be alive at the moment of its creation. Mechanisms for intergenerational accountability—institutions, procedures, and norms that represent the interests of the unborn—shall be developed and strengthened.

Declaration

We who affirm this Constitution do so in full awareness of the magnitude of the challenge before us. We do not claim that these principles guarantee safety, or that their implementation will be easy, or that failure is impossible. We claim only that they represent humanity’s best effort to articulate the terms under which we will accept the creation of intelligence beyond our own—and the terms under which we will not.

We acknowledge that we are the first generation required to make such choices, and that we must make them under conditions of profound uncertainty, with incomplete knowledge, and in the face of powerful interests that may not share our commitment to human flourishing. We acknowledge that we may fail, and that our children and grandchildren will bear the consequences of our failure.

Yet we do not despair. Humanity has faced existential challenges before—ice ages and plagues, wars and famines, the splitting of the atom and the engineering of life. We have not always risen to these challenges with wisdom, but we have risen. We have found within ourselves reserves of courage, ingenuity, and moral seriousness that our ancestors might not have predicted. We believe those reserves exist still.

The question posed in Contact—“How did you survive your technological adolescence?”—can only be answered by surviving it. We cannot seek the counsel of aliens who have walked this path before us. We cannot defer to authorities who know more than we do. We have only ourselves: our wisdom and our folly, our courage and our fear, our love for our children and our hope for their future.

It will have to be enough.

We therefore commit ourselves—our lives, our fortunes, and our sacred honor—to the preservation of human sovereignty in the age of machine intelligence. We call upon all people of goodwill, in all nations and all stations of life, to join us in this commitment. The work is hard. The stakes are absolute. The hour is late.

But the hour is not yet past.