The Collision Compact

How Military Institutions Choose Catastrophe Over Accountability, and Why the Dead Stay Dead

In 1984, an aircraft carrier ran over a nuclear submarine and both navies called it one man’s fault. In 1988, a cruiser shot down a passenger jet and the Navy called it stress. In 2003, a missile battery killed three allied aircrew and the Army called it a training issue. In 2020 and 2021, a hundred and nine soldiers died at a single base and the Army called it a societal problem. The individual absorbs the blame. The doctrine walks free. The dead stay dead. This paper names the mechanism.

The Fallacy: The Isolated Incident

The fallacy is that military catastrophes are discrete events with discrete causes. A captain’s bad judgment. A radar operator’s misreading. A software glitch. A failure of training. Each incident investigated in isolation. Each investigation finding a proximate cause. Each proximate cause assigned to an individual or a component. Each individual disciplined or decorated, depending on which narrative the institution needs. And then the institution resumes exactly the operation that produced the catastrophe, because the operation was never investigated. Only the incident was.

This is not incompetence. It is architecture. The military investigation system is structurally designed to find proximate causes and stop. A board convenes. It establishes the sequence of events. It identifies what went wrong at the point of failure. It assigns responsibility. It recommends corrective action, almost always training, procedures, or personnel changes. What it does not do, what it is not designed to do, what it is institutionally prevented from doing, is follow the causal chain past the individual and into the doctrine, the incentive structure, the acquisition pipeline, and the operational culture that put the individual in the position to fail. The investigation stops at the person because the institution begins at the doctrine, and the doctrine is not on trial.

Identify the Center of Gravity: The Collision Compact

In a companion paper published today, Blind Man’s Bluff at 30 Knots, we defined the Collision Compact as the unspoken bilateral agreement between adversary navies to accept catastrophic proximity as a cost of doing business, to treat the resulting incidents as individual failures rather than systemic products, and to preserve the doctrine that generates those incidents because no institution can afford to admit the game itself is the problem.

The Compact has three structural components: mutual escalation, in which the system generates risk because the system is designed to generate risk; mutual silence, in which the institution minimizes the incident, classifies the details, and controls the narrative; and mutual scapegoating, in which the individual absorbs the blame that belongs to the doctrine.

The Collision Compact is not a naval phenomenon. It is not a Cold War artifact. It is the operating logic of institutional risk across every military domain where competitive pressure, technological complexity, and accountability structures intersect. The evidence runs through four decades, four services, and four distinct failure modes. The players change. The Compact does not.

Converge the Silos

The Machine That Lied: USS Vincennes and Iran Air Flight 655

On 3 July 1988, USS Vincennes, a Ticonderoga-class Aegis cruiser under Captain Will Rogers III, fired two SM-2MR missiles at what the crew believed was an Iranian F-14 Tomcat descending on an attack profile. The target was Iran Air Flight 655, an Airbus A300 on a scheduled commercial route from Bandar Abbas to Dubai, climbing through its assigned airway, squawking the correct civilian transponder code. All 290 people aboard were killed, including 66 children.

The Aegis Combat System’s own data tapes recorded the aircraft climbing on a normal commercial profile. The crew reported it descending. The system’s software had recycled the flight’s tracking number and reassigned it to a U.S. Navy jet over the Gulf of Oman that actually was descending. The Aegis user interface was known to be deficient. Every test of the system had shown errors. The dark CIC, the flickering lights from gunfire, the confusion over time zones in the flight schedules, the one-minute decision window: all of these were products of the system, not the captain. RCA, the system developer, was not mentioned once in the 153-page investigation report.

USS Sides, operating nearby under Commander David Carlson, correctly identified Flight 655 as commercial the entire time. Carlson later wrote that the Vincennes crew “felt a need to prove the viability of Aegis in the Persian Gulf, and that they hankered for an opportunity to show their stuff.” The ship had earned the nickname “Robo Cruiser.” Rogers received the Legion of Merit. The Aegis interface whose flaws produced the misidentification was not redesigned before the next deployment. The Navy’s conclusion: human error under stress. The machine walked free.

The Machine That Killed Its Own: Patriot Fratricide, Iraq 2003

Fifteen years later, the same architecture produced the same outcome in a different weapons system. During the opening days of Operation Iraqi Freedom, the Patriot missile system committed three fratricide incidents in ten days. On 23 March 2003, a Patriot shot down an RAF Tornado GR4A, killing Flight Lieutenants Kevin Main and David Williams. The Tornado’s IFF had failed, and the battery did not have the correct Mode 1 codes loaded. The crew had one minute to decide. On 24 March, a Patriot radar locked onto a U.S. Air Force F-16. The pilot, believing he was being targeted by an Iraqi SAM, fired a HARM anti-radiation missile and destroyed the Patriot battery. On 2 April, a Patriot shot down a U.S. Navy F/A-18C, killing Lieutenant Nathan White. The system generated false ballistic missile trajectories when multiple Patriot radars tracked the same aircraft.

Twenty-five percent of the Patriot’s total engagements in Iraq were against friendly aircraft. The Defense Science Board found that the IFF problems had surfaced during training exercises before the war. A 1993 test found that when IFF failed, Patriot batteries fired on friendly aircraft 50 percent of the time. A 1996 National Research Council report called the simulated fratricide results “disturbing.” The problems were known. The problems were documented. The problems were not fixed before deployment, because fixing them would require acknowledging that the system’s autonomous engagement mode, the feature that made Patriot fast enough to intercept ballistic missiles, was also fast enough to kill friendly pilots. Raytheon declined to discuss the misidentifications. The Army’s response: more operator training. An F-16 pilot summed it up: the Patriots scared him more than any Iraqi SAM.

One detail tells the whole story. The investigation found that if Patriot crews waited 60 seconds after target acquisition before firing, the likelihood of fratricide would decrease by 86 percent without allowing any hostile aircraft to slip through. Sixty seconds. The lives of Main, Williams, and White were worth less than a minute of patience that the doctrine did not permit and the machine did not require.

The Exercise That Nearly Ended Everything: 1983

Three events in ten weeks nearly ended civilization. On 1 September 1983, Soviet fighters shot down Korean Air Lines Flight 007, killing 269 people, because doctrine said shoot first and verify later. On 26 September, the Soviet early-warning system Oko reported five American ICBMs inbound. Lieutenant Colonel Stanislav Petrov, on duty at Serpukhov-15, judged the alarm a malfunction. He was right. Sunlight on high-altitude clouds had fooled the satellites. Petrov was punished for not following protocol. The man who prevented nuclear war was disciplined for disobeying the system that nearly started one.

Then on 7 November, NATO began Able Archer 83, a command-post exercise simulating the transition from conventional to nuclear war. Unlike previous years, this exercise moved forces through all alert phases to DEFCON 1, used new communication formats, introduced radio silence, and included references to B-52 nuclear strikes. The Soviets, raw from KAL 007 and the Petrov incident, interpreted the exercise as cover for a first strike. Marshal Kutakhov ordered the Soviet 4th Air Army to prepare for immediate use of nuclear weapons. Combat aircraft were loaded with actual nuclear bombs. Submarines deployed under Arctic ice. The entire Soviet arsenal, 11,000 warheads, went to maximum combat alert.

Lieutenant General Leonard Perroots, the senior U.S. intelligence officer overseeing the exercise, chose not to escalate. The President’s Foreign Intelligence Advisory Board later called this a “fortuitous, if ill-informed, decision.” He acted on instinct, not guidance, because no guidance existed for the scenario. U.S. commanders on the scene were not aware of any pronounced superpower tension. The Soviet activities were not seen in their totality until long after the exercise was over. The PFIAB concluded: “In 1983 we may have inadvertently placed our relations with the Soviet Union on a hair trigger.” The world survived because one Soviet lieutenant colonel disobeyed orders and one American general trusted his gut. Neither man received guidance from the system that nearly killed everyone. The system was not changed.

The Bases That Eat Their Own: Fort Hood and Fort Bragg

The Collision Compact does not require an adversary. It operates with equal efficiency when the institution is the threat and its own people are the targets. In 2020, twenty-eight soldiers died at Fort Hood, Texas. Specialist Vanessa Guillén, twenty years old, was sexually harassed and then murdered by a fellow soldier. The Army listed her as AWOL. Her dismembered remains were found in shallow graves twenty miles from base. The independent review found a command climate “permissive of sexual harassment and sexual assault,” with incidents “significantly underreported.” The chain of command was fired. Congress investigated. The UCMJ was amended.

Then came Fort Bragg, which was worse, and where nothing happened. One hundred and nine soldiers died in 2020 and 2021. Forty-one by suicide. Twenty-one by drug overdose. Only seven in combat or training. Ninety-six percent of the deaths occurred stateside. The base’s own numbers did not match the data investigative journalists obtained from the Army’s Human Resources Sustainment Center. Fort Hood, with fewer deaths, got a congressional investigation and a chain of command firing. Fort Bragg, home to Delta Force and the 82nd Airborne, got nothing. Congress did not investigate. The chain of command remained. The base was left to police itself.

The mechanism is identical. The institution generates risk through operational tempo, deployment cycles, inadequate mental health infrastructure, and a culture that treats seeking help as weakness. When soldiers die, the institution classifies the deaths as individual failures: AWOL, overdose, suicide, accident. The systemic causes, the culture, the tempo, the institutional incentives that reward silence, are not investigated because investigating them would require the institution to admit the game is the problem. Mutual escalation: the tempo increases because the mission demands it. Mutual silence: the deaths are minimized, the data is withheld, the families are stonewalled. Mutual scapegoating: the dead soldier bears the diagnosis. The doctrine walks free.

The Pattern That Does Not Change

Four case studies. Four decades. Four services. Four failure modes: a cruiser that trusted a machine over the evidence on its own screens, a missile battery that killed the pilots it was built to protect, an exercise that nearly triggered the war it was designed to simulate, and military bases that killed more of their own soldiers than the enemy did. In every case, the systemic failure was visible before the catastrophe. The Aegis interface flaws were documented in testing. The Patriot IFF failures surfaced in exercises. The Able Archer escalation risk was predictable from the exercise design. The Fort Hood culture was reported by soldiers for years before Guillén’s murder.

In every case, the warning was present, reported, and ignored. Not because the people in the system were stupid. Because the system is not designed to process warnings that indict the system. A near-miss report that blames a radar operator gets filed and actioned. A near-miss report that blames a weapons system’s design gets routed to the contractor, whose revenue stream, $3.5 billion in the Patriot’s case, depends on not redesigning it. A near-miss report that blames the operational tempo gets routed to the combatant commander, whose career depends on maintaining the tempo. The report dies in the routing. The next catastrophe arrives on schedule.

Propose the Doctrine: Five Pillars

Pillar 1: Independent Systemic Investigation Authority. The military investigation system finds proximate causes because it is designed to find proximate causes. An independent investigation authority, modeled on the National Transportation Safety Board, with access to classified operational data, acquisition records, and contractor communications, and the mandate to publish systemic findings without requiring the service’s concurrence, is the only mechanism that breaks the Compact. The NTSB model works in aviation because the investigating body is not the operating body. Asking the Army to investigate Fort Bragg is asking the institution to indict its own culture. Forty years of identical recommendations prove that will not happen voluntarily. The authority must be external, permanent, and funded independently of the services it investigates.

Pillar 2: Mandatory Causal Chain Extension. Every Class A mishap investigation must follow the causal chain past the individual to the doctrine, the incentive structure, the acquisition decision, and the operational culture that created the conditions for failure. If a Patriot battery kills a friendly aircraft because the IFF codes were not loaded, the investigation does not stop at the battery commander. It follows the chain to the software architecture that required manual code loading, to the acquisition decision that accepted that architecture, to the contractor who built it, and to the testing regime that documented the failure and did not require a fix before deployment. The chain does not stop until it reaches the structural cause. Stopping at the proximate cause is the mechanism by which the Compact preserves the doctrine.

Pillar 3: Near-Miss Intelligence Mandate. The military generates thousands of near-misses for every catastrophe. Near-misses are free lessons. A Class A mishap costs lives, equipment, careers, and institutional credibility. A near-miss costs nothing except the willingness to report it. The current system treats near-miss reporting as voluntary and stigmatized. An Army aviation safety inspector returning from the civilian sector in 2024 found the same lack of near-miss reporting he had observed when he left the Army two decades earlier. The civilian aviation safety culture, built on the Aviation Safety Reporting System since 1976, captures near-misses with confidential, non-punitive reporting that feeds directly into system design and operational procedure. The military equivalent does not exist at scale. Building it is cheaper than burying the next crew.

Pillar 4: Contractor Accountability in Mishap Findings. RCA was not mentioned in the Vincennes investigation. Raytheon declined to discuss the Patriot fratricides. The investigation system treats the weapons system as a given and the operator as the variable. This inverts the actual causal relationship. When the Aegis system recycles tracking numbers in a way that causes operators to misidentify targets, the system is the cause and the operator is the symptom. When the Patriot’s autonomous engagement mode fires on friendly aircraft because IFF failed, the engagement mode is the cause and the crew is the symptom. Contractor performance must be a mandatory finding in every Class A mishap involving a weapons system. The contractor’s revenue stream from the system must not insulate the contractor from accountability for the system’s contribution to the failure. A weapons system that kills friendly forces at a 25 percent engagement rate is not a training problem. It is a design problem with a corporate address.

Pillar 5: Institutional Culture as an Investigable Domain. Fort Hood’s “permissive” culture of sexual assault was not invisible. It was reported by soldiers, documented in surveys, and ignored by commanders for years before Guillén’s murder. Fort Bragg’s death rate exceeded Fort Hood’s for two years running without triggering a congressional investigation. The Vincennes’s aggressive reputation was known fleet-wide. The Collision Compact survives because institutional culture is treated as a background condition rather than an investigable cause. Culture is not weather. Culture is the product of incentive structures, promotion criteria, operational tempo decisions, and command emphasis, all of which are policy choices made by identifiable leaders. When a base’s soldiers die at rates that exceed the combat theater, the culture that produced those deaths must be investigated with the same rigor as a Class A aviation mishap, by an authority with the power to compel testimony, access records, and publish findings that the institution cannot suppress.

Closing Assessment

The Collision Compact is not a theory. It is a description of observed behavior across four decades, four services, and four failure modes, tested against the evidence and found consistent in every case. The mechanism is simple. The institution generates risk through doctrine, tempo, technology, and culture. When the risk produces a catastrophe, the institution investigates the catastrophe but not the risk. The individual at the point of failure absorbs the blame. The doctrine resumes. The next catastrophe arrives. The dead do not file appeals.

Captain Evseenko was relieved for being under an aircraft carrier that knew he was there. Lieutenant Colonel Petrov was punished for preventing a nuclear war. Captain Rogers was decorated for shooting down a passenger jet. The Patriot crews were retrained after killing allies with a system that failed in testing. A hundred and nine soldiers died at Fort Bragg and the United States Congress did not notice. The pattern is not a coincidence. It is a compact, maintained by institutions that cannot afford to break it, enforced by investigation systems that are not designed to challenge it, and paid for by the people at the bottom of the chain who absorb the consequences of decisions made at the top.

The five pillars proposed here are not aspirational. They are mechanical. An independent investigation authority. Mandatory causal chain extension. Near-miss intelligence at scale. Contractor accountability in mishap findings. Institutional culture as an investigable domain. Each one breaks a specific structural element of the Compact. Together, they create a system in which the doctrine, not just the individual, faces the evidence.

The dead at every one of these incidents had names. They had families. They were doing what the institution told them to do, in the way the institution told them to do it, with the equipment the institution gave them. They did not fail the system. The system failed them. And then the system investigated itself, found the usual suspects, filed the usual report, and resumed the usual operations.

The game continues. The Compact holds. The dead stay dead.

Resonance

Arms Control Center. (2022). “The Soviet False Alarm Incident and Able Archer 83.” Center for Arms Control and Non-Proliferation. https://armscontrolcenter.org/the-soviet-false-alarm-incident-and-able-archer-83/. Summary: Analysis of the September 1983 Oko false alarm and the subsequent Able Archer 83 exercise, including Petrov’s decision, Soviet military mobilization, and the PFIAB’s conclusion that the U.S. had placed relations on a hair trigger.

Cox, Samuel J. (2018). “USS Vincennes Tragedy.” Naval History and Heritage Command, H-Gram 020. https://www.history.navy.mil/content/history/nhhc/about-us/leadership/director/directors-corner/h-grams/h-gram-020/h-020-1-uss-vincennes-tragedy–.html. Summary: NHHC Director’s authoritative account of the Iran Air 655 shootdown, including the Aegis tracking number reassignment, Commander Carlson’s identification of the aircraft as commercial, and the CIC conditions that contributed to the misidentification.

Harp, Seth. (2023). “These Kids Are Dying: Inside the Overdose Crisis Sweeping Fort Bragg.” Rolling Stone. https://www.rollingstone.com/culture/culture-features/inside-the-overdose-crisis-sweeping-fort-bragg-1396298/. Summary: Investigative reporting documenting 109 soldier deaths at Fort Bragg in 2020–2021, the Army’s stonewalling of families, discrepancies between base-reported and centrally recorded casualty data, and the absence of congressional oversight.

Hawley, John K. (2017). Cited in Sisson, Melanie, et al. (2022). “Understanding the Errors Introduced by Military AI Applications.” Brookings Institution. https://www.brookings.edu/articles/understanding-the-errors-introduced-by-military-ai-applications/. Summary: Analysis of Patriot missile fratricide incidents in 2003, the role of autonomous engagement modes, the RAF Board of Inquiry findings, and engineering psychologist Hawley’s conclusion that humans are poorly suited to monitoring autonomous weapons systems.

Kaplan, Fred. (2021). “Able Archer 1983: The World Came Much Closer to Nuclear War Than We Realized.” Slate. https://slate.com/news-and-politics/2021/02/able-archer-nuclear-war-reagan.html. Summary: Reporting on newly declassified documents revealing that Soviet forces loaded actual nuclear bombs onto combat aircraft during Able Archer 83, a fact not publicly known until the 2021 FRUS volume release.

Lerner, Eric J. (1989). Cited in “Overwhelmed by Technology: An Analysis of the Technological Failures at USS Vincennes.” Stanford University. https://xenon.stanford.edu/~lswartz/vincennes.pdf. Summary: Technical analysis of the Aegis Combat System’s user interface deficiencies, including the tracking number recycling flaw, the IFF correlation errors, and the finding that every test of the system had shown errors prior to the Iran Air 655 shootdown.

MIT Technology Review. (2005). “Preventing Fratricide.” https://www.technologyreview.com/2005/06/01/230882/preventing-fratricide/. Summary: Investigation of Patriot system failures in Iraq 2003, Raytheon’s $3.5 billion revenue stream, the Defense Science Board’s findings on IFF deficiencies known before deployment, and MIT physicist Theodore Postol’s critique of the program’s failure to identify and fix problems.

National Security Archive. (2021). “Able Archer War Scare ‘Potentially Disastrous.’” George Washington University. https://nsarchive.gwu.edu/briefing-book/aa83/2021-02-17/able-archer-war-scare-potentially-disastrous. Summary: Declassified documents including Lt. Gen. Perroots’s end-of-tour report, the NSA message confirming Soviet 4th Air Army preparations for nuclear weapons use, and the PFIAB investigation findings on the 1983 war scare.

Nuclear Museum. (n.d.). “Nuclear Close Calls: Able Archer 83.” Atomic Heritage Foundation. https://ahf.nuclearmuseum.org/ahf/history/nuclear-close-calls-able-archer-83/. Summary: Historical account of the Able Archer exercise, Soviet military responses including nuclear weapons loading and submarine deployment, and the finding that U.S. commanders were not aware of the crisis until long after the exercise ended.

Rolling Stone. (2024). “U.S. Army Audit Says Army Is Ignoring Its Own Policies to Protect Soldiers.” Rolling Stone. https://www.rollingstone.com/politics/politics-features/army-missing-soldiers-audit-1235101245/. Summary: Investigation documenting the Army’s failure to implement its own personnel protection policies, the pattern of listing missing soldiers as AWOL, the Fort Hood independent review findings, and the ongoing absence of accountability mechanisms at Fort Bragg.

UPI. (2003). “The Patriot’s Fratricide Record.” https://www.upi.com/Defense-News/2003/04/24/Feature-The-Patriots-fratricide-record/63991051224638/. Summary: Detailed technical reporting on Patriot fratricide history, the 1993 simulation showing 50 percent fratricide rate when IFF failed, the 1996 National Research Council findings, and the 60-second delay that would have reduced fratricide by 86 percent.

UPI. (2004). “UK Faults Self and US for Plane Shootdown.” https://www.upi.com/Defense-News/2004/05/14/UK-faults-self-and-US-for-plane-shootdown/30351084548727/. Summary: RAF Board of Inquiry conclusions on the Tornado shootdown, including the IFF power failure, the missing Mode 1 codes, the one-minute decision window, and the finding that a brief delay in firing would have prevented the deaths.

Blind Man’s Bluff at 30 Knots

The Collision Compact: How Two Navies Agreed to Risk Nuclear Catastrophe Rather Than Admit the Game Was the Problem

Forty-two years ago today, a Soviet nuclear submarine surfaced directly into the path of an 80,000-ton American aircraft carrier in the Sea of Japan. Both vessels were carrying nuclear weapons. The jet fuel leaked but did not ignite. The warheads did not detonate. Both navies blamed the Soviet captain, closed the file, and kept playing the same game. They are still playing it. This paper names the fallacy, identifies the center of gravity, and proposes the doctrine that forty-two years of institutional silence have failed to produce.

The Fallacy: The Blameless Carrier

On 21 March 1984, during Exercise Team Spirit 84-1, Soviet submarine K-314, a Project 671 Victor I-class nuclear attack boat, collided with USS Kitty Hawk (CV-63) at 2207 local time, approximately 150 miles east of Pohang, South Korea. The official narrative pinned the collision squarely on Captain Vladimir Evseenko: bad seamanship, failure to display navigation lights, violation of the 1972 Incidents at Sea Agreement. The Soviets concurred, relieving Evseenko of command. Washington blamed Moscow. Moscow agreed. Case closed.

The fallacy is that the collision was one man’s mistake. It was not. It was the predictable outcome of two institutional doctrines operating exactly as designed. RADM Richard M. Dunleavy, Director of the Carrier and Air Stations Program, later acknowledged that K-314 had been detected by Battle Group Bravo’s helicopters and simulated-killed more than 15 times in the preceding three days, having first been spotted on the surface 50 nautical miles ahead of the carrier. Fifteen kills. And the submarine was still there, still tracking, still close enough to collide. If you kill an adversary 15 times and it keeps coming, you have not solved the problem. You have documented your failure to solve it.

When Kitty Hawk shifted to flight operations, turning into the wind and accelerating to 30 knots, nobody accounted for the fact that the course change put the carrier on a direct collision bearing with K-314’s last known position. The Soviets were reckless. The Americans were complacent. Blaming Evseenko allowed both navies to preserve the system that produced the collision. That is the fallacy: scapegoating an individual to protect a doctrine.

Identify the Center of Gravity: The Shadow-and-Pursuit Doctrine

The center of gravity is not a submarine captain’s judgment. It is the shadow-and-pursuit doctrine itself: the unwritten bilateral agreement between the U.S. and Soviet navies that nuclear-armed platforms would routinely operate at knife-fighting range, each side shadowing the other’s capital ships, each side accepting catastrophic proximity as the price of intelligence collection and competitive prestige.

Soviet submarine captains were trained to shadow American carrier groups at close range. Their promotion depended on it. American carrier groups were trained to detect and evade them. Prestige depended on it. The INCSEA Agreement, signed on 25 May 1972 by Secretary of the Navy John Warner and Fleet Admiral Sergei Gorshkov during the Nixon-Brezhnev summit, was supposed to constrain this behavior. It required submarines surfacing near surface vessels to display navigation lights and give way. K-314 surfaced in darkness with no lights. The agreement assumed rational actors operating with perfect information in an environment defined by imperfect information and institutional pressure to take risks. It was a gentleman’s handshake in a knife fight, and the knife fight always wins.

Both vessels were carrying nuclear weapons. Kitty Hawk held several dozen tactical nuclear warheads as standard Cold War loadout. K-314 probably carried two nuclear torpedoes. The carrier also held thousands of tons of JP-5 jet fuel, some of which leaked into the sea from the hole punched in her bow. It did not ignite. The warheads did not detonate. These are not safety features. They are luck.

The collision sequence itself reveals the architecture of compounded failure. K-314 had lost track of Kitty Hawk in deteriorating weather. Evseenko rose to periscope depth, ten meters, to reacquire the carrier. Through the periscope he found the entire strike group only four to five kilometers away, closing on a reciprocal heading at speed. He ordered an emergency dive. It was too late. The 80,000-ton carrier struck the 5,200-ton submarine, rolling K-314 onto her back. Evseenko’s first thought was that the conning tower had been destroyed and the hull was cut to pieces. They checked: periscope intact, antennas intact, no leaks. Then a second impact, starboard side. The propeller. The first hit had bent the stabilizer. K-314 lost propulsion and had no choice but to surface, exposing herself to the very adversary that had just run over her.

A slightly different angle, a slightly greater force, a structural failure in the wrong compartment, and the calculus changes from embarrassing incident to ecological catastrophe to superpower confrontation in the time it takes metal to tear. Neither navy had a protocol for this scenario, because planning for it would require admitting the game was the problem. The shadow-and-pursuit doctrine created the proximity. The proximity created the collision geometry. The collision geometry created the nuclear risk. The center of gravity is the doctrine, not the captain.

Converge the Silos

The Kitty Hawk/K-314 collision sits at the intersection of five institutional silos, none of which could see the convergence:

Anti-Submarine Warfare Operations treated K-314 as a tactical problem: detect, track, simulate-kill, repeat. Fifteen simulated kills in three days. The ASW teams were doing their jobs by the metrics that measured success: contact maintained, weapons solutions generated, kill tallies rising. But ASW doctrine had no gate between detection and safe separation. The tactical game rewarded proximity. The closer the track, the better the data. Nobody in the ASW chain was measured on whether the submarine maintained safe distance from the carrier, because that was not the metric. Killing a contact on paper and managing its physical proximity to the carrier were treated as the same problem. They are not. The distinction cost both navies a near-catastrophe.

Diplomatic Agreements treated INCSEA as a constraint on behavior. It was a constraint on the willing. The moment operational pressure exceeded diplomatic courtesy, the agreement evaporated. Warner and Gorshkov signed paper. Submarine captains and carrier groups operated in physics. The agreement’s fundamental weakness was its assumption that both sides would choose compliance over advantage in the moment of decision. Evseenko did not choose to surface without lights to violate INCSEA. He surfaced because he had lost contact and needed to reacquire. The agreement was irrelevant to the operational reality that produced the collision.

Nuclear Weapons Safety assumed separation between nuclear-armed platforms and kinetic risk. The shadow-and-pursuit doctrine eliminated that separation by design. Nuclear weapons aboard both vessels were the stakes of a game neither navy acknowledged was being played. No nuclear weapons safety protocol accounted for the possibility that two nuclear-armed platforms would physically collide during peacetime operations, because accounting for it would require admitting that the operating doctrine routinely placed nuclear weapons inside the blast radius of potential kinetic events.

Intelligence Collection retroactively celebrated the collision as a windfall. The U.S. Navy recovered fragments of K-314’s anechoic tiles, pulled a propeller blade from Kitty Hawk’s hull, and photographed the crippled submarine’s exposed innards while the frigate USS Harold E. Holt stood watch. The crew painted a red submarine victory mark on the carrier’s island, later ordered removed. Branding an accident as an intelligence coup substitutes for the harder question of why the accident happened.

Accountability Structures punished the individual and preserved the system. Evseenko was relieved. Nobody on the American side faced consequences. Captain David N. Rogers reported a violent shudder on the bridge, launched helicopters to render assistance, and continued his career without interruption. Both navies chose to downplay the incident rather than lodge formal protests, because a formal investigation would require both sides to admit what they already knew.

Coin the Term: The Collision Compact

The Collision Compact is the unspoken bilateral agreement between adversary navies to accept catastrophic proximity as a cost of doing business, to treat the resulting incidents as individual failures rather than systemic products, and to preserve the doctrine that generates those incidents because no institution can afford to admit the game itself is the problem.

The Compact has three structural components. First, mutual escalation: both sides shadow and pursue because both sides shadow and pursue, creating a self-reinforcing cycle neither side can unilaterally exit without conceding advantage. Second, mutual silence: when the inevitable collision occurs, both sides minimize it because both sides have something to hide. The Soviets hid incompetent seamanship. The Americans hid a complacent ASW posture. Third, mutual scapegoating: the individual operator absorbs the blame that belongs to the doctrine, the incentive structure, and the operational culture that put two nuclear-armed platforms in the same water at the same time in the dark.

The Collision Compact is not a Cold War artifact. It is the operating logic of every naval interaction where nuclear-armed platforms operate in contested proximity: the Western Pacific today, the North Atlantic, the Eastern Mediterranean. The players change. The Compact does not.

Propose the Doctrine: Five Pillars

Pillar 1: Escalation Authority at the Proximity Threshold. Detecting a threat is not the same as managing it. Every ASW commander knows the safest submarine is the one you can see, which is why the community resists separation: breaking contact means losing the track, and a lost track inside the operating area is worse than a close one. The tension between the ASW imperative (maintain contact) and the force protection imperative (maintain distance) is real, and no current authority structure resolves it. What Kitty Hawk lacked was not a distance rule but a decision authority: a defined threshold at which the force protection commander can override the ASW commander and direct the carrier to alter operations until safe separation is reestablished. That authority did not exist on Kitty Hawk’s bridge in 1984. The shift to flight ops, the course change into the wind, the acceleration to 30 knots, all happened without reference to K-314’s last known position, because nobody in the chain had the mandate to say stop until we know where the submarine is. The fix is not a published distance, which would hand the adversary a targeting metric. The fix is a classified escalation authority tied to confirmed proximity of a nuclear-armed contact, vested in a specific watch station, exercised without requiring flag-level approval in the moment of decision.

Pillar 2: Unilateral Operational Rules That Assume Noncompliance. INCSEA and its successors, including the Code for Unplanned Encounters at Sea, are constraints on the willing. Any defense posture that relies on adversary compliance with behavioral norms is built on sand. The principle is not new. The U.S. military plans against peer adversaries on the assumption of noncompliance in every other domain. But if the Navy actually operated this way at sea, Kitty Hawk would not have shifted to flight ops without verifying K-314’s position relative to the new course. The 2017 Comprehensive Review after the McCain and Fitzgerald collisions identified systemic failures in training, manning, and operational tempo, and the Navy responded with additional training requirements layered on top of the same operational culture. Training requirements do not change incentive structures. The unilateral rule is simple: when a hostile submarine has been tracked inside the carrier’s operating area within the preceding 24 hours, no course or speed change proceeds without a current plot of the contact’s last known position against the intended track. This is not a diplomatic instrument. It is an internal standing order that treats the adversary’s presence as a navigational hazard, which is exactly what it is.
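The standing order in Pillar 2 is concrete enough to state as a decision rule. A minimal sketch, with hypothetical type and field names (the real order would live in doctrine and watch procedures, not code; the 24-hour window is the only parameter taken from the text):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative sketch of the Pillar 2 standing order. All names here are
# invented for illustration; only the 24-hour window comes from the rule
# as stated.

TRACK_WINDOW = timedelta(hours=24)  # hostile contact tracked inside this window triggers the rule


@dataclass
class ContactTrack:
    last_confirmed: datetime                # time of last confirmed position inside the operating area
    plotted_against_intended_track: bool    # has the last known position been plotted against the new course?


def maneuver_authorized(now: datetime, contact: Optional[ContactTrack]) -> bool:
    """A course or speed change proceeds only if no hostile submarine was
    tracked inside the operating area in the preceding 24 hours, or the
    contact's last known position has been plotted against the intended track."""
    if contact is None:
        return True  # no contact on record: normal navigation applies
    if now - contact.last_confirmed > TRACK_WINDOW:
        return True  # contact is stale; outside the standing order's window
    return contact.plotted_against_intended_track
```

Run against the Kitty Hawk scenario, a contact tracked hours earlier with no current plot blocks the shift to flight ops until the plot exists; a contact last held thirty hours ago does not.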

Pillar 3: Nuclear Proximity Escalation Authorities. Nuclear-armed vessels operating in close proximity to adversary platforms have zero margin for accident. The Kitty Hawk/K-314 collision proved this. The institutional response was to get lucky and move on. The vulnerability is not the absence of a minimum distance threshold, which would be exploitable if published and unenforceable if classified. The vulnerability is the absence of a defined escalation authority: who on the carrier has the mandate to alter the ship’s operational posture when a nuclear-armed adversary platform is confirmed inside a proximity that puts nuclear weapons at kinetic risk. In 1984, nobody on Kitty Hawk had that authority or the institutional incentive to exercise it. The doctrine should establish that when a nuclear-armed contact is confirmed inside a defined classified range, a specific watch station has standing authority to suspend flight operations, alter course, or reduce speed without waiting for flag-level concurrence. The authority gap is the vulnerability, not the distance gap.
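The structure of the Pillar 3 authority, standing, pre-delegated, and vested in a named watch station rather than the flag, can be sketched as follows. The station name, trigger flag, and action set are all hypothetical; the actual range and authority would be classified:

```python
from enum import Enum, auto
from typing import Set

# Hypothetical sketch of the Pillar 3 escalation authority. The watch
# station name and the action set are invented for illustration.


class Action(Enum):
    SUSPEND_FLIGHT_OPS = auto()
    ALTER_COURSE = auto()
    REDUCE_SPEED = auto()


# Station(s) vested with standing authority at the proximity threshold.
PROXIMITY_AUTHORITY: Set[str] = {"force_protection_watch"}


def standing_authority(station: str, nuclear_contact_inside_range: bool) -> Set[Action]:
    """Return the actions a station may direct without flag-level concurrence.
    Empty outside the trigger condition: the normal chain of command applies."""
    if nuclear_contact_inside_range and station in PROXIMITY_AUTHORITY:
        return {Action.SUSPEND_FLIGHT_OPS, Action.ALTER_COURSE, Action.REDUCE_SPEED}
    return set()
```

The design point is that the authority is conditional and automatic: it exists only while the trigger condition holds, and it requires no concurrence in the moment of decision, which is exactly the gap the 1984 bridge exhibited.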

Pillar 4: Systemic Accountability with an Independent Enforcement Mechanism. Scapegoating individuals preserves systemic failure. Every post-incident review since Vincennes in 1988 has recommended extending investigations beyond the bridge to the doctrine, incentives, and operational culture that created the conditions. The 2017 Comprehensive Review explicitly did this. And then the institution fixed the training, kept the tempo, and the culture remained intact, because no mechanism exists to compel an institution to indict its own doctrine. The enforcement mechanism must be external: an independent review authority, modeled on the National Transportation Safety Board, with access to classified operational data and the mandate to publish findings on systemic causes without requiring the Navy’s concurrence. The NTSB model works in aviation precisely because the investigating body is not the operating body. Asking the Navy to investigate its own doctrine is asking the institution to admit the game is the problem. Forty years of identical recommendations prove that will not happen voluntarily.

Pillar 5: Unilateral Dual-System Incident Modeling. Both navies chose mutual silence after the collision because mutual silence was mutual cover. A bilateral incident review mechanism would require bilateral trust, which is the one thing adversary navies do not have. Neither side will expose its doctrine, its decision-making chain, or its operational vulnerabilities to the other. The INCSEA annual review framework exists and has never been used for honest systemic examination because doing so would hand the adversary an intelligence product on your own weaknesses. The operationally credible alternative is unilateral: mandate that the U.S. Navy conduct its own adversarial incident review that models the adversary’s likely systemic causes alongside its own, treating every incident as a product of two interacting doctrinal systems rather than one bad operator. This is what competent intelligence analysis already does. The failure is not analytical. The failure is institutional: the analysis exists but never flows back into the doctrine that produced the incident. The mandate is not to share findings with the adversary. The mandate is to ensure that the Navy’s own post-incident analysis models both halves of the Collision Compact and feeds the results into doctrine review, not just training revision.

Closing Assessment

The collision between USS Kitty Hawk and K-314 was not an isolated failure. It was the Collision Compact operating exactly as designed: competitive posturing accepted catastrophic risk, luck prevented catastrophe, institutional silence preserved the doctrine, and an individual officer absorbed the blame. The same pattern has repeated across four decades of naval incidents: USS Greeneville surfacing into the Japanese fishing vessel Ehime Maru in 2001, USS Hartford colliding with USS New Orleans in the Strait of Hormuz in 2009, USS John S. McCain and the merchant vessel Alnic MC in 2017, USS Connecticut striking an uncharted seamount in the South China Sea in 2021. The specific failure modes vary. The Compact does not.

The institutional response each time is textbook: blame the individual, preserve the system, classify the details, move on. Evseenko bore the consequences in 1984. The doctrine that put him under an 80,000-ton carrier at 30 knots in the dark bore none. The American ASW posture that tracked a hostile submarine for three days without ever establishing safe separation bore none. The INCSEA Agreement that had already been proved worthless bore none. Every institution involved emerged exactly as it had entered, having learned nothing that would require it to change.

Forty-two years later, the game continues. Chinese submarines trail American carrier groups in the Western Pacific. Russian submarines probe NATO’s Atlantic defenses. The agreements assume what the physics deny: that there will always be time to communicate, always room to maneuver, always a rational actor on the other end of the signal. Kitty Hawk and K-314 proved that assumption wrong on 21 March 1984. Nothing structural has changed to make it right.

Resonance

Egorov, Boris. (2019). “Why a Soviet Nuclear Submarine Rammed a U.S. Aircraft Carrier.” Russia Beyond. https://www.rbth.com/history/330178-soviet-nuclear-submarine-rammed-carrier
Summary: Captain Evseenko’s firsthand recollections of the collision, the week-long chase, the moment he spotted the carrier strike group at 4–5 km through the periscope, and the collision sequence from the Soviet perspective.

Larson, Caleb. (2025). “Navy Aircraft Carrier and Russian Nuclear Sub Had ‘Unexpected Collision.’” National Security Journal. https://nationalsecurityjournal.org/navy-aircraft-carrier-and-russian-nuclear-sub-had-unexpected-collision/
Summary: Analysis covering the intelligence windfall from recovered anechoic tiles, INCSEA Agreement violations, the mutual decision by both superpowers to downplay the incident, and CNO Admiral Watkins’s assessment of the Soviet captain’s judgment failure.

Lendon, Brad. (2022). “Kitty Hawk: US Aircraft Carrier, Site of a 1972 Race Riot at Sea, on Way to Scrapyard.” CNN. https://www.cnn.com/2022/03/14/asia/aircraft-carrier-kitty-hawk-scrapping-history-intl-hnk-ml/index.html
Summary: Independent reporting citing former U.S. Navy intelligence officer Carl Schuster, NHHC records confirming the 15 simulated kills, and the crew’s red submarine victory mark painted on the carrier’s island.

Leone, Dario. (2023). “The Day Soviet Nuclear Submarine K-314 Rammed USS Kitty Hawk.” The Aviation Geek Club. https://theaviationgeekclub.com/when-russian-nuclear-submarine-k-314-rammed-uss-kitty-hawk-the-americans-blamed-the-sub-captain-for-the-incident-and-the-soviets-concurred/
Summary: Detailed reconstruction citing Naval History and Heritage Command data, including collision coordinates (37°3′N, 131°54′E), RADM Dunleavy’s acknowledgment of 15 simulated kills, Captain Rogers’s bridge account, and the Subic Bay repair transit.

Leone, Dario. (2026). “Former US Navy Submariner Explains Why K-314 Captain Was at Fault.” The Aviation Geek Club. https://theaviationgeekclub.com/former-us-navy-submariner-explains-why-k-314-captain-was-at-fault-when-his-submarine-rammed-uss-kitty-hawk/
Summary: Former U.S. Navy submariner’s analysis of how Kitty Hawk’s shift to flight operations altered course and speed, creating the collision geometry, and the passive sonar limitations in the Sea of Japan.

Naval History and Heritage Command. (2009). “USS Kitty Hawk (CVA-63).” Dictionary of American Naval Fighting Ships. https://www.history.navy.mil/research/histories/ship-histories/danfs/k/kitty-hawk-cva-63-ii.html
Summary: Primary government source for USS Kitty Hawk’s operational history, including the March 1984 collision with K-314 during Team Spirit exercises and subsequent repair at Subic Bay.

Pedrozo, Raul. (2018). “Revisit Incidents at Sea.” U.S. Naval Institute Proceedings, Vol. 144, No. 3. https://www.usni.org/magazines/proceedings/2018/march/revisit-incidents-sea
Summary: Analysis of the 1972 INCSEA Agreement’s history, negotiation, and operational limitations, including the refusal to specify fixed encounter distances and the agreement’s inability to prevent incidents when operational pressure exceeded diplomatic courtesy.

U.S. Department of State. (1972). “Agreement on the Prevention of Incidents On and Over the High Seas.” https://2009-2017.state.gov/t/isn/4791.htm
Summary: Full text of the INCSEA Agreement signed 25 May 1972 in Moscow by Secretary of the Navy John Warner and Fleet Admiral Sergei Gorshkov, establishing rules of conduct for naval vessels on the high seas.