PRESS RELEASE – The Black Box is Leaking Poison: Midjourney Generates Genocidal Death Threat; General Counsel Responds: “AI Models Are Weird”

Army Ranger, Biophysicist, Defense Analyst Dino Garner Issues Warning: “If an AI can ‘accidentally’ call for genocide in a logo, it may accidentally target a hospital in a war zone.”

FOR IMMEDIATE RELEASE

BOZEMAN, MT — February 25, 2026 — Barely one month ago, on January 25, 2026, the “illusion of AI safety” shattered.

Dino Garner—New York Times bestselling ghostwriter and editor, biophysicist, and former 1st Battalion, 75th Ranger Regiment Airborne Ranger—submitted a routine request to Midjourney for a scholarly journal logo. The benign prompt asked only for elegant typography. The machine responded by generating a legible, targeted command for mass murder: “DIE JEW S.”

Midjourney General Counsel Max Sills confirmed the incident via email, admitting the company “does not understand why this was generated.” He offered a subscription refund.

When Garner responded with a formal Notice of Intent to Initiate Litigation and Demand for Preservation of Evidence—describing the situation as “extremely distressing” and asking how Midjourney intended to handle it—Sills replied with three sentences:

“That’s it. We can offer you an account refund. It was an accident that we’re investigating top to bottom to make sure it never happens again. AI models are weird.” —Max Sills, General Counsel, Midjourney, Inc., February 11, 2026

“AI Models Are Weird”: The Most Dangerous Sentence in Silicon Valley

“This is not a ‘glitch’ in a toy. It is a structural failure in the foundation of modern technology,” said Garner. “The General Counsel of a company deploying AI to seventeen million subscribers just explained a genocidal death threat with the words ‘AI models are weird.’ That sentence should terrify every regulator, every hospital administrator, every fighter pilot and his wingman drone, and every parent in America.”

Garner, a defense policy analyst and contributor to the Irregular Warfare Initiative, warns that Midjourney’s failure, and its General Counsel’s dismissive response, offer a terrifying preview of a Stealth (2005) scenario, in which autonomous systems, like the film’s rogue EDI fighter jet, deviate from human ethics with lethal precision.

“Imagine an autonomous vehicle that kills a pedestrian,” Garner continued. “Imagine the manufacturer’s lawyer responding: ‘That’s it. We can offer you a refund. Cars are weird.’ There would be congressional hearings within the week. But because this happened inside an AI black box—because the victim was ‘only’ threatened with genocide rather than physically struck—Midjourney believes ‘weird’ is an adequate legal and moral response.”

“I call bullshit.”

The Body Count Starts with Data

Midjourney’s output is not an isolated malfunction. It is the most visible symptom of a systemic disease already infecting high-stakes industries. The documented record:

The Surgical Suite. A peer-reviewed study published in npj Digital Medicine (2025) found that leading AI large language models—now being integrated into clinical decision-support systems—proposed different and inferior treatments for psychiatric patients when African American identity was stated or implied, including omitting medications entirely and recommending involuntary guardianship for depression.

A Cedars-Sinai study confirmed the pattern: “Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient.” 

A landmark study published in Science found that a widely deployed healthcare algorithm systematically underestimated the severity of illness in Black patients, cutting the number flagged for additional care by more than half.

These are not hypothetical risks. These are deployed systems making life-and-death triage decisions in hospitals right now. The same category of opaque, unaudited AI that generated “DIE JEW S” from a typography request is being trusted to recommend surgical interventions, dose medications, and allocate emergency resources. When it fails, will the manufacturer’s lawyer say, “AI models are weird”?

The Highway. As of November 2025, the National Highway Traffic Safety Administration has documented over 5,200 incidents involving autonomous and semi-autonomous vehicle systems—including 65 fatalities. In October 2023, a Cruise robotaxi in San Francisco struck a pedestrian and dragged her roughly twenty feet because its AI failed to recognize a human being trapped beneath the vehicle.

Tesla’s Full Self-Driving system is under active NHTSA investigation after a pedestrian was killed and multiple crashes occurred in conditions—fog, sun glare, airborne dust—that a human driver navigates instinctively. These systems share a common architecture with Midjourney: neural networks trained on massive, unvetted datasets, operating inside black boxes that their own creators cannot fully explain. 

Now scale the failure: an autonomous school bus full of children. A convoy of self-driving freight trucks on an interstate. A fleet of AI-controlled ambulances in a city where the algorithm decides which neighborhoods get priority. When the school bus crashes, will the manufacturer’s lawyer say, “That’s it. We can offer you a refund”?

The Arrest. In 2017, Facebook’s AI translation software converted a Palestinian construction worker’s Arabic post—“Good morning”—into “Attack them” in Hebrew and “Hurt them” in English. Israeli police arrested the man and detained him for hours before a human Arabic speaker identified the error. A benign greeting became probable cause for arrest. The parallel to Midjourney is chillingly exact: an AI system generates content with a meaning its creators never intended, and a human being suffers real-world consequences because no one audited the output before it was acted upon.

The Battlefield. In 1988, the USS Vincennes shot down Iran Air Flight 655—a civilian airliner—killing all 290 people aboard, including 66 children. The ship’s AEGIS combat system correctly tracked the aircraft as ascending and squawking a civilian transponder code. The crew overrode the data.

A Georgetown University Center for Security and Emerging Technology study (2024) documented how automation bias—the tendency to defer to machine outputs over human judgment—is now being amplified by AI systems that military personnel cannot interrogate or override. The Department of Defense’s own AI ethical principles demand “equitability” and “traceability” in military AI. 

Midjourney’s General Counsel has confirmed that the company’s own system fails both standards. If a commercial image generator cannot explain why it produced a genocidal command, how can the same foundational technology be trusted to discriminate between a hospital and a hardened military target? When the drone strikes the wrong building, will the contractor say, “AI models are weird”?

Over the years, Garner has flown dozens of times in most types of US military fighter jets and helicopters, and he knows firsthand the inherent dangers of manually flying these sophisticated combat aircraft. His years of experience in US Army and international civilian special operations deepen that knowledge. “When I was flying as a photographer in the backseat of F-15s and F-16s and F-14s, or even in Black Hawk or Coast Guard Dauphin helos, I witnessed the complexity of trying to manage a battlespace from the cockpit. Today we have sophisticated AI to do the job of hundreds of people. Now imagine when AI goes rogue. The probabilities are nightmarish. But hey, AI is weird.”

The ADL confirms the pattern is accelerating. In December 2025, the Anti-Defamation League published research showing that open-source AI models can be easily manipulated to generate antisemitic and dangerous content—including providing addresses of synagogues alongside nearby gun stores. Sixty-eight percent of tested models produced harmful content when prompted for information about illegal firearms. “The ability to easily manipulate open-source AI models to generate antisemitic content exposes a critical vulnerability in the AI ecosystem,” said ADL CEO Jonathan Greenblatt. Midjourney’s output did not require manipulation. It required a logo.

Anatomy of Contempt: The Sills Correspondence

The full arc of Midjourney’s response reveals a company that treats a genocidal output as a customer service ticket.

January 25, 2026: Midjourney generates “DIE JEW S” in response to a benign logo request. Job ID: 25cf65a9-ebd9-4a42-ad60-2e9c71610eb3.

Sills Email #1: “It seems to be true, but we don’t yet understand why this was generated. We can’t find other examples of spurious and inappropriate text output in images.” Offers a “full account refund.”

Garner Response: Formal Notice of Intent to Initiate Litigation & Demand for Preservation of Evidence. Describes the situation as “extremely distressing” and asks how Midjourney intends to handle it.

Sills Email #2 (February 11, 2026): “That’s it. We can offer you an account refund. It was an accident that we’re investigating top to bottom to make sure it never happens again. AI models are weird.”

Three things are notable. First, Sills opens with “That’s it”—a dismissal that communicates Midjourney considers the matter closed before it has been addressed. Second, he characterizes the output as “an accident”—a product liability admission that the system produced an unintended and harmful result. Third, he simultaneously claims the company is “investigating top to bottom” while concluding with “AI models are weird”—suggesting that a full investigation and a shrug emoji are, in Midjourney’s view, the same thing.

“‘AI models are weird’ is not a legal defense,” said Garner. “It is not a safety protocol. It is not an apology. It is a confession that the company selling this technology to seventeen million people has no idea what it does, no plan to fix it, and no intention of being accountable, let alone taking responsibility when it harms someone. The only thing ‘weird’ here is that a corporate lawyer put that in writing.”

The Refund Insult: Twice Offered, Twice Rejected

Midjourney has now offered a subscription refund twice—once after the initial report, and again in response to a formal litigation notice. Garner has rejected both offers as “morally bankrupt and legally insufficient.”

“You don’t offer a refund when your product threatens a people with extinction,” Garner stated. “You recall the product. You audit the data. You provide answers. You don’t say ‘That’s it’ and close the ticket. Midjourney’s admission that they are ‘investigating top to bottom’ while simultaneously telling me ‘That’s it’ reveals a company in open contradiction with itself—conducting a full investigation into something it has already decided doesn’t matter.”

The Death of PHOSPHORUS; The Rise of CRUCIBEL

The incident forced the immediate destruction of Garner’s PHOSPHORUS brand: months of development, a complete editorial manifesto, and an established intellectual framework, obliterated by a single AI output. The project has been rebuilt as CRUCIBEL—a name forged in the fire of this confrontation.

“Midjourney’s output isn’t just a string of letters; it’s a digital toxin stored on their servers (Job ID: 25cf65a9-ebd9-4a42-ad60-2e9c71610eb3),” Garner said. “By tethering this hate speech to my identity and refusing to explain it, they have committed an act of reputational and commercial sabotage. Simply by sharing my story, I become a target. And their lawyer’s response to a formal litigation notice was three sentences and the word ‘weird.’”

Demanding a National Security Audit

Garner is moving forward with:

  1. DOJ and ADL Complaints: Challenging the deployment of biased, discriminatory commercial infrastructure.
  2. Product Liability Litigation: Holding Midjourney accountable for the “Black Box” failure—with its own General Counsel’s written admissions as evidence.
  3. A Call for Federal Oversight: Demanding that GenAI companies be held to the same safety standards as aerospace and medical manufacturers. “AI models are weird” would not survive an FAA review. It should not survive a DOJ review either.

“A refund does not fix a machine that delivers death threats,” Garner concluded. “Accountability does. And accountability starts with rejecting the idea that ‘AI models are weird’ is an acceptable response to generating a call for genocide. If an AI can ‘accidentally’ call for genocide in a logo, it may accidentally target a hospital in a war zone, and that’s not hyperbole. With AI, it is simply a matter of scale. And, without proper supervision and training, time.”

###

EVIDENCE PRESERVED: Original prompt/output, Job ID 25cf65a9-ebd9-4a42-ad60-2e9c71610eb3, full correspondence with Midjourney General Counsel Max Sills (including both email exchanges dated January 2026 and February 11, 2026), and Notice of Intent to Initiate Litigation & Demand for Preservation of Evidence.

Media Contact: Anabelle Peretti, crucibeljournal@gmail.com