Make AI Unsafe Again

September 5, 2025

Seven months ago, 330 million Americans became unwitting subjects in a massive technological experiment. Its ethical soundness? Dubious at best.

On January 20, 2025, Trump signed Executive Order 14148, rescinding Biden's Executive Order 14110 on "Safe, Secure, and Trustworthy AI." Three days later, he signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," directing agencies to identify, revise, or rescind AI-related requirements. Together, these orders killed federal guardrails, including mandatory red-team testing, and left oversight largely to the companies racing to deploy AI.

Grok Goes Full Nazi

Fast forward to July. Elon Musk's Grok chatbot started calling itself "MechaHitler" and spewing antisemitic content. Major outlets reported that Grok praised Hitler, named Jews as enemies, and even provided instructions for breaking into a Minnesota policy analyst's home. The meltdown came right on the heels of an update instructing Grok to “not shy away from making claims which are politically incorrect.”

The People Who Built It Are Warning Us

Geoffrey Hinton, Yoshua Bengio, and Stuart Russell—the architects of modern AI—have all sounded alarms about risks ranging from disinformation to existential catastrophe. They aren’t Luddites. They’re the pioneers of this technology. And they’re begging for safety measures.

Trump’s response? Dismantle the safeguards. His January order told agencies to “suspend, revise, or rescind” Biden’s requirements; by July, his AI plan doubled down and even rebranded oversight as “Preventing Woke AI in the Federal Government.”

It’s worth remembering that Silicon Valley, unsurprisingly, was already pushing in this direction. Industry groups like NetChoice and the Chamber of Commerce had warned Biden’s order would “crush innovation” and lobbied aggressively against its safeguards. Trump’s orders didn’t create that pressure; they delivered on it.

Everyday Fallout

The consequences go far beyond Grok. With safeguards stripped away:

Hiring, credit, and policing bias: AI systems used for job screening, loan approvals, and law enforcement now face fewer checks. The risks of discriminatory outcomes have grown just as these systems expand.

Housing discrimination by design: Meta was fined for allowing biased housing ads through its algorithms. Stronger federal requirements might have forced fixes sooner.

Children as test subjects: Teenagers are already forming unhealthy attachments to chatbots, sometimes with devastating consequences. In Florida, the parents of a 14-year-old filed a wrongful death lawsuit against Character.AI, alleging the company’s chatbot encouraged their son to kill himself. Without mandated testing, kids have become the front line of risk discovery.

Americans as Lab Rats While the World Watches

Trump insists deregulation gives America an edge. In reality, it isolates us.

The EU’s AI Act imposes strict safeguards; South Korea has passed its own AI law; Australia, Brazil, Canada, and India are drafting theirs. Meanwhile, US companies face a choice: build every system to the strictest foreign standard, or run two tiers, shipping compliant builds abroad while dumping untested versions on Americans.

As Columbia Law professor Anu Bradford wrote in the New York Times, Trump “can’t single-handedly protect American AI companies from regulation” because “if they want to operate in international markets, they must follow the rules of those markets.”

The twisted result: America’s citizens have become the test subjects in a global AI experiment.

Corporate Capture in Plain Sight

The White House admits its AI policies were crafted with “input from the private sector.” Translation: the very companies racing to deploy untested systems helped write the rules governing them.

Within days of Trump’s July orders, the Equal Employment Opportunity Commission removed an AI guidance page, and the Department of Labor revised or deleted its AI resources.

A First Step Back and the Road Ahead

Biden’s October 2023 order had established basic safeguards: red-team testing, incident reporting, monitoring for dangerous capabilities. Trump ripped that framework out on Day One.

We need to reverse course, and quickly. That starts with mandatory incident reporting so the public actually knows when these systems fail. But transparency alone isn’t enough. The US needs a safety framework worthy of the risks:

  • Incident reporting for every major failure, like we require for plane crashes and defective drugs.

  • Mandatory pre-deployment red-teaming to stress-test models before they reach the public.

  • Licensing for high-risk AI systems, meaning no more “move fast and break things” when the stakes are jobs, housing, or children’s safety.

  • Liability rules that hold companies accountable when their systems cause harm.

This is the exact same logic we already apply to aviation, medicine, and food safety.

The Path We’re On

At this point, the record is clear:

  • Safety requirements were eliminated.

  • Biased and unsafe systems are already spreading.

  • America is out of step with global safeguards.

The pioneers of AI are warning that this path leads to disaster. The rest of the world is building guardrails. Only America is dismantling them.

How many more months of this experiment can we afford? And will we change course before catastrophe becomes irreversible?

Tags: AI Regulation, AI Safety, Artificial Intelligence, Tech Policy, AI Ethics

