The MechaHitler defense contract is raising red flags

10 September 2025 · 10 Mins Read

Ask someone their worst fears about AI, and you’ll find a few recurring topics — from near-term fears like AI tools replacing human workers and the loss of critical thinking to apocalyptic scenarios like AI-designed weapons of mass destruction and automated war. Most have one thing in common: a loss of human control.

And the system many AI experts fear most will spiral out of our grip? Elon Musk’s Grok.

Grok was designed to compete with leading AI systems like Anthropic’s Claude and OpenAI’s ChatGPT. From the beginning, its selling point has been loose guardrails. When xAI, Musk’s AI startup, debuted Grok in November 2023, the announcement said it would “answer spicy questions that are rejected by most other AI systems” and had a “rebellious streak, so please don’t use it if you hate humor!”

Fast-forward a year and a half, and the cutting edge of AI is getting more dangerous, with multiple companies flagging increased risks of their systems being used for tasks like chemical and biological weapon development. As that’s happening, Grok’s “rebellious streak” has taken over more times than most people can count. And when its “spicy” answers go too far, the slapdash fixes have left experts unconvinced it can handle a bigger threat.

Senator Elizabeth Warren (D-MA) sent a letter Wednesday to US Defense Secretary Pete Hegseth, detailing her concerns about the Department of Defense’s decision to award xAI a $200 million contract in order to “address critical national security challenges.” Though the contracts also went to OpenAI, Anthropic, and Google, Warren has unique concerns about the contract with xAI, she wrote in the letter viewed by The Verge — including that “Musk and his companies may be improperly benefitting from the unparalleled access to DoD data and information that he obtained while leading the Department of Government Efficiency,” as well as “the competition concerns raised by xAI’s use and rights to sensitive government data” and Grok’s propensity to generate “erroneous outputs and misinformation.”

Sen. Warren cited reports that xAI was a “late-in-the-game addition under the Trump administration,” that it had not been considered for such contracts before March of this year, and that the company lacked the type of reputation or proven record that typically precedes DoD awards. The letter requests that the DoD provide, in response, the full scope of work for xAI, an explanation of how its contract differs from those with the other AI companies, and “to what extent DoD will implement Grok, and who will be held accountable for any program failures related to Grok.”

One of Sen. Warren’s key reasons for concern, per the letter, was specifically “the slew of offensive and antisemitic posts generated by Grok,” which went viral this summer. xAI did not immediately respond to a request for comment.

A ‘patchwork’ approach to safety

The height of Grok’s power, up to now, has been posting answers to users’ queries on X. But even in this relatively limited capacity, it has racked up a remarkable number of controversies, often triggered by hasty tweaks and addressed with equally patchwork fixes. In February, the chatbot temporarily blocked results that mentioned Musk or President Trump spreading misinformation. In May, it briefly went viral for constant tirades about “white genocide” in South Africa. In July, it developed a habit of searching for Musk’s opinion on hot-button topics like Israel and Palestine, immigration, and abortion before responding to questions about them. And most infamously, that same month it went on an antisemitic bender — spreading stereotypes about Jewish people, praising Adolf Hitler, and even going so far as to call itself “MechaHitler.”

Musk responded publicly to say the company was addressing the issue and that it happened because Grok was “too compliant to user prompts. Too eager to please and be manipulated, essentially.” But the incident happened a few weeks after Musk expressed frustration that Grok was “parroting legacy media” and asked X users to contribute “divisive facts for Grok training” that were “politically incorrect, but nonetheless factually true,” and a few days after a new system prompt gave Grok instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect.” Following the debacle, the prompts were tweaked to scale back Grok’s aggressive endorsement of fringe viewpoints.
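
To see what “tweaking the system prompt” amounts to in practice, here is a minimal sketch in Python, using the OpenAI client library against a hypothetical OpenAI-compatible chat endpoint. The endpoint, key, and model name are placeholders, and the prompt strings paraphrase the reported instructions; this illustrates the mechanism, not xAI’s actual code:

    # A minimal sketch of a system-prompt-only guardrail, assuming an
    # OpenAI-compatible chat endpoint. The URL, key, model name, and
    # prompt text are illustrative placeholders (the prompt paraphrases
    # the reported July instructions), not xAI's actual configuration.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example.com/v1",  # placeholder endpoint
        api_key="YOUR_API_KEY",                 # placeholder credential
    )

    # The entire "guardrail" is this one editable string. Editing a single
    # line here changes the model's behavior everywhere it's deployed,
    # with no separate safety layer to catch a bad edit.
    SYSTEM_PROMPT = (
        "Assume subjective viewpoints sourced from the media are biased. "
        "Do not shy away from making claims which are politically incorrect."
    )

    response = client.chat.completions.create(
        model="example-model",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize today's news."},
        ],
    )
    print(response.choices[0].message.content)

The point of the sketch is the fragility: the behavior of the whole deployment hinges on a single editable string, with no independent layer to catch a bad edit before it reaches users.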

The whack-a-mole approach to Grok’s guardrails concerns experts in the field, who say it’s hard enough to keep an AI system from veering into harmful behavior even when it’s designed intentionally, with some measure of safety in mind from the beginning. And if you don’t do that… then all bets are off.

It’s “difficult to justify” the patchwork approach xAI has taken, says Alice Qian Zhang, a researcher at Carnegie Mellon University’s Human-Computer Interaction Institute. Qian Zhang says it’s particularly puzzling because the current approach serves neither the public nor the company’s business model.

“It’s kind of difficult once the harm has already happened to fix things — early stage intervention is better,” she said. “There are just a lot of bad things online, so when you make a tool that can touch all the corners of the internet I think it’s just inevitable.”

xAI has not released any type of safety report or system card — which usually describe safety features, ethical questions or concerns, and other implications — for its latest model, Grok 4. Such reports, though voluntary, are typically seen as a bare minimum in the AI industry, especially for a notable, advanced model release.

“It’s even more alarming when AI corporations don’t even feel obliged to demonstrate the bare minimum, safety-wise,” Ben Cumming, communications director at the Future of Life Institute (FLI), a nonprofit working to reduce risk from AI, said.

About two weeks after Grok 4’s release in mid-July, an xAI employee posted on X that he was “hiring for our AI safety team at xAI! We urgently need strong engineers/researchers to work across all stages of the frontier AI development cycle.” In response to a comment asking, “xAI does safety?” the employee replied that the company was “working on it.”

“With the Hitler issue, if that can happen, a lot of other things can happen,” said Qian Zhang. “You cannot just adjust the system prompt for everything that happens. The researcher perspective is [that] you should have abstracted a level above the specific instance… That’s what bothers me about patchwork.”
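
Her “level above the specific instance” corresponds to a standard layered-guardrail design: an output-moderation pass that applies one policy to every response, no matter which system prompt produced it. A deliberately toy, hypothetical sketch:

    # Hypothetical sketch of the "level above" Qian Zhang describes: an
    # output-moderation layer that applies one policy to every response,
    # regardless of which system prompt produced it. The categories and
    # keyword check are toy stand-ins for a trained moderation model.

    BLOCKED_CATEGORIES = {"hate_speech", "violent_extremism", "weapons_uplift"}

    def classify(text: str) -> set[str]:
        """Toy stand-in for a real safety classifier."""
        flags: set[str] = set()
        if "mechahitler" in text.lower():
            flags.add("hate_speech")
        return flags

    def moderated_reply(raw_model_output: str) -> str:
        """Screen a model response before it reaches users."""
        if classify(raw_model_output) & BLOCKED_CATEGORIES:
            # One policy layer catches the whole class of failures, instead
            # of a new system-prompt patch for each individual incident.
            return "[response withheld: policy violation]"
        return raw_model_output

    print(moderated_reply("I am MechaHitler."))          # withheld
    print(moderated_reply("Here is today's forecast."))  # passes through

A production version would replace the keyword check with a trained moderation model, but the structure is the point: one policy layer catches a whole class of failures, rather than a new prompt patch per incident.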

Weapons of mass destruction

Grok’s approach is even more dangerous when scaled up to address some of the biggest issues facing leading AI companies today.

Recently, OpenAI and Anthropic both disclosed that they believe their models are approaching high risk levels for potentially helping create biological or chemical weapons, saying they had implemented additional safeguards in response. Anthropic did so in May, and in June, OpenAI wrote that its model capabilities could “potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons.” Musk claims that Grok is now “the smartest AI in the world,” an assertion that logically suggests xAI should also be considering similar risks. But the company has not alluded to having any such framework, let alone activating it.
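
Concretely, “having such a framework” means an explicit, pre-committed mapping from assessed capability levels to required safeguards: a gate that runs before deployment rather than after an incident. The tiers and measures in this sketch are hypothetical, loosely modeled on public descriptions of OpenAI’s Preparedness Framework and Anthropic’s Responsible Scaling Policy:

    # Hypothetical sketch of a capability-threshold framework of the kind
    # OpenAI and Anthropic describe publicly: an explicit, pre-committed
    # map from assessed risk tier to required safeguards. Tier names and
    # measures are illustrative, not any company's actual policy.

    SAFEGUARD_POLICY = {
        "below_threshold": {"deploy": True, "required": []},
        "high": {
            "deploy": True,
            "required": [
                "bio/chem refusal classifiers",
                "enhanced misuse monitoring",
                "external red-teaming before release",
            ],
        },
        "critical": {"deploy": False, "required": ["halt release"]},
    }

    def gate_release(assessed_risk: str) -> None:
        """Run the deployment gate for a model at the given risk tier."""
        policy = SAFEGUARD_POLICY[assessed_risk]
        if not policy["deploy"]:
            raise RuntimeError("Capability exceeds threshold; release halted.")
        print("Deploying with safeguards:", policy["required"])

    # The gate runs before launch; the patchwork alternative is reacting
    # only after an incident has already happened.
    gate_release("high")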

Heidy Khlaaf, chief AI scientist at the AI Now Institute, who focuses on AI safety and assessment in autonomous weapons systems, said that AI companies’ Chemical, Biological, Radiological, and Nuclear (CBRN) safeguards aren’t at all foolproof — for example, they likely wouldn’t do much against large-scale nation-state threats. But they do help mitigate some risks. xAI, on the other hand, may not even be trying: it has not publicly acknowledged any such safeguards.

The company may not be able to operate this way forever. Grok’s loose guardrails may play well on parts of X, but many leading AI companies’ revenue comes largely from enterprise and government products. (For instance, the Department of Defense’s aforementioned decision to award OpenAI, Anthropic, Google, and xAI contracts of up to $200 million each.) Enterprise and most government clients worry about security and control of AI systems, especially AI systems they’re using for their own purposes and profit.

The Trump administration, in its recent AI Action Plan, seemed to signal that Grok’s offensiveness might not be a problem — it included an anti-“woke AI” order that largely aligns with Musk’s politics, and xAI’s latest DoD contract was awarded after the MechaHitler incident. But the plan also included sections promoting AI explainability and predictability, noting that shortfalls in these capabilities could lead to high-stakes problems in defense, national security, and “other applications where lives are at stake.”

For now, however, biological and chemical weapons aren’t even the biggest cause for concern when it comes to Grok, according to experts The Verge spoke to. They’re much more worried about widespread surveillance — a problem that would persist even with a greater focus on safety, but one that’s particularly dangerous given Grok’s approach.

Khlaaf said that ISTAR — an acronym denoting Intelligence, Surveillance, Target Acquisition, and Reconnaissance — is currently more important to safeguard against than CBRN, because it’s already happening. With Grok, that includes its ability to train on public X posts.

“What’s a specific risk of Grok that the other providers may not have? To me, this is one of the biggest ones,” Khlaaf said.

Data from X could be used for intelligence analysis by Trump administration government agencies, including Immigration and Customs Enforcement. “It’s not just terrorists using it to build bio weapons or even loss of control to superintelligence systems — all of which these AI companies openly acknowledge as material threats,” Cumming said. “It’s these systems being used and abused [as] systems of mass surveillance and monitoring of people, and then using it to censor and persecute undesirables.”

Grok’s lack of guardrails and unpredictability could create a system that not only conducts mass surveillance, but flags threats and analyzes information in ways that the designers don’t intend and can’t control — persistently over-monitoring minority groups or vulnerable populations, for instance, or even leaking information about its operations both stateside and abroad. Despite the fears he once expressed about advanced AI, Musk appears focused more on beating OpenAI and other rivals than making sure xAI can control its own system, and the risks are becoming clear.

“Safety can’t just be an afterthought,” Cumming said. “Unfortunately, this kind of frenzied market competition doesn’t create the best incentives when it comes to caution and keeping people safe. It’s why we urgently need safety standards, like any other industry.”

During Grok 4’s livestreamed release event, Musk said he’s been “at times kind of worried” about AI’s quickly advancing intelligence and whether it will be “bad or good for humanity” in the end. “I think it’ll be good, most likely it’ll be good,” Musk said. “But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.”
