Anthropic has new rules for a more dangerous AI landscape

15 August 2025 · 2 Mins Read

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the tweaks to its weapons policy in the post summarizing the changes, but a comparison between the company’s old usage policy and the new one reveals a notable difference. While Anthropic previously prohibited the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life,” the updated version goes further, specifically prohibiting the development of high-yield explosives as well as chemical, biological, radiological, and nuclear (CBRN) weapons.

In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.

The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that its requirements for “high-risk” use cases, which apply when Claude is used to make recommendations to individuals or customers, cover only consumer-facing scenarios, not business uses.
