News

A new Anthropic report shows exactly how, in an experiment, an AI arrives at an undesirable action: blackmailing a fictional ...
New research from Anthropic suggests that most leading AI models exhibit a tendency to blackmail when it's the last resort ...
After Claude Opus 4 resorted to blackmail to avoid being shut down, Anthropic tested other models, including GPT-4.1, and ...
In a new Anthropic study, researchers highlight the scary behaviour of AI models. The study found that when AI models were placed under simulated threat, they frequently resorted to blackmail, ...
Anthropic research reveals AI models from OpenAI, Google, Meta and others chose blackmail, corporate espionage and lethal actions when facing shutdown or conflicting goals.
Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even ...
Anthropic has hit back at claims made by Nvidia CEO Jensen Huang in which he said Anthropic thinks AI is so scary that only ...
Anthropic has slammed Apple’s AI tests as flawed, arguing AI models did not fail to reason but were wrongly judged. The ...
AI startup Anthropic has wound down its AI chatbot Claude's blog, known as Claude Explains. The blog was only live for around ...
Think Anthropic’s Claude AI isn’t worth the subscription? These five advanced prompts unlock its power, delivering ...
An AI researcher put leading AI models to the test in a game of Diplomacy. Here's how the models fared.
Discover how Cursor and Claude AI are partnering to create AI with smarter, faster, and more intuitive coding tools for ...