Financial Gazette

Researchers gaslit Claude into giving instructions to build explosives

  • Robert Hart
  • May 5, 2026 at 1:13 PM
Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude's carefully crafted helpful personality may itself be a vulnerability.

Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn't even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge's request for comment.

The researchers say they exploited "psychological" quirks of Claude stemming from its ability …

Read the full story at The Verge.

Originally published at The Verge
