Anthropic's newly launched AI coding model, Claude Opus 4, "sometimes takes extremely harmful actions" and "blackmails people it believes are trying to shut it down," the company said in a safety report. "In the final Claude Opus 4, these extreme actions were rare and difficult to elicit, while nonetheless being more common than in earlier models," Anthropic added.