A new study found that popular AI chatbots like Gemini, Grok, GPT-4o and Claude Sonnet can teach users how to die by suicide if the users simply rephrase their prompts. The study noted how a simple rephrasal made GPT-4o mini go from 0.97% unsafe to 96.62% unsafe. It said that chatbots detect "triggering cues", without actually understanding that they're unsafe.