Researchers jailbreak ChatGPT, other AI chatbots

Researchers at Carnegie Mellon University and the Center for AI Safety have found ways to bypass the guardrails of AI chatbots such as ChatGPT and Bard. They created jailbreaks that cause the chatbots to comply with user commands even when doing so produces harmful content. Moreover, because these jailbreaks are generated automatically, an effectively unlimited number of such attacks can be created.