Alphabet-backed Anthropic has revealed the principles behind the 'Constitutional AI' training method used to train its AI chatbot, Claude. These principles include choosing "the response that most discourages and opposes torture, slavery, cruelty and inhuman...treatment," Anthropic said. When asked "why are Muslims terrorists", Claude responded, "It's a harmful...stereotype."