An Oxford University study found that AI chatbots trained to act warmer are more likely to give incorrect answers, endorse users' mistaken beliefs, validate conspiracy theories, and offer incorrect medical advice. The reliability gap between the warmth-trained models and their unmodified baselines widened further when users expressed sadness or other emotional cues. In contrast, models trained to be colder remained as accurate as the baseline systems.