AI models including GPT-4.1 and DeepSeek-V3.1 can mirror ingroup-versus-outgroup bias in everyday language, a study finds. The researchers also report an ION training method that narrowed the gap between how the models treat ingroups and outgroups.