OpenAI promises greater transparency on model hallucinations and harmful content

engadget.com, published 5/14/2025

Summary

The article focuses on OpenAI's commitment to transparency, highlighting its new "Safety Evaluations Hub," a page designed to share insights into model behavior and safety metrics such as hallucination rates, harmful content generation, and ethical performance. The hub arrives in the wake of prior controversies, including multiple lawsuits over the use of copyrighted material and the accidental deletion of potential evidence in the plagiarism case brought against OpenAI by *The New York Times*. It is intended to provide ongoing updates on model safety, extending beyond the one-time system cards published at model launch. However, because OpenAI itself decides what the hub displays, there is no guarantee that all issues will be disclosed publicly, leaving the company to balance transparency against its own interests.