OpenAI has announced the launch of a new tool designed to identify content created by artificial intelligence.
Currently in its early stages, OpenAI’s content detection tool has shown promising results in internal testing, the company says. The classifier correctly identified approximately 98% of images created by the company’s latest image model, DALL·E 3, while incorrectly flagging less than 0.5% of non-AI-generated images as coming from DALL·E 3. However, the tool’s performance can degrade when images are modified, and its accuracy in distinguishing images generated by DALL·E 3 from those produced by other AI models is currently lower.
In addition to the detection tool, OpenAI is implementing tamper-resistant watermarking and adding C2PA metadata to images created and edited by DALL·E 3 in ChatGPT and the OpenAI API. The company plans to extend this metadata integration to its upcoming video generation model, Sora.
In another development, OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA) Steering Committee to contribute to the development of a widely used standard for digital content certification, which aims to clarify how content was made and provide information about its origins. The company is also joining Microsoft to launch a societal resilience fund, a $2 million fund expected to support AI education and understanding.
[Image courtesy: OpenAI]