OpenAI has introduced new tools to identify and track content generated by its DALL-E AI image generator. In a recent blog post, the company described new methods for verifying whether a piece of content was created with AI.
One of these methods is an image detection classifier that uses AI to determine whether a photo was generated by DALL-E 3. The classifier is designed to remain effective even when an image is altered, for example by cropping or changes in colour saturation. It identifies images created by DALL-E 3 with approximately 98% accuracy, but performs far worse on content from other AI models such as Midjourney, flagging only 5 to 10% of those images.
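To make the robustness claim concrete, the sketch below shows how a researcher with access might probe such a classifier against the perturbations OpenAI mentions. Note that `detect_probability` is a hypothetical stand-in, since OpenAI has not published a public API for the classifier; only the crop and saturation transforms reflect details from the announcement.

```python
# Illustrative sketch only: `detect_probability` is a hypothetical stand-in
# for whatever interface researchers receive via OpenAI's research access
# programme. The transforms mirror the perturbations the blog post says the
# classifier tolerates (cropping, changes in colour saturation).
from PIL import Image, ImageEnhance

def detect_probability(image: Image.Image) -> float:
    """Hypothetical classifier call returning P(image was made by DALL-E 3)."""
    raise NotImplementedError("stand-in for the real research-access classifier")

def robustness_check(path: str) -> dict[str, float]:
    """Score the original image and lightly perturbed copies of it."""
    original = Image.open(path).convert("RGB")
    w, h = original.size
    variants = {
        "original": original,
        # Keep only the central 80% of the image.
        "cropped": original.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10)),
        # Halve the colour saturation.
        "desaturated": ImageEnhance.Color(original).enhance(0.5),
    }
    return {name: detect_probability(img) for name, img in variants.items()}
```

A robust classifier, as described in the announcement, should return similar scores for all three variants.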
Additionally, the company has implemented tamper-resistant watermarking techniques to mark content generated by its AI platforms, such as Voice Engine, a text-to-speech platform. These watermarks, imperceptible to a human listener or viewer, carry information about the content's origin and ownership, supporting greater transparency and accountability.
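OpenAI has not disclosed how its watermarks work, so the following is only a toy illustration of the general idea of an inaudible mark: hiding a provenance string in the least significant bits of 16-bit audio samples. A production scheme would be far more tamper-resistant than this.

```python
# A minimal illustration of invisible watermarking, NOT OpenAI's technique
# (which is undisclosed): hide a provenance string in the least significant
# bit of int16 audio samples. Trivially removable, unlike a real scheme.
import numpy as np

def embed(samples: np.ndarray, message: bytes) -> np.ndarray:
    """Write the message bits into the LSBs of int16 audio samples."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("audio too short for message")
    out = samples.copy()
    out[: bits.size] = (out[: bits.size] & ~1) | bits  # clear LSB, set bit
    return out

def extract(samples: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the LSBs."""
    bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

audio = np.zeros(1024, dtype=np.int16)  # stand-in for real speech samples
marked = embed(audio, b"origin:voice-engine")
assert extract(marked, 19) == b"origin:voice-engine"
```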
Furthermore, OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA), alongside companies like Microsoft and Adobe. Through this collaboration, OpenAI aims to enhance content credibility by embedding Content Credentials, a watermark-like provenance record, into image metadata.
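For readers who want to inspect Content Credentials themselves, one possible route is the open-source c2patool CLI published by the Content Authenticity Initiative (https://github.com/contentauth/c2patool), which prints a file's C2PA manifest store as JSON. The sketch below assumes that tool is installed; the file name is hypothetical.

```python
# Hedged sketch: inspect C2PA Content Credentials via the open-source
# `c2patool` CLI, which prints a file's manifest store as JSON when run
# with no flags. This is a generic C2PA tool, not an OpenAI product.
import json
import subprocess

def read_credentials(path: str) -> dict:
    """Return the parsed C2PA manifest store for a file, if present."""
    result = subprocess.run(
        ["c2patool", path], capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)

manifests = read_credentials("dalle_output.png")  # hypothetical file name
print(manifests.get("active_manifest"))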
While these tools are still undergoing refinement, OpenAI is seeking feedback from users, including researchers and nonprofit journalism organizations, to assess their effectiveness. Interested parties can test the image detection classifier through OpenAI's research access platform.