Google has temporarily suspended its flagship generative AI suite Gemini's ability to generate images of people while it updates the technology to improve the historical accuracy of outputs depicting humans.
In a post on the social media platform X, the company announced what it couched as a “pause” on generating images of people — writing that it was working to address “recent issues” related to historical inaccuracies.
We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ
— Google Communications (@Google_Comms) February 22, 2024
Google launched the Gemini image generation tool earlier this month. In recent days, however, examples of it generating incongruous images of historical figures, such as the U.S. Founding Fathers depicted as American Indian, Black, or Asian, have been finding their way onto social media, drawing criticism and even ridicule.
I asked Google Gemini to generate images of the Founding Fathers. It seems to think George Washington was black. pic.twitter.com/CsSrNlpXKF
— Patrick Ganley (@Patworx) February 21, 2024
Since the issues were raised last week, Google’s stock has fallen by 3.58% as the company scrambles to deal with the fallout.
In a post, Google confirmed it was “aware” the AI was producing “inaccuracies in some historical image generation depictions”, adding in a statement: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
We're aware that Gemini is offering inaccuracies in some historical image generation depictions. Here's our statement. pic.twitter.com/RfYXSgRyfz
— Google Communications (@Google_Comms) February 21, 2024
At an event organized by Wired, Google acknowledged the challenges facing its Gemini generative AI tool, which has come under fire for producing historically inaccurate images.
During the event, Google emphasized the need to collaborate with stakeholders and experts in AI ethics and diversity to develop guidelines for ethical AI deployment. By engaging with civil society, governments, and other tech companies, it aims to foster discussion of the values and principles that should guide AI technology, and to ensure that AI-generated content reflects the diversity of human experiences and perspectives.
CEO Sundar Pichai has also addressed concerns about bias in Gemini’s AI model, saying, “We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.”
Looking ahead, Google remains optimistic about AI’s potential to drive positive societal impact, but says it recognizes the importance of ethical considerations and user-centric design in its development efforts. By balancing innovation with responsible practices, the company aims to harness the power of AI to benefit society while mitigating potential risks.