Google has announced a new version of its Imagen 3 AI image generator in Gemini. In addition to the usual array of improvements and promises of prettier images, Google is once again allowing the model to generate images of people. This comes after it stopped generating people in February, when the feature's initial launch placed too much emphasis on diversity.
With the launch of Imagen 3, Google is promising better, more accurate renderings of text prompts. The AI can once again produce images of people, but it still has many limitations. The model won't create photorealistic images of identifiable people, such as celebrities and politicians, and it will block images that are “excessively gory, violent, or sexual.”
Imagen is a generative AI model, similar to OpenAI's DALL-E 3 or Stable Diffusion. After being trained on a large dataset, the model can generate an image from a text prompt. Soon after its launch, users began to notice that Google's attempts to filter out stereotypes may have gone too far: prompts that should have produced men with generally lighter skin tones instead produced images of people (sometimes women) with darker skin.
One of the most popular examples of Google's problems was asking the model to draw a 19th-century US senator; it would reliably produce women and people of color rather than the historically accurate older white men.
This bug became a cause célèbre in certain corners of the internet. Elon Musk, who is building a largely unrestricted AI for X (formerly Twitter), accused Google of being “anti-civilization,” whatever that means. Google backtracked in the face of the complaints, announcing in February that it would pause generating people with Imagen while it retooled the model. Since then, asking Gemini to produce an image with people in it has returned an error message.
Google says the updated model will roll out first to Gemini's paid users, which include those with Gemini Advanced, Business, or Enterprise accounts; it was not yet available at the time of this posting. However, you can try out the new model on Google AI Labs. In our limited testing, the new Imagen is much less afraid to generate images of white men. Google says the model still respects its core design principles, and the images are watermarked with its SynthID system to help identify them as AI-generated.
The proliferation of AI imaging tools like Imagen 3 has sparked widespread concern online, with many predicting an explosion of disinformation. That could indeed happen in the future, but these tools are currently unable to produce convincing fakes; even the best AI images still look like they were made by AI. The tools are improving rapidly, though, as evidenced by Google's quick revamp of Imagen 3, so that dystopian future may arrive sooner than we expect.