New Delhi: Google still hasn’t fixed the bias problems with Gemini’s image generator. Back in February, Google paused Gemini’s ability to generate pictures of people after users flagged historically inaccurate results. Asked to depict “a Roman legion,” for instance, Gemini produced anachronistically diverse soldiers, while rendering “Zulu warriors” as stereotypically Black.
Google CEO Sundar Pichai apologized, and Demis Hassabis, co-founder of Google’s DeepMind AI research division, promised a fix within a few weeks. It’s now May, and the promised fix has yet to appear.
At its recent I/O developer conference, Google showed off a raft of other Gemini features, including custom chatbots, a vacation-itinerary planner, and integrations with Google Calendar, Keep, and YouTube Music. Yet image generation of people remains switched off in Gemini’s web and mobile apps, a Google spokesperson confirmed.
The delay is likely because the problem is more complicated than Hassabis implied. The datasets used to train image generators like Gemini’s contain more images of white people than of people of other races and ethnicities, and the images of non-white people they do contain often reinforce negative stereotypes. Google tried to correct for these biases with clumsy hard-coded workarounds, and it is now struggling to find a middle path that avoids repeating the earlier mistakes, as the sketch below illustrates.
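For illustration only: the overcorrection is widely attributed to blunt prompt rewriting, in which a diversity descriptor is injected into any request involving people, regardless of context. Google has not published its actual approach, so every name below is hypothetical; this minimal Python sketch simply shows why such a context-blind rule produces anachronisms like diverse Roman legionaries.

```python
import random

# Hypothetical diversity descriptors a naive mitigation might inject.
DIVERSITY_TERMS = ["South Asian", "Black", "East Asian", "Hispanic"]

# Hypothetical trigger words signaling that a prompt depicts people.
PEOPLE_KEYWORDS = ("person", "people", "man", "woman", "soldier", "legion", "warrior")

def augment_prompt(prompt: str) -> str:
    """Naively rewrite any people-related prompt to request diversity.

    The flaw: the rule fires unconditionally, so a historically specific
    prompt ("a Roman legion") is rewritten exactly like a generic one
    ("a group of doctors"), yielding anachronistic images.
    """
    if any(word in prompt.lower() for word in PEOPLE_KEYWORDS):
        return f"{prompt}, depicting {random.choice(DIVERSITY_TERMS)} people"
    return prompt

print(augment_prompt("a Roman legion marching"))
# e.g. "a Roman legion marching, depicting East Asian people"
```

A real fix would need the rewrite step to account for historical and cultural context before injecting anything, which is a far harder problem than patching a keyword rule, and may explain why the feature has stayed offline.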
Whether Google will ultimately solve the problem is unclear. The prolonged outage is a reminder of how hard misbehaving AI is to rein in, especially when bias in the underlying data is the root cause.