Google suspends Gemini AI chatbot’s ability to generate pictures of people

FILE - Google logos are shown when searched on Google in New York, Sept. 11, 2023. Google said Thursday, Feb. 22, 2024, it’s temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for “inaccuracies” in historical depictions that it was creating. (AP Photo/Richard Drew, File)

Google said Thursday it is temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for “inaccuracies” in historical depictions that it was creating.

Gemini users this week posted screenshots on social media of historically white-dominated scenes with racially diverse characters that they say it generated, leading critics to raise questions about whether the company is over-correcting for the risk of racial bias in its AI model.

“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a post on the social media platform X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Previous studies have shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters are more likely to show lighter-skinned men when asked to generate a person in various contexts.

Google said on Wednesday that it’s “aware that Gemini is offering inaccuracies in some historical image generation depictions” and that it’s “working to improve these kinds of depictions immediately.”

The company said Gemini’s ability to generate a “wide range of people” is “generally a good thing” because people around the world use the system, but that it is “missing the mark.”

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said he’s in favor of Google pausing the generation of people’s faces but is a “little conflicted about how we got to this outcome.” Contrary to claims of so-called “white erasure” and the premise that Gemini refuses to generate faces of white people — ideas circulating on social media this week — Ghosh’s research has largely found the opposite.

“The rapidness of this response in the face of a lot of other literature and a lot of other research that has shown traditionally marginalized people being erased by models like this — I find a little difficult to square,” he said.

When the AP asked Gemini to generate pictures of people, or even just a big crowd, it responded by saying it’s “working to improve” the ability to do so. “We expect this feature to return soon and will notify you in release updates when it does,” the chatbot said.

Ghosh said it’s likely that Google can find a way to filter responses to reflect the historical context of a user’s prompt, but solving the broader harms posed by image-generators built on generations of photos and artwork found on the internet requires more than a technical patch.

“You’re not going to overnight come up with a text-to-image generator that does not cause representational harm,” he said. “They are a reflection of the society in which we live.”

Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
