Google’s generative AI failure ‘will slowly erode our trust in Google’
It was a busy Memorial Day weekend for Google (GOOG, GOOGL) as the company raced to contain the fallout from a series of wild suggestions from the new AI Overview feature on its search platform. In case you were sunbathing on the beach or eating hot dogs and drinking beer instead of scrolling through Instagram and X, let me catch you up.
AI Overviews is meant to provide generative AI answers to search queries. Normally, it does. But over the last week, users were also told they can use non-toxic glue to keep cheese from slipping off their pizza, that they can eat one rock a day, and that Barack Obama was the first Muslim president.
Google responded by removing the responses and saying it is using the errors to improve its systems. But the incidents, coupled with the disastrous launch of Google's Gemini image generator, which produced historically inaccurate images, could seriously damage the search giant's credibility.
“Google is supposed to be the premier source of information on the internet,” explained Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if that product is watered down, it will slowly erode our trust in Google.”
Google’s AI mistakes
Google’s AI Overviews issues aren’t the first time the company has run into problems since it began its generative AI push. The company’s Bard chatbot, which Google rebranded as Gemini in February, famously displayed an error in one of its answers in a promotional video in February 2023, sending Google shares lower.
Then there was the Gemini image generator software, which produced photos of diverse groups of people in historically inaccurate settings, including as German soldiers in 1943.
Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, California, on May 14, 2024. Bloopers — some funny, some disturbing — have been shared on social media since Google released a redesign of its search page that frequently places AI-generated summaries on top of search results. (AP Photo/Jeff Chiu, File) (ASSOCIATED PRESS)
AI has a history of bias, and Google has tried to overcome this by including a wider diversity of ethnicities when generating images of people. But the company overcorrected and the software ended up rejecting some requests for images of people from specific backgrounds. Google responded by temporarily taking the software offline and apologizing for the episode.
Meanwhile, Google said the AI Overviews issues arose because users were asking unusual questions. In the rock-eating example, a Google spokesperson said that “it appears a website about geology was republishing articles from other sources about this subject on its site, and that included an article that originally appeared on The Onion. AI Overviews linked to this source.”
These are good explanations, but the fact that Google continues to release products with flaws that need to be explained is getting tiresome.
“At some point, you have to defend the product you’re putting out,” said Derek Leben, associate professor of business ethics at Carnegie Mellon University’s Tepper School of Business.
“You can’t just say … ‘We’re going to build AI into all of our well-established products, and it’s also in constant beta mode, and we can’t be held responsible or even blamed for any kinds of bugs or issues it causes,’ in terms of fair trust in the products themselves.”
Google is the go-to site for finding facts online. Whenever I get into an argument about some inane topic with a friend, one of us inevitably shouts, “Okay, Google it!” And chances are you’ve done the same. Maybe not because you wanted to prove that you know some obscure Simpsons fact better than your friend, but still. The point is that Google has built a reputation for trustworthiness, and its AI mistakes are slowly undermining that.
A race to beat the competition
So why the slip-ups? Hegde says the company is simply moving too quickly, releasing products before they’re ready in an effort to outdo competitors like Microsoft (MSFT) and OpenAI.
“The pace of research is so fast that the gap between the research and the product seems to be closing significantly, and that is causing all these superficial problems,” he explained.
Google has been racing to shake off the appearance that it has fallen behind Microsoft and OpenAI since the two teamed up to launch a generative AI-powered version of Microsoft’s Bing search engine and chatbot in February 2023. OpenAI even managed to upstage Google ahead of its I/O developer conference earlier this month, announcing its powerful GPT-4o AI model the day before the show began.
But if beating the competition means releasing products that generate errors or harmful information, Google risks giving users the impression that its generative AI efforts are untrustworthy and ultimately not worth using.
Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.