Google says it fixed the AI Overviews everyone has been roasting for weeks

BGR

At I/O 2024 a few weeks ago, Google announced that AI Overviews would appear at the top of Google Search results in the US, telling users that it wants AI to do the googling for them. But AI Overviews turned out to be a huge misfire from the company that so many people trust for accurate internet search results.

AI Overviews went viral on social media for displaying nonsensical, inaccurate, and sometimes dangerous answers at the top of Search. Google Search actually suggested putting glue on pizza to make the cheese stick.

We showed you ways to avoid AI Overviews in search, and I realized that my decision to ditch Google Search a long time ago was a smart one. I will never have to deal with any of this nonsense. I also said that Google should retire the AI Overviews feature from Google Search. At the very least, the feature should be optional rather than the default.

Unsurprisingly, Google has no plans to back away from AI. Instead, it explained what has happened since I/O, why AI Overviews served up inaccurate information, and what it has done to fix them. Google also blamed you, the user, for giving AI Overviews a bad name.

Not your regular AI hallucination

I’ve warned you time and time again that AI chatbots like ChatGPT can invent false information. It’s called a hallucination, and it’s a problem nobody in the AI industry knows how to fix. Google explained in a blog post that AI Overviews do not run on regular large language models, so they don’t hallucinate in the same manner:

AI Overviews work very differently than chatbots and other LLM products that people may have tried out. They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.)
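In other words, what Google is describing is closer to retrieval-augmented generation than a free-running chatbot: rank web results first, then have the model summarize only what those results say. Here's a minimal sketch of that idea. The `rank_results` and `summarize` stand-ins are hypothetical; none of this is Google's actual code or API.

```python
# A minimal, hypothetical sketch of "grounded" answer generation, as Google's
# blog post describes it: the model summarizes top-ranked results rather than
# free-generating from training data. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class SearchResult:
    url: str
    snippet: str


def rank_results(query: str) -> list[SearchResult]:
    """Stand-in for a core web ranking system (assumption: returns top results)."""
    # A real system would query a web index; here we return canned data.
    return [
        SearchResult("https://example.com/a", "Relevant, high-quality snippet."),
        SearchResult("https://example.com/b", "Another corroborating snippet."),
    ]


def summarize(query: str, sources: list[SearchResult]) -> str:
    """Stand-in for a language model constrained to the retrieved sources."""
    cited = "; ".join(s.snippet for s in sources)
    return f"Answer to {query!r}, supported by: {cited}"


def ai_overview(query: str) -> str | None:
    results = rank_results(query)
    if not results:
        return None  # nothing to ground on, so show no overview at all
    answer = summarize(query, results)
    links = ", ".join(r.url for r in results)
    return f"{answer}\nSources: {links}"


print(ai_overview("how to make cheese stick to pizza"))
```

The catch, as the glue-on-pizza fiasco showed, is that grounding only helps if the "top web results" themselves are trustworthy.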

What went wrong

Google said it tested AI Overviews and optimized them for accuracy. But millions of people using the feature led to novel searches, including “nonsensical new searches, seemingly aimed at producing erroneous results.” Yes, Google is blaming you for causing these ridiculous errors.

Google also says that people have faked AI Overviews results. So, again, you are to blame:

Separately, there have been a large number of faked screenshots shared widely. Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.

Only after this does Google take responsibility and say “some odd, inaccurate or unhelpful AI Overviews certainly did show up.” But these showed up for “queries that people don’t commonly do.”

Apparently, no one out there is depressed.

Google confirmed something we knew all along from the AI Overviews that went viral: its AI can’t interpret “nonsensical queries and satirical content.” For example, the question “How many rocks should I eat?” produced an answer only because satirical content was virtually the only information available on the subject.

The company also addressed the pizza glue AI Overview in the blog post, blaming forums like Reddit without actually naming the site:

In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Google says it made improvements

Google is determined to keep AI Overviews at the top of Google Search, so it has started fixing them. Here are some of the things Google is improving, as listed in the blog post:

  • We built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.
  • We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.
  • We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
  • For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.
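That “triggering restrictions” item amounts to classifying a query before deciding whether an AI Overview should appear at all. Here's a rough sketch of what that gating logic could look like, with made-up categories and keyword checks standing in for whatever classifiers Google actually uses:

```python
# Hypothetical sketch of query gating: decide whether an AI Overview should
# trigger at all. Categories, keywords, and rules are illustrative assumptions,
# not Google's actual system.

BLOCKED_TOPICS = {"hard_news", "health_sensitive"}


def classify(query: str) -> set[str]:
    """Stand-in classifier; a real system would use trained models, not keywords."""
    labels: set[str] = set()
    lowered = query.lower()
    if any(word in lowered for word in ("election", "breaking")):
        labels.add("hard_news")
    if any(word in lowered for word in ("dosage", "symptoms")):
        labels.add("health_sensitive")
    if "rocks should i eat" in lowered:
        labels.add("nonsensical")
    return labels


def should_show_overview(query: str) -> bool:
    labels = classify(query)
    if labels & BLOCKED_TOPICS:
        return False  # guardrailed topics: freshness and factuality matter too much
    if "nonsensical" in labels:
        return False  # better detection for absurd or satire-bait queries
    return True


print(should_show_overview("how many rocks should I eat"))  # False
print(should_show_overview("best pizza dough recipe"))      # True
```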

Are Google’s problems with AI Overviews behind it? We have no way of knowing, but for Google’s sake, they had better be. Then again, we’re still in the early days of AI when it comes to accuracy. I wouldn’t be surprised if we keep seeing erroneous AI Overviews as Google keeps fighting to improve a feature that has so far been a laughingstock.
