Google Pins Blame on ‘Data Voids’ for Bad AI Overviews, But You’ll See Fewer Now


Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake.

In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant's argument that AI Overviews make Google searches better overall, but also admits there were situations where they "didn't get it right."

"We made more than a dozen technical improvements to our systems," Reid says, adding that Google won't show AI Overviews for "nonsensical queries" anymore. Google began reining in and fine-tuning its responses last week, and Reid says that work is ongoing.

AI Overviews also won't show up for breaking news and will no longer appear at the top of every health-related search. Google claims it has "strong guardrails" for the AI around news and health topics, but may still allow the AI to curate responses for some medical queries.

Google says some of the strange responses, like telling users to eat at least one rock a day, are due to what it calls a "data void" or "information gap": some queries simply aren't widely documented on the internet yet. Incorrect AI results could also stem from misinterpreted queries or from the AI failing to understand "a nuance of language on the web."

Google vows not to include any satirical or humor-based results going forward and will reduce the amount of "user-generated content," like Reddit comments, cited overall. Google maintains that forum responses, while unverified and random, are still useful for AI models. This defense isn't surprising, though, considering Google paid $60 million back in February so it could keep scraping Reddit for Google's AI tools.

Reached for comment last week, Google defended its AI Overviews in a statement to PCMag. "The examples we've seen are generally very uncommon queries, and aren't representative of most people's experiences," a company spokesperson said via email. "Where there have been violations of our policies, we've taken action—and we're also using these isolated examples as we continue to refine our systems overall."


Google's bad AI Overview responses are far from the first time an AI has produced flat-out incorrect results. Google's own Gemini created historically inaccurate images, spurring controversy earlier this year, and the tech firm disabled Gemini's ability to create images of people as a result. Google still hasn't fixed this issue nearly four months later, and Gemini still won't create an image of a human being.

Because AI tools are trained on piles of correct and incorrect data and don't have "common sense" or a real conception of physical reality, no AI tool available today can be an arbiter of truth. AI Overview results should not be viewed as undeniable facts without further, independent verification. If you're looking to avoid Google's AI Overviews, there are ways to filter them out with simple "Web" tab searches, or you could try using a different search engine entirely.
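For readers who want the "Web" tab trick in practice: the plain web-results view is widely reported to be reachable by appending the undocumented `udm=14` parameter to a Google search URL, which omits AI Overviews. A minimal sketch of building such a URL, assuming that parameter keeps working (Google could change or remove it at any time):

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL restricted to the plain 'Web' tab.

    Assumption: the undocumented 'udm=14' parameter selects the Web tab,
    which currently omits AI Overviews. Google may change this behavior.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("how many rocks should I eat"))
```

Some browsers also let you register a URL in this form as a custom search engine, so every address-bar search skips AI Overviews by default.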


