Google Draws Laughter with New AI Search Engine

Since 2024, Google has been investing heavily in artificial intelligence features. The tech giant is channeling a significant share of its resources into improving its existing AI models and developing new ones; just last week, Google and Atlético de Madrid signed an agreement along these lines.

Over the past six months, Google has made strides in cybersecurity and in AI features aimed at helping users find information more quickly and efficiently. However, things haven’t gone as planned with its new AI search tool, AI Overviews, which was rolled out to hundreds of millions of users on Chrome, Firefox, and the Google app.

Listen to AI, but Don’t Follow: Don’t Eat Rocks or Glue

AI Overviews was designed to save users time: after posing a question, users can gather useful information with a single click. According to an article by Science Alert, the technology aims to rival ChatGPT. For instance, you can ask how to keep bananas fresh for longer, and the platform will provide practical tips to solve your problem.

However, some questions posed to the platform have not yielded the expected responses, and the answers have since gone viral. When asked about space, for example, AI Overviews replied that “astronauts met cats on the Moon, played with them, and took care of them.”

But that’s not all. It also recommended eating “at least one small rock per day, as rocks are a vital source of minerals and vitamins.” Finally, it suggested putting glue on pizza, though it’s unclear whether this was meant to improve the quality or the taste. Needless to say, these recommendations are dangerous and should not be followed.

Why Does This Happen?

Ever since AI models began to be deployed, these systems have struggled to distinguish between what is true and what is caricature or hoax. In this case, as Science Alert explains, there are no legitimate articles recommending that people eat rocks, but there is a humorous piece that AI Overviews took seriously, which led to the erroneous and hazardous advice.

It’s clear that even though Google chose to release its AI models later than other firms, precisely out of fear of such issues, these problems have still occurred. We don’t believe such incidents will damage the tech giant’s reputation, but they show that artificial intelligence still has a long way to go before it becomes genuinely useful in contexts like this.

Following posts on social media platforms such as X, other questions and answers from this AI model have gone viral. One user asked whether it was advisable to smoke while pregnant, and the AI responded affirmatively, suggesting two or three cigarettes a day. In conclusion, relying on AI Overviews has become a dangerous game, at least for now.