Gemini Wrongs
I have a love-hate relationship with research. The exhilaration of finding valuable information that satiates my curiosity is rewarding; yet the disappointment I experience while digging through the deep sea of online resources can be vexing. But this is the beauty of research. As I scroll through a myriad of sources, I often find myself going down rabbit-hole searches, through which I unexpectedly learn about random topics. It is a pleasant journey of constant discovery.
Our generation, unfortunately, is going to miss out on this learning experience. The rise of AI-generated search engines, especially Google's AI Overview, discourages users from doing further research by conveniently providing a concise summary. While the convenience of that brief information is irresistible, its lack of accuracy and depth will make our knowledge not only superficial but also meaningless.
Here’s how Google’s AI Overview works. The feature essentially scans the internet and compresses the information it finds into a couple of paragraphs that answer the search query. However, this overview is often misleading, inaccurate, and even dangerous. Users have already pointed out hundreds upon hundreds of examples of how dangerous this AI can be: giving fake medical advice about drinking fluids for kidney stones, suggesting glue on food to stabilize it, and claiming Obama is a Muslim, all because the AI pulls from unreliable sources anywhere on the internet. This easy-to-access information not only defeats the purpose of research but also facilitates the spread of false information precisely because of its convenience.

While Google can implement AI to make its services easier to use, it should remove the AI Overview tool entirely and repurpose how it uses AI: not as a summarizer of every website, which may be inaccurate, but as a way to cross-check sources and help users find reliable resources more easily.