You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this).
Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”
God I’m fucking sick of this loss-leading, speculative-investment bullshit. It’s hit some bizarre zenith and infected everybody in the tech world, but nobody has any actual intention of being practical about making money or about the functionality of the product. I feel like we should just can the whole damned thing and start again.
Legitimately, yes. I say this as an ML-adjacent engineer. Neural networks need to be rewritten from the ground up with support for confidence intervals.
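For what it’s worth, you can already fake a confidence signal on top of an ordinary network without rewriting anything, e.g. Monte Carlo dropout: run the same input through the net a bunch of times with dropout left on and look at the spread. Rough sketch (PyTorch, toy model, all names made up):

```python
import torch
import torch.nn as nn

# Toy classifier with dropout; the dropout layers are what make the trick work.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 3),
)

def predict_with_confidence(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout active at inference time,
    sample several forward passes, and treat the spread across samples
    as a (rough) confidence interval."""
    model.train()  # leaves dropout enabled on purpose
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    model.eval()
    return probs.mean(dim=0), probs.std(dim=0)  # mean prediction, uncertainty

x = torch.randn(1, 16)               # one made-up input
mean, spread = predict_with_confidence(model, x)
print(mean, spread)                  # big spread = the model is guessing
```

If the spread is huge, the honest answer is “I don’t know,” which is exactly the answer these products refuse to give.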
The SAT added a “guessing penalty” to stop people from answering questions they didn’t know the answer to. That’s exactly what we need to do when training ML models.
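You could even bake the guessing penalty straight into the loss: give the model an explicit “skip” output and score it SAT-style, so a confident wrong answer costs more than abstaining. Purely a hypothetical sketch, same toy setup as above:

```python
import torch
import torch.nn.functional as F

def sat_style_loss(logits, targets, wrong_penalty=0.25):
    """Negative expected SAT-style score (negated so we can minimize it).
    logits: (batch, num_classes + 1); the last column is an explicit
    'skip this question' option worth 0 points.
    Correct answer: +1 point. Wrong answer: -wrong_penalty. Skip: 0."""
    probs = F.softmax(logits, dim=-1)
    p_skip = probs[:, -1]
    p_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_wrong = 1.0 - p_correct - p_skip
    expected_score = p_correct * 1.0 - p_wrong * wrong_penalty  # skip adds 0
    return -expected_score.mean()

# Tiny usage example with made-up numbers: 4 real classes + 1 skip column.
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 4, (8,))
loss = sat_style_loss(logits, targets)
loss.backward()
```

The point isn’t this exact formula, it’s that the training objective should make “I don’t know” cheaper than a confident hallucination.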
Oh for sure. I was more getting at the whole concept of worth and how we’ve become this weird cult of potential-worshipping tech priests, but yes, that too.