Jiang Chen, a machine learning expert who previously worked at Google, was mesmerized when he first tried ChatGPT, the remarkably coherent and seemingly well-informed chatbot from OpenAI that has become an internet sensation.
But the technology's aura of power dimmed when Chen tried using the same underlying artificial intelligence technology to build a better search tool for the startup he cofounded, Moveworks. The company uses AI to help employees sift through information such as technical support documents and HR pages. Chen's new AI search tool was great at pulling up all sorts of useful information from such documents, including serving up addresses and phone numbers—but some of them weren't real. "Its ability to fabricate is just amazing," Chen says.
The feverish excitement around ChatGPT and the widespread suggestions that it could reinvent search engines are understandable. The chatbot can provide complex and sophisticated answers to questions by synthesizing information found in the billions of words scraped from the web and other sources to train its algorithms. Tinkering with the bot can feel like experiencing a more fluid way of interacting with machines.
But the way the technology works is in some ways fundamentally at odds with the idea of a search engine that reliably retrieves information found online. There's plenty of inaccurate information on the web already, but ChatGPT readily generates fresh falsehoods. Its underlying algorithms don't draw directly on a database of facts or links; instead they generate strings of words chosen to statistically resemble those seen in the training data, with no step that checks them against the truth.
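To make that concrete, here is a minimal, deliberately toy sketch of the generation principle. A lookup table of next-word probabilities stands in for the neural network, and every word, probability, and phone number in it is invented for illustration; ChatGPT's actual model is a large neural network predicting over tens of thousands of subword tokens. The loop below still captures the relevant point: each word is emitted because it is statistically likely to follow the previous ones, and nothing ever consults a store of facts.

```python
import random

# Toy "language model": for each word, the probabilities of the next word,
# as if estimated from co-occurrence counts in training text.
# All entries here are invented for illustration (555 numbers are fictional).
NEXT_WORD_PROBS = {
    "the":     {"phone": 0.5, "support": 0.5},
    "support": {"phone": 1.0},
    "phone":   {"number": 1.0},
    "number":  {"is": 1.0},
    "is":      {"555-0134": 0.4, "555-0199": 0.35, "555-0102": 0.25},
}

def sample_next(word: str) -> str:
    """Sample the next word in proportion to its training-data frequency.

    No database of facts is consulted: a plausible-looking phone number
    and a real one are indistinguishable to this procedure.
    """
    candidates = NEXT_WORD_PROBS[word]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, max_words: int = 10) -> str:
    """Autoregressively extend a prompt one sampled word at a time."""
    words = [start]
    while words[-1] in NEXT_WORD_PROBS and len(words) < max_words:
        words.append(sample_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the support phone number is 555-0199"
```

Every run of this sketch produces a fluent-sounding answer, and every phone number it "serves up" is fabricated, because fluency is the only property the sampling step optimizes. That, in miniature, is the failure mode Chen ran into.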