
Communications of the ACM

ACM TechNews

Fighting Fake 'Facts' with Two Little Words


A robot typing.

Inspired by journalists, the researchers discovered a new technique to ground a large language model's answers in reality.

Credit: Getty Images

Johns Hopkins University (JHU) researchers have developed a method for reducing hallucinations by large language models (LLMs): adding the phrase "according to" to queries.

The grounding phrase directs LLMs to quote from trusted sources seen in their training data rather than fabricate responses.
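As a rough illustration, grounding can be as simple as appending such a phrase to the query before it is sent to the model. The wording and the choice of Wikipedia below are illustrative assumptions, not the researchers' exact prompts, which tested several phrasings and sources.

```python
def ground_query(question: str, source: str = "Wikipedia") -> str:
    """Append an 'according to'-style grounding phrase to a user question.

    The phrasing and the default source are illustrative assumptions;
    the study compared multiple wordings and trusted corpora.
    """
    return f"{question} Respond using only information that can be attributed to {source}."


# Example: the grounded query is then sent to the LLM in place of the raw question.
print(ground_query("What part of the brain handles language processing?"))
```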

The researchers used Data Portraits, a tool developed previously at JHU, to verify whether the LLM's responses were present in the training dataset without downloading vast amounts of text.
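Data Portraits is a separate tool with its own interface; the sketch below only illustrates the underlying idea of lightweight membership testing over a corpus, using hashed character n-grams stored in a plain Python set. It is an assumption-laden stand-in, not the tool's actual implementation, which uses a far more compact data structure.

```python
import hashlib

NGRAM = 25  # character n-gram width; an arbitrary choice for this sketch


def build_sketch(corpus_docs):
    """Hash every character n-gram in the corpus into a membership set."""
    sketch = set()
    for doc in corpus_docs:
        for i in range(len(doc) - NGRAM + 1):
            gram = doc[i:i + NGRAM]
            sketch.add(hashlib.sha1(gram.encode()).hexdigest()[:16])
    return sketch


def seen_in_corpus(text: str, sketch: set) -> bool:
    """True if any character n-gram of `text` appears in the corpus sketch."""
    return any(
        hashlib.sha1(text[i:i + NGRAM].encode()).hexdigest()[:16] in sketch
        for i in range(len(text) - NGRAM + 1)
    )
```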

They observed that an LLM's QUIP (Quoted Information Precision) Score rose 5% to 15% when the "according to" grounding prompt was incorporated into queries.
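The QUIP Score roughly measures what fraction of a response's character n-grams also occur in the trusted corpus, so more verbatim quoting yields a higher score. The toy function below computes that kind of precision against an in-memory set of corpus n-grams; it is only a sketch of the idea, not the researchers' metric implementation.

```python
def quip_like_score(response: str, corpus_ngrams: set, n: int = 25) -> float:
    """Fraction of the response's character n-grams found in `corpus_ngrams`.

    A toy stand-in for Quoted Information Precision: higher values mean
    more of the answer is quoted verbatim from the trusted corpus.
    """
    grams = [response[i:i + n] for i in range(len(response) - n + 1)]
    if not grams:
        return 0.0
    return sum(g in corpus_ngrams for g in grams) / len(grams)
```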

Grounding prompts also generated more detailed and accurate answers overall.

Said JHU's Daniel Khashabi, "Our goal is for the models to access helpful content, such as strings memorized from high-quality or trusted documents."

Because response accuracy depends on the quality of the training dataset, the method works best when data from disreputable websites is filtered out.

From Johns Hopkins University Hub
