Researchers are developing tools that can identify content created by bots like OpenAI's ChatGPT, which generates text that can be difficult to distinguish from human writing. These bot-detection tools come amid concerns that students could use ChatGPT to pass off artificial intelligence (AI)-generated essays as their own work and that workers could use such tools as shortcuts.
GPTZero, developed by Princeton University student Edward Tian, assesses text to determine the likelihood that it was AI-generated. Irene Solaiman of Hugging Face, a company that offers a similar tool, says signs of repetition or inaccuracies can also help spot AI-generated content. "Sometimes you can tell with a language model that it's misunderstanding modern data, misunderstanding time frames," Solaiman says.
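To illustrate the general idea behind such detectors, the sketch below scores text by its perplexity under a small language model, on the assumption that text a model finds highly predictable is more likely to be machine-generated. This is only a minimal illustration of the approach, not the actual method used by GPTZero or Hugging Face's tool, and the model choice ("gpt2") and threshold are hypothetical.

```python
# Minimal, illustrative sketch of perplexity-based AI-text detection.
# Assumes the Hugging Face "transformers" and "torch" packages are installed.
# The threshold below is an arbitrary illustrative cutoff, not a calibrated value.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Compute the model's perplexity over `text`; lower values mean the
    model finds the text very predictable, a rough AI-generation signal."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_ai_generated(text: str, threshold: float = 60.0) -> bool:
    # Hypothetical cutoff: flag text whose perplexity falls below the threshold.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "Artificial intelligence is transforming how we write and read text."
    print(f"perplexity = {perplexity(sample):.1f}, flagged = {looks_ai_generated(sample)}")
```

Real detectors combine signals like this with others (for example, variation in sentence-level perplexity) and still produce false positives, which is one reason such tools must keep evolving as models improve.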
Meanwhile, the Detect Fakes project at the Massachusetts Institute of Technology provides an exercise to help users identify "deepfakes." However, as AI advances, these tools must be updated to keep pace.
From The Wall Street Journal