With the rapid rise of generative AI, peer-reviewed academic journals are grappling with submissions in which authors may have used generative AI to write outlines, drafts, or even entire papers without clearly disclosing that use.
Journals are taking a patchwork approach to the problem. Nature, for example, has banned AI-generated images and videos and requires that the use of large language models be disclosed. Many journals' policies hold authors responsible for the validity of any information generated by AI.
Experts say the academic world must strike a balance in using generative AI: the technology could make the writing process more efficient and help researchers convey their findings more clearly. But across many kinds of writing, it has also inserted fake references into its responses, fabricated information, and reiterated sexist and racist content from the Internet. If researchers use such output in their work without strict vetting or disclosure, they raise serious credibility issues.
From Wired