Artificial intelligence (AI) researchers are hoping to use the tools of their discipline to solve a growing problem: how to identify and choose reviewers who can knowledgeably vet the rising flood of papers submitted to large computer science conferences.
In most scientific fields, journals act as the main venues of peer review and publication, and editors have time to assign papers to appropriate reviewers using professional judgment. But in computer science, finding reviewers is often, by necessity, a more rushed affair: Most manuscripts are submitted all at once for annual conferences, leaving some organizers only a week or so to assign thousands of papers to a pool of thousands of reviewers.
This system is under strain: In the past 5 years, submissions to large AI conferences have more than quadrupled, leaving organizers scrambling to keep up. One example of the workload crush: The annual Conference on Neural Information Processing Systems (NeurIPS), the discipline's largest, received more than 9000 submissions for its December 2020 event, 40% more than the previous year. Organizers had to assign 31,000 reviews to about 7000 reviewers. "It is extremely tiring and stressful," says Marc'Aurelio Ranzato, general chair of this year's NeurIPS. "A board member called this a herculean effort, and it really is!"
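At its core, the matching task is a large assignment problem: give every paper its quota of reviews without overloading any reviewer, while maximizing how well reviewer expertise fits each paper. The sketch below shows one classical formulation as a min-cost flow. It is a toy illustration, not the system NeurIPS actually uses; the sizes, quotas, and the networkx-based solver are assumptions, and random integers stand in for the real paper-reviewer affinity scores.

```python
# Toy sketch: paper-reviewer assignment as a min-cost flow.
# Hypothetical setup -- real conferences derive affinity scores from
# text similarity, bids, and conflict checks; here they are random.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_reviewers = 12, 9
reviews_per_paper = 3      # each paper needs 3 reviews
reviewer_capacity = 4      # no reviewer takes more than 4 papers

# Higher affinity = better expertise match (integers keep the solver exact).
affinity = rng.integers(0, 100, size=(n_papers, n_reviewers))

G = nx.DiGraph()
for p in range(n_papers):
    # The source edge meters out exactly the reviews each paper needs.
    G.add_edge("src", f"p{p}", capacity=reviews_per_paper, weight=0)
    for r in range(n_reviewers):
        # Capacity 1 forbids giving a paper the same reviewer twice;
        # negated affinity turns "maximize fit" into "minimize cost".
        G.add_edge(f"p{p}", f"r{r}", capacity=1, weight=-int(affinity[p, r]))
for r in range(n_reviewers):
    # The sink edge caps each reviewer's total load.
    G.add_edge(f"r{r}", "sink", capacity=reviewer_capacity, weight=0)

flow = nx.max_flow_min_cost(G, "src", "sink")
for p in range(n_papers):
    chosen = sorted(r for r, f in flow[f"p{p}"].items() if f > 0)
    print(f"paper {p}: {chosen}")
```

Production matching systems replace the random scores with learned affinity models and add constraints such as conflict-of-interest exclusions; the flow structure above is only the simplest version of the idea.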
From Science