Peer review is an essential process that subjects new research to the scrutiny of other experts in the same field. Today's top Machine Learning (ML) conferences rely heavily on peer review to gauge the quality and suitability of submitted papers. However, a series of unsettling incidents and heated discussions on social media have now put the peer review process itself under scrutiny.
The annual Computer Vision and Pattern Recognition (CVPR) Conference is one of the world's top three academic gatherings in the field of computer vision (along with ICCV and ECCV). A paper accepted to CVPR 2018 was recently called into question when a Reddit user claimed the authors' proposed method could not achieve its promised accuracy.
"The idea described in Perturbative Neural Networks is to replace 3×3 convolution with 1×1 convolution, with some noise applied to the input. It was claimed to perform just as well. To me, this did not make much sense, so I decided to test it. The authors conveniently provided their code, but on closer inspection, turns out they calculated test accuracy incorrectly, which invalidates all their results."
The paper's lead author, Felix Juefei-Xu, promptly responded: "We are now re-running all our experiments. We will update our arxiv paper and github repository with the updated results. And, if the analysis suggests that our results are indeed far worse than those reported in the CVPR version, we will retract the paper."
The Reddit poster's challenge shed light on an often overlooked issue: reviewers don't necessarily invest their own time and resources in running code and reproducing experimental results as part of the peer review process; instead, they tend to rely on the honesty and competence of the authors.