I recently submitted a paper to a conference, and when I got the reviews back, I noticed that one reviewer had specifically asked me to take out the experiments (in this case, simulations) that I had presented partly as motivation and partly to demonstrate my results. The conference was, unsurprisingly, a theory conference; according to the reviewer, the experiments were not only unnecessary but actually detracted from the paper.
Jeffrey Ullman's Viewpoint in this issue of Communications observes that many areas of computer science are overly focused on experiments as validation for their work. I do not disagree; he raises several great points. My own experience, both on several program committees for systems conferences and as an author of papers for such conferences, is that experiments are, with very rare exceptions, a de facto requirement. How can you know whether an idea is really a good one, the thinking goes, unless there are experiments to back it up? Indeed, in some settings even simulations are called into question; experiments, the argument runs, should be done in real, deployed systems, since simulations can oversimplify what goes on in the real world. As Ullman points out, there are gaping problems with this framework. In particular, we run the risk of losing strong ideas because they do not fall into a framework where it is natural to build a system around them.
Dear Michael, I have commented on this once before (someone picked it up for Scientific American): the value of any theory lies in its predictive power and our ability to apply it.
Best,
Igor Schagaev