
Communications of the ACM

BLOG@CACM

Computer Systems Research: The Joys, the Perils, and How to Count Beans Well


Saurabh Bagchi

This was originally posted on the ACM SIGARCH blog Computer Architecture Today.

This post is broadly meant for computer systems researchers, and that is a big tent, including members of the architecture, software systems, security/dependability, programming languages, software engineering, and several other communities. It highlights my subjective take on the joys and the road bumps on the way to doing innovative work in computer systems. One kind of road bump arises from bibliometric indices that are not always kind to our work, and I speculate on some ways of skirting bumps of this kind. The reflection was brought on most immediately by my recent experience of getting a paper into a hallowed AI conference.

I structure my ramblings into three pieces:

  1. The joys of researching and building computing systems
  2. The occupational hazards of researching and building computing systems
  3. Measuring success

I am not delusional enough to claim that these are universal truths. They are at best an amalgamation of many enjoyable chats with colleagues, from grizzled veterans to student researchers and everyone in between, with some effort at organization and synthesis. At worst, they are highly biased reflections, seen through an intensely individual lens.

The Joys

The joys of researching and building computing systems are manifold and very individualized. They come at various stages of the whole process. There is the initial rush when you think you have the germ of a new idea; that rush is tremendous, no matter how many times one has had it. Ruminating on the idea adds to the joy, so it is not simply a momentary high. The rumination is aided by a search through the literature, first skimming and then diving deep into the most related work. The idea evolves through that: we sand down some rough edges, throw out altogether some misfitting angles, and build up others that add to the appeal of the overall idea. Through it all, the joy expands.

The second joy comes from the realization that this idea can solve someone's problem. Okay, we are not solving world hunger directly (though indirectly, some of our innovations are). Nevertheless, we are developing an idea that has the potential to solve a problem that someone we have never met cares about, and sometimes it is very many such people who care about it. Take, for example, the question of whether we can create a chip that goes into our homes and cars and performs continuous computer vision tasks. This can enable our fast-growing population of elders to lead independent lives and our cars to navigate traffic safely. The applications of our computing systems are effectively boundless, as limitless as human imagination itself.

The third joy comes from the sense of integrating our ideas with those of giants who have come before us. Read, understand, assimilate, and then integrate. And see that the whole gains so much more than we could have imagined by considering the parts. Sure, this can be dismissed by some as "mere engineering work," but I believe it fundamentally helps us sprint toward the finish line. It has the added benefit (and potential peril, more on that later) that computer systems work is a team sport. There is great joy in being part of a well-functioning team, not to mention that it is great preparation for the students' professional careers.

The Perils

Needless to say, there are many occupational hazards in this line of work. We often do not hear of them because of survivor bias, and because only the strongest are strong enough to own up to weaknesses. First off, a question that besets my colleagues all across the seniority spectrum is who will adopt our systems. The systems that we take such deliberate pleasure in designing and building: who will be the end customer for them? It is often unsatisfying to hope that some distant, unseen soul will read our paper, take our code base, and put it to good use.

Second, there is the long time lag from idea to fruition. We look enviously at some of our colleagues in other domains of the computing discipline,[1] whose pipelines are much shorter.[2] For example, a cool new idea in, say, reinforcement learning (or representation learning, or pick your favorite topic in AI) that sails through in its first submission can have a much quicker turnaround; dare I say six months (and wait for the brickbats)? Granted, this has to be a really cool new idea, but the bar for extensive evaluation is easier to cross, and other communities have done better at making benchmarking easier. For example, downloading MNIST and running even state-of-the-art algorithms that just appeared is relatively simple (just ask my student who is still chasing dependencies of the prior work's software package, and it is only 3 a.m.).

Third, there is the flip side of the team-sport nature of systems work. If you are at a small department or a no-name university, or are strapped for research funding, building a team can be hard. One has only to look at the steadily increasing author counts of the top publications in the field to get a sense of the team-effort nature of our work, and of the positive slope. Now, some of the team members obviously have to do the "grunt work": re-implement the competition, port to a panoply of platforms, run different fault injection/attack injection experiments, and so on. And there is the cringe-worthy comment, made often by members from outside the community but sometimes from inside as well, that "this is just engineering work." First off, why is that pejorative? Without all this engineering work, we would not have the buildings we live in (or so many other essentials of life). Second, the engineering work leads us to insights that seed new discoveries.

Counting Beans

The bibliometric measures that we live or die by (and, more pertinently, that university and research administrators live by) do not favor computer systems work. The single biggest metric from which so many other metrics derive, citation count, is quite low in our domain compared to many others. Compare, for example, the Google Scholar top publications within Engineering and Computer Science. While CVPR has a stratospheric h5-index of 299 and the 10th-ranked venue in "AI," IJCAI, has 95, the highest-ranked venue in "Software Systems," ICSE, has a pedestrian (by comparison) value of 74. SOSP is blown right out of the water at a lowly 42. I know you cringe, both because I am counting beans and because I come across as falling prey to the deadly sin of envy. To the first: yes I am, and so do tenure and promotion committees, award committees, administrators, and policy makers. To the second: not quite, because I believe that if you cannot beat them, you join them. And I promise to assiduously submit to some of these highly visible venues, riding coat-tails if I have to.

The sizes of the top conferences in several other domains of Computer Science are larger, often much larger, than ours. This means the CVs of folks in those domains will be longer. Take a look at the go-to ranking of CS departments, CSrankings.org.[3] The total number of papers submitted to six top systems conferences (PLDI, OSDI, NSDI, S&P, VLDB, Usenix ATC) in the most recent year is 3,117. In Machine Learning and Data Mining, just one sub-domain within the domain of "AI," the three conferences listed (ICML, KDD, NeurIPS) had a total of 16,479 submissions (10.6X, normalized by the number of conferences). In Robotics, just one sub-domain within the domain of "Interdisciplinary Areas," the three conferences listed (ICRA, IROS, RSS) had a total of 6,280 submissions (4.0X). The numbers of papers accepted are correspondingly higher: ML-to-Systems at 11.5X (again normalized by the number of conferences in the comparison set) and Robotics-to-Systems at 9.4X.
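
For the curious, here is the arithmetic behind those submission ratios, written as a small Python sketch under my reading of the normalization (totals divided by the number of conferences in each comparison set). The counts are the aggregates quoted above; the function and variable names are mine, purely for illustration.

    # Back-of-the-envelope check of the normalized submission ratios quoted
    # above, using the aggregate counts cited in the text. The per-conference
    # average is the total divided by the number of conferences in each set.

    def per_conference(total_submissions: int, num_conferences: int) -> float:
        """Average number of submissions per conference in a comparison set."""
        return total_submissions / num_conferences

    systems = per_conference(3_117, 6)    # PLDI, OSDI, NSDI, S&P, VLDB, Usenix ATC
    ml = per_conference(16_479, 3)        # ICML, KDD, NeurIPS
    robotics = per_conference(6_280, 3)   # ICRA, IROS, RSS

    print(f"ML-to-Systems (submissions): {ml / systems:.1f}X")              # ~10.6X
    print(f"Robotics-to-Systems (submissions): {robotics / systems:.1f}X")  # ~4.0X

Run as-is, this reproduces the 10.6X and 4.0X figures above.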

Counting Beans Well

So how do we address this issue and count beans well? First off, do we need to address it at all? One line of thinking is that comparisons are only made within a domain (so within Computer Systems), so there is no need to calibrate across domains. I think you can poke holes in this argument without trying too hard. Comparisons are made across domains (within Computer Science) and across disciplines (Computer Science vis-a-vis others) all the time. Ask the funding agency Program Managers who make decisions on CAREER/Young Investigator awards. Ask the university committees that decide on university-wide recognitions. Ask the government agencies that look to researchers for policy-making advice.

I speculate that the following measures will help. None of them are easy, and all will take concerted effort from multiple research communities. And perhaps some of them will do more harm than good and should therefore never even be tried. Consider this the equivalent of the future work section of your paper, and do not reject this post because of it. I wish I had more space to detail these suggestions but, paraphrasing Fermat, the margin is too small to contain them.

  1. Normalize bibliometric measures. When comparing two researchers or two domains, use a normalizing factor, such as the publication volume or the typical citation count of the domain (depending on which measure you are computing). A small illustrative sketch follows this list.
  2. Add more papers to the prestige conferences, or add more conferences to the prestige list. We can afford to admit more papers without feeling that we have failed as gatekeepers; this can be done through either prong of this two-pronged strategy. Note, though, that fundamentally we should not try to equalize the sizes of all the domains, but rather reduce the imbalance somewhat. Then the normalization mentioned above will kick in.
  3. Have short papers count. We too often relegate short papers to the nether worlds, bereft of prestige, even those appearing at our prestigious conferences. We can afford to apply stricter quality control and then elevate their prestige. Sometimes a cute idea is worth hearing, even though it has not been evaluated against every possible competitor and dataset.
  4. Elevate practical experience reports. A key way our community moves forward is by seeing how ideas pan out when evaluated widely and in the wild. Such papers are sometimes called "practical experience reports" or "experience track papers." We can afford to have more of them and to have them count toward all those bean-counting exercises.
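
To make suggestion 1 concrete, here is a minimal sketch of one possible normalization, under my own simplifying assumptions (this is not an established metric): divide a researcher's raw citation count by a per-field baseline, say the h5-index of a representative top venue in that field, and compare the resulting ratios rather than the raw counts. The baseline values below are the h5-indices quoted earlier in the post; the researchers and their citation counts are hypothetical.

    # A minimal sketch of field-normalized citation counts, under my own
    # simplifying assumptions (not an established metric). The idea: scale
    # raw citations by a per-field baseline so cross-field comparisons use
    # the resulting ratio rather than the raw count.

    FIELD_BASELINE_H5 = {
        "computer_vision": 299,   # CVPR h5-index, quoted earlier in the post
        "software_systems": 74,   # ICSE h5-index, quoted earlier in the post
    }

    def field_normalized(citations: int, field: str) -> float:
        """Raw citation count scaled by the field's baseline h5-index."""
        return citations / FIELD_BASELINE_H5[field]

    # Hypothetical researchers: raw counts favor the vision researcher,
    # while the normalized scores tell a different story.
    print(f"{field_normalized(6_000, 'computer_vision'):.1f}")   # ~20.1
    print(f"{field_normalized(2_000, 'software_systems'):.1f}")  # ~27.0

The point is not this specific baseline (the h5-index of a single venue is crude), but that any cross-field comparison should divide by something field-specific before ranking people or papers.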

In Conclusion

There are many joys to research and development in our domain of Computer Systems: bringing forth new ideas, integrating them with existing ideas, delivering systems that help address world hunger, or keep us safe at home and at work, or solve some less grandiose but nevertheless important application problem, and the opportunity to work in well-functioning large teams. We should not be Pollyannaish, though, and should own up that there are also perils: the long slog from idea to delivery, finding adopters for our work, learning to play well in teams, and dealing with Luddites who get it into their heads to call it "just engineering." One more peril is that bibliometric measures do not show us in a great light, but there are some ways, discussed here, to see the silver lining and to make others see it too.

Footnotes

[1] Terminology: Computer Systems is a "domain" within the "discipline" of Computer Science. AI is also a domain. The next finer level of classification is a "sub-domain." Thus, Security is a sub-domain, as is ML.

[2] And we do not look at colleagues in other disciplines where the pipeline from idea to completion is longer, sometimes much longer, such as in microbiology.

[3] I don't think you would mistake my statement for a criticism of CSrankings.org, but in case you do make that mistake, let me be clear: it is not.

Saurabh Bagchi is a professor of Electrical and Computer Engineering and Computer Science at Purdue University, where he leads a university-wide center on resilience called CRISP. His research interests are in distributed systems and dependable computing, and he and his group have the most fun making and breaking large-scale usable software systems for the greater good.


Comments


Duncan Walker

In the summer of 1995 I was at a conference at the Chinese Academy of Sciences, along with Dan Siewiorek of CMU and Ravi Iyer of Illinois. We were talking about the future of systems research. We discussed all of these issues, and whether systems research might become increasingly difficult to get funded. Our conclusion was that it would still get funded, because new systems build on top of the progress of past research, so the amount of "grunt work" could be contained and energy could be focused on the research questions.


Saurabh Bagchi

Duncan,

Thank you for your comment. I would be curious to hear how you perceive the situation for computer systems research --- how far the hypotheses you made in 1995 have turned out to be true or false.

My own take:
(1) Difficulty or ease of funding computer systems research: I don't think this is any more difficult than other areas of Computer Science, though the flavor of the year (or decade) area of CS will by definition get more attention and funding.
(2) New systems build on top of the progress of past research: I think this is largely true and that is a great positive.
(3) The amount of "grunt work" could be contained: I don't think this has been universally true, or even largely true (and it has not followed directly from #2 above). Possible reasons: problems have become harder, the outside context within which a system must live has become more complex, and there are more, and more fragmented, platforms/use cases/workloads that a system has to deal with.

Best,
Saurabh

PS: Ravi was my PhD advisor and still a valued collaborator. I have known Dan for years from my association with the dependability community.


