Research foundations want to fund great research projects. However, a while back Bertrand Meyer wrote an interesting blog post: "Long Live Incremental Research."1 With examples, he showed that many of the greatest research results could not possibly have been projected in great-sounding project descriptions. His conclusion is that we should drop the high-flying ambitions from research project descriptions and instead support more incremental research proposals, hoping that great things will happen along the way. Indeed, incremental research is perfect for research projects with predictable deliverables. However, I suggest an alternative conclusion: for some funding, we should drop the project description entirely.
Instead, we should initiate some pure result-based funding. An x-year grant could be based on results from the last x years. From a research foundation's perspective, this eliminates the issue of unpredictable research, for the funding is not given for a projected future that may or may not happen. Rather, it rewards results already delivered. The researcher can, at his or her own risk, follow the craziest inspiration, but has a strong incentive to make it work in order to secure result-based funding in the future. Result-based funding would only be applicable to researchers with a history of success, with emphasis on the more recent past, and it would only cover basic expenses that are independent of the concrete project. In the U.S., for example, a baseline might be one or two months of summer salary and one or two graduate students. Junior faculty hired on the basis of an impressive recent track record would be fully eligible. Senior faculty would need to demonstrate that they are still going strong. The simple point is to drop the project description and just reward what has already been done.
Consider a researcher with a history of brilliant ideas taking research in surprising new directions. If we try casting this as a project, the referees will rightly complain: "It is not clear how the applicant will come up with a brilliant idea, nor is it clear what the surprise will be." With such a lack of focus and feasibility, a low project score is to be expected, and then the overall score will be too low for funding, regardless of the researcher's established record of success. However, research needs great new ideas. Therefore, we need some result-based funding so that we can support creative researchers with a proven talent for great new ideas, even if we do not know how the next one will come about.
The aforementioned issue is often very real in my field of theoretical computer science. As in other fields, theoretical research is only interesting if it contains surprises (otherwise it is more like development). A project plan would make sense if the starting point were a surprising idea or approach that would take years to develop, but in theory, the most exciting ideas are often strikingly simple. When you first have such an idea, you are typically close to done and ready to start writing a paper. Thus, if you have the right idea when you apply for a grant, you will typically be done long before the grant arrives. The essence of the research is an unpredictable search for powerful ideas and insights. Thousands of wild ideas may be tried in the search for a brilliant one that works. The most appropriate project description is just a description of the importance of the area to be researched and the type of results aimed for. The track record shows which researchers have the talent to succeed.
The problem (which may be much bigger in the EU than in the U.S.) for such dynamic research arises when proposals are selected by project-oriented researchers who want structured methodological plans specifying how to attain the proposed goals, and who do not appreciate that a successful outcome depends heavily on the talent of the involved researchers. The philosophical difference is whether we only count the creativity and originality specified up front in the project description, or whether a researcher's demonstrated talent for creativity and originality counts as an integral part of the research to be performed in the project.
Dropping the project description would greatly increase methodological diversity, allowing researchers to use the strategy that has proved most suitable for their area and their own talent and skills. As a simple example, Meyer suggested funding incremental research, hoping that great, surprising things would turn up along the way. I favor the opposite strategy: spending as much time as possible pursuing overly ambitious targets, while being flexible about the results. Even if the high-flying targets fail, you need not come home empty-handed, for by studying the unknown you may discover something new, sometimes more interesting than the original target. From the perspective of ambition, I see it as an advantage to minimize time spent on easy targets, but foundations seem to prefer that you take a planned path with some guaranteed targets along the way. The point here is not to argue whether one strategy is superior to the other, but rather to embrace the diversity of strategies that work depending on the area and the individual researcher.
Perhaps more seriously, if a target is difficult to achieve, it may be because it requires an atypical approach that would not look reasonable to anyone else, but that may work for a researcher thanks to his or her special talents and intuition. Indeed, I have often been positively surprised to see how others succeeded using an approach I had myself dismissed. As a project, such implausible-looking approaches would fail on perceived feasibility, but the point of result-based funding is that researchers are free to use whatever approach they find most efficient. Funding is given to those who prove successful. This provides exactly the right incentive to do great work, namely to secure future result-based funding.
Result-based funding would also reduce the resources needed to evaluate applications. It is very difficult for a general panel to evaluate the methodology and the probability of success of a project. Moreover, it requires intimate knowledge of a field to evaluate how big a difference a result would make relative to what is already known. With published results, however, we know what happened, and if the results appeared in a strong venue, the experts have already verified their novelty to the field.
Some prestigious grants say they welcome high-risk, high-gain research. Surprising breakthroughs in an important area would fall well within this scope. Having researchers with proven skills explore the area and follow their inspiration may be the optimal strategy, a bit like sending an expedition into unknown territory. Uncertainty about what they will find should be no worse than high risk. In fact, based on past performance, it may be safe to assume they will discover something interesting, if not ground-breaking. However, when a project is scored on focus and feasibility, projects whose end results are not predictable in advance will fail even if their expected return is very high. It has to be possible to get a high overall score for promising research even if it would not score well on standard project parameters like focus and feasibility. At the end of the day, what we want are results, not project descriptions, so what should determine the overall score is which proposal is expected to yield the greatest results.
The issue boils down to the formula used to compute the overall score of a proposal, the problem being when the score is based on a predefined weighted average, diluting the impact of any unique aspect. As a concrete case, I experienced an integration grant giving the established quality of the researcher a predefined weight of 30% of the total score. The remaining 70% of the weight was all about the projected future: project description (30%), implementation (20%), and impact (20%). The world's best, most original researcher, with the biggest prizes to his or her name, can get at most 100% on established quality, contributing 30% to the total average. A more typical researcher may get 80% on established quality, contributing 30% × 80% = 24% to the total average. The advantage of the super-genius over the more typical researcher is thus a mere 6%, which is easily lost in the 70% of the weight devoted to the projected future.

As a somewhat entertaining example from the projected future, one question was: "Outline the capacity for transferring the knowledge previously acquired to the host." As a theoretician, I thought the answer was simple: "The knowledge sits in my head, so the transfer is complete on arrival. From my head, I will transfer knowledge and ideas to students, colleagues, and visitors." Naively, I thought I would get 100% on this one, but my answer was deemed unconvincing, that is, 0%. The point I am trying to make is not whether my answer was good or bad, but that while this transfer of knowledge may be critical in some cases, it is typically not an issue in my theoretical field.

The general point is that the more standard parameters you involve in an average score, the more you favor standard proposals to which these parameters apply. However, what makes research special is normally something unique, for example, a great researcher or a great idea for a project. To let the uniqueness come through, one should not average, but rather look at a maximum, possibly with a pass/fail on other parameters, allowing some to be not applicable. The proposed result-based funding would cover the case of great researchers.
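To make the arithmetic concrete, here is a minimal sketch in Python contrasting the two scoring rules. The 30/30/20/20 weights are the ones from the grant described above; the two researcher profiles and their project-side scores are purely hypothetical illustrations, not data from any real evaluation.

```python
# Sketch: predefined weighted average vs. maximum with a pass/fail threshold.
# Weights are from the integration grant described in the text; the researcher
# profiles below are hypothetical.

WEIGHTS = {"quality": 0.30, "project": 0.30, "implementation": 0.20, "impact": 0.20}

def weighted_average(scores):
    """Predefined weighted average: every standard parameter dilutes the rest."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def max_with_pass_fail(scores, threshold=50.0):
    """Maximum over parameters, provided no applicable parameter fails outright."""
    if any(v < threshold for v in scores.values()):
        return 0.0
    return max(scores.values())

# Hypothetical profiles: an outstanding researcher with an unorthodox project,
# and a more typical researcher with a tidier project description.
super_genius = {"quality": 100, "project": 60, "implementation": 60, "impact": 60}
typical      = {"quality": 80,  "project": 70, "implementation": 70, "impact": 70}

for name, scores in [("super-genius", super_genius), ("typical", typical)]:
    print(name, round(weighted_average(scores), 1), max_with_pass_fail(scores))
# Under the weighted average, the 20-point lead in established quality is worth
# only 0.30 * 20 = 6 points, so the typical researcher can end up ranked higher
# (73.0 vs. 72.0 here); under the maximum rule, established quality carries the
# proposal (100 vs. 80).
```

The sketch is only meant to make the dilution visible: with the hypothetical numbers above, the weighted average ranks the more typical researcher ahead, while a maximum with pass/fail lets the unique strength determine the outcome.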
I have proposed the initiation of some pure result-based funding as a simple, efficient method for basic support of successful researchers, giving them the freedom and the incentive to seek great results even when these cannot be projected. Project-based proposals would still be needed in many cases, for example, to justify expensive experiments. Because result-based funding is simpler to handle, it could be used efficiently as a first line of funding with smaller individual grants.