
Communications of the ACM

Technical opinion

The Online Research 'Bubble'


There is something wrong with online research today: it is starting to look very much like offline research. Online research to guide managerial decisions about product design and promotion was intended to leverage the full power and flexibility of the Internet to reach, and more fully represent, the consuming public. Instead, the custom research industry appears to have adopted the Internet as a cost-cutting tool for continuing its standard captive-panel practices [4]. Specifically, it has become clear that the critical issue in online research methodology is the selection of the sampling frame [2]. With online research revenues reaching $1.1 billion, the problems with commercial online sampling practices are significant.

There are serious threats to the validity of research when the Internet is used simply as a low-cost tool to interact with pre-identified and repetitively used subjects, as has been the traditional practice in the offline research industry. ComScore (www.comScore.com) reports that, under current practices, one-quarter of 1% of panel participants account for 30% of online surveys completed. This concentration is due to the emerging phenomenon of "professional online respondents," who take an average of 80 surveys over a three-month period, are significantly less likely to be employed full-time, are no better educated than the norm, and spend disproportionately more time at home online.

Simply stated, typical online research respondents are not very typical of the general population to which one would like to generalize. Online sites such as the Survey Club (www.surveyclub.com) are emblematic of this problem, and an online search for the term "earn survey money" demonstrates the extent of the quality and validity problems facing online researchers.

Growth Versus Quality

The professional online research community has experienced phenomenal growth [2–4], and the growing reliance on Internet-based research in industry is not surprising in view of cost savings as high as 40% compared to traditional survey research [4]. An unintended consequence of this cost-driven approach to Internet research is that professional consumer respondents now operate in over-sampled proprietary panels, just as was standard practice in the research industry before the advent of Internet-based research.

It appears the Internet is being used by commercial researchers primarily as a low-cost contact venue; its broad reach and accessibility to the general public as a sampling tool has been either ignored or overlooked. If quality is weighed against cost in using the Internet for research studies, the Internet can provide new methods of respondent recruitment and interaction. These arise from carefully managing the flow of willing research respondents in the online communities represented by major Internet service providers [2, 6], as opposed to using the Internet as a low-cost contact venue for static respondent panels, in which a long-term group of regular subjects is repeatedly quizzed about any number of products and market views.

In science, scholarly researchers readily acknowledge the attractiveness of larger, more economical, more easily accessed online samples for their research [5], while at the same time remaining concerned about generalizability beyond the basic contexts of human personality research. The commercial research industry does not appear to be as concerned about these issues, and the quality of managerial decision making is increasingly driven by the (lack of) quality of online research.


Two Issues: Online Sampling and Online Recruitment

Advocates of scientific online research methods believe that respondents should not be allowed unfettered access to survey participation whenever and however they choose [2]. There are problems with the use of "free participation" internal panels that interfere with valid prediction and quality results, as noted in a recent panel discussion at CASRO, the Council of American Survey Research Organizations (www.casro.org). In the world of online research, outside sources of data (data that comes from the public at large) account for only 40% of all sample sources. This means that 60% of the data on which important managerial decisions are based comes from internal sources such as panels or customer/employee lists [4]. Such procedures can result in respondent overuse, respondent/study overlap, and troublingly large numbers of non-qualified participants.

Email solicitation continues to be the most widely practiced and preferred method for recruiting online research participants, but there are problems with email recruitment that go beyond simple validity concerns [2], including potential legal prohibitions and respondent resentment in the face of the growing tide of spam. Few online recruitment methods avoid these potential pitfalls. One chief alternative to email solicitation is the unobtrusive observation of online responses to banner advertisements, measured through subsequent clickstream data. This is often unacceptable to researchers, who tend to prefer more complex questionnaire-based studies in which specifically targeted variables are more readily measured and varied.
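
As a rough sketch of what such unobtrusive clickstream measurement could look like, the short Python example below reduces banner response to counting impressions and subsequent clicks per advertisement. The event-log format and banner identifiers are hypothetical, invented purely for illustration rather than drawn from any system mentioned in this column.

    from collections import Counter

    # Hypothetical clickstream records of the form (event_type, banner_id).
    events = [
        ("impression", "banner_42"), ("impression", "banner_42"),
        ("click", "banner_42"), ("impression", "banner_7"),
    ]

    impressions = Counter(b for e, b in events if e == "impression")
    clicks = Counter(b for e, b in events if e == "click")

    # Click-through rate per banner: clicks divided by impressions.
    for banner, shown in impressions.items():
        rate = clicks[banner] / shown
        print(f"{banner}: {shown} impressions, {clicks[banner]} clicks, CTR {rate:.1%}")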

A Better Way

The environment has never been better for a judicious improvement in online survey sampling approaches. As major portals such as Yahoo, AOL, and Google garner more traffic from an ever-increasing diversity of households, online researchers can draw samples from these "rivers" of respondents as they flow to and from online sites, instead of overfishing captive respondent "pools." Nielsen/ Netratings (www.nielsen-netratings.com) suggests that major online portals are attracting well over 100 million unique visitors per month, and each of these unique visitors is a potential research respondent, if properly recruited and validly assigned to studies.
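
To make the "river" metaphor concrete, the following minimal Python sketch intercepts a small random fraction of portal visitors and invites them only while their demographic quota cell is still open. The intercept rate, quota cells, and visitor fields are assumptions made for illustration; they do not describe any particular portal's system.

    import random

    # Assumed parameters: intercept roughly 1 in 2,000 visitors, and fill simple
    # demographic quota cells so that no single group is over-sampled.
    INTERCEPT_RATE = 1 / 2000
    QUOTA = {("18-34", "F"): 250, ("18-34", "M"): 250,
             ("35-54", "F"): 250, ("35-54", "M"): 250,
             ("55+", "F"): 250, ("55+", "M"): 250}
    filled = {cell: 0 for cell in QUOTA}

    def maybe_invite(visitor):
        """Return True if this passing visitor should be invited to a survey."""
        cell = (visitor["age_band"], visitor["gender"])
        if cell not in QUOTA or filled[cell] >= QUOTA[cell]:
            return False                  # quota cell unknown or already full
        if random.random() > INTERCEPT_RATE:
            return False                  # let most of the river flow past
        filled[cell] += 1
        return True

    # Example: simulate a stream of visitors flowing past the intercept point.
    bands, genders = ["18-34", "35-54", "55+"], ["F", "M"]
    invited = sum(maybe_invite({"age_band": random.choice(bands),
                                "gender": random.choice(genders)})
                  for _ in range(1_000_000))
    print("invited", invited, "of 1,000,000 simulated visitors")

In practice the intercept probability would be tuned per quota cell, but even this naive version samples from the passing stream rather than repeatedly drawing from the same captive pool.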

An important key to the online sampling methods we advocate is an incentive scheme that is balanced, well timed, and universally appealing, and that offers instant gratification; offering something the respondent wants in exchange for a few minutes of opinion data online is far superior to sending spam email messages to captive mailing lists or continually resampling captive research panels [1]. As has been demonstrated with the online research division of a major ISP, it is technically easy to manage respondent access, administer random assignment to studies, and prevent repeat responses to surveys using standard methods of validation and assignment [6].
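
A minimal sketch of that validation-and-assignment step might look like the Python fragment below, which hashes a respondent identifier, rejects repeat participants, and randomly assigns newcomers to one of the open studies. The study names and identifiers are hypothetical; this illustrates the general idea, not the ISP system described in [6].

    import hashlib
    import random

    OPEN_STUDIES = ["study_a", "study_b", "study_c"]   # hypothetical study IDs
    seen = set()         # hashed IDs of respondents already used this wave
    assignments = {}     # respondent hash -> assigned study

    def assign_respondent(respondent_id):
        """Randomly assign a new respondent to one open study; reject repeats.

        The raw identifier (for example, a portal account name) is hashed so
        the research system never needs to store it directly.
        """
        key = hashlib.sha256(respondent_id.encode("utf-8")).hexdigest()
        if key in seen:
            return None                        # repeat respondent is blocked
        seen.add(key)
        study = random.choice(OPEN_STUDIES)    # random assignment across studies
        assignments[key] = study
        return study

    print(assign_respondent("user_1001"))      # assigned to some study
    print(assign_respondent("user_1001"))      # None: second attempt is refused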

Maintaining Validity in New Modes of Interaction

Demographics in the portal-based recruitment and incentive approach are certainly no worse than those achieved in the currently popular self-selected internal industry panels [2]. Sample frames achieved in the portal approach tend to be quite representative of both the U.S. consuming public and the universe of Internet users [6], so it is apparent there are means and practices for achieving more scientifically valid research findings in online surveys, if the will exists to do so. The issue really becomes one of the researcher's dedication to scientific principles and practices, weighed against the compelling lure of the extremely low transaction costs of the current "traditional" practice of online research. Wise managers intuitively understand there are three desirable qualities of important research projects: fast, cheap, and good. The trouble is that quality costs money, and it is impossible to have really good research done in a fast and cheap manner.

Fast and cheap research will require sampling sacrifices, such as sending spam email for respondents or using captive panels, and neither approach is optimal for validity purposes. Good research can be done quickly, but not cheaply at the same time; proper online sampling is costly. Right now, the online research industry is dominated by a desire for fast and cheap studies. The desirable characteristic that is sacrificed for speed and economy is inevitably the quality upon which scientific principles of validity reside.

Occasional Benchmarks: Online is Not the Only Way

It is important to note that scientific offline research practices are both no better and no worse than online data collection methods; good research is good research, regardless of the venue. Extensive comparability tests between online and offline studies conducted by Digital Marketing Services (www.dmsdallas.com) for companies such as General Mills, Procter and Gamble, Coca-Cola, JC Penney, AVON, and Time Warner yielded the conclusion that managers would have made the same business decisions regardless of data collection and sampling venue, assuming the existence of scientific sample controls. Experts in psychometrics also point out that an occasional benchmark study comparing a "control" group of offline respondents to online respondents will be important to support the validity of online research [5].
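
One simple form such a benchmark check might take is a comparison of a key measure between the offline control sample and the online sample; the Python sketch below runs a two-proportion z-test on made-up counts. The counts and the "purchase intent" measure are invented for illustration and are not taken from the studies cited above.

    import math

    def two_proportion_z(successes_a, n_a, successes_b, n_b):
        """Two-sided z-test for a difference between two sample proportions."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical benchmark: 312 of 800 offline respondents versus 275 of 800
    # online respondents express purchase intent for the same product concept.
    z, p = two_proportion_z(312, 800, 275, 800)
    print(f"z = {z:.2f}, p = {p:.3f}")   # a large |z| (small p) flags a mode effect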

This suggestion presupposes that proper sampling practices are followed; the costs and time companies save by performing studies online are so attractive that they are hard-pressed to engage in the benchmarking processes necessary to backstop quick-and-dirty data collection. After all, if one online research provider insists on more rigorous and valid approaches, costing the client vastly more money to complete a study in valid style, several competing online research providers will be more than happy to sell a vastly cheaper and quicker study, while singing its praises loudly.

Conclusion

There is little economic incentive for quality and validity in the current online research industry, and this problem will remain so long as managers continue to blind themselves to the essential nature of valid scientific approaches to studying online customers and markets. Validity comes at a definable cost that must be balanced against convenience; scientifically valid research, after all, can't be done fast and cheap. There are trade-offs involved. You can get your research done online cheaply and have it be good at the same time, but you'll have to sacrifice speed to do so. To get your online research done quickly, you'll have to sacrifice either cost or quality. It's an easy choice: two of the three important characteristics are available. You just don't get it all, because the "free lunch" of cheap, quick, and valid results doesn't exist in online studies any more than it did in the world of offline research. Good sampling takes time or money; take your choice. Just don't choose to make important managerial decisions based on poor sampling; you'd do as well flying blind as assuming such research outcomes relate very much to the online populations one wishes to generalize to.

References

1. Downes-LeGuin, T., Janowitz, P., Stone, R., and Khorram, S. Use of pre-incentives in an Internet survey. Journal of Online Research 1, 7 (2002).

2. Gonier, D. Factionalization imperils market research groups. Advertising Age 71, 25 (2000), 40.

3. Grossnickle, J. and Raskin, O. The Handbook of Online Marketing Research. McGraw-Hill, New York, 2000.

4. Harmon, G. Online Research Category Review: Opportunities and Issues. Presentation to the Council of American Survey Research Organizations Data Collection Conference (Dec. 1, 2005).

5. Krantz, J.H. and Dalal, R. Validity of Web-based psychological research. In M.H. Birnbaum, Ed., Psychological Experiments on the Internet. Academic Press, San Diego, 2000.

6. Stafford, T.F. and Gonier, D. Gratifications for Internet use: What Americans like about being online. Commun. ACM 47, 1 (Jan. 2004), 107–112.

Authors

Thomas F. Stafford ([email protected]) is an assistant professor of MIS at the University of Memphis Fogelman College of Business and Economics in Tennessee.

Dennis Gonier ([email protected]) is an executive vice president with America Online in Dallas, TX.


©2007 ACM  0001-0782/07/0900  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2007 ACM, Inc.


 
