One thing I do as a researcher is read the journal Science for fun. It's a weird three-year-old habit, since most of the articles are full of heavy equations or biological concepts that I have no clue about, but I still flip through it to see what the real scientists talk about. :) Fortunately, my previous life working on sequence alignment problems in molecular biology helps somewhat every once in a while.
What's interesting is that Science does publish occasional articles on psychology, education and learning science, economics, and Web science. And when it does, the research is inevitably interesting and impactful--at least in the sense that it makes me go "Wow, that's worth mulling over!"
The latest article I read is on group decision making, where a group is loosely defined as "two or more people." The problem is the following: when a single individual makes a decision, we know that this person often takes a multitude of facts into account. Modeled using probability distributions, she weighs each data point by her confidence in it, integrates them all, and makes a final decision. The question is: what happens when the sensory inputs (facts) come from multiple sources (people), and the group has to integrate the facts and then come up with a decision?
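The single-observer model sketched above has a standard textbook form: if each fact is treated as a Gaussian estimate with its own uncertainty, the optimal combination weights each estimate by its inverse variance. A minimal sketch (this is the generic cue-integration rule, not code from the article):

```python
# Optimal integration of several uncertain "facts" by one decision maker,
# assuming each fact is a Gaussian estimate (mean, variance).
# Each cue is weighted by its inverse variance (more reliable = more weight).

def integrate_cues(estimates, variances):
    """Combine Gaussian cues into one (mean, variance) estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * m for w, m in zip(weights, estimates)) / total
    variance = 1.0 / total  # combined estimate is tighter than any single cue
    return mean, variance

# Two cues: a confident one near 1.0 and a noisier one near 2.0.
mean, var = integrate_cues([1.0, 2.0], [0.1, 0.4])
# The combined mean (1.2) lands much closer to the more reliable cue.
```

The group version of the question is exactly this computation, except the "cues" now live in different heads and must be communicated.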
Well, in a recent article [1], Bahrami et al. showed that in perceptual visual tasks, joint decisions are indeed better than individual decisions, but only under certain conditions. In particular, they summarize their research by saying "For two observers of nearly equal visual sensitivity, two heads were definitely better than one, provided they were given the opportunity to communicate freely, even in the absence of any feedback about decision outcomes. But for observers with very different visual sensitivities, two heads were actually worse than the better one."
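If I read the paper's "weighted confidence sharing" model correctly, a pair that freely shares confidence achieves a joint sensitivity of roughly (s1 + s2)/√2 (treat that formula as my assumption from a reading of [1], not gospel). A few lines of arithmetic show why both halves of the quoted summary fall out of it:

```python
import math

# Sketch of a weighted-confidence-sharing prediction (my reading of [1];
# the formula is an assumption): a dyad sharing confidence reaches
# sensitivity (s1 + s2) / sqrt(2).

def dyad_sensitivity(s1, s2):
    return (s1 + s2) / math.sqrt(2)

# Nearly equal observers: the dyad beats either individual.
equal = dyad_sensitivity(1.0, 1.0)    # sqrt(2) ≈ 1.41 > 1.0

# Very unequal observers: the dyad is worse than the better one alone.
unequal = dyad_sensitivity(1.0, 0.2)  # ≈ 0.85 < 1.0
```

The weaker observer's noisy opinion drags the joint decision down faster than their occasional correct dissent helps it.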
Some key phrases got me thinking here:
(1) First, the observers must be able to communicate freely with each other. The opinion piece that discusses the research focuses particularly on how each observer communicates the confidence they have in their evidence [2]. This is interesting because we know that in many group decision-making tasks, people project confidence in their choices in varied ways, often for reasons of ego, power, or authority. The experiment here presumes decisions made jointly by peers, rather than within any sort of hierarchical relationship. What happens if these assumptions are violated?
(2) The result also speaks of observers with nearly equal sensitivity. In non-perceptual tasks, it is hard to say whether two people are using the same scale to evaluate things. For example, how do we know that all product reviewers use a rating scale in similar ways? Does a "5 star" rating from one user really mean the same thing as a "5 star" from another? How do we ensure they have the same sensitivity?
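One common workaround for this calibration problem (my aside, not from the article) is to z-score each reviewer's ratings before comparing them, so a "harsh" reviewer and a "generous" reviewer land on a common scale:

```python
# Put reviewers who use a rating scale differently onto a common scale
# by standardizing each reviewer's ratings (zero mean, unit variance).
from statistics import mean, stdev

def zscore(ratings):
    m, s = mean(ratings), stdev(ratings)
    return [(r - m) / s for r in ratings]

harsh = [1, 2, 2, 3]      # a reviewer who never gives 5 stars
generous = [3, 4, 4, 5]   # a reviewer who never gives 1 star
# After z-scoring, the two reviewers' rating patterns coincide exactly.
```

Of course, this only fixes scale offsets; it says nothing about whether the two reviewers are equally *sensitive*, which is exactly the harder question the paper raises.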
Much has been made of human computation. In fact, I'm on the committee of a recent workshop on exactly this topic. What do we really know about human decision making, or human perceptual capabilities? What do we know about how to combine their computations and their decisions? Do we know under what conditions crowdsourcing works or fails? These are basic questions that we must answer. That's why basic research is needed.
PS: Don't get me started on how we need to encourage brokerage of information between basic research fields.
References
[1] Bahrami et al., Optimally Interacting Minds, Science.
[2] Decisions Made Better