
Communications of the ACM

BLOG@CACM

CHI 2010: User Interfaces Learn a Thing or Two


Michael Bernstein

MIT Ph.D. student Michael Bernstein (photo by Jason Dorfman)

The debates were notorious in the human-computer interaction (HCI) literature: Shneiderman vs. Maes, or Smart Design vs. Smart Computers. Should designers embed artificial intelligence in user-facing tools? Mud-slinging followed on both sides: "Look at the Microsoft Office Assistant, Clippy! What a failure!"; "We'll never get anywhere if we keep computers from doing what they're getting good at!" Factions formed; conferences appeared to forge connections between like-minded individuals.

Recently, however, there has been increasing synthesis of the two approaches. One strand embeds intelligence into novel interactions: Sikuli (a best paper winner), Prefab (another best paper winner), and CueFlik put machine vision to work to support GUI scripting, reverse-engineering of user interfaces, and interactive image search, respectively. New faculty at Harvard work directly at the intersection of AI and UI. Another strand gives users direct control over the algorithms themselves: ManiMatrix, for example, lets users specify a desired confusion matrix and maps it back onto the classifier's learning parameters.
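
To make that last idea concrete, here is a minimal sketch of a ManiMatrix-style mapping, assuming a hypothetical binary classifier that outputs confidence scores: the system sweeps the classifier's decision threshold until the confusion matrix meets a user-specified cap on false negatives. The function, scores, and labels below are illustrative assumptions of mine, not the published ManiMatrix algorithm.

import numpy as np

def tune_threshold(scores, labels, max_false_negatives):
    # Sweep candidate thresholds from high to low and return the
    # highest one whose confusion matrix satisfies the user's cap on
    # false negatives; the highest such threshold also keeps false
    # positives as low as possible.
    for t in sorted(set(scores), reverse=True):
        preds = scores >= t
        false_negatives = np.sum(labels & ~preds)
        if false_negatives <= max_false_negatives:
            return t
    return None  # no threshold satisfies the constraint

# Hypothetical classifier scores and ground-truth labels.
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.35, 0.2, 0.1, 0.05])
labels = np.array([True, True, False, True, True, False, False, True, False, False])

print(tune_threshold(scores, labels, max_false_negatives=1))  # 0.55

Here the single threshold stands in for the learning parameters; the point is the direction of the interaction, from a desired confusion matrix back to the model.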

What's more, the community seems to be engaging deeply with the technical content of this research. Audience questions at the conference intelligently critiqued the assumptions behind the machine learning algorithms being employed: Are the models overfitting? Do they suffer from the curse of dimensionality? How do they hold up under cross-validation?
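
To give a flavor of what those cross-validation questions probe, here is a small self-contained sketch that compares training accuracy against k-fold cross-validated accuracy for a toy nearest-centroid classifier; a large gap between the two is a classic sign of overfitting. The classifier, data, and split are my own illustrative assumptions, not the setup of any particular CHI paper.

import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    # Label each test point with the class of its nearest class centroid.
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def overfitting_gap(X, y, k=5, seed=0):
    # Accuracy on the data the model was fit to...
    train_acc = np.mean(nearest_centroid_predict(X, y, X) == y)
    # ...versus accuracy averaged over k held-out folds.
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), k)
    cv_accs = []
    for i in range(k):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        preds = nearest_centroid_predict(X[train_idx], y[train_idx], X[folds[i]])
        cv_accs.append(np.mean(preds == y[folds[i]]))
    return train_acc, float(np.mean(cv_accs))

# Synthetic two-class data; a wide train/CV gap would suggest overfitting.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(overfitting_gap(X, y))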

Is the debate settled? Certainly not. But it's clear that the role of machine learning techniques in user interface design is on the upswing.

Michael Bernstein is a Ph.D. student in the Computer Science and Artificial Intelligence Lab at MIT. You should follow him on Twitter.


 
