Andy van Dam has been on the faculty at Brown University for more than 50 years. A committed mentor and educator, he co-founded the university's computer science department and served as its first chairman; he still teaches undergraduates. His research has been formative to the field of interactive computer graphics—from the Hypertext Editing System (or HES, co-designed with Ted Nelson), which used interactive displays to create and visualize hypertext, to Fundamentals of Interactive Computer Graphics, a book he co-wrote with James Foley that later became the widely used reference Computer Graphics: Principles and Practice.
You got your Ph.D.—one of the first formal computer science Ph.D.'s ever awarded—at the University of Pennsylvania.
I'd gone to Penn to do electronics engineering. The year I entered, the engineering school launched a new track in computer and information science. My officemate, Richard Wexelblat, and I took a course from Robert McNaughton, who was what we'd now call a theoretical computer scientist. It had a little bit of everything—from programming to automata theory. I fell in love, and decided to enter the new track.
How did you get into graphics?
I saw Ivan Sutherland's still-great movie about Sketchpad, which is one of the top half-dozen Ph.D. dissertations in terms of the impact it had—seeing it changed my life.
You are referring to the film that showcased Sketchpad, the groundbreaking computer program Ivan Sutherland wrote in 1963 for his Massachusetts Institute of Technology (MIT) dissertation at MIT's Lincoln Laboratory.
Exactly. This was the era of mainframes and the beginning of minicomputers, and Sketchpad introduced two important innovations.
First was interactivity. In those days, you had to use a keypunch to make a deck of 80-column punch cards, or use a Teletype to create paper tape, and then feed them to the computer. When your job was run, the computer would grind away and eventually print something on fan-fold striped paper, typically a report that your program had bombed, followed by a memory dump.
And instead of this painfully slow cycle of submit/resubmit, where you could get one or two runs a day on a mainframe serving an entire organization like a university, here's this guy sitting at an interactive display, manipulating what looks a bit like an organ console with lots of buttons, dials, and switches. With his left hand, he's playing on a panel of buttons, and with his right hand he's manipulating a light pen, and it looks like he's drawing directly on the screen. With the push of a button, he causes the rough drawing to be straightened out in front of your eyes. He designs a circuit or a mechanical drawing in a matter of minutes. Parts can be replicated instantly. And that's the second important innovation: he is manipulating graphical diagrams directly instead of having to work through code and coordinates. I was awestruck.
After you got your Ph.D., in 1965, you went to Brown, where you have been ever since.
Brown has been my home for more than 50 years, in no small part because of its emphasis on undergraduate teaching. I'm grateful I'm in this field that's still booming, where students can get fascinating jobs and have significant impact.
You also still teach undergraduate 101-level computing.
I have for 50+ years. The course has, of course, morphed over the years, but it still gives me great pleasure to turn newbies on to the field.
You and your students, working with Ted Nelson, designed one of the first hypertext systems, HES, in the late 1960s. I understand you're now working on your seventh hypermedia system.
The idea is to let you gather information from a variety of sources—the Web, PowerPoint decks, Excel spreadsheets, Word documents—and live-link them to extracts on your unbounded 2D workspace. You can group things, name those groups, hyperlink between notes and documents (where "hyperlinks" are first-class objects with metadata), hyperlink between hyperlinks, and annotate the notes, documents, and hyperlinks. It's a very rich system for gathering and organizing information. The part we're going to start working on soon is figuring out how to crawl through all that data to make some sort of linear or even branching narrative. Prezi provides a simple example of what we have in mind.
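(The data model described above, in which links are objects in their own right, lends itself to a small sketch. The TypeScript below is purely illustrative; all names and fields are hypothetical, not taken from the group's actual system.)

```typescript
// A minimal, hypothetical sketch of a hypermedia model in which hyperlinks
// are first-class objects with metadata, and can themselves be link endpoints.

type NodeId = string;

// Anything that can be an endpoint of a link: a note, a document extract,
// a named group, or another link. Every object can carry annotations.
interface Linkable {
  id: NodeId;
  annotations: string[];
}

interface Note extends Linkable {
  kind: "note";
  text: string;
}

interface DocumentExtract extends Linkable {
  kind: "extract";
  sourceUrl: string;   // live link back to the Web page, deck, spreadsheet, etc.
  excerpt: string;
}

interface Group extends Linkable {
  kind: "group";
  name: string;
  members: NodeId[];
}

// The key idea: a Hyperlink extends Linkable, so links carry metadata,
// can be annotated, and can be the source or target of other links.
interface Hyperlink extends Linkable {
  kind: "link";
  from: NodeId;
  to: NodeId;
  metadata: Record<string, string>;  // e.g., author, creation date, relation type
}

type WorkspaceObject = Note | DocumentExtract | Group | Hyperlink;

// The unbounded 2D workspace is then just a store of objects plus positions.
interface Workspace {
  objects: Map<NodeId, WorkspaceObject>;
  positions: Map<NodeId, { x: number; y: number }>;
}
```

Because Hyperlink is itself a Linkable in this sketch, link-to-link hyperlinks and annotations on links fall out of the type design for free, rather than needing special cases.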
You also have done a lot of work in so-called post-WIMP (windows, icons, menus, and pointer devices) user interfaces, trying to create ways to go beyond the standard keyboard-and-mouse paradigm.
The WIMP user interface has many limitations. It typically isn't driven by speech, though there are now multiple ways of using speech to input raw text. It's two-dimensional, and there are many situations in which you simply don't walk around with a keyboard and mouse, such as virtual and augmented reality.
Most people today think of what lies beyond WIMP as touch.
That's absolutely one of the powerful addenda you can have for a WIMP GUI. But if you look at how you use your smartphone, the finger is in many cases just a substitute for the mouse. You're still clicking on targets to select them, or using swipe, flick, and pinch-zoom gestures. So this by-now "universal" gesture vocabulary is very limited.
But people are capable of learning dozens of gestures.
One of the earliest things our group did when tablet PCs came out around 2000 was to create a scribble gesture. I don't even have to tell you how to do it—you scribble over something and that deletes it. We had a gesture for lassoing things, which is now available on some WIMP interfaces. Undo-redo is a simple back-and-forth gesture, and so on.
But the dream of all of us is that you should use not just your hands but your voice, and the system should know where you're looking, and it should know much more about your intent and your overall context. In the late '70s, MIT's Architecture Machine Group, a forerunner of the Media Lab, produced a video of a system called "Put-That-There" that was as inspirational as Sketchpad in showing how such smart, multimodal UIs could work.
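(Returning to the scribble gesture mentioned above: one common way to recognize a scribble, not necessarily the group's method, is to look for a stroke that reverses direction many times while packing a lot of ink into a small area. A minimal TypeScript sketch of that heuristic:)

```typescript
// A rough, illustrative heuristic (an assumption, not the group's actual
// recognizer): classify a pen stroke as a "scribble" if it reverses
// horizontal direction several times and its total ink length is large
// relative to its bounding box.

interface Point { x: number; y: number; }

function isScribble(stroke: Point[], minReversals = 4): boolean {
  if (stroke.length < 3) return false;

  // Count sign changes in the horizontal direction of travel.
  let reversals = 0;
  let prevDx = 0;
  for (let i = 1; i < stroke.length; i++) {
    const dx = stroke[i].x - stroke[i - 1].x;
    if (dx !== 0) {
      if (prevDx !== 0 && Math.sign(dx) !== Math.sign(prevDx)) reversals++;
      prevDx = dx;
    }
  }

  // Compare total ink length to the bounding-box diagonal: a scribble
  // packs a lot of ink into a small area.
  const xs = stroke.map(p => p.x), ys = stroke.map(p => p.y);
  const diag = Math.hypot(Math.max(...xs) - Math.min(...xs),
                          Math.max(...ys) - Math.min(...ys));
  let inkLength = 0;
  for (let i = 1; i < stroke.length; i++) {
    inkLength += Math.hypot(stroke[i].x - stroke[i - 1].x,
                            stroke[i].y - stroke[i - 1].y);
  }
  return reversals >= minReversals && inkLength > 3 * diag;
}

// A gesture system would then delete whatever lies under the scribble's
// bounding box.
```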
What are some of the applications for the interfaces you and your research group have built?
Most are educational, and all were designed by students and other researchers in my group. The Music Notepad lets you draw notes and play them back through a MIDI-driven synthesizer. MathPad lets you handwrite mathematics and manipulate and solve equations, draw diagrams, create simple 2D animations to show the workings of the system of equations, and so forth. With ChemPad, you can draw two-dimensional molecule diagrams and it will turn them into three-dimensional ball-and-stick diagrams that you can tumble and get various kinds of information about.
We're still working on various sketching applications. In fact, one of our sponsors, Adobe, has decided to include our latest sketching program, called Shaper, as a plugin for Adobe Illustrator.
Another application your group created is the Touch Art Gallery, or TAG.
TAG is a platform for inputting artworks digitized at high resolution and letting users explore and annotate them with familiar touch and pen (for precise drawing) gestures. From the beginning, we specialized in handling the largest possible artworks, including what we believe to be the largest artwork ever digitized, the AIDS Memorial Quilt, which is 22 acres in size. It's been exhibited in pieces, but it's too big to really take in. Through TAG, you can use large touch displays and go all the way from an overview, where the screen is filled with almost indistinguishably small tiles, to zooming in on the details of an individual panel, which is the size of a grave.
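(Zooming smoothly from a 22-acre overview down to one panel is typically done with a multiresolution tile pyramid, as in Deep Zoom-style viewers; whether TAG works exactly this way is an assumption. A TypeScript sketch of the standard arithmetic:)

```typescript
// A sketch of the tile-pyramid arithmetic behind deep-zoom viewers (an
// assumption about TAG's approach, not its documented internals). Level 0
// is a single tile; each level doubles the resolution, so only the tiles
// visible in the viewport at the current zoom need to be fetched.

const TILE_SIZE = 256; // pixels per tile edge, a common choice

// Number of pyramid levels needed to cover an image of the given size.
function levelCount(widthPx: number, heightPx: number): number {
  const maxDim = Math.max(widthPx, heightPx);
  return Math.ceil(Math.log2(maxDim / TILE_SIZE)) + 1;
}

// Which tiles are visible at a given level for a viewport expressed in
// full-resolution image coordinates? (Clamping to the image bounds is
// omitted for brevity.)
function visibleTiles(
  level: number,
  maxLevel: number,
  viewport: { x: number; y: number; width: number; height: number }
): { col: number; row: number }[] {
  const scale = 2 ** (maxLevel - level);   // each level up halves resolution
  const tileSpan = TILE_SIZE * scale;      // image pixels covered by one tile
  const firstCol = Math.floor(viewport.x / tileSpan);
  const lastCol = Math.floor((viewport.x + viewport.width) / tileSpan);
  const firstRow = Math.floor(viewport.y / tileSpan);
  const lastRow = Math.floor((viewport.y + viewport.height) / tileSpan);
  const tiles: { col: number; row: number }[] = [];
  for (let row = firstRow; row <= lastRow; row++) {
    for (let col = firstCol; col <= lastCol; col++) {
      tiles.push({ col, row });
    }
  }
  return tiles;
}
```

The design point is that the cost of panning and zooming depends only on the viewport size, never on the full size of the artwork, which is what makes gigapixel-scale works like the Quilt browsable on an ordinary display.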
Each panel contains not just fabric, but objects and mementos like photographs, toys, souvenirs ...
When you're dealing with something that is loaded with emotional meaning, your ability to interact with it not just visually but tactilely is very important. People want to touch art, and they can't in a museum; they can't even get close enough to really see the details. TAG is currently being used in an exhibition in Singapore by the Nobel Foundation, featuring the terms of Alfred Nobel's will, interactive tours of his life, his associates, and his factories and houses and, of course, a gallery of all 900 (Nobel) laureates.
In a sense, this work brings you back to your original interests in computer-driven displays and their use in human-computer interaction.
Indeed. I was always interested less in hardware and software than in the interaction. One of my earliest published papers, in 1966, was "Computer Driven Displays and Their Use in Man-Machine Interaction."
In France, computer science is called informatique. The emphasis is not on the computer, which is a machine, after all, but about what you do with the computer, which is manage and visualize information. It's been a fantastic journey to see computer graphics evolve from an arcane technology accessible to a handful of specialists to completely integral to the world's daily consumption and production of information, communication, entertainment and, increasingly, education.