On the morning of September 11, while I was still reeling with shock and disbelief, I returned to my room to find a telephone message from a colleague, Miriam Walker. Not content only to watch the awful news unfold on television, she had decided it was time to take constructive action. Her housemates had tried to call New York to check on friends and could not get through, so she wanted to set up a Web site where people could register themselves or those they had contacted so that others could be assured of their safety.
I was galvanized by the idea and hurried to my office to start working with her on the project. She showed me her design; I began programming and setting up a simple database. With the help of Eric Fraser and Jennifer Mankoff, we had a site up (safe.millennium.berkeley.edu) and accepting survivor reports just after noon that day.
The site provided a reporting form where people could submit information on a survivor, and a search form where people could enter a name and view matching records. The reporting form asked for identifying information on the survivor (name, birth date, zip code, part of a phone number, and any additional details in free text), the name of the submitter, an indication of reliability (whether the person had been spoken to directly or identified by an official source), and a message describing the survivor's status. At first, only the name was required; all the other fields were optional.
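To make this concrete, here is a minimal sketch of the kind of table behind such a form. It uses Python's bundled sqlite3 module in place of the MySQL database we actually ran, and the field names are illustrative rather than our production schema:

```python
import sqlite3

# Illustrative reports table; the fields mirror the reporting form
# described above, but this is a sketch, not our production schema.
conn = sqlite3.connect("registry.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS reports (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,       -- initially the only required field
        birth_date TEXT,
        zip_code TEXT,
        phone_part TEXT,
        details TEXT,             -- free-text identifying details
        submitter TEXT,           -- name of the person filing the report
        reliability TEXT,         -- spoken to directly, or official source
        message TEXT,             -- the survivor's status
        submitted_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```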
Our next step was to announce the site as widely as possible to get people using it and registering names on it. We tried to contact any official groups, news agencies, radio stations, or major Web sites we could find, but it was difficult to get the message to the right people. Web sites avoided giving out telephone contact numbers, the people we could contact were uncertain how to publicize our site, and there was no central organizer of disaster-related information. But with some help from the people at BusinessWire, we finally got a press release out on the wire by that evening, and shortly afterward the link appeared on high-profile sites like CNN.com and Yahoo.com. There were over 130,000 hits on the site on the first day, and over half a million hits by the next afternoon.
As the survivor reports started flooding in, so did the email. As the author of the pages and programming, I had given my personal email address on the site. I received over 100 messages the first day after the site was running, and about 500 messages within the first week. The night of September 11 would be the first in a series of sleepless nights spent answering email, maintaining the database, and making changes to the site.
Some of the messages were kind words of thanks. Most, however, were desperate pleas for more information on survivor names that people had found in our database. Sadly, there was no more information to give. It was very emotionally trying to communicate directly with people so close to the disaster, and difficult to have to reply that I could not help them.
In some cases, database records contained only a first and last name and nothing else. Consequently, I changed our submission form to require people to submit at least one additional piece of information aside from a name in order to create a report.
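The check itself was simple. A sketch of the idea in Python (the production pages used PHP, and these field names are illustrative):

```python
# A report must contain a name plus at least one other identifying field.
IDENTIFYING_FIELDS = ["birth_date", "zip_code", "phone_part", "details"]

def report_is_acceptable(form):
    """Reject reports consisting of a name and nothing else."""
    if not form.get("name", "").strip():
        return False
    return any(form.get(field, "").strip() for field in IDENTIFYING_FIELDS)
```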
Some people wrote to tell me they had entered names in error. The default page for the site was a reporting form asking for the name and various identifying details of the survivor; a separate link led to the search form. The reporting form was labeled, "Tell us about the person who is known to be safe," and the submission button itself was labeled, "Report that this person is safe." But some visitors still entered the names of missing people, skipped the rest of the form, and clicked the button without reading it, expecting search results.
In addition to removing names as people requested, I added a separate starting page that forced visitors to first make a choice between reporting and searching. I also added a note to the reporting page telling people to use the search page if they were looking for someone. Yet the requests to remove mistaken entries kept coming in. I then added a confirmation page that displayed everything that had been entered and asked users to check it for correctness before saving the report. Later, I still had to add a large red warning to the reporting page asking users to proceed only if they were truly certain they were entering the name of a survivor.
Many of the people who found the names they were seeking wanted to know who had entered the report. But if submitters had not volunteered their names or contact information, there was no way for me to know. I changed the form again to require that submitters enter a name for themselves, and later to require that they also enter some contact information.
Some people wrote to say they had found joking or offensive entries in the database. The names of presidents, famous people, and fictional characters were entered into the database with tasteless or sometimes hateful comments. I removed these entries as soon as I discovered them, and started collecting and displaying the IP addresses of submitters in the hope that this would cause people to take the site more seriously.
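Capturing the address required no cooperation from the submitter; the web server supplies it with every request. A sketch in a CGI setting (our actual pages were PHP, which exposes the same value):

```python
import os

def submitter_address():
    # The web server sets REMOTE_ADDR for each request under CGI.
    return os.environ.get("REMOTE_ADDR", "unknown")
```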
We were not the only group to construct a survivor registry, nor were we the first to do so. The first such registry, created by Bill Shunn, appeared an astonishingly short time after the attacks, hours before ours was ready. We became aware of it only after we had started our own project, and asked Shunn to post a link to our site. At least half a dozen other unofficial survivor registries were soon built by resourceful people across the country working independently, all carrying different kinds of information. In an attempt to coordinate all of this data, I began writing Python scripts to automatically gather records from the other registries into our database.
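A much-simplified, modernized sketch of what one such gathering script might look like; the URL, the CSV export format, and the remote field names here are invented for illustration, since every registry exposed its data differently:

```python
import csv
import io
import urllib.request

REGISTRY_URL = "http://example.org/registry-export.csv"  # hypothetical

# Map another registry's column names onto our own schema.
FIELD_MAP = {"full_name": "name", "home_town": "details", "status": "message"}

def fetch_records(url=REGISTRY_URL):
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8", errors="replace")
    for row in csv.DictReader(io.StringIO(text)):
        record = {ours: row.get(theirs, "").strip()
                  for theirs, ours in FIELD_MAP.items()}
        record["origin"] = url  # remember where each record came from
        yield record
```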
Getting all of the data into one database was a challenge. Some registries listed hometowns; others listed a current location; others provided an indicator of reliability. Most, like ours, were unofficial and contained unverified data. One of the other registries collected many fields of detailed information about a survivor, but most tended to collect very little information.
In particular, the largest registry, at ny.com, collected only names, and there were reported incidents of incorrect names appearing on their list (probably also due to people who misunderstood their reporting form). Consequently, adding all the other data to our site was not necessarily an improvement. Soon after incorporating data from other sites, I began to receive more messages asking for the origin and submitter of records. Of greater concern were complaints that some names in the database belonged to people who had not survived. None of these complaints involved records that originated at our registry, only records gathered from other registries, but they still made me very concerned about the accuracy of reports entered at our site.
Abusive and hateful messages also appeared on many of the registries. Those that immediately listed all of their data on their main page were easy targets: people could vandalize them by entering fake names that were early in the alphabet, effectively posting a public message at the top of the list. Our registry was not quite as susceptible since people had to enter a part of a name into a search form before they could see any records, but we still had our share of abuse.
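That gating is easy to implement: refuse to return anything until the visitor supplies a plausible name fragment. A sketch against the reports table from the earlier example (again illustrative, not our actual PHP code):

```python
def search(conn, name_fragment):
    """Show records only in response to a genuine name query, so the
    full list is never laid out for vandals to deface."""
    fragment = name_fragment.strip()
    if len(fragment) < 2:
        return []  # refuse to dump the whole database
    return conn.execute(
        "SELECT name, reliability, message FROM reports WHERE name LIKE ?",
        ("%" + fragment + "%",),
    ).fetchall()
```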
Here are some specific suggestions for future disaster information service providers:
Be aware of the magnitude of the undertaking. Choosing to run such a service is a considerable responsibility. Decisions that seemed small to me at first ended up having far-reaching effects, so every action requires careful consideration.
Provide every opportunity to make information accurate. Quality is more important than quantity. During an emergency, the accuracy of information is even more important. Give people chances to confirm and correct entries, and also provide a way to add annotations later so they can provide more detail and keep facts up to date.
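One way to support later corrections, sketched here as a hypothetical annotations table (a recommendation, not a feature our site had): annotations attach to the original report rather than overwriting it, so detail accumulates and the history is preserved.

```python
import sqlite3

conn = sqlite3.connect("registry.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS annotations (
        report_id INTEGER,   -- id of the report being annotated
        note TEXT NOT NULL,  -- correction, confirmation, or extra detail
        annotator TEXT,
        added_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```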
Manage the level of trust in information. Ideally, one would like to know that all information is accurate. However, in a situation such as this, accuracy is compromised in order to provide a freer flow of information. To accommodate this, a good information service must maintain the correspondence between actual reliability and apparent reliability.
The most distress is caused when these two factors are disparate or unknown. A joke record, while offensive, causes little real harm because it is obvious: apparent and actual reliability are equally low. On the other hand, the most complaints came from people who could not estimate the reliability of a record. Even though there was a general statement on the front page about the unofficial nature of the data, people were unhappy when there was no specific indicator of reliability displayed with a particular record.
Therefore, indicators of reliability must be maintained and explicitly provided wherever possible. Apparent reliability can be signaled by statements and warnings where information is entered as well as where it is displayed. When incorporating information from other sources, note each record's origin to preserve traceability. When information cannot be verified, try to encourage or require people to enter as much detail as possible. Completely accurate information can be useless when there is insufficient distinguishing detail (for example, a report mentioning only a common name, such as "John Smith is safe," is unhelpful). Provide free-form text fields so there is always a place for extra details.
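In code, the indicator can simply travel with each record all the way to the display, with an explicit caution when it is absent. A sketch (the labels are illustrative):

```python
RELIABILITY_LABELS = {
    "direct": "submitter spoke to this person directly",
    "official": "reported by an official source",
}
UNKNOWN = "unverified report; treat with caution"

def format_record(record):
    # Show an explicit reliability label and the record's origin, falling
    # back to a caution when no indicator was supplied.
    label = RELIABILITY_LABELS.get(record.get("reliability"), UNKNOWN)
    origin = record.get("origin", "this registry")
    return "%s: %s (source: %s)" % (record["name"], label, origin)
```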
Any public service will be abused. The Internet is a big place and there will always exist people who have too much free time and strange methods of amusement. Any site that accepts public input with no apparent accountability becomes an easy target for vandalism.
A simple economic model can estimate the likelihood of abuse in terms of payoff versus risk. The perceived payoff from abuse is the degree of impact per unit of effort; the perceived risk is the probability of being caught multiplied by the cost of being caught. So to reduce abuse, one can decrease impact (for example, by not displaying all records immediately), increase the effort required (by requiring more fields to be filled in the reporting form), increase the perceived probability of being caught (by logging and displaying IP addresses), or increase the perceived cost of being caught (by threatening severe punishment). Note that for these factors, perception is more important than reality.
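The model can be made concrete with a toy calculation; the linear form and the numbers below are illustrative assumptions, not measurements:

```python
def abuse_attractiveness(impact, effort, p_caught, cost_caught):
    payoff = impact / effort       # perceived payoff per unit of effort
    risk = p_caught * cost_caught  # perceived risk of abusing the site
    return payoff - risk           # abuse looks worthwhile when positive

# Requiring more form fields raises effort; logging and displaying IP
# addresses raises the perceived probability of being caught.
print(abuse_attractiveness(impact=10, effort=1, p_caught=0.01, cost_caught=5))
print(abuse_attractiveness(impact=10, effort=4, p_caught=0.30, cost_caught=5))
```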
Do not expect that instructions will be followed. Especially in times of great stress, people will not take the time to read instructions, even those that might seem too simple or obvious to be overlooked. Try to design a user interface that needs no instructions. Ideally, design one that can be learned and understood during the act of using it.
Plan ahead for user support. Handling all of the email was upsetting and exhausting, and the urgency with which I had to respond prevented me from fixing other problems on the site. It's best to avoid posting one's personal email address; set up a dedicated mailbox, then solicit help and delegate support tasks if possible.
Establish a central information hub ahead of time. To make relief and recovery efforts effective, someone needs to coordinate all of the people and information. Our attempts to announce the survivor registry were hampered by the lack of an official information center. Since the disaster, I have also received many messages from people eager to offer help, but who cannot find any coordinating organization to put them in contact with those in need.
All of us would like to express our heartfelt sympathies to everyone affected by the terrorist attacks in New York and Washington. This article is written in the hope that this experience can benefit others dealing with disaster situations, though we hope never again to face a tragedy of this magnitude.
This project was only possible because of the ready availability of open-source tools (Apache, PHP, Python, and MySQL) that could be easily installed and deployed. In addition to the people mentioned previously, David Waters, Becca Middleton, Josh Levenberg, and Primrose Boynton worked hard to help us maintain the site.
The survivor registry was run on the Millennium Cluster at UC Berkeley. Equipment for the Millennium Cluster was sponsored by the NSF.