Researchers at MIT's Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute have developed new tools that allow people with minimal programming skill to rapidly build cellphone applications that can help with disaster relief.
The tools are an extension of the App Inventor, open-source software that enables nonprogrammers to create applications for devices running Google's Android operating system by snapping together color-coded graphical components. Based on decades of MIT research, the App Inventor was initially a Google product, but it was later rereleased as open-source software managed by MIT.
With the new tools, an emergency aid worker — or anyone else, for that matter — could, for instance, build an application to monitor many different data sources on the Internet for updated information about the locations of ad hoc shelters, and display them all on a Google map. The app could also allow individual users to revise, annotate, or supplement the information displayed in the map.
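As a rough illustration of what such an app would do under the hood, the Python sketch below polls a SPARQL endpoint for shelter names and coordinates. The endpoint URL and the shelter vocabulary are hypothetical placeholders invented for this example, not part of the MIT tools; only the W3C geo terms are real.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint; a real relief agency would publish its own.
endpoint = SPARQLWrapper("https://relief.example.org/sparql")
endpoint.setQuery("""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    PREFIX ex:  <http://relief.example.org/ns#>

    SELECT ?name ?lat ?long WHERE {
        ?shelter a ex:EmergencyShelter ;   # hypothetical class
                 ex:name ?name ;
                 geo:lat ?lat ;
                 geo:long ?long .
    }
""")
endpoint.setReturnFormat(JSON)

# Each result binding would become one pin on the map.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], row["lat"]["value"], row["long"]["value"])
```

Because every data source speaks the same RDF vocabulary, the same loop can merge results from many endpoints onto a single map without per-source parsing code.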
The researchers presented their new tools in a paper, "Democratizing Mobile App Development for Disaster Management," at the IJCAI 2013 Workshop on Semantic Cities last month in Beijing. The MIT researchers on the paper — principal research scientist Lalana Kagal, graduate students Oshani Seneviratne, Daniela Miao, and Fu-ming Shih, and postdoc Ilaria Liccardi — are all members of CSAIL's Decentralized Information Group (DIG).
DIG shares office space in MIT's Stata Center with the World Wide Web Consortium (W3C), the organization that establishes Web standards like the hypertext markup language (HTML) and the extensible markup language (XML). Tim Berners-Lee, the Web's inventor, heads the W3C, but in his capacity as 3Com Founders Professor of Engineering at MIT, he also directs DIG.
DIG's focus is research that takes advantage of the standards developed by the W3C. The new app-development tool requires that the data it accesses be formatted according to the resource description framework, or RDF.
RDF is the central standard of the so-called Semantic Web, which would, in effect, convert the Web from a giant text file into a giant database. RDF provides a simple way both to label data items at different locations on the Web and to describe the relationships among them. A standard Google search might find every Web page on which the phrases "restaurant" and "Penn Station" appear, including e-books in which they're thousands of words apart and the website of a restaurant chain that happens to be called "Penn Station." A Semantic Web search, by contrast, could retrieve just the pertinent sections of just those sites describing restaurants that sit within a mile of the geographic coordinates of New York's Penn Station, stay open past 10 p.m., and offer vegetarian entrees.
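To make that distinction concrete, here is a toy example using the rdflib Python library: a few RDF triples describe a restaurant near Penn Station, and a SPARQL query selects it by its properties rather than by keyword matching. Everything except the W3C geo vocabulary is invented for illustration.

```python
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
    @prefix ex:  <http://example.org/ns#> .
    @prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .

    ex:cafe1 a ex:Restaurant ;
        ex:name "Ninth Ave Diner" ;
        ex:closingHour 23 ;              # i.e., open past 10 p.m.
        ex:hasVegetarianEntrees true ;
        geo:lat 40.7506 ;
        geo:long -73.9935 .
""")

# Ask for restaurants by meaning, not by string matching.
results = g.query("""
    PREFIX ex: <http://example.org/ns#>
    SELECT ?name WHERE {
        ?r a ex:Restaurant ;
           ex:name ?name ;
           ex:closingHour ?h ;
           ex:hasVegetarianEntrees true .
        FILTER (?h >= 22)
    }
""")
for row in results:
    print(row.name)
```

A production query would also filter on geo:lat and geo:long to enforce the one-mile radius, but the principle is the same: the query matches the structure of the data, not the words on a page.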
Since the RDF standard was first released in 2004, its adoption has been slow but steady. Companies like IBM and Sears, media outlets like The New York Times and the BBC, and public information sources like airport websites and the PubMed index of medical-journal articles all use RDF. But perhaps more importantly for the new disaster-response tool, so do many government agencies. The U.S. government's data.gov site, its counterparts in many other countries, and the websites of agencies like the Securities and Exchange Commission, the Census Bureau, and the National Science Foundation all publish data in RDF.
Kagal, however, hopes that new tools like the disaster-response application she and her colleagues developed will accelerate the adoption of RDF. "We're hoping that we'll have a kind of cyclic effect," she says. "As people use these apps more, they will automatically generate structured data. And as there's more structured data out there, there will be people building more apps to consume them, which will in turn generate more structured data."
"When you have a disaster, there are two issues," says Jim Handler, director of the Institute for Data Exploration and Applications at Rensselaer Polytechnic Institute's. "One is, 'How do you get the data you need and pull it together?' And two is, 'How do you put that in the hands of the person who needs it?' And this project is one of the first to really approach both parts of the problem together."
Hendler points out that the new tools do require an application developer to know something about SPARQL, the query language used to retrieve linked data. But, he adds, with the tools, "Someone who knows how to develop apps can much more quickly develop something and move it to a mobile platform. You may need some basic knowledge, but the tool's going to make it possible to do it much faster and easier." Moreover, Kagal says, DIG is currently working on user-friendly software that will enable people to compose SPARQL queries without knowing anything about the language's syntax.
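A front end of the kind Kagal describes might, in rough outline, work like the sketch below, which assembles a syntactically valid SPARQL query from plain form inputs so the user never types SPARQL. This is an illustration of the idea, not DIG's actual software, and the ex: vocabulary is a placeholder.

```python
def build_query(rdf_class: str, wanted: list[str], filters: dict[str, str]) -> str:
    """Assemble a SPARQL SELECT query from simple form-field values."""
    select = " ".join("?" + p for p in wanted)
    patterns = " ;\n          ".join(f"ex:{p} ?{p}" for p in wanted)
    constraints = "\n      ".join(f'FILTER (?{p} = "{v}")' for p, v in filters.items())
    return (
        "PREFIX ex: <http://example.org/ns#>\n"
        f"SELECT {select} WHERE {{\n"
        f"      ?item a ex:{rdf_class} ;\n"
        f"          {patterns} .\n"
        f"      {constraints}\n"
        "}"
    )

# For example, "shelter names in Brooklyn" becomes a complete query:
print(build_query("EmergencyShelter", ["name", "borough"], {"borough": "Brooklyn"}))
```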
In the field of Semantic Web technologies, Hendler says, "The holy grail is pulling together apps like this very generally across all sorts of information sources. This is a good step in the right direction."