
Communications of the ACM

blog@CACM

Software Engineering, Smartphones and Health Systems, and Security Warnings



Software release cycles are usually long, measured in months, sometimes in years. Each of the stages—requirements, design, development, and testing—takes time.

Recently, some of the constraints on software deployment have changed. In Web software, deployment is to your own servers, nearly immediate, and highly reliable. On the desktop, many of our applications routinely check for updates on each use and patch themselves. It is no longer the case that getting new software out to people is slow and inconsistent. The likelihood of a reliable, fast Internet connection on most machines has made it possible to deploy software frequently.

But just because a thing is possible does not mean it is desirable. Why would we want to deploy software more frequently? Is it not better to be careful, slow, and deliberate about change?

The main reason to consider frequent deployments is not the direct impact of getting software out to customers more quickly, but the indirect impact internally. Frequent releases force changes in how an organization develops software. These changes ultimately reduce risk, speed development, and improve the product.

For example, consider what is required to deploy software multiple times per day. First, you need to build new deployment tools that can rapidly push out new software, handle thousands of potential versions while enforcing consistency, and allow rapid rollbacks in case of problems.
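To make this concrete, here is a minimal sketch of the bookkeeping such tooling needs, assuming a hypothetical Deployer class; a real system would add health checks, consistency enforcement across thousands of machines, and fleet-wide coordination.

```python
# Minimal sketch of staged deployment with rollback (hypothetical API and
# version bookkeeping, not a real deployment system).

class Deployer:
    def __init__(self, fleet):
        self.fleet = fleet              # list of server names
        self.history = {}               # server -> list of deployed versions

    def push(self, version, servers):
        """Push a version to a subset of servers, recording it for rollback."""
        for server in servers:
            self.history.setdefault(server, []).append(version)
            print(f"deploying {version} to {server}")

    def rollback(self, servers):
        """Revert each server to the previously recorded version."""
        for server in servers:
            versions = self.history.get(server, [])
            if len(versions) >= 2:
                versions.pop()          # drop the bad version
                print(f"{server} rolled back to {versions[-1]}")

# Usage: canary on a small slice of the fleet, roll back if monitoring flags a problem.
fleet = [f"web{i}" for i in range(10)]
d = Deployer(fleet)
d.push("v1.0.0", fleet)                 # baseline everywhere
d.push("v1.1.0", fleet[:1])             # canary the new version
d.rollback(fleet[:1])                   # rapid rollback on failure
```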

Software development has to change. With multiple near-simultaneous rollouts, no guarantee of synchronous deployment, and no coordination possible with other changes, all software changes have to be independent and backward-compatible. The software must always evolve.
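As an illustration of what backward-compatible, independent changes look like in practice, here is a small sketch (the record and field names are hypothetical): during a rolling deployment, old and new code read the same data, so a newly added field has to be optional and existing fields have to keep their meaning.

```python
# Illustrative sketch of a backward-compatible change. With rolling
# deployments, old and new code coexist, so a new field must be optional
# and old fields must keep working.

def parse_user(record: dict) -> dict:
    return {
        "id": record["id"],
        "name": record["name"],
        # New optional field: old records (and old writers still running
        # during the rollout) simply omit it, so default rather than fail.
        "timezone": record.get("timezone", "UTC"),
    }

print(parse_user({"id": 1, "name": "Ada"}))                      # old record
print(parse_user({"id": 2, "name": "Lin", "timezone": "PST"}))   # new record
```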

Requirements, design, and testing can be shortened and replaced with online experimentation. To learn more about customer requirements and design preferences, deploy to a small set of customers, test against a larger control group, and get real data on what people want. Bugs are expected and managed as a risk through small deployments, partial deployments, and rapid rollbacks.
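One common way to implement this kind of online experiment is to hash each user into a stable bucket so that a small, fixed slice of customers sees the new behavior while everyone else serves as the control group. The sketch below is illustrative only; production systems add exposure logging, guardrail metrics, and gradual ramp-up.

```python
# Deterministic experiment bucketing: a stable hash of (experiment, user)
# assigns a fixed percentage of users to the new version.

import hashlib

def in_experiment(user_id: str, experiment: str, percent: float) -> bool:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000        # stable bucket in [0, 9999]
    return bucket < percent * 100               # e.g. 5.0 -> buckets 0..499

exposed = sum(in_experiment(str(u), "new_checkout", 5.0) for u in range(100000))
print(f"{exposed / 1000:.1f}% of users see the new version")     # roughly 5%
```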

Compare this to a more traditional development process. Requirements gathering and design are based on small user studies and little data. Software is developed without concern about backward compatibility and must be rolled out synchronously with many other changes. Testing has the goal of eliminating bugs—not merely managing risk—and is lengthy and expensive. When the software does roll out, we inevitably find errors in requirements, design, and testing, but the organization has no inherent capacity to respond by rapidly rolling back the problems or rolling out fixes.

Frequent releases are desirable because of the changes they force in software engineering. They discourage risky, expensive, large projects. They encourage experimentation, innovation, and rapid iteration. They reduce the cost of failure while also minimizing the risk of failure. The result is a better way to build software.

The constraints on software deployment have changed. Our old assumptions on the cost, consistency, and speed of software deployments no longer hold. It is time to rethink how we do software engineering.


From Ruben Ortega's "Smartphones and Health Systems Research at Intel Seattle"

Intel Labs in Seattle, WA, hosted an open house on September 28, 2009 to showcase its research projects (http://seattle.intel-research.net/projects.php). Intel's health systems research encompasses myriad projects that are focused on long-term health monitoring and care systems. Since most people dislike carrying an extra health-dedicated device, the research has focused on adding sensors to the technology people carry with them everywhere—smartphones. The two specific areas with the most potential for near-term change are: (a) using sensors already present in smartphones (accelerometers and GPS) to monitor the movements and mobility of the wearer and (b) building applications that encourage ad hoc team-building and tracking for people to help accomplish their health goals.

Sensor technologies on cell phones can be adapted for long-term tracking of family and loved ones. Accelerometers could be used to identify different kinds of motion and to measure the "gait" of a person's walk. By tracking and measuring that gait over time, the technology could help identify whether someone is moving normally or whether something has changed and the person's walking gait is impaired. The information captured on the device could either be stored and analyzed locally or uploaded to caregivers and health-care providers. Given the ubiquity of cell phones, the extra cost of adding sensors and inputs would be small, since high-volume production should drive the price down.
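As a rough illustration of the kind of gait measurement described here, the sketch below detects steps from accelerometer magnitude and reports cadence and stride-interval variability. The data is synthetic and the detector deliberately simple; a real system would filter noise and run against the phone's actual sensor stream.

```python
# Crude gait sketch: threshold-based step detection on accelerometer
# magnitude, then cadence and stride-interval variability as simple markers.

import math

def step_times(samples, rate_hz=50.0, threshold=11.0):
    """Return timestamps where acceleration magnitude crosses a threshold."""
    times, above = [], False
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and not above:
            times.append(i / rate_hz)
            above = True
        elif magnitude <= threshold:
            above = False
    return times

def cadence_and_variability(samples):
    """Steps per second plus stride-interval spread, a crude gait marker."""
    t = step_times(samples)
    intervals = [b - a for a, b in zip(t, t[1:])]
    if not intervals:
        return 0.0, 0.0
    mean = sum(intervals) / len(intervals)
    var = sum((i - mean) ** 2 for i in intervals) / len(intervals)
    return 1.0 / mean, math.sqrt(var)

# Synthetic walk: a spike in vertical acceleration every half second at 50 Hz.
walk = [(0.2, 0.1, 12.0 if i % 25 == 0 else 9.8) for i in range(500)]
print(cadence_and_variability(walk))   # roughly 2 steps/sec, low variability
```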

A nearer-term application for smartphones would be to use their data-sharing abilities to connect people into social health-support groups. You could easily imagine using an application to create teams of individuals who are working to improve their own health. The most promising first use would be teams that encourage weight loss through ad hoc competitions modeled on TV shows like The Biggest Loser. Using peer pressure, peer support, and real-time feedback, individuals could track how their peers are doing at managing their weight over time. Other potential applications include a tool that tracks compliance across a group of people taking medication or monitoring their insulin levels, and a pregnancy application that connects expectant parents going through the same trials of pregnancy so they can ask, "Is this normal?"

The research being done at the Intel lab is still in the formative stages. However, I am eager to see this technology turned into a product and launched, so that it moves from "good idea" to genuinely useful for its intended customers.


From Jason Hong's "Designing Effective Warnings"

In my last post, "Designing Effective Interfaces for Usable Privacy and Security," I gave an overview of some of the issues in designing usable interfaces for security. Here, I will look at more of the nuts and bolts of designing and evaluating effective user interfaces.

Now, entire Web sites, courses, and books are devoted to how to design, prototype, and evaluate user interfaces. The core ideas—including observing and understanding your users' needs, rapid prototyping, iterative design, fostering a clear mental model of how the system works, and getting feedback from users, through both formal and informal user studies—all still apply.

However, there are also several challenges that are unique to designing interfaces dealing with security and privacy. Let's look at one common design issue with security, namely security warnings.

Computer security warnings are something we see every day. Sometimes these warnings require active participation from the user; for example, dialog boxes that ask the user if they want to store a password. Other times they are passive notifications that require no specific action by the user; for example, letting users know that the Web browser is using a secure connection.

Now, if you are like most people I've observed, you are either hopelessly confused by these warnings and just take your best guess, or you simply ignore most of them. And sometimes (perhaps too often) both of these situations apply.

At least three different design issues are in play here. The first is whether the warning is active or passive. Active warnings interrupt a person's primary task, forcing them to take some kind of action before continuing. In contrast, passive warnings provide a notification that something has happened, but do not require any special actions from a user. So far, research has suggested that passive warnings are not effective for alerting people to potentially serious consequences, such as phishing attacks. However, bombarding people with active warnings is not a viable solution, since people will quickly become annoyed with being interrupted all of the time.
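One way to act on this trade-off is to route warnings by severity, reserving interrupting (active) dialogs for high-risk events and using passive notices for everything else. The severity levels, messages, and thresholds below are illustrative, not drawn from any particular browser or toolkit.

```python
# Sketch of routing warnings by severity: only high-risk events interrupt
# the user; everything else is a non-blocking notice.

from enum import Enum

class Severity(Enum):
    INFO = 1        # e.g., "connection is encrypted"
    CAUTION = 2     # e.g., "certificate expires soon"
    DANGER = 3      # e.g., "likely phishing site"

def warn(message: str, severity: Severity) -> bool:
    """Return True if the user may proceed."""
    if severity is Severity.DANGER:
        # Active warning: interrupt and require an explicit decision.
        answer = input(f"WARNING: {message}. Continue anyway? [y/N] ")
        return answer.strip().lower() == "y"
    # Passive warning: surface it without blocking the primary task.
    print(f"notice: {message}")
    return True

warn("This connection is encrypted", Severity.INFO)
```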

The second design issue is habituation. If people repeatedly see a warning, they will become used to it, and the warning will lose its power. Worse, people will expect the warning and simply swat it away even if that was not their intended action. I know I've accidentally deleted files after confirming the action, only to realize a few seconds later that I had made a mistake.

A related problem is that these warnings have an emergent effect. People have been trained over time to hit "OK" on most warnings just so that they can continue. In other words, while people might not be habituated to your warnings specifically, they have slowly become habituated to warnings.

The third design issue here is defaults. In many cases you, as the system designer, will know more than the user about what they should be doing and which action is safer. As such, warning interfaces need to guide users toward making better decisions. One strategy is providing good defaults that make the likely case easy (e.g., no, you probably don't want to go to that phishing site) while making it possible, but not necessarily easy, to override.
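Here is a minimal sketch of that strategy, with a hypothetical dialog structure: the recommended, safe action is the default, and the risky override is available but never what you get by doing nothing.

```python
# Sketch of a safe-by-default warning dialog (field names are hypothetical).

from dataclasses import dataclass
from typing import Optional

@dataclass
class WarningDialog:
    title: str
    safe_action: str       # what pressing Enter or dismissing the dialog does
    unsafe_action: str     # possible to choose, but never the default

    def resolve(self, user_choice: Optional[str]) -> str:
        # Anything other than an explicit unsafe choice falls back to the safe default.
        if user_choice == self.unsafe_action:
            return self.unsafe_action
        return self.safe_action

dialog = WarningDialog(
    title="Suspected phishing site",
    safe_action="Get me out of here",
    unsafe_action="Ignore this warning and continue",
)
print(dialog.resolve(None))                    # -> "Get me out of here"
print(dialog.resolve(dialog.unsafe_action))    # explicit override still possible
```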


Authors

Greg Linden is the founder of Geeky Ventures.

Ruben Ortega is a technologist and startup enthusiast.

Jason Hong is an assistant professor at Carnegie Mellon University.


Footnotes

DOI: http://doi.acm.org/10.1145/1629175.1629181


©2010 ACM  0001-0782/10/0100  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.


 
