
Communications of the ACM

BLOG@CACM

The Autocracy of Autonomous Systems


Saurabh Bagchi

Autonomous systems are all around us. To large parts of the population, they are like the mythical creatures of yore: possessed of great power, great knowledge, and great control over us. Some in the wide world ask with growing consternation: "Are we still going to be in charge? Are we still the driver at the wheel, literally as well as figuratively?" Or are we slowly, unobtrusively ushering in the autocracy of autonomous systems? Such an autocracy portends a world where decisions are made by these systems, where we humans have ceded control, and where neither the systems nor, by proxy, their developers are held accountable. These deep, dark fears have been voiced eloquently in many culture-defining books: Aldous Huxley's "Brave New World" (1932), George Orwell's "Nineteen Eighty-Four" (1949), and Kurt Vonnegut's "Player Piano" (1952), to name the three at the top of my list.

My belief, based on working on the reliability and security of autonomous systems, is that we are firmly in control, but also that we, as developers of such systems, need to make some conscious decisions to ensure we do not usher in the age of the autocracy of autonomous systems. In this article, I will discuss three aspects of the topic:

  1. What some feared future scenarios are
  2. What we can do technologically to prevent such scenarios
  3. What we can do policy-wise to prevent such scenarios

1) Some Dystopian Scenarios, Please

Jeremy had had a great week, a great way to start the summer of 2050. He had delivered his software module on time and with all the promised functionality. His global team had come through, delivering the code with zero defects and ahead of the competition. The flying car had been in the popular imagination for more than a century, and their company had been the first to deliver on that imagination. It had since become a hypercompetitive industry, with every point of market share painfully won and a host of companies all running at breakneck speed. So it was good that they had shipped the peer-to-peer communication mode of the passenger drone product before anybody else. This would let the drones figure out their flight paths on the go as events unfolded, without careful a priori planning. Better still, the human overlords of these machines would not have to get involved at all, and could not get involved even if they got it into their heads to try.

Jeremy also realized that a great week can turn into a nightmare week with five lines of code. The operators of the passenger drones started getting frantic calls, almost all at the same time. A vehicle would stall in mid-air, and no amount of lever pressing and knob twiddling by the passengers inside would do any good. As the aircraft, or as it had become more common to call such things, simply the vehicles, stalled and hovered above ground, and as the gas gauges moved toward empty, the calls became increasingly frantic. And then there was the robotic voice that came on over the peer-to-peer communication channel, announcing in how many minutes the vehicle would plummet to the ground, destroying all inside and anyone who happened to be in its path. But wait, there was a solution. A moderate amount of money, paid in the digital currency that was the flavor of the day, would get the vehicle back in motion. And it hit Jeremy which five lines of code had left the door open, albeit a tiny crack, for such criminal activity.

So Jeremy tried to go to the code repo, make the change himself (yes, as a team leader, he could still do that), and push it out to the millions of vehicles in the air around the world. The online software upgrade feature would be put to the test, but it should come through. Yet he had a sinking feeling in the pit of his stomach when the code repo rejected his credentials and would not let him in. His efforts to call members of his team on his super-smart calling device did not work either: there was the same robotic voice, telling him his outgoing calling privileges had been suspended. So the vehicles running his code would remain suspended in mid-air until the requisite amount of currency changed digital hands.

So what are some guiding principles for us technologists to avoid dystopian scenarios like the one above, and worse ones? Autonomy will be around us with ever-increasing pervasiveness. How do we make it less a toxic miasma and more an enabling springboard? We will theorize about some high-level principles in the next section.

2) Beating the Autocracy of Autonomous Systems

I believe that we, as technologists, can develop technologies with some guiding principles that will help us avoid plunging into such dystopian scenarios, in most cases. I also believe that, as enlightened technologists, we should try to influence the policy that governs the use of autonomous systems.

Some Technological Guiding Principles

I love thinking of algorithms that can solve a problem (I also love creating practical instantiations of such algorithms, but that is a story for another day). Increasingly these days, the problem I am trying to solve through my work is how to make our interactions more autonomous. A working definition of autonomous operation that I will use here: it uses less of my attention, my cognition, my engagement (pick your level of cognitive functioning), and the job gets done through the mystic working of some computational device. However, I sometimes force myself to think of guiding principles that will increase the likelihood of the greater good for the greater number. Here are a few high-level principles that keep bubbling up across multiple rumination sessions.

Build technology that is usable by many, not just the dominant class or majority class of users.

This means building in features that broaden the user base (obviously, there are commercial reasons for doing this) and leaving out features that raise a barrier for some classes of users. The internationalization of most popular software packages is a success story from the earlier days of software. With autonomous software, there is the added angle that users of widely varying skill levels, whose wellness and, more dramatically, whose lives may depend on the software, should be able to attain the required level of understanding of the technology.
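As a hedged illustration, here is a minimal sketch of the classic internationalization machinery using Python's standard gettext module; the message domain "passenger_drone" and the locale directory are invented for this example, and a real deployment would ship compiled message catalogs for each supported language.

```python
# A minimal sketch of internationalization using Python's standard gettext
# module. The message domain "passenger_drone" and the "locale" directory
# are invented for illustration; real deployments ship compiled .mo
# message catalogs for each supported language.
import gettext

# fallback=True returns the original English strings when no catalog exists,
# so the program degrades gracefully instead of crashing for a new locale.
t = gettext.translation("passenger_drone", localedir="locale",
                        languages=["es"], fallback=True)
_ = t.gettext

print(_("Manual override engaged"))  # rendered in the user's language
```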

Go as far as you can to guard your software against vulnerabilities.

Tautologically obvious, right? Use state-of-the-art static and dynamic verification techniques to reduce the number of vulnerabilities in the autonomous software you ship. This will entail stumbling through various software packages, some of which will be only minimally documented. It will also entail difficult conversations with your product manager, for whom one thing trumps all others: the ship date of the product. Yet this is time and effort we owe to our broader society. If our technology becomes popular, there will be incentives to attack it, and security built into the software is much more effective than security pixie dust sprinkled on as a patch later.
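As one concrete (and hedged) illustration of a dynamic verification technique, here is a minimal property-based test using the Python hypothesis package. The plan_altitude routine and its flight-envelope invariant are hypothetical stand-ins for whatever autonomy code and safety property apply in your own system.

```python
# A minimal sketch of dynamic verification via property-based testing with
# the "hypothesis" package. plan_altitude is a hypothetical autonomy routine;
# the invariant (never command an altitude outside the flight envelope)
# stands in for whatever safety property your system must uphold.
from hypothesis import given, strategies as st

MAX_ALTITUDE_M = 120.0  # assumed operating ceiling for this example

def plan_altitude(current_m: float, target_m: float) -> float:
    """Toy planner: step toward the target, clamped to the flight envelope."""
    step = max(min(target_m - current_m, 5.0), -5.0)
    return min(max(current_m + step, 0.0), MAX_ALTITUDE_M)

@given(st.floats(0.0, MAX_ALTITUDE_M), st.floats(-1000.0, 1000.0))
def test_planner_stays_in_envelope(current_m, target_m):
    # hypothesis generates hundreds of inputs, including adversarial
    # edge cases a hand-written unit test would likely miss.
    commanded = plan_altitude(current_m, target_m)
    assert 0.0 <= commanded <= MAX_ALTITUDE_M
```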

Use the highest-grade security techniques that you can afford when storing data.

This is a deployment practice rather than a design practice. If your technology becomes wildly successful, it will be used to collect lots and lots of consumer data. With the ubiquity of sensors and monitoring software, data is being collected about us at ever-finer granularity. We are talking about large volumes of data, high rates of data, and data of widely varying formats. Still, go ahead and embrace the tedium of high-strength encryption and decryption.
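To make the advice concrete, here is a minimal sketch of encrypting collected data at rest using the Fernet recipe from the widely used Python cryptography package. The telemetry record and file name are illustrative, and the hard part of a real deployment, key management, is deliberately out of scope.

```python
# A minimal sketch of encrypting collected data at rest, using the Fernet
# recipe (AES in CBC mode plus an HMAC integrity check) from the widely
# used "cryptography" package. The telemetry record is illustrative; key
# management (HSM, cloud KMS, rotation) is deliberately out of scope here.
import json
from cryptography.fernet import Fernet

# In production the key comes from a secrets manager, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"vehicle_id": "drone-042", "lat": 40.4237, "lon": -86.9212}
ciphertext = fernet.encrypt(json.dumps(record).encode("utf-8"))

with open("telemetry.enc", "wb") as f:
    f.write(ciphertext)

# Decryption fails loudly (InvalidToken) if the stored data was tampered with.
with open("telemetry.enc", "rb") as f:
    restored = json.loads(fernet.decrypt(f.read()).decode("utf-8"))
assert restored == record
```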

Have manual overrides for your autonomy algorithms.

These may be the lifesaver when the algorithm runs into choppy waters. This also entails providing some interpretability for your algorithms, so the dark curtain can be peeled open if an end user wants to know why the algorithm told them something. This is a subject of intense research activity today. We as systems researchers should push the adoption of the best of it into our systems plumbing, and we as autonomous system developers should look to use such plumbing. Sure, this is not easy, because it does not add a flashy bell or whistle. But neither is it easy to clean up after a failure, legally or for your conscience. Just ask Boeing.
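What might such an override look like in the systems plumbing? Here is a minimal, hypothetical sketch of a control loop in which a human command always preempts the autonomy algorithm, and every decision is logged with the inputs that produced it, a small first step toward explainability. None of the names refer to any real product's API.

```python
# A minimal sketch of a manual-override pattern for an autonomy loop.
# The key properties: (1) a human command always preempts the autonomous
# planner, and (2) every decision is logged along with its inputs, a small
# first step toward explaining why the system did what it did.
from dataclasses import dataclass
from typing import Optional
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("autonomy")

@dataclass
class Command:
    throttle: float
    source: str  # "human" or "planner"

def plan(speed: float, target: float) -> Command:
    """Toy planner: simple proportional control toward the target speed."""
    return Command(throttle=0.1 * (target - speed), source="planner")

def control_step(speed: float, target: float,
                 override: Optional[Command]) -> Command:
    # Property 1: the human override, when present, always wins.
    cmd = override if override is not None else plan(speed, target)
    # Property 2: record the decision and the inputs that produced it.
    log.info("throttle=%.3f source=%s speed=%.1f target=%.1f",
             cmd.throttle, cmd.source, speed, target)
    return cmd

control_step(12.0, 15.0, override=None)                           # autonomous
control_step(12.0, 15.0, override=Command(-1.0, source="human"))  # human brakes
```

The point of the pattern is not the toy controller but the ordering: the override check sits before, not after, the planner's output is applied.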

Minimize the chances of wrong use.

Check that the implementation of your algorithm is not easy to use in scenarios you do not want it used in. This is, of course, tricky, because much of the useful technology we create has dual uses, for good and for bad: for recognizing the faces of intruders and for tracking citizens protesting for their rights. But we can, and should, build into our autonomous systems safeguards that make them harder to abuse, whether by lone miscreants or by state actors. Guarding against the latter is beyond the capability of any single developer or even a small team, but it could be pursued as an organizational goal.
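One hedged sketch of such a safeguard: gate a sensitive capability behind an explicit, auditable allowlist of purposes, so that invocations for disallowed purposes fail closed and leave a trail. The purposes and function names below are invented for illustration.

```python
# A minimal sketch of a misuse safeguard: a sensitive capability is gated
# behind an allowlist of declared purposes, fails closed on anything else,
# and leaves an audit trail. The purposes and names are invented here.
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

ALLOWED_PURPOSES = {"intruder_detection", "search_and_rescue"}

def guarded(func):
    @functools.wraps(func)
    def wrapper(*args, purpose: str, **kwargs):
        if purpose not in ALLOWED_PURPOSES:
            audit.warning("DENIED %s purpose=%r", func.__name__, purpose)
            raise PermissionError(f"{func.__name__} not permitted for {purpose!r}")
        audit.info("ALLOWED %s purpose=%r", func.__name__, purpose)
        return func(*args, **kwargs)
    return wrapper

@guarded
def match_faces(image_batch):
    """Placeholder for the actual recognition capability."""
    return []

match_faces([], purpose="intruder_detection")     # permitted and audited
try:
    match_faces([], purpose="mass_surveillance")  # denied and audited
except PermissionError as err:
    print(err)
```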

So, to sum up, many of us are researching and developing autonomous systems in their very many shapes and forms. As technologists, it behooves us to pay attention to some design and development practices to increase the chances that our creations will be used for good. Most of these practices are known to us from traditional software design and development, but in many cases the ill effects of not following them are magnified by the increasing autonomy of software.

3) Policy Actions to Beat the Autocracy of Autonomous Systems

Following are some policy actions we can take to beat the autocracy of autonomous systems. This is offered knowing full well that wading into policy issues is anathema to most of us technologists, but there are worse things we could do with our time and talents.

Some Policy Guiding Principles

As technologists, we are averse to participating in policy discussions, even when we believe strongly that policy should be informed by good science and engineering, and even when we believe that we do that good science and engineering. The reasons are fairly self-evident: our reward system in academia does not factor in policy victories, and in industrial settings there is usually a stern line of separation between those who build technology and those who argue for its uses. However, I argue that, especially in the area of autonomous systems, it makes urgent sense for us to inform policymakers by engaging in policy debates. Sure, this will involve stepping outside the comfortable, hygienic space of cold reason and unarguable formulae, but without it, as autonomous systems are adopted at breakneck speed, the spectre of their misuse looms ever larger. So here are three things I believe we can do without being tagged with "thou dost protest too much."

Engage in public forums to inform others, most importantly policymakers, of uses and misuses of your technology.

When we try to engage with policymakers, we are often surprised that arguments do not win the day on pure scientific or engineering merit. There are broader contexts at play, of which we are often unaware because they are not part of our day jobs. The discussions often reprise the saying attributed to Mark Twain:

"What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so."

The policymakers and their retinues, such as congressional staff members, are trying to drink from a firehose of information. It is wise for us not to add to that firehose, but rather to help them glean the key points, the actionable insights, and the unbiased conclusions. As academics, we can speak our minds free from commercial pressures, and so our unbiased recommendations, delivered in the right dosage, carry a lot of weight.

It also amplifies the effect of our recommendations if we tie them to issues that resonate with the policymakers, e.g., appeals to the constituents of an elected official. Does the electorate care about the security of autonomous systems? If there has recently been a privacy breach of a database collected by an autonomous system, it surely does.

Applaud victories of your technology, and flag misuses.

When technology helps in achieving a newsworthy goal, the mass media may pay little attention to the technology behind it. But we can highlight these victories, puny or momentous. For example, when Project Loon provided Internet connectivity to people in remote parts of Puerto Rico after a hurricane, we could use the opportunity to highlight the research in machine learning, wireless communication, and balloon navigation that went into it. Such victories can be highlighted through blog posts on forums read by people outside our technology circles (such as Medium and The New York Times blogs), or through university or technical-community news releases. The other side of the coin: when we see a misuse of a technology that we understand deeply, it makes sense to stand up and explain the fundamental reason behind the misuse and how it can be mitigated. For example, as society becomes inured to news of data breaches in which personal information is released, security researchers can use the opportunity to talk about the uses of two-factor authentication. The cost of deploying it is so much smaller than the psychological cost and, increasingly, the dollar cost of a large-scale data breach.

Create a closed feedback loop between real-world use and the technology.

Technology at the leading edge is often a work in progress. Once we put out our open source software and it becomes wildly popular because it fills a need for a large enough number of people in some part of the world, our work is not done. As we see uses and misuses of the technology, we should refine and revise it, such as through software releases. This is, of course, not as exciting as the thrill of putting out the novel software package in the first place, but it is the right thing to do, and also necessary if we want our technology to continue to be embraced.

A good case in point is the Tor network, the software package enabling anonymous communication on the Internet. From its inception in the onion-routing research of the 1990s, it has been at the forefront of battles to increase Internet freedom, freeing people from censorship, tracking, and surveillance. Started with funding from the U.S. Navy and developed by its researchers, it has gone through 19 releases in the last 8 years, and every year the top security conferences publish a handful of papers on attacks on Tor and the resultant improvements.

The Wrap

So, to sum up: as technology developers, if we nurse the ideal of our technology changing lives, we need to engage in shaping policy. We do not have to go the full distance on this, and so we can stick to our day jobs of doing good science and engineering; here, even half-measures will be much better than the status quo.

First, we can inform and educate our policymakers through articles and civil discussions in which we lay out the powers and the limits of the technology, using our unbiasedness as the rare talisman.

Second, we can publicly and loudly applaud when some technology that we care about achieves a noteworthy victory in the public sphere. Likewise, we can publicly and loudly decry a misuse of our technology.

Finally, we can adopt the philosophy that our labor-of-love technology is not a finished story with its first release; rather, it needs to be refined with inputs from its real-world deployment.

With these ingredients in place, we will have taken steps toward technology for the greater good, and away from the hand-wringing we typically do when we see technology faltering in its transition outside of academe.

Saurabh Bagchi is a professor of Electrical and Computer Engineering and Computer Science at Purdue University, where he leads a university-wide resilience center called CRISP. His research interests are in distributed systems and dependable computing; he and his group have the most fun making and breaking large-scale usable software systems for the greater good.


 
