
Communications of the ACM

ACM News

Taking the Reins of AI


The potential for an artificial intelligence system to intentionally or unintentionally cause great harm requires people to prepare for and prevent such potentially negative consequences.


Credit: Future of Life Institute

The U.S. House of Representatives is aiming to take a leading role in ensuring artificial intelligence (AI) in the U.S. is safe and secure, calling for more funding to support tighter controls, regulation, and education, according to the first comprehensive AI report presented to Congress, "Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy."

Illinois Congresswoman Robin Kelly, ranking member of the Subcommittee on Information Technology of the House Committee on Oversight and Government Reform, which produced the report, said, "AI presents many opportunities and many challenges. Regardless of the risks and benefits, AI is coming and it will fundamentally change our economy, the management of data and the delivery of government services. We need to be prepared for this forthcoming reality, and we are not. That's why our IT subcommittee wrote Congress' first report on AI to start the conversation and raise awareness about our lack of readiness."

After assessing the state of the art in AI, the report concludes that the U.S. "cannot maintain its global leadership in artificial intelligence absent…increased engagement with AI by Congress and the Administration." In more detail, the report says AI "has the potential to disrupt every sector of society in both anticipated and unanticipated ways. In light of that potential for disruption, it's critical that the federal government address the different challenges posed by AI, including its current and future applications."

Congresswoman Kelly said that overall, the report advocates implementing a single, government-wide set of regulations, applied by every existing regulatory agency. "AI will fundamentally alter the way that all government agencies work and deliver services. Rules governing AI will need to be rooted in a whole-of-government approach because AI is changing the way that all agencies and levels of government operate."

The report recommends "increased federal spending on research and development [and engagement] with stakeholders on the development of effective strategies for improving the education, training, and re-skilling of American workers to be more competitive in an AI-driven economy." It also says the U.S. government should immediately begin to assess how the risks to public safety are already being addressed by existing regulatory bodies, to determine whether those agencies are adequately dealing with the urgency of those risks. Finally, it advises the implementation of additional measures across all aspects of government to ensure the adequate regulation of aspects of AI not being addressed today.

Said James Hendler, a professor and director of data exploration and applications at Rensselaer Polytechnic Institute, "The report is pretty good. AI is a new technology and we must invest in it, but we must also keep our eyes out for potential safety, bias and ethical issues."

Hendler said AI "already falls under the jurisdiction of many existing regulatory agencies—Department of Transportation for autonomous vehicles, Department of Labor for the workforce, Department of Health for medical uses, but there is also talk about restricting certain extraordinary uses of AI, as is done for stem cell research. Some of these usages include lethal autonomous weapon systems, which the U.N. is moving to restrict."

Regarding the military use of AI, the U.S. Defense Department currently requires a "human in the loop" for all applications of AI with the potential to take lives, according to Defense Advanced Research Projects Agency (DARPA) spokesman Jared Adams. "The Department of Defense issued directive 3000.09 in 2012, which was re-certified in 2016, and it notes that for every system capable of carrying out or assisting the use of lethal force, a human must be involved in the decision. Accordingly, DARPA's autonomous research portfolio is purely defensive in nature, looking at ways to protect soldiers from adversarial unmanned systems, operating at machine speed, and/or limiting exposure of our servicemen and women from potential harm," said Adams.

"As we work to prevent the inclusion of bias and remove existing biases in AI, we have to understand the data inputs and mechanisms used by AI. It's a case of garbage in, garbage out that we can prevent by demanding and ensuring transparency in computing, data and development," said Kelly.

The Rise of the Machines report also addresses malicious uses of AI, such as cyberattacks on critical infrastructure like the power grid. At the IT Subcommittee hearings that were the basis for the study, OpenAI, a non-profit AI research company, testified about the conclusion of an Electronic Frontier Foundation report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," which said that unless adequate defenses are developed, AI progress will result in cyberattacks that are "more effective, more finely targeted, more difficult to attribute, and more likely to exploit vulnerabilities."

Supporting that notion were the results of a survey by software security firm Cylance presented at the IT Subcommittee hearings, which found that "62% of [information security] experts believe artificial intelligence will be used for cyberattacks in the coming year."

The Rise of the Machines report says the National Institute of Standards and Technology (NIST) "is situated to be a key player in developing standards" for AI. It also mentions the IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems as a private-sector effort that could produce AI standards, and mentions the AI Index, part of Stanford University's "One Hundred Year Study on AI," which collects data on AI to track its progress, as "critical in the standards development process to provide historical context."

In sum, the report says, "The federal government should look to support public, academic, and private sector efforts in the development of standards for measuring the safety and security of AI products and applications."

The report cites Ben Buchanan, an assistant teaching professor at Georgetown University, where he conducts research on the intersection of cybersecurity and statecraft, about the privacy risks consumers face when their personal data is used in AI systems. According to Buchanan, "There is the risk of breaches by hackers, of misuse by those who collect it or access data, and of secondary use—in which data collected for one purpose is later re-appropriated for another."

"There is so much attention to AI today because the knee of the exponential growth curve is being approached for many domain specific uses of AI," said Hendler. "But the main point is that AI will impact policy in many, many ways, so we need mechanisms by which legislators and judges can evaluate the public's needs, which the ACM already helps to provide."

For instance, ACM is a member of the Partnership on AI, an organization created "to conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning." As a member of that organization, ACM, along with Amazon, Apple, Facebook, Google, IBM, Microsoft, and other AI giants, will work with the Partnership on AI "to educate the public and ensure that these technologies serve humanity in beneficial and responsible ways," said ACM president Vicki Hanson.

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.


For more detail on the pros and cons of regulating AI, watch the exclusive December 2018 ACM video:

Point-Counterpoint on AI Regulation
