
Communications of the ACM

ACM News

Computing and Social Responsibility



In March, Microsoft launched its artificial intelligence (AI)-powered tool Copilot across its Microsoft 365 app platform. The company's chairman and CEO, Satya Nadella, touted the release as a "major step in the evolution of how we interact with computing."

The new functionality comes on the heels of Microsoft's partnership with OpenAI, the company behind the controversial and much-hyped ChatGPT and DALL-E 2. Something Microsoft didn't announce but was widely reported: its entire AI ethics and society team was laid off around the same time during a round of company cuts.

In a society obsessed with AI, these moves are no surprise. Yet according to the 2022 report Fostering Responsible Computing Research: Foundations and Practices from the Computer Science and Telecommunications Board of the U.S. National Academies, Microsoft may want to rethink both.

The paper examines the ethical and societal concerns that can accompany new technology, such as "erosion of personal privacy, the spread of false information and propaganda, biased or unfair decision-making, disparate socioeconomic impacts, or diminished human agency," and provides specific guidelines for companies building new products. These include expanding computing research to consider potential product and use concerns and to find effective ways to address them, and employing rigorous methodologies and frameworks for those processes. The organization also recommends more government policies and regulations to ensure the adverse impacts of new technologies are minimized.

However, given the continuing competition to bring more features and functionalities to market first, companies may not be putting enough thought into any of these issues, according to experts.

More Features, More Function

Companies don't set out to create the technological equivalent of Frankenstein's monster, of course, but without careful planning and actionable processes in place, that is exactly what can happen. Google is a good example of this, says Catherine Flick, an associate professor of Computing and Social Responsibility at the U.K.'s De Montfort University. Early on, the company's motto was 'Don't Be Evil', but that didn't work out as planned, says Flick, who is also the vice chair of the ACM Committee on Professional Ethics. "Google found it couldn't really say 'Don't Be Evil' because they could not really back that up legally if someone were to challenge it. And also, it started to become a bit of a definition game; how do you actually put that into practice?" Broad responsible-innovation statements are hard to put into practice, she says. They can also work against profitability, since ethics teams may ask companies to hold off on, or slow down, a project that is potentially valuable but ethically untenable.

Some companies are being proactive when it comes to technology that has potentially negative ethical or societal impacts. For instance, SAP was one of the first to release guidelines for the ethical use of AI, says Feiyu Xu, senior vice president and global head of Artificial Intelligence at SAP. "The use of AI at SAP is governed by clearly defined rules of ethics for employees on how SAP's ethical guiding principles relate to and should be applied in their work," she explains. SAP doubled down on that work in January 2022 when it released its SAP Global Artificial Intelligence (AI) Ethics Policy, which Xu says ensures that SAP's AI systems are "developed, deployed, used, and sold in line with the ethical and trustworthiness standards laid out in these guidelines. The policy defines the rules, expectations, and direction for the lifecycle of development, deployment, use, and sale of our AI systems."

This kind of work is important since, for most companies, values and mission statements are only as good as the way they're operationalized, says Kirsten Martin, a professor of tech ethics at the University of Notre Dame. "They really are only as good as the way they're melded into the product development and testing and evaluation processes. Is there a checklist of whether or not a design is ready to go into development? Do those principles and values and mission statement show up there [in product development] throughout the process?" she says. "If I were a VP of product management, I would want to know what types of value-laden decisions my people are making, so that they're not making the wrong ones."

It Takes Tech to Know Tech

Intel Fellow Lama Nachman, director of the Intelligent Systems Research Lab at Intel Labs, says her company is doing just that by not only putting multidisciplinary review processes in place (internal advisory councils review development activities through six lenses: human rights; human oversight; explainable use of AI; security, safety, and reliability; personal privacy; and equity and inclusion), but also collaborating with academia and industry partners to mitigate risk.

"We also have created tools, training materials, and documentation to help improve the competency across the company in assessing these risks and run workshops to help developers and program managers imagine different problems that can arise for various types of products," Nachman says. Intel requires its partners to follow the same ethical standards its own employees must follow and will restrict or cease business with partners who are using its technology to violate human rights, she adds.   

At VMware, the engineering organization is focused on similar concerns, says Kit Colbert, the company's CTO. He says that in his experience as an engineer, he has seen how the thinking around development often focuses on how to implement a technology, but there is often not as much thought given to whether a company should build it at all. "In my mind, we [as an industry] need to be opening the aperture a bit, widening the lens to take into account some of these issues more holistically than people have done in the past," Colbert says.

VMware builds this accountability into its business model and rewards employees when they align with its mission, Colbert says, adding that the company also prioritizes bringing disparate teams together to make sure they can share information. "I think for our organization and culture, it made a lot of sense for it to be rooted in R&D. I find when you do it outside R&D, and then try to place those sorts of policies onto R&D, it doesn't always work that well. Oftentimes you need the technical folks' buy-in on it to help influence other technical folks."

These efforts are strong strategies, and the kind of work every technology company will need to grapple with if it wants to do the right thing going forward, Flick says. "If you're a CEO or a CTO and you care about ethics, and you care about more than just the bottom line (it really comes down to that), there needs to be an understanding from the very top of the organization that one of your company's values is making space for ethics. Because if you're just focused on the bottom line, there's no space there for ethics in a way that is meaningful."

 

K.J. Bannan is a writer and editor based in Massapequa, NY. She began her career on the PC Magazine First Looks team reviewing all the latest and greatest technologies. Today, she is a freelancer who covers business, technology, health, personal finance, and lifestyle topics.


 
