
Communications of the ACM

ACM News

In A.I. Race, Microsoft and Google Choose Speed Over Caution



The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines, according to 15 current and former employees and internal documents from the companies.

Credit: Jackie Carlise

In March, two Google employees, whose job is to review the company's artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, ethicists and other employees at Microsoft had raised similar concerns. In several internal documents, they warned that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking, and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry's next big thing — generative A.I., the powerful new technology that fuels those chatbots.

From The New York Times
View Full Article

