
Communications of the ACM

ACM Careers

AI Expert Affirms Musk's Claim: Robots Will Kill Jobs


New York Institute of Technology Professor Kevin LaGrandeur

"Machines have been a bigger job-killer to U.S. jobs than both immigrants and outsourcing," says NYIT Professor Kevin LaGrandeur.

Credit: YouTube

Entrepreneur Elon Musk recently told U.S. governors that automation poses a serious threat to American jobs, a claim affirmed by New York Institute of Technology Professor Kevin LaGrandeur, an expert in technology and culture.

In a USA Today op-ed and in Surviving the Machine Age, published by Springer, LaGrandeur argues that intelligent technology is displacing not only manual labor but also middle-class and higher-level jobs. Accountants, he notes, face a significant chance of being displaced by intelligent technology within the next ten years, as do other professionals such as journalists and technical writers.

"Technological unemployment is growing rampant in the United States, with intelligent machines displacing American workers every day," says LaGrandeur. "Eighty-eight percent of manufacturing job losses over the past few years are a result of decreased demand for human labor. Machines have been a bigger job-killer to U.S. jobs than both immigrants and outsourcing, and the problem is only growing worse."

LaGrandeur agrees with Musk's proposal for a universal basic income and stresses the need for other far-reaching economic policy reforms.

"Relieving the effects of technological unemployment will require fundamentally new approaches to economic policy," says LaGrandeur. "Potential reforms might include a universal basic income, as Musk has mentioned, or perhaps a shorter workweek and a mechanism for paying individuals when their personal data is used by technology firms to turn a profit."

LaGrandeur also offers a pragmatic perspective on regulating intelligent machines.

"Limits on the development of AI would likely result in malicious groups and outside nations finding more creative ways to violate regulations, but alternative forms of regulation may work," he says. "For instance, scientists and governing bodies could develop protocols to build and test AI, procedures for fail-safe controls built into AI, and methods to examine the reliability of these controls. Most importantly, governments could invest in funding to research non-military forms of AI, so that benevolent innovations in the technology could offset the more dangerous ones."

