
Communications of the ACM

ACM Careers

Study Finds AI-Assisted Code Is More Likely to Be Buggy


The study looked at vulnerabilities in Python, JavaScript, and C.

Credit: Getty Images

Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo.

Worse still, AI assistance tends to give developers false confidence in the security of their output, the study says.

"We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection," the authors state. "Surprisingly, we also found that participants provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant."
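The article does not reproduce any participant code, but the SQL-injection class the authors mention follows a well-known pattern. As an illustration only (the table, function names, and payload below are hypothetical, not from the study), here is a minimal Python sketch contrasting the vulnerable string-interpolation style an assistant might suggest with the parameterized alternative:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input is interpolated directly into the
    # SQL text, so an input like "x' OR '1'='1" rewrites the query.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value strictly as data,
    # never as SQL, so the same payload matches nothing.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection succeeds: returns 2 rows
print(len(find_user_safe(conn, payload)))    # no match: returns 0 rows
```

Both functions look equally plausible at a glance, which is consistent with the study's finding that participants overestimated the security of assistant-suggested code.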

From The Register
