Researchers from New York University found that nearly 40% of the code suggestions produced by GitHub's Copilot code-generation tool contain security flaws.
Developed by GitHub in collaboration with OpenAI, and currently in private beta testing, Copilot leverages artificial intelligence to make relevant coding suggestions to programmers as they write code.
In their analysis, the researchers prompted Copilot to generate code in scenarios relevant to common software security weaknesses. Reviewing the results, they found that almost 40% of the generated programs were vulnerable in one way or another. The researchers theorize that the vulnerable output could be the result of buggy code in the public GitHub repositories used as training data.
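The weaknesses tested are of the sort catalogued in MITRE's Common Weakness Enumeration (CWE). As a hypothetical illustration only, not output from Copilot or code from the paper, the following Python sketch shows one such flaw, CWE-89 (SQL injection), alongside the safer parameterized form:

    import sqlite3

    def get_user_insecure(conn: sqlite3.Connection, name: str):
        # Vulnerable pattern (CWE-89): user input is concatenated
        # directly into the SQL string, so input such as
        # "x' OR '1'='1" changes the query's logic.
        query = "SELECT * FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def get_user_secure(conn: sqlite3.Connection, name: str):
        # Safer pattern: a parameterized query lets the database
        # driver handle escaping, neutralizing the injection.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

A code assistant trained on repositories containing the first pattern can plausibly reproduce it when prompted with similar surrounding code.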
They describe their work in "An Empirical Cybersecurity Evaluation of GitHub Copilot's Code Contributions."
From TechRadar