

DEFCON to Set Thousands of Hackers Loose on LLMs



Red teams will assess models from Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability.

Credit: PNGkit

This year's DEFCON AI Village has invited hackers to show up, dive in, and find bugs and biases in large language models (LLMs) built by OpenAI, Google, Anthropic, and others.

The collaborative event, which AI Village organizers describe as "the largest red teaming exercise ever for any group of AI models," will host "thousands" of people, including "hundreds of students from overlooked institutions and communities," all of whom will be tasked with finding flaws in LLMs that power today's chatbots and generative AI. 

The focus will be on problems specific to machine learning, such as bias, hallucinations, and jailbreaks, all of which ethics and security professionals are grappling with as these technologies scale.

DEFCON, an annual hacker conference, will be held from August 10 to 13 this year in Las Vegas.

From The Register
View Full Article

