
Communications of the ACM

ACM Opinion

What Does It Mean to Align AI with Human Values?


[Image: Abstract illustration of a face in a primitive art style. Credit: James O'Brien/Quanta Magazine]

We humans are prone to giving machines ambiguous or mistaken instructions, and we want them to do what we mean, not necessarily what we say.

It is a familiar trope in science fiction—humanity being threatened by out-of-control machines that have misinterpreted human desires. A not-insubstantial segment of the artificial intelligence (AI) research community is deeply concerned about this kind of scenario playing out in real life.

But what about the more immediate risks posed by non-superintelligent AI, such as job loss, bias, privacy violations, and misinformation spread? It turns out that there is little overlap between the communities concerned primarily with such short-term risks and those who worry more about longer-term alignment risks.

From Quanta Magazine

