
Communications of the ACM

ACM News

The Fight to Define When AI Is 'High Risk'




Credit: Elena Lacey/Getty Images

"People should not be slaves to machines," a coalition of evangelical church congregations from more than 30 countries preached to leaders of the European Union earlier this summer.

The European Evangelical Alliance believes all forms of AI with the potential to harm people should be evaluated, and that AI with the power to harm the environment should be labeled high risk, as should AI for transhumanism, the alteration of people with technology such as computers or machinery. It urged members of the European Commission to further discuss what is "considered safe and morally acceptable" when it comes to augmented humans and computer-brain interfaces.

The evangelical group is one of more than 300 organizations to weigh in on the EU's Artificial Intelligence Act, which lawmakers and regulators introduced in April. The comment period on the proposal ended August 8, and it will now be considered by the European Parliament and European Council, made up of heads of state from EU member nations. The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that's deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.

From Wired

