Researchers at the University of Bergen in Norway propose that, because ethical behavior varies across societies and individuals, artificial intelligence (AI) systems should be flexible enough to be tuned to local law and the preferences of their owners.
The researchers presented the idea at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2019) in Hawaii last month.
The team proposes that several AI agents debate the possible outcomes of an ethical dilemma before a decision is made.
The moral AIs each represent one of the stakeholders, and they have individual priorities according to who they represent: to be lawful, to operate safely, or to prioritize individual autonomy.
The system maps out the various arguments from each stakeholder, noting which ones conflict with each other.
The conflicting demands are removed and the system decides on a course of action based on the remaining instructions.
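The debate-and-prune procedure described above can be sketched in a few lines. Everything here is an illustrative assumption: the stakeholder names, the priority-based tie-break, and the rule of dropping every argument involved in a conflict are not from the article, which does not describe the researchers' actual implementation.

```python
# Minimal sketch of a multi-stakeholder ethical debate (illustrative only;
# the stakeholder roles, priority scheme, and conflict rule are assumptions).
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    stakeholder: str   # e.g. "law", "safety", "autonomy"
    action: str        # the course of action this argument supports
    priority: int      # lower number = higher priority for tie-breaking

def decide(arguments, conflicts):
    """Drop every argument involved in a conflict, then pick the
    highest-priority action among the arguments that remain."""
    conflicting = {a for pair in conflicts for a in pair}
    remaining = [a for a in arguments if a not in conflicting]
    if not remaining:
        return None  # no uncontested argument survives
    return min(remaining, key=lambda a: a.priority).action

# Example: three stakeholder agents each put forward one argument.
args = [
    Argument("law", "stop_at_signal", 1),
    Argument("safety", "swerve_to_shoulder", 2),
    Argument("autonomy", "keep_driving", 3),
]
# Suppose the mapping step found that the autonomy argument conflicts
# with the lawfulness argument; both are removed, and the safety
# stakeholder's uncontested argument decides the outcome.
conflicts = [(args[0], args[2])]

print(decide(args, conflicts))  # -> swerve_to_shoulder
```

Under this toy rule, any pair of clashing demands eliminates both, so the system acts only on arguments no stakeholder contested, which matches the procedure the abstract describes.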
From New Scientist
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA