Responsible Intelligent Systems

Can we build artificial moral agents using machine learning?

Several people have suggested that we can use machine learning to build artificial moral agents. In my opinion that is a mistake. This article even goes as far as to suggest that "we can train AI to identify good and evil, and then use it to teach us morality". One example it gives is that people often have biases, whereas well-trained machine learning systems supposedly do not; so we should consult the machine to form an unbiased opinion. The problem is that this goes against all our experience with such systems. One observation is that machine learning systems copy, and likely magnify, the biases of those who provide them with a training set. Another is that these systems are good at finding statistical relations but bad at finding causal relations, and mistaking the first for the second is a major source of bias. But probably the biggest mistake in the article is the idea that morality is simply whatever all the members of a huge set of moral examples, fed to a machine learning program, have in common.
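To make the first observation concrete, here is a minimal sketch (not from the original post) of how biased labels propagate into a trained model. All feature names, coefficients, and numbers below are illustrative assumptions: annotators label outcomes partly by a relevant "merit" feature and partly by group membership, and a standard classifier trained on those labels reproduces the gap between groups.

```python
# Illustrative sketch: a model trained on biased labels reproduces that bias.
# All names and numbers here are assumptions made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A group attribute and a genuinely relevant feature.
group = rng.integers(0, 2, size=n)   # group 0 or group 1
merit = rng.normal(size=n)           # the feature that should decide the outcome

# Annotators label "positive" mostly by merit, but with a bias favouring group 0.
label = (merit + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The classifier sees both features; since merit alone cannot explain the labels,
# it absorbs the annotators' group bias into its predictions.
X = np.column_stack([merit, group])
clf = LogisticRegression().fit(X, label)

# Positive-prediction rate per group: the gap mirrors the bias in the labels.
pred = clf.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted-positive rate = {pred[group == g].mean():.2f}")
```

Nothing in the training procedure corrects for the annotators' bias; the model simply fits the labels it is given, which is exactly why consulting it does not yield an opinion "without bias".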
