Responsible Intelligent Systems

What are the origins of bias in AI systems?

In this video on the ethics of AI, Professor Shannon Vallor says: “Machines are not racist, machines are not sexist, machines are not ableist or classist. We are.” I have read similar opinions elsewhere several times. The reason people claim this is that we are the ones training and deploying AIs by feeding them with data, and that this data reflects our biases. The AI then copies, or even magnifies and reinforces, these human biases through the data that we feed it.

I agree that this data-related phenomenon exists. However, what I do not agree with is that machines ‘by themselves’ have no bias. I think machines can easily be racist, sexist, ableist and classist, even if they are fed with unbiased data. I think bias is often the result of bad reasoning of the kind that jumps to conclusions way too quickly and that sees relations that are not there. For instance, a learning AI that is fed with data about academics can easily come to the conclusion that women are not good logicians. There is no bias in the data: the data just record the facts. But the conclusion is wrong and biased. Clearly, there is some common cause at work here; a common cause that explains why there are not that many female logicians (side note: the best logician in our department is a woman). But our current learning AIs are not capable of distinguishing between causal relations and merely statistical ones, and can therefore come to wrong and biased conclusions. So yes, AIs, especially those of the sub-symbolic type, actually can be rather biased by themselves. The reason is that they are bad at (causal, non-monotonic, social, moral, common sense, etc.) reasoning. They would need explicit symbolic knowledge representations to help them out with this. How that could be made to work exactly, we do not yet know.
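
To make the point concrete, here is a minimal, purely illustrative sketch (not taken from the post) of how a correlational learner jumps to a biased conclusion from data that merely record the facts. The variables, such as the hidden “encouragement” common cause and the 0.2/0.6/0.7 probabilities, are hypothetical toy assumptions: gender has no causal effect on the outcome at all, yet the naive conditional-frequency estimate still finds a large gap, which disappears once the common cause is conditioned on.

```python
import numpy as np

# Toy simulation: a latent common cause ("historical encouragement to
# pursue logic") is unevenly distributed across genders and is the only
# thing that determines who ends up a strong logician in the data.
# Gender itself has no causal effect on the outcome.
rng = np.random.default_rng(0)
n = 100_000

female = rng.random(n) < 0.5
# Confounder: encouragement depends on gender for historical reasons.
encouraged = np.where(female, rng.random(n) < 0.2, rng.random(n) < 0.6)
# Outcome depends only on the confounder, never on gender directly.
strong_logician = encouraged & (rng.random(n) < 0.7)

# A purely correlational learner estimates P(strong logician | gender)...
print(f"P(strong | female) = {strong_logician[female].mean():.2f}")
print(f"P(strong | male)   = {strong_logician[~female].mean():.2f}")

# ...and "concludes" that gender predicts logical ability. Conditioning
# on the common cause shows the gap is spurious: within each
# encouragement group the difference is (approximately) zero.
for enc in (True, False):
    mask = encouraged == enc
    gap = (strong_logician[mask & female].mean()
           - strong_logician[mask & ~female].mean())
    print(f"gap within encouragement={enc}: {gap:+.3f}")
```

Running this prints a sizeable gap between the two unconditional frequencies and a near-zero gap within each encouragement group, which is exactly the kind of causal/statistical distinction a sub-symbolic learner does not make on its own.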
