Responsible Intelligent Systems


How possible is it technically that science can create a machine that can understand and “produce” emotions?

Guest blog post by dr. ir. Jan Broersen, associate professor at the Department of Information and Computing Science at Utrecht University.

As a member of the scientific community working on the challenges of Artificial Intelligence, I was asked by the editor of this blog the question that forms the title of this contribution. I will approach the question the way an analytical philosopher would: by breaking it up into parts and analysing the concepts involved.

“How possible is it technically that science can create a machine that…”
Science advances continuously, and new machines appear on the stage all the time. But can we say anything with certainty about how far technical developments will go? Can we say anything about the possibility of future machines of a certain kind? To answer questions like these thoroughly, we first need to define what a machine is, and second we have to find evidence or, even better, proof of what machines of the kind defined can or cannot do.

In 1936 Alan Turing gave the definition of the machines that form the blueprint of our modern-day computers: the Turing machines. He also put forward the thesis (now commonly known as the Church-Turing thesis) that anything that can be computed at all can be computed by a Turing machine. But his most important result was that there are classes of problems (e.g., the halting problem) whose answers cannot be calculated by a Turing machine. So indeed, there are limits to what Turing machines, and thus our modern computers, can do. And if we combine this with the Church-Turing thesis, there are limits to what can be achieved by computation as such.
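To make the flavour of Turing's argument concrete, here is a minimal sketch in Python of the classic diagonal construction; the names (halts, diagonal) are purely illustrative, and the point is precisely that no correct, always-terminating implementation of halts can exist.

def halts(program, argument):
    # Hypothetical oracle: would return True exactly when program(argument)
    # halts. Turing's result is that no such total, correct procedure exists,
    # so it can only be left unimplemented here.
    raise NotImplementedError("no general halting decider exists")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source: loop forever if it is predicted to halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether diagonal(diagonal) halts yields a contradiction either way:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Hence halts cannot exist.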

If we go back just a little further in time (1931) we come to a second impossibility theorem, one that sheds some light on the kind of things that could never be computed. I am referring here to Gödel’s incompleteness theorem, which roughly says that for sufficiently strong symbolic systems (in particular, for arithmetic) we cannot find sound and complete formal derivation systems that capture all true statements of the system. Gödel himself, but more prominently J.R. Lucas in 1963, tried to turn this result against the idea that the human mind is computable by a finite machine. Very roughly, the argument is that for humans there seem to be no limits to the capacity to discover and assess truths of mathematics, while for Turing machines this capacity is necessarily bounded by Gödel’s incompleteness result. But all this is highly controversial, and the debates on the demarcating implications of Turing’s and Gödel’s impossibility theorems, and variants thereof, continue to this day.

Interestingly, the impossibility theorems can be contrasted with some more recent possibility theorems. In 1989 Cybenko proved the first version of the universal approximation theorem, saying that a multilayer feedforward artificial neural network can approximate any continuous function to any degree of accuracy. Now, if intelligence, or emotion for that matter, is characterised by some behavioural function, this function could be approximated by training a suitable neural network. The problem with this kind of result is that the calculation of an approximation might never converge to the real thing, and that the thing we aim to approximate might behave as a moving target (which is akin to the well-known problem of overfitting to training sets). Practical experience with neural networks is not infrequently disappointing.
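As a small, hedged illustration of what the universal approximation theorem licenses in practice, the following sketch fits a single-hidden-layer sigmoid network to one particular continuous function (the sine) with plain gradient descent; the target function, the layer width and the learning rate are arbitrary choices made for the example, not part of Cybenko's theorem.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                        # the continuous function to approximate

hidden = 30                          # one hidden layer of sigmoid units
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(20000):
    h = sigmoid(x @ W1 + b1)         # hidden activations
    pred = h @ W2 + b2               # network output
    err = pred - y
    # gradient of the mean squared error, backpropagated by hand
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("maximum absolute error:", float(np.abs(pred - y).max()))

Whether the error actually becomes small depends on the width, the optimiser and the data; the theorem guarantees that a good approximating network exists, not that this training procedure will find it, which is exactly the gap between the possibility result and the practical experience mentioned above.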

A second development is the discovery of possibilities to go beyond the powers of the traditional Turing machine. In 1994 Siegelmann and Sontag showed that sufficiently complex analog neural networks have a power that exceeds that of the traditional Turing machine. However, now the question is: are analog networks still machines? If some day in the future we design and ‘grow’ a biological system that counts as an analog neural network and can be trained to exhibit useful behaviours, can we then say we have built a ‘machine’? Going back to the title of this contribution, we would certainly have created something with possibly far-reaching powers, and we would have created it through scientific and technical means, but maybe we would not call it a ‘machine’.

Another way to go beyond the powers of Turing machines is to look at quantum mechanical phenomena. It is known that some of these phenomena are not Turing computable, which seems to open up the possibility to deploy them for the benefit of super-Turing computations and machines. However, this is still very much open to research. Many groups world-wide work on quantum computing nowadays, but the models of quantum computation they use are mostly still equivalent in power to Turing machines (offering only gains in efficiency on certain classes of problems).

It is not only some academics who believe that notions like intelligence, agency and free will are intimately connected to quantum mechanical phenomena: recently Google also joined this enterprise with its quantum artificial intelligence project. However, projects like the one by Google have not yet resulted in usable machines. So what projects like these bring to the fore quite clearly is that even if we could establish theoretically that the human mind can (only) be modelled as a super-Turing machine of a certain kind, it would not follow that we would ever be able to build such a device; something that is possible in theory might not be feasible in practice.

“… can understand emotions?”
The crucial word in this part of the question in the title is ‘understanding’. If understanding means ‘interpreting correctly’, ‘classifying correctly’ or ‘responding in an appropriate way’, then machines today could already count as being capable of understanding emotions. At airports, governments use video-based surveillance software that detects suspicious emotions in passengers in order to select them for further investigation. Computer games are being developed that make their interaction with the player dependent on the emotions the player radiates (either through his behaviour in the game, or through special sensors connected to the player’s body). And software is being developed that monitors the mental condition of astronauts who have to operate in solitude over extended periods of time, in order to have them steer clear of depression.
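In this behavioural sense, ‘understanding’ an emotion can be as mundane as classification. A deliberately naive sketch, with invented feature names and values: map a few sensor readings to the nearest emotion prototype.

import numpy as np

# Invented prototypes: (heart rate in bpm, speech rate in words/s, frown intensity 0..1)
prototypes = {
    "calm":    np.array([65.0, 2.0, 0.10]),
    "anxiety": np.array([90.0, 3.0, 0.40]),
    "anger":   np.array([96.0, 3.6, 0.85]),
}

def classify_emotion(features):
    # Return the label whose prototype is closest in Euclidean distance.
    return min(prototypes, key=lambda label: np.linalg.norm(features - prototypes[label]))

print(classify_emotion(np.array([94.0, 3.5, 0.80])))   # prints "anger"

Real systems use far richer features and learned models, but the logic is the same: a mapping from observations to labels, with no claim at all about what, if anything, the system experiences.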

But the word ‘understanding’ also has a deeper, more profound connotation. The concept of understanding is directly linked to the concepts of ‘meaning’ and ‘semantic content’. The philosopher John Searle is famous for arguing that symbol-manipulating devices such as Turing machines will never be able to display ‘true understanding’ because symbol manipulation, as he claims, is inherently insufficient for producing semantic content (his well-known and much-debated Chinese room argument). If we are asking about this, let us say, more phenomenological version of understanding, then things are far less clear. A position one could take is that a device could never understand an emotion if it could not ‘feel’ or ‘produce’ such an emotion itself. And this brings us to the last part of the question in the title.

“… can “produce” emotions?”
It is quite clear that we humans produce emotions, and for good reason. Psychologists have maintained that emotions can be seen as a kind of behavioural heuristics that provoke and influence behaviour in situations where other, more rationality-driven considerations are less accessible or available. Emotions looked at in this way, that is, emotions as heuristics for guiding behaviour in special situations, can easily be built into artificially intelligent devices. So, in this sense, we could say it is quite feasible to build machines that ‘produce’ emotions; if, for instance, at a certain point an ‘anger’ heuristic is called, letting the machine behave more aggressively, focus on only one particular issue, and neglect all other possibly relevant processes, we could say the machine is behaving emotionally. However, again there is the question of whether this is indeed the same thing as what we experience ourselves when we ‘produce’ an emotion. The phenomenology of experiencing emotions seems so far removed from what is described above that it is simply hard to believe we are talking about the same thing.
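For what such a heuristic might look like at the level of code, here is a toy sketch (all names and behaviours are invented for illustration): an agent that normally weighs several goals, but, once an ‘anger’ heuristic fires, narrows its attention to a single issue and acts more aggressively.

from dataclasses import dataclass, field

@dataclass
class Agent:
    emotion: str = "neutral"
    goals: list = field(default_factory=lambda: ["explore", "recharge", "report"])

    def trigger_anger(self, provoking_goal):
        # The heuristic: drop every other concern and fixate on one issue.
        self.emotion = "anger"
        self.goals = [provoking_goal]

    def act(self):
        if self.emotion == "anger":
            return "aggressively pursue: " + self.goals[0]
        return "deliberate calmly over: " + ", ".join(self.goals)

robot = Agent()
print(robot.act())                       # weighs all goals
robot.trigger_anger("remove obstacle")
print(robot.act())                       # single-minded, aggressive mode

Nothing in this sketch feels anything, of course, which is exactly the phenomenological gap described above.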

There is one important point I have not addressed yet. One could take the position (as very likely Alan Turing would have done, but not John Searle) that in the end what counts is whether or not we will attribute emotions to machines. And if this is indeed the central issue, then we could even say that that point is not very far away. Already in 1996 the Tamagotchi game came to the market. In this game, the player has to take care of an artificial pet in response to the emotional feedback it gives. The game exploits the fact that children easily attribute emotions to the artificial pet. But our children are not that different from ourselves. There is no reason to assume that it is impossible that in the future we will be emotionally attached to machines (a very recent movie based on this idea is “Her”). And if we reach that point, we will start to assign these machines a certain personhood and we will start to treat them as having certain rights (as some now advocate, likely for the same emotional reasons, we should do for animals). I think that the moment we start to attribute emotions to machines, either rightly or wrongly, and start to interact with them on an emotional level, we will have reached a turning point. We would not ask if the machine we feel emotionally attached to is a Turing machine or a super-Turing machine, we would not ask if it has biological components or is driven by quantum phenomena, we would not question its genuineness on the basis of its being a symbol-manipulating system; we would just see such a machine as a person. I do not know if that point will ever be reached, but it is certainly not excluded.

Dr. ir. Jan Broersen is associate professor at the Department of Information and Computing Science at Utrecht University. He works in the Intelligent Systems group, which studies intelligent systems in a fundamental as well as an application-oriented way, with a special focus on intelligent agents and multi-agent systems. He received an important grant from the European Research Council for his REINS research proposal (Responsible Intelligent Systems).


From: http://impakt-festival.tumblr.com/
