Responsible Intelligent Systems


Self-driving car. Who is responsible (and to what extent) if something goes wrong?

As intelligent systems are integrated ever more deeply into our daily lives, the division, assignment, and checking of responsibilities between human and artificial agents become increasingly important. From robots in medicine, the military, and transportation (self-driving cars), to automated trading agents on financial markets (algorithmic trading), to automated monitoring, prediction, and protection systems (pacemakers, automated surveillance, early-warning systems), we delegate more and more responsibility to intelligent devices. In doing so, we risk losing track of our indirect legal and moral liabilities.

The REINS project aims to contribute to solutions for this problem in two ways. Firstly, it investigates how the model checking and theorem proving techniques traditionally used in system verification can be adapted to the problem of responsibility checking for intelligent systems. Secondly, it investigates the possibility of endowing artificially intelligent systems with (deontic) reasoning capacities that prepare them for operating in moral and legal normative environments, thereby giving them a sense of responsibility.
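To give a flavour of the second aim, the sketch below shows how a deontic notion such as obligation can be evaluated in a toy Kripke model: in Standard Deontic Logic, O(p) holds at a world exactly when p holds in every deontically "ideal" world accessible from it. This is a minimal illustration of deontic model checking in general, not the REINS toolchain; all world names and propositions are hypothetical.

```python
from typing import Dict, Set

# Deontic accessibility relation: from each world, the set of "ideal" worlds
# (the outcomes the norms deem acceptable). Names are purely illustrative.
accessible: Dict[str, Set[str]] = {
    "w0": {"w1", "w2"},   # from w0, the ideal outcomes are w1 and w2
    "w1": set(),
    "w2": set(),
}

# Labeling: the atomic propositions true at each world.
labeling: Dict[str, Set[str]] = {
    "w0": set(),
    "w1": {"brake"},      # in ideal world w1 the car brakes
    "w2": {"brake"},      # likewise in w2
}

def obligatory(world: str, prop: str) -> bool:
    """O(prop) at `world`: prop holds in every deontically ideal successor."""
    return all(prop in labeling[v] for v in accessible[world])

print(obligatory("w0", "brake"))   # braking is obligatory at w0
print(obligatory("w0", "swerve"))  # swerving is not
```

A model checker for responsibility would layer agency and time on top of such models (as in STIT or ATL-style logics), but the basic pattern of quantifying over accessible worlds stays the same.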


“REINS” is a research project funded by a Consolidator Grant of the European Research Council (ERC). The project is led by Jan Broersen and is hosted by the Department of Philosophy and Religious Studies of Utrecht University.