As intelligent systems are increasingly integrated into our daily lives, the division, assignment and checking of responsibilities between human and artificial agents become increasingly important. From robots in medicine, the military and transportation (self-driving cars), to automated trading agents on the financial markets (algorithmic trading), to automated monitoring, prediction and protection systems (pacemakers, automated surveillance, early-warning systems), we delegate more and more responsibility to intelligent devices. Yet ultimately only the human operators and designers can be responsible for the choices and actions of an autonomous computational system. By delegating responsibilities to intelligent devices, we therefore run the risk of losing track of our indirect legal and moral liabilities. This project’s aim is to keep the reins firmly in our hands.
The REINS project aims to develop a formal framework for automated responsibility, liability and risk checking in intelligent systems. The computational checking mechanisms take as inputs models of an intelligent system, an environment and a normative system (e.g., a system of law); the outputs are answers to decision problems concerning responsibilities, liabilities and risks. The goal is to answer three central questions, corresponding to the three sub-projects of the proposal: (1) What are suitable formal logical representation formalisms for knowledge of agentive responsibility in action, interaction and joint action? (2) How can we formally reason about the evaluation of grades of responsibility and risk relative to normative systems? (3) How can we perform computational checks of responsibilities in complex intelligent systems interacting with human agents? To answer the first two questions, we will design logical specification languages for collective responsibilities and for probability-based graded responsibilities, relative to normative systems. To answer the third question, we will design suitable translations into related logical formalisms for which optimized model checkers and theorem provers exist. All three answers will contribute to the central goal of the project as a whole: designing the formal blueprints for a prototype responsibility checking system. To reach that goal, the project will combine insights from three disciplines: philosophy, legal theory and computer science.
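To make the input/output shape of such a checking mechanism concrete, the following is a minimal, purely illustrative sketch, not the project's formalism. It encodes one standard responsibility condition from the stit (seeing-to-it-that) literature: an agent is deliberatively responsible for an outcome if the action it chose guarantees the outcome while some available alternative does not. All names (the toy states, actions and the `responsible` function) are hypothetical, and a real checker would operate on full models of system, environment and normative system.

```python
def guarantees(choice_states, outcome):
    """An action guarantees an outcome if every state the action
    can lead to satisfies the outcome."""
    return all(state in outcome for state in choice_states)

def responsible(agent_choices, chosen, outcome):
    """Deliberative-stit-style check: the chosen action ensures the
    outcome, and at least one available alternative would not have."""
    ensures = guarantees(agent_choices[chosen], outcome)
    had_alternative = any(
        not guarantees(states, outcome)
        for action, states in agent_choices.items()
        if action != chosen
    )
    return ensures and had_alternative

# Hypothetical toy scenario: a driving agent choosing between actions,
# each mapped to the set of states it can result in.
choices = {"brake": {"safe"}, "accelerate": {"safe", "crash"}}
outcome = {"safe"}  # the outcome to be checked, e.g. mandated by a norm

print(responsible(choices, "brake", outcome))       # True: braking ensures safety, accelerating does not
print(responsible(choices, "accelerate", outcome))  # False: accelerating does not ensure safety
```

In the project's terms, the decision problem answered here ("is this agent responsible for this outcome?") is what the envisioned checkers would solve over far richer inputs, with graded and collective variants handled by the specification languages of sub-projects (1) and (2).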