Workshop: Politics and Philosophy of Artificial Intelligence
Jan Broersen will give a presentation at this workshop.
The Politics and Philosophy of Artificial Intelligence
Government Department Workshop, LSE, 2013/14
Organizers: Christian List and Kai Spiekermann
Until now, human beings have been the only genuinely intelligent creatures in the world. Humans have always been the ultimate loci of intentional action and moral responsibility. Although some non-human animals may be interpreted as intentional agents of a simpler kind, they lack the capacity for moral responsibility. Similarly, on some accounts, groups and organizations can qualify as agents in their own right (e.g., states in international relations theory; firms, organizations, and parties in economics and political science), but their agency is often seen as deriving from, and somehow reducible to, the agency of their individual members. In light of this, it is not surprising that explanatory theories in the social sciences tend to make individual human beings the ultimate units of explanation, and that normative theories tend to make individual humans the ultimate units of value and responsibility.
The arrival of artificial intelligence, even more than the phenomenon of corporate agency, challenges this traditional picture. Autonomous or semi-autonomous computational systems are increasingly being used in areas as diverse as financial markets, transportation, control of complex technologies, and of course the military.
While the first generation of drones were simply remote-controlled aircraft operated by humans, the next generation (as well as some self-directing missiles) will be systems that make autonomous decisions about how to operate and, potentially, which targets to strike. Thus, we could be faced with machines making life-and-death decisions. More harmlessly, in 2011, an IBM computer named “Watson” won a contest of the US quiz show Jeopardy!, and chess computers now outperform all but the very best human players in the world.
While these technologies already raise a number of questions for society in general and for political, legal, and moral theorists in particular, the challenges could be even greater as the systems in question become more advanced. So far, we have only achieved what scholars call “weak artificial intelligence”, that is, machine intelligence weaker, or less general, than human intelligence. “Strong artificial intelligence”, which refers to machine intelligence comparable to (or even stronger than) human intelligence, remains elusive for the time being.
A number of people have warned, however, that it is only a matter of time until we are faced with “strong AI” systems. This seems all the more plausible in the context of “Moore’s law”, named after Intel co-founder Gordon Moore, whose 1965 observation is commonly summarized as the claim that the number of transistors on an integrated circuit doubles approximately every two years. Surprisingly, this trend has continued up to now, and we have not yet hit a ceiling.
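To make the doubling claim concrete, here is a minimal sketch (in Python) of the exponential growth it implies. The 1971 baseline of roughly 2,300 transistors for the Intel 4004 is an illustrative assumption on our part, not a figure from the workshop description.

    # Sketch of the arithmetic behind the "doubling every two years" claim.
    # The 1971 baseline (~2,300 transistors, Intel 4004) is an illustrative assumption.

    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        """Projected transistor count, assuming one doubling every `doubling_years` years."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1985, 2000, 2013):
        print(year, f"{transistors(year):,.0f}")

    # By 2013 the projection reaches several billion transistors, roughly the
    # scale of contemporary high-end chips, which is what makes the continued
    # validity of the trend so striking.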
One of the first to raise the prospect of an “intelligence explosion” was the statistician I. J. Good, who offered the following speculation, also in 1965:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” (Quoted in David Chalmers, “The Singularity”)
Until very recently, this possibility was discussed mainly by futurologists, science-fiction writers, and computer scientists, but had not really entered the academic mainstream. This is now slowly changing, however. Oxford’s Future of Humanity Institute, for example, conducts research into “existential risks”, among which are a number of risks associated with artificial intelligence. Similarly, the newly founded Cambridge Centre for the Study of Existential Risk has taken up this theme. And a growing number of scholars across several academic fields, including legal and political theory, are working on the social, political, and legal implications of the advent of intelligent machines.
The aim of the proposed small workshop is to bring together political scientists and theorists, philosophers, and computer scientists to explore political and philosophical aspects of artificial intelligence. We are particularly interested in questions of agency ascription, moral and legal responsibility, and risk assessment and management in relation to artificially intelligent and autonomous systems.
Who is responsible for their decisions? How should a justice system incorporate them? Can such machines be held responsible themselves, and how should they be regulated? Could they conceivably be objects of moral concern, or even bearers of rights, and what would this imply? And how would our answers to some of these questions change if it turned out that machines could be conscious, or that they could pass the Turing test?
The trend towards the development of artificially intelligent and autonomous systems seems irreversible, and it is important for our social and political theories to come to terms with their possible arrival and to develop theoretical and conceptual repertoires for thinking about the challenges they raise.