The need for judging moral responsibility arises both in ethics and in law. In an era of autonomous vehicles and, more generally, autonomous AI agents, the issue has now become relevant to AI as well. Although hundreds of books and thousands of papers have been written on moral responsibility, blameworthiness, and intention, there is surprisingly little work on defining these notions formally. But we will need formal definitions in order for AI agents to apply these notions.
In this talk, given May 1, 2017, computer science professor Joe Halpern takes some preliminary steps toward defining moral responsibility, blameworthiness, and intention. Halpern works on reasoning about knowledge and uncertainty, game theory, decision theory, causality, and security. He is a fellow of AAAI, AAAS (the American Association for the Advancement of Science), the American Academy of Arts and Sciences, ACM, IEEE, and SAET (the Society for the Advancement of Economic Theory). Among other awards, he received the ACM SIGART Autonomous Agents Research Award, the Dijkstra Prize, the ACM/AAAI Allen Newell Award, and the Gödel Prize, and was a Guggenheim Fellow and a Fulbright Fellow.