{"id":37298,"date":"2024-09-10T14:58:00","date_gmt":"2024-09-10T14:58:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=37298"},"modified":"2024-09-28T12:08:13","modified_gmt":"2024-09-28T12:08:13","slug":"medieval-theology-has-an-old-take-on-a-new-problem-%e2%88%92-ai-responsibility","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/medieval-theology-has-an-old-take-on-a-new-problem-%e2%88%92-ai-responsibility\/","title":{"rendered":"Medieval theology has an old take on a new problem \u2212 AI&nbsp;responsibility"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/david-danks-322450\">David Danks<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-california-san-diego-1314\">University of California, San Diego<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/mike-kirby-1550897\">Mike Kirby<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-utah-1188\">University of Utah<\/a><\/em><\/p>\n\n\n\n<p>A self-driving taxi has no passengers, so it parks itself in a lot to reduce congestion and <a href=\"https:\/\/doi.org\/10.1038\/nclimate2685\">air pollution<\/a>. After being hailed, the taxi heads out to pick up its passenger \u2013 and tragically strikes a pedestrian in a crosswalk on its way.<\/p>\n\n\n\n<p>Who or what deserves praise for the car\u2019s actions to reduce congestion and air pollution? And who or what deserves blame for the pedestrian\u2019s injuries?<\/p>\n\n\n\n<p>One possibility is the self-driving taxi\u2019s designer or developer. But in many cases, they wouldn\u2019t have been able to predict the taxi\u2019s exact behavior. In fact, people typically want artificial intelligence to discover some new or unexpected idea or plan. If we know exactly what the system should do, then we don\u2019t need to bother with AI.<\/p>\n\n\n\n<p>Alternatively, perhaps the taxi itself should be praised and blamed. 
However, these kinds of AI systems are essentially deterministic: Their behavior is dictated by their code and the incoming sensor data, even if observers might struggle to predict that behavior. It seems odd to morally judge a machine that had no choice.<\/p>\n\n\n\n<p>According to <a href=\"https:\/\/philarchive.org\/archive\/FRAAPA-8\">many<\/a> <a href=\"https:\/\/www.jstor.org\/stable\/pdf\/2025667.pdf\">modern<\/a> <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/j.1468-0114.2008.00333.x\">philosophers<\/a>, rational agents <a href=\"https:\/\/www.jstor.org\/stable\/pdf\/25115787.pdf\">can be morally responsible<\/a> for their actions, even if their actions were completely predetermined \u2013 whether by neuroscience or by code. But most agree that the moral agent must have certain capabilities that self-driving taxis almost certainly lack, such as the ability to shape its own values. AI systems fall in an uncomfortable middle ground between moral agents and nonmoral tools.<\/p>\n\n\n\n<p>As a society, we face a conundrum: it seems that no one, or no one thing, is morally responsible for the AI\u2019s actions \u2013 what philosophers call a responsibility gap. Present-day theories of moral responsibility simply do not seem appropriate for understanding situations involving autonomous or semi-autonomous AI systems.<\/p>\n\n\n\n<p>If current theories will not work, then perhaps we should look to the past \u2013 to centuries-old ideas with surprising resonance today.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/617970\/original\/file-20240907-20-cjr3ks.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A cityscape with cars, some of which seem to be connected by thin blue lines, crossing a highway bridge over trees.\"\/><figcaption>If self-driving cars take off, these questions about accountability will only grow. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/smart-transportation-with-motorway-intersection-royalty-free-image\/1327573950?phrase=self-driving&amp;searchscope=image%2Cfilm&amp;adppopup=true\">dowell\/Moment via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<h2>God and man<\/h2>\n\n\n\n<p>A similar question perplexed Christian theologians in the 13th and 14th centuries, from <a href=\"https:\/\/www.newadvent.org\/summa\/1083.htm\">Thomas Aquinas<\/a> to <a href=\"https:\/\/plato.stanford.edu\/entries\/duns-scotus\/\">Duns Scotus<\/a> to <a href=\"https:\/\/plato.stanford.edu\/entries\/free-will-foreknowledge\/\">William of Ockham<\/a>. How can people be responsible for their actions, and the results, if an omniscient God designed them \u2013 and presumably knew what they would do?<\/p>\n\n\n\n<p>Medieval philosophers held that someone\u2019s decisions result from their will, operating on the products of their intellect. Broadly speaking, they understood <a href=\"https:\/\/www.newadvent.org\/summa\/1079.htm\">human intellect as a set of mental capabilities<\/a> that enable rational thought and learning.<\/p>\n\n\n\n<p>Intellect is the rational, logical part of people\u2019s minds or souls. When two people are presented with identical situations and they both arrive at the same \u201crational conclusion\u201d about how to handle things, they\u2019re using intellect. Intellect is like computer code in this way.<\/p>\n\n\n\n<p>But the intellect doesn\u2019t always provide a unique answer. Often, the intellect provides only possibilities, and <a href=\"https:\/\/doi.org\/10.1353\/tho.1985.0013\">the will selects among them<\/a>, whether consciously or unconsciously. Will is the act of <a href=\"https:\/\/doi.org\/10.1093\/acprof:oso\/9780199579914.003.0004\">freely choosing<\/a> from among the possibilities.<\/p>\n\n\n\n<p>As a simple example, on a rainy day, intellect dictates that I should grab an umbrella from my closet, but not which one. 
Will is choosing the red umbrella instead of the blue one.<\/p>\n\n\n\n<p>For these <a href=\"https:\/\/www.cuapress.org\/9780813228747\/human-action-in-thomas-aquinas-john-duns-scotus-and-william-of-ockham\/\">medieval thinkers<\/a>, <a href=\"https:\/\/www.heritagebooks.org\/products\/divine-will-and-human-choice-freedom-contingency-and-necessity-in-early-modern-reformed-thought-muller.html\">moral responsibility<\/a> depended on what the will and the intellect each contribute. If the intellect determines that there is only one possible action, then I could not do otherwise, and so I am not morally responsible. One might even conclude that God is morally responsible, since my intellect comes from God \u2013 though the medieval theologians were very cautious about attributing responsibility to God.<\/p>\n\n\n\n<p>On the other hand, if intellect places absolutely no constraints on my actions, then I am fully morally responsible, since will is doing all of the work. Of course, most actions involve contributions from both intellect and will \u2013 it\u2019s usually not an either\/or.<\/p>\n\n\n\n<p>In addition, other people often constrain us: from parents and teachers to judges and monarchs, especially in the medieval philosophers\u2019 days \u2013 making it even more complicated to attribute moral responsibility.<\/p>\n\n\n\n<h2>Man and AI<\/h2>\n\n\n\n<p>Clearly, the relationship between AI developers and their creations is not exactly the same as between God and humans. But as <a href=\"https:\/\/www.daviddanks.org\">professors of philosophy<\/a> <a href=\"https:\/\/www.cs.utah.edu\/%7Ekirby\">and computing<\/a>, we see intriguing parallels. 
These older ideas might help us think through how an AI system and its designers might share moral responsibility.<\/p>\n\n\n\n<p>AI developers are not omniscient gods, but they do provide the \u201cintellect\u201d of the AI system by <a href=\"https:\/\/medium.com\/@datasciencewizards\/machine-learning-pipeline-what-it-is-why-it-matters-and-guide-to-building-it-2940d143fd37\">selecting and implementing<\/a> its learning methods and response capabilities. From the designer\u2019s perspective, this \u201cintellect\u201d constrains the AI\u2019s behavior but almost never determines its behavior completely.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/617971\/original\/file-20240907-16-ij7h3.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A man in glasses looks through a see-through screen with words in black font covering most of it.\"\/><figcaption>Where does his responsibility stop and the AI system\u2019s begin? <a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/chatbot-see-through-screen-royalty-free-image\/1510979104?phrase=ai+engineering&amp;searchscope=image%2Cfilm&amp;adppopup=true\">Laurence Dutton\/E+ via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>Most modern AI systems are designed to learn from data and can dynamically respond to their environments. The AI will thus seem to have a \u201cwill\u201d that chooses how to respond, within the constraints of its \u201cintellect.\u201d<\/p>\n\n\n\n<p>Users, managers, regulators and other parties can further constrain AI systems \u2013 analogous to how human authorities such as monarchs constrain people in the medieval philosophers\u2019 framework.<\/p>\n\n\n\n<h2>Who\u2019s responsible?<\/h2>\n\n\n\n<p>These centuries-old ideas map surprisingly well to the structure of moral problems involving AI systems. 
So let\u2019s return to our opening questions: Who or what is responsible for the benefits and harms of the self-driving taxi?<\/p>\n\n\n\n<p>The details matter. For example, if the taxi developer explicitly writes down how the taxi should behave around crosswalks, then its actions would be entirely due to its \u201cintellect\u201d \u2013 and so the developer would be responsible.<\/p>\n\n\n\n<p>However, let\u2019s say the taxi encountered situations it was not explicitly programmed for \u2013 such as if the crosswalk was painted in an unusual way, or if the taxi learned something different from data in its environment than what the developer had in mind. In cases like these, the taxi\u2019s actions would be primarily due to its \u201cwill,\u201d because the taxi selected an unexpected option \u2013 and so the taxi would be responsible.<\/p>\n\n\n\n<p>If the taxi is morally responsible, then what? Is the taxi company liable? Should the taxi\u2019s code be updated? Even the two of us do not agree about the full answer. But we think that a better understanding of moral responsibility is an important first step.<\/p>\n\n\n\n<p>Medieval ideas are not only about medieval objects. 
These theologians can help ethicists better understand the present-day challenge of AI systems \u2013 though we have only scratched the surface.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/david-danks-322450\">David Danks<\/a>, Professor of Data Science, Philosophy, &amp; Policy, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-california-san-diego-1314\">University of California, San Diego<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/mike-kirby-1550897\">Mike Kirby<\/a>, Professor of Computing, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-utah-1188\">University of Utah<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/medieval-theology-has-an-old-take-on-a-new-problem-ai-responsibility-236016\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>David Danks, University of California, San Diego and Mike Kirby, University of Utah A self-driving taxi has no passengers, so it parks itself in a lot to reduce congestion and air pollution. 
After being hailed, the taxi heads out to pick up its passenger \u2013 and tragically strikes a pedestrian in a crosswalk on its [&hellip;]<\/p>\n","protected":false},"author":44,"featured_media":37299,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[8025,28,8],"tags":[401,10656,885,891,886,860,13960,581,4271],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/37298"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/44"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=37298"}],"version-history":[{"count":2,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/37298\/revisions"}],"predecessor-version":[{"id":37617,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/37298\/revisions\/37617"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/37299"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=37298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=37298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=37298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}