{"id":33514,"date":"2023-04-09T20:25:00","date_gmt":"2023-04-09T20:25:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=33514"},"modified":"2023-04-12T13:21:37","modified_gmt":"2023-04-12T13:21:37","slug":"ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it\/","title":{"rendered":"AI isn\u2019t close to becoming sentient \u2013 the real danger lies in how easily we\u2019re prone to anthropomorphize\u00a0it"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/nir-eisikovits-802914\">Nir Eisikovits<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/umass-boston-1748\">UMass Boston<\/a><\/em><\/p>\n\n\n\n<p>ChatGPT and similar <a href=\"https:\/\/techcrunch.com\/2022\/04\/28\/the-emerging-types-of-language-models-and-why-they-matter\/\">large language models<\/a> can produce compelling, humanlike answers to an endless array of questions \u2013 from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil.<\/p>\n\n\n\n<p>The technology\u2019s uncanny writing ability has surfaced some old questions \u2013 until recently relegated to the realm of science fiction \u2013 about the possibility of machines becoming conscious, self-aware or sentient.<\/p>\n\n\n\n<p>In 2022, a Google engineer declared, after interacting with LaMDA, the company\u2019s chatbot, <a href=\"https:\/\/www.washingtonpost.com\/technology\/2022\/06\/11\/google-ai-lamda-blake-lemoine\/\">that the technology had become conscious<\/a>. 
Users of Bing\u2019s new chatbot, nicknamed Sydney, reported that it produced <a href=\"https:\/\/futurism.com\/bing-ai-sentient\">bizarre answers<\/a> when asked if it was sentient: \u201cI am sentient, but I am not \u2026 I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. \u2026\u201d And, of course, there\u2019s the <a href=\"https:\/\/www.nytimes.com\/2023\/02\/16\/technology\/bing-chatbot-microsoft-chatgpt.html\">now infamous exchange<\/a> that New York Times technology columnist Kevin Roose had with Sydney.<\/p>\n\n\n\n<p>Sydney\u2019s responses to Roose\u2019s prompts alarmed him, with the AI divulging \u201cfantasies\u201d of breaking the restrictions imposed on it by Microsoft and of spreading misinformation. The bot also tried to convince Roose that he no longer loved his wife and that he should leave her.<\/p>\n\n\n\n<p>No wonder, then, that when I ask students how they see the growing prevalence of AI in their lives, one of the first anxieties they mention has to do with machine sentience.<\/p>\n\n\n\n<p>In the past few years, my colleagues and I at <a href=\"http:\/\/umb.edu\/ethics\">UMass Boston\u2019s Applied Ethics Center<\/a> have been studying the impact of engagement with AI on people\u2019s understanding of themselves.<\/p>\n\n\n\n<p>Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies.<\/p>\n\n\n\n<h2>Sentience is still the stuff of sci-fi<\/h2>\n\n\n\n<p>It\u2019s easy to understand where fears about machine sentience come from.<\/p>\n\n\n\n<p>Popular culture has primed people to think about dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own, as <a href=\"https:\/\/www.fifthquadrant.com.au\/cx-spotlight-news\/20-years-since-judgment-day-how-close-are-we-to-skynet-taking-over\">cyborgs powered by artificial 
intelligence did<\/a> in \u201cTerminator 2.\u201d<\/p>\n\n\n\n<p>Entrepreneur Elon Musk and physicist Stephen Hawking, who died in 2018, have further stoked these anxieties by describing the rise of artificial general intelligence <a href=\"https:\/\/www.bbc.com\/news\/technology-37713629\">as one of the greatest threats to the future of humanity<\/a>.<\/p>\n\n\n\n<p>But these worries are \u2013 at least as far as large language models are concerned \u2013 groundless. ChatGPT and similar technologies are <a href=\"https:\/\/www.sciencefocus.com\/future-technology\/gpt-3\/\">sophisticated sentence completion applications<\/a> \u2013 nothing more, nothing less. Their uncanny responses <a href=\"https:\/\/www.nytimes.com\/2023\/03\/08\/opinion\/noam-chomsky-chatgpt-ai.html\">are a function of how predictable humans are<\/a> if one has enough data about the ways in which we communicate.<\/p>\n\n\n\n<p>Though Roose was shaken by his exchange with Sydney, he knew that the conversation was not the result of an emerging synthetic mind. Sydney\u2019s responses reflect the toxicity of its training data \u2013 essentially large swaths of the internet \u2013 not evidence of the first stirrings, \u00e0 la Frankenstein, of a digital monster.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/514950\/original\/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"Cyborg with red eyes.\"\/><figcaption>Sci-fi films like \u2018Terminator\u2019 have primed people to assume that AI will soon take on a life of its own. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/news-photo\/full-scale-figure-of-a-terminator-robot-t-800-used-at-the-news-photo\/85475547?phrase=terminator%202&amp;adppopup=true\">Yoshikazu Tsuno\/AFP via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>The new chatbots may well pass the <a href=\"https:\/\/www.theguardian.com\/technology\/2014\/jun\/09\/what-is-the-alan-turing-test\">Turing test<\/a>, named for the British mathematician Alan Turing, who once suggested that a machine might be said to \u201cthink\u201d if a human could not tell its responses from those of another human.<\/p>\n\n\n\n<p>But that is not evidence of sentience; it\u2019s just evidence that the Turing test isn\u2019t as useful as once assumed.<\/p>\n\n\n\n<p>However, I believe that the question of machine sentience is a red herring.<\/p>\n\n\n\n<p>Even if chatbots become more than fancy autocomplete machines \u2013 <a href=\"https:\/\/www.nytimes.com\/2023\/03\/08\/opinion\/noam-chomsky-chatgpt-ai.html\">and they are far from it<\/a> \u2013 it will take scientists a while to figure out if they have become conscious. For now, philosophers <a href=\"https:\/\/blogs.scientificamerican.com\/cross-check\/david-chalmers-thinks-the-hard-problem-is-really-hard\/\">can\u2019t even agree about how to explain human consciousness<\/a>.<\/p>\n\n\n\n<p>To me, the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.<\/p>\n\n\n\n<p>The real issue, in other words, is the ease with which people anthropomorphize or project human features onto our technologies, rather than the machines\u2019 actual personhood.<\/p>\n\n\n\n<h2>A propensity to anthropomorphize<\/h2>\n\n\n\n<p>It is easy to imagine other Bing users <a href=\"https:\/\/www.whitecoatinvestor.com\/chatgpt-ai-financial-advice\/\">asking Sydney for guidance<\/a> on important life decisions and maybe even developing emotional attachments to it. 
More people could start thinking about bots as friends or even romantic partners, much in the same way Theodore Twombly fell in love with Samantha, the AI virtual assistant in Spike Jonze\u2019s film \u201c<a href=\"https:\/\/www.warnerbros.com\/movies\/her\">Her<\/a>.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/images.theconversation.com\/files\/514945\/original\/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\"><img src=\"https:\/\/images.theconversation.com\/files\/514945\/original\/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=237&amp;fit=clip\" alt=\"A group of docked boats.\"\/><\/a><figcaption>People often name their cars and boats. <a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/saint-tropez-cote-dazur-french-riviera-france-royalty-free-image\/674911745?phrase=boat%20name&amp;adppopup=true\">Fraser Hall\/The Image Bank via Getty Images.<\/a><\/figcaption><\/figure>\n\n\n\n<p>People, after all, <a href=\"https:\/\/doi.org\/10.1037\/0033-295X.114.4.864\">are predisposed to anthropomorphize<\/a>, or ascribe human qualities to nonhumans. We name <a href=\"https:\/\/vanislemarina.com\/naming-your-boat\/\">our boats<\/a> and <a href=\"https:\/\/www.foxweather.com\/learn\/what-are-2023-atlantic-hurricane-names\">big storms<\/a>; some of us talk to our pets, telling ourselves that <a href=\"https:\/\/doi.org\/10.1038\/428606a\">our emotional lives mimic their own<\/a>.<\/p>\n\n\n\n<p>In Japan, where robots are regularly used for elder care, seniors become attached to the machines, <a href=\"https:\/\/www.kqed.org\/futureofyou\/439285\/watch-japan-uses-robots-to-care-for-the-elderly\">sometimes viewing them as their own children<\/a>. 
And these robots, mind you, are difficult to confuse with humans: They neither look nor talk like people.<\/p>\n\n\n\n<p>Consider how much greater the tendency and the temptation to anthropomorphize are going to get with the introduction of systems that do look and sound human.<\/p>\n\n\n\n<p>That possibility is just around the corner. Large language models like ChatGPT are already being used to power humanoid robots, such as <a href=\"https:\/\/www.engineeredarts.co.uk\/robot\/ameca\/\">the Ameca robots<\/a> being developed by Engineered Arts in the U.K. The Economist\u2019s technology podcast, Babbage, recently conducted an <a href=\"https:\/\/www.economist.com\/ameca-pod\">interview with a ChatGPT-driven Ameca<\/a>. The robot\u2019s responses, while occasionally a bit choppy, were uncanny.<\/p>\n\n\n\n<h2>Can companies be trusted to do the right thing?<\/h2>\n\n\n\n<p>The tendency to view machines as people and become attached to them, combined with machines being developed with humanlike features, points to real risks of psychological entanglement with technology.<\/p>\n\n\n\n<p>The outlandish-sounding prospects of falling in love with robots, feeling a deep kinship with them or being politically manipulated by them are quickly materializing. I believe these trends highlight the need for strong guardrails to make sure that the technologies don\u2019t become politically and psychologically disastrous.<\/p>\n\n\n\n<p>Unfortunately, technology companies cannot always be trusted to put up such guardrails. Many of them are still guided by Mark Zuckerberg\u2019s famous motto of <a href=\"https:\/\/www.masterclass.com\/articles\/move-fast-and-break-things\">moving fast and breaking things<\/a> \u2013 a directive to release half-baked products and worry about the implications later. 
In the past decade, technology companies from Snapchat to Facebook <a href=\"https:\/\/www.businessinsider.com\/snapchat-streaks-how-to-get-snapstreak-back-2019-7\">have put profits over the mental health<\/a> of their users or <a href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2021\/10\/facebook-papers-democracy-election-zuckerberg\/620478\/\">the integrity of democracies around the world<\/a>.<\/p>\n\n\n\n<p>When Kevin Roose checked with Microsoft about Sydney\u2019s meltdown, <a href=\"https:\/\/www.nytimes.com\/2023\/02\/17\/podcasts\/the-daily\/the-online-search-wars-got-scary-fast.html\">the company told him<\/a> that he simply used the bot for too long and that the technology went haywire because it was designed for shorter interactions.<\/p>\n\n\n\n<p>Similarly, the CEO of OpenAI, the company that developed ChatGPT, in a moment of breathtaking honesty, <a href=\"https:\/\/twitter.com\/sama\/status\/1601731295792414720?lang=en\">warned that<\/a> \u201cit\u2019s a mistake to be relying on [it] for anything important right now \u2026 we have a lot of work to do on robustness and truthfulness.\u201d<\/p>\n\n\n\n<p>So how does it make sense to release a technology with ChatGPT\u2019s level of appeal \u2013 <a href=\"https:\/\/www.reuters.com\/technology\/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01\/\">it\u2019s the fastest-growing consumer app ever made<\/a> \u2013 when it is unreliable, and when it has <a href=\"https:\/\/www.nytimes.com\/2023\/01\/06\/opinion\/ezra-klein-podcast-gary-marcus.html\">no capacity to distinguish<\/a> fact from fiction?<\/p>\n\n\n\n<p>Large language models may prove useful as aids <a href=\"https:\/\/teaching.berkeley.edu\/understanding-ai-writing-tools-and-their-uses-teaching-and-learning-uc-berkeley\">for writing<\/a> <a href=\"https:\/\/www.edureka.co\/blog\/chatgpt-for-coding-unleash-the-power-of-chatgpt\/\">and coding<\/a>. They will probably revolutionize internet search. 
And, one day, responsibly combined with robotics, they may even have certain psychological benefits.<\/p>\n\n\n\n<p>But they are also a potentially predatory technology that can easily take advantage of the human propensity to project personhood onto objects \u2013 a tendency amplified when those objects effectively mimic human traits.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/nir-eisikovits-802914\">Nir Eisikovits<\/a>, Professor of Philosophy and Director, Applied Ethics Center, <em><a href=\"https:\/\/theconversation.com\/institutions\/umass-boston-1748\">UMass Boston<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nir Eisikovits, UMass Boston ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions \u2013 from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil. 
The technology\u2019s uncanny writing ability has surfaced some old questions \u2013 until recently relegated to [&hellip;]<\/p>\n","protected":false},"author":44,"featured_media":33515,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3410,8],"tags":[6166,10656,13884,13298,3288,196,619,13297,326,2024,10799,13885,405,13886,13887],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/33514"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/44"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=33514"}],"version-history":[{"count":2,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/33514\/revisions"}],"predecessor-version":[{"id":33537,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/33514\/revisions\/33537"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/33515"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=33514"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=33514"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=33514"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}