{"id":34010,"date":"2023-06-02T01:27:00","date_gmt":"2023-06-02T01:27:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=34010"},"modified":"2023-07-02T07:52:48","modified_gmt":"2023-07-02T07:52:48","slug":"how-ai-could-take-over-elections-and-undermine-democracy","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/how-ai-could-take-over-elections-and-undermine-democracy\/","title":{"rendered":"How AI could take over elections \u2013 and undermine\u00a0democracy"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/archon-fung-1440913\">Archon Fung<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/harvard-kennedy-school-3840\">Harvard Kennedy School<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/lawrence-lessig-1442775\">Lawrence Lessig<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/harvard-university-1306\">Harvard University<\/a><\/em><\/p>\n\n\n\n<p>Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?<\/p>\n\n\n\n<p>Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a <a href=\"https:\/\/www.c-span.org\/video\/?528117-1\/openai-ceo-testifies-artificial-intelligence\">May 16, 2023, U.S. Senate hearing<\/a> on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.<\/p>\n\n\n\n<p>Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger \u2013 a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate \u2013 the campaign that buys the services of Clogger Inc. 
\u2013 prevails in an election.<\/p>\n\n\n\n<p>While platforms like Facebook, Twitter and YouTube use forms of AI to get users to <a href=\"https:\/\/www.vox.com\/technology\/2018\/2\/19\/17020310\/tristan-harris-facebook-twitter-humane-tech-time\">spend more time<\/a> on their sites, Clogger\u2019s AI would have a different objective: to change people\u2019s voting behavior.<\/p>\n\n\n\n<h2>How Clogger would work<\/h2>\n\n\n\n<p>As a <a href=\"https:\/\/scholar.google.com\/citations?user=3Bl9cn8AAAAJ&amp;hl=en\">political scientist<\/a> and a <a href=\"https:\/\/scholar.google.com\/citations?user=LxG5YWcAAAAJ&amp;hl=en\">legal scholar<\/a> who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of <a href=\"https:\/\/www.washingtonpost.com\/opinions\/book-review-the-victory-lab-the-secret-science-of-winning-campaigns-by-sasha-issenberg\/2012\/11\/02\/35aa27a0-1d20-11e2-9cd5-b55c38388962_story.html\">behavior manipulation and microtargeting techniques<\/a> that political campaigns have used since the early 2000s. Just as <a href=\"https:\/\/www.vox.com\/recode\/2019\/12\/10\/20996869\/facebook-political-ads-targeting-alex-stamos-interview-open-sourced\">advertisers use your browsing and social media history<\/a> to individually target commercial and political ads now, Clogger would pay attention to you \u2013 and hundreds of millions of other voters \u2013 individually.<\/p>\n\n\n\n<p>It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages \u2014 texts, social media and email, perhaps including images and videos \u2014 tailored to you personally. 
Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally \u2013 and millions for others \u2013 over the course of a campaign.<\/p>\n\n\n\n<p>Second, Clogger would use a technique called <a href=\"https:\/\/towardsdatascience.com\/reinforcement-learning-101-e24b50e1d292\">reinforcement learning<\/a> to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which actions work better in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games <a href=\"https:\/\/www.nytimes.com\/2018\/12\/26\/science\/chess-artificial-intelligence.html\">better than any human<\/a> have used reinforcement learning.<\/p>\n\n\n\n<p>Third, over the course of a campaign, Clogger\u2019s messages could evolve to take into account your responses to the machine\u2019s prior dispatches and what it has learned about changing others\u2019 minds. Clogger would be able to carry on dynamic \u201cconversations\u201d with you \u2013 and millions of other people \u2013 over time. Clogger\u2019s messages would be similar to <a href=\"https:\/\/theconversation.com\/why-bad-ads-appear-on-good-websites-a-computer-scientist-explains-178268\">ads that follow you<\/a> across different websites and social media.<\/p>\n\n\n\n<h2>The nature of AI<\/h2>\n\n\n\n<p>Three more features \u2013 or bugs \u2013 are worth noting.<\/p>\n\n\n\n<p>First, the messages that Clogger sends may or may not be political in content. 
The machine\u2019s only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have thought of.<\/p>\n\n\n\n<p>One possibility is sending likely opponent voters information about their nonpolitical passions in sports or entertainment to bury the political messaging they receive. Another possibility is sending off-putting messages \u2013 for example, incontinence advertisements \u2013 timed to coincide with opponents\u2019 messaging. And another is manipulating voters\u2019 social media friend groups to give the sense that their social circles support its candidate.<\/p>\n\n\n\n<p>Second, Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. <a href=\"https:\/\/spectrum.ieee.org\/ai-hallucination\">Language model \u201challucinations\u201d<\/a> are not a problem for this machine because its objective is to change your vote, not to provide accurate information.<\/p>\n\n\n\n<p>Third, because it is a <a href=\"https:\/\/theconversation.com\/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888\">black box type of artificial intelligence<\/a>, people would have no way to know what strategies it uses. The field of explainable AI aims to open the black box of many machine-learning models so people can understand how they work.<\/p>\n\n\n\n<h2>Clogocracy<\/h2>\n\n\n\n<p>If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. 
Dogger, and the winner would be the client of the more effective machine.<\/p>\n\n\n\n<p>Political scientists and pundits would have much to say about why one or the other AI prevailed, but likely no one would really know. The president would have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.<\/p>\n\n\n\n<p>In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy \u2013 the speeches, the ads, the messages, the voting and the counting of votes \u2013 would have occurred.<\/p>\n\n\n\n<p>The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did \u2013 Clogger and Dogger don\u2019t care about policy views \u2013 the president\u2019s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.<\/p>\n\n\n\n<p>Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. 
The president\u2019s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests or even the president\u2019s own ideology.<\/p>\n\n\n\n<h2>Avoiding Clogocracy<\/h2>\n\n\n\n<p>It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.<\/p>\n\n\n\n<p>Enhanced privacy protection <a href=\"https:\/\/theconversation.com\/how-can-congress-regulate-ai-erect-guardrails-ensure-accountability-and-address-monopolistic-power-205900\">would help<\/a>. Clogger would depend on access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers deny the machine would make it less effective. Strong data privacy laws could help steer AI away from being manipulative.<\/p>\n\n\n\n<p>Another solution lies with election commissions. They could try to ban or severely regulate these machines. There\u2019s a <a href=\"https:\/\/www.lawfareblog.com\/machine-first-amendment-rights\">fierce debate<\/a> about whether such <a href=\"https:\/\/dx.doi.org\/10.2139\/ssrn.3922565\">\u201creplicant\u201d speech<\/a>, even if it\u2019s political in nature, can be regulated. 
The U.S.\u2019s extreme free speech tradition <a href=\"https:\/\/freespeechproject.georgetown.edu\/artificially-speaking-the-intersection-of-free-speech-and-ai\/\">leads many prominent academics to say it cannot<\/a>.<\/p>\n\n\n\n<p>But there is no reason to automatically extend the First Amendment\u2019s protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, <a href=\"https:\/\/dx.doi.org\/10.2139\/ssrn.3922565\">not the misplaced assumption<\/a> that James Madison\u2019s views in 1789 were intended to apply to AI.<\/p>\n\n\n\n<p>European Union regulators are moving in this direction. Policymakers revised the European Parliament\u2019s draft of its Artificial Intelligence Act to designate \u201cAI systems to influence voters in campaigns\u201d <a href=\"https:\/\/www.europarl.europa.eu\/news\/en\/press-room\/20230505IPR84904\/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence\">as \u201chigh risk\u201d<\/a> and subject to regulatory scrutiny.<\/p>\n\n\n\n<p>One constitutionally safer, if smaller, step, already adopted in part by <a href=\"https:\/\/ec.europa.eu\/commission\/presscorner\/detail\/en\/ip_22_3664\">European internet regulators<\/a> and in <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/codes_displayText.xhtml?lawCode=BPC&amp;division=7.&amp;title=&amp;part=3.&amp;chapter=6.&amp;article=\">California<\/a>, is to prohibit bots from passing themselves off as people. 
For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.<\/p>\n\n\n\n<p>This would be like the advertising disclaimer requirements \u2013 \u201cPaid for by the Sam Jones for Congress Committee\u201d \u2013 but modified to reflect its AI origin: \u201cThis AI-generated ad was paid for by the Sam Jones for Congress Committee.\u201d A stronger version could require: \u201cThis AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.\u201d At the very least, we believe voters deserve to know when it is a bot speaking to them, and they should know why, as well.<\/p>\n\n\n\n<p>The possibility of a system like Clogger shows that the path toward <a href=\"https:\/\/www.nytimes.com\/2023\/03\/27\/opinion\/ai-chatgpt-chatbots.html\">human collective disempowerment<\/a> may not require some superhuman <a href=\"https:\/\/www.wired.com\/story\/what-is-artificial-general-intelligence-agi-explained\/\">artificial general intelligence<\/a>. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people\u2019s many buttons.<\/p>\n\n\n\n<p><em>Learn what you need to know about artificial intelligence by <a href=\"https:\/\/memberservices.theconversation.com\/newsletters\/?nl=ai&amp;source=inline-promo\">signing up for our newsletter series of four emails<\/a> delivered over the course of a week. 
You can read all our stories on generative AI at <a href=\"https:\/\/theconversation.com\/topics\/generative-ai-133426\">TheConversation.com<\/a>.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/archon-fung-1440913\">Archon Fung<\/a>, Professor of Citizenship and Self-Government, <em><a href=\"https:\/\/theconversation.com\/institutions\/harvard-kennedy-school-3840\">Harvard Kennedy School<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/lawrence-lessig-1442775\">Lawrence Lessig<\/a>, Professor of Law and Leadership, <em><a href=\"https:\/\/theconversation.com\/institutions\/harvard-university-1306\">Harvard University<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/how-ai-could-take-over-elections-and-undermine-democracy-206051\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Archon Fung, Harvard Kennedy School and Lawrence Lessig, Harvard University Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways? Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. 
Altman replied that he was indeed [&hellip;]<\/p>\n","protected":false},"author":44,"featured_media":34011,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[401,13298,474,14152,13297,326,13885,14151,14150,255,8571],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/34010"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/44"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=34010"}],"version-history":[{"count":1,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/34010\/revisions"}],"predecessor-version":[{"id":34012,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/34010\/revisions\/34012"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/34011"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=34010"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=34010"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=34010"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}