{"id":38183,"date":"2024-11-23T13:45:00","date_gmt":"2024-11-23T13:45:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=38183"},"modified":"2024-11-25T06:53:35","modified_gmt":"2024-11-25T06:53:35","slug":"ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond\/","title":{"rendered":"AI harm is often behind the scenes and builds over time \u2013 a legal scholar explains how the law can adapt to&nbsp;respond"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/sylvia-lu-2222047\">Sylvia Lu<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-michigan-1290\">University of Michigan<\/a><\/em><\/p>\n\n\n\n<p>As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life \u2013 learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.<\/p>\n\n\n\n<p>These harms aren\u2019t obvious or immediate. They\u2019re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming <a href=\"https:\/\/www.ntia.gov\/issues\/artificial-intelligence\/ai-accountability-policy-report\/requisites-for-ai-accountability-areas-of-significant-commenter-agreement\/recognize-potential-harms-and-risks\">a significant threat<\/a> to privacy, equality, autonomy and safety.<\/p>\n\n\n\n<p>AI systems are embedded in nearly every facet of modern life. 
They suggest <a href=\"https:\/\/research.netflix.com\/research-area\/recommendations\">what shows and movies you should watch<\/a>, help employers <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/mar\/16\/ai-racism-chatgpt-gemini-bias\">decide whom they want to hire<\/a>, and even influence judges\u2019 <a href=\"https:\/\/www.reuters.com\/legal\/transactional\/us-judge-runs-mini-experiment-with-ai-help-decide-case-2024-09-06\/\">sentencing decisions<\/a>. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?<\/p>\n\n\n\n<p>The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I <a href=\"https:\/\/scholar.google.com.au\/citations?hl=en&amp;user=1jTquTsAAAAJ&amp;view_op=list_works&amp;sortby=pubdate\">study the intersection of law and technology<\/a>, and I\u2019ve outlined <a href=\"http:\/\/dx.doi.org\/10.2139\/ssrn.4949052\">a legal framework<\/a> to do just that.<\/p>\n\n\n\n<h2>Slow burns<\/h2>\n\n\n\n<p>One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don\u2019t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people \u2014 often without their knowledge \u2014 and use this data <a href=\"https:\/\/theconversation.com\/forget-dystopian-scenarios-ai-is-pervasive-today-and-the-risks-are-often-hidden-218222\">to shape decisions affecting people\u2019s lives<\/a>.<\/p>\n\n\n\n<p>Sometimes, this results in minor inconveniences, like an advertisement that follows you across websites. But when these repeated harms go unaddressed, they can scale up, leading to significant cumulative damage across diverse groups of people.<\/p>\n\n\n\n<p>Consider the example of social media algorithms. 
They are ostensibly designed to promote beneficial social interactions. Behind that helpful facade, however, they silently track users\u2019 clicks and <a href=\"https:\/\/www.pewresearch.org\/internet\/2019\/01\/16\/facebook-algorithms-and-personal-data\/\">compile profiles of their political beliefs, professional affiliations and personal lives<\/a>. The data collected is <a href=\"https:\/\/hbr.org\/2022\/09\/ai-isnt-ready-to-make-unsupervised-decisions\">used in systems that make consequential decisions<\/a> \u2014 whether you are identified as a jaywalking pedestrian, considered for a job or flagged as at risk of suicide.<\/p>\n\n\n\n<p>Worse, their addictive design <a href=\"https:\/\/theconversation.com\/are-social-media-apps-dangerous-products-2-scholars-explain-how-the-companies-rely-on-young-users-but-fail-to-protect-them-222256\">traps teenagers in cycles of overuse<\/a>, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it\u2019s too late \u2014 your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.<\/p>\n\n\n\n<p>This is what I call \u201c<a href=\"http:\/\/dx.doi.org\/10.2139\/ssrn.4949052\">intangible, cumulative harm<\/a>\u201d: AI systems operate in the background, but their impacts can be devastating and invisible.<\/p>\n\n\n\n<h2>Why regulation lags behind<\/h2>\n\n\n\n<p>Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. 
In the United States, <a href=\"https:\/\/www.nytimes.com\/2023\/07\/21\/technology\/ai-united-states-regulation.html\">a regulatory approach emphasizing innovation<\/a> has made it difficult to impose strict standards on how these systems are used across multiple contexts.<\/p>\n\n\n\n<p>Courts and regulatory bodies are <a href=\"https:\/\/www.bu.edu\/bulawreview\/files\/2022\/04\/CITRON-SOLOVE.pdf\">accustomed to dealing with concrete harms<\/a>, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. The regulations often fail to address the broader effects that AI systems can have over time.<\/p>\n\n\n\n<p>Social media algorithms, for example, can gradually <a href=\"https:\/\/theconversation.com\/mounting-research-documents-the-harmful-effects-of-social-media-use-on-mental-health-including-body-image-and-development-of-eating-disorders-206170\">erode users\u2019 mental health<\/a>, but because these harms build slowly, they are difficult to address within the confines of current legal standards.<\/p>\n\n\n\n<h2>Four types of algorithmic harm<\/h2>\n\n\n\n<p>Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=4949052\">four legal areas<\/a>: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.<\/p>\n\n\n\n<p>The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people\u2019s privacy in ways that may not be immediately obvious but have long-term implications. For example, <a href=\"https:\/\/www.nytimes.com\/2023\/03\/10\/technology\/facial-recognition-stores.html\">facial recognition systems can track people<\/a> in public and private spaces, effectively turning mass surveillance into the norm.<\/p>\n\n\n\n<p>The second type of harm is undermining autonomy. 
AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party\u2019s interests, <a href=\"https:\/\/www.theguardian.com\/uk-news\/2020\/jan\/04\/cambridge-analytica-data-leak-global-election-manipulation\">subtly shaping opinions, decisions and behaviors<\/a> across millions of users.<\/p>\n\n\n\n<p>The third type of harm is diminishing equality. AI systems, while designed to be neutral, often <a href=\"https:\/\/yjolt.org\/algorithms-and-economic-justice-taxonomy-harms-and-path-forward-federal-trade-commission\">inherit the biases present in their data and algorithms<\/a>. This <a href=\"https:\/\/www.uclalawreview.org\/private-accountability-age-algorithm\/\">reinforces societal inequalities over time<\/a>. In one infamous case, a facial recognition system used by retail stores to detect shoplifters <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2023\/12\/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without\">disproportionately misidentified women and people of color<\/a>.<\/p>\n\n\n\n<p>The fourth type of harm is impairing safety. AI systems make decisions that <a href=\"https:\/\/scholarship.law.vanderbilt.edu\/jetlaw\/vol23\/iss1\/3\/\">affect people\u2019s safety and well-being<\/a>. When these systems fail, the consequences can be catastrophic. 
But even when they function as designed, they <a href=\"https:\/\/theconversation.com\/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420\">can still cause harm<\/a>, such as social media algorithms\u2019 <a href=\"https:\/\/www.youtube.com\/watch?v=rd2yC63DMBE\">cumulative effects on teenagers\u2019 mental health<\/a>.<\/p>\n\n\n\n<p>Because these cumulative harms often arise from AI applications <a href=\"https:\/\/dx.doi.org\/10.2139\/ssrn.4949052\">protected by trade secret laws<\/a>, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest is made due to an algorithm, how does the victim know? Without transparency, it\u2019s nearly impossible to hold companies accountable.<\/p>\n\n\n\n<h2>Closing the accountability gap<\/h2>\n\n\n\n<p>Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and points to possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety \u2013 before and after it\u2019s deployed. For instance, firms using facial recognition systems would need to evaluate these systems\u2019 impacts throughout their life cycle.<\/p>\n\n\n\n<p>Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt-in. For example, firms that use facial recognition systems could be required to obtain opt-in consent before processing people\u2019s data and to let users opt out at any time.<\/p>\n\n\n\n<p>Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. 
To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.<\/p>\n\n\n\n<p>As AI systems become more widely used in critical societal functions \u2013 from health care to education and employment \u2013 the need to regulate harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and <a href=\"https:\/\/doi.org\/10.1016\/j.jrt.2020.100005\">disproportionately hitting the most vulnerable<\/a>.<\/p>\n\n\n\n<p>With generative AI multiplying and exacerbating AI harms, I believe it\u2019s important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology \u2013 one that prioritizes <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\">civil rights and justice<\/a> in the face of rapid technological advancement.<\/p>\n\n\n\n<p>The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/sylvia-lu-2222047\">Sylvia Lu<\/a>, Faculty Fellow and Visiting Assistant Professor of Law, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-michigan-1290\">University of Michigan<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sylvia Lu, University of Michigan As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life \u2013 learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms. These harms [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":38184,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,30,8025,291,825,25,296,28,8],"tags":[2341,14870,1639,401,10656,885,891,886,860,702],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/38183"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/56"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=38183"}],"version-history":[{"count":1,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/38183\/revisions"}],"predecessor-version":[{"id":38185,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/38183\/revisions\/38185"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/38184"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=38183"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categor
ies?post=38183"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=38183"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}