{"id":39702,"date":"2025-06-15T11:15:00","date_gmt":"2025-06-15T11:15:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=39702"},"modified":"2025-06-16T06:16:45","modified_gmt":"2025-06-16T06:16:45","slug":"protecting-the-vulnerable-or-automating-harm-ais-double-edged-role-in-spotting-abuse","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/protecting-the-vulnerable-or-automating-harm-ais-double-edged-role-in-spotting-abuse\/","title":{"rendered":"Protecting the vulnerable, or automating harm? AI\u2019s double-edged role in spotting&nbsp;abuse"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/aislinn-conrad-2373945\">Aislinn Conrad<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-iowa-723\">University of Iowa<\/a><\/em><\/p>\n\n\n\n<p>Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people \u2013 including <a href=\"https:\/\/imprintnews.org\/top-stories\/in-a-new-study-georgia-professor-explores-the-use-of-artificial-intelligence-in-the-child-welfare-field\/256116\">children in foster care<\/a>, adults in nursing homes and <a href=\"https:\/\/scienceblog.cincinnatichildrens.org\/how-artificial-intelligence-could-help-reduce-risk-of-school-violence\/\">students in schools<\/a>. These tools promise to detect danger in real time and alert authorities before serious harm occurs.<\/p>\n\n\n\n<p>Developers are using natural language processing, for example \u2013 a form of AI that interprets written or spoken language \u2013 to try to <a href=\"https:\/\/www.edgehill.ac.uk\/news\/2022\/06\/ai-tool-designed-to-identify-coercive-language-patterns-receives-home-office-funding\/\">detect patterns of threats, manipulation and control<\/a> in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. 
Some child welfare agencies use <a href=\"https:\/\/mcsilver.nyu.edu\/predictive-risk-tools-in-child-welfare-practice\/\">predictive modeling<\/a>, another common AI technique, to calculate which families or individuals are most \u201cat risk\u201d for abuse.<\/p>\n\n\n\n<p>When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models <a href=\"https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3491102.3517439\">have assisted social workers<\/a> in prioritizing high-risk cases and intervening earlier.<\/p>\n\n\n\n<p>But as a social worker with 15 years of experience <a href=\"https:\/\/socialwork.uiowa.edu\/people\/aislinn-conrad\">researching family violence<\/a> \u2013 and five years on the front lines as a foster-care case manager, child abuse investigator and early childhood coordinator \u2013 I\u2019ve seen how well-intentioned systems often fail the very people they are meant to protect.<\/p>\n\n\n\n<p>Now, I am helping to develop <a href=\"https:\/\/www.axios.com\/local\/des-moines\/2024\/10\/23\/ai-violence-prevention-university-of-iowa\">iCare<\/a>, an AI-powered surveillance camera that analyzes limb movements \u2013 not faces or voices \u2013 to detect physical violence. I\u2019m grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?<\/p>\n\n\n\n<h2>New tech, old injustice<\/h2>\n\n\n\n<p>Many AI tools are trained to \u201c<a href=\"https:\/\/theconversation.com\/ai-doesnt-really-learn-and-knowing-why-will-help-you-use-it-more-responsibly-250923\">learn\u201d by analyzing historical data<\/a>. But history is full of inequality, bias and flawed assumptions. 
So are the people who design, test and fund AI.<\/p>\n\n\n\n<p>That means AI algorithms can <a href=\"https:\/\/theconversation.com\/artificial-intelligence-can-discriminate-on-the-basis-of-race-and-gender-and-also-age-173617\">wind up replicating systemic forms of discrimination<\/a>, like racism or classism. <a href=\"https:\/\/doi.org\/10.1145\/3491102.3501831\">A 2022 study<\/a> in Allegheny County, Pennsylvania, found that a predictive risk model used to score families\u2019 risk levels \u2013 scores given to hotline staff to help them screen calls \u2013 would have flagged Black children for investigation 20% more often than white children, if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.<\/p>\n\n\n\n<p>Language-based AI <a href=\"https:\/\/www.brookings.edu\/articles\/detecting-and-mitigating-bias-in-natural-language-processing\/\">can also reinforce bias<\/a>. For instance, <a href=\"https:\/\/aclanthology.org\/2020.acl-main.485.pdf\">one study<\/a> showed that natural language processing systems misclassified African American Vernacular English as \u201caggressive\u201d at a significantly higher rate than Standard American English \u2013 up to 62% more often in certain contexts.<\/p>\n\n\n\n<p>Meanwhile, <a href=\"https:\/\/doi.org\/10.1038\/s41746-023-00951-3\">a 2023 study<\/a> found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/672831\/original\/file-20250606-62-g62dmt.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A teen in a tie-dye sweatshirt, hat and white headphones looks down at their cell phone.\" \/><figcaption>Language-processing AI isn\u2019t always great at judging what counts as a threat or concern. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/teenage-african-girl-in-a-hoodie-texting-on-her-royalty-free-image\/2065171436?phrase=teen%20phone%20black%20text&amp;searchscope=image%2Cfilm&amp;adppopup=true\">NickyLloyd\/E+ via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>These flaws can replicate larger problems in protective systems. People of color have long been <a href=\"https:\/\/doi.org\/10.2105\/AJPH.2016.303545\">over-surveilled<\/a> in child welfare systems \u2014 sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that <a href=\"https:\/\/doi.org\/10.2105\/AJPH.2021.306214\">Black and Indigenous families<\/a> face <a href=\"https:\/\/doi.org\/10.1177\/0002716220980329\">disproportionately higher rates<\/a> of reporting, investigation and family separation compared with white families, even after accounting for income and other socioeconomic factors.<\/p>\n\n\n\n<p>Many of these disparities <a href=\"https:\/\/doi.org\/10.1177\/0002716220980329\">stem from structural racism<\/a> embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.<\/p>\n\n\n\n<h2>Surveillance over support<\/h2>\n\n\n\n<p>Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.<\/p>\n\n\n\n<p>In hospitals and elder-care facilities, for example, AI-enabled cameras have been used <a href=\"https:\/\/www.scylla.ai\/7-reasons-to-implement-ai-video-analytics-in-healthcare-facilities\/\">to detect physical aggression between staff, visitors and residents<\/a>. 
While commercial vendors promote these tools as safety innovations, their use raises <a href=\"https:\/\/doi.org\/10.1080\/23294515.2019.1568320\">serious ethical concerns<\/a> about the balance between protection and privacy.<\/p>\n\n\n\n<p>In a 2022 <a href=\"https:\/\/www.abc.net.au\/news\/2022-08-31\/aged-care-cctv-trial-artificial-intelligence-false-reports\/101390952\">pilot program in Australia<\/a>, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months \u2013 overwhelming staff and missing at least one real incident. The program\u2019s accuracy did \u201cnot achieve a level that would be considered acceptable to staff and management,\u201d according to the independent report.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/672821\/original\/file-20250606-56-ggujvr.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A large screen mounted on a wall shows nine scenes around a facility.\" \/><figcaption>Surveillance cameras in care homes can help detect abuse, but they raise serious questions about privacy. <a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/surveillance-video-of-the-long-term-care-facility-royalty-free-image\/1333702917?phrase=surveillance%20camera%20nursing%20home&amp;adppopup=true\">kazuma seki\/iStock via Getty Images Plus<\/a><\/figcaption><\/figure>\n\n\n\n<p>Children are affected, too. In U.S. schools, AI surveillance programs like <a href=\"https:\/\/www.gaggle.net\/\">Gaggle<\/a>, <a href=\"https:\/\/www.goguardian.com\/\">GoGuardian<\/a> and <a href=\"https:\/\/www.securly.com\/\">Securly<\/a> are marketed as tools to keep students safe. 
Such programs can be installed on students\u2019 devices to monitor online activity and flag anything concerning.<\/p>\n\n\n\n<p>But they\u2019ve also been shown to flag harmless behaviors \u2013 like writing short stories with mild violence or researching topics related to mental health. As <a href=\"https:\/\/apnews.com\/article\/ai-school-chromebook-gaggle-goguardian-securly-25a3946727397951fd42324139aaf70f\">an Associated Press investigation<\/a> revealed, these systems have also <a href=\"https:\/\/www.theguardian.com\/education\/2022\/sep\/08\/abortion-bans-school-surveillance-lgbtq-restrictions\">outed LGBTQ+ students<\/a> to parents or school administrators by monitoring searches or conversations about gender and sexuality.<\/p>\n\n\n\n<p>Other systems use classroom cameras and microphones to detect \u201caggression.\u201d But they <a href=\"https:\/\/www.wired.com\/story\/device-detect-aggression-schools-often-misfires\/\">frequently misidentify normal behavior<\/a> like laughing, coughing or roughhousing \u2013 sometimes prompting intervention or discipline.<\/p>\n\n\n\n<p>These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans \u2013 data that often reflects <a href=\"https:\/\/www.hachettebookgroup.com\/titles\/dorothy-roberts\/torn-apart\/9781541675445\/?lens=basic-books\">social inequalities and biases<\/a>. As <a href=\"https:\/\/www.albany.edu\/rockefeller\/faculty\/virginia-eubanks\">sociologist Virginia Eubanks<\/a> wrote in \u201c<a href=\"https:\/\/us.macmillan.com\/books\/9781250074317\/automatinginequality\/\">Automating Inequality<\/a>,\u201d AI systems risk scaling up these long-standing harms.<\/p>\n\n\n\n<h2>Care, not punishment<\/h2>\n\n\n\n<p>I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. 
I\u2019ve developed a framework of four key principles for what I call \u201ctrauma-responsive AI.\u201d<\/p>\n\n\n\n<ol><li>Survivor control: People should have a say in how, when and if they\u2019re monitored. Providing users with greater control over their data can <a href=\"https:\/\/doi.org\/10.1057\/s41599-024-04044-8\">enhance trust in AI systems<\/a> and increase their engagement with support services, such as creating personalized plans to stay safe or access help.<\/li><li>Human oversight: Studies show that combining social workers\u2019 expertise with AI support improves fairness and <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2204.02310\">reduces child maltreatment<\/a> \u2013 as in Allegheny County, where caseworkers <a href=\"https:\/\/doi.org\/10.1145\/3491102.3501831\">used algorithmic risk scores as one factor<\/a>, alongside their professional judgment, to decide which child abuse reports to investigate.<\/li><li>Bias auditing: Governments and developers are increasingly encouraged <a href=\"https:\/\/watech.wa.gov\/sites\/default\/files\/2024-12\/Final%202024%20AI%20Task%20Force%20Report%20-%20AG.pdf\">to test AI systems<\/a> for racial and economic bias. Open-source tools like <a href=\"https:\/\/research.ibm.com\/blog\/ai-fairness-360\">IBM\u2019s AI Fairness 360<\/a>, Google\u2019s <a href=\"https:\/\/pair-code.github.io\/what-if-tool\/\">What-If Tool<\/a>, and <a href=\"https:\/\/fairlearn.org\/\">Fairlearn<\/a> assist in detecting and reducing such biases in machine learning models.<\/li><li>Privacy by design: Technology should be built to protect people\u2019s dignity. 
<a href=\"https:\/\/amnesia.openaire.eu\/\">Open-source tools<\/a> like Amnesia, Google\u2019s <a href=\"https:\/\/cloud.google.com\/bigquery\/docs\/differential-privacy#what_is_differential_privacy\">differential privacy library<\/a> and <a href=\"https:\/\/smartnoise.org\/\">Microsoft\u2019s SmartNoise<\/a> help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people\u2019s identities in video or photo data.<\/li><\/ol>\n\n\n\n<p>Honoring these principles means building systems that respond with care, not punishment.<\/p>\n\n\n\n<p>Some promising models are already emerging. The <a href=\"https:\/\/stopstalkerware.org\/\">Coalition Against Stalkerware<\/a> and its partners advocate for <a href=\"https:\/\/endcyberabuse.org\/advancing-survivor-centric-intersectional-policy-to-tackle-tech-facilitated-gender-based-violence\/\">including survivors<\/a> in all stages of tech development \u2013 from needs assessments to user testing and ethical oversight.<\/p>\n\n\n\n<p>Legislation is important, too. On May 5, 2025, for example, Montana\u2019s governor signed a law barring state and local governments from <a href=\"https:\/\/projects.montanafreepress.org\/capitol-tracker-2025\/bills\/hb-178\/\">using AI to make automated decisions<\/a> about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.<\/p>\n\n\n\n<p>As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. 
But with the right values at the center, it might help us deliver more of it.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/aislinn-conrad-2373945\">Aislinn Conrad<\/a>, Associate Professor of Social Work, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-iowa-723\">University of Iowa<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/protecting-the-vulnerable-or-automating-harm-ais-double-edged-role-in-spotting-abuse-256403\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Aislinn Conrad, University of Iowa Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people \u2013 including children in foster care, adults in nursing homes and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs. 
Developers are using natural language processing, [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":39703,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,30,8025,291,42,10,25,36,28,8],"tags":[16271,10656,196,885,891,886,860,3808,3394,4943,15439],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/39702"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/56"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=39702"}],"version-history":[{"count":1,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/39702\/revisions"}],"predecessor-version":[{"id":39704,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/39702\/revisions\/39704"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/39703"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=39702"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=39702"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=39702"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}