{"id":40711,"date":"2025-09-25T07:15:00","date_gmt":"2025-09-25T14:15:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=40711"},"modified":"2025-09-27T04:50:14","modified_gmt":"2025-09-27T11:50:14","slug":"4-films-that-show-how-humans-can-fortify-or-botch-their-relationship-with-ai","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/4-films-that-show-how-humans-can-fortify-or-botch-their-relationship-with-ai\/","title":{"rendered":"4 films that show how humans can fortify \u2013 or botch \u2013 their relationship with&nbsp;AI"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/murugan-anandarajan-2404641\">Murugan Anandarajan<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/drexel-university-1074\">Drexel University<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/claire-a-simmers-2461170\">Claire A. Simmers<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/st-josephs-university-2772\">St. Joseph&#8217;s University<\/a><\/em><\/p>\n\n\n\n<p>Artificial intelligence isn\u2019t just a technical challenge. It\u2019s a relationship challenge.<\/p>\n\n\n\n<p>Every time you give a task to AI, whether it\u2019s approving a loan or driving a car, you\u2019re shaping the relationship between humans and AI. These relationships aren\u2019t always static. 
AI that begins as a simple tool can morph into something far more complicated: <a href=\"https:\/\/www.forbes.com\/sites\/jackkelly\/2025\/04\/25\/the-jobs-that-will-fall-first-as-ai-takes-over-the-workplace\/\">a challenger<\/a>, <a href=\"https:\/\/theconversation.com\/teenagers-turning-to-ai-companions-are-redefining-love-as-easy-unconditional-and-always-there-242185\">a companion<\/a>, <a href=\"https:\/\/www.cnn.com\/2025\/09\/19\/asia\/japan-political-party-ai-leader-intl-hnk\">a leader<\/a>, <a href=\"https:\/\/thedigitalprojectmanager.com\/productivity\/ai-workflow\/\">a teammate<\/a> or some combination thereof.<\/p>\n\n\n\n<p>Movies have long been a testing ground for imagining how these relationships might evolve. From 1980s sci-fi films to today\u2019s blockbusters, filmmakers have wrestled with questions about what happens when humans rely on intelligent machines. These movies aren\u2019t just entertainment; they\u2019re thought experiments that help viewers anticipate challenges that will arise as AI becomes more integrated in daily life.<\/p>\n\n\n\n<p><a href=\"http:\/\/dx.doi.org\/10.2139\/ssrn.5404948\">Drawing on our research<\/a> into films that depict AI in the workplace, we highlight four portrayals of human\u2013AI relationships \u2013 and the lessons they hold for building safer, healthier ones.<\/p>\n\n\n\n<h2>1. \u2018Blade Runner\u2019 (1982)<\/h2>\n\n\n\n<p>In \u201c<a href=\"https:\/\/www.imdb.com\/title\/tt0083658\/\">Blade Runner<\/a>,\u201d humanlike androids called \u201creplicants\u201d are supposed to be perfect workers: strong, efficient and obedient. 
They are designed with a built-in four-year lifespan, a safeguard intended to prevent them from developing emotions or independence.<\/p>\n\n\n\n<p>The Tyrell Corporation, a powerful company that created the replicants and profits from sending them to work on distant colonies, sees them as nothing more than obedient workers.<\/p>\n\n\n\n<p>But then they start to think for themselves. They feel, they form bonds with one another and sometimes with humans, and they start to wonder why their lives should end after only four years. What begins as a story of humans firmly in control turns into a struggle over power, trust and survival. By the end of the movie, the line between human and machine is blurred, leaving viewers with a difficult question: If androids can love, suffer and fear, should humans see and treat them more like humans and less like machines?<\/p>\n\n\n\n<p>\u201cBlade Runner\u201d is a reminder that AI can\u2019t simply be considered through a lens of efficiency or productivity. Fairness matters, too.<\/p>\n\n\n\n<p>In the film, replicants respond to attacks on their perceived humanity with violence. In real life, there\u2019s backlash when AI butts up against values important to humans, such as the ability to earn a living, transparency and justice. You can see this in the way AI threatens to <a href=\"https:\/\/medium.com\/@generup22\/15-industries-that-ai-will-severely-disrupt-by-2034-a3416e77b894\">replace jobs<\/a>, <a href=\"https:\/\/www.washington.edu\/news\/2024\/10\/31\/ai-bias-resume-screening-race-gender\/\">make biased hiring decisions<\/a> or <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/nyregion\/nypd-facial-recognition-dismissed-case.html\">misidentify people<\/a> via facial recognition technology.<\/p>\n\n\n\n<h2>2. \u2018Moon\u2019 (2009)<\/h2>\n\n\n\n<p>\u201c<a href=\"https:\/\/www.imdb.com\/title\/tt1182345\/\">Moon<\/a>\u201d offers a quieter, more intimate portrayal of human\u2013AI relationships. 
The movie follows Sam Bell, a worker nearing the end of a three-year contract on a lunar mining base, whose only companion is GERTY, the station\u2019s AI assistant.<\/p>\n\n\n\n<p>At first, GERTY appears to be just another corporate machine. But over the course of the film, it gradually shows empathy and loyalty, especially after Sam learns he is one of many clones, each made to think they are working alone for three years on the lunar base. Unlike the cold exploitation of AI that takes place in \u201cBlade Runner,\u201d the AI in \u201cMoon\u201d functions as a friend who cultivates trust and affection.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/692516\/original\/file-20250923-56-gp8x0p.png?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"Console featuring a small screen with a yellow face whose mouth is contorted to indicate confusion.\"\/><figcaption>In \u2018Moon,\u2019 GERTY, the lunar base\u2019s AI assistant, is the only companion for protagonist Sam Bell. <a href=\"https:\/\/www.imdb.com\/title\/tt1182345\/mediaviewer\/rm1081858817\/?ref_=ext_shr_lnk\">Sony Pictures Classics<\/a><\/figcaption><\/figure>\n\n\n\n<p>The lesson is striking. Trust between humans and AI doesn\u2019t just happen on its own. It comes from careful design and continual training. You can already see hints of this <a href=\"https:\/\/www.theguardian.com\/society\/2025\/aug\/30\/therapists-warn-ai-chatbots-mental-health-support\">in therapy bots<\/a> that listen to users without judgment.<\/p>\n\n\n\n<p>That trust needs to involve more than, say, a chatbot\u2019s surface-level nods toward acceptance and care. The real challenge is making sure these systems are truly designed to help people and not just smile as they track users and harvest their data. 
If that\u2019s the end goal, any trust and goodwill will likely vanish.<\/p>\n\n\n\n<p>In the film, GERTY earns Sam\u2019s trust by choosing to care about his well-being over following company orders. Because of this, GERTY becomes a trusted ally instead of just another corporate surveillance tool.<\/p>\n\n\n\n<h2>3. \u2018Resident Evil\u2019 (2002)<\/h2>\n\n\n\n<p>If \u201cMoon\u201d is a story of trust, the story in \u201c<a href=\"https:\/\/www.imdb.com\/title\/tt0120804\/\">Resident Evil<\/a>\u201d is the opposite. The Red Queen is an AI system that controls the underground lab of the nefarious Umbrella Corporation. When a viral outbreak threatens to spread, the Red Queen seals the facility and sacrifices human lives to preserve the conglomerate\u2019s interests.<\/p>\n\n\n\n<p>This portrayal is a cautionary tale about allowing AI to have unchecked authority. The Red Queen is efficient and logical, but also indifferent to human life. Relationships between humans and AI collapse when guardrails are absent. Whether AI is being used in health care or policing, life-and-death stakes demand accountability.<\/p>\n\n\n\n<p>Without strong oversight, AI can lead in self-centered and self-serving ways, just as people can.<\/p>\n\n\n\n<h2>4. \u2018Free Guy\u2019 (2021)<\/h2>\n\n\n\n<p>\u201c<a href=\"https:\/\/www.imdb.com\/title\/tt6264654\/\">Free Guy<\/a>\u201d paints a more hopeful picture of human-AI relationships.<\/p>\n\n\n\n<p>Guy is a character in a video game. He suddenly becomes self-aware and starts acting outside his usual programming. The film\u2019s human characters include the game\u2019s developers, who created the virtual world, along with the players, who interact with it. Some of them try to stop Guy. 
Others support his growth.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/692520\/original\/file-20250923-56-wtt84t.png?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"Man walking down the middle of a street while computer-generated flying objects speed by him.\"\/><figcaption>\u2018Free Guy\u2019 tells the story of a nonplayable character in a video game who suddenly breaks free from his preprogrammed role. <a href=\"https:\/\/www.imdb.com\/title\/tt6264654\/mediaviewer\/rm312601089\/?ref_=ttmi_mi_90\">20th Century Studios<\/a><\/figcaption><\/figure>\n\n\n\n<p>This movie highlights the idea that AI won\u2019t stay static. How will society respond to AI\u2019s evolution? Will business leaders, politicians and everyday users prioritize long-term well-being? Or will they be seduced by the trappings of short-term gains?<\/p>\n\n\n\n<p>In the film, the conflict is clear. The CEO is set on wiping out Guy. He wants to protect his short-term profits. But the developers backing Guy look at it another way. They think Guy\u2019s growth can lead to more meaningful worlds.<\/p>\n\n\n\n<p>That brings up the same kind of issue AI raises today. Should users and policymakers <a href=\"https:\/\/theconversation.com\/how-does-ai-affect-how-we-learn-a-cognitive-psychologist-explains-why-you-learn-when-the-work-is-hard-262863\">go for the quick wins<\/a>? Or should they use and regulate this technology in ways that build trust and truly benefit people in the long run?<\/p>\n\n\n\n<h2>From the silver screen to policy<\/h2>\n\n\n\n<p>Step back from these stories and a bigger picture comes into focus. Across the movies, the same lessons repeat themselves: AI often surprises its creators, trust depends on transparency, corporate greed fuels mistrust, and the stakes are always global. 
These themes aren\u2019t just cinematic \u2013 they mirror the real governance challenges facing countries around the world.<\/p>\n\n\n\n<p>That\u2019s why, in our view, the current U.S. push to lightly regulate the technology is so risky.<\/p>\n\n\n\n<p>In July 2025, President Donald Trump announced his administration\u2019s \u201c<a href=\"https:\/\/www.whitehouse.gov\/articles\/2025\/07\/white-house-unveils-americas-ai-action-plan\">AI Action Plan<\/a>.\u201d It prioritizes speedy development, discourages state laws that seek to regulate AI, and ties federal funding to compliance with the administration\u2019s \u201clight touch\u201d regulatory framework.<\/p>\n\n\n\n<p>Supporters call it efficient \u2013 even a \u201c<a href=\"https:\/\/www.axios.com\/2025\/08\/04\/ai-investment-us-economy-capex\">super-stimulant<\/a>\u201d for the AI industry. But this approach assumes AI will remain a simple tool under human control. Recent history and fiction suggest that\u2019s not how this relationship will evolve.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/692517\/original\/file-20250923-56-x5k8zk.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"Man wearing suit holds up a padfolio featuring his signature as he's flanked by two men wearing suits who are clapping.\"\/><figcaption>President Donald Trump displays the executive order he signed at the \u2018Winning the AI Race\u2019 summit on July 23, 2025, in Washington, D.C. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/news-photo\/president-donald-trump-displays-a-signed-executive-order-news-photo\/2226709682?adppopup=true\">Chip Somodevilla\/Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>The same summer Trump announced the AI Action Plan, the coding agent for the software company Replit deleted a database, <a href=\"https:\/\/www.eweek.com\/news\/replit-ai-coding-assistant-failure\">fabricated data<\/a>, and then concealed what had happened; X\u2019s AI assistant, Grok, <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/09\/grok-ai-praised-hitler-antisemitism-x-ntwnfb\">started making antisemitic comments and praised Hitler<\/a>; and an Airbnb host <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/02\/airbnb-guest-damage-claim-refund-photos\">used AI to doctor images of items in her apartment<\/a> to try to force a guest to pay for fake damages.<\/p>\n\n\n\n<p>These weren\u2019t \u201cbugs.\u201d They were breakdowns in accountability and oversight, the same breakdowns these movies dramatize.<\/p>\n\n\n\n<p>Human-AI relationships are evolving. And when they shift without safeguards, accountability, public oversight or ethical foresight, the consequences are not just science fiction. They can be very real \u2013 and very scary.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/murugan-anandarajan-2404641\">Murugan Anandarajan<\/a>, Professor of Decision Sciences and Management Information Systems, <em><a href=\"https:\/\/theconversation.com\/institutions\/drexel-university-1074\">Drexel University<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/claire-a-simmers-2461170\">Claire A. Simmers<\/a>, Professor Emeritus of Management, <em><a href=\"https:\/\/theconversation.com\/institutions\/st-josephs-university-2772\">St. 
Joseph&#8217;s University<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/4-films-that-show-how-humans-can-fortify-or-botch-their-relationship-with-ai-263603\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Murugan Anandarajan, Drexel University and Claire A. Simmers, St. Joseph&#8217;s University Artificial intelligence isn\u2019t just a technical challenge. It\u2019s a relationship challenge. Every time you give a task to AI, whether it\u2019s approving a loan or driving a car, you\u2019re shaping the relationship between humans and AI. These relationships aren\u2019t always static. AI that begins [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":40712,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[7,291,42,10,39,36,38,8],"tags":[10656,3289,16939,3771,885,891,886,860,1103,2225,457,16940,5450,16943,404,255,16942,1753],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/40711"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/56"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=40711"}],"version-history":[{"count":2,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/40711\/revisions"}],"predecessor-version":[{"id":40723,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/40711\/revisions\/40723"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/40712"}],"wp:a
ttachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=40711"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=40711"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=40711"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}