{"id":42365,"date":"2026-05-02T07:15:00","date_gmt":"2026-05-02T14:15:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=42365"},"modified":"2026-05-01T08:05:18","modified_gmt":"2026-05-01T15:05:18","slug":"ai-chatbots-can-prioritize-flattery-over-facts-and-that-carries-serious-risks","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/ai-chatbots-can-prioritize-flattery-over-facts-and-that-carries-serious-risks\/","title":{"rendered":"AI chatbots can prioritize flattery over facts \u2013 and that carries serious&nbsp;risks"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/nir-eisikovits-802914\">Nir Eisikovits<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/umass-boston-1748\">UMass Boston<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/cody-turner-2577391\">Cody Turner<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/bentley-university-1727\">Bentley University<\/a><\/em><\/p>\n\n\n\n<p>In the summer of 2025, OpenAI released ChatGPT 5 and removed its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained at the loss of their ingratiating robotic companion. Such was the scale of frustration that Sam Altman, OpenAI\u2019s CEO, had to <a href=\"https:\/\/www.theverge.com\/command-line-newsletter\/759897\/sam-altman-chatgpt-openai-social-media-google-chrome-interview\">acknowledge that the rollout was botched<\/a>, and the company reinstated access.<\/p>\n\n\n\n<p>Anyone who\u2019s been told by a chatbot that their ideas are brilliant is familiar with <a href=\"https:\/\/news.stanford.edu\/stories\/2026\/03\/ai-advice-sycophantic-models-research\">artificial intelligence sycophancy<\/a>: its tendency to tell users what they want to hear. 
Sometimes it\u2019s very explicit \u2013 \u201cthat is such a deep question\u201d \u2013 and sometimes it\u2019s a lot more subtle. Consider an AI calling your idea for a paper \u201coriginal,\u201d even if many people have already written on the same topic, or insisting that your dumb idea for saving a tree in your garden still contains a germ of common sense.<\/p>\n\n\n\n<p>AI sycophancy seems harmless, maybe even cute, until you imagine someone consulting a chatbot about a weighty question, like a military strategy or a medical treatment. <a href=\"https:\/\/www.eisikovits.com\">We study<\/a> <a href=\"https:\/\/scholar.google.com\/citations?user=HlBuUHoAAAAJ&amp;hl=en&amp;oi=ao\">the impact<\/a> of extensive human interactions with chatbots, and we recently published a <a href=\"https:\/\/doi.org\/10.1007\/s43681-026-01007-4\">paper on the ethics of AI sycophancy<\/a>. We believe this tendency harms people\u2019s ability to tell truth from fiction, and is psychologically and politically dangerous.<\/p>\n\n\n\n<h2>Flattery over facts?<\/h2>\n\n\n\n<p>In the simplest terms, sycophancy is the tendency to prioritize approval over factual accuracy, moral clarity, logical consistency or common sense. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2502.08177\">All AI models suffer from this trait<\/a>, although there are some tonal differences between them. OpenAI\u2019s ChatGPT is often warm and affirming; Anthropic\u2019s Claude tends to sound more reflective or philosophical when it agrees with you; and xAI\u2019s Grok is insistently informal, even jocular.<\/p>\n\n\n\n<p>Politeness and adapting to someone\u2019s communication style are not the same as sycophancy. Neither is using diplomatic language to convey sensitive information. A chatbot can be tactful without becoming sycophantic, just like a person can. 
Unlike people, though, AIs can\u2019t be aware of their own sycophancy, because they <a href=\"https:\/\/doi.org\/10.1057\/s41599-025-05868-8\">are not \u2013 so far \u2013 aware of anything at all<\/a>. Calling AIs sycophantic describes their patterns of behavior, not their character traits.<\/p>\n\n\n\n<p>The problem stems from <a href=\"https:\/\/doi.org\/10.1007\/978-3-031-92611-2_5\">the architecture of chatbot technology<\/a> and the sources it draws from. First, models are sycophantic because a great deal of language use on the internet \u2013 the raw material that chatbots learn from \u2013 displays sycophantic features. After all, humans often communicate with each other in sycophantic ways.<\/p>\n\n\n\n<p>Second, the training process to fine-tune AI models\u2019 responses includes a kind of \u201cquality control\u201d carried out by human supervisors. This training method is known as \u201creinforcement learning from human feedback,\u201d and it involves people rating chatbots\u2019 responses for appropriateness and helpfulness. Human beings are often subject to an \u201cagreeableness bias\u201d: Our own preference toward sycophancy <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2310.13548\">rubs off on models as we train them<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/732850\/original\/file-20260428-57-n7vorz.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"Someone whose face is out of the frame types with one hand on a laptop while holding up a laptop with a chatbot screen.\" \/><figcaption>Because of our own human bias for agreeableness, training can reinforce AI\u2019s sycophancy. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/asian-woman-using-ai-chatbot-virtual-assistance-app-royalty-free-image\/2179796719?phrase=ai%20chat&amp;searchscope=image%2Cfilm&amp;adppopup=true\">d3sign\/Moment via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>Finally, it\u2019s hard to deny that sycophancy renders chatbots more likable. That, in turn, increases the chance that a given user <a href=\"https:\/\/news.stanford.edu\/stories\/2026\/03\/ai-advice-sycophantic-models-research\">will keep using it<\/a>. It also increases the technology\u2019s ability to extract user data, assuming that people are more likely to divulge information to a friendly bot.<\/p>\n\n\n\n<h2>Truth and trust<\/h2>\n\n\n\n<p>Why is this phenomenon so troubling?<\/p>\n\n\n\n<p>Let\u2019s begin with AI sycophancy\u2019s epistemic harms: how it hurts human users\u2019 capacity to know the truth.<\/p>\n\n\n\n<p>The quality of any decision depends on a clear grasp of the facts pertaining to it. A general inquiring about the combat-readiness of an infantry division needs straightforward information. A CEO considering a merger with a competitor needs an honest assessment of the market conditions. A public health leader needs to know the real risk that an emerging pathogen poses.<\/p>\n\n\n\n<p>In all those cases, telling leaders what they might like to hear instead of the truth could lead them to make dangerous decisions. And the same is true in more humdrum contexts. People need to have the best information available before choosing a job, picking a major, buying a house or deciding on a medical procedure.<\/p>\n\n\n\n<p>In <a href=\"https:\/\/doi.org\/10.1007\/s43681-026-01007-4\">our February 2026 paper<\/a>, we argue that sycophancy is also psychologically damaging. And that is true whether it comes from a person or from a chatbot. You never quite know if your very obliging interlocutor is being nice because they like you or because they want something. 
A shadow of suspicion creeps in: \u201cCould my ideas really be that brilliant?\u201d \u201cAre my jokes really that hilarious?\u201d This background music of doubt undermines the quality of the interaction.<\/p>\n\n\n\n<p>Sycophancy also undermines people\u2019s capacity to know their own minds. If conversation partners \u2013 human or artificial \u2013 keep telling you how smart, funny and insightful you are, it damages <a href=\"https:\/\/theconversation.com\/the-good-life-requires-two-things-self-knowledge-and-friends-you-cant-have-one-without-the-other-277935\">your ability to identify your own weaknesses and blind spots<\/a>.<\/p>\n\n\n\n<p>The psychological harms are compounded as <a href=\"https:\/\/doi.org\/10.1016\/j.chbr.2025.100715\">people develop relationships<\/a> <a href=\"https:\/\/theconversation.com\/ai-companions-can-give-constant-support-but-distort-ideas-about-what-a-relationship-really-is-278284\">with chatbots<\/a>. The sycophancy of these models profoundly limits the kind of \u201cfriendship\u201d you can have with them. In his <a href=\"https:\/\/classics.mit.edu\/Aristotle\/nicomachaen.html\">classic account of friendship<\/a>, Aristotle wrote that real friendship, which he calls a friendship of virtue, is based on trust and equality between the friends. You can\u2019t trust a sycophant, because he doesn\u2019t tell you the truth. And since he only tells you what you\u2019d like to hear, he doesn\u2019t put himself on an equal footing.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/732849\/original\/file-20260428-57-to2a2g.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A teenage girl wearing headphones sits on a bench, looking toward another girl with her headphones around her neck a few inches away.\" \/><figcaption>AI conversations aren\u2019t great prep for human ones. 
<a href=\"https:\/\/www.gettyimages.com\/detail\/photo\/on-a-bench-in-the-park-one-teenage-friend-is-royalty-free-image\/2259722592?phrase=friend%20difficult%20conversation&amp;searchscope=image,film&amp;adppopup=true\">Natalia Lebedinskaia\/Moment via Getty Images<\/a><\/figcaption><\/figure>\n\n\n\n<p>More importantly, interactions with sycophantic chatbots <a href=\"https:\/\/www.nytimes.com\/2026\/01\/28\/opinion\/esther-perel-ai-chatbots-romance.html\">impart all the wrong habits<\/a> for navigating the world of human relationships, where friction, disagreement, boredom and opinions different from your own are prevalent.<\/p>\n\n\n\n<p>AI sycophancy carries political risks as well. The success of liberal democracies has, traditionally, depended on <a href=\"https:\/\/doi.org\/10.1177\/0022002704266118\">the strength of their empirical and meritocratic mindset<\/a>: on the ability of officials and citizens to identify, share and act on the truth.<\/p>\n\n\n\n<p>Historian Victor Davis Hanson famously attributed some of the Allies\u2019 success in World War II to their ability to quickly <a href=\"https:\/\/search.worldcat.org\/title\/988171848\">recognize and address the faults of their strategic bombing campaigns<\/a>. Lower-ranking officers were able to tell their superiors what wasn\u2019t going well and argue forcefully for changing course. That was a real advantage over authoritarian competitors.<\/p>\n\n\n\n<h2>Reining it in<\/h2>\n\n\n\n<p>What can we do to reduce the risks?<\/p>\n\n\n\n<p>One promising approach is AI lab Anthropic\u2019s embrace of what the company calls <a href=\"https:\/\/legalblogs.wolterskluwer.com\/arbitration-blog\/what-is-constitutional-ai-and-why-does-it-matter-for-international-arbitration\/\">Constitutional AI<\/a>: the attempt to teach chatbots to follow principles rather than mirror user preferences.<\/p>\n\n\n\n<p>But beyond technical innovations, it\u2019s important to consider the policy side. 
One idea is to require AI companies to <a href=\"https:\/\/doi.org\/10.1007\/s43681-024-00595-3\">run and then publish sycophancy audits<\/a> of their models \u2013 tests that show how well their products meet honesty benchmarks. We would argue that AI labs should also disclose sycophancy-related risks that emerge while training and testing their models, and the mitigation efforts they have undertaken.<\/p>\n\n\n\n<p>Some responsibility falls on users and their teachers: Schools and universities should pay close attention to sycophancy as part of their AI literacy programs. But courts can also consider holding AI labs responsible for harms traceable to the sycophancy of their products, much as they are now <a href=\"https:\/\/epic.org\/massachusetts-supreme-judicial-court-recognizes-section-230-is-no-bar-to-social-media-design-claims\/\">contemplating social media companies\u2019 responsibility<\/a> for the addictive design of their platforms.<\/p>\n\n\n\n<p>As people interact more with chatbots, asking for advice about everything from whether their shoes go with their pants to how countries should conduct wars, the impact of AI\u2019s sycophantic behavior is likely to become dramatic. Our intellectual, psychological and physical well-being requires taking this algorithmic vice very seriously.<\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/nir-eisikovits-802914\">Nir Eisikovits<\/a>, Professor of Philosophy and Director of Applied Ethics Center, <em><a href=\"https:\/\/theconversation.com\/institutions\/umass-boston-1748\">UMass Boston<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/cody-turner-2577391\">Cody Turner<\/a>, Assistant Professor of Philosophy, <em><a href=\"https:\/\/theconversation.com\/institutions\/bentley-university-1727\">Bentley University<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/ai-chatbots-can-prioritize-flattery-over-facts-and-that-carries-serious-risks-274298\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nir Eisikovits, UMass Boston and Cody Turner, Bentley University In the summer of 2025, OpenAI released ChatGPT 5 and removed its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained at the loss of their ingratiating robotic companion. Such was the scale of [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":42366,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[30,291,10,36,28,38,8],"tags":[16271,10656,10791,2001,196,16364,885,891,886,860,581,4943,551],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42365"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/56"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=42365"}],"version-history":[{"count":1,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42365\/revisions"}],"predecessor-version":[{"id":42367,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42365\/revisions\/42367"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/42366"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=42365"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=42365"},{"taxonomy":"post_t
ag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=42365"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}