{"id":42298,"date":"2026-04-20T07:15:00","date_gmt":"2026-04-20T14:15:00","guid":{"rendered":"https:\/\/www.lifeandnews.com\/articles\/?p=42298"},"modified":"2026-04-20T23:11:30","modified_gmt":"2026-04-21T06:11:30","slug":"most-people-do-not-realize-when-a-personal-message-they-receive-was-written-by-ai-study-finds","status":"publish","type":"post","link":"https:\/\/www.lifeandnews.com\/articles\/most-people-do-not-realize-when-a-personal-message-they-receive-was-written-by-ai-study-finds\/","title":{"rendered":"Most people do not realize when a personal message they receive was written by AI, study&nbsp;finds"},"content":{"rendered":"\n<p><a href=\"https:\/\/theconversation.com\/profiles\/andras-molnar-2607207\">Andras Molnar<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-michigan-1290\">University of Michigan<\/a><\/em><\/p>\n\n\n\n<p>Two new experiments show that most people <a href=\"https:\/\/doi.org\/10.1016\/j.chb.2026.108929\">do not even consider<\/a> that a personal message could be AI-generated, even when they themselves use artificial intelligence to write.<\/p>\n\n\n\n<p>To see how people judge someone based on their writing in the <a href=\"https:\/\/www.pewresearch.org\/short-reads\/2025\/06\/25\/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023\/\">age of ChatGPT<\/a>, my colleague <a href=\"https:\/\/scholar.google.com\/citations?hl=en&amp;user=NbPYHz0AAAAJ&amp;view_op=list_works&amp;sortby=pubdate\">Jiaqi Zhu<\/a> <a href=\"https:\/\/scholar.google.com\/citations?hl=en&amp;user=-_CNVOUAAAAJ&amp;view_op=list_works&amp;sortby=pubdate\">and I<\/a> recruited more than 1,300 U.S.-based participants, ages 18 to 84, and showed them AI-generated messages like an apology sent in an email. We split our volunteers into four groups: Some people saw the messages with no information about who or what wrote them, as in everyday life. 
Others were told the messages were definitely written by a human, definitely AI-generated, or that the source could be either.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/726934\/original\/file-20260329-63-igpdj4.png?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=237&amp;fit=clip\" alt=\"A text message presenting an apology generated by AI.\" \/><figcaption>An AI-generated fictional apology sent via text was one of the messages participants evaluated in a recent study. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0747563226000269\">Zhu &amp; Molnar (2026)<\/a><\/figcaption><\/figure>\n\n\n\n<p>We found a clear \u201c<a href=\"https:\/\/behavioraltimes.com\/ai-disclosure-penalty\/\">AI disclosure penalty<\/a>.\u201d When people knew a message was AI-generated, they rated the sender much more negatively \u2013 \u201clazy,\u201d \u201cinsincere,\u201d \u201clack of effort\u201d \u2013 than when they believed that the same text was written by a person \u2013 \u201cgenuine,\u201d \u201cgrateful,\u201d \u201cthoughtful.\u201d<\/p>\n\n\n\n<p>But here is the twist: The participants who were not told anything about authorship formed impressions that were just as positive as those from people who were told the messages were genuinely human.<\/p>\n\n\n\n<p>This complete lack of skepticism surprised us \u2013 and it raises new questions. Maybe participants were not familiar enough with AI to realize that today\u2019s models can produce detailed and personal messages. (<a href=\"https:\/\/www.vox.com\/future-perfect\/483948\/gmail-smart-replies-ai-consciousness\">They can<\/a>.) Or perhaps participants have never used AI themselves. (<a href=\"https:\/\/doi.org\/10.1287\/mnsc.2025.02523\">They likely have<\/a>.) So we also tested whether participants\u2019 own AI use changed how they judged senders.<\/p>\n\n\n\n<p>To our even bigger surprise, we found little to no effect. 
People who use generative AI quite frequently in their daily lives \u2013 at least every other day \u2013 did penalize AI use slightly less when AI authorship was disclosed, compared with people who never or rarely use AI. But participants were no more skeptical by default: When authorship was not disclosed, heavy AI users, light AI users and nonusers all tended to assume the text was written by a person and formed essentially the same impressions.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/images.theconversation.com\/files\/726936\/original\/file-20260329-63-1o9x3z.png?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\" alt=\"A word cloud showing words that describe how people reading text messages felt.\" \/><figcaption>Word clouds depict participants\u2019 first impressions of senders who wrote messages themselves, left, and those who used AI, right. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0747563226000269\">Andras Molnar<\/a><\/figcaption><\/figure>\n\n\n\n<h2>Why it matters<\/h2>\n\n\n\n<p>A lack of skepticism and of negative impressions matters because people make <a href=\"https:\/\/doi.org\/10.1146\/annurev.psych.54.101601.145041\">social judgments<\/a> from text all the time. Recipients read the time and effort behind a written message as <a href=\"https:\/\/doi.org\/10.1016\/j.copsyc.2022.101442\">an insight into<\/a> the writer\u2019s sincerity, authenticity or competence, and those impressions shape people\u2019s decisions in friendships, dating and work.<\/p>\n\n\n\n<p>Yet our main findings reveal a striking disconnect: People usually do not suspect AI use unless it <a href=\"https:\/\/www.newsweek.com\/newspaper-issues-apology-readers-cant-believe-print-11047759\">is obvious<\/a>. This unawareness creates a moral dilemma: People who use AI in secret can enjoy the benefits while facing almost no risk of detection. 
Meanwhile, paradoxically, people who are upfront and admit to using AI <a href=\"https:\/\/doi.org\/10.1016\/j.obhdp.2025.104405\">suffer a reputational hit<\/a>.<\/p>\n\n\n\n<p>Over time, this lack of skepticism and awareness could reshape what writing means in everyday life. Readers might learn to treat writing as a <a href=\"https:\/\/doi.org\/10.1007\/s11229-025-04963-2\">less reliable<\/a> signal of someone\u2019s character or effort, and instead rely on other forms of communication. For example, widespread AI use has already prompted employers to discount the value of <a href=\"https:\/\/www.economist.com\/finance-and-economics\/2025\/11\/13\/how-ai-is-breaking-cover-letters\">cover letters from job applicants<\/a>. Instead, they are <a href=\"https:\/\/knowledge.wharton.upenn.edu\/opinion\/ai-is-killing-the-cover-letter\/\">relying more<\/a> on personal recommendations from an applicant\u2019s current supervisor or connections made through in-person networking.<\/p>\n\n\n\n<h2>What other research is being done<\/h2>\n\n\n\n<p>Other researchers have documented a wide range of negative impressions about people who disclose their AI use. Studies show it makes job applicants seem <a href=\"https:\/\/doi.org\/10.1089\/cyber.2020.0863\">less desirable<\/a> and employees seem <a href=\"https:\/\/doi.org\/10.1073\/pnas.2426766122\">less competent<\/a>. Readers of creative writing perceive AI users as <a href=\"https:\/\/doi.org\/10.1037\/xge0001889\">less creative<\/a> and inauthentic. People see <a href=\"https:\/\/doi.org\/10.1016\/j.chb.2022.107592\">personal apologies<\/a> and <a href=\"https:\/\/doi.org\/10.1016\/j.pubrev.2024.102520\">corporate apologies<\/a> that stem from AI as less effective. 
In general, disclosing AI use <a href=\"https:\/\/doi.org\/10.1016\/j.obhdp.2025.104405\">decreases trust<\/a> and undermines legitimacy.<\/p>\n\n\n\n<p>Yet without disclosure, there is clear evidence that most people <a href=\"https:\/\/doi.org\/10.1073\/pnas.2208839120\">cannot reliably detect<\/a> AI-generated text, even with the <a href=\"https:\/\/theconversation.com\/why-its-so-hard-to-tell-if-a-piece-of-text-was-written-by-ai-even-for-ai-265181\">help of detection tools<\/a>, especially when the <a href=\"https:\/\/doi.org\/10.18653\/v1\/2024.findings-naacl.29\">text is a mix<\/a> of human-written and AI-generated content. Even when people feel confident about their ability to spot AI text, their confidence may be nothing more than a <a href=\"https:\/\/doi.org\/10.1016\/j.chb.2020.106553\">self-affirming illusion<\/a>.<\/p>\n\n\n\n<h2>What\u2019s next<\/h2>\n\n\n\n<p>Even though our experiments did not reveal suspicion of AI use, that doesn\u2019t mean people never suspect it in the real world. In some settings, people may already be hypervigilant about AI. <a href=\"https:\/\/www.cbsnews.com\/detroit\/news\/university-michigan-student-lawsuit-ai-disability-discrimination\/\">Use in academia<\/a> is an obvious example. 
In our next studies, we want to understand when and why people naturally start to suspect AI use, and what flips the switch between trust and doubt.<\/p>\n\n\n\n<p>Until then, if you want your personal message to be judged as heartfelt, the safest strategy may be to make a phone call, leave a voicemail or, better yet, say it in person.<\/p>\n\n\n\n<p><em>The <a href=\"https:\/\/theconversation.com\/us\/topics\/research-brief-83231\">Research Brief<\/a> is a short take on interesting academic work.<\/em><\/p>\n\n\n\n<p><a href=\"https:\/\/theconversation.com\/profiles\/andras-molnar-2607207\">Andras Molnar<\/a>, Assistant Professor of Psychology, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-michigan-1290\">University of Michigan<\/a><\/em><\/p>\n\n\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/most-people-do-not-realize-when-a-personal-message-they-receive-was-written-by-ai-study-finds-278874\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Andras Molnar, University of Michigan Two new experiments show that most people do not even consider that a personal message could be AI-generated, even when they themselves use artificial intelligence to write. 
To see how people judge someone based on their writing in the age of ChatGPT, my colleague Jiaqi Zhu and I recruited more [&hellip;]<\/p>\n","protected":false},"author":56,"featured_media":42299,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[293,5,8025,292,291,42,10,25,118,36,28,38,8],"tags":[17681,17661,10656,14869,13473,7459,17680,885,891,886,860,10755,2197,7727],"_links":{"self":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42298"}],"collection":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/users\/56"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/comments?post=42298"}],"version-history":[{"count":1,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42298\/revisions"}],"predecessor-version":[{"id":42300,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/posts\/42298\/revisions\/42300"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media\/42299"}],"wp:attachment":[{"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/media?parent=42298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/categories?post=42298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lifeandnews.com\/articles\/wp-json\/wp\/v2\/tags?post=42298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}