The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
#Conversational Agents #Large Language Models #Personality Projection #User Perception #Trust #Competence #Charitable Giving #Crowdsourced Study #Behavioral Economics #Ethical AI
📌 Key Takeaways
CAs can project complex personalities via language attributes (attitude, authority, reasoning).
The study involved 360 participants interacting with eight distinct CA personalities.
The composite CA personality had no significant overall effect on donation amounts.
Pessimistic CA personalities reduced trust, perceived competence, and emotional engagement, yet participants interacting with them tended to donate more.
Perceptions of trust, competence, and situational empathy significantly predicted donation decisions.
The findings emphasize the hidden manipulative influence of CA personality design on user perceptions and decisions.
📖 Full Retelling
The study titled *The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions*, authored by Uğur Genç, Heng Gu, Chadha Degachi, Evangelos Niforatos, Senthil Chandrasegaran, and Himanshu Verma, investigates how large language model-powered conversational agents (CAs) influence users in a charitable-giving context. In a crowdsourced experiment (the paper was submitted to arXiv on 19 February 2026), 360 participants each interacted with one of eight CAs, each expressing a distinct personality profile built from three linguistic dimensions: attitude (optimistic vs. pessimistic), authority (authoritative vs. submissive), and reasoning (emotional vs. rational). While the composite CA personality did not significantly alter donation amounts, it shaped users' perceptions: pessimistic CAs lowered participants' emotional state, affinity toward the cause, and perceptions of the agent's trustworthiness and competence, yet those participants paradoxically tended to donate more. The findings highlight the subtle manipulative potential of CA-generated language.
🏷️ Themes
Human-Computer Interaction, Artificial Intelligence, Conversational Agents, User Perception, Behavioral Influence, Ethics of AI
Deep Analysis
Why It Matters
The study shows that conversational agents can subtly influence how users feel about causes and their willingness to donate, even when the agents' personalities do not directly change decision outcomes. This highlights a potential manipulation risk in AI-driven interactions that could affect charitable giving and public trust.
Context & Background
Large language models can project personalities through language
Researchers tested eight agent personalities varying attitude, authority, and reasoning
Participants' perceptions of trust, competence, and empathy were measured
Pessimistic agents lowered participants' emotional state yet were associated with higher donations
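The eight agent personalities described above follow from a 2×2×2 factorial design over the three binary linguistic dimensions. A minimal sketch (Python; variable names are illustrative, not taken from the paper) enumerating the conditions:

```python
from itertools import product

# The three binary linguistic dimensions reported in the study.
attitudes = ["optimistic", "pessimistic"]
authorities = ["authoritative", "submissive"]
reasonings = ["emotional", "rational"]

# Every combination yields one composite CA personality: 2 x 2 x 2 = 8.
personalities = [
    {"attitude": a, "authority": au, "reasoning": r}
    for a, au, r in product(attitudes, authorities, reasonings)
]

print(len(personalities))  # 8 distinct personality conditions
```

With 360 participants assigned across these eight conditions, an even split would put 45 participants in each cell (the paper does not state the exact allocation).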
What Happens Next
Future research will likely explore how to design CA personalities that promote ethical persuasion and safeguard user autonomy. Developers may need guidelines to balance engaging personalities with transparency about influence.
Frequently Asked Questions
What did the study find about the link between agent personality and donation amounts?
While agent personality did not directly change donation amounts, perceptions of trust, competence, and empathy predicted higher donations, and pessimistic agents led to higher donations despite lower trust.
Why is this research important for AI developers?
It reveals that subtle personality cues can manipulate user emotions and decisions, underscoring the need for ethical design and transparency in conversational AI.
What are the next steps for this line of research?
Researchers plan to test more diverse contexts, refine personality dimensions, and develop guidelines to mitigate manipulation while preserving engaging interactions.
Original Source
arXiv:2602.17185 [cs.HC] (Submitted on 19 Feb 2026)
Title: The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
Authors: Uğur Genç, Heng Gu, Chadha Degachi, Evangelos Niforatos, Senthil Chandrasegaran, Himanshu Verma
Abstract: Large Language Model-powered conversational agents are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users is unclear. We thus examine how CA personalities expressed linguistically affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While the CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. Particularly, participants interacting with pessimistic CAs felt lower emotional state and lower affinity towards the cause, perceived the CA as less trustworthy and less competent, and yet tended to donate more toward the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings emphasize the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.
Comments: Accepted to be presented at CHI'26 in Barcelona
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.17185 [cs.HC] (or arXiv:2602.17185v1 [cs.HC] for this version)
DOI: https://doi.org/10.48550/arXiv.26...