CMU PhD Student's Award-Winning Work Reveals How Bots Are Reshaping Online Discourse
By Josh Quicksall
In an era where artificial intelligence drives our digital world, a disturbing pattern has emerged: social media bots aren't just spreading information – they're strategically targeting human users with unprecedented effectiveness. This revelation comes from Carnegie Mellon University Societal Computing PhD student Lynnette Hui Xian Ng, whose work shows how bots are evolving from mere amplifiers into sophisticated agents of influence.
"Bots are deliberately engaging with humans rather than other bots," explains Ng, whose research won the 2024 Grefenstette Center for Ethics Best Graduate Student Poster Award at Duquesne University's Tech Ethics Symposium, a prestigious annual event that brings together leading minds from tech giants like Google and Microsoft to tackle pressing questions about AI's societal impact. "By directly mentioning or retweeting specific users, they're exploiting a fundamental human tendency: we pay more attention to content that personally engages us. This direct targeting makes their influence attempts significantly more effective than broad, untargeted messaging."
This finding is particularly alarming given the current landscape of AI advancement. As language models become more sophisticated and accessible, the line between human and automated communication continues to blur. Ng's research, conducted at CMU's Software and Societal Systems Department, reveals that bots constitute about 20% of users across various platforms – a consistent presence that shapes our online conversations in ways we're only beginning to understand.
Mapping Manipulation Strategies
Perhaps most concerning is how effectively bots exploit human psychology. "We found bots using cognitive biases more effectively than humans do," Ng notes. For example, during election discussions, bots might initially agree with a user's political stance to establish credibility, then gradually introduce conflicting information to create doubt. This subtle manipulation technique, which exploits cognitive dissonance, proves particularly effective when paired with emotional appeals.
"Imagine a bot that first strongly agrees with your views on healthcare, sharing emotional stories that reinforce your position," Ng explains. "Then it gradually introduces contradictory 'facts' or experiences, creating internal conflict that makes you more susceptible to changing your views. Unlike humans, bots can maintain this strategy consistently across thousands of interactions without fatigue or doubt."
Innovation in Detection
Ng's award-winning research builds upon her work with the "BotBuster Universe" framework, which introduces a novel approach to bot detection. Traditional methods rely heavily on machine learning algorithms analyzing account characteristics. Ng's innovation combines these technical measures with human perception patterns – how real users identify and report suspicious accounts.
"Current detection systems might miss bots that technically follow platform rules but still manipulate conversations," Ng says. "By incorporating how users perceive and report suspicious behavior, we've developed a more nuanced understanding of bot tactics. This helps platforms better distinguish between beneficial automated accounts, like weather services, and those designed to manipulate public opinion."
Shaping a Resilient Digital Future
The implications of Ng's research extend far beyond academic interest. As a Knight Scholar at CMU's Center for Informed Democracy & Social-cybersecurity (IDeaS), a research hub focused on combating online disinformation and protecting democratic discourse, Ng is translating her insights into practical tools through partnerships with organizations like Nexalogy and various government agencies. Her work directly contributes to IDeaS' mission of developing scalable solutions for detecting and countering harmful bot activity while preserving beneficial automated communication.
The interdisciplinary nature of Ng's research exemplifies the complex challenges of modern cybersecurity. Her work combines elements of psychology, analyzing how bots exploit cognitive biases; sociology, studying how online communities respond to manipulation; and computer science, developing detection tools that can identify sophisticated bot behavior. This comprehensive approach has revealed patterns that might be missed by a purely technical analysis.
"As AI continues to advance, the sophistication of these bots will only increase," Ng warns. "We're not just facing a technical challenge – we're racing to protect the autonomy of human discourse in an increasingly automated world." Her research at CMU isn't just detecting bots; it's helping build the frameworks we'll need to maintain authentic human communication in a future where AI-driven interactions become the norm.
The stakes couldn't be higher. As we move into an era where AI capabilities expand exponentially, understanding and countering sophisticated bot manipulation becomes crucial for preserving genuine human discourse and democratic decision-making. Ng's work represents a critical step toward ensuring that our digital future remains shaped by human choices rather than automated influence campaigns.
This research is supported by CMU's Center for Informed Democracy & Social-cybersecurity (IDeaS) and contributes to broader initiatives in election integrity and crisis response. For more information about research opportunities in this area, visit the Software and Societal Systems Department website.