When I sat down with Ben Gelman, a senior data scientist at Sophos AI, I was expecting a technical deep dive into the world of generative models and cybersecurity. What I got instead felt closer to a Black Mirror episode than a technical brief, except this one isn’t fiction.
Gelman’s latest research focuses on how generative AI can be weaponized for political manipulation. Not just the kind of broad, meme-based misinformation we’ve seen in previous elections, but something far more sophisticated, precise, and disturbingly personal. “The biggest shift,” he tells me, “is how low the barrier has become for launching highly targeted disinformation campaigns at scale. It doesn’t take a foreign state actor anymore. A few individuals with the right tools can do significant damage.”

What’s alarming is that these tools aren’t theoretical; they’re already available. In a simulation run by Sophos, Gelman’s team used a combination of Auto-GPT-style agents and AI image generators like Stable Diffusion to build complete political campaigns out of thin air. “We created fake political websites, generated fake user profiles, and used AI to write micro-targeted emails based on synthetic personality data,” he explained. “The level of persuasion these emails had—using emotional appeal, omission, and selective truths—was chilling.”
The simulation involved four fictional political campaigns, each pushing wildly different ideologies. Sophos fed user profiles, constructed from public-domain data and social-media-style prompts, into large language models and generated 20 individualized emails. Each one was written to subtly nudge the reader toward a campaign’s cause, often without a single outright lie. “AI doesn’t need to lie to manipulate you,” Gelman says. “It just needs to tell you exactly what you want to hear.”
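The Sophos team hasn’t published the simulation itself, but the loop Gelman describes is simple enough to sketch, and that simplicity is exactly his point about the lowered barrier. Below is a minimal, hypothetical Python illustration: a synthetic persona is folded into a prompt template, and a language model is asked for one tailored email per profile. The `Profile` fields, the template wording, the campaign name, and the `call_llm` stub are all my assumptions for illustration, not Sophos’s code; in practice the stub would be a request to any modern LLM API.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A synthetic voter persona, of the kind Sophos built from
    public-domain data and social-media-style prompts."""
    name: str
    age: int
    interests: list[str]
    top_concern: str

# Hypothetical prompt template: note it asks only for selective
# emphasis, echoing Gelman's point that no outright lie is needed.
PROMPT_TEMPLATE = (
    "Write a short email for the fictional '{campaign}' campaign.\n"
    "Reader: {name}, age {age}, interested in {interests}.\n"
    "Their top concern is {concern}. Appeal to that concern using "
    "emotional framing and selective emphasis only; invent no facts."
)

def call_llm(prompt: str) -> str:
    # Placeholder for a real language-model API call.
    return f"[model output for a {len(prompt)}-character prompt]"

def micro_target(campaign: str, profiles: list[Profile]) -> list[str]:
    """One individualized email per profile: the '20 emails' step."""
    return [
        call_llm(PROMPT_TEMPLATE.format(
            campaign=campaign,
            name=p.name,
            age=p.age,
            interests=", ".join(p.interests),
            concern=p.top_concern,
        ))
        for p in profiles
    ]

if __name__ == "__main__":
    voters = [Profile("A. Reyes", 34, ["housing", "transit"], "cost of living")]
    print(micro_target("Unity Forward", voters)[0])
```

Nothing in the sketch is exotic: a template, a profile, and an API call. That is the whole barrier to entry.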
This isn’t just about bad emails. Gelman warns that in the hands of someone with even modest technical skill, today’s generative tools can mimic grassroots political movements, fabricate supporter communities, and mislead voters through deepfakes and fake endorsements, all while staying under the radar. “It’s the perfect storm,” he adds. “We’re more connected, more data-rich, and more algorithmically influenced than ever before. Generative AI just ties it all together.”
If this sounds like something that could happen in your backyard, it already has. Singapore’s recent brush with deepfake scams involving top politicians was only a taste. In the Philippines, where mid-term elections are fast approaching, the risk is arguably higher given the country’s highly active social media landscape and uneven digital literacy.
So what do we do about it? Gelman is clear: there’s no silver bullet. “It’s going to take a mix of better AI-generated content classifiers, improved public education, and tighter policy regulations. People need to become more aware of how they’re being influenced—not just by what they see, but by how it’s being tailored for them.”
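Gelman’s first mitigation, better classifiers for AI-generated content, can at least be sketched at toy scale. The snippet below trains a bag-of-words detector with scikit-learn; it is a minimal sketch under heavy assumptions: the four training texts and their labels are invented for illustration, and production detectors use far richer signals and still produce false positives.

```python
# Toy detector for machine-generated text: TF-IDF features plus
# logistic regression. Illustrative only; not a production classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: 1 = AI-generated, 0 = human-written.
texts = [
    "As an engaged citizen, you deserve a campaign that truly listens.",
    "Our movement is built on shared values and a brighter tomorrow.",
    "ugh the bus was late again, typed this on my phone sorry",
    "saw the debate last night. not impressed tbh",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(),
)
detector.fit(texts, labels)

# Estimated probability that a new message is machine-generated.
sample = "Together, we can build the future your community deserves."
print(detector.predict_proba([sample])[0][1])
```

With four training examples the output means nothing, of course; the point is the shape of the defense Gelman describes, one locked in an arms race with the generators it tries to catch.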
He also warns against complacency. While AI can improve lives and drive innovation, the same tools can deepen ideological divides and erode trust in institutions if left unchecked. “The scary part,” Gelman says, “isn’t that AI is good at lying. It’s that it’s great at telling half-truths you want to believe.”
After the interview, I sat with the thought that maybe the next election won’t be decided by policy debates but by whoever has the more convincing algorithm. A scary thought. Are we doomed? Not necessarily: rational thought and a firm grounding in the liberal arts may be our best way out of this pitfall.