Author’s Note: This is Entry #8 in Future Reference, my ongoing series where I look at how technology is quietly reshaping the systems we live under. This time, I’m turning the lens on politics, because as governments start experimenting with artificial intelligence, the line between human leadership and algorithmic decision-making is beginning to blur.
In September 2025, Albania made history—sort of.
They appointed Diella, an AI system, as their Minister of State for Artificial Intelligence.
Her job? To oversee public procurement and help make the country’s contracting process “100% corruption-free.”
Diella isn’t human. She’s an algorithm with an avatar: a polite digital woman in traditional Albanian dress, fluent in bureaucracy. Her creators say she will analyze bids, flag anomalies, and ensure that government contracts are awarded on merit, not on who knows whom.
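Diella’s internals haven’t been published, so we don’t know how she actually works. But one common approach to “flagging anomalies” in procurement can be sketched in a few lines: compare each bid against the median price for its tender and flag large deviations, which often signal lowballing or collusion. Everything here, from the function name to the 30% threshold, is illustrative, not Albania’s actual system:

```python
from statistics import median

def flag_anomalous_bids(bids, threshold=0.30):
    """Flag bids whose price deviates from the tender's median
    by more than `threshold` (a fraction). Illustrative only."""
    mid = median(price for _, price in bids)
    flagged = []
    for bidder, price in bids:
        deviation = abs(price - mid) / mid
        if deviation > threshold:
            flagged.append((bidder, price, round(deviation, 2)))
    return flagged

# Hypothetical tender: one bid far below the others is suspicious
bids = [("Firm A", 100_000), ("Firm B", 98_000),
        ("Firm C", 52_000), ("Firm D", 103_000)]
print(flag_anomalous_bids(bids))  # [('Firm C', 52000, 0.47)]
```

Real systems would weigh far more signals (bidder history, ownership links, timing), but the core idea is the same: a published, mechanical rule applied identically to every bid.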
It’s an unprecedented experiment. Some call it visionary; others call it a PR stunt.
But the question hanging over this headline is universal: Can AI really clean up government—or will it just create new ways to hide corruption behind code?
The rise of algorithmic governance
Around the world, AI is quietly embedding itself into how states work.
- Estonia has “Bürokratt,” an AI that connects citizens to e-services through chat and voice.
- Singapore uses “Pair,” an AI assistant that helps civil servants summarize documents and speed up casework.
- The UK is testing AI to analyze public consultations, claiming it can save thousands of work hours.
- Abu Dhabi is going further, building an “AI-native” government where systems like TAMM AutoGov will eventually automate parts of public service delivery.
These projects share a goal: make bureaucracy faster, more transparent, and less dependent on human error or favoritism.
Yet the irony is hard to miss. The more we automate governance, the more we have to trust the people (and companies) who built the algorithms.
The Albanian experiment
Back in Tirana, Prime Minister Edi Rama insists Diella’s appointment is a step toward transparency. The idea is that an AI can process every bid fairly and impartially, without being tempted by bribes or political pressure.
But critics warn that Diella’s presence in cabinet meetings raises tough questions. If she makes a bad call, who’s accountable? Her developers? The prime minister? Or the machine itself?
Human politicians can be voted out. Algorithms can’t.
And while Diella may not take cash under the table, she still relies on data pipelines and human-written code. Those can be tweaked—or manipulated—just as easily as paper records once were.
Could the Philippines ever have an AI minister?
Let’s imagine it.
A “Diella” for the Philippines. An AI tasked to clean up public procurement, flag ghost projects, and catch red flags faster than any audit team.
Sounds good, right?
We already have early steps toward this idea: digital procurement portals, the Department of Information and Communications Technology (DICT) working on the National AI Strategy Roadmap 2.0, and the Anti-Red Tape Authority exploring automation in public service.
But here’s the catch: AI can only be as clean as the system feeding it data.
If the databases are messy, the reports incomplete, or the inputs biased, the algorithm will replicate that dysfunction at scale—and faster.
Transparency isn’t just about removing people from the equation; it’s about opening the equation itself.
Can citizens audit the AI’s decisions? Can watchdogs see how it scores bids? If the answer is no, then corruption doesn’t disappear. It just migrates into the algorithm.
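“Opening the equation” can be very concrete: if the scoring rule is published and every award ships with its per-criterion breakdown, any watchdog can recompute the decision. A minimal sketch, with weights and criteria invented purely for illustration:

```python
# A published, auditable scoring rule: every award decision carries
# its per-criterion breakdown so outsiders can recompute it.
# Weights and criteria here are invented for illustration.
WEIGHTS = {"price": 0.5, "experience": 0.3, "delivery_time": 0.2}

def score_bid(criteria_scores):
    """Return (total, breakdown) for a bid scored 0-100 per criterion."""
    breakdown = {name: criteria_scores[name] * w
                 for name, w in WEIGHTS.items()}
    return round(sum(breakdown.values()), 2), breakdown

total, audit_trail = score_bid(
    {"price": 80, "experience": 90, "delivery_time": 70})
print(total)        # 81.0
print(audit_trail)  # weighted components, publishable with the award
```

The point isn’t the arithmetic, it’s that the arithmetic is visible. A black-box model that merely outputs a winner offers no such check.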
The human factor
Filipinos are no strangers to long lines, slow paperwork, and “palakasan” culture. So it’s tempting to hope that AI could finally fix what decades of reforms haven’t.
But technology doesn’t change culture overnight.
You can’t code empathy, accountability, or integrity.
What AI can do is give reformers new tools. Automated audits, predictive analytics for fraud detection, and 24/7 citizen portals that reduce face-to-face gatekeeping. It can shine light on processes that used to happen behind closed doors.
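One of those automated audits, catching the “ghost projects” mentioned earlier, amounts to a cross-check between two datasets: contracts that were paid for versus projects that were actually delivered and inspected. A toy sketch (record fields are hypothetical):

```python
def find_ghost_projects(contracts, delivery_records):
    """Return contracts with no matching delivery record -- a classic
    'ghost project' red flag. Field names are illustrative."""
    delivered_ids = {rec["contract_id"] for rec in delivery_records}
    return [c for c in contracts if c["id"] not in delivered_ids]

contracts = [{"id": "C-01", "project": "School repair"},
             {"id": "C-02", "project": "Road widening"}]
deliveries = [{"contract_id": "C-01", "inspected": True}]

# C-02 was paid for on paper but never delivered
print(find_ghost_projects(contracts, deliveries))
```

The hard part, of course, isn’t the code; it’s getting agencies to publish the two datasets in the first place.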
Yet to make that work, we need digital trust: secure infrastructure, ethical frameworks, and laws that define who’s responsible when AI makes decisions that affect real people’s lives.
So, are AI-led governments the future?
Maybe not AI-led, but definitely AI-assisted.
Governments will use AI to listen, to process, and to monitor, but hopefully not to rule.
Albania’s Diella might be the first “AI Minister,” but she’s really just a mirror. Her existence forces us to ask what kind of governance we truly want—efficient or empathetic, automated or accountable.
The answer might be somewhere in between: a future where AI keeps governments honest, but humans keep them humane.
