Agentic AI is becoming one of the most talked-about concepts in technology this year.
At Secure Horizons 2025, a gathering organized by the Cybersecurity Council of the Philippines to help leaders and companies strengthen their defense strategies, the spotlight briefly turned to this emerging form of artificial intelligence.
The discussion, led by Charmaine Valmonte, chief information security officer of Aboitiz Equity Ventures, explained what agentic AI means, why it matters, and what risks organizations must anticipate.

Generative AI is powerful but narrow: it works only within set parameters, following instructions from fetching data to forming output. It is like an intern who follows a checklist without initiative.
Agentic AI, by contrast, Valmonte likened to a fresh graduate who learns on the job, gains confidence, and eventually finds smarter ways to accomplish tasks. This marks a shift from AI as a tool to AI as a colleague.
Instead of waiting for prompts, it can learn and start acting independently. As the CISO described it, “Meet your new teammate. It doesn’t need any sleep, it can work 24/7, and you don’t need to feed this guy.”
Applications in cybersecurity
Cybersecurity has emerged as one of the most immediate testbeds for agentic AI. It is an area where speed is critical, resources are limited, and human analysts face overwhelming workloads.
The CISO highlighted how AI reduced the burden of threat intelligence. Analysts once needed three hours a day to process external threat feeds. With AI, the process dropped to 15 minutes.
One of her engineers later pushed the optimization further, cutting it down to just one minute. Valmonte noted that they were still testing that approach.
“Three hours to fifteen minutes to one minute? Amazing,” she said.
The same potential applies to other cybersecurity functions. Agentic AI can be deployed to scan for system gaps, detect vulnerabilities, and apply patches automatically.
It can analyze behavior within organizations, learning what is normal for each user and flagging suspicious activity in real time. In vulnerability management, it can direct issues to the right team and even disconnect compromised systems until they are secured.
The efficiency gains are substantial. By handling tier-one tasks, AI agents allow human analysts to focus on strategy, risk engagement, and collaboration with business leaders.
The risks and guardrails
Valmonte cautioned that enthusiasm for agentic AI must be matched with restraint. The very feature that makes it powerful, its capacity to act on its own, also introduces new risks.
Agentic AI works best when its role is specific. Without well-defined goals, it may misinterpret instructions or act inefficiently.
Valmonte compared this to traffic chaos: “It’s like putting one of those self-driving cars from San Francisco in the middle of EDSA. God forbid—it’s not going to move.”
In other words, agents thrive in structured environments but struggle in unstructured, unpredictable ones.
Agents must operate within controlled boundaries. If left too exposed, they can be manipulated through “data poisoning,” where attackers feed misleading information to distort the system’s decisions.
This could turn an AI meant to protect an organization into a vulnerability itself. For this reason, many security leaders advocate keeping AI agents inside a company’s own environment rather than fully integrated with the open internet.
Valmonte emphasized that autonomy cannot be total. A “human in the loop” remains essential, particularly in cybersecurity. Even the most advanced agents must be subject to review and override.
“If my agent’s running amok, I need to turn it off quickly,” she said, emphasizing that this off-switch is not just a safety net but a governance requirement.
Decisions made by agents must still be owned by humans. This is especially critical in regulated industries such as finance and healthcare. Audit trails, clear documentation of AI actions, and well-defined escalation processes ensure that responsibility never disappears into the machine.
Taken together, these guardrails form the foundation for responsible adoption. Without them, the shift from assistant AI to agentic AI could lead to unpredictable outcomes, security breaches, or costly errors.
With them, organizations can begin to explore the benefits of agency while minimizing the risks.
Global and Philippine momentum around agentic AI
Worldwide, agentic AI is becoming more than a buzzword. In the United States, Grammarly has introduced a new AI platform called Docs featuring multiple agents that check originality and source references, predict audience reactions, and even align content with grading rubrics, giving writers a set of digital collaborators.
Microsoft has also taken major steps, evolving GitHub Copilot into a coding agent capable of refactoring code and fixing bugs, while Copilot Studio lets businesses build custom agents to navigate data, trigger workflows, and coordinate across applications.
Elsewhere, startups and enterprises are expanding adoption. TinyFish, a Palo Alto–based firm, is developing agentic systems for automating complex web tasks like inventory monitoring and data collection.
Moreover, Microsoft’s 2025 Work Trend Index found that 93 percent of Indian business leaders plan to deploy AI agents within the next 12–18 months to boost productivity. Software providers like ServiceNow and Salesforce are embedding AI agents in workflows to cut case resolution times and automate support tasks.
However, in the Philippines, progress is taking a slower, more deliberate course. Concentrix Philippines recently launched IX Hero AI, touted as technology that enables jobs and supercharges the workforce.
While the details remain broad, this initiative signals private-sector interest in integrating AI tools, possibly with agentic characteristics, to augment productivity and support the human workforce.
On the government side, the Department of Science and Technology – Advanced Science and Technology Institute (DOST-ASTI), through platforms like DIMER, provides researchers and SMEs access to pre-trained AI models for tasks like traffic detection, hazard mapping, and crop monitoring.
It has also backed projects like the DATOS Help Desk, which can produce flood and landslide maps in minutes to support disaster response.
Meanwhile, the Philippines’ 2021–2030 National AI Strategy Roadmap, launched by the Department of Trade and Industry, positions AI as a critical engine of the country’s digital economy.
Together, these initiatives reflect how the Philippines is steadily laying the groundwork for an AI-ready future.
The way forward
Agentic AI is no longer an abstract idea. Sooner or later, more people will hear about it, and the first reaction may be fear. That same unease greeted the rise of AI itself, with concerns about jobs, trust, and control.
However, fear alone cannot define the future. The challenge now is to approach agentic AI with clear boundaries and human oversight. Its promise is real, but so are the risks if it moves faster than understanding.
For cybersecurity leaders, this balance is especially critical. Agentic AI may help reduce the burden of repetitive tasks like processing threat intelligence, but it cannot replace human judgment in spotting risks and making final calls.
As Valmonte told the audience at Secure Horizons 2025: “Agents are the way. If you don’t use them today, start. Learn.”
She added, “There can’t be autonomy end to end. [The] human in the loop has to be there.”
