As companies begin deploying agentic AI (artificial intelligence), experts say clear guardrails are needed to ensure these systems stay within the limits set by organizations.
Agentic AI refers to AI systems that can plan tasks, make decisions, and carry out actions with little human involvement. Unlike traditional AI tools that simply respond to prompts, these systems can complete workflows, access company data, and interact with other software to finish a task.
Because of this level of independence, companies are becoming more cautious about how the technology is used.
One concern is that an AI agent may interpret its objective differently from what humans intended. When that happens, the system could act beyond the boundaries set by the organization. Attackers may also manipulate an agent's inputs or instructions, causing it to behave in ways that contradict its original directives.
“The models work. Reasoning works,” said Srini Tallapragada, president and chief engineering and customer success officer of Salesforce, during a media briefing in Manila. “What is hard is extending those capabilities into the business. AI struggles with accuracy, reliability, consistency, not capability.”
Tallapragada said large language models (LLMs) need additional systems around them to make AI agents more reliable in business settings. Even a high success rate may not be enough in industries that deal with sensitive data or financial transactions.
“Even 98% consistency is not acceptable when handling customer claims, medical policies, financial transactions, (or) any regulated industry use case,” Tallapragada said.
To address these issues, Salesforce has been promoting its Agentforce platform, which aims to give companies more control over AI agents. The company opened its office in the Philippines several months ago and has been introducing the technology to local organizations.
Agentforce provides a foundation where AI agents operate using the same business data, rules, and controls across the organization.
The platform includes several features designed to make AI agents safer and easier to monitor. One is shared context, which ensures that responses are based on unified company data. Another is deterministic controls, which combine AI reasoning with predefined business rules.
The system also provides observability, allowing companies to track what AI agents are doing and measure their performance. Orchestration manages how tasks are routed among different agents and ensures that complex workflows can be escalated to human workers when needed.
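To make the idea of deterministic controls and human escalation concrete, here is a minimal sketch in Python. It is not Salesforce's API; the rule checks, action types, and thresholds are hypothetical stand-ins for the kind of predefined business rules a company might layer on top of an agent's reasoning.

```python
# Illustrative sketch (not Agentforce code): an agent's proposed action is
# checked against deterministic business rules before it runs, and is
# escalated to a human worker when a rule fails.

def check_rules(action):
    """Deterministic guardrails evaluated before any agent action executes."""
    if action["type"] == "refund" and action["amount"] > 500:
        return False, "refund above auto-approval limit"
    if action["type"] not in {"refund", "update_record", "send_summary"}:
        return False, "action type not on the allow-list"
    return True, ""

def route(action):
    """Run the action only if every rule passes; otherwise hand off to a human."""
    ok, reason = check_rules(action)
    if ok:
        return f"executed: {action['type']}"
    return f"escalated to human ({reason})"

print(route({"type": "update_record", "record_id": 42}))  # executed
print(route({"type": "refund", "amount": 900}))           # escalated
```

The point of the pattern is that the rules are ordinary code, not model output: a 98% reliable model can propose actions, but the deterministic layer decides which ones actually run.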
“With Agentforce, we’re able to deliver these capabilities to give our customers the confidence, control, and compliance they need to succeed at enterprise scale,” Tallapragada said.
The platform is built on the Atlas Reasoning Engine, which helps AI agents think through tasks instead of simply generating answers.
“The Atlas Reasoning Engine is the ‘brain’ behind Salesforce’s Agentforce AI agents,” Tallapragada said. “It is the part that lets them think, plan, and act intelligently rather than just respond with simple answers.”
Atlas helps the AI understand what a user wants to accomplish, break complex requests into steps, and determine the best way to complete the task. It can review results, adjust actions, and ask clarifying questions to reduce errors.
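The loop described above can be sketched in a few lines of Python. This is not the Atlas Reasoning Engine itself; the planner, executor, and retry logic here are hypothetical placeholders that show the general plan–execute–review shape, including flagging a step for human clarification when retries run out.

```python
# Illustrative sketch (not Atlas itself): break a goal into steps, run each
# one, review the result, retry on failure, and ask for clarification if a
# step keeps failing. All functions here are hypothetical stand-ins.

def plan(goal):
    """Break a request into ordered steps (a real system would use an LLM)."""
    return ["gather_information", "update_records", "send_summary"]

def execute(step):
    """Pretend to run a step; 'update_records' fails on its first attempt."""
    execute.attempts[step] = execute.attempts.get(step, 0) + 1
    return step != "update_records" or execute.attempts[step] > 1
execute.attempts = {}

def run(goal, max_retries=2):
    completed = []
    for step in plan(goal):
        for _ in range(max_retries):
            if execute(step):          # review the result...
                completed.append(step) # ...and move on if it succeeded
                break
        else:
            return completed, f"needs clarification on '{step}'"
    return completed, "done"

steps, status = run("summarize this week's customer cases")
print(steps, status)  # all three steps complete after one retry
```

The review step is what separates this pattern from one-shot prompting: the agent checks each outcome before proceeding, which is how errors get caught and corrected mid-task rather than surfacing in the final answer.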
“You can think of it like a smart assistant that doesn’t just hear what you say, but actually comprehends the goal behind it,” Tallapragada said.
This approach allows AI agents to carry out tasks such as updating records, gathering information, or sending summaries while adapting in real time. By combining reasoning, planning, and execution, agentic systems can handle business processes while still operating within defined controls.
