A Strategic Counterprogramming Move as South Korea’s AI Act Takes Effect


On January 22, 2026, Singapore made history at the World Economic Forum Annual Meeting in Davos, Switzerland, unveiling the first comprehensive governance framework specifically designed for agentic AI systems. Minister for Digital Development and Information Josephine Teo announced the Model AI Governance Framework for Agentic AI, positioning the city-state as the global first mover in regulating autonomous AI systems that can reason, plan, and take real-world actions.

The Timing: A Signal Worth Noting

The framework’s release on the very same day that South Korea’s AI Framework Act came into effect cannot be ignored. While much of the global tech policy conversation was dominated by Seoul’s new legislation—making Korea the first country in Asia-Pacific to implement comprehensive AI law—Singapore chose that precise moment to demonstrate its fundamentally different approach.

South Korea’s AI Framework Act (also known as the AI Basic Act) introduces mandatory obligations, labeling requirements, and potential fines of up to KRW 30 million for non-compliance. It establishes a national AI committee with enforcement powers and requires impact assessments for high-impact AI systems.

Singapore’s framework? Entirely voluntary. No enforcement body. No mandatory compliance. Just guidance.

Whether this timing was deliberate remains unconfirmed, but the juxtaposition sends a clear message: Singapore continues to bet on collaborative governance over prescriptive regulation. For organizations navigating the increasingly complex global AI compliance landscape, this distinction matters.

Download: Model AI Governance Framework for Agentic AI (modelaigovernance.pdf, 1 MB)

Understanding Agentic AI: Why This Framework Matters Now

Before diving into the framework’s specifics, it’s worth understanding what makes agentic AI fundamentally different from the generative AI systems we’ve been governing for the past two years.

Traditional AI systems process inputs and produce outputs—they recommend, they classify, they predict. Generative AI creates content—text, images, code—based on prompts. But agentic AI does something qualitatively different: it plans across multiple steps, takes actions in the real world, and adapts dynamically to achieve user-defined goals.

Think of it this way: a chatbot answers questions. An AI agent books your flights, manages your calendar, executes financial transactions, and updates your databases—all while you’re sleeping.

The framework identifies six core components of an AI agent:

  1. Model: The LLM or multimodal model serving as the agent’s “brain”
  2. Instructions: Natural language commands defining the agent’s role and constraints
  3. Memory: Information storage enabling learning from previous interactions
  4. Planning and Reasoning: The ability to devise multi-step approaches to tasks
  5. Tools: Interfaces enabling agents to interact with external systems
  6. Protocols: Standardized communication methods like the Model Context Protocol (MCP) and Agent2Agent Protocol (A2A)

What makes this particularly relevant for cybersecurity professionals: agents can have access to databases, APIs, financial systems, and even computer interfaces. A compromised or malfunctioning agent doesn’t just generate wrong text—it can delete production databases, exfiltrate sensitive data, or execute unauthorized transactions.

The Four-Stage Framework: A Practical Breakdown

Singapore’s framework organizes its guidance around four interconnected dimensions, each addressing distinct aspects of the agentic AI governance challenge.

Stage 1: Assess and Bound the Risks Upfront

The framework begins where any good security assessment should: risk identification before deployment.

Key risk factors affecting impact include:

  • Domain sensitivity: Financial transaction agents demand far tighter error tolerances than meeting summarization agents
  • Data access: Agents with access to personal customer data present different risk profiles than those accessing only public information
  • System connectivity: Agents interfacing with external APIs can leak data to third parties or overwhelm systems with excessive requests
  • Action scope: Read-only versus write access; limited toolsets versus unrestricted browser access
  • Reversibility: Scheduling a meeting is easily undone; sending an email to external parties is not

Risk factors affecting likelihood include:

  • Autonomy level: Agents following strict SOPs versus those using “best judgment”
  • Task complexity: Simple extraction tasks versus nuanced policy interpretation
  • Exposure to untrusted data: Internal knowledge bases versus open web access

The framework recommends explicit limits on agent capabilities:

  • Restrict tools and data access to minimum necessary permissions
  • Define SOPs for agentic workflows rather than granting unbounded decision-making authority
  • Implement containment mechanisms to limit blast radius during malfunctions
  • Establish robust identity management extending to agents themselves
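The minimum-necessary-permissions recommendation can be reduced to a very small mechanism: a per-agent allowlist consulted before every tool call. The sketch below is a hedged illustration, not part of the framework itself; the agent and tool names are hypothetical.

```python
# Minimal sketch of least-privilege tool bounding for an agent.
# Agent and tool names here are hypothetical illustrations.

ALLOWED_TOOLS = {
    # agent -> tool -> granted access modes
    "support_agent": {"search_kb": {"read"}, "create_ticket": {"write"}},
}

def check_tool_call(agent: str, tool: str, access: str) -> bool:
    """Permit a call only if this agent holds this exact permission."""
    return access in ALLOWED_TOOLS.get(agent, {}).get(tool, set())
```

A call to any tool that was never granted (say, `delete_ticket`) is denied by default, which keeps the blast radius of a malfunction bounded to the granted set.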

That final recommendation, robust identity management for agents, is particularly noteworthy. The framework acknowledges current gaps in handling agent identity: existing authentication systems weren’t designed for scenarios where agents act on behalf of multiple users with different permissions, or where agents spawn sub-agents recursively. It recommends solutions such as integrating OAuth 2.0 into MCP and developing decentralized identity management for agents.

Stage 2: Make Humans Meaningfully Accountable

Here’s where the framework addresses one of the thorniest problems in AI governance: maintaining human accountability when the whole point of autonomous systems is to reduce human involvement.

The framework explicitly calls out automation bias—the tendency to over-trust systems that have performed reliably in the past—as a critical concern. As agents become more capable, humans may increasingly rubber-stamp agent decisions without meaningful review.

The recommended countermeasures include:

Defining significant checkpoints requiring human approval:

  • High-stakes actions and decisions (editing sensitive data, final decisions in healthcare/legal contexts)
  • Irreversible actions (permanent data deletion, external communications, payments)
  • Outlier behaviors (accessing systems outside normal work scope, selecting unusual operational parameters)
  • User-defined thresholds (purchases above certain amounts, communications to specific contacts)
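A checkpoint of this kind can be expressed as a simple gate that routes an action to a human whenever it is irreversible or exceeds a user-defined threshold. This is a minimal sketch under assumed settings; the threshold value and the irreversibility flag are illustrative, not framework-mandated.

```python
from dataclasses import dataclass

# Sketch of a significant-checkpoint gate. The spending threshold and
# the irreversibility flag are hypothetical user-defined settings.
APPROVAL_THRESHOLD = 500.0

@dataclass
class Action:
    name: str
    amount: float = 0.0
    irreversible: bool = False

def needs_human_approval(action: Action) -> bool:
    """Route irreversible or above-threshold actions to a human."""
    return action.irreversible or action.amount > APPROVAL_THRESHOLD
```

Routine, reversible actions pass straight through, so the human reviewer’s attention is reserved for the cases where it actually matters.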

Designing approval workflows that actually work:

  • Keep approval requests contextual and digestible—not raw logs
  • Consider whether approvals should be binary (approve/reject) or allow human editing
  • Train humans to identify common failure modes
  • Regularly audit whether human oversight remains effective

Complementing human oversight with automated monitoring:

  • Implement alerts for logged events indicating potential issues
  • Use data science techniques to identify anomalous agent trajectories
  • Deploy agents to monitor other agents
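One very simple instance of anomalous-trajectory detection is flagging runs whose tool-call count deviates sharply from a baseline of past runs. The sketch below is an assumption-laden stand-in for the richer data-science techniques the framework alludes to; the 3-sigma threshold is my choice, not the framework’s.

```python
import statistics

# Sketch of trajectory anomaly flagging: compare an agent run's
# tool-call count against a baseline of past runs. The 3-sigma
# threshold is an assumed cut-off, not a framework-specified value.

def is_anomalous(tool_calls: int, baseline: list[int]) -> bool:
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid zero-division
    return abs(tool_calls - mean) > 3 * stdev
```

A run that suddenly makes dozens of tool calls when the baseline hovers around five or six would trip the alert and feed the escalation protocols discussed later in the framework.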

The framework provides clear guidance on responsibility allocation within organizations, mapping accountability across key decision makers, product teams, cybersecurity teams, and end users. For organizations working with external vendors, it recommends clarifying contractual obligations around security arrangements, performance guarantees, and data protection.

Stage 3: Implement Technical Controls and Processes

This is where the framework gets into the technical weeds that security teams will appreciate.

During Development:

The framework identifies specific controls for new agentic components:

For planning and reasoning:

  • Prompt agents to reflect on whether plans adhere to user instructions
  • Request clarification before proceeding with ambiguous tasks
  • Log agent reasoning for human verification

For tools:

  • Configure strict input format requirements
  • Apply least-privilege principles to tool access
  • Avoid granting write access to sensitive databases unless strictly required
  • Configure human takeover for sensitive data entry (passwords, API keys)
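Strict input format requirements amount to validating a tool’s arguments against a schema before anything executes. A minimal sketch, assuming a hypothetical `send_invoice` tool whose fields I invented for illustration:

```python
# Sketch of strict input-format enforcement before a tool executes.
# The "send_invoice" schema below is a hypothetical example.

INVOICE_SCHEMA = {"customer_id": str, "amount": float}

def validate_tool_input(args: dict) -> bool:
    """Reject calls with missing, extra, or mistyped fields."""
    if set(args) != set(INVOICE_SCHEMA):
        return False
    return all(isinstance(args[k], t) for k, t in INVOICE_SCHEMA.items())
```

Rejecting extra fields is deliberate: a prompt-injected agent that tries to smuggle an unexpected parameter into a tool call fails validation before the call is made.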

For protocols:

  • Whitelist trusted MCP servers
  • Sandbox code execution
  • Use standardized protocols for sensitive operations (e.g., agentic commerce protocols for financial transactions)
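Whitelisting trusted MCP servers can be enforced with a host allowlist checked before any connection is opened. This is a hedged sketch; the hostnames are placeholders, not real endpoints, and a production check would also pin certificates or verify server identity.

```python
from urllib.parse import urlparse

# Sketch of an MCP server whitelist check before connecting.
# Hostnames below are placeholders, not real endpoints.

TRUSTED_MCP_HOSTS = {"mcp.internal.example.com", "tools.example.org"}

def is_trusted_server(url: str) -> bool:
    """Allow only HTTPS connections to explicitly trusted hosts."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_MCP_HOSTS
```

Requiring HTTPS alongside the host check means a downgrade to plain HTTP is rejected even for an otherwise trusted host.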

Before Deployment:

Testing agentic systems requires adapting traditional approaches. The framework recommends testing for:

  • Overall task execution: Can the agent actually complete its assigned tasks?
  • Policy compliance: Does it follow defined SOPs and escalate appropriately?
  • Tool calling accuracy: Does it call the right tools, with the right permissions, inputs, and ordering?
  • Robustness: How does it handle errors and edge cases?
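Tool-calling accuracy, in particular, lends itself to a labelled test set: for each task, compare the tool the agent selects against the expected one. The sketch below is illustrative only; the stub “agent” is a lookup rule standing in for a real system under test, and the cases are invented.

```python
# Sketch of a tool-calling accuracy check: score an agent's tool
# selection against labelled expectations. The stub "agent" is a
# trivial rule standing in for a real system under test.

CASES = [
    ("refund order 42", "issue_refund"),
    ("what is our return policy?", "search_kb"),
]

def stub_agent(task: str) -> str:
    return "issue_refund" if "refund" in task else "search_kb"

def tool_accuracy(agent, cases) -> float:
    """Fraction of tasks for which the agent picked the expected tool."""
    hits = sum(agent(task) == expected for task, expected in cases)
    return hits / len(cases)
```

The same harness extends naturally to checking argument correctness and call ordering, not just tool choice.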

Critically, the framework emphasizes testing entire agent workflows (not just outputs), testing agents individually and in multi-agent configurations, testing in realistic environments, and testing repeatedly across varied datasets to catch low-probability high-impact failures.

During and After Deployment:

Given the dynamic nature of agentic systems, the framework recommends:

  • Gradual rollout controlled by user experience level, available tools, and system exposure
  • Continuous monitoring with defined alert thresholds
  • Logging and tracing for debugging and regular audits
  • Escalation protocols ranging from human review to full system termination depending on severity
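The escalation ladder can be captured as a severity-to-action mapping with a fail-safe default. The levels and actions below are my assumptions for illustration; the framework describes the range of responses but does not prescribe specific tiers.

```python
# Sketch of a severity-graded escalation protocol. The tiers and the
# fail-safe default are assumed values, not framework-mandated ones.

ESCALATION = {
    "low": "log_for_audit",
    "medium": "pause_and_notify_human",
    "high": "terminate_agent",
}

def escalate(severity: str) -> str:
    # Unknown severities fail safe to full termination.
    return ESCALATION.get(severity, "terminate_agent")
```

Failing safe on unrecognized severities is the key design choice: an agent emitting an unexpected state should halt, not continue unattended.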

Stage 4: Enable End-User Responsibility

The framework distinguishes between two user archetypes:

Users who interact with agents (e.g., customers using service agents):

  • Declare upfront that users are interacting with AI
  • Inform users of the agent’s range of actions and decisions
  • Clarify data collection and usage practices
  • Provide human escalation contacts

Users who integrate agents into work processes (e.g., employees using coding assistants):

  • Training on relevant use cases and restrictions
  • Education on effective prompting and agent instructions
  • Understanding of agent capabilities and potential impact
  • Familiarity with common failure modes (hallucinations, loops)
  • Awareness of potential tradecraft erosion as agents take over entry-level tasks

This last point deserves emphasis: as agents automate entry-level work traditionally used for training junior staff, organizations risk losing the pipeline through which employees develop foundational skills. The framework recommends proactive intervention to maintain core capabilities.

Singapore’s Model Framework Lineage: Context for the Rapid Release

This Agentic AI Framework represents the fourth AI-related Model Framework released by Singapore’s Infocomm Media Development Authority (IMDA):

  1. Model AI Governance Framework 1.0 (January 2019): First edition establishing AI governance principles
  2. Model AI Governance Framework 2.0 (January 2020): Updated framework launched at Davos
  3. Model AI Governance Framework for Generative AI (May 2024): Response to ChatGPT-era challenges
  4. Model AI Governance Framework for Agentic AI (January 2026): Current release

The 20-month gap between the Generative AI and Agentic AI frameworks is notably shorter than the four-year interval that preceded the Generative AI framework. More interestingly, this framework was released without the preceding draft consultation period that characterized earlier versions—the Generative AI framework, for instance, underwent public consultation from January to March 2024 before finalization in May.

Several explanations are possible. Agentic AI systems build upon the technical foundations already established in previous frameworks—the underlying LLMs remain subject to earlier guidance. Additionally, industry momentum may have demanded faster regulatory response. Whatever the reason, the rapid release signals IMDA’s recognition that agentic AI is not a theoretical concern but an immediate deployment reality.

The Governance Complexity Problem: A Call for Consolidation

One challenge this framework doesn’t directly address: the increasingly fragmented landscape of AI governance guidance in Singapore.

By my count, Singapore government agencies have now issued at least ten distinct AI governance documents:

  • Model AI Governance Framework (2019, 2020)
  • Implementation and Self-Assessment Guide for Organisations (ISAGO)
  • Model AI Governance Framework for Generative AI
  • Model AI Governance Framework for Agentic AI
  • AI Verify Framework and Toolkit
  • CSA’s Addendum on Securing Agentic AI
  • MAS FEAT Principles and AI Model Risk Management Guidelines
  • GovTech’s Agentic Risk & Capability Framework
  • PDPC Advisory Guidelines on AI and Personal Data
  • Various sector-specific guidance from healthcare, financial, and other regulators

For organizations attempting to operationalize AI governance, navigating this expanding universe of frameworks, guidelines, and toolkits presents a genuine challenge. Perhaps Singapore needs a centralized portal—call it “AI Governance Go Where”—to help organizations map applicable requirements across this increasingly complex ecosystem.

Implications for Cybersecurity Professionals

For those of us in the security space, several framework elements warrant particular attention:

Agent Identity and Access Management: The framework acknowledges current IAM systems weren’t designed for agentic contexts. Expect rapid evolution in authentication and authorization protocols, particularly around OAuth 2.0/MCP integration and dynamic access control.

Threat Modeling for Agents: The framework references specific threats including memory poisoning, tool misuse, and privilege compromise. It recommends taint tracing to map how untrusted data flows through complex multi-agent systems.
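The core idea of taint tracing is mechanical: data from untrusted sources carries a flag that propagates through every derived value, so a policy layer can keep tainted input away from sensitive tools. The sketch below is a toy illustration of that propagation rule, not the framework’s recommended implementation.

```python
from dataclasses import dataclass

# Sketch of taint tracing: values from untrusted sources carry a
# taint flag that propagates through derivations, letting a policy
# layer keep tainted data away from sensitive tools.

@dataclass
class Value:
    data: str
    tainted: bool = False

def combine(a: Value, b: Value) -> Value:
    """Anything derived from tainted data is itself tainted."""
    return Value(a.data + " " + b.data, a.tainted or b.tainted)

def allow_sensitive_tool(v: Value) -> bool:
    """Policy gate: sensitive tools only ever see untainted data."""
    return not v.tainted
```

In a multi-agent system the same flag would travel with values across agent boundaries, which is precisely the flow-mapping the framework has in mind.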

Incident Response Considerations: When agents malfunction, traditional incident response playbooks may not apply. The framework emphasizes the need for kill switches, containment mechanisms, and audit trails that trace agent actions across multiple systems.

Supply Chain Security: Agents connecting to external MCP servers, third-party APIs, and untrusted data sources create novel supply chain attack vectors. The framework recommends whitelisting trusted servers and clearly defining vendor responsibilities.

Red Teaming Evolution: Testing agentic systems requires moving beyond traditional penetration testing to evaluate entire agent trajectories across multiple interactions and system states.

What Comes Next

The framework explicitly positions itself as a living document, inviting feedback and case studies from organizations deploying agentic AI responsibly. IMDA has also announced ongoing development of guidelines specifically focused on testing agentic AI applications for safety and reliability.

Given Singapore’s track record, expect iterative updates as deployment patterns mature and new risks emerge. The framework’s invitation for case studies suggests future versions will incorporate practical implementation lessons from early adopters.

For now, the Model AI Governance Framework for Agentic AI represents the most comprehensive government guidance available for organizations navigating the transition from AI systems that advise to AI systems that act. Whether Singapore’s light-touch approach ultimately proves more effective than Korea’s regulatory framework remains to be seen—but for organizations deploying agentic AI today, this framework offers a practical starting point.


Key Takeaways for Security Leaders

  1. Agentic AI requires distinct governance: Traditional AI and even generative AI frameworks don’t adequately address systems that take real-world actions autonomously
  2. Agent identity is an unsolved problem: Current IAM systems weren’t designed for agents acting on behalf of multiple users or spawning sub-agents
  3. Automation bias is the enemy: As agents become more capable, maintaining meaningful human oversight becomes harder, not easier
  4. Testing approaches must evolve: Evaluating agent trajectories across multi-step workflows requires new methodologies beyond traditional software testing
  5. The governance landscape is fragmenting: Organizations need strategies for navigating multiple overlapping frameworks and guidelines

The full Model AI Governance Framework for Agentic AI is available from IMDA at https://www.imda.gov.sg. Feedback and case studies can be submitted at https://go.gov.sg/mgfagentic-feedback.