Most AI governance fails for one of two reasons: it is either so loose that teams start building in the dark, or so heavy that they work around it. The practical middle path is risk-tiered governance: let low-risk work move quickly, add review where impact rises, and keep the hardest controls for workflows that touch core systems, customer outcomes, or sensitive data.
That matters now because the market is moving past chat-only usage into agents and workflow execution. The question is no longer whether people can use AI. It is whether the organisation has a credible operating model for where AI is allowed to act, what it can touch, who approves it, and how it gets monitored after launch.
Why this is now an operating problem, not a policy problem
A lot of governance still assumes software changes move slowly and stay inside neat boundaries. That is not what current AI rollout looks like. Microsoft argued this week that many organisations are not blocked because agents are unsafe in principle, but because their governance models were designed for a world before autonomous or semi-autonomous systems moved fluidly across apps, data, and workflows. The same piece warns that “lock everything down” and “we’ll figure it out later” both produce bad outcomes: either stalled delivery or shadow IT.
Source: Building trustworthy AI: A practical framework for adaptive governance
That matches what Rettare sees in practice. Buyers do not need another abstract AI policy PDF. They need an operating answer to four questions:
- what is safe to trial without ceremony
- what needs human approval before it can affect a customer or a record
- what is prohibited unless the architecture and controls are in place
- how performance, drift, and incidents are reviewed after go-live
What risk-tiered AI governance actually means
Risk-tiered governance is simple in principle: not every workflow deserves the same controls.
Microsoft’s current framing is useful here. It proposes graduated risk zones based on what data an agent can access, how broadly the agent can be shared, what actions it can take, what identity it runs under, and what level of oversight applies as risk increases.
Source: Building trustworthy AI: A practical framework for adaptive governance
A practical three-tier model for operators looks like this:
| Risk tier | Typical use | Speed | Minimum controls |
|---|---|---|---|
| Low | Internal drafting, summarising, research prep, meeting packs | Fast self-serve | approved tools, limited data access, basic logging |
| Medium | Workflow assistance inside business systems, recommendation engines, draft replies, routing | Managed rollout | named owner, output validation, approval gates for edge cases, monitoring |
| High | Customer-facing actions, financial consequences, sensitive records, system updates, delegated tool use | Deliberate rollout | least privilege, explicit approvals, audit trail, rollback, incident review, evaluation before launch |
The point is clarity, not bureaucracy. Low-risk work should not need a two-week committee process. High-risk work should not go live because someone found a clever demo.
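To make the tiers operational rather than decorative, some teams encode them as configuration that gates go-live. Below is a minimal Python sketch of that idea; `RiskTier`, `TierControls`, and the specific control and data-type names are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class TierControls:
    """Minimum controls a workflow must satisfy before go-live (illustrative)."""
    requires_owner: bool
    requires_approval_gate: bool
    requires_monitoring: bool
    requires_rollback_plan: bool
    allowed_data: set[str] = field(default_factory=set)


# Hypothetical mapping of the three-tier model above onto concrete gates.
TIER_CONTROLS: dict[RiskTier, TierControls] = {
    RiskTier.LOW: TierControls(
        requires_owner=False, requires_approval_gate=False,
        requires_monitoring=False, requires_rollback_plan=False,
        allowed_data={"internal_docs"},
    ),
    RiskTier.MEDIUM: TierControls(
        requires_owner=True, requires_approval_gate=True,
        requires_monitoring=True, requires_rollback_plan=False,
        allowed_data={"internal_docs", "business_records"},
    ),
    RiskTier.HIGH: TierControls(
        requires_owner=True, requires_approval_gate=True,
        requires_monitoring=True, requires_rollback_plan=True,
        allowed_data={"internal_docs", "business_records", "customer_records"},
    ),
}


def clear_to_launch(tier: RiskTier, satisfied: set[str]) -> bool:
    """Return True only if every control the tier demands has been met."""
    controls = TIER_CONTROLS[tier]
    required = {
        name for name, needed in [
            ("owner", controls.requires_owner),
            ("approval_gate", controls.requires_approval_gate),
            ("monitoring", controls.requires_monitoring),
            ("rollback_plan", controls.requires_rollback_plan),
        ] if needed
    }
    return required <= satisfied  # subset check: no control quietly skipped
```

The useful property is that launch becomes a subset check against the tier's requirements rather than a judgment call made under deadline pressure, while low-risk work still clears with almost no ceremony.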
Why approvals and checkpoints are becoming the default pattern
The strongest mainstream signal this month is that major vendors are normalising checkpointed execution rather than “magic autonomy”. Microsoft’s new Copilot Cowork turns a request into a background plan, keeps clear checkpoints, lets users pause execution, and surfaces recommended actions for approval before changes are applied.
Source: Copilot Cowork: A new way of getting work done
That is a better operating pattern than pretending human oversight is old-fashioned. In most organisations, the goal is not to remove accountability. It is to remove repetitive handling while preserving accountability where consequences are real.
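The underlying pattern is easy to sketch independently of any vendor product. The Python below shows checkpointed execution in its simplest form; `PlanStep` and `approve` are hypothetical names, and this is an illustration of the shape, not Copilot Cowork's actual API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PlanStep:
    description: str           # what the step will change, in plain language
    apply: Callable[[], None]  # the action that actually makes the change


def run_with_checkpoints(steps: list[PlanStep],
                         approve: Callable[[PlanStep], bool]) -> None:
    """Walk a plan step by step, pausing at each checkpoint for approval.

    `approve` stands in for whatever review surface the organisation
    uses: a chat prompt, an approvals queue, a ticket.
    """
    for step in steps:
        if not approve(step):
            print(f"Paused: {step.description!r} was not approved; plan halted.")
            return  # nothing downstream runs without review
        step.apply()
```

In a real deployment the approval callback would block on a human decision and every halt would be logged, but the essential design choice is the same: changes are surfaced before they are applied, not explained after.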
This is exactly where Agent Ops becomes useful. Rettare treats approvals, logging, fallbacks, and ownership as part of the implementation, not as extras for later. In practice that means workflows usually earn autonomy in stages:
- Shadow: observe the work and measure quality.
- Draft: let AI prepare outputs, but keep a human approver.
- Execute: allow controlled actions only after the workflow has passed its tests and monitoring is in place.
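A minimal sketch of that staged promotion, assuming a quality score and a stability window as the evidence (both thresholds are illustrative and would be set per workflow):

```python
from enum import Enum


class AutonomyStage(Enum):
    SHADOW = 1   # AI observes and is scored; humans still do the work
    DRAFT = 2    # AI prepares outputs; a human approves before anything ships
    EXECUTE = 3  # AI acts within approved bounds; every action is logged


def promote(stage: AutonomyStage, quality: float, weeks_stable: int) -> AutonomyStage:
    """Promote a workflow one stage at a time, only on evidence."""
    if stage is AutonomyStage.SHADOW and quality >= 0.95 and weeks_stable >= 4:
        return AutonomyStage.DRAFT
    if stage is AutonomyStage.DRAFT and quality >= 0.98 and weeks_stable >= 4:
        return AutonomyStage.EXECUTE
    return stage  # no evidence, no promotion; incident review can demote
```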
The risk language is hardening fast
Another reason this topic matters now: agent risk is becoming legible to non-specialists. Microsoft Security’s March briefing on the OWASP Top 10 for Agentic Applications frames the problem plainly: agent failures are rarely just “bad output”. They become bad outcomes when systems can interpret untrusted content as instruction, chain tools, act with delegated identity, and keep going across workflows.
Source: Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio
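One concrete control that follows from this framing is gating tool calls against the delegated identity rather than trusting what the model asks for. A minimal sketch, with hypothetical identity and tool names:

```python
# Hypothetical delegated identities and the tools each may invoke.
ALLOWED_TOOLS: dict[str, set[str]] = {
    "support-triage-agent": {"read_ticket", "draft_reply"},
    "billing-lookup-agent": {"read_invoice"},
}


def gate_tool_call(identity: str, tool: str) -> bool:
    """Allow a tool call only if policy grants it to this identity.

    Untrusted content can steer what a model asks for, so the gate
    trusts the allowlist, never the request itself.
    """
    return tool in ALLOWED_TOOLS.get(identity, set())
```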
OpenAI’s new Safety Bug Bounty tells the same story from a different angle. It explicitly treats issues such as third-party prompt injection, data exfiltration, and harmful agentic actions as concrete safety problems worth operationalising, not hypothetical edge cases.
Source: Introducing the OpenAI Safety Bug Bounty program
And OpenAI’s Promptfoo acquisition pushes the point further: security testing, red-teaming, traceability, and integrated reporting are becoming part of the expected delivery stack for enterprise agent systems.
Source: OpenAI to acquire Promptfoo
The practical takeaway is straightforward: if your governance model still assumes the main risk is a wrong paragraph in a draft, you are under-scoping the problem.
Governance without monitoring is still incomplete
A common rollout mistake is to treat governance as a launch checklist. In reality, production safety is won after go-live.
NIST said in March that AI systems have novel properties that introduce variability and unpredictable behaviour, making post-deployment monitoring crucial for confident, widespread adoption. Its summary also highlights barriers that operators will recognise immediately: fragmented logging, drift that is hard to detect, the difficulty of scaling human review during rapid rollouts, and weak information-sharing mechanisms.
Source: New Report: Challenges to the Monitoring of Deployed AI Systems
The Australian Bureau of Statistics is heading in the same direction. Its GenAI Evaluation Workflow is designed as a continuous and iterative process embedded throughout the development cycle, balancing human and domain expertise with automated scenario generation and scoring.
Source: Methodological News, March Quarter 2026
That is the right mental model for operators: governance is not just “who signed off”. It is also whether the system is still behaving acceptably after new data, new prompts, new integrations, and new edge cases show up.
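A drift check does not need to be sophisticated to be useful. As a minimal sketch, assuming quality scores are already being collected per run (the tolerance is illustrative and should be set per workflow):

```python
from statistics import mean


def drift_alert(baseline: list[float], recent: list[float],
                tolerance: float = 0.05) -> bool:
    """Flag a workflow for review when quality scores move away from baseline.

    Scores might come from automated grading, sampled human review, or both.
    """
    if not baseline or not recent:
        return True  # missing data is itself a monitoring failure
    return abs(mean(recent) - mean(baseline)) > tolerance
```

The point is not the statistics; it is that someone looks at the flag every week and can trace it back to the runs that caused it.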
A practical Rettare model for moving faster, safely
For most buyers, the best first step is not a grand enterprise AI policy. It is one governed workflow with visible rules.
A workable rollout usually includes:
- one named workflow owner
- a clear risk tier for the workflow
- explicit allowed and prohibited data types
- draft-first behaviour for customer-facing or high-impact outputs
- approval gates for irreversible actions
- logging that makes runs and changes inspectable (sketched after this list)
- a weekly review of exceptions, failures, and drift
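The logging item deserves emphasis because it is the cheapest control to get right early. Below is a minimal sketch of an append-only run log in Python; the field names are illustrative assumptions, not a schema Rettare prescribes.

```python
import json
import time
import uuid


def log_run(path: str, workflow: str, tier: str, inputs_ref: str,
            output_ref: str, approved_by: str | None, outcome: str) -> str:
    """Append one inspectable record per run to a JSON-lines file.

    References, not raw content, are logged so sensitive data stays in
    the systems that already govern it.
    """
    record = {
        "run_id": str(uuid.uuid4()),
        "ts": time.time(),
        "workflow": workflow,
        "risk_tier": tier,
        "inputs_ref": inputs_ref,
        "output_ref": output_ref,
        "approved_by": approved_by,  # None for self-serve low-risk runs
        "outcome": outcome,          # e.g. "applied", "rejected", "rolled_back"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["run_id"]
```

An append-only file is enough to start; what matters is that the weekly exception review has something concrete to read.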
If teams are already using public AI tools, the answer is not another warning email. It is to make the approved path faster and clearer than the workaround. That usually means a small, governed implementation with obvious value: triage, drafting, routing, evidence-pack generation, or internal research assembly.
The buyer question to ask next
If you are an operations leader, the most useful question is not “Do we have an AI policy?” It is this:
Which workflows are low-risk enough to speed up now, and which ones need approvals, monitoring, and stronger controls before they touch customers or core systems?
That is the point where AI governance becomes useful. Not as a blocker, but as the thing that lets practical rollout happen without losing trust.
References
- Building trustworthy AI: A practical framework for adaptive governance
- Copilot Cowork: A new way of getting work done
- Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio
- Introducing the OpenAI Safety Bug Bounty program
- OpenAI to acquire Promptfoo
- New Report: Challenges to the Monitoring of Deployed AI Systems
- Methodological News, March Quarter 2026