Humans in the Lead: A New Governance Model for Agentic Media
Introduction
'Human in the loop' has become the default reassurance offered whenever someone asks how AI agents will be managed. It sounds responsible, but it is a trap: it positions humans as a checkpoint in someone else's process rather than as the architects of the system itself. 'Humans in the lead' is the stronger model.

The 'Humans in the Loop' Model Needs Updates
In a media context, 'Humans in the Loop' often means a planner approving AI-generated budget reallocation recommendations after the fact, or a CMO reviewing a campaign performance report generated by systems they did not design and cannot interrogate. These look like governance checkpoints, but they are not: they are rituals that provide the comfort of oversight without its substance.
71% of organizations say they cannot fully trust autonomous AI agents for enterprise use, yet only 46% have governance policies in place, and adherence to those policies remains low (Capgemini Research Institute).
The shift that is required, and that separates high-performing AI-driven media organizations from the rest, is from reactive oversight to proactive design: from humans checking AI outputs to humans defining the system's logic, boundaries, and success criteria from the outset. That is what it means to be in the lead.
What 'Humans in the Lead' Actually Requires
Being in the lead is not a passive role. It requires media leaders to decide explicitly which decisions agents should own, which require human judgment, and which escalation triggers route agent recommendations back to a human before execution.
In media specifically, this means leaders making clear decisions across four categories (a code sketch follows the list):
- fully autonomous decisions agents can execute without review (bid adjustments within defined parameters, routine budget pacing);
- recommended decisions surfaced for human review before execution (significant budget shifts, new audience segments);
- flagged exceptions escalated immediately (brand safety triggers, budget floor breaches);
- off-limits decisions agents cannot touch under any circumstances (strategic campaign positioning, partner relationships, creative direction).
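To make this concrete, here is a minimal sketch of how such a routing policy might be encoded. The decision kinds, thresholds, and tier names are illustrative assumptions, not MINT's actual implementation; leadership would set and revise the boundaries.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tier(Enum):
    AUTONOMOUS = auto()    # execute without review
    RECOMMENDED = auto()   # surface for human review before execution
    FLAGGED = auto()       # escalate to a human immediately
    OFF_LIMITS = auto()    # agents may never act here

@dataclass
class Decision:
    kind: str                      # e.g. "bid_adjustment", "budget_shift"
    amount: float                  # monetary impact in account currency
    brand_safety_hit: bool = False

# Illustrative boundaries only; in practice leadership owns these numbers.
BID_CAP = 5_000.0                  # max autonomous bid adjustment
REVIEW_THRESHOLD = 50_000.0        # budget shifts above this need review
OFF_LIMITS = {"campaign_positioning", "partner_relationship", "creative_direction"}

def route(d: Decision) -> Tier:
    """Map an agent decision onto one of the four governance tiers."""
    if d.kind in OFF_LIMITS:
        return Tier.OFF_LIMITS
    if d.brand_safety_hit:
        return Tier.FLAGGED
    if d.kind == "bid_adjustment" and d.amount <= BID_CAP:
        return Tier.AUTONOMOUS
    if d.amount >= REVIEW_THRESHOLD:
        return Tier.RECOMMENDED
    return Tier.AUTONOMOUS         # routine pacing within defined parameters

print(route(Decision(kind="budget_shift", amount=80_000.0)))  # Tier.RECOMMENDED
```

The point of writing the policy down as code rather than as a slide is that it becomes testable, auditable, and versionable like any other system component.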
The Governance Architecture that Makes Leadership Possible
This architecture has three practical components.
The first is explainability infrastructure. Every consequential agent action should come with a clear, human-readable rationale: what signal triggered it, what alternatives were considered, and what outcome it is optimizing for.
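A sketch of what such a rationale record might look like; the field names and example values are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ActionRationale:
    """Human-readable record attached to every consequential agent action."""
    action: str                          # what the agent did
    trigger_signal: str                  # what signal triggered it
    alternatives_considered: tuple[str, ...]
    optimization_target: str             # what outcome it is optimizing for
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

rationale = ActionRationale(
    action="Reallocated 8% of display budget to CTV",
    trigger_signal="Display CPA rose 23% above its 7-day baseline",
    alternatives_considered=("hold allocation", "lower display bids"),
    optimization_target="campaign-level CPA within target range",
)
print(rationale.action, "|", rationale.trigger_signal)
```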
AI brings new trust, risk, and security management challenges that conventional controls do not address; organizations must implement layered governance technology across all AI entities in use.
The second is override capability: in a system where humans are genuinely in the lead, any agent decision must be reversible by a human with appropriate authority, in real time, and with full visibility of the downstream consequences.
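A minimal sketch of that override path, assuming a simple role-based authority model; the roles, levels, and helper functions are hypothetical stand-ins:

```python
class AuthorityError(PermissionError):
    """Raised when a user lacks the authority to reverse an agent action."""

# Hypothetical authority levels; real models depend on the organization.
AUTHORITY = {"planner": 1, "media_director": 2, "cmo": 3}

def preview_downstream_impact(action_id: str) -> str:
    # Stub: a real system would query the execution layer for live state.
    return "pauses 2 line items, releases $40k of committed spend"

def reverse(action_id: str) -> None:
    # Stub: a real system would issue compensating transactions.
    print(f"{action_id} reversed")

def override(action_id: str, user_role: str, required_level: int = 2) -> None:
    """Reverse an agent action in real time, gated on human authority."""
    if AUTHORITY.get(user_role, 0) < required_level:
        raise AuthorityError(f"'{user_role}' cannot override {action_id}")
    # Surface downstream consequences before the reversal executes.
    print(f"Overriding {action_id}: {preview_downstream_impact(action_id)}")
    reverse(action_id)

override("budget-shift-0142", user_role="media_director")
```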
The third is a performance feedback loop that routes outcomes back to the governance model itself. If an agent's autonomous decisions are consistently outperforming human overrides, the autonomy boundary should shift. The governance model should be a living system that learns alongside the agents it governs.
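One way to express that loop in code, as an illustrative update rule only; a real system would want significance testing and minimum sample sizes before moving a boundary:

```python
def adjust_autonomy_cap(cap: float,
                        agent_outcomes: list[float],
                        override_outcomes: list[float],
                        step: float = 0.10) -> float:
    """Widen or tighten the autonomous-spend cap from observed performance.

    Outcomes are normalized scores (e.g. delivered ROAS vs. target) for
    decisions the agent executed autonomously vs. decisions humans overrode.
    """
    if not agent_outcomes or not override_outcomes:
        return cap  # not enough evidence to move the boundary
    agent_avg = sum(agent_outcomes) / len(agent_outcomes)
    override_avg = sum(override_outcomes) / len(override_outcomes)
    if agent_avg > override_avg:
        return cap * (1 + step)   # agent earns wider autonomy
    return cap * (1 - step)       # tighten the boundary, route more to review

# Autonomous decisions outperformed overrides: cap widens from 5000 to 5500.
print(adjust_autonomy_cap(5_000.0, [1.08, 1.12, 1.05], [0.97, 1.01]))
```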
Why This Matters More in Media than Almost Anywhere Else
Media is a domain where AI agents make decisions that are simultaneously fast-moving and high-stakes. A Media Optimization agent acting on programmatic signals can shift six figures of spend in minutes. A Planning agent making real-time placement decisions may touch brand reputation in every execution. These are not back-office automation decisions: they are marketing decisions with strategic, commercial, and reputational consequences. And the landscape is changing fast: governance models need to be ready to extend beyond single campaigns and flight management.
As Capgemini has noted, for example, 90% of B2B buying is predicted to be intermediated by AI agents by 2028, with over $15 trillion in B2B spend flowing through AI agent exchanges, extending agent governance well beyond individual campaigns.
The transition from 'Humans in the Loop' to 'Humans in the Lead' requires media leaders to invest time in understanding the systems they govern, to make explicit decisions about autonomy and accountability, and to build the organizational muscle to review, challenge, and improve agent performance over time.
MINT is designed to make that leadership possible: not by making agents invisible, but by making them transparent, auditable, and genuinely responsive to the strategic direction of the humans who lead the function.