Where “Human in the Loop” Safeguards Matter in Agentic Ad Workflows

Adam Epstein / Co-Founder and CEO at Gigi

Editor’s note: This post is part of our ongoing series where U of Digital AI Literacy Alliance partners share explainers and practical perspectives to help our industry understand and apply AI.

This post expands on “Safeguards Are How You Earn Automation,” recently published on the Gigi blog.

As Ad Buying Gets More Automated, Human Judgment Remains Critical

Media buying has moved from hand-coded line items and spreadsheets to rule-based automation, and now toward agentic workflows where AI systems execute and optimize campaigns. 

For media buyers, this shift may mean fewer manual or rote tasks, faster optimization cycles, and the promise of more “hands-off” campaigns.

But as AI encounters the real world, “human in the loop” safeguards become essential to keeping agentic systems aligned with client expectations, business goals, and accountability structures.

Why Safeguards Matter in Agentic Advertising

AI safeguards and governance in adtech have many drivers: privacy and consent requirements; new regulations such as the EU AI Act; concerns about hallucinations and brand safety; the risk of bias, inference, or manipulative targeting; and the growing threat of agent impersonation and fraud. In agentic advertising, a more immediate issue also emerges: AI systems start making decisions faster than the organizations that oversee them, widening the gap between what the system does and what teams can confidently explain.

How Safeguards Can Work With Agentic Workflows

Flowchart of AI ad agent safeguards, routing risky actions to human review and safe actions to automatic execution.
Here are three examples where safeguards can help ensure the effectiveness and reliability of agentic workflows:
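Before walking through the examples, the routing logic in the flowchart above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual implementation; the `ProposedAction` type, the field names, and the dollar-based threshold are all hypothetical stand-ins for whatever an agentic system actually proposes.

```python
from dataclasses import dataclass

# Hypothetical action type for illustration; a real agentic platform
# would define its own richer action and policy objects.
@dataclass
class ProposedAction:
    kind: str            # e.g. "pacing_change", "bid_change"
    delta_dollars: float # estimated budget impact of the change

def route(action: ProposedAction, review_threshold: float) -> str:
    """Route a proposed change: negligible deltas auto-execute,
    material ones are queued for human review."""
    if abs(action.delta_dollars) >= review_threshold:
        return "human_review"
    return "auto_execute"
```

The key design choice is that the agent never decides *whether* it is supervised; the threshold policy sits outside the agent, so small optimizations flow through at machine speed while material changes wait for a person.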

Example 1: Making Sure Campaigns Are Properly Paced Against Budget

Advertisers use pacing controls and minimum spend thresholds to ensure budgets are deployed efficiently across a flight. This prevents under-delivery in early periods and last-minute spend dumps that inflate CPMs and waste budget. Agentic AI can monitor delivery curves in near-real time and adjust pacing and minimum spend levels across hundreds of line items far more precisely than a human team working manually.

However, when a proposed change jumps from a negligible to a material dollar amount, surfacing that decision for human review adds a crucial quality check before the real budget is put at risk.

UI card showing AI ad safeguard that flags daily minimum spend changes for human review.

Example 2: Better, Faster Bid Optimizations

Bid strategy is another high-impact domain. AI can continuously tune base and max bids in response to auction dynamics, improving win rates on valuable inventory and reducing spend in low-performing areas. Traditionally, this work might be performed semi-manually on a slower weekly cadence.

Human sign-off is most valuable at the edges, when maximum bids rise past preset thresholds. In these cases, a person can be alerted to confirm that the competitive justification is there and that paying for additional reach or quality aligns with the campaign’s goals and limits.

UI card showing AI ad safeguard requiring approval before raising any maximum bid above a set CPM limit.

Example 3: Getting an Expert Eye on Outlier Bid Modifiers 

Even within a single campaign, bid modifiers can multiply quickly across audiences, geos, domains, devices, and formats. An agentic system is well-suited to calculating and applying small, incremental adjustments with consistent logic. Once changes exceed a defined band—say, more than a 10% shift from baseline—routing them to a human not only prevents outlier errors but also creates an opportunity for teams to tell the story of why a particular segment or placement is being pushed or pulled.

UI card showing AI ad safeguard that blocks bid modifier shifts over 10 percent without approval.

The Upside of Safeguards vs. Black-Box Automation

Thoughtful safeguards do not slow agentic advertising down; they help teams move from manual execution to strategy while preserving control and explainability. As time goes on, there may be fewer campaign knobs for humans to turn directly, with the remaining decisions made by people becoming more consequential and more carefully considered.

Humans still own context: brand nuance, cultural moments, shifting audience sentiment, and the trade-offs of business goals that no model fully understands. By designing governance that surfaces the right AI actions for review, organizations gain clearer visibility into how decisions are made, can course-correct when needed, and build a feedback loop that aligns autonomous agents with long-term outcomes rather than short-term metrics alone.

– Adam Epstein, Co-founder and CEO at Gigi
