Welcome back to The AI Edge podcast recap! Every week, we break down the biggest conversations from the podcast: the ideas, frameworks, guest insights, and industry shifts shaping the future of AI in marketing, media, and business. And yes, part of what makes this show great is that it doesn’t speak in AI gibberish. The ideas are big, but the language stays clear, usable, and grounded in the real world. If you listened to the episode, this is your chance to go deeper with extra references and revisit those sharply phrased takes you’ll definitely want to bring into your next meeting. If you haven’t listened yet, this is the context that makes it all make sense.
The podcast is hosted by Shiv Gupta, founder of U of Digital, and Myles Younger, the company’s Chief Growth Officer and long-time ad tech insider. Together, they explore what’s really happening in AI, not just the headlines, but the implications for marketers, agencies, platforms, and the broader ecosystem.
Last week, Shiv and Myles were joined by Chad Reynolds, Founder & CEO of Vurvey Labs, whose work sits at the intersection of AI, human insight, and consumer modeling. And there’s a lot to unpack, because the topics in this episode touch nearly every corner of the marketing and advertising landscape.
Here’s what the group discussed:
- Amazon and Google’s newly announced AI ad agents and what platform-embedded automation means for agencies, marketers, and the creative process
- Marketing automation vs. marketing sameness: whether these agents unlock efficiency or unintentionally create copy-and-paste marketing across the industry
- The growing debate around data trust and AI policies: whether marketers are ready to hand their strategy and proprietary metrics to platform-controlled LLMs
- AI earners vs. AI burners: how the latest earnings cycle revealed a divide between the companies monetizing AI and the ones simply funding it
- Synthetic audiences and the $50B research question: whether AI-generated consumer populations can meaningfully replace traditional audience research
- And yes… the viral “AI grandmother” moment: and what it tells us about emotional design, ethics, memory, and the consumer comfort line
Scroll for the articles, references, and extra context from the discussion. And hit play to listen to the full episode right here on this page.
Amazon & Google’s New Ad Agents: Wizards for the Long Tail, Headaches for Everyone Else
Amazon used its Unboxed event in Nashville to quietly redraw the map of its ad stack. Beyond the usual product updates, the company announced it’s merging its two main ad platforms, DSP and Sponsored Ads, into a single interface. On top of that, it introduced two AI agents: an Ads Agent to help plan and optimize campaigns, and a Creative Agent to generate assets inside the platform.
Within 48 hours, Google answered with its own pair of helpers inside Google Ads: Ads Advisor (an agentic assistant for campaign management) and Analytics Advisor (an assistant for insight and measurement).

Meta was among the first to set expectations publicly, describing a future where advertisers wouldn’t need to adjust bids, build audiences, or configure campaigns manually, because AI agents would eventually run the system end to end.
In other words: the big platforms are finally doing what they’ve been hinting at for a year, turning media buying into a conversation with an AI assistant instead of a slog through a UI.
Shiv Gupta (Host) framed what these tools actually promise to do for marketers: instead of manually building and optimizing campaigns, AI agents are meant to take over setup, targeting, and analytics so marketers can “just tell the system what they want” and let it handle the rest.
And the acceleration isn’t stopping with the duopoly. Case in point: “EXCLUSIVE: Yahoo Is Quietly Testing 6 AI Agents for Advertising”
For Myles, the most interesting question isn’t what these agents do, but who they’re really for.
Myles Younger (Co-Host): “I want to know over time whether these agents are being used by like longtail users of those platforms or how usage of those agents moves up into the fatter part of the tail. What occurs to me that’s funny is like if you’re a marketer, you’re going to pay your agency to use an agent. You’re not going to use that and marketers aren’t going to use those agents if they’re paying agencies. So, your agency is going to use the agent on your behalf, which strikes me as funny.”
Myles, in classic fashion, reached for an anachronism to make sense of the modern hype.
Myles Younger (Co-Host): “They are basically the new form of what was called a wizard in the 90s. These are ‘wizard wizards’, such as a campaign setup wizard or a creative wizard. You get a bunch of potentially hundreds of thousands, millions of advertisers around the world using these wizards to like build campaigns and build creatives.”
That’s great for onboarding, but there’s a ceiling. As Myles sees it, once advertisers rely on agents, they’re going to hit several predictable friction points.
Myles Younger (Co-Host): “The next thing that occurs to me is that I think a lot of them are going to run into two things. One, the wizard’s not going to do exactly what they want and how they want it. And the other thing they’re going to run into is you’re going to end up with a ton of like sameness across campaigns because you now have like collapsed all of marketing services into what would have been like one vendor. It’s just this one agent, doing all the work. There’s no longer like a heterogeneous group of agencies and people doing this work.”
Instead of replacing agencies or specialists, Myles argued that automation may actually pull them back:
Myles Younger (Co-Host): “That might counterintuitively expose more need for professional services on those platforms. You’re getting people to the point faster where they become dissatisfied with the setup. They might say, ‘That was great that you got me set up in like five minutes and that got me up and running and now I’m spending on your platform, but I have now maxed out what I could do faster than I could have before’. Now that they’ve maxed out what they can do with the wizard, they want professional services.”
Shiv connected the dots to a bigger creative and strategic concern: if everyone is piping their briefs into similar agents, the outputs will start to look increasingly alike.
Shiv Gupta (Host): “AI is going to create kind of like tons of homogeneity. With marketing, you’re going to just have agents kind of absorbing briefs and things like that and then just kind of like all building similar things. Marketing, if managed by AI, is going to start to look more and more the same. I think it’s going to require more creativity and more strategy as opposed to more like busy bees setting things up in UIs. You’re kind of playing that out with what’s happening here with these agents.”
Chad zeroed in on a different kind of risk, not what the agents do, but what they require. These systems only perform well when fed strategy, audience data, business constraints, and proprietary insights. That means marketers aren’t just using the platforms — they’re training them.
Chad Reynolds (Guest): “I’m curious just on how much data you are open to giving Amazon for your strategy. I’m just curious of like it’s kind of the early days of ChatGPT, like everybody would upload your medical records and all different kinds of things to see what it could do without necessarily understanding the safety and security of where this data is going. I’m curious about the level of trust that every marketer would have by putting their brief or their strategy up there.”
The question now isn’t just whether these tools are useful; it’s whether brands are comfortable revealing their economics, audiences, and performance targets to platforms whose incentives don’t always align with theirs.
Myles Younger (Co-Host): “There are all kinds of information you might share with a close agency partner around your internal business where you’re like, well, these are the metrics we need to hit because this is the margin for this product. We have these volume targets. Once you tell the walled garden, the platform those numbers, they’re just going to raise the ad bid to the maximum that they know it allows you to squeeze out a little bit of ROI from your campaign. So, you’re totally tipping your hand to them, which gets me back to what I was saying is that’s totally a home for professional services. You are going to be disincentivized from a whole slew of information that you might divulge to a platform AI.”
And that led to a bigger reflection: if AI agents are going to sit at the center of campaign orchestration, then data boundaries, governance, and access controls aren’t optional; they’re strategy.
Because to get value from agents, brands will need to define:
- What data they will share
- What data they won’t
- Who gets access
- How that access is monitored
- And what protections exist when platforms learn from your inputs
Which raises a question that the group believes every marketer, agency, and platform partner should be asking:
Have we put these guardrails into our AI policies, and have our partners done the same?
As the tools evolve, the discipline around them has to evolve with equal rigor. AI is changing how marketers work and what they must protect.
The real friction isn’t the automation, it’s what automation can’t reach yet. All of this raised a key tension: these agents excel when the task is structured, when goals, audiences, and constraints are already defined. They’re impressive at executing what’s already known. But marketing isn’t only about refining the plan; it’s about expanding it. So while agents can automate the mechanics, the open question is whether they can uncover what hasn’t been articulated yet. Can they surface new audiences, new angles, new whitespace? Or do they simply optimize inside the walls we give them?
Chad Reynolds (Guest): “Yeah. And it feels like it’s doing half or part of what people want, but the other part is what’s going to resonate with my audience, and are there other audiences I should be crafting messages for, and that I don’t even know about? So it’s that kind of second half, does it matter? Does it have meaning to consumers? You can automate a lot of different kinds of things, but you’re still missing the second half of that, which I think is the more important one.”
Shiv pushed the conversation beyond the excitement and into practicality. AI can sometimes show up as a shiny talking point rather than something that meaningfully improves outcomes. His point: marketers and media buyers will need to measure, not assume, whether these agents actually deliver value. And as adoption grows, the real signal to watch is the changing balance between buyers and the platforms: does this become a win-win, or just a new way for vendors to extract more spend?
Shiv Gupta (Host): “If you zoom out and you look at how much money you’re putting in and how much money you’re getting out, are marketers going to get more out by using these tools or are these ways for these platforms to juice more margin out of the same spend? That dynamic is going to be important.”
Beyond Amazon and Google, Shiv pointed out that other companies, like Innovid/MediaOcean with Orchestrator and Hightouch with an AI agent layered on top of customer data, are trying to sit above the platforms as more neutral, full-stack orchestration layers. That raises a new strategic question: if consumers increasingly start journeys in ChatGPT or other AI assistants, where do marketers start theirs?
Shiv Gupta (Host): “If you’re a marketer, what’s your first stop? I think it’s interesting to see how some of these other platforms that theoretically sit on top of more might try to use AI as a way to wrestle control away from some of the bigger platforms.”
At the same time, the big ad platforms are tiptoeing around agencies in their messaging, pitching these tools as designed for small businesses and long-tail advertisers—while everyone can see the subtext: if an agent can handle setup and optimization, what happens to all the humans who used to do that?
Shiv explicitly called out the agency implications: while Amazon and Google talk about empowering SMEs, there’s an obvious tension around disintermediation.
Then he pointed to the counterintuitive outcome: agencies might actually become more important, not less.
Shiv Gupta (Host): “Obviously, the big holdcos are going in hard on their own AI tools. Agencies want to be the first stop; they want to own the AI decisioning layer as well. We’re just getting into like a huge push and pull where there are no clear lines about who owns what, and it’s just like this big territorial battle with AI agents.”
Myles crystallized what that new agency value prop might look like in a single line:
Myles Younger (Co-Host): “You know who you hire to figure out that ball of wax because you don’t have time and you don’t understand it? An agency. Yes. There you go. There’s the new agency model. Figure out the AI agents. That’s the new agency, right? I love it.”
U of Digital Reading List:
- unBoxed 2025 keynote recap | Amazon Ads
- Amazon Ads Is All In On Simplicity
- Google’s AI advisors: agentic tools to drive impact and insights
- Adam Singolda In Conversation With Sir Martin Sorrell: AI, Advertising, and Reinvention in 2025
- How advertisers use Brand Agents to drive awareness, action, and insight – Firsthand
- How AI agents work in media buying and programmatic | Scope3
AI Earners vs. AI Burners: The Earnings Divide and What It Really Signals
With Q3 earnings hitting the news cycle, the conversation shifted from product launches and platform roadmaps to something more uncomfortable: which AI companies are actually making money from AI and which ones are still funding the dream.
Shiv framed the divide with a phrase that’s starting to circulate across the industry, a taxonomy of winners and wait-and-see cases.
Shiv Gupta (Host): “We coined this fun little term in our newsletter called AI burners and AI earners. That essentially means there are big tech companies that seem to be riding the AI wave up, and those seem to be more so the infrastructure companies… We’re calling those the AI earners. We’re calling the Metas of the world the AI burners.”
The distinction isn’t just wordplay; it’s reshaping valuation. Infrastructure players like Google Cloud, AWS, Azure, and, of course, Nvidia, are reporting staggering performance gains because AI demand flows through their stacks. Meanwhile, Meta, despite monster user growth and ad revenue, still took a beating from Wall Street. Why? Because its AI investments haven’t translated into monetized, infrastructure-like revenue streams yet. Investors are rewarding whoever sits closest to the compute.
So the obvious question came up:
Does every AI company eventually need to become an infrastructure business?
Chad pushed that exact point.
Chad Reynolds (Guest): “You need to invest in infrastructure to build out and scale a lot of the models that are getting created.”
But the pressure is both technical and financial. Companies burning billions on model development may not have the luxury of patience.
Here’s where a sharp line Myles delivered last episode becomes worth quoting again.
Myles Younger (Co-Host): “There is the earners versus burners distinction. Nvidia is the classic AI earner, but OpenAI falls under the AI burners category; they haven’t quite nailed how to earn back all the capital they are burning through.”
Myles connected it to the competitive pressure behind Sam Altman’s shift from openly dismissing ads to now exploring monetization models that include them.
Myles Younger (Co-Host): “If Mark Zuckerberg figures that out first with ads, it’s going to make Sam Altman look bad, and the shareholders will pressure him to launch ads because he is being lapped by the competition.”
This comment reframes the entire earnings cycle as something more like a race clock than a financial report card.
U of Digital Take
The earners vs. burners divide is now the clearest lens for understanding AI’s next decade.
- Earners control compute, cloud, and model delivery, the toll roads.
- Burners are using those toll roads at scale, hoping brand, product-market fit, or consumer ecosystems catch up before spending becomes unsustainable.
Right now, Wall Street doesn’t seem to care who has the most vision, only who has the most revenue tied directly to AI demand.
And as generative products move from R&D to productized infrastructure, the market is starting to ask a harder question:
Does AI create value, or does it just redistribute it to whoever owns compute?
The answer, and which companies end up on the right side of this divide, will determine investment patterns and the competitive structure of the next internet.
U of Digital Reading List:
- ‘Big Short’ investor Michael Burry accuses AI hyperscalers of artificially boosting earnings
- Why Everybody Is Losing Money On AI
- Did Sam Altman Just Announce an OpenAI Cloud Service? – Business Insider

World Models: The Next Layer of Intelligence
Before the conversation shifted toward synthetic audiences, the group paused on a point that felt like a preview of where AI itself is heading, from language models to something richer, spatial, and behaviorally aware.
Myles was the one who surfaced that shift and framed the emerging idea for marketers: world models aren’t just smarter models, they are models that understand people in context and in time.
Myles Younger (Co-Host): “We just mentioned the concept of world models. I saw something talking about integrating spatial data—basically 3D data about the real world. What happens when you marry Google Street View with AI? Tell us about world models and what’s going on there. You’re merging genuine human experience with AI.”
Chad unpacked it with a framework: past models understood data. Language models understood content. But world models? They understand environment, physics, relationships, and, more importantly, time.
Chad Reynolds (Guest): “The next phase after that is spaces… World models are going after those kinds of spaces which give you the concept of time… You can see if not only an ad works, but also if it works today. Does it work in a week? Does it work differently in a month from now?”
Myles Younger (Co-Host): “It’s a matter of time. You talked about using consumer data or synthetic personas to make predictions into the future. There’s also the use case in marketing of looking into the past. Nostalgia sells; understanding anyone’s present-day emotional state is understanding what got them there, and that involves the past.”
The takeaway was clear: marketers will soon test messaging by simulating behavior, sentiment, and outcomes before spending a single dollar. And that’s where the conversation shifted from technology to economics.
Because if world models are the next platform shift, they will require more than software; they will require new infrastructure, new sensors, and new ways of collecting behavioral data in 3D environments.
That’s when Chad made one of the episode’s sharpest, forward-looking points: building this future isn’t just about better AI; it’s also about heavier hardware, more compute, and fundamentally different business models.
Chad Reynolds (Guest): “You need to invest in infrastructure to build out and scale a lot of the models that are getting created, especially when we go into like world models. You’re going to need a lot more compute, sensors, and hardware you can put out in the world to pull all that data in and make sense of it all… On the social media side, these are somewhat invisible products where our attention is the compute… It just seems like they’re fundamentally different ways to make money.”
In other words, if language models created a software race, world models may trigger a physical one, where data collection, sensors, and real-world context become the raw materials of competitive advantage.
U of Digital Reading List:
- Google says its new ‘world model’ could train AI robots in virtual warehouses | Artificial intelligence (AI) | The Guardian
- AI’s next big thing is world models – Axios
Synthetic Audiences and the $50B Question
With the infrastructure conversation hanging in the air, Shiv moved the discussion into one of the most debated developments in marketing today: whether AI-generated consumer populations can meaningfully replace traditional research.
Synthetic audiences promise faster testing, lower costs, and unlimited segmentation. But they also raise a fundamental tension in marketing:
If your audience isn’t real, how much should you trust what they tell you?
Chad grounded the debate in the work his team is already doing, and importantly, clarified that his version of synthetic audiences isn’t about inventing fictional humans scraped from the internet. He creates his “AI populations” by continually scaling real human behavioral data.
Chad Reynolds (Guest): “Over the last five years, we’ve been building out AI models but also building out a human network of over three million people where you can constantly interview the world that feeds that model.”
Myles jumped in with a joke, teasing the idea that marketers might try to simulate audiences by opening a bunch of ChatGPT tabs.
Myles Younger (Co-Host): “You don’t just open up a thousand tabs of ChatGPT? I thought that was how a professional would do it.”
The joke landed because, of course, no one is actually building audiences that way, but it opened the door perfectly for Chad to explain why the real work behind synthetic populations is far more complex, structured, and intentional.
Chad Reynolds (Guest): “You’re not doing it right if you don’t have a thousand tabs open. That’s like how people were trying to do it—build individual agents that all do the same thing. The tough part is that there are no building blocks to create these as individuals.”
From there, Chad broke down the core design challenge synthetic audiences aim to solve: real humans aren’t just segments, they’re a mix of measurable attributes, emotional nuance, and unspoken motivations. Traditional research captures the first half. Synthetic models, when grounded in real human inputs, can capture both.
Chad Reynolds (Guest): “Typically, you have consumer segments and the traits you found in the data. There’s this whole other side of them that is a little bit more qualitative, and sometimes people don’t have the words to describe them… We collect more of the qualitative side of them and combine it all together so that you can create an entire population of them.”
That’s where the shift becomes meaningful: synthetic audiences aren’t just validators, they can become co-creators.
Chad Reynolds (Guest): “The typical use case today has been more in validation — just test my ideas. What’s really interesting is that… you can also make them the innovators or creators at the tip of the spear.”
The takeaway? These models aren’t designed to replace real humans; they’re designed to scale human insight beyond what research teams can do manually. And because these synthetic populations are grounded in real people, not hallucinated text, it becomes easier for marketers to build trust in the output.
Chad Reynolds (Guest): “As people get more familiar with them… it adds a ton of trust that this isn’t just making stuff up. Then you can open up a world of different use cases… innovation teams can now get instant feedback and also co-create with thousands of them at the same time.”
And finally, Chad clarified the branding nuance because language matters when defining a category that’s still forming.
Chad Reynolds (Guest): “I always try to avoid the word synthetic because it applies back to decade-old technology that was truly computer-generated. When it’s generated by real people, it brings a different category, almost as if we’re creating.”
U of Digital Reading List:
- Will AI ever be better than humans at predicting what humans want? | WPP
- Ep. 17: Synthetic Audiences, Smarter Campaigns, and the Future of Data-Driven Marketing – Myles Younger
The AI Grandmother Moment: Emotion, Boundaries, and the Strange New Consumer UX
Just when the episode felt deep in infrastructure, research models, and future architectures, things took an unexpected turn into something emotional and almost uncanny.
Shiv brought up the viral moment that’s been circulating online: someone using AI to role-play as their grandmother. It was funny, surreal, and oddly revealing, because it marks the point where, whether we’re comfortable with it or not, AI turns from utility into relationship.
What if the loved ones we’ve lost could be part of our future?
— Calum Worthy (@CalumWorthy) November 11, 2025
Shiv Gupta (Host): “This moment wasn’t just funny — it raised questions about comfort, memory, emotional design, and where the line is.”
Chad took the idea further, connecting it back to how synthetic populations behave logically, but also emotionally:
Chad Reynolds (Guest): “When you start interacting with AI in ways that feel personal, it changes what you expect from it. You’re no longer interacting with a tool — you’re interacting with something that remembers you, or responds to you emotionally.”
The group aligned on one point: AI is crossing from functional to emotional UX, and that crossover creates a new layer of responsibility, especially for marketers.
Myles Younger (Co-Host): “There’s a moment where it stops being a product and starts feeling like a relationship — and that’s where things get weird.”
Chad Reynolds (Guest): “They’re really just different ways you’re trying to bring your memories to life. Whether it’s putting a Google Home in your kitchen where family photos are popping up from the past, or you hear your dad’s voice, or you smell something, and it takes you back. These are just really different ways that you want to experience memory. You’ve got the component of advice or relationship where you want to feel like you’re not alone. So you’ve got these different dimensions that are starting to come together. It can come across in a super creepy way…”
U of Digital Reading List:
- Polaroid — Ai can’t generate sand between your toes – THEINSPIRATION.COM
- To grow, we must forget… but now AI remembers everything | UX Collective
- ‘AI will become very good at manipulating emotions’: Kazuo Ishiguro on the future of fiction and truth
- Trust, attitudes and use of artificial intelligence: A global study 2025
- How Americans View AI and Its Impact on Human Abilities, Society | Pew Research Center
- LG brings ‘emotionally aware’ targeted advertising to CTV via Zenapse
- Neuro-Contextual Advertising: Winning Audiences Through Interests, Emotions and Intentions
- Multimodal AI 101 for CTV
- Disney Expands “Magic Words” Ad Feature To Live Programming, Touts Other Tech Advancements At CES Showcase
Key Takeaways for Marketers & Leaders
- Automation isn’t eliminating work; it’s changing who (or what) does it. The real shift is from task execution to system orchestration.
- AI will drive sameness before it drives differentiation, unless humans stay in the loop. Strategy and creativity just became more valuable, not less.
- Data trust is now a competitive advantage. Not all platforms deserve your briefs, margins, or audience insights.
- Synthetic research may become the default, but only if it remains grounded in real human behavior. Synthetic populations are evolving from validation tools to co-creators.
- Emotional design is now a product decision, not an accident. Tone, memory, persona, and empathy will define the next wave of brand experiences.
- AI is creating winners and exposing who’s still funding the experiment. “There are AI earners and AI burners.” — Shiv
FAQs
What are world models?
World models are AI systems that don’t just process information; they simulate how the world works. Unlike chat-based AI models that only understand text, world models learn spatial relationships, sequences, physics, environment, memory, and time. These models are built by combining:
- Visual data (video, AR/VR, 3D maps)
- Behavioral data (what people do, not just what they say)
- Environmental context (objects, motion, interaction)
- Temporal patterns (how behavior changes over time)
With world models, brands may eventually be able to:
- Test campaigns in realistic virtual environments
- Simulate shopper journeys before spending media dollars
- Predict seasonal or cultural changes in consumer behavior
- Run scenario planning (“What happens if we change pricing, packaging, or messaging?”)
What are synthetic audiences?
Synthetic audiences are AI-generated populations designed to behave like real consumers. They’re built using real-world behavioral and qualitative inputs, not random generation. They are scalable, data-grounded models of real segments.
Are synthetic audiences replacing real research?
Not today, and maybe not ever. Right now, they’re being used primarily for: speed (to get early signals quickly), exploration (to test ideas before real-world spend), and iteration (to refine concepts before involving human panels). The direction we’re heading: hybrid workflows, where synthetic audiences accelerate early research and human panels confirm or refine the insight.


