The Monopoly Report – Episode 69: "Anthony Katsur, CEO at IAB Tech Lab: Navigating AI, Privacy, and Adtech’s Agentic Future"
Host: Alan Chapell
Guest: Anthony Katsur (CEO, IAB Tech Lab)
Date: March 18, 2026
Episode Overview
In this episode, host Alan Chapell is joined by Anthony Katsur, CEO of the IAB Tech Lab, to dissect the present and future of digital advertising as it relates to agentic AI, privacy infrastructure, and content monetization in an era of rapid AI growth. The duo discusses emerging technical standards and frameworks, regulatory pressures, the challenges posed by vector embeddings, the launch of Tech Lab’s agent registry, and the ongoing push for fair compensation for publisher content in AI marketplaces. Throughout, they underscore the urgency of building robust privacy structures before the industry repeats past mistakes.
Key Discussion Points & Insights
1. Agentic AI: Hype vs. Privacy Reality
- Industry Rush and Consequences
  - Alan opens with a warning on the ad industry’s rapid embrace of agentic AI (self-acting software agents): excitement outpaces privacy preparedness ([03:00]).
  "The privacy infrastructure to support the technology is currently an afterthought." – Alan Chapell ([03:00])
- Regulators Are Watching
  - Tony emphasizes that regulators actively seek clarity on how agentic systems will handle user privacy ([06:03]).
  “Regulatory bodies...have explicit questions around how agents are going to deal with consumer privacy.” – Anthony Katsur ([06:03])
- Integrated Frameworks
  - Tech Lab is building privacy protocols (GPP, the TCF, and GPC) directly into its agentic standards ([06:20]).
  "We've built Global Privacy Protocol directly into it. The TCF has been built directly into the agentic frameworks, as is GPC. So that's step one." – Anthony Katsur ([06:20])
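As a rough illustration of what "built directly in" could mean for one of these signals, here is a minimal sketch of honoring GPC in an agent-side request path. The `Sec-GPC: 1` request header is defined by the Global Privacy Control specification, but the handler names and policy outcomes below are hypothetical, not Tech Lab's actual framework API:

```python
# Illustrative sketch only: the `Sec-GPC: 1` wire format comes from the
# Global Privacy Control spec; the functions and policy labels are made up.

def gpc_opt_out(headers: dict[str, str]) -> bool:
    """Per the GPC spec, `Sec-GPC: 1` signals an opt-out of sale/sharing."""
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_agent_request(headers: dict[str, str]) -> str:
    """Hypothetical agent-side gate: check the signal before any data use."""
    if gpc_opt_out(headers):
        return "contextual-only"      # suppress sale/sharing of personal data
    return "full-personalization"

print(handle_agent_request({"Sec-GPC": "1"}))  # contextual-only
print(handle_agent_request({}))                # full-personalization
```

The point of baking the check into the framework, as Tony describes, is that the agent consults the signal before acting rather than bolting it on afterward.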
2. The State of Privacy Compliance and Inputs
- Need for Multi-layered Verification
  - Alan and Tony agree consent and purpose are growing priorities for regulators, moving beyond binary signaling ([10:13]).
- The Role of Taxonomies
  - Tony introduces Tech Lab's privacy taxonomy (donated by Ethica), which standardizes how data elements, purposes, and data subjects are described, and adds sensitivity scoring ([10:41]).
  “At the core of the privacy taxonomy...gives the industry a shared way to describe three things around data...It does give a foundation to then build on top of.” – Anthony Katsur ([10:41])
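The actual taxonomy schema is not spelled out in the episode, but the three things Tony names (data elements, purposes, data subjects) plus a sensitivity score can be sketched as a record type. Every field name and the 1–5 scale below are assumptions for illustration only:

```python
# Hypothetical model of a privacy-taxonomy entry; the real Tech Lab schema
# may differ in every detail.
from dataclasses import dataclass

@dataclass
class TaxonomyEntry:
    data_element: str        # what the data is, e.g. "precise geolocation"
    purposes: list[str]      # what it is used for, e.g. ["ad measurement"]
    data_subject: str        # whose data it is, e.g. "end user"
    sensitivity: int         # assumed 1 (low) to 5 (high) scoring scale

entry = TaxonomyEntry(
    data_element="precise geolocation",
    purposes=["ad targeting", "ad measurement"],
    data_subject="end user",
    sensitivity=5,
)

# A shared vocabulary lets downstream tools apply uniform policy, e.g.
# flagging high-sensitivity elements for extra review.
needs_review = entry.sensitivity >= 4
print(needs_review)  # True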
- Audit and Attestation as Next Steps
  - Tony argues that meaningful compliance requires a hybrid of technical standards and ongoing auditing ([12:00]).
  "I think there's a combination here of attestation, verification, technical audit through things like the accountability platform." – Anthony Katsur ([12:00])
3. Vector Embeddings: Privacy Promise and Practical Pitfalls
- What Are Embeddings?
  - Tony provides an accessible explanation ([16:09]): embeddings transform data into vectors (multi-dimensional numbers), and measures like cosine similarity match audience segments or behaviors without exposing the raw data.
  “Embeddings are a way of translating very different kinds of inputs into a common mathematical form where the closeness of those usually means similarity.” – Anthony Katsur ([16:09])
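The "closeness means similarity" idea can be shown in a few lines. The tiny hand-written vectors below are purely illustrative; real audience embeddings have hundreds of dimensions and are produced by a model:

```python
# Minimal sketch of the embedding idea: inputs become vectors, and cosine
# similarity stands in for how alike they are. Vectors are made up.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sports_fan = [0.9, 0.1, 0.3]     # hypothetical segment embeddings
sports_reader = [0.8, 0.2, 0.4]
cooking_fan = [0.1, 0.9, 0.2]

print(cosine_similarity(sports_fan, sports_reader))  # high: similar profiles
print(cosine_similarity(sports_fan, cooking_fan))    # much lower
```

Note that the matching happens entirely on the numbers; neither party has to exchange the underlying raw attributes, which is the privacy appeal Tony describes.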
- Privacy Value?
  - Vector techniques don’t escape privacy law; the legal definition of personal data remains broad ([21:35]).
  “No, it doesn't pull you out of the rule set.” – Anthony Katsur ([21:35])
- Compliance Headaches
  - Hard questions remain: if consumer data is baked into a vector, how do you honor a deletion request? The industry lacks an answer ([23:41]).
  “You consumer have been calculated into this vector. How do I rip you out of that vector? You’re in the math now.” – Anthony Katsur ([23:41])
  Alan jokes: “It's akin to trying to remove a toddler’s pee out of the pool.”
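A toy example (synthetic numbers, not a real system) of why "you're in the math now": if a segment embedding is an aggregate such as the mean of per-user vectors, the stored result cannot be "un-averaged" for one user; honoring deletion requires keeping the inputs (or running sums) and recomputing:

```python
# Illustrative only: one simple way a user's data ends up "in the math".
def centroid(vectors: list[list[float]]) -> list[float]:
    """Mean of equal-length vectors: a simple aggregate embedding."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

users = {                       # hypothetical per-user vectors
    "u1": [0.9, 0.1],
    "u2": [0.7, 0.3],
    "u3": [0.2, 0.8],
}
segment = centroid(list(users.values()))

# Deletion request for u3: the centroid alone reveals no way to subtract
# one contributor, so the aggregate must be rebuilt from surviving inputs.
del users["u3"]
segment = centroid(list(users.values()))  # now built only from u1 and u2
```

Real model embeddings are far less separable than a mean, which is exactly why the deletion question Tony raises remains open.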
4. Agent Registry: Accountability and Current Gaps
- Launch & Function
  - Tech Lab’s agent registry went live two weeks before recording, with a dozen companies signed up ([26:56]).
  “We look for TCF identifier...start with that...as verification...regarding the question of bad actors...we don't have a mechanism for catching that right now.” – Anthony Katsur ([27:17])
- Attestation & Governance Needed
  - Current safeguards are insufficient; Tony is candid about industry-wide limitations in monitoring and penalizing bad actors ([27:17]).
  - Alan: “The speed at which agentic is going to operate necessitates a better approach than we as an industry currently have.” ([29:00])
- Early Days Analogy
  - Agentic is at “cruise control,” not true autonomy; robust protocols must be in place before full automation arrives ([30:17]).
  “Nothing is even semi-autonomous at this point. We’re just very early days...we need strict guardrails.” – Anthony Katsur ([30:17])
5. Content Monetization Protocol (COMP) & AI Licensing
- Why COMP Exists
  - COMP creates a structured, programmable means for LLMs (large language models) and publishers to negotiate and track access to content, including pricing structures for different content tiers ([32:38]).
  “It’s a mechanism for once...permission to use their data, compensation has been arranged...that’s where COMP comes into play.” – Anthony Katsur ([32:38])
- Limitations vs. robots.txt
  - COMP sits on top of existing crawl controls; it is not about blocking but about structuring deals ([34:52]).
  “Once you're blocked and if you're, if you're an LLM, obeying robots txt...you pick up the phone or send the email...that's where COMP comes into play.” – Anthony Katsur ([34:52])
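The layering Tony describes can be sketched with Python's standard-library robots.txt parser: robots.txt answers "may I crawl?", and a COMP-style deal would govern terms once access is negotiated. The bot name and URLs below are placeholders:

```python
# robots.txt is the first gate; hitting a Disallow is where COMP-style
# negotiation would begin. Bot name and site are hypothetical.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# A real crawler would fetch https://<site>/robots.txt via set_url() and
# read(); the rules are parsed inline here for a self-contained example.
rp.parse([
    "User-agent: ExampleLLMBot",
    "Disallow: /articles/",
])

blocked_path = "https://publisher.example/articles/feature-1"
open_path = "https://publisher.example/about"

print(rp.can_fetch("ExampleLLMBot", blocked_path))  # False: negotiate a deal
print(rp.can_fetch("ExampleLLMBot", open_path))     # True: crawl permitted
```

In other words, robots.txt remains the blunt allow/deny instrument, while COMP aims to replace "pick up the phone" with a machine-readable deal structure.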
- Enforcement Unresolved
  - As with ASCAP in music, compliance depends on the leverage of the content owner; new legal and economic models will emerge, but for now voluntary adherence is the norm ([38:25], [39:19]).
- Tokenization and Transparency
  - Future protocol versions could allow chunk-by-chunk tracking of how publisher content is used in LLMs, supporting negotiation and ensuring provenance ([39:35]).
  “We’re looking at...the ability for publishers to tokenize their content...and track that content in those systems...that's what we're looking at for COMP v2.” – Anthony Katsur ([39:35])
- Broader Brand Challenge
  - Misrepresentation in LLMs isn’t just a publisher issue; brands risk customer confusion and lost sales due to incorrect AI-generated information ([40:12], [43:41]).
6. Market Direction & Industry Call to Action
- Fragmentation Inevitable
  - The AI content marketplace will likely remain fragmented for several years, akin to early music licensing, with eventual consolidation ([43:58]).
  "I'd say over the next five years I think we see a fragmented content marketplace ecosystem and that's not a bad thing." – Anthony Katsur ([43:58])
- Tech Lab Participation
  - Tony urges all ecosystem participants (especially publishers, agencies, and brands) to actively join working groups; standards are only as strong as broad engagement ([44:47]).
  "Tech Lab is only as powerful...as its members engage with it. Think of Tech Lab as like a gym membership. You don't just sign up...and magically get in shape...you've got to participate." – Anthony Katsur ([44:47])
Notable Quotes & Memorable Moments
- On Privacy and Agentic AI
  "The technology is genuinely compelling, but the privacy infrastructure to support the technology is currently an afterthought."
  — Alan Chapell ([03:00])
- On Regulatory Pressure
  "That's not how they're thinking about it. They have explicit questions around how agents are going to deal with consumer privacy."
  — Anthony Katsur ([06:03])
- On Data Embeddings
  "You don’t have to explicitly share it with me, but I need to know what went into your calculation of the vector because if those are radically off, you’re never going to have any sort of closeness."
  — Anthony Katsur ([22:45])
- On Compliance Dilemmas
  "You have my email address, I'd like you to delete it type of requests. And so I'm not even sure where we go with that."
  — Alan Chapell ([23:46])
- On the Industry's Tendency to Rush
  "Can we just take a pause and just discuss as an industry, like, okay, what's the end goal here and where do we start foundationally and then build from there? ... Let's not repeat the sins of the past."
  — Anthony Katsur ([31:05])
Timestamps for Key Segments
- [06:03]: Agentic AI and current privacy infrastructure gaps
- [10:41]: Privacy taxonomy and needs for purpose-based compliance
- [16:09]: Plain-language explanation of vector embeddings
- [21:35]: The compliance challenge with vectors and deletion requests
- [26:56]: Tech Lab’s agent registry: launch and candid limitations
- [32:38]: Introduction and rationale for the Content Monetization Protocol (COMP)
- [39:35]: Vision for COMP v2 and content tokenization
- [43:58]: Future of AI content marketplaces—fragmentation and evolution
- [44:47]: Call to action for broader participation in Tech Lab
Summary & Takeaways
- The digital ad industry is leaping into agentic AI, but privacy, governance, and compliance have not kept pace.
- Regulatory expectations (especially in Europe) are rising, moving beyond mere signaling to mandate purpose verification and rigorous auditability.
- Technical standards (like TCF, GPP, privacy taxonomies, and the newly launched agent registry) lay a foundation, but implementation and enforcement are lagging.
- Vector embeddings and similar privacy-enhancing technologies offer promise, but also create new, unresolved compliance challenges.
- The content monetization framework (COMP) is an early but significant step toward structured AI licensing, though real enforcement mechanisms remain elusive.
- The future market will be fragmented; consolidation and real legal frameworks will take years.
- Tony Katsur repeatedly emphasizes that industry frameworks are only as robust as their community engagement.
- Both speakers caution against repeating programmatic’s mistakes by bolting privacy on after the fact.
Alan’s Closing Reflection:
“We're building all this cool agentic stuff on top of a bunch of open questions… The decisions we make right now about identity verification, privacy, signaling, data, provenance, consent are going to determine what the road looks like when we get to full autonomy. And if we wait until we're at full autonomy in order to figure it out, we will have repeated every mistake we made in the early days of programmatic.”
([46:02])
For more, join Tech Lab working groups and subscribe to the Monopoly Report newsletter and podcast.
