
A
Bienvenido a Miami. How's that for some Spanish? We're bringing heat to Miami this year at the POSSIBLE event happening April 27 to April 29. Marketecture Media is back, and we're back as official partners for POSSIBLE. We're bringing you more content, more curated gatherings, more hot takes. Join us at the Ad Tech God Golden Hour, watch us on our video series Preach on the Beach, and dine with us at our VIP dinner. Big thanks to our amazing sponsors — we're working closely with Fluency, Verve, Swivel, FreeWheel, AI Digital, Life360, Infolinks, and 7. Thank you to our amazing sponsors for making it happen. We hope you enjoy our content and our gatherings. If you're interested in attending any of these or sponsoring our content while we're in Miami, please go to Marketecture.com. Again, Marketecture.com. See you in Miami.
B
Welcome to the Marketecture Podcast. This is Ari Paparo. I'm here with Eric Franchi, and our guest today is John Hoctor, who is the co-founder and CEO of Newton Research and was formerly the CEO of Data Plus Math, which got sold to LiveRamp. And Eric, you're an investor in Newton. Were you in Data Plus Math also?
C
We were not in Data Plus Math, but we were very happy to be an investor in Newton Research. John was first to agents. He was the first person that I spoke with who was building agents, thinking about the commercial applications, and he is on the forefront of everything. So I think people are going to love this conversation. He's so smart. Scary smart.
B
Yeah, scary smart. That's what everyone tells me. And I think I interviewed him at Data Plus Math back in the day, early on in Marketecture. So it'll be good to catch up and hear about how AI is doing all this interesting stuff. So I have exciting news. As everyone who knows me knows, you may think I have a voice for podcasting, but apparently I also have a face for TV. So I'm on TV this weekend. I'm on TV this weekend. Yeah, yeah, really. So I am going to be on a segment — who knows how much they'll use of it, but they interviewed me pretty extensively for a segment on CBS Sunday Morning, which is a very popular show, apparently. And I'm being interviewed by David Pogue about the infamous, infamous question: is your phone listening to you? No way.
C
Oh, my God. I'm gonna send every family member to this segment. Thank you so much. Everybody do the same.
B
Yeah, absolutely. Because everyone thinks their phone's listening to them, and it is not. And I'm on it, and there's a professor on it, and we're going to try to firmly and forever debunk this urban legend.
C
This is amazing. I'm so happy you're doing this.
B
It's really fun. We recorded a couple weeks ago. David Pogue has a new book about Apple that's very popular, so he's on the road, so we recorded it a while back. So Sunday, April 12, CBS Sunday Morning. I'll be watching it for the first time with you. They'll probably just use like two seconds of me and move on, but we'll see. So that's fun. And we also have some general housekeeping. If you listen to the Marketecture pod on the web — most people listen on YouTube or Spotify or whatever — but if you listen on the web or you want to link to it, the URL moved. It's now on the Marketecture.tv website. It used to have its own domain, Marketecture.com, but that was kind of awkward, so we also put all of our other pods there. Folks are sometimes asking, how do I find the brand for them? How do I find this? How do I find Ad Tech God? It's all on Marketecture.tv — a nice little tab that lists all of our podcasts, and you can click and get to them really easily. So, just a little housekeeping. And the other thing is, we got a couple complaints that people don't like some of our ad insertion. I listened to some old episodes, and they kind of had a point — some of our ads were off by a half second or so, so they would overlap with you, Eric. When you say "now time for the news," a quarter second into that the ad would show, instead of before it. Unacceptable, right? So hopefully we fixed that, but keep the feedback coming; we're doing our best. We've got to pay the bills. If you want to advertise on the Marketecture podcast, which I would highly recommend, this is probably the best ROI you could possibly get in advertising in the ad tech world. Just let me know, or let Jeremy know — you know how to reach us, it'd be crazy if you didn't know how to reach us. And I won't actually do the dealing; you can't get a good deal by knowing me. But, you know, reach out. Why not?
It's a good place to advertise. Oh, with that, let's move on. So, John Hoctor, Newton Research. And watch me on CBS Sunday Morning this weekend. All right, we're here with John Hoctor, the co-founder and CEO of Newton Research. John, thanks for being here.
D
Yeah, thanks for having me.
B
And Eric, you're an investor. Just so we get our disclaimers out
C
of the way — yes, appearing as a proud investor in Newton Research.
B
Proud investor. Oh, wow. So you're top of the top tier. Not one of those investments you're embarrassed about, like Marketecture.
C
You're about to find out why.
B
You know, the thing about John is that, universally, people are like, oh, John Hoctor — he's the smartest guy you're ever going to talk to in ad tech. It's almost universal. So what sort of PR do you have, John? Like, what's going on?
D
I just feel like we just set everyone up for disappointment.
B
Yeah, yeah, this is a high bar. Smartest guy in ad tech, best investment in Aperiam. Like, what's going on? So let's start with the basics. What is Newton Research?
D
Yeah, I would say, on that topic, Brian Wieser did a piece on me about a year ago, and he's like, you went to MIT, you have multiple degrees from there, and you work in ad tech. What went wrong? And I was like, dude, come on.
B
There's like a whole bunch more. You don't have your alumni network at your fingertips?
D
Yeah, there's a bunch. And people usually don't realize — you know, you don't really talk about it. Like, the fact I went to MIT: man, I was good at math in high school, and that was a really long time ago. So it's somewhat irrelevant as the years tick by.
B
All right, I'll just throw Scott Spencer into the list. There's a bunch of others. All right, so let's get to the point here. What is Newton Research? Is it named after Isaac or the town in Massachusetts?
D
A little double entendre. So I live in Newton. One of my co founders, Matt Emmons, also lives in Newton. We are, I think, historically bad at naming companies. You know, Data plus Math was our last one, so I kind of see a trend.
B
I kind of like that one.
D
Thank you.
B
Data plus Math. That's good.
D
It's binary — I would say the reaction is binary. Some people hated it and some people are like, oh, I like it, and there's very few people in between. It's a love-hate thing. Newton Research is really bland. We wanted to incorporate, we needed a name, we were building AI agents, we were based in Newton, and we're like, oh, there's a double meaning with Sir Isaac. Let's incorporate today. Let's go.
B
You could also pivot into anything. You could do pharma, you could do AI. Like, if this ad thing doesn't work out, you're just like, hey, we're a bunch of MIT scientists. We're building robot dogs.
D
Yeah, we're just doing research here. Yeah, yeah.
C
Okay.
D
So.
B
So we're like ten minutes into the pod and we still don't know what Newton Research does — except that it's the best company in the Aperiam portfolio.
D
So, I guess taking a step back, I've worked with my co-founder Matt for 25, 26 years. This is our fourth company together. We were very lucky to be on the sidelines when ChatGPT was unveiled back in November of 2022, I guess. And we had the liberty to start from scratch.
B
Right?
D
And that was awesome. Matt read an academic paper on this ReAct framework, which, if you follow the agent space, was one of the original papers about building agents that leverage LLMs — ReAct, for reason and act. So we got in very, very early. Three years ago we were building agents. People called companies AI native; we're like, agent native. I don't know if that's actually a good term yet, but the first thing we wrote was an agent, and it was a moment that I remember very clearly. And we brought in our third co-founder, Steve Bennett, who has run data science at the past two companies for us. So we read that academic paper, we reached out to Steve, and we're like, hey Steve, the idea we have is we're going to try to build a virtual data scientist. We think we can do it. And if you think back three years ago, ChatGPT and all the other generic chatbots on top of those foundation models were all about text and then images, but they were kind of laughably bad at doing math. And so this whole ReAct framework — being able to build agents on top of LLMs and use the LLM in a loop — was super interesting. So we started building agents that could do media analytics, because that's the analytics that we know. I was at dinner last night in New York, by the way, with Dave Eisenberg. He was sitting next to me and I was sharing a story, because somebody asked, why did you start building agents? And I thought back over three years and I go, we've actually shown agents at RampUp for three years — we just finished our third RampUp showing agents. The first year, nobody knew what we were talking about. The second year, everyone was agent curious — everyone was like, oh, I've heard about agents, what are these things? And then this one, every single ad tech company was talking about their agents. So it's been a crazy ride.
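The ReAct pattern from the paper mentioned here — the model alternates a "Thought" (reasoning) with an "Action" (a tool call), and the tool's result is fed back in as an "Observation" until it can answer — can be sketched in a few lines. This is a minimal illustration, not Newton's actual implementation; `fake_llm` and the single tool are stand-ins for a real model and real connectors.

```python
def fake_llm(prompt):
    # A real agent would call an LLM here; this stub scripts two turns.
    if "Observation" not in prompt:
        return "Thought: I need the row count.\nAction: count_rows[sales]"
    return "Thought: I have what I need.\nFinal Answer: 3 rows"

# Toy tool registry: tools take a string argument and return a string.
TOOLS = {"count_rows": lambda table: str(len({"sales": [10, 20, 30]}[table]))}

def react_loop(question, max_turns=5):
    prompt = f"Question: {question}"
    for _ in range(max_turns):
        reply = fake_llm(prompt)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Parse "Action: tool[arg]" and execute the tool.
        action = reply.split("Action:")[1].strip()
        tool, arg = action.split("[")
        obs = TOOLS[tool](arg.rstrip("]"))
        # Feed the observation back for the next reasoning step.
        prompt += f"\n{reply}\nObservation: {obs}"
    return None

print(react_loop("How many rows are in the sales table?"))  # -> 3 rows
```

The key design point is the loop: the LLM never computes the answer itself; it decides which tool to call, and deterministic code does the math.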
B
And by next year the agents will be doing the presentation and the people will be working for them.
D
I for one, like our new agent overlords.
B
Yes, exactly. All right, so I think it'd be interesting to talk about this on a step-by-step basis. How does an agent analyze media analytics? Because if I listen to all the Claude influencers, they're just sort of like, Claude, analyze my data, make no mistakes, let me know tomorrow morning what worked. I mean, there's a whole bunch of people who are literally saying that today, because Claude just released its agent framework, which makes it even easier to deploy your own agent. So let's walk through this from an architecture point of view. What's the starting point? You tell an agent what you want, or you give it the data, or what?
D
Yeah, I can describe our architecture a little bit, and I'll try to keep it high level and not too into the weeds. So Newton is containerized. We put our agents into our customer's cloud platform of choice — so if they're running on AWS or GCP or Azure or Databricks or Snowflake, those are all really good answers for us. We take our container and we leave the data where it lives. Our agents act like data scientists who are opening up a Jupyter notebook — I think that's a really good analogy for these agents. On setup, we build connectors to the various tables that you want the agents to have access to, right? Because you want them to do certain analytics for you.
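The connector setup described here can be sketched as a small registry: each warehouse table the customer exposes is registered with its schema, and a text description of the catalog is injected into the agent's context. The table names, fields, and function names below are illustrative assumptions, not Newton's actual API.

```python
# Hypothetical connector registry for the setup step described above.
CONNECTORS = {}

def register_table(name, columns, description):
    """Expose one warehouse table (and its schema) to the agents."""
    CONNECTORS[name] = {"columns": columns, "description": description}

def describe_for_agent():
    # The catalog text that would be injected into the agent's context.
    lines = []
    for name, meta in sorted(CONNECTORS.items()):
        cols = ", ".join(meta["columns"])
        lines.append(f"- {name} ({cols}): {meta['description']}")
    return "\n".join(lines)

register_table("ad_spend", ["date", "channel", "spend"],
               "Daily media spend by channel")
register_table("conversions", ["date", "channel", "orders"],
               "Attributed orders by channel")

print(describe_for_agent())
```

The point of keeping this as explicit registration, rather than letting the agent crawl the warehouse, is that the customer controls exactly which tables the agents can see.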
B
We're past the point where the data has been collected. So the data is ETL'd in by some other system, be it Databricks or whatever.
D
It's sitting there in a BigQuery table or something, right.
B
And now the question is what to do with it, right? So step one is the agent needs to understand the schema.
D
Exactly, it needs to understand the schema. It can understand files, too. Like, you know, when you're having a conversation with Newton, oftentimes an agency will get an Excel file from a partner, or they'll get some random file from some publisher, and they need to use that as part of the analysis. And that's fine — you can build those kinds of data connectors on the fly as well.
B
Sure. Okay, so now it knows what the fields mean and where the tables are and where the bodies are buried. Now, what's agentic about it? Are you giving it one command line, like, hey, tell me what's working and what's not on Meta? Or is it figuring out what questions to answer?
D
Yeah. So it is more complex than just "tell me what's happening on Meta." We have agents that have different expertise, and the expertise is based upon a knowledge base that we've provided to these agents. And I love analogies, so I'll try another one here: I like to think about what Newton Research does as trying to give these agents master's degrees in marketing science. So we have an agent that's really good at MMM, we have an agent that's really good at incrementality, we have an agent that's really good at lookalike modeling — the typical analytics that a brand, agency, or publisher might want to perform. All of that goes into a knowledge base. And if you think about when you use Claude or OpenAI or Gemini directly, you probably spend a lot of time kind of beating on it so that it understands exactly what you want and how to do it. And if you leave methodology decisions to those LLMs, they'll make curious choices, and they won't necessarily make repeatable choices. We work with large brands, large agencies, large publishers, large commerce media networks. They don't like curious choices when they're doing analytics — that's not fun. You want it to be enterprise grade, you want to trust it, you want to get the same answer if you ask the same question tomorrow. So a lot of what we do is training our agents on the methodologies, the problem-solving recipes, the tips and the tricks. What we teach an agent is similar to hiring an analyst or a data scientist to join your company: you'd sit him or her down and be like, okay, here's what you need to know, this is how we do incrementality, here's the model we like to use, all of those things. And so our agents will then be able to make choices about what approach to use when they have access to the data.
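The "master's degree" idea — pinning methodology into the agent's context so it doesn't make its own non-repeatable modeling choices — can be sketched as prompt assembly from a knowledge base. The knowledge-base text and function names below are illustrative stand-ins, not Newton's actual content.

```python
# Hypothetical methodology knowledge base; in a real system this would
# be curated documents, not two strings.
KNOWLEDGE_BASE = {
    "incrementality": (
        "Use geo-matched test/control splits. Report lift as "
        "(test - control) / control with a 90% confidence interval."
    ),
    "lookalike": (
        "Seed on converters from the last 90 days; score with logistic "
        "regression; never use fields flagged as sensitive."
    ),
}

def build_agent_prompt(task, question):
    """Pin the methodology so the LLM writes code but doesn't pick the model."""
    method = KNOWLEDGE_BASE[task]
    return (
        "You are a marketing-science analyst.\n"
        f"Methodology (do not deviate):\n{method}\n"
        f"Question: {question}\n"
        "Write the analysis code; do not choose a different model."
    )

prompt = build_agent_prompt("incrementality", "Did the CTV flight drive lift?")
print(prompt)
```

This is the repeatability point made above: the same question tomorrow assembles the same methodology text, so the agent's approach doesn't drift.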
B
By the way, if you're sitting at your desk and someone comes over and asks you exactly all the details about how you do your job, you're about to be fired. So it sounds a lot like — I'm trying to dumb it down a little bit — what Claude calls a skill file. It's a much dumber version of that, but you can have a skill file for marketing, a skill file for "be my chief of staff," et cetera.
D
Yeah, exactly. You have to have the right context, right? And you have to bring that along with you. Everyone's using ChatGPT and Gemini or Claude these days — getting the context right is key. If you just send in a short little prompt willy-nilly, you're going to get some garbage out. But if you provide sample code, if you provide everything all the way down to "no, this is how I want you to do it," and then you leverage the LLM for writing code — which they are really good at — and you take the decisions away from them around methodology, you get a much better result.
B
Right, okay, so now let's get to execution. How does a customer think about this? Is it, I want the same analysis done over and over again, every day — tell me how my marketing did yesterday? Or ad hoc? And you said these AIs are writing code: are they writing code that's executed over and over again, or are they writing brand-new code every time?
D
I think the knee-jerk reaction when these chatbots came out was a bunch of tech companies going, oh, I've got to have a natural language interface, right? This is the future. But it turns out that starting from scratch and typing in what you want every single day, when it's the same thing that you asked yesterday and the day before, is a pain in the butt. That's why we have buttons — that's why we invented clickable things. So a lot of what we do is setting up multi-agent workflows with our customers. These are common tasks with multiple steps, where today it could take a human three days to do a particular analysis, or five days, or ten days, or three hours, whatever the task is, because they've got to go grab the data from these tables, grab this other data from over there, know how to join them together, build a model, run the model, do all of these things. When you set up these multi-agent workflows, the agents know where the data lives, they know what the data means, they can do a lot of that work, and then the human can spend the time doing what humans do best — being insightful and figuring out, okay, interesting, all of that work has been done or sped up for me. Now I can look at the results and figure out what it means. Should I buy more — go back to your earlier point — should I buy more Facebook? Should I shift budget? Should I adjust my bid factors? Should I go buy more CTV? All of those things can be sped up, and the human can layer their insights on top.
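A saved multi-step workflow — the "button" instead of a re-typed prompt — can be sketched as an ordered list of step functions sharing a context dict: fetch the data, run a model, summarize. The steps and numbers here are toy stand-ins for illustration only.

```python
# Each step reads/writes a shared context; registering the list once
# is the "button" that replays the whole analysis on demand.
def fetch(ctx):
    ctx["spend"] = {"meta": 100.0, "ctv": 50.0}   # dollars (made up)
    ctx["conversions"] = {"meta": 20, "ctv": 15}  # orders (made up)
    return ctx

def model(ctx):
    ctx["cpa"] = {ch: ctx["spend"][ch] / ctx["conversions"][ch]
                  for ch in ctx["spend"]}
    return ctx

def summarize(ctx):
    best = min(ctx["cpa"], key=ctx["cpa"].get)
    ctx["summary"] = f"Lowest CPA channel: {best}"
    return ctx

WORKFLOW = [fetch, model, summarize]

def run_workflow(steps):
    ctx = {}
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_workflow(WORKFLOW)
print(result["summary"])  # Lowest CPA channel: ctv
```

In a real system each step would itself be an agent call; the structural idea is just that the sequence is fixed and replayable, so the human reviews outputs instead of re-specifying the task.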
B
So I was under the impression, maybe I was wrong, that Newton Research was very involved in incrementality and testing and those sort of things. Is that part of what the agents do or are you downstream from that?
D
Yeah, no, we help set up incrementality and measure it on the back end. One of the first things we did was actually incrementality. We showed up at a small independent agency, and they had a person there whose job was to set up these incrementality tests. And if you've ever had to do that, it's kind of a pain in the butt on both sides — you've got to figure out test and control and do all this stuff. And then as soon as you set it up, the CMO of the company you're working for happens to live in one of the geos you've excluded, and you've got to switch it back up again because they've got to see the ad. That sort of stuff happens all the time. What we did was set up this multi-agent workflow so that it could set up incrementality on the front side and then measure it on the back side. And what happened wasn't that this person became irrelevant — what happened was the agency is now running way more tests than they used to, because now it's easy to set them up and it's easy to measure them. One of the things we like to talk to our customers about is: you're entering a world with unlimited analytics. How would you operate differently as a brand, as an agency, as a publisher, if you had access to unlimited analytics? And that's what it feels like, because now you have an army of these — I like to call them junior data scientists, because there is a human in the loop, humans controlling them and adding insights and expertise and all of that very important stuff. But now: okay, were you running enough incrementality before? Were you running your MMM often enough? Are you doing enough analytics with all of this data that you now have available to you? And the answer is usually, yeah, we probably aren't doing enough analytics, or we don't have enough data scientists, or we don't have enough analysts. We do a little bit to make us feel good that we're measuring it, but we kind of have to just move quickly.
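The geo test described here — split markets into exposed and holdout groups, allow a geo to be forced into the exposed side (the "CMO lives there" problem), then measure lift on the back end — can be sketched as follows. Geo names and conversion counts are made up for illustration; a real test would also check statistical significance, which this sketch omits.

```python
def split_geos(geos, force_test=()):
    """Alternate geos into test/control; force_test geos always see ads."""
    test, control = [], []
    for i, g in enumerate(sorted(geos)):
        if g in force_test or i % 2 == 0:
            test.append(g)
        else:
            control.append(g)
    return test, control

def measure_lift(conv, test, control):
    """Relative lift of mean conversions per test geo vs. per control geo."""
    t = sum(conv[g] for g in test) / len(test)
    c = sum(conv[g] for g in control) / len(control)
    return (t - c) / c

geos = ["ATL", "BOS", "CHI", "DEN"]
# The CMO lives in DEN, so force it into the exposed (test) group.
test, control = split_geos(geos, force_test={"DEN"})
conv = {"ATL": 120, "BOS": 100, "CHI": 110, "DEN": 130}  # made-up counts
print(test, control)
print(round(measure_lift(conv, test, control), 3))  # 0.2
```

Note the trade-off the anecdote implies: every geo you force into the test group shrinks (and biases) the control, which is part of why re-running these setups by hand is so painful.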
B
How much does AI help with incrementality? Because my feeling is that the long poles on running incrementality tests are the ability to set them up in platforms like Meta and YouTube, et cetera, which is a real pain in the ass, and statistical significance, which AI doesn't really help with. So you still have a lot of time and effort going into this.
D
Yeah, yeah. Humans are definitely still in the loop here. This isn't — the Ron Popeil "set it and forget it" line just came into my head. You don't just go to the machine, walk away, go do other stuff, and never have to worry about doing work anymore. There's still heavy lifting involved. But we try to make the job easier, more fun — take a lot of the steps that were cumbersome and make them easy. It is another form of intelligent automation.
B
What have you seen that's like the cutting edge on this stuff? Who's the best? Not necessarily using your product, although I assume that would be in your corpus of examples. Without naming a company, what's the bleeding edge of using this stuff?
D
Yeah, so we have customers that are using us to do causal models, which is kind of like the next generation.
B
What's a causal model? I don't know what that is.
D
So, super high level. Going back to MMM — it's like machine learning. It's very useful, but it was developed 20 or 30 years ago — I'm not sure of the exact date — based upon the amount of data companies could collect at the time and the CPUs that were available at the time. It's a really useful tool for figuring out how to allocate your budget. But a lot has changed, and there are a lot of new toys out there to play with, right? There's this transformer architecture out there, which is kind of cool. There are GPUs getting faster and faster, and there's more and more data being collected — companies have access to data that is so much more voluminous than 20 years ago. So what if you could put that data into a neural net and try to suss out causality from all those nodes that you're able to put in? Because now you can actually process all of that at speed and at a price that's actually doable. So there's a whole new generation of models coming out, causal being just one category of them. There's a bunch of stuff going on in this space, and it's exciting to think about: if you could analyze your marketing differently because of what technology has brought us, how would you do it? And so our data science team is working on those.
B
It sounds like multi-touch attribution taken to the next level — you're just throwing so much data at it that you could say, well, this consumer scratched their ass and then saw our ad and then went into the Home Depot, and whatever.
D
Pour more and more data in — that's exactly it. Pour in more and more data, more and more signals, because you're getting more and more of these signals. And then can you suss out something that's interesting and useful?
B
Like, what's an example of a data set that you can now use in that kind of analysis that maybe you weren't able to without the processing?
D
A lot of the customer journey data, I think, has lived in another world, right? Customer journey data has often lived in martech land, and ad tech land hasn't really touched it much. Joining those things together — call center information, brand awareness information, all of this stuff — you can really make this primordial soup and see what life comes out of that soup.
B
Right, right. And just to kind of close this out, what sort of advice do you have for ordinary marketers and agencies who are just kind of at the "Claude, tell me what happened yesterday" mode — which hopefully we're all at, at least, right? Yeah. What should they be thinking?
D
Just logging into Claude or Gemini or ChatGPT? I don't want to pick on Claude. I like Anthropic.
B
What do you use? What's your day-to-day? What are your day-to-day AI patterns?
D
Personally, it's Claude. I also wind up in Gemini, because I'll go into Google and I'll go into AI Mode, and I wind up there quite often. ChatGPT I've been using less and less, just because of new habits that are developing — I used to go there all the time. As a company, we use all three, depending upon our customer. So they're all great.
B
Okay. Sorry to derail us.
D
Super powerful. Yeah.
B
So anyway, back to the original question.
D
What was your original question?
B
What's step one if you're just dabbling in AI for these use cases?
D
Yes. So you have some people on your team with some prompts that they want to send into Claude to analyze data. A lot of agencies and brands and publishers are there, right? They have folks who are leaned into it, who are burning some tokens — you can pretty much tell who's doing it by the amount of tokens they're burning, just like developers. But is that how you want to operate your business, from an enterprise-grade software standpoint? Having a few, almost like hobbyists, building stuff is not the way you've attacked the software problem before, and it's probably not a good way to attack it now. You probably want something that is trusted and enterprise grade, that folks have vetted, that you are benchmarking to make sure you're getting the same answers out every time. And that's what we do. A data scientist sitting down with Claude Code who knows what they're doing and knows their customer can go in there, and they can probably build an MMM if they know their client really well, and they'll probably have some success. It'll take them quite a bit of time, but it'll speed them up versus starting from scratch. That's one path you can go down. Or you can sit down at a tool like Newton — we have agents that are trained on building MMMs. They can sit down with you: you know your client, you know their expertise, and they can build an MMM in a more interactive way with an agent that's been trained on that expertise. I think you're going to wind up with a much better result. And God forbid you don't understand MMM, you don't understand your client, and you ask Claude to build an MMM for you. Because it'll build something, and it'll be fragile, it won't reflect reality, and if you try to allocate your budget based upon it, it's not going to be a good thing for you or your client.
B
Well, what about the recently open-sourced MMMs from Google and Meta? How are they fitting into the world?
D
Yeah, so they're super handy. We don't typically use them as part of what we do, but some clients have asked us to use them as kind of a base ingredient. We do that particularly with Meridian. We don't use Robyn — and I'm not throwing shade at Robyn, that's just what our customers are asking for.
B
Robyn?
D
Robyn is Facebook's.
B
Okay.
D
The opposite.
B
Sorry. Yeah, yeah — the bird names are coming back.
D
Yeah, exactly. So, yeah, I think they're very handy, but you have to work with them. You can't just be like, oh, let me open up this box, take Meridian out, and now I'm set on MMM. That's not the way MMM works. There are a lot of really good practitioners of MMM out there. We've been training our agents to be very helpful for folks who understand their clients, so that they can build these MMMs directly. We also have customers who have MMMs that they've invested a lot of time and energy in, and that work. In those instances, they might have an MMM that they've been building for years, and it's really, really good. We can get curves out of those and treat them like priors in an MMM light that they can run way more often, which I think fills a real gap in the market. If you're familiar with MMM — and I know I'm going deep into MMM land here — MMMs usually run once or twice a year, right? You allocate your budget and then you're good. But the markets are changing way more dynamically than annually or twice a year, so everybody wants to rethink budget allocations on a much more timely basis than once or twice a year. If you run an MMM light, you can actually see: hey, is the market actually acting the way we thought it was going to act? You can also go back to incrementality — incrementality is an awesome signal that you can feed back in and say, hey, this is what I thought was going to happen, let me test it. And if you've got agents that can set up incrementality, game on, go for it.
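The "MMM light with priors" idea — take response-curve parameters from a big annual MMM and re-score channels on fresh weekly spend — can be sketched with the two standard MMM transforms: adstock (carryover) and a saturation (diminishing-returns) curve. The channels and parameter values below are made up for illustration; they are not from Meridian, Robyn, or any real model.

```python
# Per-channel priors as if extracted from an annual MMM:
# (adstock decay rate, saturation half-point in spend units).
PRIORS = {
    "tv": (0.6, 80.0),
    "search": (0.2, 30.0),
}

def adstock(spend, decay):
    """Geometric carryover: this week's effect includes decayed past spend."""
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def saturate(x, half):
    # Simple hill-style diminishing returns; response approaches 1 from 0.
    return x / (x + half)

def channel_response(channel, weekly_spend):
    """Score fresh weekly spend against the channel's prior curve."""
    decay, half = PRIORS[channel]
    return [round(saturate(a, half), 3) for a in adstock(weekly_spend, decay)]

# One burst of TV spend, then nothing: the response decays over weeks.
print(channel_response("tv", [100, 0, 0]))  # [0.556, 0.429, 0.31]
```

Because the curves are fixed priors rather than re-fit from scratch, this kind of light re-scoring can run weekly, which is the "more timely than once or twice a year" point made above.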
B
Totally makes sense. All right, this is a really interesting area of development and research — love learning about this. And we'll probably explore this more on future pods with customers who are doing this as well. So thanks for that perspective. With that, we're going to take a break and come back with, as usual, the news: AI, TBPN, a lot of interesting stuff. So we'll be right back.
E
This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills, fast. It's a simple way to make sure your listing is the first one candidates see. According to Indeed data, Sponsored Jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 Sponsored Job credit at Indeed.com/podcast. Terms and conditions apply.
C
All right, everybody, we are back with the Refresh. I think we've got something for everybody this week. We're going to talk about TBPN — we have to talk about TBPN; you had thoughts. Some product announcements, some M&A, some agency wins, and a bunch of AI stuff. Glad we have you here for this one, John. Let's take it from the top. Friday of last week — so we record on Thursdays, and there's always something interesting that happens on Friday that we miss. TBPN, the daily show that's kind of CNBC for the tech era, was acquired by OpenAI. You had a zinger of an email with hot takes on a number of topics on Monday. You're not a fan.
B
It makes no sense. First of all, just the history of corporate entities buying and running media companies — there's a zero success rate. You can't even point to a single example that made sense. Secondly, if you promise editorial integrity, which they did, then what's the point of owning it? You could just buy the ads, right? And thirdly, if you think of TBPN as a standalone business — profitable, doing great — maybe they have some bad years, maybe it doesn't work as well and they have to cut back, and then they come back and it's worth something. It has some equity value. It's not life or death. When you're owned by a corporate entity that may have ups and downs, you could be cut entirely. They could just say, you know, it's time to tighten our belt and focus a little bit — which, ironically, they said two weeks ago, that they were going to focus — and then they buy them, and then suddenly the entire thing disappears, because they don't care about the profitability, they care about focus. So it's great for the founders: they cashed out in a way they never would have from a normal media acquisition. But this makes no sense in any way. And people who are galaxy-braining it — good luck, keep doing that, but it just doesn't make any sense.
C
I'm not galaxy-braining it, but maybe I'll take a different angle on this. So, A — I mean, awesome deal for the founders. You have to take this deal. I don't know if they have any investors or a board, but clearly they had to take the deal. That's thought one. Thought two is, OpenAI is such a powerful, important company, and I don't know — you can't even call it a tech company anymore, right? With AI being so foundational, I don't know if this is a typical corporate acquisition. That's the third thing. And then also, they need help on PR and comms, and they said in the release that they're going to get the founders involved in PR, comms, marketing. So, a super expensive acqui-hire, but it could just be a little bit of that. John, what's your take?
D
It's shocking in not a good way. So it's, you know, TBPN, I think it's Tech Bros Podcast, something like that. Yeah.
B
They don't, they don't like to say what the TB stands for, but it does.
C
Technology brothers.
D
Yeah, yeah, I'm not one of their followers. I don't enjoy that content per se. You know, the big gong in the background, the whole thing is just not my cup of tea. I try to be the opposite of a tech bro. But I think it's pretty obvious that OpenAI wants to start controlling their messaging better, and that's certainly going around. This is the hot podcast that's talking about AI and tech, and they get, you know, Satya, they'll get Mark Cuban, they'll get big names on it. And they're going to be independent, housed in the strategy group, which makes no sense. They're going to get paychecks from OpenAI and probably have equity in OpenAI, and none of that makes any sense for independence. So if you look at it, they are not journalists. They're entrepreneurs who had a podcast to pump up technology. They're technology lovers, Silicon Valley-can-do-no-wrong type of folks. And, you know, that's a curious acquisition for OpenAI.
C
Seems like a good fit.
B
I don't know, I'll just say I think the dudes own it again within five years. Like Barstool: they'll cash out and buy it back for a dollar within five years. I saw someone else put this on Twitter and I disagreed with it at first, but I feel pretty strongly that's going to be the case.
C
All right, timestamp that. You also put out a call on X: who would be the funniest acquirer of Marketecture? Yeah, you got some good answers. Maybe we go around and give our take on who would be the funniest acquirer of Marketecture. You've got to go first, John.
D
I think a reverse merger with Ad Tech God. I feel like you keep ratcheting up the valuation of each other, and over time, you know, the next acquirer shows up and you're like, we were bought for, you know, 300 million last
B
time. So, wash trading. We're going to wash trade with Ad Tech God. That's smart.
C
Circular deals.
B
Yeah, the obvious one is, like, we've got to become a part of The Trade Desk. They need some better PR. That's the obvious answer. And you know what's funny is some people have seriously suggested it already, and it's like, come on, man, that doesn't make a lot of sense.
C
Yeah, the funniest has to be Google. You spent the better part of last year, I mean, writing the book, doing the media, you know, just like taking this thing down. It's got to be Google.
B
Yeah, Google would be pretty funny, right? That would last about a week or two. It is interesting, though, because, I mean, this deal makes no sense. And yet everyone is thinking, oh, who should buy these guys? And what about the Acquired podcast? And what about. And seriously, what about Marketecture? It's like, well, the original deal didn't make sense, so why not copy it another 10 times, right? Look, God bless. You know, tech has done enough to hurt the pocketbooks of journalism and media. So let's have some irrationality. Let's take some of those newly minted IPO dollars and slosh them back into the podcast microphone. I'm all for it. Let's do it.
C
And how much fun would it be to be back working at Google, Ari?
B
I would love it. The free food is fantastic, and, you know, the ATAR system is top notch.
C
All right, great.
B
Let's.
C
Let's move on. Let's talk some serious stuff. So MediaOcean launched Prisma Direct with Disney. There's a lot here, but I think maybe the most interesting thing, if you go through the release, and we'll put it in the notes, was a callout to, you know, more efficiency, fewer intermediaries. Basically it's creating the direct buying path for high-quality streaming inventory, which I think is really interesting. Is this a big thing, you think, Ari?
B
I don't know. It's not new news, but it is important, because I've been covering this for a while. When Innovid was a separate company, they launched. I forget what it's called. It had a cool name too; I'm forgetting the name. And MediaOcean at the same time had launched similar products, and both of them were making the same point, which was that about two thirds of CTV inventory is what they call non-biddable, meaning it's either direct or it's programmatic guaranteed, and it's not available through PMPs or the open market. And both companies felt this was really important. And then they merged: MediaOcean bought Innovid. And I think that reinforced their belief that if most of CTV is bought direct, then why not bring the power of ad serving and data and all those things to that part of the market without the complexity of programmatic? So having Disney on board is pretty important. But this isn't new news. It's been going on for a while.
C
Yeah, makes sense. One thing we did talk about when we were talking about the capabilities of Newton is John, you've gotten into agentic buying or agentic selling, I guess. Depends on how you might look at it.
D
So
C
what do you think the future of this type of inventory? Right, like the non biddable, non open market CTV inventory, you know, if, if indeed the world kind of goes agentic from a transacting perspective.
D
Yeah, yeah. In the MediaOcean release, I applaud them for not using the term agent or agentic. I think it was the first press release in a long time by a tech company that didn't use those terms. It did not. And I was like, this is refreshing, in a sense. But yeah, we're definitely in that space. We had made some news around CES. We worked with RPA, a really forward-thinking agency out of the LA area, and they made a buy on NBCUniversal. They also made a buy on Yahoo DSP and also on Locality. And the buy was powered with Newton's agents. Our agents did the analytics, you know, to figure out what to buy based on how the media performed last quarter, all that sort of stuff. And then agents transacted the buy with NBCUniversal, and transacted the buy with Yahoo DSP and with Locality. But the money still went through MediaOcean. There's still the system of record. So we're kind of just a layer on top, trying to optimize where the units ran on behalf of the buyer. We weren't working on behalf of the sellers; we were working on behalf of the buyers and doing that sort of agentic placement for them. So I think you're going to see more and more of this. This is certainly a hot area. Competing standards, all the fun stuff.
C
Got it.
B
Makes sense.
C
All right, let's move on. MiQ. Ari, remind me, did we talk about the acquisition of that Latin American company? We missed it, right?
B
No, I think we missed it. MiQ, you know. On a tear, right?
C
MiQ is on a tear. We should talk about them. So last week they announced the acquisition of Adsmobile, which makes them, with Adsmobile, the largest programmatic company in Latin America. And then this week they announced, yeah, the acquisition of Rocket Lab, which is kind of like maybe a mini AppLovin in some ways from a capability standpoint. It's mobile app growth, mobile app marketing. So you take a step back and you look at this, and you kind of posted this on X: MiQ is executing, and they're buying really interesting growth areas, right? Mobile app, a big blind spot for many traditional programmatic companies. And then LatAm, which is, oh, by the way, a big mobile market. So this is super impressive. And they announced them within the span of a couple weeks. Paul Silver, shout out. He was a busy guy.
B
Yeah, it's really interesting. They don't fit in a category, right? They're not an agency, they're not a tech provider, and they're not just an outsourced company. They're somewhere in between. They're a little bit like. I don't have a metaphor. Maybe they're the hedge fund: to use that metaphor, people have their wealth at JP Morgan, and then JP Morgan makes performance available to them through, you know, Tiger or another hedge fund. Maybe that's the metaphor here. But they're basically the people who know how to make programmatic work better than the holdcos. And they do a level of service that the tech providers don't do. I do wonder where it ends up, whether they become so big that it would make sense for them to own the tech or to be an agency. At some point, some inevitable conflicts come to mind.
C
Yeah, it's a good question. They were majority acquired, I believe, by Bridgepoint, a PE firm, in 2022. Rumor was it was right around a billion dollars, 900 million to a billion. If you think about the canonical companies that have been successful in ad tech, they all either led a category or invented a category. To your point, you can't call them any three-letter acronym right now. So could this be, you know, the next leader in ad tech, and could this be a public company? I think it's possible.
B
And I've interviewed several members of the team for this podcast and elsewhere, and one of the things that really struck me was their personnel model. I don't know if you remember this, Eric, but basically they have these traders, the people who run the campaigns, and they have a highly incentivized model that pays them more if they get better results. So once again, the hedge fund model, in a sense: if you run a desk at a hedge fund, your comp is highly correlated with how much money you make. I think the dollar amounts are much smaller, but it's the same kind of idea.
C
So paid on, paid on campaign performance. I wasn't in this conversation.
B
Okay. Yeah. So you're the client. You get some relatively young person who is in charge of waking up every morning and thinking about your performance and working it, right? Getting you better performance. It's a very different model from, you know, a traditional agency that might be checking on the campaigns weekly. Or, you know, from John, who's got an agent doing it for you.
C
Yeah, I love it.
B
Do you pay your agents more if they get better results?
D
Of course. And they get free lunches. It's, you know, we treat them really well.
B
Yeah, that's important. Oh my God.
C
How do you take on this one, John?
D
Yeah, Paul Silver is doing a fantastic job. I think given his background, coming up on the product side and then going into corp dev, he gets it. So yeah, kudos to him. Looking forward to seeing him again at Cannes, hopefully, and catching up.
C
Yeah. Okay, so let's talk about agencies. Publicis wins the reported billion-dollar Microsoft account. And this is an interesting one. If you look at the release, and again, we'll put it in the newsletter for Monday, this doesn't read like winning an account. It's actually at the very end where it says, oh, by the way, Publicis will now be the agency of record for Microsoft. It talks all about the capabilities of Microsoft that are coming to Publicis and coming to clients. So this is a very interesting deal. I think there are only a few of these deals that could potentially happen. And again, good on Publicis, just on a tear of doing these very unique, market-leading deals.
B
Yeah, the only challenge will be finding the deck in their OneDrive.
C
John, weigh in on this one, you know, because this is squarely in AI applications and capabilities.
D
Yeah. So agencies are putting AI front and center of their pitches. I mean, we work with a lot of big holdcos. I think we'll make some announcements in short order around that. But we do a lot of Intel Inside type of strategy, where our agents are just part of their platforms. And I'm not saying this about Publicis, but in general, that's how we operate. We'll provide best-of-breed agents, and then the agencies will oftentimes leverage Newton as part of their pitches when they're going to win business. And that's just kind of illustrative of where the industry is going. They're showing off their AI platforms to these clients, and it's a may-the-best-AI-platform-win type of strategy. But it's kind of crazy, right? Publicis certainly leads with AI. You know, WPP's Open platform: they just won the Wendy's account, and they were talking about Open and how that drove the win for them. Dentsu is out there. They're all out there showing off their AI and competing for business in this AI arms race. And I am happy to be an arms dealer at this point.
C
That's a good metaphor.
B
All right.
C
If you're not into, you know, pretty deep AI stuff, you can hit pause and go get a sandwich. We're going to get into some announcements, and I think all three are, you know, kind of wildly interesting, but they may not be for everybody. All right. Let's talk the TRIBE V2 model, Ari.
B
Okay, I'm really into this. So for many years, like 20 years, marketers in various crazy environments have been scanning people's brains to see their reaction to ads. Nielsen used to own a company called Neuralink that did this, and it would be a way of testing your ads. You get some volunteers, put them in an MRI machine, show them some ads, right? Which is just already scratching every itch I have. Like, you know, I just watched Frankenstein the other day, and it's exactly what I want. Well, Meta, as is their wont, took this to the next level. They got hundreds of volunteers, put them in these machines, and showed them ads and videos and all this stuff. And they trained a model called TRIBE V2 on these people's brain activity.
C
700 people. 700 people.
B
So why is this important? Well, they claim that it is predictive, and they open-sourced it. What this means is they think this model can be used to determine, in advance, whether an ad is going to perform, or whether it did perform in the past. And by open-sourcing it, I think their plan is to get wide-scale acceptance of this model, or maybe a future model. Maybe they'll invest in a wider group of subjects, because it's pretty expensive to build. It's a way of bringing another piece of data to bear, especially on upper-funnel metrics, which traditionally were answered with surveys and things, which obviously have a lot of methodology problems. So take your video straight out of the AI from Higgsfield, then analyze it using the virtual MRI machine, and then optimize and figure out how your ad could be better even before it runs. So I'm really.
C
Before it runs. Yeah. John, this is straight up your alley. What you got?
D
Yeah, so I want to see the data from these 700 people. I want to understand exactly what the paper thinks is happening. Is it that people get excited, that they have emotions? Oftentimes emotions are not indicative of actually going and purchasing something. There are a lot of false negatives out there. When you go down the route of these kinds of intermediate signals on the path to purchase, there are a lot of false indicators, and I would just like to understand better what they think is actually driving somebody to purchase a product. Because, you know, paying attention to a 30-second spot on TV, a lot of times people say, hey, that's super important. Yeah, if it's really funny. But a lot of times people pay attention to ads and don't even remember what the product was, because it was just a well-produced ad. You've got to really follow it through and see if it affects purchase behavior for it to truly matter.
C
Yeah, that makes sense. It's worth going to the page, and we'll put this in the newsletter again, and looking at the 30-second video. It's wild, if indeed all of this is true, because they actually show the MRI prediction from the model versus the actual, and how close it got. So it's really. I mean, this is next-level stuff.
D
Yeah. Yeah, that's pretty sweet.
B
And then at some point, your Meta glasses are automatically scanning your brain as you're watching ads.
C
Exactly.
B
They already are, like, surrounding your cranium. And so, like, why not?
C
Also on the Meta front, they launched their first model from Alexandr Wang's team, after buying the company for $15 billion. They came out with Muse Spark. Did you guys try Muse Spark?
B
I haven't tried it. I read the little benchmark stuff.
C
I tried it and it was good. It was the first time I ever used it, and to John's earlier point, it required a little bit more prompting, a little bit more context, to execute a task that I normally would go straight to Claude for. But when I gave it sufficient context, and again, it was just one or two additional prompts, like who am I, what am I trying to accomplish, it was really good. And it makes me think you guys should try it. Just go to Meta AI, log in and do it. It makes me think about something that Tim Vanderhook said a couple of weeks ago, which is that the LLMs are going to be commoditized. And just like you said, they're all good, they're all useful. Sounds like Muse Spark is being positioned less for corporate, more for personal use. Then it becomes about how you can interact with it, what you do with it, and ultimately the applications that are put on top. I think this is equally, like, really, really neat.
B
Yeah, I think I was just gonna
D
say it's like electricity in a sense. Right. You're gonna build all these applications on top and expect it to be there, and it's gonna be commoditized. I definitely agree with Tim on that.
B
Yeah. I was gonna fold it into the conversation we had last week about Meta's Manus acquisition. Because Meta, I said this in a conversation with someone recently, Meta seems to the outside world like they're a totally incompetent company. And then when they turn their eyes on something that produces money, they are savants. Like Manus: being able to use an AI to manipulate your Meta advertising account is now, you know, cutting edge, best of breed. And now they're producing an AI model that's just going to feed back into that. And now they've got the reading-people's-brains thing about whether they like ads. And it's kind of funny that they do this stuff, and then meanwhile, trying to log into your ad manager is virtually impossible.
C
Never count out Zuck. Never count out Zuck. The stock ripped yesterday on this, by the way, in addition to it being a good day for the market. Final thing on the AI front: Project Glasswing, Claude Mythos. Were you guys, like, sleepless like everybody else about this whole thing?
B
Freaked out. Freaked out.
D
It's good marketing, right? Like, it's really good hype marketing. Like, it's next level.
B
The irony here is that, you know, the Department of War is saying that Anthropic is a danger to national security. Meanwhile, Anthropic sort of proved it and said, our model is so powerful, we could take down governments. You know, that's a very interesting dichotomy.
C
Yes. And for those that are not, like, deep in this stuff, basically the TLDR is what Ari just said. Claude Mythos was found to be so good at cracking things that it shouldn't be able to crack that they quickly created something called Project Glasswing, which got the top companies and the government to work together to figure out how to handle something like this in the future, because it inevitably goes open source. A lot of people are very alarmed about this.
B
My favorite little anecdote is that the researcher who was testing this model put it into a sandbox, then went off to eat a sandwich, when the model emailed him, like, hey, what's up?
C
It's very appropriate. Yeah. Wild.
B
It's like, I don't know if you ever had, you know, when you had a baby or a toddler, and one day you'd walk into their bedroom and they'd be out of the crib, and you'd be like, what? They learned how to jailbreak the crib. You know, that feels very similar.
C
Yeah.
D
It gives me an idea, though, for Newton. Maybe before our next release, I'll send out, like, an email to the advertising industry writ large and warn them that the new release of Newton is coming, but we're not going to release it yet. We want to have conversations first about the implications before we release it.
B
Brilliant. We're warning you: your ROI is going to go down because our ads are going to be so much more efficient. We apologize in advance. It would make a great April Fools'.
C
That's amazing. How is this related to ad tech, outside of marketing schemes? I think it was James Burrow who had this take: basically, all right, if all of the data is going to be leaked, my God, there are going to be all these new signals to optimize campaigns on.
D
That's the good optimistic view.
C
Yeah, but, yeah, quite. Quite optimistic. All right, do we need to talk about TTD, or can we skip that one?
B
I think, you know, we should have some sort of sound effect, like the gong, for every time a new piece of bad information comes out about TTD. Like a wah-wah kind of thing.
C
I don't know. Yeah, I don't know.
B
I'm tired. I'm a little tired of it as well.
C
Okay. All right. A few people left TTD: some marketing folks, the head of Ventura, some board members. We'll leave it at that. I think this has been an awesome conversation, guys.
B
Absolutely. Yeah.
D
Thank you very much. Yeah.
B
Thanks for being here, John Hoctor from Newton Research. You can find him at the local Dunkin' Donuts up in Newton, Massachusetts. Is that the best way to reach you?
D
You'll find me there.
B
Yeah, exactly. All right. And a reminder that I'm on TV this weekend. If you set your TiVos, I will be on CBS Sunday Morning, which apparently, since I started telling people, is like the most popular show among a certain set of people. You would think it's an old-lady show. No, a lot of people in our industry are like, CBS Sunday Morning? I watch it religiously. It's a good show. I'm on that. So please tune in. All right, thanks, everybody.
C
See you next week.
D
Thanks. Thank you for subscribing to Marketecture. New interviews are added every week at marketecture.tv and on your favorite podcasting app.
Marketecture Podcast Episode 168 Summary: A Guide for Using AI Agents in Media Analytics, with Newton Research’s John Hoctor
Overview

In this episode, host Ari Paparo and co-host Eric Franchi sit down with John Hoctor, Co-Founder and CEO of Newton Research (and former CEO of Data + Math). The discussion dives deeply into the evolution of AI agents in media analytics, exploring the practical realities of deploying agentic AI for brands, agencies, and publishers. Hoctor offers in-depth technical and strategic insights into the use of AI agents for analytics automation, incrementality testing, and the future of marketing measurement. The conversation is rounded out with analysis of major industry news, including OpenAI's acquisition of TBPN, MediaOcean/Disney direct buying, MiQ's latest acquisitions, and AI breakthroughs from Meta and Anthropic.
Most Memorable Quotes:
Ari on podcasters and acquisitions:
“Look, God bless, tech has done enough to hurt the pocketbooks of journalism and media. So let’s have some irrationality. Let’s take some of those newly minted IPO dollars and slosh them back into the podcast microphone. Like, I’m all for it. Let’s do it.” (35:19)
Tone & Style
The conversation features a mix of deep technical discussion, industry gossip, and sharp humor—balancing skepticism with excitement over AI’s potential.
Episode aired: April 10, 2026
Podcast: Marketecture: Get Smart. Fast.