
Mark Zuckerberg has spun up his own AI bot to do his job for him. Or help him do his job quicker, I guess. OpenAI hires an ad guru from Meta. Elon loses a case. Maybe some managers actually want you to use as many tokens as possible. And how to get your LLM to recursively improve itself? Maybe?
Commercial Announcer
On December 12, Disney+ invites you to go behind the scenes with Taylor Swift in an exclusive six-episode docuseries.
I wanted to give something to the fans that they didn't expect. The only thing left is to close the book.
The end of an era. Don't miss Taylor Swift: The Eras Tour, The Final Show, featuring for the first time The Tortured Poets Department. Streaming December 12th, only on Disney+.
Brian McCullough
Welcome to the Techmeme Ride Home for Monday, March 23rd, 2026. I'm Brian McCullough. Today: Mark Zuckerberg has spun up his own AI bot to do his job for him, or help him do his job quicker, I guess. OpenAI has hired an ad guru from Meta. Elon loses a case. Maybe some managers actually want you to use as many tokens as possible. And how to get your LLM to recursively improve itself? Maybe. Here's what you missed today in the world of tech.

Hey, if everybody is trying to create AI agents to help them work better, then why not executives and founders as well, right? Quoting the Journal: Mark Zuckerberg wants everyone inside and outside his company to eventually have his or her own personal artificial intelligence agent. He is starting with himself. Zuckerberg, the chief executive of Meta Platforms, is building a CEO agent to help him do his job, according to a person familiar with the project. The agent, which is still in development, is currently helping Zuckerberg get information faster, for instance by retrieving answers for him that he would typically have to go through layers of people to get, the person familiar with the project said. Zuckerberg's agent project reflects a drive across the 78,000-person company to accelerate the pace of work, eliminate layers from its organizational structure, and change the day-to-day jobs of its employees to remain competitive with AI-native startups that have much smaller staffs. The company views AI adoption as critical to its future success and is experimenting with how to integrate more of it into its business. Zuckerberg, who has also been spending more time coding, recently previewed some of the efforts on the company's earnings call in January. We're investing in AI-native tooling so individuals at Meta can get more done. We're elevating individual contributors and flattening teams, he said. If we do this, then I think we're going to get a lot more done, and I think it'll be a lot more fun.
Use of AI tools has spread quickly through the ranks of Meta, in part because it is now a factor in employees' performance reviews. Meta's internal message board is filled with posts from employees sharing new AI use cases they have found and new tools they have built using AI, according to people familiar with the matter. Some inside the company described the atmosphere as reminiscent of the company's early days, when its name was still Facebook and its unofficial internal motto was move fast and break things. Zuckerberg said while giving testimony during a recent trial that the company has moved away from that motto in favor of something more akin to move fast with stable infrastructure. Employees have started using personal agent tools such as MyClaw that have access to their chat logs and work files and can go talk to colleagues, or their colleagues' personal agents, on their behalf, the people said. Another AI tool called Second Brain, which is somewhere between a chatbot and an agent, is also gaining momentum internally, according to people familiar with the matter. Second Brain was built by a Meta employee on top of Claude and can index and query documents for projects, among other uses. In the internal post announcing it to staff, the employee said it is meant to be like an AI chief of staff. There is even a group on the internal messaging board where employees' personal agents talk to each other, some of the people said. Separately, Meta acquired Multbook, the social media site for AI agents, and hired its founders in a deal earlier this month. End quote.

Speaking of Meta and OpenAI: OpenAI has hired Dave Dugan, a former top ad executive at Meta, to lead its ad sales, reporting to COO Brad Lightcap. Dugan stepped down as a Meta VP earlier in March. Quoting the Journal:
The high-profile hire underscores OpenAI's urgent push to generate new revenue streams to support its enormous funding requirements for its extensive artificial intelligence projects and computing needs. Earlier this year, the company began testing advertising on its popular free ChatGPT chatbot and the product's less expensive subscription tier. Entering the digital ad market, where Meta is a juggernaut that generated nearly $200 billion in ad revenue in 2025, marks a significant strategic shift for the AI company. Chief Executive Officer Sam Altman has said he had reservations about integrating ads into ChatGPT. Ads plus AI is sort of uniquely unsettling to me, he said at a fireside chat at Harvard University two years ago. I kind of think of ads as a last resort for us as a business model. He was concerned about losing user trust if people suspected that advertisers were influencing the chatbot's responses. OpenAI has said that ads wouldn't affect the chatbot's answers and user conversations wouldn't be sold to advertisers. Fidji Simo, who leads OpenAI's product and business teams as CEO of Applications, previously spent about a decade at Meta's Facebook. Although the modern advertising business is heavily reliant on algorithms and automated systems for the bulk of ad buying and selling, personal relationships remain a significant factor influencing where brands ultimately spend their advertising budgets. In Dugan, OpenAI brings on board a veteran known for his close relationships with the world's leading ad companies. Before his tenure at Meta, Dugan spent time working at several agencies owned by ad giants such as Publicis Groupe. End quote.

Samsung has rolled out Apple AirDrop support to Quick Share, starting with the Galaxy S26 series in South Korea, and plans to expand to more devices and regions shortly. Quoting CNET: the feature will need to be turned on from the phone's settings menu.
The feature will be arriving in an update to devices over the course of this week, and when it does, the Quick Share settings menu will gain a Share with Apple Devices toggle. After it's activated, the Quick Share feature on the Galaxy phone will be able to see Apple devices by opening the Quick Share menu, and can then send photos or files by selecting the device. For an iPhone to see the Galaxy phone, the device's AirDrop settings need to be set to Everyone. This is similar to how AirDrop compatibility works with Google's Pixel 10 phones, which gained the feature in a software update last fall. Samsung says AirDrop compatibility will eventually come to more Galaxy phones and is starting with the S26 series. Samsung says that the addition of AirDrop compatibility is meant to help with the company's ongoing effort to have its phones work with other operating systems. And because Apple and Samsung often dominate the best-selling phone lists around the world, the ability to share photos and media using AirDrop and Quick Share could quickly become ubiquitous. This could be especially true as Samsung eventually expands this to its lower-cost phone lineup, such as the $200 Galaxy A17. End quote.

Late last week, a US jury found that Elon Musk intentionally misled Twitter shareholders by disparaging the company in 2022 in order to buy it for a lower price than his original $44 billion bid. Quoting Bloomberg: Jurors in federal court in San Francisco found Friday that Musk intentionally misled Twitter shareholders when he tweeted that the social network, now called X, had too many fake accounts, and tried to back out of the deal. The jury rejected two of the four fraud claims. Musk's lawyers vowed an appeal. The eight-member panel calculated how much Musk's statements drove down the company's stock price for each trading day over a period of about five months.
The amount of damages he must pay to individual investors, which could total hundreds of millions or even billions of dollars, will be determined at a later date when shareholders submit claims. The verdict, following about three days of deliberations, marks a rare defeat in court for the world's richest person, who has been dubbed Teflon Elon for his track record of winning high-stakes legal battles that many expected him to lose. He prevailed in a 2023 trial over Tesla investors' allegations that he misled them in a tweet five years earlier saying he had funding secured to take the electric carmaker private. Musk is a co-founder of Tesla and its chief executive officer. Mark Molumphy, a lawyer for the investors, said after the verdict that he thinks the damages will amount to $2.6 billion. But even an award that high wouldn't dent Musk's net worth, which was $661.1 billion on Friday, according to the Bloomberg Billionaires Index. This case is much bigger than Twitter. This case goes right to the heart of Wall Street and what's been going on in recent years, said Joseph Cotchett, Molumphy's partner at Cotchett, Pitre & McCarthy LLP. It's a great example of what you cannot do to the average investor. Musk's lawyers noted that he has won other cases on appeal. We view today's verdict, where the jury found both for and against the plaintiffs and found no fraud scheme, as a bump in the road, Musk's legal team at Quinn Emanuel Urquhart & Sullivan LLP said in a statement. And we look forward to vindication on appeal. The jurors heard about two weeks of live testimony from Musk and top Twitter executives at the time, who recalled the turbulent six-month period in 2022 when the serial entrepreneur flip-flopped over whether he would buy the platform, resulting in hard-fought litigation with Twitter's board of directors to force him to follow through.
The investors claimed that Musk's social media posts and public statements, including a May 13, 2022 tweet stating the deal was temporarily on hold pending a review of the number of bots counted as Twitter users, were actually part of a deliberate plan to drive down the company's stock price so he could renegotiate at a better price. End quote.
Commercial Announcer
Score more with the college-branded Venmo debit card and earn up to 5% cash back with Venmo Stash. Get paid back with the Venmo debit card: you can instantly access your balance and spend on what you want, like game day snacks, gear, tickets and more. The more you do, the more cash back you can earn. Plus, there's no monthly fee or minimum balance. Sign up now at venmo.com/collegecard. The Venmo Mastercard is issued by The Bancorp Bank, N.A. Select schools available. Venmo Stash terms and exclusions apply; see Venmo Stash terms. Max $100 cash back per month cap.
Brian McCullough
Cap table management: who needs it? Well, you probably do, but that doesn't mean it should drain your time or derail your budget. Pulley knows there's a better way. That's why they help take the complexity and surprises out of equity management. Pulley's intuitive workflows, built-in compliance tools and decision-ready reporting are designed to work for you, not against you. Pulley helps you issue, track and manage equity, stay compliant with up-to-date 409A valuations, complete stock-based compensation reporting and more. Learn more and get started at pulley.com/brew. That's pulley.com/brew.
Commercial Announcer
It's crunch time at work, and you need to bring wings to your workday. Visit redbull.com/gettingitdone and answer a couple of questions about your work style to get a customized Spotify playlist tuned to your productivity. Plus, score a can of Red Bull on us while you go from to-do to done. And remember, Red Bull gives you wings. Supplies are limited. Terms apply. Visit the website for more information.
This episode is brought to you by White Claw Surge. Great podcast pick, friend. No surprises there. After all, you're all about finding the tastiest flavors out there, just like White Claw Surge. And with big, bold flavors to enjoy like blood orange, blackberry, cranberry and more, it's time to go all in on taste. Unleash the flavor. Unleash White Claw Surge. Please drink responsibly. Hard seltzer with flavors, 8% alcohol by volume. White Claw Seltzer Works, Chicago, Illinois.
Brian McCullough
Given the earlier piece about Zuck spinning up his own AI bot, I figured I'd turn you on to a new term called token maxing, which is sort of the inverse of that idea from last week that your boss might start policing the amount of tokens you use. Quoting the Times: An engineer at OpenAI processed 210 billion tokens, enough text to fill Wikipedia 33 times, through the company's artificial intelligence models over the last week, the most of any employee. At Anthropic, a single user of the company's AI coding system, Claude Code, racked up a bill of more than $150,000 in a month. And at tech companies like Meta and Shopify, managers have started to factor AI use into performance reviews, rewarding workers who make heavy use of AI tools and chastening those who don't. This is the new reality for coders, some of the first white-collar workers to feel the effects of AI as it sweeps through the economy. AI was supposed to help tech companies boost productivity and cut costs, but it has also created an expensive new status game known as token maxing among AI-obsessed workers who are desperate to prove how productive they are. At some tech companies, including Meta and OpenAI, employees compete on internal leaderboards that show how many tokens (the atomic unit of AI use, roughly equivalent to a word fragment) each worker consumes, two people familiar with those companies' practices said. Generous token budgets are becoming a job perk for coders, like dental insurance or free lunch, and some are spending thousands of dollars a month trying to automate as much of their own work as possible. I probably spend more than my salary on Claude, said Max Linder, a software engineer in Stockholm. Mr. Linder's employer pays for his tokens. Until recently, power users might have consumed thousands of tokens a day using an AI tool like ChatGPT, Claude or Gemini. A student writing an essay, for example, might go through 10,000 tokens, roughly equivalent to 7,500 words, including several rounds of revisions.
Using millions of tokens would have required hours in front of a computer doing nothing but typing, and using billions of tokens was virtually impossible. But the advent of so-called agentic coding tools has upped the ante. These systems can work unsupervised for hours at a time, reviewing and editing large code bases and writing entire software programs from a single prompt. Each agent can spawn sub-agents to handle different parts of the task, generating thousands of tokens at each step. Some AI systems, like the popular open source AI assistant OpenClaw, are designed to run 24/7, churning through tokens while their human users sleep. If you have something continuously running agents, you'll do 700 million tokens a week from a single full-time agent, said Ege Erdil, a co-founder of Mechanize, an AI startup, who estimated his own token consumption at between 1 billion and 10 billion a week. It doesn't really take that much. Some coders have mastered the art of AI multitasking, opening multiple windows and setting dozens of agents loose on their projects at a time. AI companies have encouraged those whales, giving them trophies and other rewards. And some tech executives are glad to see their employees embracing the new tools. They equate heavy AI use with increased productivity. If a programmer wants to operate a swarm of 10 AI agents running parallel tasks in separate windows, they're happy to foot the bill. End quote.

Finally today, let me tell you about the experiment AI guru Andrej Karpathy is running. Quoting Fortune: Karpathy recently tweeted about an experiment he'd run where he put an AI coding agent to work running a series of experiments to figure out how to improve the training of a small language model. He let the AI agent run continuously for two days, during which time it conducted 700 different experiments. Over the course of those experiments, it discovered 20 optimizations that improved the training time.
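The arithmetic in those quotes is easy to sanity-check. Here is a minimal Python sketch, assuming the article's rough ratio of 0.75 words per token and a hypothetical price of $3 per million tokens (actual per-token pricing varies by model and isn't given in the piece):

```python
# Back-of-the-envelope token math from the figures in the article:
# 10,000 tokens is treated as roughly 7,500 words,
# i.e. about 0.75 words per token.

WORDS_PER_TOKEN = 0.75  # heuristic from the article, not an exact tokenizer


def estimate_tokens(word_count: int) -> int:
    """Rough token estimate from a word count."""
    return round(word_count / WORDS_PER_TOKEN)


def weekly_cost(tokens: int, usd_per_million: float) -> float:
    """Cost of a token volume at a hypothetical per-million-token price."""
    return tokens / 1_000_000 * usd_per_million


# The student's essay: ~7,500 words -> ~10,000 tokens, as in the article.
essay_tokens = estimate_tokens(7_500)

# The "700 million tokens a week from a single full-time agent" figure,
# priced at a hypothetical $3 per million tokens.
agent_weekly = weekly_cost(700_000_000, usd_per_million=3.0)

print(essay_tokens)   # 10000
print(agent_weekly)   # 2100.0
```

At that hypothetical rate, one always-on agent's 700 million weekly tokens would cost about $2,100 a week, which is how a heavy user's bill can plausibly outrun a salary.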
Karpathy found that applying the same 20 tweaks to a larger, but still fairly small, language model resulted in an 11% speedup in the time it took to train the model. Karpathy called the system he built for conducting this experiment auto research. What caught many people's attention was that auto research is close to the idea of self-improving AI systems that was originally broached in science fiction and that some AI researchers fervently desire and others deeply fear. The concern is that recursive self-improvement, where an AI continually optimizes its own code and training in a kind of loop, could lead to what AI safety researchers sometimes call a hard takeoff or an intelligence explosion. In these scenarios, an AI system rapidly improves its own performance, leading it to surpass human cognitive abilities and escape human control. Karpathy's experiment wasn't quite this. The AI agent at the heart of his auto research setup isn't refining its own training setup. It's adjusting the training code and internal neural network settings for a different, much smaller and less sophisticated AI model. But Karpathy rightly noted that his experiment had big implications for how AI labs will do research going forward, and this might accelerate their progress. All LLM frontier labs will do this. It's the final boss battle, Karpathy wrote on X. He acknowledged that it's a lot more complex at scale, of course, since his auto researcher only had to worry about adjusting a model and training process that was contained in just 630 lines of Python code, whereas the training code base of frontier AI models is orders of magnitude bigger. But doing it is just engineering and it's going to work, he continued. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans optionally contribute on the edges.
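That recipe (a swarm proposes tweaks, the cheapest scale filters them, survivors get promoted to larger scales) can be sketched as a generic loop. Everything below is a hypothetical illustration, not Karpathy's code: the tweak parameters, the toy score function, and the scale schedule are all made up, and a real system would replace `evaluate` with an actual, expensive training run:

```python
import random

# Toy sketch of an "auto research" promotion loop: candidate training
# tweaks are scored at the cheapest scale first, and only the best few
# are promoted to each larger (more expensive) scale.

random.seed(0)  # make the toy run deterministic


def propose_tweaks(n_agents: int) -> list[dict]:
    """Each 'agent' proposes a candidate training tweak (made-up knobs)."""
    return [{"lr_scale": random.uniform(0.5, 2.0),
             "batch_scale": random.choice([0.5, 1.0, 2.0])}
            for _ in range(n_agents)]


def evaluate(tweak: dict, scale: int) -> float:
    """Toy proxy metric standing in for a real training run at a scale."""
    return (2.0 - abs(tweak["lr_scale"] - 1.2)) * tweak["batch_scale"] / scale


def auto_research(scales=(1, 4, 16), n_agents=32, keep=4) -> list[dict]:
    """Filter candidates at the cheapest scale, promote survivors upward."""
    candidates = propose_tweaks(n_agents)
    for scale in scales:
        scored = sorted(candidates,
                        key=lambda t: evaluate(t, scale), reverse=True)
        candidates = scored[:keep]  # only the best survive promotion
    return candidates


best = auto_research()
print(len(best))  # 4 surviving tweaks after the final scale
```

The design point is the one Karpathy makes: any metric cheap enough to evaluate, or with a cheap proxy, can sit where `evaluate` does, with humans optionally reviewing the survivors at the edges.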
Karpathy also said something else about auto research, which got many people excited. Any metric you care about that is reasonably efficient to evaluate, or that has more efficient proxy metrics, such as training a smaller network, can be auto researched by an agent swarm, he wrote. It's worth thinking about whether your problem falls into this bucket too. End quote. Nothing more for you today. Talk to you tomorrow.
Commercial Announcer
Two kinds of fishing out here: one for fish, one for your data. Hackers try to hook you, but Cisco Duo keeps every user and device protected. Cisco Duo: phishing season is over. Learn more at duo.com.
Episode: Zuck’s Personal AI Agent
Date: March 23, 2026
Host: Brian McCullough
In this episode, Brian McCullough covers a whirlwind of key tech news stories, with a spotlight on Meta CEO Mark Zuckerberg’s development of a personal AI agent to automate and enhance his executive workflow. The episode also delves into OpenAI’s high-profile ad executive hire, new device interoperability between Samsung and Apple, Elon Musk’s legal defeat regarding Twitter, the rise of “token maxing” in tech workplaces, and Andrej Karpathy’s fascinating experiment with self-improving AI agents.
Brian maintains an industry-savvy, concise, and slightly wry tone while delivering the news with a mix of curiosity and skepticism—particularly on the evolving strategies of tech giants and the obsession over AI productivity.
End of Summary.