
Instructure cut a deal with ShinyHunters to return stolen Canvas data, without disclosing the terms. eBay rejected GameStop's $56B bid as "neither credible nor attractive." OpenAI launches Daybreak for cybersecurity, Amazon employees game AI usage targets, and Mira Murati's first model drops.
Study and play come together on a Windows 11 PC, and for a limited time, college students get the best of both worlds. Get the unreal college deal: everything you need to study and play with select Windows 11 PCs. Eligible students get a year of Microsoft 365 Premium and a year of Xbox Game Pass Ultimate with a custom-color Xbox Wireless Controller. Learn more at windows.com/studentoffer. While supplies last; ends June 30th. Terms at aka.ms/collegepc.
Welcome to the Tech Brew Ride Home for Tuesday, May 12, 2026. I'm Brian McCullough. Today: Instructure cut a deal with the hackers, eBay rejected GameStop's $56 billion bid as "neither credible nor attractive," OpenAI launches Daybreak for cybersecurity, Amazon employees game AI usage targets, and Mira Murati's first model drops. Here's what you missed today in the world of tech.

Turn your market hunches into trades with Liquid. Liquid lets you go long or short on commodities, stocks or private markets right from your phone, with up to 100x leverage. That means a $10 position can give you up to $1,000 in market exposure, and you don't have to work around market hours or weekends. With Liquid, you get one platform for every market, open 24/7. More than $3 billion has already been traded on Liquid. Check them out at liquid.trade/tech. Sign up with referral code TECHBREW for a $25 deposit match. Terms apply.

Compromised education company Instructure has reportedly reached a deal with the hackers who breached its Canvas edtech platform to return stolen data and destroy copies, without saying what it gave in return. Quoting the Times: ShinyHunters, a hacking group, had claimed responsibility for the attack on Instructure, the Salt Lake City-based company that provides Canvas to about half of all colleges and universities in North America. The hackers said they had accessed the data of more than 275 million users at nearly 9,000 schools worldwide, including private conversations between students and teachers, as well as personal identifying information such as names and email addresses. Canvas was shut down for hours after the cyberattack on Thursday. The agreement, Instructure said in a statement, involved the return of stolen data and confirmation that the data had been destroyed at the hackers' end. Instructure added that it had been informed that none of its customers would face extortion as a result of the theft.
While there is never complete certainty when dealing with cybercriminals, we believe it was important to take every step within our control to give customers additional peace of mind to the extent possible, the company said. Instructure did not say what it had given the hackers in exchange for the return of the data. The company did not immediately respond to questions about the deal. Canvas has more than 30 million active users around the world, according to Instructure. The platform is used by teachers and students for coursework, management and communications. Instructure said the data compromised in the hack included usernames, email addresses, course names, enrollment information and messages. ShinyHunters on Thursday claimed the attack in a message that appeared on students' Canvas pages and was obtained by the New York Times. The group warned that it would leak an unspecified amount of data on May 12 if it did not receive a response from Instructure to its May 3rd ransom note. The group had threatened to leak several billions of private messages among students and teachers. Not much is known about ShinyHunters, which is believed to have been formed around 2020. Its goal appears to be to obtain personal records and sell them. One of its high-profile attacks was against Ticketmaster in 2024, when the hackers said they had stolen the user information of more than 500 million customers. End quote.

eBay has rejected GameStop's $56 billion takeover offer, saying the unsolicited bid is "neither credible nor attractive." Quoting Bloomberg: eBay's board turned down the offer after taking into account uncertainty around the financing plan, the operational risks involved and GameStop's governance, chairman Paul Pressler said in a letter addressed to GameStop's CEO. Pressler also cited GameStop's executive incentives and a takeover's potential impact on eBay's long-term growth. A representative for GameStop didn't respond to an emailed request for comment.
The company's shares fell about 3% as the market opened in New York on Tuesday; eBay's stock slipped about 2%. The rejection leaves GameStop CEO Ryan Cohen with the option to pursue a proxy fight to replace eBay board members, a move that could take more than a year. He had previously said he's prepared to take his plan straight to shareholders should the board turn down his offer. Cohen last week offered $125 a share, consisting of 50% cash and 50% GameStop stock, to eBay shareholders. That was a 20% premium to the stock's closing price the previous Friday. In January, GameStop unveiled a compensation package for Cohen that would reward him with options on more than 171 million shares if he lifted GameStop's market value to $100 billion. Cohen has said he would take over a combined entity but get paid solely based on the performance of that company. eBay's online marketplace has 136 million users who spend about $80 billion a year on the platform. The company's 2025 revenue totaled $11.6 billion, mostly from commissions. It also sells advertising and makes money processing payments. GameStop operates about 2,200 retail stores in the US, France and Australia after shuttering 227 locations last year. The retailer generated $3.6 billion in revenue in the 12-month period ending January 31, mostly from the sale of gaming hardware and collectibles. In his takeover bid, Cohen said GameStop's 1,600 US stores could be used to sell collectibles listed on eBay, as well as serve as shipping centers for goods sold on the e-commerce platform. End quote.

OpenAI has launched Daybreak, a cybersecurity initiative integrating AI models and Codex Security to help organizations patch vulnerabilities, and it sounds a lot like that essay we read yesterday about incorporating LLMs into security operations. Quoting Testing Catalog: Daybreak positions OpenAI's models as part of a defensive security workflow, not just a coding assistant.
It brings secure code review, threat modeling, patch validation, dependency risk analysis, detection support and remediation guidance into Codex Security. OpenAI says the goal is to help teams identify high-impact issues, generate and test patches inside repositories, and send audit-ready evidence back to existing security systems. The rollout is tied to OpenAI's trusted access for cyber framework standard. GPT-5.5 remains the default model for general work, while GPT-5.5 with trusted access is meant for verified defenders handling secure code review, vulnerability triage, malware analysis, detection engineering and patch validation. GPT-5.5 Cyber is being positioned as a more permissive, limited-preview model for specialized, authorized workflows, including red teaming, penetration testing and controlled validation. The availability is not fully public: OpenAI is asking organizations to request vulnerability scans or sales, while broader deployment is planned with industry and government partners in the coming weeks. The company is also tying the initiative to stronger verification, account-level controls, scoped access monitoring and human review, reflecting the dual-use risk of giving frontier models deeper cyber capabilities. Daybreak also expands the role of Codex Security, OpenAI's application security agent. Codex Security can build a codebase-specific threat model, inspect realistic attack paths, validate issues in isolated environments, and propose patches for human review. This turns the product into a more operational security layer for companies that already use Codex in software development. OpenAI is backing the initiative with a large partner list, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Zscaler, Akamai, Fortinet, Intel, Qualys. I'm not going to list all these because it goes on and on and on.
The partner structure shows that OpenAI wants Daybreak to sit across the full security chain, from vulnerability discovery and patching to monitoring, edge protection and software supply chain defense. For OpenAI, Daybreak is another step in turning Codex from a developer tool into an enterprise security platform. The company is not only selling model access, but also a governed workflow for using stronger AI systems inside sensitive environments. The main bet is that verified defenders should get fewer model refusals and stronger cyber assistance, while OpenAI maintains restrictions against malicious use such as credential theft, stealth persistence, malware deployment and unauthorized exploitation. End quote.

More signs that Elon might be willing to abandon Grok, for the simple reason that it's just not doing what he needs it to do. Quoting the Times: Elon Musk's artificial intelligence model Grok lags far behind its fast-growing competitors, and an agreement by parent company SpaceX to rent massive computing power to Anthropic raises questions about whether it can still catch up. The deal, signed in early May, will give the maker of the Claude AI model and chatbot all the computing capacity at one of Musk's main data centers. Anthropic and rival OpenAI have been racing to acquire all the computing capacity they can, as booming demand challenges their ability to serve their models. Since it launched two years ago, Grok has reached millions of users through its integration with Musk's social network X and controversial features such as a sexualized AI companion. But new data shows its growth appears to have flattened. Downloads of Grok fell to about 8.3 million from a high of more than 20 million in January, according to analysis firm AppMagic.
In a survey of more than 260,000 US consumers and workers who use AI, the percentage of respondents who said they paid for Grok remained mostly flat at 0.174% in the second quarter of 2026, versus 0.173% a year ago, according to research firm Recon Analytics. More than 6% of respondents said they paid for ChatGPT. "OpenAI is Coke, Anthropic is Pepsi, and Grok is RC Cola," said Ben Palladian, an engineer and tech investor based in Los Angeles. "I never really saw people drinking it." In recent public statements, Musk has characterized Grok as less than competitive in the AI race. In court for his suit against OpenAI in late April, Musk played down the size and significance of xAI, the AI company he recently merged into SpaceX. He described it as pretty small, very small, and the smallest of the AI companies. End quote.

Sure, AI is everywhere, but that doesn't mean enterprise value is a given. In a recent survey, PwC found the number of CEOs who reported revenue gains or cost reductions from AI is nearly equal to the number who say they're still stuck. So what's causing the issues? PwC boiled it down to clarity: leaders aren't clear about what's hype, what's reality, or where AI can actually create measurable impact. To help change that, PwC is offering their AI expertise and data. They explore how to tune out noise around AI and get clarity on what successful adoption looks like. Learn from the experts by heading to pwc.com/us/brewai. That's pwc.com/us/brewai.

When you're locked into a conversation, you're probably not taking the best notes, and that's normal. It's hard to listen closely and write everything down at the same time. So turn on Plaud Note Pro instead. Plaud Note Pro is a small AI-powered device that records conversations and turns them into clean, structured summaries automatically. It's built for people who spend a lot of time in conversations, interviews or calls and don't want to lose any important info.
Instead of scrambling to type, you just press a button and stay focused. Plaud handles transcription, summaries and key takeaways. It's also built with privacy in mind, with enterprise-grade compliance like SOC 2, HIPAA and GDPR. Get started at plaud.ai/TBRH. That's P-L-A-U-D dot A-I, slash TBRH. Use code TBRH for 10% off all Plaud devices.

Sources say some Amazon employees are using an in-house, OpenClaw-like Amazon tool named Meshclaw for unnecessary tasks in order to inflate AI token usage, after Amazon set weekly AI token use targets. Quoting the FT: The Seattle-based group has started to widely deploy its in-house Meshclaw product in recent weeks, allowing employees to create AI agents that can connect to workplace software and carry out tasks on a user's behalf, according to three people familiar with the matter. Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens, units of data processed by models. They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80% of developers to use AI each week and earlier this year began tracking AI token consumption on internal leaderboards. "There is just so much pressure to use these tools," one Amazon employee told the FT. "Some people are just using Meshclaw to maximize their token usage." Amazon has told employees that the AI token statistics would not be used in performance evaluations, but several staff members said they believed managers were monitoring the data. "Managers are looking at it," said another current employee. "When they track usage, it creates perverse incentives, and some people are very competitive about it." Silicon Valley groups are pushing to increase adoption of generative AI tools as companies seek to demonstrate returns on vast spending commitments to AI infrastructure and embed the technology more deeply into day-to-day work.
Amazon this year is expected to spend $200 billion in capital expenditure, the vast majority of which will go towards AI and data center infrastructure. The e-commerce group had posted team-wide statistics on AI usage by its staff, but recently limited access so that only employees themselves and managers can view their stats. Managers are discouraged from using token use to measure performance, according to a person familiar with the matter. Meta employees have similarly engaged in so-called token maxing to improve their standing on internal leaderboards. The Meshclaw tool that some employees have used to increase their statistics was inspired by OpenClaw, which became a viral sensation in February. OpenClaw allows users to run agents locally on their own hardware, including computers and laptops. Amazon's Meshclaw can initiate code deployments, triage emails and interact with apps such as Slack, according to people familiar with the matter. End quote.

Finally today, we now know what Mira Murati has been up to, and it's pretty interesting. Quoting The Decoder: Thinking Machines Lab has released a research preview of its first AI model, designed to break voice AI out of the traditional question-and-answer pattern. The model processes audio, video and text in parallel 200-millisecond chunks, and the startup claims it beats OpenAI's GPT Real Time 2 and Google's Gemini Live on interaction quality. Thinking Machines Lab has published a research preview of what it calls interaction models: AI models that handle interaction natively rather than through external scaffolding. The core idea is that interactivity should scale alongside intelligence, not get treated as an afterthought. Today's real-time systems like GPT Real Time or Gemini Live continuously take in audio, but the actual language model never sees it directly.
According to Thinking Machines, a harness of separate components sits in front of the model, including things like a voice activity detector that decides when a speaker's turn is over. Only then does the finished utterance get handed to the model, which generates a complete response. While it's talking, its perception freezes, receiving no new information until it finishes or gets interrupted. These components are far less intelligent than the model itself. That means behaviors that define real conversations simply don't work, according to Thinking Machines: proactively jumping in ("interrupt me if I say something wrong"), reacting to visual cues ("tell me when I've written a bug"), or speaking simultaneously, which would be useful for something like live translation. The lab argues that these handcrafted systems will eventually be outpaced by the advance of general capabilities. Thinking Machines' interaction models replace the harness with a model that processes the audio and video stream directly, rather than receiving pre-segmented utterances. The approach resembles full-duplex models like Moshi or Nemotron voice chat, which work in a similarly interleaved fashion but are smaller-scale models focused on latency rather than intelligence benchmarks. The real break from existing architectures is what the team calls time-aligned micro-turns. The model continuously processes 200 milliseconds of input and generates 200 milliseconds of output, with both token streams running in an interleaved fashion. Input and output no longer happen sequentially; instead they share the same clock cycle. This eliminates artificial turn boundaries, letting the model decide on its own whether to stay silent, interject, or speak alongside the user. Audio and images aren't preprocessed through large standalone encoders, but are fed directly into the transformer with minimal preprocessing. That saves latency, though it could also limit the model's ability to pick up fine visual details like text.
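To make the "time-aligned micro-turns" idea concrete, here's a minimal toy sketch of the loop structure: input and output advance on the same 200 ms clock, so the model can decide each tick whether to stay silent or speak, instead of waiting for a turn boundary. All names here (MicroTurnModel, run_conversation) are illustrative assumptions, not Thinking Machines' actual API, and the "model" is a trivial stand-in.

```python
# Toy sketch of time-aligned micro-turns: each tick, the model consumes
# 200 ms of input AND emits 200 ms of output on the same clock.
CHUNK_MS = 200

class MicroTurnModel:
    """Trivial stand-in: a real model would emit audio tokens."""
    def step(self, audio_chunk):
        # Decide per-tick whether to stay silent or speak; because this
        # happens every tick, the model can interject or overlap speech.
        if audio_chunk == "silence":
            return "stay_silent"
        return "speak"

def run_conversation(input_chunks):
    model = MicroTurnModel()
    timeline = []
    for t, chunk in enumerate(input_chunks):
        # Input and output share the same clock cycle: no external voice
        # activity detector decides when a "turn" has ended.
        out = model.step(chunk)
        timeline.append((t * CHUNK_MS, chunk, out))
    return timeline

print(run_conversation(["silence", "user_speech", "user_speech"]))
```

The point of the structure is that there is no pre-segmentation step: the loop itself is the conversation clock, and any intelligence about when to interject lives inside `step`.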
The real-time model has another challenge, though: if you need to respond every 200 milliseconds, you can't simultaneously spend minutes reasoning or searching the web. Thinking Machines solves this by pairing the interaction model with a second, asynchronous background model that handles longer tasks like reasoning, tool use and research. Both models share the same conversation context. The interaction model delegates tasks while keeping the conversation going, then weaves results from the background model into the conversation as they arrive, at a moment appropriate to what the user is currently doing, rather than as an abrupt context switch. The goal is to combine the response speed of the fast model with the depth of the reasoning model. Thinking Machines Lab was founded in February 2025 by Mira Murati and other former OpenAI researchers. In July, the company closed a $2 billion seed round at a $12 billion valuation, all without a product. A follow-on round, reportedly in the works at around $50 billion, didn't come together by the end of 2025, and several key employees have since left the company. The interaction model is the first in-house AI model backing Murati's claim that she can build a real competitor alongside OpenAI, Anthropic and Google DeepMind. End quote.

Nothing more for you today. Talk to you tomorrow.
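The fast/slow pairing the Thinking Machines segment describes can also be sketched in a few lines: a fast loop keeps the conversation going on every tick, a slow background task shares the same context, and its result gets woven in whenever it lands. This is a hypothetical illustration using asyncio, not the company's implementation; the function names and the shared-dict "context" are assumptions.

```python
# Hypothetical sketch: a fast interaction loop delegates a slow task to a
# background model, weaving in the result when it arrives.
import asyncio

async def background_model(task, context):
    await asyncio.sleep(0.25)  # stands in for slow reasoning / tool use
    context["results"].append(f"answer to {task}")

async def interaction_loop(ticks):
    context = {"results": []}  # shared conversation context
    transcript = []
    bg = asyncio.create_task(background_model("hard question", context))
    for _ in range(ticks):
        await asyncio.sleep(0.1)  # one micro-turn (~200 ms in the real system)
        if context["results"]:
            # Weave the background result into the conversation
            transcript.append(context["results"].pop(0))
        else:
            transcript.append("small talk")  # keep the conversation going
    await bg
    return transcript

print(asyncio.run(interaction_loop(5)))
```

The key property is that the fast loop never blocks on the slow task: it keeps emitting something every tick, and the delegated answer arrives mid-conversation rather than as a frozen pause.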
The WIRED newsroom is known for award-winning reporting on how technology shapes our world. On WIRED's Uncanny Valley, we take that curiosity even further. Each week, journalists from WIRED break down the biggest stories in tech while speaking directly with the people building, challenging and reshaping the future. Is the AI boom sustainable? How do you protect your privacy in an age of constant surveillance? Uncanny Valley tackles the questions driving today's tech debates and lighting up your group chats. Listen to new episodes every Thursday, wherever you get your podcasts.
Date: May 12, 2026
Host: Brian McCullough
This episode of Tech Brew Ride Home offers a concise yet dense rundown of today’s critical tech news. The main theme centers on the intersection of cybersecurity, AI innovation, and shifting industry norms—highlighting high-profile hacks, new AI models, internal culture battles at major tech firms, and notable business deals and rejections. Key discussions include Instructure’s controversial decision to negotiate with hackers, OpenAI’s foray into cybersecurity tools, Amazon’s unusual AI usage metrics, and the debut of Mira Murati’s new AI venture.
This episode delivers a brisk but insightful look at the fast-moving crossroads of tech, security, and enterprise AI—melding headlines with context and dropping punchy, memorable commentary throughout.