Transcript
A (0:04)
Welcome to the Techmeme Ride Home for Thursday, September 18, 2025. I'm Brian McCullough. Today: has Intel found the big customer for its foundry that it needs to survive? The big tie-up with Nvidia announced this morning. All the announcements from Meta's event last night: smart glasses, and maybe the Metaverse is still a thing. And AI pattern matching might work as well for health prediction as it has proven to do with weather forecasting. Here's what you missed today in the world of tech.

Nvidia and Intel this morning announced a partnership to develop multiple generations of x86 products. Nvidia will buy $5 billion of Intel stock to seal that deal. Quoting Tom's Hardware: In a surprising announcement that finds two longtime rivals working together, Nvidia and Intel announced today that the companies would jointly develop multiple new generations of x86 products. The products include x86 Intel CPUs tightly fused with an Nvidia RTX graphics chiplet for the consumer gaming PC market, named the Intel x86 RTX SoCs. Nvidia will also have Intel build custom x86 data center CPUs for its AI products for hyperscale and enterprise customers. Additionally, Nvidia announced that it will buy $5 billion in Intel common stock at $23.28 per share, representing a roughly 5% ownership stake in Intel. Intel stock is now up 33% in premarket trading.

We spoke with Nvidia representatives to learn more details about the company's plans. Nvidia says the partnership between the two companies is in the very early stages, so the timeline for product releases, along with any product specifications, will be disclosed at a later, unspecified date. Given the traditionally long lead times for new products, it is rational to expect these products will take at least a year, and likely longer, to come to market. Nvidia emphasized that the companies are committed to multi-generation roadmaps for the co-developed products, which represents a strong investment in the x86 ecosystem. However, Nvidia tells us it also remains fully committed to its other announced product roadmaps and architectures, including its Arm-based GB10 Grace Blackwell processors for workstations, the Nvidia Grace CPUs for data centers, and the next-gen Vera CPUs. Nvidia says it also remains committed to products on its internal roadmaps that haven't been publicly disclosed yet, indicating that the new roadmap with Intel will merely be additive to its existing initiatives.

Nvidia hasn't disclosed whether it will use Intel Foundry to produce any of the products yet. However, while Intel has used TSMC to manufacture some of its recent products, its goal is to bring production of most of its high-performance products back into its own foundries, and some of its products never left. For instance, Intel's existing Granite Rapids data center processors use the Intel 3 node, and the upcoming Clearwater Forest Xeons will use Intel's own 18A process node for compute. This suggests that at least some of the Nvidia custom x86 silicon, particularly for the data center, could be fabbed on Intel nodes. However, Intel also uses TSMC to fabricate many of its client x86 processors now, so we won't know for sure until official announcements are made, particularly for the RTX GPU chiplet. End quote.
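A quick back-of-the-envelope check on those figures, sketched in a few lines of Python. Only the $5 billion, the $23.28 per share, and the roughly 5% stake come from the announcement; the rest is derived:

# Back-of-the-envelope check on the investment figures quoted above.
investment_usd = 5_000_000_000  # Nvidia's reported stock purchase
price_per_share = 23.28         # reported purchase price

shares = investment_usd / price_per_share
print(f"Shares purchased: about {shares / 1e6:.0f} million")  # ~215 million

# If ~215 million shares is roughly a 5% stake, the implied
# post-purchase share count is on the order of:
implied_outstanding = shares / 0.05
print(f"Implied shares outstanding: about {implied_outstanding / 1e9:.1f} billion")  # ~4.3 billion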
Well, as expected, Meta last night unveiled its big smart glasses updates. The crown jewel is the Meta Ray-Ban Display glasses, priced at $799 and arriving in the US September 30, with a wider rollout to other countries in 2026. They bring a built-in color display embedded in the right lens, capable of showing messages, maps, video calls, live translations, and AI-generated content. The glasses include a camera, speakers, and a microphone, and pair with a neural band wrist device for gesture controls. Battery life is apparently about six hours with mixed usage, with a charging case adding 30 additional hours.

Quoting Bloomberg: The Meta Ray-Ban Display features a screen in the right lens. It can show text messages, video calls, turn-by-turn directions in maps, and visual results from queries to Meta's AI service. The subtly integrated display can also serve as a viewfinder for the glasses' camera or surface music playback. For smart glasses, or AI glasses, as Meta now calls them, a display is key. The addition over time could allow consumers to offload some functionality to their eyewear that they would normally expect their phones to handle. "This feels like the kind of thing where you can start to keep your phone in your pocket more and more throughout the day," Chief Technology Officer Andrew Bosworth said. He added that the phone isn't going away, but glasses offer a more convenient way to access its most popular features.

The glasses introduce a new control system as well. While users can still swipe along the frame as with previous models, the primary interface is now hand gestures detected by a neural wristband strapped around the wearer's dominant hand. The user can select items by pinching their thumb and index finger, swipe through items by sliding a thumb across their gripped hand, double-tap their thumb to invoke Meta's AI voice assistant, or twist their hand mid-air to adjust music volume and other controls. In addition to app integrations and the ability to handle AI queries, the glasses include a live caption feature that displays spoken words in real time, including translations, similar to closed captions on a TV. The video calling function lets wearers see the person they're speaking with while sharing their own point of view. Users can reply to texts by sending an audio recording or dictating. Later this year, the wristband will add another option: handwriting words in the air. A future update will also let the glasses focus on the person a wearer is speaking with while filtering out background noise. The new glasses will go on sale September 30th and will include the wristband. Meta is offering two sizes and two color options: black and a brown shade called Sand.

Meanwhile, Meta also upgraded its existing Ray-Ban Meta line. The Gen 2 Ray-Ban Meta AI glasses double the battery life over the previous version, offer 3K Ultra HD video recording, and include enhanced audio and low-power AI features. On the athletic front, Meta introduced Oakley Meta Vanguard smart glasses designed for runners, cyclists, and other outdoor users. Costing $499 and shipping October 21, they feature a large unified front lens, a 12-megapixel camera with a 122-degree wide-angle view, open-ear speakers louder than earlier models, better wind noise control via a five-microphone array, and IP67 water and dust resistance. The glasses link with fitness platforms such as Garmin and Strava to provide real-time performance data, can overlay metrics on captured footage, and offer up to 9 hours of battery life (6 hours with continuous music), plus extra charge via the carrying case.

But back to the glasses with the screen in the lens, right? Because that's what we care about.
Well, Road to VR got a hands-on. Quote: The Meta Ray-Ban Display's 20-degree monocular display isn't remotely sufficient for proper AR, where virtual content floats in the world around you. But it adds a lot of new functionality to Meta's smart glasses. For instance, imagine you want to ask Meta AI for a recipe for teriyaki chicken. On the non-display models, you could definitely ask the question and get a response. But after the AI reads it out to you, how do you continue to reference the recipe? Well, you could either keep asking the glasses over and over, or you could pull out your phone and use the Meta AI companion app. At which point, why not just pull the recipe up on your phone in the first place? Now, with the Meta Ray-Ban Display glasses, you can actually see the recipe instructions as text in a small heads-up display and glance at them whenever you need. In the same way, almost everything you could previously do with the non-display Meta Ray-Ban glasses is enhanced by having a display. Now you can see a whole thread of messages instead of just hearing one read through your ear. And when you reply, you can actually read the input as it appears in real time to make sure it's correct, instead of needing to simply hear it played back to you.

It should be noted that Meta has designed the screen in the Ray-Ban Display glasses to be off most of the time. The screen is set off and to the right of your central vision, making it more of a glanceable display than something that's right in the middle of your field of view. At any time, you can turn the display on or off with a double tap of your thumb and middle finger. Technically, the display is a 0.36-megapixel (600 by 600) full-color LCOS display with a reflective waveguide. Even though the resolution is low, it's plenty sharp across the small 20-degree field of view. Because it's monocular, it does have a ghostly look to it, since only one eye can see it. This doesn't hamper the functionality of the glasses, but aesthetically it's not ideal. Aside from the glasses being a little chunkier than normal glasses, the social acceptability here is very high. Even more so because you don't need to constantly talk to the glasses to use them, or even hold your hand up to tap the temple. Instead, the so-called neural band, based on EMG sensing, allows you to make subtle inputs while your hand is down at your side.

To date, controlling XR devices has been done with controllers, hand tracking, or voice input. All of these have their pros and cons, but none are particularly fitting for glasses that you wear around in public. Controllers are too cumbersome. Hand tracking requires line of sight, which means you need to hold your hands awkwardly out in front of you. And voice is problematic both for privacy and for certain social settings where talking isn't appropriate. The neural band, on the other hand, feels like the perfect input device for all-day wearable glasses: because it's detecting muscle activity instead of visually looking at your fingers, no line of sight is needed. You can have your arm completely at your side or even behind your back, and you'll still be able to control the content on the display. The neural band offers several ways to navigate the UI of the Ray-Ban Display glasses. You can pinch your thumb and index finger together to select, pinch your thumb and middle finger to go back, and swipe your thumb across the side of your finger to make up, down, left, and right selections.
There are a few other inputs too, like double-tapping fingers or pinching and rotating your hand. As of right now, you navigate the Ray-Ban Display glasses mostly by swiping around the interface and selecting. In the future, having eye tracking on board will make navigation even more seamless by allowing you to simply look and pinch to select what you want. The look-and-pinch method combined with eye tracking already works great on Vision Pro, but it still misses your pinches sometimes if your hand isn't in the right spot, because the camera can't always see your hands at quite the right angle. But if I could use the neural band for pinch detection on Vision Pro, I absolutely would. That's how well it seems to work already.

While it's easy enough to swipe and select your way around the Ray-Ban Display interface, the neural band has the same downside that all the aforementioned input methods have: text input. But maybe not for long. In my hands-on with the Ray-Ban Display, the device was still limited to dictation input, so replying to a message or searching for a point of interest still means talking out loud to the headset. However, Meta showed me a demo, which I didn't get to try myself, of being able to write using your finger against a surface like a table or your leg. It's not going to be nearly as fast as a keyboard, or dictation for that matter, but private text input is an important feature. After all, if you're out in public, you probably don't want to be speaking all of your message replies out loud. End quote.
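To make the neural band's control scheme concrete, here is a minimal sketch of a gesture-to-command dispatcher along the lines described in the quote. Meta has not published a Neural Band API, so every name below is hypothetical, invented purely for illustration:

# Hypothetical sketch of the gesture-to-command mapping described above.
# Meta has not published a Neural Band API; all names here are invented.
from enum import Enum, auto

class Gesture(Enum):
    PINCH_INDEX = auto()        # thumb + index pinch: select
    PINCH_MIDDLE = auto()       # thumb + middle pinch: go back
    DOUBLE_TAP_MIDDLE = auto()  # double tap thumb + middle: toggle the display
    SWIPE_UP = auto()           # thumb swiped across the side of a finger
    SWIPE_DOWN = auto()
    SWIPE_LEFT = auto()
    SWIPE_RIGHT = auto()

# Gesture-to-command table mirroring the inputs named in the quote.
COMMANDS = {
    Gesture.PINCH_INDEX: "select",
    Gesture.PINCH_MIDDLE: "back",
    Gesture.DOUBLE_TAP_MIDDLE: "toggle_display",
    Gesture.SWIPE_UP: "focus_up",
    Gesture.SWIPE_DOWN: "focus_down",
    Gesture.SWIPE_LEFT: "focus_left",
    Gesture.SWIPE_RIGHT: "focus_right",
}

def handle_gesture(gesture: Gesture) -> str:
    # Because EMG classifies muscle activity rather than camera imagery,
    # the dispatcher never needs to know where the hand is; at your side
    # or behind your back both work.
    return COMMANDS.get(gesture, "noop")

print(handle_gesture(Gesture.PINCH_INDEX))  # -> select

The point of the sketch is the design property the reviewer highlights: unlike camera-based hand tracking, input arrives as an already-classified gesture with no line-of-sight requirement, so the UI layer reduces to a simple lookup.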
