
Gillian Ford
This is episode 748 of the AWS podcast, released on December 3rd, 2025.
Simon Leish
Hello everyone and welcome back to the AWS Podcast. Simon Leish here with you. Great to have you back for our very special set of re:Invent editions. And of course, I'm joined by Gillian Ford. G'day, Gillian. How are you doing?
Gillian Ford
It is the most wonderful time of the year here.
Simon Leish
Indeed, indeed. And unlike last year, where we sort of took turns at doing these episodes, we're doing them together this year, because that way we get to compare and contrast the things we think are cool and interesting. And as well as that, a call-out: this episode will focus on Matt Garman's keynote and all the cool things that came out during that keynote. We have two more episodes coming up this week that will include the other keynotes. We'll also be augmenting one of those episodes with a pre:Invent catch-up of all the things that have been released during pre:Invent as well, so you won't miss out on anything. So we've got our work cut out for us, Gillian.
Gillian Ford
We really do. It is a lot, but I'm always excited for it and it's always interesting.
Simon Leish
To see what appeals to people and which things are interesting, yes. So we're going to try and unpack some of that during this episode. But let's get started. Firstly, we're going to start with AWS Clean Rooms. AWS Clean Rooms has launched privacy-enhancing synthetic data set generation for ML model training. Now, obviously training is really, really important, but maintaining the privacy of the data sets you may be using, particularly internally, is really, really important too. So this feature generates synthetic training data sets that preserve the statistical patterns of the original data without the model having access to the original records. It's going to be interesting to see how customers choose to use this to better secure their training data sets whilst maintaining accuracy. Now, of course, unsurprisingly, there's been a huge amount of announcements related to AI and GenAI and all those cool things, so let's run you through them. We're also announcing Amazon Nova 2 Lite, which is a fast, cost-effective reasoning model for everyday workloads. It gives you leading performance at a really effective price point, and it now also supports extended thinking, including step-by-step reasoning and task decomposition before doing anything. It supports text, image, video, and documents as inputs, and you get a 1 million token context window. That's really interesting, because I'm sure many of you are finding yourself running out of context. You can also customize it, and it has two built-in tools, Web Grounding and Code Interpreter. Web Grounding will give you publicly available information with citations, and Code Interpreter lets the model run and evaluate code within the same workflow. So this is a really interesting new one to take advantage of.
Now we're also introducing Amazon Nova 2 Sonic, which is our new speech-to-speech model for conversational AI. This gives you natural, real-time voice conversations with your applications, and it delivers expressive voices, masculine and feminine, in each of the supported languages. So lots of new languages. It has natural expressivity, turn-taking, and seamless handling of user interruptions as well. And when we test this with customers, they actually like how it feels and how you interact with it; there's much better understanding. The model now handles alphanumeric inputs, short utterances, and 8 kHz telephony speech input as well. It also now has English, French, Italian, German, Spanish, Portuguese, and Hindi, and it can switch between languages within the same conversation, which is something I can't do, so it's a lot smarter than me. This is really cool, Gillian. I think you've had a lot of customers who've been looking for this extra capability.
Gillian Ford
Yeah, I'm super excited for this. I see customers, especially in the healthcare industry, who want to be able to add voice interactions, like the patient doctor type of interaction. So I think this is just going to be super helpful, especially when they have patients who are multilingual.
Simon Leish
If you're in the mood for training your own models, you can now build your own frontier models using Nova. This is called Amazon Nova Forge, and it allows you to do supervised fine-tuning and reinforcement learning on top of the existing models. You can really start your development from early model checkpoints, then blend your data sets with the Amazon Nova curated training data, host it all in AWS, and build things out. So you can see how that anonymized data set capability from earlier on comes into play. Now, there's a lot going on, so we're going to move through some of these quickly, but we're also going to dive deep on some. And one that I think is really interesting is that Amazon Nova Act is now generally available, and this allows AI agents to interact with user interfaces and automate complex workflows. Developers had been experimenting with this and they said, hey, we need this in production. But you can't just say, hey, give it access; you've got to do some work first. So now this basically provides you with a complete workflow for creating these interface automations in an isolated way that you can then deploy into your environment. So I think this ability to interact with the web, if you like, is kind of important.
Gillian Ford
Yeah, I think so too. And I also really like that it has this no-code web UI. Especially as more teams are thinking about evaluation-driven development and including domain experts in the entire agentic AI process, having a really nice UI just makes it really easy to get started with making those browser types of automations, and makes it a lot easier for them to work with the AI agent developers.
Simon Leish
Yeah, and doing it at scale is always the trick. Now, speaking of trust, Amazon Bedrock AgentCore has added quality evaluations and policy controls for deploying trusted AI agents. This is huge, because people want to not just deploy this technology, but deploy it with trust and assurance. So Policy in AgentCore is in preview, and it defines clear boundaries for agent actions by intercepting AgentCore Gateway tool calls before they run, using policies with fine-grained permissions. And then there's AgentCore Evaluations, also in preview. This monitors the quality of your agents based on real-world behavior, using built-in evaluators for dimensions like correctness and helpfulness, and custom evaluators for your own business-specific requirements. So this really allows you to have greater insight into and control of what's going on. There are also some new features that are expanding what agents can do. There's episodic functionality in AgentCore Memory. This is a new long-term memory strategy that lets agents learn from experiences and adapt solutions across similar situations, for improved consistency and performance in similar future tasks. This is huge, because if you've ever used an agent, it's like they have a complete memory wipe every time they run. They don't learn anything, they don't know anything new. This is kind of starting to address that. The other thing that's really interesting is bidirectional streaming in AgentCore Runtime. This lets you deploy voice agents where both users and agents can speak simultaneously, following a natural conversation flow. And this is really nice, because I was actually planning to build something related to this, so I'm going to use this functionality now. Vectors are a thing, I think, Gillian. We see vectors everywhere; you've seen this with your customers a lot. And Amazon S3 Vectors is now generally available, with increased scale and performance. Now, how much scale, you may ask?
Well, you can now store and search across up to 2 billion vectors in a single index, and up to 10,000 indexes in a vector bucket, so that's 20 trillion vectors in a vector bucket. My goodness. That's a 40x increase from the 50 million per index during preview. So it means you can basically have one honking-great index, which is kind of amazing. And query performance has been optimized: infrequent queries continue to return results in under 1 second, with more frequent queries now returning latencies around 100 milliseconds or less. You can also retrieve up to 100 search results per query, which is up from 30 before. Write performance has also improved substantially, supporting up to 1,000 PUT transactions per second when streaming single-vector updates into your indexes. So you can really crank this up, and it's all serverless. It's a nice set of quality-of-life improvements, really making this robust for production.
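For those who want a feel for what querying S3 Vectors looks like in code, here's a minimal sketch of building a query request. A caveat: the field names, and the `query_vectors` operation mentioned in the comment, are assumptions based on the launch description rather than a verified API contract, so check the current SDK documentation before relying on them.

```python
# Hypothetical sketch of an S3 Vectors query request. The field names here
# are illustrative assumptions, not a verified API contract.
def build_vector_query(bucket, index, embedding, top_k=100):
    # GA supports up to 100 results per query (up from 30 in preview),
    # so cap top_k at that limit.
    return {
        "vectorBucketName": bucket,
        "indexName": index,
        "queryVector": {"float32": list(embedding)},
        "topK": min(top_k, 100),
    }

# The request would then be sent with something like:
# boto3.client("s3vectors").query_vectors(**build_vector_query("docs", "docs-index", emb))
```

The pure request-building step is separated out here so you can inspect and test the payload without needing AWS credentials.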
Gillian Ford
There are very few things in terms of, like, architecture patterns where you get lower cost, lower latency, and higher throughput, and I mean, this is just one of those. I think this just checks all the boxes.
Simon Leish
Yeah, yeah, it really does. And the "I don't have to do anything to get the benefit of it" is always a big tick. Now, of course, there's always lots going on in GenAI, and as we've noticed throughout the development of this technology, there's not one model to rule them all. You need to pick your model carefully depending on your use case. And Amazon Bedrock has now added 18 fully managed open-weight models, including the new Mistral Large 3 and a bunch of others, which I'll tell you about, into Bedrock. So what do you get? You get new Mistral AI models: Mistral Large 3; Ministral 3 3B, which is a new one for me, looks like a typo, but it's not; and Ministral 3 8B and Ministral 3 14B as well. From Google, you get Gemma 3 in three different model sizes. You get Moonshot AI, you get MiniMax AI, you obviously get NVIDIA, as well as OpenAI and Qwen. So there are now over 100 serverless models available to you in Bedrock. And this is the power of Bedrock: it's the same API, the same way of interacting, but you get to use the models that make sense for you. I'm really excited about this, Gillian, because I don't know about you, but I'm seeing a lot of customers having to choose lots of different models for different tasks, and often it's a price-performance trade-off.
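To make the "same API, different models" point concrete, here's a small sketch using the Bedrock Converse API request/response shape. The model IDs are placeholders you'd look up in the Bedrock console, and the `client` argument stands in for a real `boto3.client("bedrock-runtime")`.

```python
# Sketch: swapping models in Bedrock means changing only the model ID;
# the Converse request/response shape stays the same.
def build_messages(prompt):
    # Converse API message format: role plus a list of content blocks.
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask(client, model_id, prompt):
    # `client` is expected to be boto3.client("bedrock-runtime").
    # The model IDs (e.g. for Mistral Large 3 or Nova 2 Lite) are
    # placeholders to be looked up in the Bedrock console.
    response = client.converse(modelId=model_id, messages=build_messages(prompt))
    return response["output"]["message"]["content"][0]["text"]

# Same call, different provider -- only model_id changes:
# ask(client, "mistral.mistral-large-3-...", "Summarize our Q3 results.")
# ask(client, "amazon.nova-2-lite-...", "Summarize our Q3 results.")
```

Keeping the client as a parameter also makes the routing logic easy to test with a stub, without touching a live endpoint.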
Gillian Ford
Yeah, absolutely. I think more and more customers going into next year are going to really be wanting to experiment with different models that are out there. So model choice is always a win.
Simon Leish
Absolutely. Another announcement: we're announcing the ability to use Amazon SageMaker AI with serverless MLflow. This is a serverless capability that eliminates infrastructure management, and it transforms your experiment tracking into an immediate, on-demand experience with automatic scaling. That means you don't have to do capacity planning. So it's kind of a zero-infrastructure approach to SageMaker AI.
Gillian Ford
Let's talk about compute. I don't think it would be a re:Invent launch if we didn't talk about new instances.
Simon Leish
It would make us sad. It would definitely make us sad.
Gillian Ford
It really would. Yes. And we've got new memory-optimized instances. These are the EC2 X8aedz instances, powered by 5th Gen AMD EPYC processors, for memory-intensive workloads. They deliver up to 5 GHz processor speeds and 3 TiB of memory. So these are going to be ideal for electronic design automation workloads and memory-intensive databases requiring high single-threaded performance. This next one is kind of like a mind trick, I feel like, because we've got AWS Lambda managed instances, which are like, wait a minute, there's serverless, there's EC2, there's no management, what does this mean? Don't worry, we're also figuring this out together. The reason we have this is that we wanted to address some customer needs around accessing specialized compute options, and doing so in a way where you could optimize costs for steady-state workloads without sacrificing the serverless development experience that you're used to with Lambda. So you kind of get the best of both, really. So these are Lambda managed instances. I know I need to say this slowly, because it is a lot to wrap your head around if you're used to picking one side or the other. You get access to the latest generation of EC2 instances, and AWS handles all of the operational complexity: instance lifecycle management, OS patching, load balancing, and auto scaling. This means you can select compute profiles that are optimized for your specific workload requirements, like high-bandwidth networking for data-intensive applications, without taking on the operational burden of having to manage EC2 infrastructure. Another one that is really cool, and it's like a mashup of Lambda functions and Step Functions: you can now build multi-step applications and AI workflows with AWS Lambda durable functions. This allows you to build these reliable, multi-step applications.
So, for those who are maybe familiar with the Lambda or Step Functions types of experience for these workflows: these durable functions are regular Lambda functions with the same event handler that you're already used to, but you can now write sequential code in your preferred programming language. And the durable functions are going to track progress, automatically retry on failures, and suspend execution for up to one year at defined points.
Simon Leish
That's a lot of suspension.
Gillian Ford
Yeah, it really is. I mean, the paradigm of the types of applications that you can build now with serverless really starts to bend a bit, which is pretty cool. And this is all without paying for idle compute during wait times. So that is super cool. They use a checkpoint-and-replay mechanism known as durable execution to deliver these capabilities. I know all of you are like...
Simon Leish
How is this actually, how does this work?
Gillian Ford
Yeah, so cool. So after enabling a function for durable execution, you add the new open source durable execution SDK to your code, and Lambda is going to do a lot of the heavy lifting for you. So, wow, this is super cool. I know a lot of my serverless friends who are listening, this is definitely something you want to go and get your hands on.
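To give a feel for the checkpoint-and-replay idea described above, here's a toy illustration in plain Python. This is not the actual durable execution SDK, just a sketch of the mechanism: completed steps are checkpointed, and on a retry the function replays from its checkpoints instead of redoing the work.

```python
# Toy checkpoint-and-replay: not the AWS SDK, just the underlying idea.
class DurableContext:
    def __init__(self, checkpoints=None):
        # step name -> saved result; persisted between invocations in the real thing
        self.checkpoints = dict(checkpoints or {})

    def step(self, name, fn):
        if name in self.checkpoints:
            return self.checkpoints[name]   # replay: reuse the saved result
        result = fn()                        # first run: execute the step
        self.checkpoints[name] = result      # checkpoint it for future replays
        return result

def order_workflow(ctx):
    # Two sequential steps; on a crash-and-retry, completed steps are skipped.
    order = ctx.step("create_order", lambda: {"order_id": 42})
    payment = ctx.step("charge_card", lambda: {"status": "paid"})
    return {"order": order, "payment": payment}
```

In the real service the checkpoints would be stored by Lambda, and the suspension between steps (up to a year, per the announcement) doesn't cost you idle compute.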
Simon Leish
I think it is really interesting for those use cases where you just needed a little bit of pausing. Like, you didn't need full Step Functions, but you needed it to cope with a little bit of a delay or a break. And this is going to unlock a whole lot of new stuff. I'm really excited too.
Gillian Ford
Yeah. And especially for a lot of people who start off with Lambda, they love building in Lambda functions, and then maybe they've hit some limitations before. So now I think this really just continues to expand the use cases as companies and their business requirements grow. So, really cool. All right, next we've got one in containers: we're announcing Amazon EKS capabilities. This is an extensible set of Kubernetes-native solutions that streamline workload orchestration. These are fully managed, integrated platform capabilities. They include open source Kubernetes solutions that many customers are using today: Argo CD, AWS Controllers for Kubernetes, and Kube Resource Orchestrator. So now you can build and scale Kubernetes applications without managing complex solution infrastructure.
Simon Leish
So let's talk about where your data lives: databases. Now, we've always had Savings Plans for EC2, but now we have Database Savings Plans for AWS databases. This helps you reduce your database costs by up to 35% when you commit to a consistent amount of usage per hour over a one-year term. Savings are automatically applied each hour to eligible usage across supported database services, and any additional usage is just billed at the on-demand rate. So this gives you more flexibility in your cost savings and improves the cost profile of your workload. Definitely one worth having a look at. Now, let's talk about scaling and other capabilities. We have new capabilities to optimize costs and improve scalability on Amazon RDS for SQL Server and for Oracle. So let's break this down. Firstly, we now have SQL Server Developer Edition support. This is huge, because it gives you a free SQL Server edition with all the Enterprise Edition functionality, specifically licensed for non-production workloads, so you can build and test without paying SQL Server licensing costs. That's a big deal for developers in those environments. You can now also use M7i and R7i instances on Amazon RDS for SQL Server. These will give you significant cost savings over previous-generation instances, and you can understand the billing a lot better as well, because you can use the Optimize CPU capability to customize the number of vCPUs, which adjusts your licensing costs. So this is one of those financial engineering pieces. There are also additional storage volumes for RDS for Oracle and SQL Server; in fact, they now support up to 256 TiB of storage. That's four times more than they used to, which is pretty huge as far as I'm concerned. So you can store lots of stuff. I haven't met anyone with a 256 TiB Oracle or SQL Server database, so that would be an interesting thing to deal with. Let's also talk about the Amazon OpenSearch Service.
It has improved vector database performance and cost with GPU acceleration and auto-optimization, so you can build vector databases up to 10 times faster at a quarter of the indexing cost when compared to non-GPU acceleration. This means you can create billion-scale vector databases in under an hour, so you can get moving quicker and get up and running. There is now also auto-optimization, so you can find the best balance between search latency, quality, and memory requirements for your vector field without needing expertise. So again, better cost savings, better recall rate, and this saves you weeks of hand-tuning that you would have had to do in the past. Big fan of that one. Gillian, we've got some big new changes in support. I know this is one close to your heart. Break it down for us, because this is kind of a big change.
Gillian Ford
Yes, it is. And I know you're all like, wait, I want to hear all the engineering updates, but stick with me. Because if you are using Basic Support or Developer Support, there's just some really cool stuff here that I think can really help you sleep better at night. So there are some new and enhanced support plans. One is Business Support Plus. This transforms the developer, startup, and small business experience by providing intelligent assistance powered by AI. You can choose to engage directly with AWS experts, or start with AI-powered contextual recommendations that seamlessly transition to AWS experts when needed. AWS experts respond within 30 minutes for critical cases.
Simon Leish
One of the big things here, Gillian, is this response within 30 minutes for critical cases. That's twice as fast as previously, which is pretty cool.
Gillian Ford
That is amazing. So if you are someone who, in the past, when you had a business-critical down issue, being very smart, would just go and try to figure it out on your own: there are people on the other side who have probably seen this before. That's what I love about these support plans. Make sure you get those tickets in, because when you open a support case, it gets routed to a subject matter expert on the actual service team, and they may well have seen the issue that you're coming across. So that's one, super cool. The next is Enterprise Support. This one I'm also a huge fan of. With Enterprise Support, you get a designated technical account manager who is part of your team. They have some really cool offerings to help you, like a security incident response plan. I've also seen them be really proactive with customers, whether that's cost management, or helping them if they have a really big event coming up, like Boxing Day or Black Friday, and they need to make sure that their architecture is really prepared for it; and if something were to happen, they can swoop in right away. What is also really cool is that, yes, there's now AI-powered assistance, with up to 15-minute response times for production-critical issues. You can have support engineers, real people, triage if you need, and there are also AI agents that can help you resolve issues faster. And the price point for both of these offerings is, I would say, just absolutely amazing. Business Support Plus starts at $29 per month, which is literally a 71% savings over the previous Business Support monthly minimum. And Enterprise Support now starts at $5,000 a month, which is a 67% savings over the previous Enterprise Support offering.
So I'm really excited for other customers to really be able to take advantage of these additional support offerings to really help them, especially in critical times where they need extra support to help them with their business.
Simon Leish
That's huge. But Gillian, what if I need more support?
Gillian Ford
Ah, I'm so glad you asked. Well, there is another offering called Unified Operations Support, and this delivers our highest level of context-aware support through an expanded team of experts. With this core team you get a technical account manager, a domain engineer, and a designated senior billing and account specialist. And depending on what your specific business needs are, let's say you're going through a migration, or maybe incident management or security operations, these experts can really come in based on your unique business requirements. So it's a really super hands-on type of support that you can get. There's also AI-powered automation, because, I guess, this is the year of generative AI.
Simon Leish
This is when it all happens. Exactly. And you also get a five-minute response time with critical incidents.
Gillian Ford
So that's five minute response time.
Simon Leish
Pretty crazy.
Gillian Ford
Yeah, this is absolutely huge. So I'm really excited about all three of these programs for customers. And another one that I think people have probably been waiting for is Amazon CloudWatch introducing unified data management and analytics for operations, security, and compliance. This enhancement means that CloudWatch can automatically normalize and process data to offer consistency across sources, with built-in support for the Open Cybersecurity Schema Framework (OCSF) and OpenTelemetry formats, so you can focus on analytics and insights. CloudWatch also introduces Apache Iceberg-compatible access to your data through S3 Tables, so you can run analytics in Athena and SageMaker Unified Studio.
Simon Leish
Now let's shift into the wonderful world of migrations and modernization, because when I talk to customers, most of them, if not all of them, have some form of tech debt. And let me tell you, the bigger the organization, the bigger the debt. Some studies have shown that organizations will spend 20% of their IT budget just on tech debt. I've met a lot of customers who dream of it only being 20%; it's often 80%, so the ratio can be completely the wrong way around. So how do we help with this? Well, we're happy to announce AWS Transform Custom. This is a new agent that fundamentally changes how organizations can approach modernization at scale. It combines pre-built transformations for Java, Node.js, and Python, and it also now has the ability to define custom transformations. That means it can learn specific transformation patterns and automate them across entire code bases. Customers using AWS Transform Custom have achieved up to 80% reduction in execution time in many cases, which frees up your developers. So what this means is you can actually define transformations using your documentation, natural language descriptions, and code samples, and then those get applied consistently across the board. Now, it's not going to just migrate everything for you out of the box with the press of a button; that's not the world today. But what it does do is dramatically compress the amount of work you have to do, and it means you can have fewer people working on it to get a far greater outcome. That's pretty important for any of you who have those things that have been niggling away, even something as simple as, oh, I've got to upgrade all my old Python stuff. This will help you. Now, if you are living in the Windows world, I also have great news for you. AWS Transform now has full-stack Windows modernization capability. When we first announced the general availability of AWS Transform for .NET, it was great for .NET applications, and customers loved that. But they said, hey, we also want to modernize SQL Server and legacy UI frameworks. So now you can do the whole shebang. Basically, you can accelerate up to five times faster across the application, UI, database, and deployment layers. Along with porting .NET Framework applications to cross-platform .NET, it migrates SQL Server databases to Amazon Aurora PostgreSQL-Compatible Edition, with intelligent stored procedure conversion, and refactors dependent application code for validation and testing. It also deploys the application to EC2 Linux or ECS, and provides customization of CloudFormation templates for production use. And it has added capabilities to modernize ASP.NET Web Forms UIs to Blazor as well. There's so much in this; if you're a migration geek, this is something to look at. Now, as I've mentioned many, many times on this podcast, I started my career working on mainframes, and I always thought it would be my sort of superannuation retirement plan: I could be one of those grizzled old people who knew how to use COBOL, et cetera. But I'm in trouble, because AWS Transform for mainframe has introduced reimagined capabilities and automated testing functionality. So what is reimagining mainframe modernization? This is a new AI-driven approach that completely reimagines the customer's application architecture, using modern patterns, or moving from batch processing to real-time functions. By combining enhanced business logic extraction with new data lineage analysis and automated data dictionary generation from the legacy source code, customers can transform monolithic mainframe applications written in languages like COBOL into more modern architectural styles like microservices. This is important, because the way we would build COBOL and mainframe applications was completely defined by the processing model and modality of the mainframe.
So just lifting and shifting to an x86 platform, for example, doesn't give you the benefit. It gets you off the iron, but it doesn't change the way the application works. This is about actually modernizing the application itself, which is really cool. Now, of course, testing is always something you need to do, and so there is now automated testing. You can now use automated test plan generation, test data collection scripts, and test case automation scripts as well. AWS Transform for mainframe also provides functional testing tools for data migration, results validation, and terminal connectivity. So this makes you more accurate in the migrations that you're doing. Now, there's been a fantastic update in the world of networking: we're announcing Amazon Route 53 Global Resolver. This is a new Amazon Route 53 service that gives you secure and reliable DNS resolution globally, for queries from anywhere. It's in preview, and it allows you to resolve DNS queries to public domains on the Internet and private domains associated with your Route 53 hosted zones. This is great for network administrators, because it's a unified solution for authenticated clients and sources in on-premises data centers, branch offices, and remote locations, through globally distributed anycast IP addresses. It has built-in security controls like DNS traffic filtering, support for encrypted queries, and centralized logging to help you maintain your security profile. And there's a great update in the partner world: AWS Partner Central is now available in the AWS Management Console. This allows you to make yourself a partner and to find a partner. So if you're interested in working with AWS partners, it's now right there in the console; you don't have to go anywhere else.
Gillian Ford
Okay, I'll admit, I don't think I've ever said that the security and identity section is my favorite, but this time it absolutely is my favorite section. There are so many really good updates. The first one, I think, is something that every single person who has been doing DevOps on AWS has wanted: a teammate that can be working 24 hours a day, doesn't need constant intervention, is massively scalable, and, yes, needs no coffee.
Simon Leish
You don't have to buy them a coffee.
Gillian Ford
That's why we're excited to announce, in public preview, the AWS DevOps agent. This is a frontier agent that helps you respond to incidents, identify root causes, and prevent future issues through systematic analysis of past incidents and operational patterns. Frontier agents represent a new class of AI agents that are autonomous, massively scalable, and work for hours or days without constant intervention. When production incidents occur, on-call engineers face significant pressure. I'm sure every single person listening to this has gone through one of those: you're trying to figure out the root cause and you've got a lot of people that you're talking to. This DevOps agent is going to be, hopefully, your best friend.
Simon Leish
And the DevOps agent never yells, never freaks out, and never becomes a teapot.
Gillian Ford
Right? So when issues arise, it's going to automatically correlate data across your operational toolchain: metrics, logs, and recent code deployments in GitHub or GitLab. It identifies probable root causes and recommends targeted mitigations, helping reduce mean time to resolution. And of course, the agent also manages incident coordination. For those who use Slack, you can integrate it into your Slack channels for stakeholder updates and maintaining really detailed investigation timelines. That one, I'm sure, a lot of people are going to want to integrate with. And, of course, because it is the MCP-ification of everything, you can bring your own MCP server capability. You can also integrate additional tools, such as your organization's custom tools, specialized platforms, or open source observability solutions.
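As a rough illustration of the kind of correlation described above, and definitely not the agent's actual implementation, here's a sketch that ranks recent deployments as probable causes for an alarm:

```python
# Toy root-cause correlation: surface deployments that landed shortly
# before an alarm fired, newest first. Purely illustrative.
from datetime import timedelta

def probable_causes(alarm_time, deployments, window_hours=2):
    # deployments: list of {"service": str, "time": datetime}
    cutoff = alarm_time - timedelta(hours=window_hours)
    # keep only deployments inside the lookback window before the alarm
    recent = [d for d in deployments if cutoff <= d["time"] <= alarm_time]
    # most recent change is the most likely culprit, so sort newest first
    return sorted(recent, key=lambda d: d["time"], reverse=True)
```

A real agent layers much more on top (log anomalies, metric deviations, dependency graphs), but "what changed just before this broke" is the intuition.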
Simon Leish
This is the most excited I've ever seen you about a security update. So if that's any measure for how important this update is, that tells you all you need to know.
Gillian Ford
I know, so many people have been waiting for something like this. But wait, there's definitely more in exciting security and DevOps updates. Amazon GuardDuty extends Extended Threat Detection to Amazon EC2 and ECS. These new findings build on the existing Extended Threat Detection capabilities, which cover IAM, S3, and EKS, and now give you a more consistent and unified way to detect multi-stage activity across your AWS workloads. All right, this next one is definitely something I know a lot of people have been thinking about: I've got all these security risks, I just want them in one place, and I want it to be really easy to find out what to work on and what's a real risk. So what's now generally available is near-real-time analytics and risk prioritization in AWS Security Hub. Security Hub prioritizes your critical security issues and unifies your security operations to help you respond at scale by correlating and enriching signals across multiple AWS security services. There are several new features that have been added to Security Hub, and a couple of things really stand out to me. Being able to look at historical trends, up to one year of historical findings, so you have a better sense of your security posture over time. Being able to do real-time analytics, and having a better sense overall, through visualizations, of your exposure posture. And for those who liked what the DevOps agent sounded like and are thinking, hey, if only there was one for security: well, there is, in preview. This is the new AWS security agent that secures applications proactively from design to deployment. I think so many people have been asking for this type of thing, because security feels like a game of whack-a-mole at times, and you're always behind. So this one I'm very excited about.
So this is going to continuously validate application security from design to deployment. It's going to help prevent vulnerabilities early in the development process. It'll conduct automated application security reviews tailored to your organizational requirements and deliver context-aware penetration testing on demand. And the last one in the security updates: we are announcing IAM Policy Autopilot, which helps your AI coding assistants generate IAM identity-based policies.
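As a quick aside on the Security Hub capabilities Gillian mentions above, here is a minimal boto3 sketch for pulling only critical, still-active findings. The filter construction is pure data and runs anywhere; the actual `get_findings` call (shown commented out) needs valid AWS credentials, and the filter values chosen here are just one reasonable example.

```python
# A minimal sketch of querying AWS Security Hub for critical, active
# findings, using get_findings' documented Filters syntax.
def critical_findings_filter():
    """Filter for findings that are CRITICAL severity and still ACTIVE."""
    return {
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }

# With AWS credentials configured, the call would look like:
#   import boto3
#   hub = boto3.client("securityhub")
#   page = hub.get_findings(Filters=critical_findings_filter(), MaxResults=50)
#   for finding in page["Findings"]:
#       print(finding["Title"], finding["Severity"]["Label"])

print(sorted(critical_findings_filter().keys()))
```

The same filter shape extends naturally to other fields like `ResourceType` or `WorkflowStatus` if you want to narrow things further.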
Simon Leish
This is huge. I think this is a big deal because I don't know about you, but I'm not the best at creating IAM policies the first time around. It usually takes me a few goes. To be able to have the AI do it and have an authoritative way of doing it is brilliant.
Gillian Ford
Yeah. And it just operates in the background as builders converse with their AI coding assistants. So when your application needs IAM policies, your coding assistant can call IAM Policy Autopilot to analyze the AWS SDK calls within your application, and it will generate the required IAM policies. Like, that is wow.
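To make the idea concrete, here's a toy sketch of the kind of transformation described above: observed SDK calls in, a least-privilege identity-based policy document out. The mapping table, function name, and bucket ARN are all illustrative assumptions; IAM Policy Autopilot's actual internals aren't public.

```python
# Illustrative only: turn a list of observed AWS SDK calls into a minimal
# IAM identity-based policy allowing exactly those actions.
import json

# Hypothetical mapping from SDK method names to IAM actions.
SDK_TO_IAM = {
    "s3.get_object": "s3:GetObject",
    "s3.put_object": "s3:PutObject",
    "dynamodb.query": "dynamodb:Query",
}

def generate_policy(observed_calls, resource_arn):
    """Build a minimal IAM policy covering only the observed actions."""
    actions = sorted({SDK_TO_IAM[c] for c in observed_calls if c in SDK_TO_IAM})
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": actions, "Resource": resource_arn}
        ],
    }

policy = generate_policy(
    ["s3.get_object", "s3.put_object", "s3.get_object"],
    "arn:aws:s3:::example-bucket/*",
)
print(json.dumps(policy, indent=2))
```

Note how duplicate calls collapse into a single action and anything outside the observed set is simply never granted, which is the least-privilege property the feature is after.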
Simon Leish
Magic. Magic. We love it. Well, let's finish off today with some great storage things that have been happening. Amazon FSx for NetApp ONTAP now integrates with Amazon S3 for seamless data access. So now you can use your enterprise file data to augment your generative AI applications, because you can access it directly through S3 as well as through FSx. Now, another thing I know a lot of folks have been waiting for is replication support and intelligent tiering for Amazon S3 Tables. Intelligent tiering makes sure that data is automatically moved to the most cost-effective access tier based on access patterns. Data is stored in three low-latency tiers: frequent access; infrequent access, which is 40% lower cost than frequent access; and archive instant access, which is 68% lower cost than infrequent access. After 30 days without any access, data moves to infrequent access, and after 90 days it moves to archive instant access. You don't have to touch anything, don't have to do anything, it all just works. Table maintenance activities like compaction, snapshot expiration, and unreferenced file removal all operate without affecting the data access tiers. This is huge, and compaction automatically processes only data in the frequent access tier, optimizing performance for your actively queried data whilst leaving alone the colder files that don't get accessed much at all. And finally, Amazon S3 Storage Lens has added performance metrics, support for billions of prefixes, and the ability to export to S3 Tables. So you've got new categories like read request size, write request size, storage size, concurrent puts, et cetera, et cetera. And you can of course access all of this in S3 Tables, so you can use a familiar way to operate in your environment, which is very, very cool as well. There's been a lot today, Gillian. I mean, that's a lot to be getting on with.
I think reimagining the development approach, the transformation of legacy infrastructure, the change in the way we support customers and give them more help, the security profile, there's a lot for folks to digest. So I think it's good for people to think over the holiday period about how they can take advantage of this.
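To put the S3 Tables tiering discounts Simon described into perspective, here's a back-of-envelope cost sketch. Only the relative discounts (40% and 68%) come from the episode; the base per-GB price and the storage mix are made-up assumptions for illustration, not actual AWS pricing.

```python
# Hypothetical base price; check current AWS pricing before relying on this.
FREQUENT_PRICE = 0.023  # assumed $/GB-month for the frequent access tier
# Discounts stated in the announcement:
INFREQUENT_PRICE = FREQUENT_PRICE * (1 - 0.40)   # 40% below frequent
ARCHIVE_PRICE = INFREQUENT_PRICE * (1 - 0.68)    # 68% below infrequent

def monthly_cost(gb_frequent, gb_infrequent, gb_archive):
    """Blended monthly storage cost across the three access tiers."""
    return (gb_frequent * FREQUENT_PRICE
            + gb_infrequent * INFREQUENT_PRICE
            + gb_archive * ARCHIVE_PRICE)

# Example mix: 1 TB hot, 5 TB warm, 20 TB cold (roughly $180/month here)
print(round(monthly_cost(1000, 5000, 20000), 2))
```

The point of the exercise: under these assumptions the 20 TB of archived data costs less than the 5 TB of infrequent-access data, which is why automatic tier transitions matter for large tables.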
Gillian Ford
I think a lot of people are going to be doing that. I would hope so.
Simon Leish
Exactly. Well, we will be back tomorrow with yet another episode of the podcast. So Gillian, I'll see you then.
Gillian Ford
All right, perfect. Can't wait.
Simon Leish
And we do love to get your feedback. awspodcast@amazon.com is the place to do it. And until next time, keep on building.
Release Date: December 3, 2025
Hosts: Simon Leish & Gillian Ford
In this special re:Invent 2025 keynote recap, hosts Simon Leish and Gillian Ford break down the key announcements from Matt Garman’s keynote address. The episode is jam-packed with major cloud innovations, especially in AI/ML, developer experience, infrastructure, modernization, support, security, and storage. The hosts contextualize these launches for practitioners, with memorable moments, personal insights, and plenty of real-world use case relevance.
AWS Clean Rooms – Synthetic Data for Privacy-Safe ML Training
Amazon Nova 2 Lite
Amazon Nova 2 Sonic
Amazon Nova Forge & Nova Act
Amazon Bedrock: Model Choice & Agent Core Enhancements
Amazon S3 Vectors (GA)
Amazon SageMaker AI with Serverless MLflow
New Compute Innovations
Lambda Managed Instances
Lambda Durable Functions
Container Orchestration with Amazon EKS Capabilities
Database Savings Plans
Amazon RDS Upgrades
Amazon OpenSearch
Revamped Support Offerings
CloudWatch Unified Data Management
Amazon Route 53 Global Resolver
AWS Partner Central in Console
AWS DevOps Agent (Preview)
Amazon GuardDuty Expanded Detection
AWS Security Hub Analytics
AWS Security Agent (Preview)
IAM Policy Autopilot
Amazon FSx for NetApp ONTAP → Amazon S3 Integration
Amazon S3 Tables: Replication and Intelligent Tiering
S3 Storage Lens Enhancements
Matt Garman's keynote ushered in a new era for AWS customers: AI and agents everywhere, massive infrastructure and developer productivity gains, aggressive support pricing redesign, and generational security improvements. As the hosts emphasize, re:Invent 2025 positions AWS as both a visionary and pragmatic provider—enabling customers to do more, safer, faster, and with less friction.
For more details or specific segment recommendations, consult the list of announcements above.