
There are over 60 new updates that your hosts Simon, Gillian and Shruti take you through this week!
A
This is episode 729 of the AWS podcast, released on July 14, 2025.
B
Hello everyone, and welcome back to the AWS Podcast. Simon here with you. Great to have you back. All three availability zones in effect today. Firstly, welcome to the podcast, Gillian Ford.
C
Welcome. It's always a good day when it's a podcast day, Simon.
B
It is a fun day. And joining us for this fun day is another of the availability zones, Shruti Kopparkar. G'day, Shruti. How are you doing?
A
Hi, I'm great. You know, we celebrated a long weekend in the US this past weekend, so I am very rejuvenated and ready for both of you.
B
Both of you very well rested.
A
Yes.
B
Nothing like a long weekend. Well, lots of updates. In fact, 68 updates today to go through, but a couple really caught our eye. Firstly, Amazon Aurora PostgreSQL database clusters now support up to 256 TiB of storage volume. Now, that's a lot; this is doubling the previous limit of 128 TiB. As I always remind people, the only reason these limits go up is because people need them to go up. So it's always interesting to me that folks have big databases. I remember when this was an unfeasible database size, but now this is normal.
A
Yep. And what's really great is that it doubles the capacity but still maintains its cost efficiency, because you're still on a pay-as-you-go model. Customers will still only pay for what they use, even if they upgrade to this new database version. And yeah, as you said, it's really good for customers who have really large storage requirements, or historical data retention requirements, and who previously had to handle all this themselves. Now it just scales to their needs.
B
So that's cool. And the funny thing is, I was just looking through some of the details, and we can still have up to 15 read replicas with this database. So that's a lot. It's like we're now in petabytes.
C
Wow.
A
Yeah, that is. That's a lot.
B
That's very cool. Now, I'm all about databases today. Speaking about another database improvement that's taken place: back at re:Invent, we talked a lot about the new consistency modes for DynamoDB, and in this case multi-Region strong consistency is now generally available, which I think is really interesting for a lot of customers. Gillian?
C
I think so too. This one I'm super excited about. Historically, with NoSQL databases, we've always thought about consistency as something you'd treat as a trade-off. And now with DynamoDB, you don't necessarily have to make that trade-off anymore, because of this feature: multi-Region strong consistency. I mean, that is absolutely amazing.
B
It's hard, but it's time. Now, you have to be in three Regions. You can either have three full replicas, or two replicas and a witness. But three is the number, and the number shall be three. I'm sure that many people will get that particular reference, but my two co-hosts are clearly not Monty Python fans, so they didn't get it. They're not nerdy enough. That's okay. They're looking at me confused, everyone.
C
I haven't talked about Monty Python in a while. I had to think about that.
B
Yeah, that was a very, very Monty Python and the Holy Grail reference there. And I'm pretty sure my audience will appreciate that.
C
I think so.
A
So I'm no database expert, but explain to me what this strong consistency is. Is it basically, for example: say there's a financial services company that needs to show the real-time account balance regardless of where it's accessed from, whether that's Tokyo or London, and a transaction happened in Tokyo that updated the balance. In real time, all the copies across all the Regions will be updated. So financial services customers who originally would have had to build all this complex logic themselves now just get it done for them?
B
I think that's a very good assessment of what it is. And I'll read from the documentation to be specific: "Item changes in a multi-Region strong consistency global table replica are synchronously replicated" (so that's synchronously, not asynchronously) "to at least one other Region before the write operation returns a successful response. Strongly consistent read operations on any replica always return the latest version of an item. Conditional writes always evaluate the condition expression against the latest version of an item." So yeah, you have a consistent view across the world. That's pretty magical.
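For the programmatically inclined, here's a minimal sketch of what Shruti's account-balance example could look like at the API level. The table name and key schema are hypothetical, and the actual boto3 call is shown only in a comment:

```python
def build_balance_read(account_id: str) -> dict:
    """Build a GetItem request asking for a strongly consistent read.

    On a multi-Region strong consistency (MRSC) global table, setting
    ConsistentRead to True on any replica Region returns the latest
    committed version of the item, per the documentation quoted above.
    """
    return {
        "TableName": "AccountBalances",            # hypothetical table name
        "Key": {"AccountId": {"S": account_id}},   # hypothetical key schema
        "ConsistentRead": True,                    # request strong consistency
    }

request = build_balance_read("acct-123")
# A real call would then be:
#   boto3.client("dynamodb", region_name="eu-west-2").get_item(**request)
```

The same `ConsistentRead` flag works in any Region of the global table, which is the point of the feature.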
A
Yeah, it is.
B
Well, let's talk about some other updates, and there have been a lot. So firstly, with analytics: AWS Glue enables enhanced Apache Spark capabilities for AWS Lake Formation tables with full table access. This means you can use data manipulation language like create, alter, delete, update and merge, so you can have a lot more control over what's going on. Amazon Redshift Serverless now supports a 4 RPU (that's Redshift Processing Unit) minimum capacity option. So it's a smaller data warehouse, which is useful for a lot of cases, a little bit more granular. Amazon QuickSight has launched Trusted Identity Propagation, or TIP, for Athena Direct Query. With this capability, customers can apply fine-grained access controls using Lake Formation rules to govern user access to data in QuickSight. TIP allows authors to securely control the rows and columns of data returned by queries, allowing the same dashboard to be used across customers or departments. This is important because often you should only see the data you're allowed to see, even though the queries may be the same for everyone. AWS Clean Rooms now supports two enhancements to its machine learning capabilities to let you train your models more efficiently and at scale. Incremental training enables you to build upon existing model artifacts to create new models, and distributed training allows you to train models across multiple compute instances simultaneously. That's pretty cool. AWS announces that Amazon SageMaker has contributed a custom Amazon DataZone transport to the OpenLineage community and enhanced automated lineage capabilities. Now, lineage is important because it explains where data came from, and that has become more and more important in this amazing new world of generative AI.
So these lineage enhancements include improvements to automation from sources like AWS Glue and Amazon Redshift, and automated lineage capture from tools, which means you can work more efficiently and understand where the data came from. Quick updates for application integration: Amazon EventBridge now supports AWS CodeBuild batch builds as a target, so you can trigger builds off EventBridge activities. AWS Security Incident Response has also added integration with Amazon EventBridge, so this lets you react to, monitor and orchestrate events associated with cases and memberships within your response. We're excited to announce that Amazon Q index now supports seamless application-level authentication for its SearchRelevantContent API, which makes the end-user experience easier when using this index to retrieve content. And AWS service reference information now supports annotations for service actions. Action properties provide context to indicate what an action is capable of, such as write or list capabilities, when you use it in a policy. Service reference information streamlines the automation of policy management workflows, because you can retrieve available actions across AWS services from machine-readable files. This is a big deal for the automation-minded amongst us. Let's talk artificial intelligence. Amazon Nova Canvas has added a pretty cool feature: virtual try-on and style options for image generation. So you can now see what clothing would look like on a shopper, or you could see furniture intelligently placed in a living space. You just upload the two images (the person or the space, and the thing) and it puts them together, which is kind of interesting. So we can try on all kinds of clothes. I can see people getting up to all sorts of mischief trying on clothes that they would never wear. Citations API and PDF support for Claude models is now available in Amazon Bedrock.
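As a sketch of the EventBridge-to-CodeBuild wiring mentioned above, the rule and target could look roughly like this. The project name, account ID and role ARN are hypothetical placeholders, and a scheduled rule is used just to keep the example simple:

```python
# Sketch: an EventBridge rule whose target is a CodeBuild project.
rule_params = {
    "Name": "nightly-batch-build",
    "ScheduleExpression": "rate(1 day)",  # fires once a day
    "State": "ENABLED",
}

target_params = {
    "Rule": "nightly-batch-build",
    "Targets": [{
        "Id": "codebuild-batch",
        # ARN of the CodeBuild project to start (hypothetical account/name).
        "Arn": "arn:aws:codebuild:us-east-1:111122223333:project/my-batch-project",
        # Role that EventBridge assumes to start the build on your behalf.
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-codebuild-role",
    }],
}
# With boto3 this would be applied as:
#   events = boto3.client("events")
#   events.put_rule(**rule_params)
#   events.put_targets(**target_params)
```

Batch-build-specific settings may need additional target parameters; check the EventBridge target documentation for the exact shape.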
Now, the Citations API allows Claude to ground its answers in source documents, providing detailed references to the exact sentences and passages used to generate responses. This is a really interesting and important capability, because often you're analyzing a big document and extracting exactly this. In fact, I have an application I've written that does this for another purpose, so I'm a bit excited about it. That's what I love about this show: I get to read through all these updates and go, oh, I'm going to use that. Amazon Bedrock Flows announces a preview of persistent long-running execution and inline code support. Bedrock Flows customers currently face three limitations when authoring, executing and monitoring their workflows: a two-minute idle timeout restriction per step, the need for custom API monitoring solutions, and the requirement to create Lambda functions for basic data processing tasks. Sounds like some undifferentiated heavy lifting. So starting now, we're addressing them with some new preview features that extend workflow step execution to 15 minutes and let things run much more easily for you. AWS HealthImaging has launched support for DICOMweb STOW-RS data imports, and in a classic case of "if you work in the health industry, if you know, you know", there is now a synchronous data import action that's ideal for latency-sensitive workloads like storing new medical image studies, annotations and reports. Amazon Rekognition Face Liveness has launched accuracy improvements and a new challenge setting for improved UX. So this allows you to figure out if the face you're seeing is a real face, and there's a whole bunch of nuance in doing that that I'm not going to go into now, but they've improved it.
Amazon Textract has announced accuracy and feature updates to the DetectDocumentText and AnalyzeDocument APIs. So it is now even better at what it does, and I know a lot of customers use this all the time. Distinguishing between the letter O and the number zero on a low-resolution fax is not easy to do, so we've done some work on that. Amazon SageMaker Catalog has added AI recommendations for descriptions of custom assets. This reduces the manual documentation effort, improves your metadata consistency and accelerates asset discoverability across organizations. And we're also happy to announce the Amazon SageMaker HyperPod training operator. This is a purpose-built Kubernetes extension for resilient foundation model training on HyperPod. Amazon SageMaker HyperPod empowers customers to accelerate AI model development across hundreds or thousands of GPUs with built-in resiliency, decreasing model training time by up to 40%. Now, this resiliency is really important. When you talk to folks who are doing this model training, they'll tell you that the biggest problem they have is stuff not working through a long-running job. And as training clusters expand, this is only getting more disruptive. Failure recovery traditionally requires a complete job restart across all nodes, even when a single training process fails, which means more downtime and more cost. And figuring out critical training issues like stalled GPUs, low training throughput and numerical instabilities often needs a lot of custom monitoring code. With the HyperPod training operator, customers can further enhance training resilience for Kubernetes workloads. So instead of a full job restart when failures occur, the HyperPod training operator performs a surgical recovery, restarting only the affected training resources for faster recovery from faults.
It also introduces a customizable hanging-job monitoring capability to help overcome problematic training scenarios like stalled training batches, non-numeric loss values and performance degradation, all through simple YAML configurations. So you can get up and running nice and quick, and this is really useful if you use SageMaker HyperPod. AWS HealthImaging now supports DICOMweb bulk data, so again, more improvements in the processing of medical information. And AWS HealthOmics has announced automatic input parameter interpolation for Nextflow workflows. And finally on this one, Amazon Q Business has launched the ability to customize responses. With response customization, customers can provide instructions for identity, tone and output style when configuring your Q Business application. So this really helps you adjust the chat persona, its communication formality, response length and detail. And it also has built-in guardrails to ensure that response settings align with your existing admin controls.
C
Up next, we've got business applications, and usually this segment is just about Connect, but we've actually got some other updates as well, so this one's going to be exciting. Amazon Q in Connect now supports seven languages for proactive recommendations: Spanish, French, Portuguese, Mandarin, Japanese, Korean and English. Amazon Connect now provides an enhanced Flow Designer UI, making it a lot easier to edit. These are things like keyboard navigation, screen reader support, all of the good things that just make it a lot easier for you. Amazon Connect can now integrate agent activities from third-party applications as Connect tasks, which can be evaluated alongside work completed in Connect, providing managers with a unified application for quality management. Amazon Connect now provides the ability to execute logic, such as routing priority changes, within a flow while continuing to play audio to customers who are waiting in a queue. And this is a new one. I don't think I've ever done an update on this service, so I actually had to do some background reading. It's called AWS B2B Data Interchange, and I will admit I didn't know what this service was. I know I shouldn't say that out loud.
B
Well, you're not old enough to understand what it's even responsible for. Because when we're talking about X12 EDI documents, let me tell you, we're going back to the 1990s. But this stuff still exists out there, and it's cool.
C
Well, this is the 90s-themed episode. We've got Monty Python and electronic data interchange documents, and that's what this service is all about. AWS B2B Data Interchange automates the transformation of business-critical EDI transactions. That could be for healthcare documents or financial documents; it's just going to make that a lot simpler. And there are actually a few updates here. There are new configuration capabilities that allow you to customize formats and technical fields when generating these EDI documents, which makes things a lot more compatible when you're working with additional trading partners. So some of that could be documents in the ANSI X12 format, and if you need to generate those from JSON and XML data formats, that'll be a lot simpler. We also had another one from the service: AWS B2B Data Interchange now supports splitting inbound X12 EDI documents that contain multiple transactions into individually processed single-transaction documents. And back to a couple more updates from Amazon Connect, because they always steal the show; they're busy all the time.
B
They just, I don't know, we need to meet some of those software developers. They're just killing it.
C
I totally agree. So, Customer Profiles now enables organizations to create customer segments from imported CSV files. Now contact center managers and campaign managers can upload predefined customer lists, and this is going to streamline the process when you have to build targeted segments for personalization. There are also some launches on the API side from Amazon Connect. These allow you to delete cases and case comments, undo contact associations if you need to, and remove service level agreements from cases. Another one from Connect: it now supports custom work labels for agent schedules, and this makes it easier for you to identify the type of work an agent is scheduled to do. Some examples: you could add order processing as a work activity for Monday, or returns management for Tuesday. Those are just a few examples that can really help simplify the experience for managers. And then we've got one update here on Amazon QuickSight. It now supports 2 billion, with a B, rows in a SPICE dataset. If you're not familiar with the SPICE acronym within QuickSight, it stands for Super-fast, Parallel, In-memory Calculation Engine. That's the magic that Amazon QuickSight uses to make all of your pretty dashboards load really fast, because they are in memory.
A
All right, well, next up we have quite a few updates under compute. AWS announces availability of ECS-optimized Windows Server 2025 AMIs, or Amazon Machine Images. They offer two distinct platforms: 2025 Core and 2025 Full. These AMIs are specifically engineered to support Windows container deployments on Amazon ECS, our Elastic Container Service. Each AMI comes ready to use with essential components and optimizations for running containerized workloads, and makes it really easy to deploy them. AWS Fargate now supports Seekable OCI, or SOCI, index manifest v2 for greater deployment consistency. SOCI accelerates Amazon ECS task launches by enabling containers to start running before the full container image is downloaded. Index manifest v2 uses a cryptographic method to establish an explicit link between the image and its manifest, ensuring integrity and consistency during and across all the deployment stages. The next one is an especially exciting one: Amazon EC2 C8gn instances are now generally available. These are the new instances powered by our Graviton4 processors, and they provide up to 30% better compute performance than the Graviton3-based Amazon EC2 C7gn instances. C8gn instances feature our latest generation, which I think is the sixth generation, of AWS Nitro Cards, and they offer up to 600 gigabits per second of network bandwidth, the highest network bandwidth among network-optimized EC2 instances. So customers can take advantage of C8gn instances and their enhanced networking capabilities for network-intensive workloads such as network virtual appliances, data analytics, any kind of CPU-based AI and machine learning inference, and much more. Also, for increased scalability, C8gn instances offer instance sizes up to 48xlarge, with up to 384 gigabytes of memory and up to 60 gigabits per second of bandwidth to Amazon Elastic Block Store.
So a lot of powerful compute, networking and storage capability packed into these instances; go and check those out. Next up is an update on being able to use Amazon EBS gp3 volumes with AWS Outposts racks. You can now use these gp3 volumes with the second generation of AWS Outposts racks for your workloads that require local data processing and data residency. With gp3 volumes you can scale up to 16,000 IOPS, or input/output operations per second, and 1,000 megabytes per second of throughput, delivering 4x the maximum throughput of the previously supported gp2 volumes. Amazon ECS now includes the task ID in unhealthy service events. This makes it easier to troubleshoot unhealthy tasks by adding the task ID to service action events generated due to health failures. AWS announces Ubuntu support for Finch, an open source command line tool that allows developers to build, run and publish Linux containers. Finch simplifies container development by bundling a minimal native client with a curated selection of open source components. Now, with the addition of Ubuntu support, Finch provides a consistent and streamlined container development experience across more Linux distributions, addressing a key pain point for developers who are working across multiple environments. Next up is an update to AWS Neuron, which is the software development kit for AWS Trainium and AWS Inferentia, the purpose-built AI chips AWS has built. We are announcing the general availability of Neuron 2.24. Neuron 2.24 adds support for PyTorch 2.7 and enhanced inference capabilities such as prefix caching for faster time to first token, disaggregated inference that reduces prefill-decode interference, and context parallelism for improved performance on long sequences. So that was it for compute. Moving on to databases: Amazon Relational Database Service Custom, or Amazon RDS Custom, for Oracle now supports multi-AZ deployments.
This now provides high availability for business-critical workloads. With multi-AZ deployments, RDS Custom for Oracle synchronously replicates data between two Availability Zones and performs an automatic failover in case the primary database instance becomes unavailable, so customers will benefit from this higher availability. Amazon Aurora MySQL and Amazon RDS for MySQL integration with Amazon SageMaker is now available. This integration automatically extracts and loads data from MySQL tables into your lakehouse, where it's immediately accessible through various analytics engines and machine learning tools. The data synced into the lakehouse is compatible with Apache Iceberg open standards, enabling you to use your preferred analytics tools and query engines, such as SQL, Apache Spark, BI and AI/ML tools. Amazon Aurora now supports PostgreSQL 17.5, 16.9, 15.13, 14.18 and 13.21. The update includes the PostgreSQL community's product improvements and bug fixes, while introducing Aurora-specific enhancements: read replica optimizations to reduce downtime during cluster upgrades, new features for Babelfish, and some security improvements. Amazon Keyspaces for Apache Cassandra now supports change data capture streams. CDC streams in Amazon Keyspaces automatically capture insert, update and delete operations as change events, delivering them in order with automatic deduplication. With these CDC streams, you can build event-driven applications and implement use cases such as data analytics, text search, machine learning training and inference, and continuous data backups for archival. Amazon Neptune Graph Explorer introduces native query support for Gremlin and openCypher.
This enhancement empowers data scientists, developers and database administrators to seamlessly interact with their graph databases using their preferred query language, eliminating the need for additional tools or interfaces.
C
Two updates in end user computing. Amazon WorkSpaces Personal now allows you to route streaming traffic privately between your VPC and WorkSpaces virtual desktops using AWS PrivateLink, without the data ever traversing the public Internet. I definitely like this one: if you've got compliance requirements and you also have a WorkSpaces virtual desktop requirement, this is definitely something I suggest checking out. We also announced Research and Engineering Studio on AWS version 2025.06, which introduces significant improvements to instance bootstrapping, security configurations and logging capabilities. Up next we have one update in the Internet of Things. We've announced the general availability of Managed Integrations, a feature within AWS IoT Device Management. And for those who are doing IoT things and haven't checked out AWS IoT, I highly recommend looking into it, especially this feature within Device Management. A lot of times, customers who have IoT devices and need to manage a fleet of them reach a point, at a certain scale, where they're building this out themselves: managing all those device software updates, security vulnerabilities and patches across all of the devices, wherever they are roaming around in the world. Device Management is a feature that makes all of that a lot simpler, all in one place. And what's even more exciting now is the additional integrations. These could be things like hub SDKs for Zigbee and Z-Wave. As Simon says, if you're in the know, you know what these are. I'm not an IoT person, so I don't know what these are, but you probably do if you're listening. These, plus Wi-Fi protocols, are now integrated within IoT Device Management, and that's going to make things a lot easier for you. Now we've got some updates in management and governance.
AWS Config rules adds classification information from the AWS Control Tower control catalog. This makes it easier for you to identify how Config rules map to different compliance frameworks such as CIS v8, FedRAMP and NIST. Config rules help you automatically evaluate your AWS resource configurations for desired settings. And you don't necessarily have to use Config for compliance. I know a lot of people usually think about Config when they're thinking about compliance requirements, but Config can also be used if you have your own company's requirements for how resources within AWS should be utilized and governed. That one's been around for a while; check it out. Amazon CloudWatch now supports AWS CloudTrail data event logging for the PutMetricData, GetMetricStatistics and ListMetrics APIs. With this launch, customers have full visibility into metric ingestion and egress activity in their AWS account, for best practices in security, operational troubleshooting and financial management. CloudTrail will capture API activities related to these Amazon CloudWatch metric APIs as events. So when this information is used with CloudTrail, you can identify a specific request to any of the CloudWatch metric APIs, the IP address of the requester, the requester's identity, and the date and time of the request. This one seems really useful; I'm definitely going to be bookmarking it for myself. AWS Control Tower and control catalog APIs now come with AWS PrivateLink support, allowing you to invoke AWS Control Tower and control catalog APIs from within your VPC without traversing the public Internet. AWS Glue Data Catalog now offers usage metrics for APIs in Amazon CloudWatch, enabling you to monitor, troubleshoot and optimize your API usage with greater visibility. And ARC zonal autoshift practice runs now support on-demand runs and balanced capacity pre-checks.
So a zonal autoshift practice run normally takes place once a week to ensure your application is ready for a zonal autoshift, and now you can also trigger a practice run on demand, anytime you want to validate your application's preparedness. You'll also be able to check whether you've successfully balanced capacity across Availability Zones, and this is done for Application Load Balancers, Network Load Balancers and EC2 Auto Scaling groups. So you can run both automated and on-demand practice checks. Hopefully this is a reminder: if you haven't done any checks, do so. Otherwise I'm just going to start nagging. Patch your stuff, run your checks, tie your shoes.
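As a sketch of the CloudTrail side of the CloudWatch data-event launch mentioned above, an advanced event selector along these lines could be attached to a trail. The `resources.type` string is an assumption here, so verify it against the CloudTrail documentation before using it:

```python
# Sketch: a CloudTrail advanced event selector for CloudWatch metric
# data events. The resources.type value is an assumption; check the
# CloudTrail advanced event selector reference for the exact string.
selector = {
    "Name": "CloudWatch metric data events",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::CloudWatch::Metric"]},
    ],
}
# Applied to a trail with:
#   boto3.client("cloudtrail").put_event_selectors(
#       TrailName="my-trail", AdvancedEventSelectors=[selector])
```

Once enabled, the PutMetricData, GetMetricStatistics and ListMetrics calls show up as data events with requester identity, source IP and timestamp, as described above.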
B
Yeah, brush your teeth, all that good stuff. Look, one thing I do say is, if you're not courageous enough to test this stuff during the day when everyone's there, then you're in no position to rely on it when it fails at the worst possible time. So true. Let's do a couple of quick updates for migration and transfer. The Amazon Q Developer Java upgrade transformation CLI is now generally available. This is very cool, because using the CLI you can kick off Q Developer's transformation capabilities to perform Java upgrades at scale. You can upgrade from source versions 8, 11, 17 or 21 to target versions 17 or 21, with selective transformation options to choose steps from transformation plans, and libraries and versions to upgrade. You've got lots of different capabilities, like converting embedded SQL to complete Oracle-to-PostgreSQL database migrations. Lots of good stuff, so if you're in the migration space, this is important to you. Also, the AWS Transfer Family has launched support for IPv6 endpoints. I don't think there's been an update show this year where at least one service hasn't brought in IPv6 support; it is the year of IPv6. Also, AWS Transform now analyzes EBS costs and .NET complexity, and expands chat guidance, so these allow you to better assess and plan your migration and modernization journey. Firstly, AWS Transform assessments will include EBS cost analysis so you can understand what the migration involves. AWS Transform for .NET now supports enhanced code assessment capabilities, so it lets you understand exactly how complex the migration you're doing is. And it has also expanded chat capabilities, so you can interact and talk about the assessment, provide input, and get more contextual guidance.
A
Next up, we have a few updates under networking and content delivery. Amazon CloudFront announces support for HTTPS DNS resource records in Amazon Route 53. These resource records allow domain name systems, or DNS services such as Amazon Route 53, to provide additional information, such as supported HTTP protocol versions and port numbers, before the HTTP connection is attempted. This helps clients establish the initial connection using their preferred HTTP protocol, to improve application performance and security. Amazon Route 53 launches a capacity utilization metric for Resolver endpoints. You can now enable the Amazon CloudWatch metric called Resolver Endpoint Capacity Status to monitor the status of the query capacity for elastic network interfaces associated with your Route 53 Resolver endpoint in an Amazon Virtual Private Cloud. This metric enables you to quickly view whether the Resolver endpoint is at risk of reaching the service limit for query capacity, and then take remediation steps, like instantiating additional ENIs, to meet your capacity needs.
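A Route 53 change batch for publishing one of these HTTPS records might look roughly like this. The domain is a placeholder, and the exact RDATA formatting should be checked against the Route 53 documentation:

```python
# Sketch: publishing an HTTPS DNS resource record via Route 53 so clients
# can discover protocol support (e.g. HTTP/3) before connecting.
change_batch = {
    "Comment": "Advertise HTTP/3 and HTTP/2 support before the first connection",
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",   # placeholder domain
            "Type": "HTTPS",              # the HTTPS DNS resource record type
            "TTL": 300,
            # RDATA: SvcPriority, TargetName ('.' means same name), SvcParams.
            "ResourceRecords": [{"Value": '1 . alpn="h3,h2"'}],
        },
    }],
}
# Applied with:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...", ChangeBatch=change_batch)
```

The `alpn` parameter is what lets a browser go straight to HTTP/3 without a round of protocol negotiation.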
B
Moving on to quantum technologies. Amazon Braket adds dynamic circuit capabilities on IQM Garnet. This capability enables mid-circuit measurements, or MCM, and feed-forward operations, allowing quantum researchers and developers to implement more advanced quantum algorithms. Dynamic circuits are a key building block for quantum error mitigation and correction. Now, if you've been following quantum technology, you'll know that this is the boundary at the moment: error mitigation and correction, how big you can get, how much you can process, et cetera. These dynamic circuits let customers perform active qubit reset to reuse qubits within a single circuit execution, and apply conditional operations based on measurement outcomes.
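To make the mid-circuit measurement and feed-forward idea concrete, here's a rough OpenQASM 3 sketch of a dynamic circuit. Whether a given construct runs on IQM Garnet depends on the device's supported operations, so treat this as illustrative only:

```python
# An OpenQASM 3 program with a mid-circuit measurement whose outcome
# conditions a later gate (feed-forward). Illustrative only; device
# support for specific constructs varies.
DYNAMIC_CIRCUIT = """
OPENQASM 3.0;
qubit[2] q;
bit c;
h q[0];
c = measure q[0];         // mid-circuit measurement
if (c == 1) { x q[1]; }   // feed-forward conditioned on the outcome
"""
# On Braket, OpenQASM source like this is wrapped in a program object and
# submitted to the device; the submission code is omitted here.
```

The conditional at the end is the feed-forward step, and replacing it with a conditional reset is how active qubit reuse works.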
C
Next up, security, identity and compliance. Amazon Cognito introduces AWS Web Application Firewall support in Cognito Managed Login. This new capability allows customers to protect their Managed Login endpoints, configured in Cognito user pools, from unwanted or malicious requests and web-based attacks. Managed Login is a fully managed, hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding. And now you get this additional layer of protection against threat vectors through integration with AWS WAF. AWS Firewall Manager announces security policy support for enhanced application layer (Layer 7) DDoS protection within AWS WAF. The Layer 7 DDoS protection is an AWS managed rule group that automatically detects and mitigates DDoS events for any application on Amazon CloudFront, Application Load Balancer and other AWS services supported by WAF. AWS WAF announces general availability of resource-level DDoS protection for Application Load Balancers. This new WAF DDoS protection is directly integrated with ALB as an on-host agent to detect and mitigate DDoS attacks from known malicious sources within seconds, while maintaining service quality for legitimate traffic. This feature efficiently rate-limits traffic based on both direct client IP addresses and proxy networks, by inspecting DDoS indicators and X-Forwarded-For headers. AWS re:Post Private launches channels for targeted and secure organizational collaboration, so teams can now collaborate on specific topics without exposing content to the entire community in their company. Companies can manage access through IAM, and you can have your own re:Post Private channels. These are for situations where people within the company want their own private channel for particular topics, so this seems like a super useful way of sharing knowledge.
A
Next up, we have two quick updates under storage. You can now attach Amazon S3 access points to your Amazon FSx for OpenZFS file systems so that you can access your file data as if it were in S3. With this new capability, your data in FSx for OpenZFS is effortlessly accessible to a broad range of applications, such as AI and machine learning or analytics services that work with S3, while your data continues to reside in the FSx for OpenZFS file system. Amazon S3 Express One Zone now supports tags for cost allocation and attribute-based access control. S3 Express One Zone is the single-Availability-Zone, high-performance S3 storage class that allows you to co-locate your compute and storage, especially for low-latency applications, for example AI and ML. With this new capability, you can add tags to S3 directory buckets to track and organize AWS costs using AWS Billing and Cost Management. Additionally, with attribute-based access control support, you can extend your tag-based access control to new and existing users, roles and directory buckets. This helps you eliminate frequent AWS Identity and Access Management or S3 bucket policy updates, simplifying how you scale access governance.
B
Wow, there have been a lot of updates in this particular show. We're not going to talk about our favorites, because you've been with us long enough, and for many of you we've gone over your commute time, which I know is a concern. Shruti, how do folks reach out to you if they want to get in touch?
A
Find me on LinkedIn or on X: Shruti Kopparkar.
B
And Jillian, what about yourself? LinkedIn, Gillian Ford, and old school: awspodcast@amazon.com if you want to send us feedback; we would love to get it. And until next time, keep on building.
AWS Podcast Episode #729 Summary: Aurora Storage Upgrades, DynamoDB Multi-Region Strong Consistency, and More
Released on July 14, 2025
In Episode #729 of the AWS Podcast, hosted by Simon Elisha alongside co-hosts Gillian Ford and Shruti Kopparkar, listeners are treated to a comprehensive overview of the latest updates and innovations from Amazon Web Services. This episode delves into significant enhancements across various AWS services, including databases, analytics, artificial intelligence, application integration, compute, networking, security, and storage. Here's a detailed breakdown of the key topics discussed:
Amazon Aurora PostgreSQL Storage Expansion
One of the standout updates is the expansion of Amazon Aurora PostgreSQL storage capacity. Aurora now supports up to 256 TB of storage volume, effectively doubling the previous limit of 128 TB.
Simon Elisha [00:44]: "Amazon Aurora PostgreSQL database clusters now support up to 256 TB of storage volume. This doubling of capacity is a direct response to customer needs for larger databases, making what was once unfeasible now entirely manageable."
Cost Efficiency Maintained
Despite the increase in storage capacity, Aurora maintains its cost-efficiency through a pay-as-you-go model, ensuring customers only pay for the storage they use. This is particularly beneficial for organizations with extensive data retention requirements.
DynamoDB Multi-Region Strong Consistency
Furthering database capabilities, DynamoDB has introduced multi-region strong consistency, now generally available. This feature ensures that item changes in a global table replica are synchronously replicated across regions, providing a consistent view of data worldwide.
Gillian Ford [02:16]: "With multi-region strong consistency, DynamoDB removes the traditional trade-off between scalability and consistency, allowing businesses to maintain real-time data accuracy across all regions effortlessly."
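As a sketch of how this surfaces in the API, the helper below builds request parameters for adding strongly consistent replicas to a global table. The `MultiRegionConsistency` field and its `STRONG` value are assumptions based on the launch announcement; verify them against the current DynamoDB API reference before relying on them.

```python
# Sketch of UpdateTable-style parameters for a global table with strong
# multi-Region consistency. Table and Region names are placeholders; the
# MultiRegionConsistency field is an assumption to verify in the API docs.
def global_table_update(table: str, regions: list[str]) -> dict:
    return {
        "TableName": table,
        "ReplicaUpdates": [
            {"Create": {"RegionName": r}} for r in regions
        ],
        "MultiRegionConsistency": "STRONG",  # vs. the default "EVENTUAL"
    }

params = global_table_update("orders", ["us-west-2", "eu-west-1"])
print(params["MultiRegionConsistency"])  # → STRONG
```

You would pass a dict like this to the DynamoDB `UpdateTable` call via your SDK of choice.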
Amazon RDS Custom for Oracle Multi-AZ Deployments
Amazon RDS Custom for Oracle now supports multi-Availability Zone (AZ) deployments, enhancing high availability for critical workloads. This ensures synchronous data replication between AZs and automatic failover capabilities in case of primary instance failures.
Integration with Amazon SageMaker
Amazon Aurora MySQL and Amazon RDS for MySQL have been integrated with Amazon SageMaker, facilitating automatic data extraction and loading into lakehouses. This seamless integration supports various analytics engines and machine learning tools, leveraging the compatibility with Apache Iceberg standards.
AWS Glue and Apache Spark Enhancements
AWS Glue now offers enhanced Apache Spark capabilities for AWS Lake Formation tables, allowing full table access and enabling data manipulation operations such as create, alter, delete, update, and merge. This grants users greater control over their data workflows.
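As a hypothetical illustration of the kind of write operation full table access now permits, the helper below composes a Spark SQL `MERGE` (upsert) statement. The table and column names are invented for illustration; in a Glue job you would execute the resulting string via `spark.sql(...)`.

```python
# Hypothetical example of a Spark SQL MERGE statement of the sort that full
# table access enables against Lake Formation-managed tables. Table and key
# names are placeholders, not real resources.
def upsert_statement(target: str, source: str, key: str) -> str:
    """Compose a MERGE INTO statement that upserts source rows into target."""
    return (
        f"MERGE INTO {target} t "
        f"USING {source} s ON t.{key} = s.{key} "
        "WHEN MATCHED THEN UPDATE SET * "
        "WHEN NOT MATCHED THEN INSERT *"
    )

stmt = upsert_statement("sales.orders", "staging.orders", "order_id")
print(stmt)
```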
Amazon SageMaker Innovations
HyperPod Training Operator: Introduced as a Kubernetes extension, this feature enhances the resilience of SageMaker HyperPod training by enabling surgical recovery of failed training resources without restarting entire jobs. This results in up to 40% reduction in training time.
AI Recommendations for Custom Assets: Amazon SageMaker Catalog now includes AI-driven recommendations for asset descriptions, reducing manual documentation efforts and improving metadata consistency.
AWS Clean Rooms Enhancements
AWS Clean Rooms has rolled out two major improvements, covered in the episode.
Amazon EventBridge Updates
Support for AWS CodeBuild Batch Builds: Enables triggering of batch build processes via EventBridge, streamlining continuous integration workflows.
AWS Security Incident Response Integration: Enhances the ability to monitor and orchestrate security-related events within incident cases, improving response strategies.
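To make the CodeBuild trigger concrete, here is a sketch of an EventBridge event pattern matching build state changes for one project. The `aws.codebuild` source and detail field names follow CodeBuild's documented event shape; the project name and statuses are placeholders.

```python
# Sketch of an EventBridge event pattern for CodeBuild build state changes,
# the sort of rule you could now point at a batch build target. The project
# name and status list are illustrative placeholders.
def build_state_pattern(project: str, states: list[str]) -> dict:
    return {
        "source": ["aws.codebuild"],
        "detail-type": ["CodeBuild Build State Change"],
        "detail": {
            "project-name": [project],   # match only this project's builds
            "build-status": states,      # e.g. only terminal states
        },
    }

pattern = build_state_pattern("my-app", ["SUCCEEDED", "FAILED"])
print(pattern["source"])  # → ['aws.codebuild']
```

A pattern like this would go in the `EventPattern` of an EventBridge rule whose target kicks off the downstream workflow.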
AWS Transfer Family Enhancements
AWS Transfer Family now supports IPv6 endpoints, aligning with the industry's transition towards IPv6 and ensuring broader compatibility and future-proofing of transfer protocols.
Amazon Q Developer Java Upgrade Transformation CLI
The new Java Upgrade Transformation CLI tool is now generally available, allowing developers to perform Java version upgrades at scale.
Amazon ECS Optimized Windows Server 2025 AMIs
AWS has released new Amazon Machine Images (AMIs) optimized for Windows Server 2025, available in two versions.
These AMIs are tailored for Windows container deployments on Amazon ECS, incorporating essential components and optimizations for streamlined containerized workload deployments.
Amazon EC2 C8gn Instances
The Amazon EC2 C8gn instances, powered by the latest Graviton4 processors, are now generally available.
Simon Elisha [34:49]: "With the introduction of C8GN instances, customers now have access to unparalleled compute, networking, and storage capabilities, making them ideal for demanding workloads like AI inference and data analytics."
Amazon Nova Canvas Updates
Virtual Try-On and Style Options: Users can now merge images to visualize clothing on shoppers or place furniture in living spaces through intelligent image generation.
Citations API and PDF Support for Claude Models: Available in Amazon Bedrock, this feature grounds AI responses in source documents, providing detailed references and enhancing the reliability of generated content.
Amazon Bedrock Flows Preview
Persistent Long-Running Execution: Extends workflow step execution time to 15 minutes.
Inline Code Support: Simplifies the execution of workflows without the need for external Lambda functions, reducing operational overhead.
Gillian Ford [15:43]: "The preview features in Bedrock Flows address critical pain points, allowing for longer executions and more streamlined workflow management without heavy lifting."
Amazon CloudFront and Route 53 Enhancements
HTTPS Resource Records: Enables DNS services like Amazon Route 53 to provide additional HTTP protocol information, improving connection performance and security.
Resolver Endpoint Capacity Utilization Metrics: Amazon Route 53 now offers CloudWatch metrics to monitor query capacity for resolver endpoints, ensuring efficient scaling and remediation when approaching service limits.
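As a rough sketch of how you might act on those metrics, the helper below builds CloudWatch alarm parameters watching a resolver endpoint's capacity utilization. The namespace and metric name here are assumptions for illustration; confirm the exact names in the Route 53 Resolver CloudWatch documentation.

```python
# Sketch of CloudWatch PutMetricAlarm-style parameters for a resolver
# endpoint capacity alarm. "AWS/Route53Resolver" and
# "EndpointCapacityUtilization" are assumed names, not verified values;
# the endpoint ID and threshold are placeholders.
def capacity_alarm(endpoint_id: str, threshold_pct: float) -> dict:
    return {
        "AlarmName": f"resolver-capacity-{endpoint_id}",
        "Namespace": "AWS/Route53Resolver",           # assumed namespace
        "MetricName": "EndpointCapacityUtilization",  # assumed metric name
        "Dimensions": [{"Name": "EndpointId", "Value": endpoint_id}],
        "Statistic": "Maximum",
        "Period": 300,                # evaluate over 5-minute windows
        "EvaluationPeriods": 3,       # require 3 breaching periods
        "Threshold": threshold_pct,   # e.g. alarm at 80% utilization
        "ComparisonOperator": "GreaterThanThreshold",
    }

alarm = capacity_alarm("rslvr-in-example", 80.0)
print(alarm["Threshold"])  # → 80.0
```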
Amazon Cognito Enhancements
Amazon Cognito introduces AWS WAF support in Cognito Managed Login, allowing customers to protect their managed sign-in and sign-up endpoints from unwanted or malicious requests and web-based attacks.
AWS WAF and Firewall Manager Updates
Enhanced Layer 7 DDoS Protection: Introduces an AWS-managed rule group that detects and mitigates DDoS attacks on services like Amazon CloudFront and Application Load Balancer.
Resource-Level DDoS Protection for ALB: Provides on-host agents to detect and mitigate DDoS attacks swiftly, maintaining service quality for legitimate traffic.
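As a rough sketch of the kind of Layer 7 rate limiting involved, the helper below builds a WAFv2 rate-based rule statement as plain JSON. The rule name, request limit, and priority are arbitrary examples, not AWS defaults.

```python
# Illustrative sketch: a WAFv2 rate-based rule of the kind you might put in
# a web ACL protecting a login endpoint. The name, limit, and priority are
# arbitrary example values.
def rate_limit_rule(name: str, limit: int, priority: int) -> dict:
    """Build a WAFv2 rate-based rule definition as a plain dict."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {
                "Limit": limit,            # requests per evaluation window
                "AggregateKeyType": "IP",  # rate-limit per client IP
            }
        },
        "Action": {"Block": {}},           # block offenders outright
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = rate_limit_rule("login-rate-limit", 2000, 0)
print(rule["Statement"]["RateBasedStatement"]["Limit"])  # → 2000
```

A rule like this slots into the `Rules` list of a web ACL, which you would then associate with the protected resource.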
AWS re:Post Private Channels
AWS re:Post Private launches channels for targeted, secure organizational collaboration: teams can collaborate on specific topics without exposing content to the entire company community, with access managed through IAM.
Simon Elisha [36:12]: "Security is paramount, and these updates enhance our ability to protect applications and user data effectively, ensuring robust defenses against evolving threats."
Amazon FSx for OpenZFS Integration with S3 Access Points
Users can now attach Amazon S3 access points to Amazon FSx for OpenZFS file systems, enabling seamless access to file data as if it resides in S3. This integration broadens application compatibility, especially for AI, machine learning, and analytics services.
Amazon S3 Express One Zone Enhancements
Tagging Support: Allows the addition of tags to S3 directory buckets for better cost allocation and organization via AWS Billing and Cost Management.
Attribute-Based Access Control: Extends tag-based access controls to new and existing users, roles, and directory buckets, simplifying access governance.
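To illustrate the ABAC pattern, the policy below allows access to directory buckets only when the bucket's `team` tag matches the caller's `team` principal tag. The condition keys follow the general `aws:ResourceTag`/`aws:PrincipalTag` pattern and the action shown is one example; check the S3 Express One Zone documentation for the exact keys and actions it supports.

```python
import json

# Illustrative ABAC policy sketch for S3 directory buckets: grant access
# only when the bucket's "team" tag equals the caller's "team" principal
# tag. Condition keys and the action are assumptions to verify against the
# S3 Express One Zone documentation.
def team_scoped_policy() -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3express:CreateSession"],
            "Resource": "arn:aws:s3express:*:*:bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }],
    }

print(json.dumps(team_scoped_policy(), indent=2))
```

With a policy like this, granting a new team access becomes a matter of tagging, not of editing bucket policies.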
AWS Config and Control Tower Enhancements
Classification Information from AWS Control Tower Catalog: AWS Config Rules now include mappings to compliance frameworks like CIS v8, FedRAMP, and NIST, facilitating easier compliance management.
AWS Control Tower and Control Catalog APIs via PrivateLink: Enables invoking APIs from within a VPC without traversing the public internet, enhancing security and connectivity.
Amazon CloudWatch and AWS CloudTrail Integration
CloudTrail now records CloudWatch metric API activity such as putMetricData and getMetricStatistics, providing comprehensive visibility into metric activity for security and operational troubleshooting.
Additional Updates
The episode also covers updates to Amazon WorkSpaces Personal and AWS IoT Device Management.
Amazon Braket Dynamic Circuits
The introduction of dynamic circuit capabilities on IQM Garnet enables advanced quantum operations such as mid-circuit measurements and conditional operators based on measurement outcomes. These features are pivotal for quantum error mitigation and correction, pushing the boundaries of current quantum computing capabilities.
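To make mid-circuit measurement and feed-forward concrete, here is a deliberately tiny toy simulation (plain Python, not the Braket SDK): a qubit is measured mid-circuit, and a classically controlled bit-flip is applied only when the outcome is 1, actively resetting the qubit to |0> so it can be reused.

```python
import random

# Toy illustration (NOT the Braket SDK) of mid-circuit measurement with
# feed-forward. A qubit that is |1> with probability p1 is measured, and
# the measurement outcome classically controls an X (bit-flip) gate that
# resets the qubit to |0> for reuse later in the circuit.
def measure(p1: float, rng: random.Random) -> int:
    """Measure a qubit that is |1> with probability p1; return 0 or 1."""
    return 1 if rng.random() < p1 else 0

def active_reset(p1: float, rng: random.Random) -> int:
    outcome = measure(p1, rng)  # mid-circuit measurement
    if outcome == 1:            # feed-forward: classical control...
        outcome ^= 1            # ...conditionally applies the bit-flip
    return outcome              # qubit is now deterministically |0>

rng = random.Random(7)
results = [active_reset(0.5, rng) for _ in range(100)]
print(set(results))  # → {0}: every qubit ends in the reset state
```

The real feature performs this measure-and-conditionally-flip sequence on hardware within a single circuit execution, which is what makes qubit reuse and conditional operations possible.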
Episode #729 of the AWS Podcast delivers a wealth of information on the latest AWS service updates and innovations. From significant storage and database enhancements to advanced artificial intelligence capabilities and robust security improvements, AWS continues to empower developers and IT professionals with tools that foster scalability, efficiency, and security. Whether you're involved in large-scale data management, machine learning, application development, or securing your infrastructure, this episode underscores AWS's commitment to providing comprehensive solutions tailored to evolving technological needs.
For more insights and regular updates, listeners are encouraged to connect with the hosts: Shruti Kopparkar on LinkedIn and X, Gillian Ford on LinkedIn, or by emailing feedback to awspodcast@amazon.com.
Until next time, keep on building!