
Simon takes you through a big list of cool new things - something for everyone.
This is episode 736 of the AWS Podcast, released on September 8th, 2025. Hello everyone and welcome back to the AWS Podcast. Simon Elisha here with you. Great to have you back. Flying solo today. Gillian is on vacation and Shruti has moved on to another opportunity. She's doing some cool things elsewhere. So we of course thank Shruti for all the great work she did on the podcast and we wish her well. Now she'll be a listener rather than a host, but I'm sure she'll still be into the podcast. Lots and lots of updates this week, so let's get going. Amazon EMR on EC2 has added Apache Spark native fine-grained access control via Lake Formation and support for AWS Glue Data Catalog views. This means you can improve your data security, simplify your access management and make sharing a lot easier across your environment. Amazon EMR has also announced S3A as the default connector. This optimizes performance for Apache Hadoop, Apache Spark and Apache Hive workloads on Amazon EMR. This enhances the open source S3A architecture with AWS specific optimizations to help organizations process large scale data more efficiently. So this is good in terms of performance, in terms of support, and lots of other cool features. Amazon OpenSearch Serverless now supports attribute-based access control, or ABAC, for the data plane APIs, which means it's easier to manage access control across the board. Are you detecting a trend so far? AWS Clean Rooms now supports error message configurations for PySpark analysis, so it's easier for you to develop and test sophisticated analytics faster in a collaboration. You can now specify how much information appears in error messages for analyses that use PySpark and the Python API for Apache Spark. Because if you think about it, log messages can share a lot of information. You may want that, you may not. You get to choose. Amazon OpenSearch Service now supports I8g instances.
This is the latest generation of storage optimized instances with the best performance for storage intensive workloads. Powered by AWS Graviton4 processors, they have 60% better compute performance compared to the previous generation I4g. They use the latest third-generation AWS Nitro solid state disks with local NVMe storage, which give you up to 65% better real-time storage performance per terabyte, along with up to 50% lower storage I/O latency and 60% lower storage I/O latency variability. Have a look at this. I always recommend that when a new instance type is supported you run some tests, check your configuration, and in most cases you're going to want to swap. Amazon SageMaker Lakehouse now supports tag-based access control for federated catalogs. TBAC enables simplified permission management by logically grouping catalog resources using tags, which allows scaling permissions across datasets with a minimum set of permissions. It also facilitates data sharing across different accounts, so it just makes things easier. Amazon QuickSight now supports connectivity to Google Sheets, so there is now a connector to get straight into that. And Amazon QuickSight has also expanded limits on calculated fields: the number of calculated fields allowed is increased from 500 to 2,000 per analysis and from 200 to 500 per dataset, so more analysis for you. Amazon Managed Service for Apache Flink now supports customer managed keys, and Amazon Managed Workflows for Apache Airflow now supports downgrading to minor versions as well, so you can do upgrades and downgrades automatically. Let's talk about application integration. AWS End User Messaging now supports international sending for US toll-free numbers, so you can send SMS messages to over 150 country destinations, including Canada, using your US toll-free number. So this means you can have one number to send to any supported country.
And Amazon Managed Service for Prometheus now adds direct PagerDuty integration, so you don't have to create a custom Lambda function to connect into that capability. Let's talk artificial intelligence. The Count Tokens API is now available in Amazon Bedrock, enabling you to determine the token count for a given prompt or input sent to a particular model ID prior to performing any inference. By surfacing a prompt's token count, the Count Tokens API allows you to more accurately project your costs and gives you better transparency and control over your AI model usage. At launch, the Count Tokens API supports Claude models, with the functionality available in all regions where these models are supported. And of course more models will be coming. Amazon Bedrock now provides simplified access to OpenAI open weight models: these are two new OpenAI models with open weights in Amazon Bedrock. So if you want to use OpenAI's GPT-OSS-120B and GPT-OSS-20B models, you can do it without having to manually activate access to anything. Amazon Polly has launched more synthetic generative voices in English, French, Polish and Dutch. And we're also announcing the AWS Billing and Cost Management MCP Server. Now MCP servers are super useful for getting stuff done. This is now available in the AWS Labs GitHub repository and allows customers to analyze their historical spending, find cost optimization opportunities and estimate costs of new workloads using the AI agent or assistant of your choice. Amazon SageMaker has introduced account-agnostic, reusable project profiles. This means administrators can define project configurations once and reuse them across multiple AWS accounts and regions, so they're no longer tied to a specific AWS account or region. Amazon Q Developer now supports MCP admin control. This provides organizations with the granular control needed to manage external resources safely and effectively.
With this launch, an admin has the ability to enable or disable MCP functionality for all the Q Developer clients in that organization. So if an administrator disables the functionality, users won't be able to add any MCP servers, nor will any previously defined servers be initialized. Q Developer checks and applies admin settings at the start of each session and every 24 hours while the client is running. And Amazon Bedrock Data Automation, or BDA, now supports five additional languages for document workloads in addition to English: Portuguese, French, Italian, Spanish and German. With this launch, customers can process documents in these new languages and create blueprint prompts and instructions in these new languages when using BDA custom output for documents. AWS HealthOmics now supports task-level timeout controls for Nextflow workflows, so customers can now set fine-grained controls for Nextflow workflow tasks and enable automated run cancellation if specific tasks take longer than expected. And AWS HealthOmics now also supports third-party container registries for private workflows. This is enabled through the Elastic Container Registry (ECR) and allows you to automatically translate third-party container URIs to ECR URIs. So you can more easily access containerized tools from popular third-party registries without having to manually migrate them to a private ECR registry or make any changes to the workflow definition. Amazon SageMaker HyperPod now supports the Amazon EBS CSI driver for persistent storage. This capability allows customers to create, attach and manage EBS volumes through Kubernetes persistent volume claims, providing storage that persists across pod restarts and node replacements. Now obviously when you're building and testing models, these are long-running workflows and stuff breaks along the way. This makes it a lot easier to manage in your environment.
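As a quick aside on the Count Tokens API mentioned earlier: the main use case Simon describes is projecting cost before you run inference. Here is a minimal sketch of that projection step in Python. The per-1K-token prices are illustrative assumptions, not real Bedrock pricing, and the commented boto3 call shape is hypothetical.

```python
# Sketch: projecting inference cost from a token count, as the Count Tokens
# API described above enables. Prices here are illustrative assumptions.

def project_cost(input_tokens: int, output_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate the cost of one inference call from its token counts."""
    return (input_tokens / 1000) * price_in_per_1k + \
           (output_tokens / 1000) * price_out_per_1k

# In practice you would first obtain input_tokens from the Count Tokens API,
# e.g. (hypothetical call shape, check the Bedrock API reference):
#   resp = bedrock.count_tokens(modelId=model_id, input={...})

cost = project_cost(input_tokens=1200, output_tokens=400,
                    price_in_per_1k=0.003, price_out_per_1k=0.015)
print(f"${cost:.4f}")  # → $0.0096
```

Multiply this out across a planned workload and you get the kind of cost transparency the API is aimed at.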
Amazon SageMaker Unified Studio has added S3 file sharing options to projects. This is a simplified file storage option giving data workers an easier way to collaborate on their analytics and machine learning workflows without depending on Git. So now you can choose between Git repositories (GitHub, GitLab or Bitbucket Cloud) or Amazon Simple Storage Service buckets for sharing code files between members of a project. SageMaker HyperPod now supports customer managed KMS keys for EBS volumes, so you can have full control over your encryption keys while keeping your high-performance compute capabilities. And Amazon Bedrock now supports batch inference for Anthropic Claude Sonnet 4 and OpenAI GPT-OSS models. With batch inference you can run multiple inference requests asynchronously, improving performance on large datasets at 50% of the on-demand inference pricing. So choosing your workflow and your workload is really important: some things work better in batch, and if you've got batch stuff, this is the way to do it. Let's talk business applications. Amazon Connect now offers generative text-to-speech voices, so you can deliver natural, human-like and expressive conversations with your customers. You have access to 20 different generative enhanced voices across languages like English, French, Spanish, German and Italian. Amazon Connect now provides out-of-the-box embedding of tasks and emails into your websites and applications via a nice Amazon Connect communications widget. So customers can do things like submit a callback request outside business hours or send an email through web forms. It just means you don't have to develop that yourself. Amazon Connect now supports multi-user web, in-app and video calling. So now multiple users can join the same session with an agent through a web browser or mobile applications.
And you can also dynamically add participants during a live call, or multiple participants can join a scheduled session with the same agent. This is really useful to support things like joint financial planning, family medical conversations and consultations, legal representation, all that stuff where you've got to get people together. And Amazon Connect now supports recurring activities in agent schedules, so you can now schedule activities like a daily standup at 8am or a team meeting every Monday at 9am, things that automatically get added to the schedule. AWS B2B Data Interchange now supports custom validation rules for X12 EDI documents, enabling you to expand and alter the validation logic of the X12 ANSI standard to align with the agreements you have with your trading partners. This enables the automated validation, transformation and generation of electronic data interchange (EDI) documents, which really still run the world in many places. So this gives you far more control over what you're doing. Let's talk about compute. AWS Batch now supports default instance type options: you can set two new default instance type options, default-x86_64 (which is the default default) and default-arm64, and Batch will automatically select the most cost-effective instance type across different generations based upon your job queue requirements. Where previously you could only use the optimal instance type option, now you get a whole family of instance types. AWS Deadline Cloud now supports Cinema 4D and Redshift on Linux service-managed fleets. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects for things like film, television and web content. Previously this was only available on Windows fleets; now you can have Linux managed fleets as well, which is very exciting.
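The AWS Batch default instance type behavior described above, picking the most cost-effective type from a family of candidates rather than one fixed type, can be sketched like this. The instance names and hourly prices are illustrative assumptions, not quoted AWS pricing, and the real selection logic inside Batch is more sophisticated than a simple price minimum.

```python
# Sketch of the idea behind AWS Batch's new default instance type options:
# a family of candidate instance types is considered and the most
# cost-effective is chosen. Names and prices are illustrative assumptions.

CANDIDATES = {
    "default-x86_64": {"m5.large": 0.096, "m6i.large": 0.096, "m7i.large": 0.1008},
    "default-arm64": {"m6g.large": 0.077, "m7g.large": 0.0816},
}

def pick_instance(default_option: str) -> str:
    """Return the cheapest candidate instance type for the given default."""
    family = CANDIDATES[default_option]
    return min(family, key=family.get)

print(pick_instance("default-arm64"))  # → m6g.large
```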
AWS App Runner expands support for IPv6 compatibility, so if you're on the IPv6 train, this one has joined you at the station. Amazon EKS enables namespace configuration for AWS and community add-ons. With namespace configuration you can now specify a custom namespace during add-on installation, enabling better organization and isolation of add-on objects within your EKS cluster. Amazon EC2 Mac Dedicated Hosts now support host recovery and reboot-based host maintenance. Reboot-based host maintenance automatically stops and restarts instances on replacement hosts when scheduled maintenance events occur, which means you don't need to manually manage your maintenance windows. This just enhances reliability and manageability, particularly when you're running a fleet of these things for your testing and management. Amazon EKS has introduced on-demand insights refresh, so you get on-demand refresh of cluster insights, which allows customers to more efficiently test and validate whether applied recommendations have successfully taken effect. We're happy to announce the general availability of new general purpose Amazon EC2 M8i and M8i Flex instances. These are powered by custom Intel Xeon 6 processors available only on AWS, and they give you the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. Again, always test your workloads against some of these latest instances, because you can get great performance and price-performance improvements. There are also new memory optimized Amazon EC2 R8i and R8i Flex instances, so those might also suit your particular workload. Now let's talk about databases. Amazon RDS for MariaDB now supports MariaDB 11.8 with MariaDB vector support. Vectors are all the thing when you're doing AI and RAG and stuff like that, so that is now built in.
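The vector support just mentioned for MariaDB is all about similarity search: store embeddings next to your rows, then rank them by distance to a query vector. Here is a toy sketch of that ranking step in plain Python (the document names and vector values are made up for illustration; in MariaDB itself you would do this with its vector column type and distance functions).

```python
# Sketch of the similarity ranking that vector support enables for
# RAG-style workloads. Documents and embeddings here are toy values.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.7, 0.7, 0.0],
    "doc-c": [0.0, 0.0, 1.0],
}
query = [1.0, 0.1, 0.0]

# Rank documents by similarity to the query embedding, best match first.
ranked = sorted(docs, key=lambda k: cosine_similarity(docs[k], query), reverse=True)
print(ranked[0])  # → doc-a
```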
Amazon RDS for SQL Server now supports Kerberos authentication with self-managed Active Directory. We're also happy to announce Amazon Aurora MySQL 3.10 as the long-term support, or LTS, release. This means that database users on this LTS release can stay on the same minor version for at least three years, or until the end of standard support for the major version, whichever is sooner. Amazon RDS Custom for SQL Server now supports new general distribution releases for Microsoft SQL Server 2019 and 2022. Amazon RDS for Oracle now supports a new certificate authority and cipher suites for SSL and OEM agent options, and Amazon RDS for PostgreSQL now supports delayed read replicas. This means you can specify a minimum time period that a replica database lags behind a source database. Now why might you do this? This feature creates a time buffer that helps protect against data loss from human errors like accidental table drops or unintended data modifications. Not that you would ever do such a thing, but it's nice to know it's there. Amazon RDS for Oracle now supports redo transport compression, a feature that compresses redo data before it's transmitted to a standby database. This improves redo transport performance because you're moving less data. Aurora DSQL now supports resilience testing with AWS Fault Injection Service, so you can now simulate real-world scenarios that disrupt your connections to Aurora DSQL clusters, things like regional failures, so you can see what would happen. Always good to test in advance. Amazon RDS for Db2 now supports read replicas. You can add up to three of these read replicas, and they support read-only applications without overloading the primary database instance. Amazon Neptune Analytics now introduces a stop-start capability.
This is a new capability that lets you pause and resume your graph workload on demand, meaning you can reduce costs during idle periods without losing data or configuration. This is really interesting because many customers use Neptune Analytics for periodic graph workloads like fraud detection, recommendation engines or research simulations that just run periodically. So instead of having to leave it running, you can now stop it and restart it as needed. RDS Data API now supports IPv6, another step in the year of IPv6 that we're going through. Amazon Neptune now supports bring-your-own knowledge graph for retrieval augmented generation (RAG). This capability allows customers to connect their existing knowledge graphs to large language models, meaning you can get more accurate, content-rich and explainable responses. A quick update on the topic of gaming: Amazon GameLift Streams now offers enhanced flexibility with default applications. You can now create new stream groups without specifying a default application, and modify or remove the default application in an existing stream group. Let's talk Internet of Things. AWS IoT Core now supports customer managed keys through the AWS Key Management Service, so again, bring your own keys. And the AWS IoT ExpressLink Technical Specification v1.3 has been announced. The updated specification provides hardware manufacturers who design IoT devices with new features for Bluetooth Low Energy communication and a new group of commands that give host processors control of the module's I/O pins using the Bluetooth Low Energy expanded feature set. AWS IoT ExpressLink makes it easy to advertise a device's presence and capabilities and pair securely with other devices in a local personal area network. Let's talk management and governance. Amazon VPC IPAM has added in-console CloudWatch alarm management, so you can just be in the same place and take action straight away.
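The "time buffer" that delayed read replicas give you, mentioned in the database updates above, is easy to reason about: an accidental change can still be intercepted as long as the replica has not yet replayed it. A minimal sketch of that window check, with made-up timestamps:

```python
# Sketch of the protection window a delayed read replica provides, as
# described above for RDS for PostgreSQL: a mistake on the source can still
# be stopped on the replica while it sits inside the configured delay.
from datetime import datetime, timedelta

def replica_still_protects(error_time: datetime, now: datetime,
                           replica_delay: timedelta) -> bool:
    """True if the mistake has not yet been replayed on the delayed replica."""
    return now - error_time < replica_delay

delay = timedelta(hours=1)
dropped_at = datetime(2025, 9, 8, 10, 0)

# 20 minutes after an accidental table drop, there is still time to pause
# replay on the replica and recover:
print(replica_still_protects(dropped_at, datetime(2025, 9, 8, 10, 20), delay))  # → True
# 90 minutes later, the drop has already been applied on the replica:
print(replica_still_protects(dropped_at, datetime(2025, 9, 8, 11, 30), delay))  # → False
```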
Amazon CloudWatch has expanded region support for natural language query generation and query results summarization. So 15 new regions now let you interactively search and analyze your logs with the Logs Insights query language, the OpenSearch Piped Processing Language, and the OpenSearch Structured Query Language. So lots of ways you can get into your logs and figure out how you want to run things. This is a nice small one, but a useful one: the AWS Management Console now supports assigning a color to an AWS account for easier identification, so AWS customers now have an easy way to identify their accounts at a glance using the account color settings. You can assign a color to your AWS account, like red for production or yellow for testing, and it appears in the console's navigation bar for all authorized users in that account, which gives you a quick visual cue. Which is kind of nice. AWS Billing and Cost Management now provides customizable dashboards, so now you can visualize and analyze your AWS spending in one consolidated view. This combines data from AWS Cost Explorer and Savings Plans and Reserved Instance coverage and utilization reports in the one place. Let's talk migration and transfer. The AWS Transfer Family Terraform module now supports deployment of SFTP connectors to transfer files between Amazon S3 and remote SFTP servers. This adds to the existing support for deploying SFTP endpoints using Terraform, so you can infrastructure-as-code your transfers. And AWS Transform for .NET has added support for Azure Repos and artifact feeds for NuGet packages, so you can connect your repositories directly. A few updates for networking and content delivery. AWS Client VPN now supports connectivity to IPv6 resources; it's all happening, people. AWS Client VPN has extended OS support to Windows ARM64 with version 5.3.0, and AWS IAM has launched new VPC endpoint condition keys for network perimeter controls. This is an interesting one.
These are new global condition keys, things like VPC endpoint account, organization path and organization ID keys, that help you ensure that requests to your AWS resources by your identities are made through your VPC endpoints. And we have now extended traffic mirroring support to new instance types. Amazon VPC Traffic Mirroring lets you replicate the network traffic from EC2 instances within your VPC to security monitoring appliances, so you can do content inspection, threat monitoring, et cetera. Basically anything that is a Nitro v4 instance can now support this as well. Let's talk quantum technologies. Amazon Braket has introduced a local device emulator for verbatim circuits. This feature accelerates development by providing early feedback on circuit compatibility and expected behavior under realistic noise conditions, helping customers validate their quantum programs and develop noise-aware algorithms without incurring hardware costs. It's pretty cool. Some updates for security, identity and compliance. AWS Network Firewall has launched a received bytes metric for its stateless and stateful engines. This new feature lets customers monitor the total incoming traffic bytes inspected by their firewalls, giving you valuable insights into your traffic patterns. You can understand what's going on. Are things changing? What's my capacity planning looking like? Can I reduce costs? All that good stuff. AWS Security Incident Response has introduced integration with ITSM tools like Jira and ServiceNow, which means you can respond faster. These integrations provide bi-directional synchronization, allowing you to create, update and delete issues in either platform, with automatic data replication into AWS Security Incident Response cases.
And speaking of AWS Security Incident Response, it is now Health Information Trust Alliance (HITRUST) Common Security Framework certified, which means it meets the stringent security and privacy requirements established by HITRUST for managing sensitive data. Amazon Verified Permissions now supports Cedar 4.5. This allows you to use the latest Cedar features, including the is operator, which lets you grant access based on resource types. This also helps you catch potential type-related errors early in policy development. There are lots of other changes to Cedar on the Cedar release pages, so check it out. And now let's talk storage. Amazon EBS has launched snapshot copy for AWS Local Zones. Snapshot copy takes a point-in-time snapshot of an EBS volume, stored in Amazon S3, and copies it to the region or to another Local Zone. Amazon S3 has improved AWS CloudFormation and AWS CDK support for S3 Tables, so you can now do a lot more and consistently deploy and manage them. And Amazon S3 has introduced a new way to verify the content of stored datasets. You can efficiently verify billions of objects and automatically generate integrity reports to prove that your datasets remain intact over time using S3 Batch Operations. This capability works with any object stored in S3, regardless of storage class or object size, without the need to restore or download data. Whether you're verifying objects for data preservation or doing audit checks, this saves a lot of time. Basically, with S3 Batch Operations you can compute a checksum for your objects, and it's very easy to get started. This complements S3's built-in validation, meaning you can independently verify your stored data at any time. And Amazon S3 Express One Zone now supports resilience testing with AWS Fault Injection Service, so again you can verify that the things you put in place will work under those failure conditions. So lots of cool and interesting updates there.
I hope there was something for you. As always, we do love to get your feedback. aws-podcast.com is the place to do it, and until next time, keep on going.
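The S3 dataset integrity verification described in the episode is, conceptually, checksum comparison at scale: record a checksum per object when the data is written, then later recompute and compare. Here is a tiny local sketch of that idea; the object keys and data are made up, and at real scale S3 Batch Operations computes the checksums for you without downloading anything.

```python
# Sketch of checksum-based integrity verification, the idea behind the S3
# Batch Operations capability described above. Keys and data are toy values.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of an object's content."""
    return hashlib.sha256(data).hexdigest()

# Checksums recorded when the dataset was written (the "known good" state).
manifest = {"reports/2025-q1.csv": sha256_hex(b"id,amount\n1,100\n")}

def verify(key: str, data: bytes) -> bool:
    """True if the object's current content still matches its recorded checksum."""
    return sha256_hex(data) == manifest[key]

print(verify("reports/2025-q1.csv", b"id,amount\n1,100\n"))  # → True, intact
print(verify("reports/2025-q1.csv", b"id,amount\n1,999\n"))  # → False, altered
```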
Release Date: September 8, 2025
Host: Simon Elisha
This episode of the AWS Podcast is a fast-paced roundup of the latest AWS service updates, feature launches, and enhancements across AI, compute, storage, security, database, developer tools, and more. Host Simon Elisha presents new capabilities for developers, IT pros, and data teams—from the release of Amazon Bedrock APIs and new generative AI features to major advances in storage, connectivity, application integration, and beyond.
"This enhances the open source S3A architecture with AWS specific optimizations to help organizations process large scale data more efficiently." (01:15)
"The Count Tokens API allows you to more accurately project your costs and gives you better transparency and control over your AI model usage." (13:44)
AWS Batch
AWS Deadline Cloud
AWS App Runner
Amazon EKS and EC2 Mac Dedicated Hosts
New EC2 Instance Launches
"Graviton 4 processors... up to 65% better real time storage performance per terabyte, whilst up to 50% lower storage IO latency." (05:25)
"This feature creates a time buffer that helps protect against data loss from human errors like accidental table drops or unintended data modifications." (47:38)
AWS IoT Core
IoT ExpressLink
Amazon Braket Quantum
"You can efficiently verify billions of objects and automatically generate integrity reports to prove that your datasets remain intact over time using S3 batch operations." (58:35)
On testing new instance types:
"I always recommend that when a new instance type is supported you run some tests, check your configuration, and in most cases you’re going to want to swap." (06:40)
On batch and real-time workflows:
"Some things work better in batch and if you’ve got batch stuff, this is the way to do it." (26:20)
On S3 data integrity:
"You can efficiently verify billions of objects and automatically generate integrity reports to prove that your datasets remain intact over time using S3 batch operations." (58:35)
On delayed read replicas for PostgreSQL:
"This feature creates a time buffer that helps protect against data loss from human errors like accidental table drops or unintended data modifications. Not that you would ever do such a thing, but it's nice to know it's there." (47:38)
This episode is densely packed with AWS product enhancements and industry-focused tools, all covered succinctly and with actionable insights for architects, devs, and IT leaders. If you’re designing on AWS, these updates expand your toolkit, offer greater control, and can directly impact performance, cost, and ease of management.
“Lots of cool and interesting updates there. I hope there was something for you.” — Simon Elisha (59:02)
For more details and to provide feedback:
aws-podcast.com