
B
Hi, Craig, nice to meet you. Thanks for having me here. My name is Baris. I'm the head of product for AI at Snowflake. I joined Snowflake about two and a half years ago, when Snowflake acquired my company. Before that I was at Google for a long time, most recently running AI initiatives focused on the Google Assistant product for multiple years, for a decade, before AI was super hot. I love the space; I've been in it for a long time. At Snowflake, my job is to build out Snowflake's AI strategy as well as a series of products for our customers. AI is incredibly exciting. We've been building a lot of product over the last two and a half years, anywhere from building AI to run right next to data, to being able to talk to your data in natural language, democratize that access, and glean a lot more insight from data. That's the high-level goal of the set of products we're building at Snowflake.
A
Yeah. And I have to apologize, I said Boris. I'm sure you get that a lot. It's pronounced Barish?
B
It's Barish. Yeah.
A
Okay. Can you walk me through what it's been like? I mean, the last two or three years have just been crazy and it doesn't seem to be letting up. How has Snowflake AI evolved?
B
Yeah, certainly. The journey started about two and a half years ago. The feedback we got from our customers was that AI is incredibly transformative, but governance is equally important, and for Snowflake, data governance and security are of utmost importance for our customers. So the request was to run AI next to data, versus trying to move data to where the AI is, which creates a series of governance and security challenges. That's what we set out to do. What that means is we're bringing in all the large language models, from OpenAI, Anthropic, Google Gemini, Meta, and so forth, to run within the Snowflake security boundary. That brings a lot of governance benefits: all of the security and governance that our customers put on their data automatically gets respected by the AI solutions. On top of this, we have two flavors of products. One: Snowflake is an analytics engine, and we're bringing AI to run inside that analytics engine. That means any of our customers who want to analyze large amounts of data can now use AI to do that, for classification, extraction, and summarization of large amounts of data they have on Snowflake. Customer calls, feedback, messages, a lot of text, images, and video on the platform can now be processed with AI very easily, at large scale, in batch. Then we have a series of products that we call data agents, making it easy for customers to talk to their data using natural language.
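The batch pattern Baris describes, running a classification prompt over every row of a table, can be sketched in a few lines of Python. `call_llm` is a stand-in for a governed model call, mocked here with keyword rules so the sketch is self-contained; the row shape is an assumption for illustration, not Snowflake's actual API.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a governed LLM completion call (mocked with keyword rules)."""
    text = prompt.lower()
    if "refund" in text or "broken" in text:
        return "complaint"
    return "praise"

def classify_feedback(rows: list[dict]) -> list[dict]:
    """Attach an LLM-derived label to each feedback row, in batch."""
    out = []
    for row in rows:
        label = call_llm(f"Classify this customer feedback: {row['text']}")
        out.append({**row, "label": label})
    return out

feedback = [
    {"id": 1, "text": "The product arrived broken, I want a refund."},
    {"id": 2, "text": "Great service, very happy!"},
]
labeled = classify_feedback(feedback)
```

In a warehouse setting the same pattern would run as a SQL function over a table rather than a Python loop, but the shape is the same: one prompt per row, one structured label out.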
A
And when you said you bring OpenAI or Anthropic in, is that into Snowflake's cloud? How do you do that? Because OpenAI isn't going to give you the model.
B
Yeah, so we have deep partnerships with the model providers. Snowflake runs on top of all three cloud providers: Google, AWS, and Azure. So we are essentially working with the likes of Anthropic and OpenAI to use their models on those cloud providers, inside the Snowflake security boundary, with the data governance requirements that our customers expect. This means there is an element of privacy and data residency guarantees that we give our customers.
A
So you're not sending data out to the model providers, through an API or something like that?
B
Essentially, only Snowflake has access to the data. Of course the models are processing the data, but no data gets stored, and no data can be used for training purposes and so forth.
A
Yeah. And what does that philosophy of bringing AI to the data mean, and why is it important for the enterprise? Is it purely security, or is it also latency?
B
Security is important for regulatory purposes; data residency requirements need to be respected. And our customers are putting a lot of governance on the full stack, from the data perspective. The best example: we built our own sales agent inside the company. All of our sellers are using an agent, and when one seller asks about their customers and their customer-specific data, the answer should be different from another seller asking the same thing. Setting that level of granular access control so that, by design, it's all respected makes things both governed and easy to build with governance in mind. That's the philosophy, in addition to the security benefits and the fact that you don't have to replicate both your data and your governance in multiple places and manage them in multiple places.
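The per-seller behavior described above amounts to row-level access control: filter the data by the asking user's policy before any answer is formed. A minimal sketch, with a hypothetical ACL mapping sellers to account names; none of these names come from Snowflake's actual implementation.

```python
# Hypothetical access-control list: which accounts each seller may see.
ACL = {
    "seller_a": {"Acme"},
    "seller_b": {"Globex"},
}

# A toy accounts table.
ACCOUNTS = [
    {"account": "Acme", "revenue": 120},
    {"account": "Globex", "revenue": 80},
]

def visible_rows(user: str, rows: list[dict], acl: dict) -> list[dict]:
    """Return only the rows the user's access policy allows."""
    allowed = acl.get(user, set())
    return [r for r in rows if r["account"] in allowed]
```

Because the filter runs before the agent sees any data, the same question from two sellers yields two different, correctly scoped answers by construction.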
A
Yeah. And can you talk about agents? There's a wave of agentic tools and platforms coming out, and Snowflake is also in that space. At a high level, can you talk about how Snowflake approaches agentic AI and how you differentiate yourself in the market?
B
The way we think about it is that ultimately AI solutions are only as good as the data and context they have access to. So we focused a lot on having very high-quality data agents, by focusing on the retrieval quality of both structured data (text-to-SQL generation, being able to talk to structured data) and of documents and unstructured data. Our agents are built on top of a product we call Snowflake Intelligence, where you can just ask questions of all your data in natural language, structured or unstructured, and get high-quality answers, results, charts, and so forth. What differentiates Snowflake from the others: one, it turns out it's actually very difficult to do very high-quality structured data retrieval. We excel at answering relatively difficult business intelligence questions, understanding business trends, why things are going the way they are; essentially bringing insights in a very approachable way, with very high quality. Two, this is all about our customers' governed data, so being able to build these agents with high quality, very easily, with governance at the core, is something that differentiates us as well. And these products are resonating in the market: Snowflake Intelligence is the fastest-growing product Snowflake has ever had. Thousands of customers are using it. There's a lot of demand in the market for building high-quality data agents.
A
Yeah. So Snowflake Intelligence is the interface for analyzing the data or talking to the data in natural language and getting answers back. Or is it for building agents and other software to work with the data?
B
So Snowflake Intelligence is the interface for business users to interact with data and ask questions. And then we provide a series of tools for builders to build these agents and deploy them for their organizations.
A
I see.
B
Yeah.
A
And Snowflake has partnerships, as you said, with OpenAI, Anthropic, Meta, and others. How do you think about building products as opposed to partnering with these tech companies?
B
The way I think about this is, first of all, it's been incredibly exciting to see the momentum and the progress from the frontier labs. The models are continuously getting better for our customers. They want model choice, and we'd like to offer that choice. In some cases a customer has approved certain models for use in their organization, for a variety of reasons. Sometimes the customer is in one cloud and wants to use the model available in that cloud; sometimes it's for different reasons. So we provide all of those options for our customers to choose from. The way we think about product development is that we take these models and build the next layer up: we have the retrieval layer, the data layer, and then the application layer on top of it, be it analytics, integrating AI into our SQL engine, or products like Snowflake Intelligence for natural language interfaces. We do build our own focused, task-specific models as well, for use cases such as extraction, which is really important for our customers, or text-to-SQL generation, which again is really important. So certain aspects of the product, from the modeling perspective, are things that we build, but then we rely heavily on the general-purpose frontier models, and we have our partnerships with the model providers for those.
A
Yeah, I've been talking to people about post-transformer models. How wide a range of models do you offer? Are you pretty much focused on the top model makers, or do you offer other architectures?
B
Yeah, we predominantly offer the top models from the model providers. But we are looking at other architectures as well. We have partnerships, for instance, with companies like Kumo, who are experimenting with tabular foundation models. So there are other areas we're looking at as well.
A
Yeah. You work with a lot of Snowflake customers, obviously. Do you see patterns in the teams that are really getting value from AI quickly?
B
I do. In most cases, it's actually a culture shift, and usually culture shifts happen when there is a top-down mandate as well as a top-down depth of understanding of the transformative nature of AI. We see a lot more traction when that mandate and that depth of understanding come from the top down. On top of this, coding agents have transformed how products are being built. The more our customers adopt and embrace these coding agents, the faster they're able to build and deploy these AI solutions with us.
A
Yeah, that's interesting. And agents in general are making big waves, although I keep hearing from enterprises that they're skeptical because of the non-deterministic quality of agents, and they're afraid to let them make critical business decisions. Do you see that changing? How can organizations build AI systems that are trustworthy enough for that?
B
I think trust is at the core of everything we do. If we don't help our customers build AI solutions that they can trust, it will not go anywhere. So that is a very core focus of the way we operate, and trust has different layers. There's trust at the governance layer: making sure that data does not leak unintentionally to someone who should not be seeing it. There's a security element, and a guardrails element, to make sure the AI responds only in the tone, and to the questions, that you want it to respond to. Then there's trust at the quality layer: I only want high-quality answers. This is especially true for factual data questions. If I ask, "What was my revenue last month?", there's only one right answer, so getting high quality is incredibly important. The way we help our customers get there is, first, from a design-principle perspective, governance across the full stack, with a lot of control and ease of creation for these agents. Then we built our agent observability and evaluation products directly into the agent creation flows, so our customers have a lot of control over how they want the agent to respond and what good answers are. They can also see how well the agent is answering, and tune it relatively easily.
A
Yeah. With agents, is this in effect a retrieval-augmented generation issue? Where does the problem of reliability come in? Is it in finding the right data? Presumably the models then use that data to formulate an answer, and there shouldn't be any hallucinations there. But when you're taking action, that's where people get spooked.
B
Yeah, even in the retrieval step there are a ton of things that can go wrong if it's not set up properly. What we see, especially with structured data, is that understanding the semantics of the business and of the data system becomes incredibly important. Semantic models have become a core part of how our customers develop agents. We help them build these semantic models and optimize them, making sure the semantic model is highly accurate and up to date. That becomes a critical part of structured data retrieval, the text-to-SQL generation. On top of that, evaluations are quite critical: helping our customers build evaluation datasets and evaluate the responses. We also want agents to continuously learn and get feedback, so capturing feedback from usage, passing it to the developers who are building the agent, and making it very easy for those developers to quickly optimize performance is part of the development cycle. Increasingly, agent memory matters too: agents automatically learning and correcting themselves, so they keep getting better as they get used.
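One common way to run the text-to-SQL evaluations described here is execution accuracy: run both the generated SQL and a hand-written gold query, then compare result sets. A minimal sketch using SQLite as a stand-in warehouse; the case format and table are invented for illustration.

```python
import sqlite3

def eval_text_to_sql(conn, cases):
    """Score generated SQL by comparing its result set to the gold query's."""
    correct = 0
    for case in cases:
        got = conn.execute(case["generated_sql"]).fetchall()
        want = conn.execute(case["gold_sql"]).fetchall()
        correct += sorted(got) == sorted(want)  # order-insensitive match
    return correct / len(cases)

# Toy warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 100.0), ("EMEA", 50.0), ("AMER", 200.0)])

cases = [{
    "question": "What was total revenue in EMEA?",
    "gold_sql": "SELECT SUM(revenue) FROM orders WHERE region = 'EMEA'",
    "generated_sql": "SELECT SUM(revenue) FROM orders WHERE region = 'EMEA'",
}]
score = eval_text_to_sql(conn, cases)
```

Comparing executed results rather than SQL strings is the key design choice: two differently written queries that return the same answer both count as correct.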
A
Yeah. And these are agents being built on the Snowflake platform, is that right?
B
That's right.
A
So how do you manage those things? Does it depend on the underlying model, or is there architecture at this agentic layer that you use to manage them?
B
Our customers can choose, for each of the agents they're creating, which model they'd like to use, or they can just use the default model that's available to them. Each agent has its own set of orchestration capabilities: agents are given access to a series of data sources, and to a series of tools with which to either take action or do things like searches. All of those are part of the platform. That's how our customers are building these agents. One thing customers also want is interoperability. When I'm building an agent here, I want to make sure it can talk to the other solutions I have elsewhere. So we support open, interoperable protocols like MCP, and soon agent-to-agent communication, so that our customers can build agents in one place and have those agents talk to other solutions
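The per-agent setup described above, a chosen model plus an explicitly granted set of tools, can be sketched as a small registry. The class and tool names are illustrative assumptions, not Snowflake's API.

```python
class Agent:
    def __init__(self, model: str, tools: dict):
        self.model = model   # which LLM backs this agent
        self.tools = tools   # name -> callable the agent is allowed to invoke

    def call_tool(self, name: str, **kwargs):
        """Dispatch a tool call, refusing anything not explicitly granted."""
        if name not in self.tools:
            raise PermissionError(f"tool {name!r} not granted to this agent")
        return self.tools[name](**kwargs)

# An agent granted exactly one tool; a mocked search for illustration.
agent = Agent(
    model="default-model",
    tools={"search": lambda query: f"results for {query}"},
)
```

The refusal on ungranted tools is the point: the agent's capabilities are bounded by configuration, not by whatever the model decides to attempt.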
A
And talk to other agents outside the Snowflake universe. Yeah. And there's a feeling that LLMs in particular are getting commoditized. So where do you see durable competitive advantage coming from?
B
For our customers, ultimately it's all about data: their core asset is their data and their relationships with their customers. LLMs are incredibly capable tools that you can apply on top of all of those assets to unleash new capabilities and a lot more value. And this is exactly what we're seeing: significant acceleration in product development by our customers, and customers able to get a lot more insight from a lot more data, to then use for decision making or building new products.
A
Yeah. Maybe I'm a little behind the news, but I hear that a lot of the experimentation with agents through 2025 has not led to production systems. Do you see that agentic AI is really penetrating the enterprise, or is it still relatively simple tasks? There's a lot of talk about making business-critical decisions, but how much do you see agents really being deployed to do that?
B
Yeah, I am absolutely seeing that. We've crossed the threshold and now we're in the scaling phase. We have a lot of customers who are already in production and already getting a lot of value, and the value starts with productivity gains. I was just on a call with a customer who was saying, hey, we're saving 2,000 hours because we're able to automate a series of processes, in this case in their call centers and call analytics. It starts with productivity gains but then goes into more and more automation and more capabilities being developed. Overall, I think we're at a point where LLMs have gotten very capable: they can execute longer and longer running tasks, and they can be a lot more accurate, so many tasks are now possible to automate. But we're also seeing, especially from a data perspective, that when you just democratize access to the data (not even automation, just access), many customers are able to make a lot more data-informed decisions a lot faster. And that results in significant changes in the way our customers operate.
A
Yeah. And who is making those data-informed decisions? Are you talking about all levels of the organization, even the C-suite?
B
Certainly. Many customers are building products for their whole organization, starting with the sales force: creating a series of agents for the sales organization, the marketing organization, the finance organization, and the C-suite as well. And at Snowflake we eat our own dog food.
A
Right.
B
So we have "Snow on Snow" projects, where we're using our own products to develop and deploy AI across the organization. From our CEO on down, everyone in the organization is using such agents across the board.
A
Do you think this is going to not only increase productivity but accelerate business generally? I tell my kids I remember typing on carbon paper if you needed more than one copy of a letter, and looking back, it's just amazing that any business got done, given the speed that things get done now. With agents, do you see the pace of business increasing?
B
I live it day to day and absolutely see it everywhere. The pace of innovation is increasing, the pace of product development is increasing, the pace of work getting done is increasing, because of AI. And we see it across the board. A simple example is very quickly being able to write custom emails for thousands of people. But once you start thinking about automating processes that used to require a lot of manual approvals and manual steps, you're shrinking something that used to take 30 days down to hours. That makes a big difference from a productivity perspective.
A
Yeah. So as agents take on more decision-making responsibility (we talked about reliability), how should organizations think about accountability and risk? Does it lie with whoever is managing the department where the agent is working? With the IT department? With the model providers, if something goes wrong?
B
Ultimately, it lies with the owners of these agents. Agents are becoming more and more capable, and there is a long stack of technology and people behind all of this. The way I see agents deployed, there's usually an owner at the organization who says, here are the use cases we'd like to develop, these are really important. Then you go down that list, and there's a rollout process: you develop the agents, there's a POC, a small pilot deployment, and then broad deployment. All of those are built so that you're improving the quality of the agent, putting the guardrails in place, and measuring the productivity against the ROI of the agent. It's really no different from software development, but it's an incredibly powerful set of tools and capabilities.
A
Yeah. And what does accountability mean when agents are making decisions without human oversight? There's human-in-the-loop, where the agent tees up a decision and the human has to click proceed. But then there are other decisions, increasingly more of them as reliability issues go away, that agents are making unseen by humans, deep in the computer systems of an organization. How do you think about accountability there?
B
Right now, especially where an automation or workflow is being set up, the workflows are well established: there's a series of steps, and anytime there is critical decision making that carries risk, there is usually a human in the loop who can take a look and control it. Over time, as trust in these systems increases, more and more of these human-in-the-loop checks and guardrails will get automated. But ultimately it gets automated because the AI is doing exactly what it's told: the instruction-following capabilities of these models are increasing day by day. Accountability lies in capturing these business processes and documenting them, and the AI's job is to follow exactly what it's asked to do, to the letter.
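The human-in-the-loop gating described here can be sketched as a risk check before each workflow step: low-risk steps run automatically, high-risk steps wait on an approval callback. The threshold, the step shape, and the step names are illustrative assumptions.

```python
RISK_THRESHOLD = 0.7  # assumed cutoff above which a human must approve

def run_step(step: dict, approve) -> str:
    """Execute low-risk steps automatically; gate high-risk ones on approval."""
    if step["risk"] >= RISK_THRESHOLD:
        if not approve(step):
            return "blocked"
    return "executed"

# Three illustrative outcomes: auto-run, approved, and denied.
auto   = run_step({"name": "draft email",   "risk": 0.10}, approve=lambda s: False)
gated  = run_step({"name": "issue refund",  "risk": 0.90}, approve=lambda s: True)
denied = run_step({"name": "wire transfer", "risk": 0.95}, approve=lambda s: False)
```

As trust grows, raising the threshold automates more steps without changing the surrounding workflow, which matches the gradual handoff described above.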
A
Yeah. So we've reached the point where bigger models don't necessarily mean better reasoning. Is retrieval becoming more important than scale?
B
I'd say the thing that is becoming more and more important is reasoning capability. Ultimately, retrieval is a very important tool given to the agent: the agent uses that tool to get the relevant data, plans what to do, uses its tools, and then reflects on whether the result is right or wrong and what the next steps are. That loop is very important, and it's the loop that unlocks much more complex and powerful capabilities. That's what we're seeing: increasingly, these agents are able to do longer and longer running tasks without having to be stopped and guided.
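The retrieve-plan-reflect loop described above can be sketched as a few lines of control flow. All the callbacks here are stubs invented to keep the sketch self-contained; a real agent would back them with a model and a retrieval system.

```python
def agent_loop(question: str, retrieve, is_answered, answer, max_steps: int = 5):
    """Gather context step by step, reflecting after each step on whether to stop."""
    context: list[str] = []
    for _ in range(max_steps):
        context.append(retrieve(question, context))  # act: gather more data
        if is_answered(question, context):           # reflect: good enough yet?
            break
    return answer(question, context)

# Stubbed run: facts are served in order; reflection checks for relevance.
facts = ["revenue grew 12% in Q3", "churn fell to 2%"]
result = agent_loop(
    "Why did revenue grow?",
    retrieve=lambda q, ctx: facts[len(ctx)],
    is_answered=lambda q, ctx: "revenue" in ctx[-1],
    answer=lambda q, ctx: ctx[-1],
)
```

The `max_steps` bound is the simplest guard against the loop running unsupervised forever; longer-horizon agents mostly relax that bound as reflection gets more reliable.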
A
Yeah. It's difficult for me, because I talk to researchers and they have these grand visions, and then I talk to companies and the vision is sometimes not as grand. Microsoft talks about societies of agents, and other people talk about tens of thousands of agents operating as a kind of unseen back-office fabric in an organization. What's the largest deployment of agents that you've seen? In the tens of thousands at an organization?
B
It's early. It's also really dependent on the definition of what an agent is; I sometimes hear these numbers and it's difficult to grasp exactly what's being said. I absolutely believe there are going to be many agents doing many different tasks, all coordinating with one another. That is the vision, and that vision is not far away. But it is not today. Today we're talking about, at most, hundreds of agents deployed in an organization, and they're deployed for specific tasks. In most of these cases there is a lot of human-in-the-loop interaction, and that is also very powerful. But we're not at a point where there are millions of autonomous agents operating in the market.
A
Do you think that'll happen? I mean, is that coming?
B
I do think we are moving in that direction, and moving quite quickly. But there's still a ton of work to be done, from a technical perspective and an organizational perspective, to really realize that potential.
A
Yeah. And people also talk, as we have here, about critical business decisions. Without giving away anything, what does that mean? What's the most critical business decision you've seen an organization use an agent to make, whether or not it's with a human in the loop?
B
I'll give a very simple example from my own usage. As a product person, I look at dashboards all the time: how things are getting used, what the trends and gaps are, and so forth. And anytime I look at a dashboard, I ask, why is this the way it is? Normally I would go to a data scientist with that question, they'd do some analysis for a couple of days, and come back with an answer. Now I can just ask that question in natural language and get an answer in seconds. That allows me to be much more on top of what is going on and to respond much more quickly. It's not necessarily groundbreaking, but having that access actually changes how I operate, and it changes the culture in which we operate. And if you expand that: we get a lot of feedback from our customers, for instance. Normally someone would have to go through and read and scan all of that feedback to make sense of it. Now, in real time, I can quickly summarize all of it. So I'm a lot more informed, as someone who has access to a lot more data but can also make sense of all of that data. That just changes organizations, in my mind.
A
Yeah. And you mentioned UI and dashboards. What is the UI of the agentic age? Is it voice? What is it going to look like, and why does that transition matter for enterprise users?
B
Today, popularized by ChatGPT, the interaction is a chat interaction, and I think that is fast evolving. Now AI can write user experiences on the fly. Humans of course read, and we get a lot of information that way, but we're very visual.
A
Right.
B
We can glance at a series of images or charts. It's a lot easier to quickly fill out a form than to have a long back-and-forth with an agent. So I expect the user interaction paradigm to evolve toward custom user experiences that are put together on the fly for the task.
A
Yeah. And voice: I ask because I use it, and it drives my wife crazy ("Turn that off!"). I just talk to whatever model I'm using and it talks back, and that's often faster than typing. There are certain times when you want something written, but other times you just want a quick answer, right?
B
Yeah, I am a big believer in voice. I worked on the Google Assistant for a long time, and what's interesting is that with the Google Assistant, before LLMs, there was a very limited set of things it could do. Whenever you bring natural interfaces, the expectation is human-level intelligence: if I'm talking, I want the response to be very intelligent. When that expectation isn't met, there's a lot of disappointment. I think we're now at a point where, when I have a conversation with one of these models, the responses are quite rich, and that invites a lot more natural interaction. I'm very pro voice interaction with these devices and models.
A
Yeah. And you mentioned that these agents are being used at all levels of the enterprise, including the C-suite. I was talking to another company developing a sort of CEO copilot, which the CEO in his office would just talk to about the business issues and problems he's facing, using the model as a collaborator to work through problems in a way he might not feel comfortable doing with a subordinate, because he doesn't want to show his ignorance or his uncertainty. Do you see it becoming a tool in the CEO's private office?
B
Certainly, and not just in the CEO's private office: everywhere. Being able to use these capabilities to brainstorm, to structure thoughts, to go down certain rabbit holes and come back, and then formulate your own thoughts and opinions, using it as a copilot, is a very powerful capability. And that capability keeps increasing, because we can now talk to an agent that has not only world knowledge but also knowledge of my company's data and my company's business. That becomes quite powerful.
A
Yeah. And where do you see this going in the next few years, for Snowflake and for the business world at large? At some conferences I hear about the headless organization, where agents do all these tasks and there are no real business units or managers, maybe just one CEO with a copilot directing strategy. Where do you see this going? You must think about that, having seen the advances and the speed.
B
I'm basically seeing that everyone is getting superpowers. I don't think that means organizations will end up with one person and all agents. I think organizations will be a lot more capable and a lot more powerful and will do a lot more things. That's how I think about it.
A
Yeah. And enterprises will be able to work together more seamlessly, it seems.
B
Yeah, that's also a very interesting trend. Ultimately the playing field is being leveled, because it's all natural human language being interacted with. In the past, you'd have to have software translate some business logic to some interface. Now there's my data and there's intelligence, and this intelligence can interact with other data solutions and other intelligences. So the playing field is being leveled, across the board: the function of a product manager is evolving, a UI designer's is evolving, how anyone does their job is evolving, in the sense that it is all becoming more natural, using natural language. Clearly, domain experience and expertise remain very important, but what I'll call the middle layer, the translator between the business expertise and the solution, is disappearing. That creates a lot more opportunities for Snowflake.
A
What are you working on right now? You've got Snowflake Intelligence and the agent-building platform, but where are you going? Is the structure of data changing, or the way data is accessed? Because certainly data is in databases, but if data is flowing through an organization, are you able to capture it live, for example?
B
Yeah. We like calling out that there is no AI strategy without a data strategy, and that is increasingly true. Ultimately AI feeds on data, and we're increasingly seeing a focus on getting the data AI-ready. That means preparing the data so that you capture the semantics, focusing on the series of use cases you want AI to operate on, and building search indices where it makes sense. A lot of unstructured data is getting unlocked now, so there's a lot of processing of that unstructured data, turning some of it into structure so you can make more sense of it, but also preparing all of this so that there is one source of truth for AI to talk to, in a governed and secure environment. A lot of what happens before our customers build AI is that data preparation step, where they bring data from multiple places, break down silos, govern that data, prepare it, give it semantics, build search indices, and so forth. Then they build agents with the business processes captured in the agent. The data layer is changing in other ways too. The underlying data is expanding: AI can now make sense of many more types of data, so a lot of data that used to be in the dark is coming into more and more utility. And the number of people interacting with data is expanding substantially. It used to be only the data professionals who would prepare the data and run analysis; now more and more of the organization has direct access to querying, understanding, and using that data. Those are some of the changes happening at the data layer.
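Building a search index over unstructured documents, one of the AI-readiness steps mentioned above, can be sketched as a tiny inverted index: map each token to the documents containing it, then answer queries by intersecting those sets. The document contents here are invented examples.

```python
def build_index(docs: dict) -> dict:
    """Map each token to the set of document ids containing it."""
    index: dict = {}
    for doc_id, text in docs.items():
        for token in set(text.lower().split()):
            index.setdefault(token, set()).add(doc_id)
    return index

def search(index: dict, query: str) -> set:
    """Return ids of documents containing every query token."""
    hits = [index.get(tok, set()) for tok in query.lower().split()]
    return set.intersection(*hits) if hits else set()

# Toy corpus of call transcripts.
docs = {
    "call_1": "customer asked about a refund for a broken unit",
    "call_2": "customer praised the support team",
}
index = build_index(docs)
```

Production systems layer semantic (embedding-based) retrieval on top of this kind of keyword index, but the AI-readiness idea is the same: do the indexing work up front so agents can retrieve governed data at query time.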
A
And does Snowflake do the data prep for organizations, or is that expected?
B
Yeah, we have a lot of tools to help customers process that data, which can be streaming, as you called out. There's a lot of tooling for information extraction from unstructured documents. There are capabilities like search to search across large amounts of documents as they come in. So all of that set of capabilities is part of the platform.
A
And what about context outside the organization, maybe in the domain or even the regulatory environment, where things are changing? Do you see that feeding in? Does Snowflake pull in data from other sources to give context to enterprise data?
B
Yeah, we have two different ways to do this. One is we have a product called OpenFlow, which our customers use to connect any data source and then move that data to Snowflake or any other place they'd like to move it to. So that set of capabilities and connectors exists as part of the platform. We also support, as I said, open protocols like MCP. So if our customers want to use their data elsewhere and it has an MCP interface, agents can easily go and use that data, pull information from it, and make sense of it.
A
You kind of pioneered the idea of a data lake or lakehouse. Is it a data lakehouse? Is that the Snowflake term?
B
I mean, it's now an industry term. We do offer capabilities in the lakehouse.
A
Okay, maybe I thought that was a Snowflake term. I'll cut that out. Can you just define for listeners what the AI data cloud is, in practical terms, without the marketing language around it?
B
Certainly. For Snowflake, what we pioneered is the ability to not only bring your data into a governed, secure place, but also share it very easily without moving your data. So, for instance, we have a lot of data providers that bring their data onto the Snowflake platform through the marketplace and give access to that data to other companies. You can imagine in the financial services industry, for instance, there are a lot of data providers and a lot of data consumers. Just like how a Google Doc can have users that you bring to a single document, you could also bring consumers to massive amounts of data. And that's what Snowflake pioneered. So that's the notion of a data cloud, because you have your data and then you have multiple different companies connecting to that data from different places. And we recently changed the data cloud term to the AI data cloud, because ultimately AI and data go hand in hand, as we talked about. Being able to bring AI to run next to the data, so that you can make a lot more sense of that data, get insights from it, and democratize it: that's the notion of an AI data cloud.
A
Yeah. And are there workflows that should be outside of Snowflake? I mean, how does an enterprise decide what is within Snowflake and what is in object storage or something like that?
B
Certainly. So the way businesses think about this is: if you need to utilize your data across different data sources, analyze it and process it, then you bring that data onto Snowflake, because that's where all these data silos are broken down, so that you can join data from one source with another super easily. If, on the other hand, the data is relatively separate, you can just use that data separately. Essentially, if an agent doesn't need large-scale connections from one data source to another, then you don't need to do that. What we see with our customers, for instance, is that there are certain tools they're using, and all of that data lives with that tool. So rather than bringing that data onto the platform, they're just building an agent there and having that agent talk to the agent that they build on Snowflake. Those types of patterns are also emerging.
A
If people want to learn more about Snowflake, where should they go?
B
They should just go to snowflake.com. We have a lot of information there. If they'd like to learn more, there's an AI section on that website that talks about the set of products that we have.
Date: March 19, 2026
Host: Craig S. Smith
Guest: Baris Gultekin (Snowflake, Head of Product for AI)
This episode explores the evolution of enterprise AI with Baris Gultekin, head of AI product at Snowflake. The discussion dives into how AI agents are transforming business operations, specifically by integrating advanced AI models directly with company data inside secure, governed cloud environments. The conversation addresses the challenges and opportunities of deploying agentic AI at scale, data governance, trust, productivity gains, and the restructuring of enterprise workflows through AI-driven interfaces.
"The request was to run AI next to data versus trying to move data to where the AI is, which creates... governance and security challenges."
— Baris Gultekin [02:59]
"We excel at being able to answer relatively difficult... business intelligence questions, being able to understand business trends, why things are the way they are."
— Baris Gultekin [07:30]
"Ultimately it is, it lies with the owners of these agents... there's a rollout process... putting the guardrails in place and you're measuring the productivity against the ROI on this agent."
— Baris Gultekin [22:20]
"I'm basically seeing that everyone is getting superpowers... organizations will be a lot more capable and... do a lot more things."
— Baris Gultekin [32:56]
"Ultimately AI feeds on data and we're now increasingly seeing... a focus on getting the data AI ready."
— Baris Gultekin [35:03]
On Security and Governance:
"Only Snowflake has access to the data... no data gets stored or can be used for training purposes [by the model providers]."
— Baris Gultekin [05:31]
On Democratizing Data Access:
"I can just ask that question in natural language and get an answer in seconds… and it changes the culture in which we operate."
— Baris Gultekin [27:43]
On Cultural Change:
"Usually culture shifts happen when there is a top down mandate as well as top down depth of understanding of transformative nature of AI."
— Baris Gultekin [11:36]
On the Future of Work:
"The playing field is being leveled because it’s all human natural language that’s being interacted with."
— Baris Gultekin [33:23]
On the Pace of Change:
"You’re now shrinking something that used to take, you know, 30 days to hours. It makes a big difference from a productivity perspective."
— Baris Gultekin [21:14]
On Agent Reliability:
"The instruction-following capabilities of these models is increasing day to day. Accountability lies in capturing these business processes and then, and then documenting them."
— Baris Gultekin [23:47]
This episode provides a grounded, inside look at the practicalities of AI in the enterprise as of 2026—focusing on Snowflake’s approach to secure, agentic AI, the technical and cultural transformations underway, and the next stage of hybrid human-AI collaboration. Gultekin’s perspective emphasizes that AI brings superpowers to organizations—not by replacing humans, but by amplifying insight, innovation, and speed across the business.
For more on Snowflake's AI initiatives, visit their AI section at snowflake.com.