
In this episode of The Digital Executive, host Brian Thomas speaks with Avrom Gilbert, a seasoned technology leader and AI pioneer. Gilbert, the CEO of SparkBeyond, shares his insights on the evolution of AI in business decision-making, from its early days to the era of generative AI.
A
Welcome to Coruzant Technologies, home of the Digital Executive Podcast. Welcome to the Digital Executive. Today's guest is Avrom Gilbert. Avrom Gilbert is the CEO of SparkBeyond, pioneer of the AI-powered always optimized platform that allows generative AI to drive constant improvement in business operations. The always optimized platform extends generative AI's world knowledge to a detailed understanding of the key drivers of operational performance, as reflected in complex patterns across structured operational data sources such as CRM, ERP, and IoT data. Avrom is a highly experienced high-tech executive with more than 20 years of experience managing, funding, and driving growth at rapidly growing technology companies. Avrom's former roles include Chief Operating Officer of SimilarWeb and Seeking Alpha, as well as board member of a number of high-growth companies. In addition, Avrom has spent a number of years in investment roles at Jerusalem Global Ventures and Ion Asset Management. Well, good afternoon, Avrom. Welcome to the show.
B
Hi. Well, thanks very much for having me here.
A
Absolutely. I appreciate you making the time, hailing out of the great country of Israel right now. So I appreciate you making the second shift tonight, because it's very late there for you. So again, thank you. And Avrom, we're going to jump right into your first question. You've had extensive experience in high-growth tech companies like SimilarWeb and Seeking Alpha. How has the role of AI evolved in business decision-making over the years, and what do you see as the next big leap in AI-driven optimization?
B
All right, what a great question. Well, there are really two periods of time when looking at the role of AI in business decision-making: the pre-gen AI and the post-gen AI. When I look at the pre-gen AI period, when I was at SimilarWeb and Seeking Alpha, we didn't really have great AI for decision-making; we didn't really know of any good tools. But even though I didn't know it at the time, there was already AI being sold to large enterprises by cutting-edge companies like my current company, SparkBeyond, which was a pioneer in using AI for decision-making. They had already assisted hundreds of large enterprises like Banco Santander, Equinor, and HDFC, as well as partners like McKinsey, to drive AI decision-making. And the specialty, quite different from gen AI, was to try and solve the business problems that were driving bad performance on key company KPIs. This was done by applying AI to structured data like CRM, ERP, and IoT. Then if you look at the post-gen AI period, well, that's really been the last two to three years, starting with ChatGPT. Now there are lots of new ways for AI to assist in decision-making. The first is rapid decision support. This is the AI that all of us have come to know: you ask questions about the world and you get amazing answers in seconds, instead of doing days and weeks of research. It's no exaggeration to say that many of the times I spent days, or my team spent even weeks, doing research at SimilarWeb or Seeking Alpha, that work can be done in minutes now. So it's super important, this sort of decision support, getting you facts about the world at scale. And the second evolution is actually underway right now, which is when people are really ready to use agents and trust them to make decisions instead of people. This is just starting, but pretty soon I think we'll all be making decisions, or allowing agents to make decisions, both in our personal lives and at work.
So for example, trivial things: I might ask an agent to decide what I'm going to cook for dinner tonight and then order all the ingredients. Or for work things, I might allow them to handle customer service problems. And I think we're going to see a lot more of that with things like Agentforce from Salesforce. The second part of your question: what's the next big leap in AI-driven optimization? Well, think about the AI that we all instinctively know is going to happen but probably hasn't happened yet. Have you seen the Marvel movie Iron Man? I don't know if you've seen that. Tony Stark has an AI called Jarvis which deeply understands everything, including his business and operations. So we can all imagine that maybe we'll be able to have an AI which we can ask questions like: how do I reduce the costs of my business? Or: how do I send messages to the customers most at risk of churn and increase the chances of them remaining my customers? That's an AI I can trust with achieving my business goals, or to make decisions for me. So the big question is, why can't we have this now, and why is that the next big leap? The answer is that gen AI has an amazing understanding of the whole world and very strong reasoning skills, but it doesn't actually know very much about the details of what's driving my business, because it deals with language and has read the whole Internet, but hasn't really looked into my data. The real secrets to what's going on in my business are in my CRM database, my ERP database, my customer transactions, my IoT data. That's what gen AI needs to understand. And we all know that, because every business has a data lake, or it collects its data together, has a BI team and data scientists.
So right now, if you were to ask ChatGPT or Perplexity why my costs are going up for a specific product and what I should do to fix it, I'd probably get generic advice from the Internet. I certainly wouldn't hire an employee to give me generic advice, and I'd say I don't really want AI to do that either. So the next big leap is when you give LLMs the knowledge that they need, which is really up-to-date information that makes them experts on your business. And that's actually why we've launched our always optimized platform, because our whole goal right now is to educate LLMs as to what the deep insights are about your business. So we think this is the next big leap. And it's really exciting, because then everyone can have this Jarvis-type AI which people see in Marvel movies and think is science fiction, but actually you can have it.
A
That's awesome. Thank you again, Avrom, for really impacting gen AI over the tenure of your career. Obviously there's that pre-gen AI period which you talked about, and then the post-gen, where it's obviously a lot faster to get answers. But what I really liked is the analogy you used with Iron Man and Jarvis, right? Agents now can make decisions, and we talked about this on the podcast recently: how you can have your own personal assistant now, in your pocket, literally. And you can level the playing field with all these big corporations that have deep pockets and are investing a lot in this technology. So thank you. Avrom, the next question I have for you: what is the next big leap in AI-driven optimization?
B
Well, I think it's going to be having those AIs that actually deeply understand your business. Let's say, for example, I've got a factory, right? And my machines might go down, and I don't want those machines to go down. So I actually want an AI that deeply understands all the details of my machines, and when it sees the temperature or the pressure going up to a certain level, knows historically that that's a potential predictor of failure, sends a message to an engineer; that engineer is then also sent the details of how to do the actual maintenance, and then goes off and does it. And that machine never fails. That's a big advance, right? That LLM having not just world knowledge, but actually the capabilities and understanding of what drives the real world, and specifically my business, that's going to be a big deal. The same thing might happen when I want to try and increase revenue. If that AI understands which of my customers typically buy which products, and knows that, you know, middle-aged men over the age of 30 tend to buy organic goods on a Thursday and it's a good time to give promotions, that's really useful information. But if you multiply that by a thousand or ten thousand different micro-segments, all of which the AI understands, and for all of which it's sending millions of messages and promotions automatically, that's a game changer.
A
It absolutely is. And I love how you talked a little bit about how you can have your AI deeply understand your business or your machines, and keep ahead of business changes or maintenance needs. But I really love the fact that these LLMs are now going to be up to the minute, if not the second, and be able to help us be more dynamic in our decision-making when we need to be. So appreciate that. Avrom, why is agentic AI gaining momentum as an industry classification?
B
It's a good question. When gen AI first started as a tool, we'd ask it questions, and although the first versions showed promise, they also made a lot of mistakes and had hallucinations, and there was a bit of skepticism. But the real momentum, which is really what you're asking about, is happening because the quality of the LLMs is advancing at a shocking speed. It's now got to the point where they have incredible knowledge, intuition, and reasoning. So what we've now got is the ubiquitous availability of LLMs to the entire world, which means there's a very low bar for creating agents. If you have an idea for an agent, some basic technical skills, and some time, you can create an agent fairly easily. And pretty soon a simple natural language instruction to an LLM will likely allow anyone to create an agent which is then usable. This reminds me a lot of the rapid emergence of the cell phone app ecosystem after Apple launched the App Store. I don't know if you remember what it was like, what kind of applications were on our pre-smartphone feature phones. We went from a situation where, you know, I used to work with startups that tried to convince Nokia or telecom carriers to add their app or game to a phone, and it was almost impossible. Then after the launch of the App Store it became really easy to build phone apps which used all the special features of a smartphone: compute, memory, portability, location awareness, cameras, web browsing. At launch, the App Store had 500 apps. Within one year, there were more than 80,000 apps. And after five years, more than 1 million. Let me ask you, how many different apps on your cell phone do you think you use today? We don't even think about it. It's completely second nature.
And all of this despite the fact that, you know, creating a mobile app actually took real investment. At Seeking Alpha, in the early days of these apps, we created both an iPhone app and an Android app. It was a lot of effort, but we needed to do it, and it was really important for our customers. So now let's look at what LLMs can do. LLMs are really smart. They have reasoning, they can make decisions. And so what we should expect is a multitude of AI agents created for the things we want to do. If I want to book theater tickets, there'll be an agent for that. If I want to pay my bills, do my taxes, answer questions from my customers, the same. The barriers to building agents are getting lower and lower, and so we should see this massive momentum. And I think it's not an exaggeration to say that pretty soon all of us will be using agents built to solve our personal needs, just like we use phone apps.
A
That's awesome. And I love the analogy, you know, of the apps versus the agents. Now we are leapfrogging, and that real momentum is happening right now with LLMs, as you've mentioned. I can't believe that just in the last 18 months or two years I've seen some incredible things as far as leapfrogging and the advanced reasoning available today within the LLM spectrum. Appreciate that. And Avrom, last question of the day: can you give a real-world example of always optimized AI at work?
B
Sure. So we built this always optimized capability, which is the ability to use AI constantly in the background to analyze your data, understand what's driving problems in your work environment, and then suggest actions, which can then be taken automatically by agents if they are available. What I'd like to do is actually start by looking at what it looks like if you don't have always optimized. So let's use a typical example of something I mentioned earlier: let's say a business like a telco or a bank is seeing increased customer churn. What do they do? Well, probably a dashboard shows churn going up. Then, when the data science team or analyst team has time, they'll do some analysis to try and discover some insights. When they find these insights into why their churn might be increasing, they'll send them to the marketing team, which will then review the insights and propose actions. And then probably different people on the marketing team will try and send emails to prevent churn, or put some promotions out there where they try and win back the customers. This process is lengthy, infrequent, and expensive. It reminds me of how Winston Churchill used to describe democracy: the worst form of government, except for all the other forms that have been tried. So this is a terrible way to optimize your business, but without AI, it's actually the best and most practical way to do it. Now let's look at the always optimized version of this. Our AI would divide your customer base into micro-segments of users with similar characteristics like location, age, and behavior, similar to what a data science team or analyst might do. But this would be done systematically, in real time, in the background. The insights are then automatically derived as to which customer group is leaving, along with suggestions as to why that might be happening.
You then take these insights and give them to an LLM, which very rapidly becomes an expert on what's driving customers to leave the business. It can then propose custom emails to be sent to each one of your customers in order to reduce the churn. In the future, with agents, once they can be trusted, these emails could actually be sent directly by the AI; right now they would generally need to be approved by a marketing team. And this entire process can happen as frequently as we see the data changing, without delays, no waiting for people to be available or to have the time to do the work. So we see this as an always-optimized-by-AI cycle. The same thing can happen with fraud detection on transactions in banks. The same thing could happen with cost analysis: as costs increase, you can immediately be doing the analysis and proposing actions to be taken. You might want to do energy optimization; there are many, many different applications. But always optimized means that AI does the hard work. And really the key to all of this is that we figured out the way to educate an LLM on how to become an expert in the relevant parts of your business. Once you've done that, it's much easier, because LLMs are awesome: they've got the reasoning capabilities and the understanding of the world, so they can propose actions, and then actions can be taken. So we think that this whole always optimized capability is going to be perfectly natural for organizations in the coming years, as we grow to trust LLMs and generative AI much more.
A
Thank you. I appreciate you unpacking that; it's important for our audience as well. You know, using always optimized in your everyday work, letting AI do the hard work, versus the old way where you had multiple siloed teams that needed multiple approvals and there were always these dependencies. I like how always optimized can go ahead and do some of those tasks almost simultaneously: as you start to see maybe customer churn or disruption or something going wrong, it can actually do that work in real time, which I think is really amazing. So Avrom, it was such a pleasure having you on today, and I look forward to speaking with you real soon.
B
Thanks so much. Really a pleasure to be here.
A
Bye for now.
Guest: Avrom Gilbert (CEO, SparkBeyond)
Host: The Digital Executive (Coruzant Technologies)
Episode: 1033 – March 25, 2025
Duration: ~14 minutes
This episode features Avrom Gilbert, CEO of SparkBeyond, who discusses how AI, particularly generative and agentic AI, is revolutionizing business optimization and decision-making. Drawing on Avrom's extensive executive experience in high-growth tech firms, the conversation explores how AI evolved from a supporting tool to a potential autonomous decision-maker, what “always optimized” businesses look like with AI, and why agentic AI is poised to transform the industry.
[01:39–05:29]
Two Eras: Avrom distinguishes “pre-gen AI” and “post-gen AI” periods in business.
Decision Support Shifts: Tasks that once took days or weeks can now be handled in minutes, leveling the playing field for smaller businesses.
Next Leap: The future lies in AI agents with a “Jarvis-like” (from Iron Man) understanding of not just global knowledge, but the unique drivers of a specific business. The challenge is that LLMs excel in general reasoning but lack company-specific operational data.
SparkBeyond’s Role: Their “Always Optimized” platform aims to bridge this gap—educating AI on specific business intricacies for tailored insight and decision-making.
“If you were to ask ChatGPT or Perplexity why my costs are going up for a specific product and what should I do to fix it, … I probably get generic advice from the Internet and I certainly wouldn't hire an employee to give me generic advice. … The next big leap is when you give LLMs that knowledge that they need, which is really up-to-date information that makes them experts on your business.”
— Avrom Gilbert [04:40–05:20]
[06:07–07:17]
“That LLM having not just world knowledge, but actually the capabilities and understanding of what drives the real world and specifically my business, that's going to be a big deal.”
— Avrom Gilbert [06:39–06:51]
[07:44–10:02]
“What we've now got is the ubiquitous availability of LLMs to the entire world. Which means there's a very low bar for creating agents. … Pretty soon, a simple natural language instruction to an LLM will likely allow anyone to create an agent which is then usable.”
— Avrom Gilbert [08:16–08:44]
[10:30–13:18]
Traditional Approach vs. Always Optimized:
Scalability: This approach is not limited to churn reduction, but extends to fraud detection, cost analysis, energy optimization, and more—essentially, any domain with actionable operational data.
The Key: Turning LLMs into true business experts by feeding them relevant, up-to-date business data, making AI’s reasoning both precise and contextually relevant.
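The always optimized churn loop described in the episode (micro-segment the customer base, watch the data continuously, surface segments with abnormal churn, then hand those insights to an LLM to draft retention actions) can be sketched in a few lines. This is a minimal illustration, not SparkBeyond's actual platform; the segment attributes, the baseline threshold, and the insight format are all hypothetical.

```python
from collections import defaultdict

def micro_segments(customers):
    """Group customers into micro-segments by shared attributes.

    The attributes used here (region, age band) are illustrative stand-ins
    for the CRM/ERP/IoT features a real system would segment on.
    """
    segments = defaultdict(list)
    for c in customers:
        segments[(c["region"], c["age_band"])].append(c)
    return segments

def churn_insights(customers, baseline=0.05):
    """Flag segments whose churn rate exceeds a baseline.

    Each flagged segment becomes a plain-language insight string that
    could be passed to an LLM as context for drafting retention emails.
    """
    insights = []
    for key, members in micro_segments(customers).items():
        churned = sum(1 for c in members if c["churned"])
        rate = churned / len(members)
        if rate > baseline:
            insights.append(
                f"Segment {key}: churn {rate:.0%} vs baseline {baseline:.0%} "
                f"({churned}/{len(members)} customers)"
            )
    return insights

# Toy data: one segment churning heavily, one healthy.
customers = [
    {"region": "north", "age_band": "30-40", "churned": True},
    {"region": "north", "age_band": "30-40", "churned": True},
    {"region": "north", "age_band": "30-40", "churned": False},
    {"region": "south", "age_band": "20-30", "churned": False},
    {"region": "south", "age_band": "20-30", "churned": False},
]
for line in churn_insights(customers):
    print(line)
```

In the "always optimized" framing, this analysis would rerun whenever the data changes, and the resulting insight strings would be what "educates" the LLM about the specific business before it proposes actions.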
“Always optimized means that AI does the hard work. And really the key to all of this is that we figured out the way to educate an LLM how to become an expert in the relevant parts of your business. … Once you've done that, it's much easier because LLMs are awesome.”
— Avrom Gilbert [12:35–12:58]
On the “Jarvis” Analogy:
“Have you seen the Marvel movies Iron Man? … Tony Stark has an AI called Jarvis which deeply understands everything including his business and operations. … We can all imagine maybe we'll be able to have an AI which ... achieves my business goals or make decisions for me.”
— Avrom Gilbert [03:28–04:02]
On Barriers to Entry for AI Agents:
“If you have an idea for an agent, some basic technical skills and some time, you can create an agent fairly easily. And pretty soon a natural language instruction to an LLM will ... allow anyone to create an agent which is then usable.”
— Avrom Gilbert [08:19–08:44]
On Always Optimized AI:
“This entire process can happen as frequently as we see the data changing—without delays, no waiting for people to be available or to have the time to do the work. … This whole always optimized capability is going to be perfectly natural for organizations in the coming years as we grow to trust LLMs and generative AI much more.”
— Avrom Gilbert [12:58–13:18]
| Timestamp | Topic |
|-----------|-------|
| 01:39 | What is the pre-gen AI vs. post-gen AI era in business decision-making |
| 04:00 | The “Jarvis” analogy and the need for business-specific AI |
| 05:29 | The impact of gen AI and the analogy of personal AI assistants |
| 06:07 | What’s next: LLMs deeply understanding your business |
| 07:44 | Why agentic AI is rising: historical context and technological progress |
| 08:44 | Analogy: agent ecosystem compared with the rise of mobile app stores |
| 10:30 | Real-world scenario: always optimized AI for churn reduction and other business needs |
| 12:35 | Educating LLMs: creating business experts from language models |
Avrom Gilbert paints a compelling vision of a future where AI is no longer a passive analytical assistant but an active, always-on agent deeply attuned to every operational nuance of a business. As agentic AI continues to advance and “always optimized” platforms become mainstream, organizations are poised to shift from reactive, labor-intensive decision-making to real-time, proactive optimization—enabled by AI that truly understands not just what your business needs, but why, and how to deliver it at scale.