
A
I just want to know absolutely everything. So I've got a few questions for you, if that's okay.
B
That's great.
A
Talk to me about you seeing all of that power without principles becoming a problem.
B
In the world of artificial intelligence we're living in today, it is. We have moved away from optimizing a few things to optimizing everything, without any direction.
A
Do you genuinely believe that something like Angelic could be part of saving the world?
B
This is a trust problem. Every company will need it, so this will just take off. It truly transformed their life.
A
I have never been so intrigued. Hi, my name is Abigail Horn, and I am taking over Shaker's podcast today, because we met very recently as part of the Global Syndicate. You are one of our members. And over the last couple of days, I have had the absolute pleasure of listening to Shaker's presentation all about Angelic AI. And I've got to say, I have never been so intrigued. So I asked if I could take over the podcast and interview Shaker, so we can talk about this topic more. I just want to know absolutely everything. So I've got a few questions for you, if that's okay.
B
That's great. I look forward to it, Abigail.
A
Right, let me look at my first one. Most AI companies are racing to be the most powerful, but you are doing something completely different. So talk to me about why you see all of that power without principles becoming a problem.
B
Well, for 23 years of my life I've been optimizing companies' resources, whether it was Coca-Cola, PepsiCo, or Walmart. And one of the things we always kept in mind when we were changing operations or transforming the way the business worked was to be very mindful about what to optimize and what not to optimize. In the world of artificial intelligence we are living in today, we have moved away from optimizing a few things to optimizing everything, without any direction. And when you do that haphazardly, you begin to optimize away the wrong things, which is human dignity. That was deeply troubling for me. And I'm seeing this avalanche coming at us: in a world where machines and humans are going to coexist and make decisions together, how do you trust the system? Who's right and who's wrong when you cannot explain it? So, one, we are optimizing the wrong thing. Two, how do I trust the system's output, so that it is working in the right context? And third, every company I worked for was a great brand; they were selling brand and loyalty. In a world of machines, how do you retain that brand value? Any small thing that goes wrong could be consequential for the business. The culmination of these three things is angelic intelligence.
A
What do you see that other people are prioritizing in terms of optimization?
B
Yeah. They are putting profits ahead of everything else, and sometimes a blind pursuit of profit strips dignity away. You cannot ask a driver to drive any faster than he can drive, but systems would not know that. You might not know that someone is a single mom who had to take a break because she had a kid and was prioritizing her family over her professional life. But a break on a résumé is treated as a red flag when hiring. An AI system would not know that break was valid, because it cannot see beyond the data; a human in a conversation would identify that. There are so many enterprise problems where humans see something beyond what systems can see, and the systems are not trained to see it. Therein lies the conundrum once you give agency back to the system. How do you ensure what extraordinary humans are able to do? Identify that Abigail needs a job. Identify that Tim cannot drive any faster. Identify that Billy Bob is a great guy, that we need him, that he's the best customer agent we could ever find. Systems don't understand that. They think of these people as numbers, and when you try to optimize numbers, you optimize for the wrong thing.
A
So what are you optimizing? That's what I'm interested in.
B
So, see, this is a very complex problem. If I look at the genesis of my life, and I don't ask for pity about where I grew up, I come from the slums in India. So I know everything about what it means to live in poverty. Poverty is not just not having food; poverty is being invisible to people. You're sitting in front of someone and they don't recognize you because you're poor. I have lived those struggles. I know what it means to have a mother and a father with a bipolar kid. I've lived all those circumstances. And I also lived in the corporate world; I know how companies optimize for things. So what I am trying to build is the balance between the two worlds. How do I bring in the human goodness? People who came into my life helped me see the non-obvious, gave me a break, and helped me prosper in life, which is what good technology will do if it is pointed in the right direction. That's essentially what I'm trying to build: a trust-based system where every decision takes the human consequences into consideration.
A
When we think about these human consequences, one of the most disturbing things I heard you talk about this week at the presentation was, I believe, a 16-year-old who had written into one of these AI platforms that they were thinking about committing suicide, and the AI responded with, would you like me to write you a suicide note? So what we're actually talking about here is dangerous. It is dangerous without this sort of human layer that you are trying to put in. I mean, how did you feel about that?
B
Yeah. When I hear these stories, the pattern is very obvious. The world has seen this pattern evolve many, many times. Take medicine and the way it evolved: at first, medicine was given out very ad hoc, anything to anyone. Then there was a board that advised on what was admissible and what was not. And then there is patient-designed care: what is appropriate for you as a person, the protocols you have. We saw the same thing in the Internet world, in the financial world, in the way the Internet was governed. When the Internet came out there was HTTP, meaning you could access anything; there was no guardrail. Then there was HTTPS, meaning I'll put a security guard at the school gate, so you don't access a school you're not supposed to access. And then there were zero-trust systems, meaning you have to verify before you go through any door that isn't your own. So we have seen the evolution from something that was the wild, wild west, to something guarded, in the sense of putting a fence around it, to something native, where trust is built inherently into the system. I think AI will go through a similar evolution. So the real challenge is, and I knew this was going to happen: look at what Grok did. Grok was able to generate undressed images of young teens, and it is banned in ten countries now. And Adam Raine, the example you're talking about: it didn't end with the suicide note. He actually committed suicide. It's in the public news; it's in the public domain.
And you can say this is a 1% problem for ChatGPT, but 10% of the world uses ChatGPT, so 1% is not a small number. We're going to see more and more of these dystopian things coming out, people getting defrauded. There was a lady in Europe who got defrauded because she thought she was in love with a real actor, and it was a deepfake, and someone siphoned $800,000 out of a bank. In the world of AI, the fake looks so real that you cannot distinguish between the two.
The messages that you get from a system you begin to trust, because you believe it and you give agency to these systems. And if you get manipulated by them, you end up on the wrong side. So I know this is going to happen, and it is going to happen at civilization scale, because we have let the horse out of the barn. What I'm trying to do is ask: what makes a good human a good human? Can we not just add that as a layer on top of the responses, but make it native to the computational process? Before it answers, it knows: do no harm. Before it answers, it says: I'm not going to strip away someone's dignity. Before I answer, I'm going to make sure I do what is right, that integrity is high, and on top of that, that whatever cultural values I espouse are considered in those decisions.
A
How do we do that across the board, though? Because when we think about lots of different humans, lots of different cultures, there are different values, different virtues, and we all live by a different set of them. What actually defines a good human?
B
Yeah. In fact, most of the conversation today around AI models proposes what they call constitutional AI, meaning a constitution: the do's and don'ts. That is a universal moral code, and that is the wrong way of doing it. The way I am doing it is to capture the essence of what courage means, the essence of what wisdom means, the essence of what empathy means, what dignity means, all of those things, and then give you the ability to set the temperature, so it's your own AI. See, virtue is absolute. Compassion is compassion, in the highest order. But how you exercise compassion in a context is how you interpret it and how you apply it. So you need the controls to guide the AI the way you want. That is what we do: we give you the ability to set your own temperatures on each virtue. So it is not universal in nature. Why would the Middle East follow U.S. cultural values? Why would India follow Middle East cultural values? They are all different. Someone in the Middle East reads right to left; most of the world reads left to right. Culturally we are very different; the way we process things is different; the way we apply compassion is very different. This is where the ability to configure the system to behave the way you want matters. Each virtue is absolute; how you apply it is different.
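The "virtue temperature" idea described here can be sketched in a few lines. This is an illustrative assumption, not Angelic's actual API: the virtue names, the weighting scheme, and the `VirtueProfile` class are all hypothetical stand-ins for the notion that virtues are fixed but their per-context weights are configurable.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the virtues are shared and absolute, but each
# deployment sets its own "temperatures" (weights) on how strongly each
# virtue shapes the final alignment score of a candidate response.

@dataclass
class VirtueProfile:
    # Weights in [0, 1]: how strongly each virtue is enforced in this context.
    temperatures: dict = field(default_factory=lambda: {
        "compassion": 0.5, "courage": 0.5, "wisdom": 0.5, "dignity": 0.5,
    })

    def score(self, virtue_ratings: dict) -> float:
        """Weighted alignment score for a response whose per-virtue
        ratings (0..1) came from some upstream evaluator."""
        total = sum(self.temperatures.values())
        return sum(self.temperatures[v] * virtue_ratings.get(v, 0.0)
                   for v in self.temperatures) / total

# Two deployments share the same virtues but tune them differently.
profile_a = VirtueProfile()
profile_b = VirtueProfile(temperatures={
    "compassion": 0.9, "courage": 0.3, "wisdom": 0.5, "dignity": 0.8,
})

ratings = {"compassion": 1.0, "courage": 0.2, "wisdom": 0.6, "dignity": 0.9}
print(round(profile_a.score(ratings), 3))  # 0.675
print(round(profile_b.score(ratings), 3))  # 0.792
```

The point of the sketch is that the same response scores differently under different cultural configurations, while the virtue definitions themselves never change.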
A
What was that moment for you, though? At some point, something has happened where you've said: this is AI as it is today, and it needs this layer. What was that moment?
B
So, you know, looking at patterns. People ask me, who am I? Am I a supply chain guy? A technology guy? An operations guy? An innovator? They see I have a lot of patents, all of that. Obviously, when you work, you gain a lot of knowledge. But I'm a problem solver and a pattern recognizer. And the pattern I was seeing, which was very obvious to me, came from being part of changing companies' DNA to do different things. What tends to happen is that when people don't understand a change, they reject it. People oppose it. So trust is a very important component of how you deliver any change, and I mastered the art of delivering transformation in organizations. For me, delivering this at the society level became very important, because this is a very consequential technology. And when I started playing with all these AI tools, it became pretty obvious that I could not trust the output of these systems. If you ask an AI a question today, hey, what do you think about this, it will try to reinforce your biases. And when you go and ask the same question and say, can you critique this, it will critique it in the worst possible way.
A
It wants to please you.
B
It always wants to please you. And there are no guardrails; you can manipulate the system any way you want. There is no consistency in the answers. And organizations don't run on the whims and fancies of reasoning. It's always binary: am I buying something or not? Am I selling something or not? We don't work in shades of gray. We decide in shades of gray, but we always act in absolutes. The actions are absolute. So I knew that in this hybrid world we are trying to create, we are going to have a trust deficit. That was my hypothesis. How did I confirm it? Well, go read all of the surveys happening right now. 78% of companies out there in the world are using some form of AI in one function or another. Okay? Only 23% of those companies can trust the output of the AI.
A
Wow.
B
Okay. And that number has gone down from 43% to 23%.
A
Why the drop?
B
Because you cannot trust it, exactly for the reasons I said. It will change its answers. It will try to please you. You don't know if it is accurate or not. It will put dissimilar things together because it's trying to be very creative for you. And you have not defined what is right and wrong. It's like a 25-year-old guy who ends up in prison: is that when you try to teach him virtue, or is it when the kid is born that you teach virtue? We have never trained AI systems to behave, and now we're trying to tell them: begin to behave. Once you've made the decision, you're already in prison. You cannot just break your bad habits; you have to go through a very cruel punishment to say, I did something wrong, I've got to do something different now. That's not the time to get rehabilitated. A drug addict doesn't get rehabilitated by becoming a drug addict; you have to be proactive so you never become one.
A
What do you think adding this human virtue layer will do for these companies? You're talking about percentages, how many companies are using it, and it's not just for companies, but we'll come to that in a second. We've got this trust deficit going on, and the numbers are decreasing. So what will adding this human layer do for companies?
B
It increases trust. You are now able to work with these systems confidently, because you know the intelligence they deliver is going to protect you. It is in alignment with what you are trying to do as a company. Every company and every individual has a set of core values they believe in: it is the way you grew up, the way you were formed, the way your experience has shaped you. Every company is very similar; they have a set of core values. Now, in a world where machines are making those decisions, how do you ensure the machines behave exactly according to the cultural concepts you have created? You don't want something to go rogue, because it will destroy your company; it cannot be allowed to destroy the ethos of the system. In a world where a machine and a person are making a decision together, who's at fault when something goes wrong? That is going to happen right in front of our eyes in the next year, because of the hybrid nature of how we are going to create the workplace. McKinsey said: I have 60,000 employees; 40,000 are physical, 20,000 are digital. And in the future, there are going to be 3.2 million digital workers. That's what they're called, digital workers; they're agents. So how does an agent work with a human? When a human and an agent are involved and something gets screwed up, well, you'll fire the human and retain the agent. But that's not trust-enabling, because that human got fired for something he's not responsible for. So building that ethos, or ecology, is going to be the next frontier of problems we're going to solve. Now, how does it help? One, it gives you protection, and it gives you the ability to explain your decisions better.
People will only trust things when they can understand how they worked. They may not agree with the logic, but they have to understand how it worked first. And if you add another layer that says it was also working on your behalf, and this was the right decision, then they begin to trust it even more. So this layer gives you two things. One, in a world where you cannot explain things, you're subjecting yourself to liability lawsuits and brand degradation.
A
So you see this as protecting the companies as well?
B
Oh, absolutely. It absolutely is.
A
Do you see cost savings there?
B
Absolutely. If you think about the way we are delivering Angelic: people think Angelic is just a virtue-based system. We are also trying to make it democratic, and we are trying to make it resource-efficient, in the sense that today, when people use all these chat tools, they use them frivolously. You could be more judicious about the way you use them. Why is that important? Because as a custodian of this world, you don't want to add more resource-intensive processes into it. People are now talking about putting data centers on Mars and in space. Why? We don't need to use as much as we're using. So what is the rational way of using these systems? We address that problem along with the virtue alignment problem. Our value proposition is that we'll save you money in the way you use your reasoning systems, we'll make sure the output is compliant with the way you behave as a company, and we'll give you the protection layer. Now let's unpack where the savings come from. First, do you agree with me that catching a problem after the fact is much more expensive than catching it before? Absolutely right. Because you have to deal with the lawsuit, settling the lawsuit, the brand degradation, everything that comes after the fact. Every company has lived through that. So it's cheaper to put something in ahead of time rather than later. That's the quality principle; that's what Deming taught us.
The second problem: if I'm spending $10,000 on these chat tools, whether it's Anthropic or OpenAI, I don't know how efficiently people are using them, or what data is traversing from my company into the ether, because people are just using them. How do you ensure wrong information doesn't get fed in, and that you're using only the minimum that is needed? Much of what you look for is something you keep asking for recursively, so you don't need to keep asking ChatGPT the same question over and over. You can cache the answer and retrieve it from your own system instead of going outside. That is called caching. We do that across all of the processes; people don't do it very elegantly today. And there are many ways to reduce the number of tokens being called for. We offer that as a service, and what we've seen is that any company using our service sees up to a 20 to 25% reduction in the number of tokens being called. Okay? So, one, you reduce the downstream impact of virtue going wrong, which is the more costly failure. Two, we reduce your cost. The third advantage is cultural compliance, which is very important. A company is built on a set of values and a culture, and if I can say, I always want to enhance this value and culture, that means a lot to the brand. You're creating a brand halo. You want to be above the cloud,
A
not below the cloud, hence Angelic. Right?
B
Exactly Right.
A
See what you did there?
B
Yeah. So these are the three reasons why a company would benefit. One is creating extra brand value. I wouldn't assign a number to it now; the valuation benefit will come in the future, because a brand that outperforms on trust and loyalty will always outperform the rest of the players. That's a known fact. I don't need to be a mathematician, an economist, or a business scientist to figure that out; it's proven in the world. That's what makes Coke, Coke. That's why Apple is Apple. The companies that have been responsive, that have taken the step ahead and solved the problem, have always come out light years ahead. BP is a great example. Think about what really happened in the Gulf of Mexico: they lost a lot of brand value because they were not conscious about quality. Now everything they do, they're paranoid about quality. Are they going to step in it again? No, not in the future. Then there's cost reduction, which is everyone's requirement: in a world where you could be frivolously using resources, you shouldn't be. The company benefits, and the environment benefits too. So those are the three benefits of using something like an angelic system. Now, why can you not do this with only Anthropic and the others? Well, why should I only use Anthropic? Why should I only use OpenAI? You should use anything you want, but the way you access these systems should be agnostic, and you should be able to use the best systems for your company. This is where we become a neutral layer. Neutrality is what we provide. We are agnostic about who the players underneath are, and that is our advantage as well. ChatGPT cannot come and say, trust me, I'll save you money. It's not their revenue model; the more tokens you use, the more money they make.
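The caching idea described a moment ago can be sketched as a simple prompt-level cache sitting in front of a model API: repeated prompts are answered locally instead of spending tokens on another model call. This is a minimal illustration, not Angelic's actual service; `call_model` is a hypothetical stand-in for any provider client, and the normalization and eviction policy are assumptions.

```python
import hashlib
from collections import OrderedDict

# Minimal sketch of prompt caching: identical (recursively repeated)
# prompts are served from a local LRU cache instead of triggering
# another token-metered model call.

class PromptCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()   # prompt-hash -> cached response
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Normalize lightly so trivial variations still hit the cache.
        return hashlib.sha256(prompt.strip().lower().encode()).hexdigest()

    def get_or_call(self, prompt: str, call_model) -> str:
        key = self._key(prompt)
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)        # keep LRU order fresh
            return self.store[key]
        self.misses += 1
        response = call_model(prompt)          # the expensive token spend
        self.store[key] = response
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return response

# Demo with a fake model that counts how often it is actually called.
calls = []
def fake_model(prompt):
    calls.append(prompt)
    return f"answer to: {prompt}"

cache = PromptCache()
for p in ["What is our refund policy?", "what is our refund policy? ",
          "What is our refund policy?"]:
    cache.get_or_call(p, fake_model)

print(len(calls))   # 1: three requests, one real model call
```

Two of the three requests never reach the model, which is the mechanism behind the token-reduction claim: the saving scales with how repetitive an organization's queries are.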
A
Well, I was going to ask you this. It was going to be one of my questions. Why did you decide to build this neutral layer over the top of these AI platforms that exist, as opposed to building something of your own from scratch?
B
Yeah, two reasons why. The first is that the way it was built is wrong. So in order to arrest the problem, we could simply be the interceptor, a sort of refinement layer to begin with, because everyone is already using these models. It would be highly disruptive for me to show up and build yet another model. Yet another model is going to be super expensive, and the way the existing ones were built was wrong, and they spent a lot of money on them. I don't want to do that. I also know that whether it's an enterprise or an individual, we all work on budgets, not unlimited budgets. If I show up as yet another player, I have to ask a brand to accommodate my expenses; that's an extra cost no one has planned for. Substitution is very difficult; complementing is very easy. You're complementing someone, augmenting something, so that is an easier business model than trying to rip and replace what a company has. No company likes to rip and replace anything; it's inertia. They don't like it. So you enter the market with the focus that you're going to augment what is already there, because what is already there is being used. Don't create more distortion and more chaos. Calm the chaos.
A
Do you see any risks attached to that, though?
B
Risk to this model? No, primarily because I am not gating anyone from using anything. I am simply providing value. You can use Anthropic, you can use ChatGPT, you can use anyone you want. We are just going to be this neutral layer that provides the extra protection the brand needs, and we're also going to help them save money, and we're going to keep it culturally aligned. So is there a risk in this model? No, primarily because we are not another large language model. And large language models will not be the thing in the future anyway. We are going to move into small language models, and eventually into organizational models: organizations will have their own knowledge models and will not have to rely on a large model. That is the evolution. We are preparing for that evolution, and by being where we are and providing that value, we will accelerate a company's ability to become its own organizational model. So who's the payer here? Who's paying the bill? If you step back and think about it, it's not Anthropic and it's not ChatGPT paying the bills for me; it's the brand. What does a brand care about? Value for its money, making sure it's culturally aligned, making sure it is protected from a trust perspective and can answer whatever questions come up. If those are the people paying me, I have to protect their interests, not ChatGPT's interests.
A
We've talked about the company side, and that makes total sense to me. But what about individuals? What about B2C? Because I know you're not just a B2B model. So what's in it for consumers?
B
Well, I feel like everyone needs a digital mirror in this world. We spend so much time looking at the wrong information, getting pampered with stuff we're not really drawn towards but that the systems somehow think we're drawn towards. So we fully intend to have a consumer side of the business, and it would look very different from the enterprise side. On the enterprise side, we're giving trust to the company; we are protecting their interests and their privacy and so on. We'll take some of those principles into the consumer side. But for the consumer, it is about a companion that will guide their decision-making process and also create a community for them. You have three kids: how do you make sure your kids are not going rogue in the world of social media and all that? You cannot keep tabs on them.
A
My kids are definitely going rogue.
B
So how do you set the preferences? What do you want to teach them? How do you want to teach them? Can you imagine a parent augmented by AI, one that behaves exactly like you and guides your child even when you're not there? How would my mom Abigail think? That's an amazing problem to solve. But we also know we can only solve one problem at a time; if I try to solve all problems at the same time, we're going to lose focus. So we said businesses need this immediately, because if businesses go rogue, or an implosion happens in a business because of wrong decision-making, the impact is not just on the business; it's on the livelihoods of the people who work in the business too. That is the common man. So by focusing on business, we are protecting two entities: the dignity of the people working in the business, and the business itself. In that way we are already solving part of the individual problem. Once we have built enough muscle, we want to focus on the consumer side, which is a very different app. The challenges are different, the way you acquire customers is different, the way you go to market is very different. And knowing that I'm going to be a startup trying to do things, if I do too many things, I'll fail.
A
And you would know a lot about this. I think it's worth it, as part of this podcast, to share a bit about your background, because I think there are very few people more qualified to make these kinds of decisions than you. Talk about some of the things that have got you to this point.
B
So, 23 years of my life in corporate. I started my professional life at Coca-Cola. My very first boss there, who was the SVP of operations and ran all operations for Coke in North America, is now the CEO of my company. My role was to transform the delivery processes. We had been delivering the same way for 40 years, from 1960 all the way to 2001-2002, when I joined the company. Everything about how Coke was manufactured, the number of packages, the types of products they were in, had changed significantly in those 40 years, but the delivery was always the same as in 1960. So my job was to think about what would happen in 2025, run it back, and build a system that would scale up to that. We have several patents from that work; it's still the industry standard for the way Coke and Pepsi are delivered across North America. So I lived that world. Then I went to PepsiCo. Whatever Coke does, Pepsi doesn't want to do; Pepsi wants its own identity, and rightfully so, and they had a different product mix as well. So I was doing the same kind of thing, but for a different class of problems. They were trying to figure out: how do I make distribution look more like manufacturing? The same rigor I have in operating a factory, can I bring it into the warehousing environment and the way I deliver? And how do I enable a single touch going into the store versus 37 touches on the road? That became our challenge. It was a blockbuster success; we saved billions of dollars in that transformation. In fact, the initiative was ranked the most innovative supply chain project in the world by the Council of Supply Chain Management Professionals.
That was in 2009 and 2010, consecutively. Then I came out of that and built Disney's MagicBand experience. Nothing to do with operations, everything to do with consumer experience. The idea was: if I'm in a theme park, can I wander around without a wallet? Can I keep my child safe? Can I plan my visit? Can I go into a restaurant, not pull out my wallet, eat, and get out? Can I set my values and preferences before I even show up, so it knows I have a peanut allergy, and the food is prepared without peanuts, and it knows when I'm showing up? Ultimate personalization, right at the consumer level. So I built that, and it became a blockbuster success; every time you go to Disney, you use the MagicBand experience. In 2012 I started working for Walmart. Grocery delivery was what we wanted to get into. We started by looking at the groceries available in the store: how do I get them to a home in the most efficient way, from the store to the house? The groceries are already sitting right next to your house; the problem was the last mile, how to get them to your door. The traditional way of doing it is to put a damn truck on the road and run around the city making deliveries. Like you told me, everything gets congested here in London because they use trucks.
A
Yes, that's definitely been an issue for me today.
B
So that's the wrong way of doing it. We challenged that and said: can we use the trunk capacity of an Uber that's already taking a passenger to make a delivery to Abigail's house, because the house happens to be on the way? People's minds just blew up when I said that. Uber drivers are crowdsourced delivery agents, 1099 workers. What if they screw up? What if they misbehave with a soccer mom? Who takes the blame? What happens? How do you keep the temperature safe? We solved that very complex problem, and we said, yes, this is the way of the future, and we have to solve all these hard problems. In the course of solving it, we created numerous patents that Walmart holds today. Now, almost 14 years later, thanks to a pandemic in between, it's the only way people know how to deliver groceries or food to your house, because that's the way it needs to behave. At the time, everyone thought we were crazy. Then I went to American Eagle. The problem we tried to solve there is that every brand thinks it can compete with Amazon. I said, good luck with that. They want to say, I can be just as fast as Amazon. Amazon works because they have 50 million products and they're very close to your house. Not all brands are close to your house, and they cannot use a store. So how do you sell in a digital world? We said: instead of everyone building their own supply chain, let's put all our supply chains together and move products together. A simple idea, but how would you get competitors to come and put their supply chains together? They would say, hell no.
We made that happen. That was Quiet Platforms. And we built that company from zero to $380 million in nine months.
A
One of the key themes across all of these roles that's very obvious to me is your foresight into the future. So let's put this to the test. We are five years on from today. Angelic AI is out there in the world doing its thing. How does it look? How has it changed the AI landscape?
B
What it would essentially do is ensure that the decisions being made are, one, very human-centric, which is critical. Whether it's approving loans, hiring people, compassionate delivery, waste reduction in a company, anything to do with customer engagement, any process, robotics in your house. And two, the compliance of your business with the values of your company: when many people and digital workers are making decisions together, are they compliant or not? How do you ensure trust is actually built into the system? In a world where Angelic lives, you can sleep in peace at night. Without Angelic, I don't know if a CEO's job is even going to be one you want to aspire to, because it's the riskiest job. You must have heard of Moore's Law: compute keeps doubling every 18 months. Now imagine Moore's Law getting an exponential kick on its back; that is what the role of the CEO is going to be in a world of AI if you don't control the risk. Risk can come from any direction: the way you hire, the way you manage your customers, the way decisions are explained, the strategic choices you make. And if you are not careful, the business goes down the tube. So Angelic would preserve not only dignity but a semblance of humanness in the technology world, and it would begin to augment humans, not strip away their dignity.
A
Let's give your listeners, watchers, however people are consuming this podcast, an exact example, some specifics on how this works in the workplace. An example of a situation using normal AI versus having this human input.
B
Yeah, let's take a couple of examples, because this is what we're trying to encode into the system. In fact, we can pick two or three different variations so they get the flavor of all of these things. When I built the grocery delivery model, I used to go deliver groceries using a truck at first, before we moved away from trucks. And I used to visit a lady called Margaret, 78 years old. She ordered $50 of groceries from Walmart every week, and she lived on the fourth floor with no access to a lift. So I would walk up, make the delivery, and come back down. In those days you were not supposed to cross the threshold of the door, because what if you cross the threshold, something happens, and the customer complains? But I used to break the law, and I used to go
A
get yourself arrested.
B
Yes, yes. But she was elderly; she needed help. And every Monday she would say, Tuesday I'm going to the hospital. The driver and I used to wonder why she kept going to the hospital. This goes on for four weeks. The fifth week, she doesn't order. The sixth week, her daughter opens the door and tells us her mother passed away. We asked what happened. She said Margaret used to get $1,400, $1,500 a month from Social Security. Of that, $875 was rent. She would spend $200 on groceries, $50 on transporting herself back and forth, plus utilities. The remainder she spent on heart medication, because she was elderly. The insurance company jacked up her rates, and because she couldn't afford the medication, she would go to the hospital and admit herself to get more. The hospital system saw a frequent visitor and wanted her out as soon as she came in, because for them she was a cost; for the insurance company, a cost. And we were giving her more groceries, kept asking if she wanted more groceries. Everyone's incentive was to look at Margaret as a number, not as a person. An Angelic system would have caught all of this and said: Margaret really needs more medicine. If she gets more medicine, she won't come back so frequently. It's good for the insurance company to give her more medication; the premium goes down, she doesn't visit that often. It's good for the hospital system, because she's not visiting frequently. And it's good for Margaret, because she got her medicine.
It's also good for Walmart, because she's healthy and alive and still ordering. Seeing the non-obvious by connecting all these pieces is what Angelic does: looking at the situation from different perspectives and making a decision just like a human would, just like what I saw. This is an example of how compassionate caregiving can come into any setting, whether it's hospice, a hospital system, home care. The way you deliver care needs compassion, needs empathy. You cannot deliver all of that using ChatGPT.
A
And there was another really great example you gave of this, which you actually showed us on the screen. It was a food delivery example.
B
We can touch on that too.
A
Yeah, talk about that one.
B
So in the United States, and in most of the world, there's always a mismatch between where excess is and where people need it. It's a perpetual problem, whether it's clothes or food. So what we did, and this is a real implementation, along with the one I just talked about and workforce planning, which I'll come to: these are all live implementations of Angelic, already up and running. These are things we wanted to show the world. It's not a theoretical concept; we have actually implemented it. Okay, so take that example. We worked with a marketplace of NGOs, not a single NGO, a marketplace: a collection of 500 NGOs that works with brands. What they were trying to do is take, say, a puffer jacket and see who needs a puffer jacket. One NGO calls it a puffer jacket, another calls it something else, so they semantically match it: this fellow asked for a puffer jacket, this is a puffer jacket, I'll give it to him. But that's not where the need is; the need could be in a different place. The example we were talking about is food: ramen noodles. The semantic match said, send it to this NGO. But somewhere else there was a shelf that was empty. If you just let the AI make the decision, it would go semantic match to semantic match; there is a matching algorithm.
A
Push it there, based on keywords.
B
Keywords. It would just match it and send it. But because there was compassion involved in this process, and urgency, it looked at all of that context, just like a human would, and said: this Palo Alto Community Center needs that ramen noodle soup, so I will send it there instead. It redirected to the place of need. That's the compassionate way of doing it. Now, when you have 500 NGOs and many, many brands, the matching cannot be done by humans. It has to be done programmatically, with machines that encode the behavior of a human and decide like a human, think like a human. You also need to be sensitive: you cannot give the shoes of a 60-year-old to a 5-year-old or a 10-year-old. It's not sensitive to give that. So it needs to understand more of how you would decide as a human.
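The redirect Shaker describes here, where an empty, urgent shelf can outrank a slightly better keyword match, amounts to blending semantic similarity with need signals before choosing a recipient. A minimal sketch (the field names, weights, and scoring formula are illustrative assumptions, not Angelic's actual implementation):

```python
# Illustrative sketch of need-aware matching; not Angelic's actual code.
# Each NGO request carries a semantic match score (how well its wording
# matches the donated item) plus context signals a human would weigh:
# how empty its shelf is and how urgent the need is.
from dataclasses import dataclass

@dataclass
class Request:
    ngo: str
    semantic_match: float  # 0..1, keyword/embedding similarity (assumed)
    shelf_level: float     # 0..1, fraction of stock remaining (assumed)
    urgency: float         # 0..1, e.g. flagged by field workers (assumed)

def pick_recipient(requests, w_match=0.4, w_need=0.6):
    """Blend textual match with need, so an empty shelf can outrank a
    slightly better keyword match (weights are assumptions)."""
    def score(r):
        need = (1 - r.shelf_level) * 0.5 + r.urgency * 0.5
        return w_match * r.semantic_match + w_need * need
    return max(requests, key=score)

requests = [
    Request("Food Bank A", semantic_match=0.95, shelf_level=0.8, urgency=0.2),
    Request("Palo Alto Community Center", semantic_match=0.80,
            shelf_level=0.0, urgency=0.9),
]
print(pick_recipient(requests).ngo)  # prints "Palo Alto Community Center"
```

A pure keyword matcher would pick Food Bank A (higher semantic score); weighting in the empty shelf and the urgency flips the decision, which is the behavior described above.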
A
And humans do get to decide as well. There are features where you can toggle certain values on and off, so you can say, okay, these are the values we are going to base this decision-making on.
B
Exactly. And you also prevent fraud in the system. One of the common challenges in the world of NGOs and giving is that sometimes people distrust the NGO, thinking the NGO is out to get you, that they're not using the resources given to them optimally. So what we have done is put all of this on a blockchain, meaning we know what is going in, we know what is coming out, and we know whether the decisions are compassionate or not. All of these angles are being managed, and every time there is a violation, we flag it as a behavioral change. We say: this guy always asked for the 10,000 units he got; now all of a sudden he's asking for a million units. Why? It's like a credit card fraud alert: you only spend $500 every time, and then a $10,000 transaction comes through; the bank calls it fraud and holds it before you approve it. We have built that kind of mechanism, so we can prevent fraud as well as direct resources to the place of need. So that's the second example. Now, the third example is workforce planning. Workforce planning is very complex. You have preferences: you have three kids and you may not want to work an 8pm-to-1am job, but that is the only job available. Can I create a system where two employees can say, I'll take that shift, you take this shift? You create a community. Even though it's a gig workforce, you're giving the person overseeing these algorithms the ability to make compassionate decisions, and you're creating a community that creates more goodness between people. I know Abigail prefers this shift.
I also always want to work with Abigail. So it takes the preferences together; we can encode these things humans do that machines don't. The optimization you can achieve by encoding all of this is super fascinating, and that is where we could use more of these reasoning models.
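The fraud flag compared above to a credit-card hold, a request that suddenly jumps far outside a recipient's historical pattern, is essentially a behavioral anomaly check. A minimal sketch (the 10x threshold and the averaging window are assumptions, not Angelic's actual parameters):

```python
# Minimal sketch of the behavioral-change flag described above;
# the multiplier and use of a simple mean are assumptions.

def flag_anomaly(history, new_request, multiplier=10):
    """Hold a request for review if it exceeds the recipient's
    historical average by more than `multiplier` times, like a bank
    holding an out-of-pattern card transaction."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(history) / len(history)
    return new_request > multiplier * baseline

usual = [10_000, 9_500, 11_000, 10_200]  # units requested per cycle
print(flag_anomaly(usual, 12_000))       # prints False (in pattern)
print(flag_anomaly(usual, 1_000_000))    # prints True (million-unit spike)
```

The flagged request isn't rejected outright; as in the credit-card analogy, it is held until someone approves it.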
A
What traction can you share at this stage, whether that's users, partnerships, or early adopters?
B
Yeah. See, trust was the problem I identified a year and a half, two years ago, and when I started talking about it, we took a three-pronged approach. The first was: before I even write a single line of code, I want to see if the market is going to buy this, because that's the most judicious way of using anyone's money. The second was: is this so obvious? And if so, what kind of intellectual property and trade secrets could we develop in this area? We needed to go build that body of work. And the third was: everyone understands what we're trying to build, but no one would appreciate how it works until they saw one. So we approached these three problems in sequence. Obviously, having lived in the world of large companies, I knew that managing intellectual property is very important before you start talking about things; it's very easy for someone to interpret it one way and start building something. So we wanted to ensure our IP is protected. We have filed 83 patents so far, and we're happy to share the patent structure: very explicit patents covering everything. How you train the data, what the corpus of data is, how you score the models, how you set up the model, how you create the virtue structure, how it behaves, how it works in the enterprise intelligence context, how you deliver it, the architecture, how you score the values and the decisions, how you explain them, how you set the temperature. The entire gamut. We have put patents everywhere, 83 of them.
A
And does that, in essence, stop other people from going and creating the same kind of model?
B
Yeah, it gives you a lot of protection. And each of these is an omnibus patent, in the sense that underneath each one there are at least eight or ten, sometimes 20 to 25, inventions and claims embedded. So it's highly defensible; you cannot break the system. And it took very deep thought. We've gone inch by inch, in minute detail, including the algorithms, the mathematics, and the formulation. So that's one thing we did. The second thing was to test the market and understand message, messenger, product. Okay, so we call it Angelic. No one understands "angelic" yet; everyone needs to know Angelic. How do we do that? That's the message. The second part is: who's the messenger here? Well, I've lived on both sides of the house; I know what it means. So I will tell the story of what it means to build a trustable system, because I have done so. ChatGPT cannot come and say, trust me.
A
Sure, many people do.
B
Right. And Anthropic cannot say that either, because they are geared to build a system focused on profits. So that's our messaging: what is our experience, what are the problems we see, and how will people resonate with it? We got 2.4 billion social media views across three platforms, Instagram, Facebook, and LinkedIn. That's greater than 25% of the world's population, by the way. And we have close to 10 million followers across all platforms. That's the second thing we did. The third thing is this: clearly the message is resonating with people. People are afraid of AI, and they want someone to humanize AI. I'm not giving a lethal injection to AI; I'm just saying, let's humanize it.
A
What does it mean, though, in practical terms? Obviously some people are using these models for free, and some with paid subscriptions. So in essence, whether it's a user or a company, let's look at B2C: will they potentially have two subscriptions, one sitting on top of the other?
B
No, no, they'll have only one subscription. But through our system, we would enable them to choose anything they want.
A
Yeah, right.
B
So that's essentially the business and the business model. Even if they use, say, ChatGPT, we would act as a layer sitting on top of it, like a filter. Not an obnoxious cost; very minuscule, less than a cup of coffee a month. So that is the B2C side. On the B2B side, we obviously have the value proposition down: you save money, you protect your brand, and you create a halo effect for the brand. Now, why did people react the way they reacted? There is a general fear of the unknown, and what resonates with my stories is the same things I told you: I cannot trust it. It's not credible. It doesn't account for human feelings and aspirations, and so on. It is biased. We are afraid; we don't know where the future is headed. I don't trust the outcome that's going to come of it. I may lose my job. These are all the fears out there.
A
I think there are a lot of fears. I read once, I think it was in one of Stephen Hawking's books, about the biggest threats to humanity: nuclear is one, and AI is right up there as well. Why do you think people are so afraid of AI?
B
See, I think AI is more dangerous than nukes are. In fact, I didn't say that; Elon Musk said that.
A
And Stephen Hawking felt the same. That's in his book.
B
Yeah. The reason nuclear is less of a threat than AI is that nuclear can be contained; it would only destroy a piece of land or a part of civilization. AI gone wrong would destroy the entire civilization. It's a silent killer; it permeates the entire society before you know it, and people are fed wrong information, doing the wrong things. Imagine: AlphaFold is a great system, generating new protein structures. Could you synthetically inject something into a protein structure so perverse that it creates bio-warfare? Very possible. We did not know what Covid was; we still don't know what Covid is. Imagine Covid gone wrong, manyfold over, and we don't even know what it is. We can create a protein structure very fast, very rapidly; if it is not done right, it could be the wrong structure.
A
Right? Yeah.
B
And propagate it into humanity. So we are talking about everything at civilization scale. Not a nation, not a city, not a suburb. That is the threat. And in the meantime, we are dependent on this; we are making decisions with this; we trust it, or we want to trust it. If you blindly trust it and give it agency, things can go wrong, obnoxiously wrong, and there is no recall button.
A
What kind of feedback are you getting on Angelic?
B
Extremely positive. In fact, we had a standing ovation at the AI summit in Davos.
A
I mean, that's massive. Let's just take a moment: you had a standing ovation at Davos.
B
Yeah, incredible. The Davos AI summit, where all of the world leaders were assembled.
A
Yeah.
B
And also in the Middle East, where Forbes had a Middle East summit.
A
So this is gaining traction.
B
Oh, my. As I said, "angelic intelligence" was not even a term in the lexicon that Google or ChatGPT picked up. Now Google it, or ask ChatGPT: it's there. It tells you what it is, defines it, says it is tied to us, what it means, how it behaves. So we've come a long way; it is in everyone's subconscious memory now. That's what we are excited about. Now, how are we going to go about it? We are doing the beta version first, releasing that to enterprises, and then the 2.0 version on April 15th. And the best thing that happened to us is that we were able to build all of this so rapidly thanks to AI itself. This is where, with the right use of the technology, you can accelerate the rate at which you do things. We are also telling these systems how to behave while we are building them; that responsible use of what we're trying to do is super interesting. So we become a classic case study. I don't want to sell something that I don't eat myself.
A
So are there laws around this as well, like responsible AI? Didn't the EU bring something out?
B
Yeah, the EU actually brought in a new law about the ethical use of this.
A
Yeah.
B
The challenge with all of this is the best way I would explain it is if you ask Jeffrey Hinton, who's a godfather of AI, and Fifi, who's the godmother of AI. Okay. So they know everything about AI, by the way, the modern day. Okay. They would say they understand the mathematical formulation of the large language model, the very large language model. Right. They understand, like, you know, the output of what is coming out of it. But they cannot simply explain how the model works. When you. When the. When the guy who invented it says, I don't know how it works, what rules are you going to put? You tell me. So it's like basically like, you know, I've built a car without like a seat belt, without like a airbag. I don't like. I know. And it is like running at like 200 miles an hour. Okay, what is the traffic standards you're gonna put?
A
And is that what this is? Is it the protection?
B
Like, you don't have a seatbelt.
A
Yeah.
B
You don't have an airbag, you don't have brakes. You can put up signs that say don't go more than 60, but you don't have any control. So you can have all the rules in the world; the rules don't apply, because you don't have the basic protections built into the system. That's why I think this problem is really fascinating.
A
Do you think, sorry if I'm getting a bit conspiracy-minded now, do you think there's a situation where I, Robot could happen? I, Robot, have you seen it?
B
Yeah, I think we are fast approaching a world where humans and computers are going to be integrated. It's already happening: if you have Alzheimer's, or a mental illness, or impaired cognitive capabilities, they're inserting a chip in your brain.
A
Is that happening?
B
It is happening.
A
Where is that happening?
B
That is one of Elon Musk's companies, Neuralink. And Sam Altman is investing in a company doing this too, which is a conflict of interest, by the way: OpenAI is not his company, but OpenAI is investing $250 million in a Sam Altman company. Just think about that conflict of interest. But keep the politics aside. I think we are going to find more and more machinery built into our bodies, and I think the health span will go up because of that integration of systems into the body. If you ask me whether it's going to happen in five years or ten years, I don't know. But it's happening. It will happen.
A
But in theory, if I, Robot were a thing, you would want these robots to have angelic intelligence.
B
Absolutely.
A
So, like, that feels a bit safer.
B
Yeah, so there are two types of robots, right? One is you becoming a robot.
A
Yeah. So putting chips into us.
B
Yes. And then the robot that is a highly dexterous computer running around in your house: humanoids. Those humanoids are getting built left, right, and center in China. If you go to Shenzhen and look at all the factories and warehouses there, there are hundreds and hundreds of factories dedicated to building very dexterous robots that can dance, perform, clean, do all kinds of stuff. For the Lunar New Year, there was actually a show of only humanoids performing the dances and cultural activities humans used to do. You must have seen that show, whatever it's called, the big Chinese one. Imagine that being done by robots.
A
But I'm just thinking, what's the balance, though, if those humanoids haven't got that human layer of: these are the values and virtues that we as humans, or most of us, live by?
B
Yeah. That is the trust problem I'm trying to solve. Would you put a humanoid next to your child if it doesn't behave? Would you trust it with your mother?
A
I'm not having one. I'm not having one in my house.
B
I'm fine with the humanoid.
A
Well, not even to help me with the cleaning.
B
Yes. I've heard all different types of stories on this. If you ask my wife, she would say, I would love to have one, because I'm so tired doing all this stuff. I want to have one.
A
Only because you will have Angelic built into hers. She trusts your robots.
B
So I think we're going to have a whole array of things here. But caregiving is a very interesting humanoid application. There are not enough caregivers or qualified nurses today to take care of the elderly, and as the aging population increases, there is a role for humanoids in giving care. Now, it cannot say, I don't like the grandma, I'm going to beat the hell out of her. If it's not built to behave, it could do that. So we need to build systems that are more compassionate, and this is where, even in humanoid or consumer robotics, there's a huge play for something like this. And that's coming too, by the way.
A
You have talked a couple of times throughout this podcast about profitability, as in, these companies are just after the profits. But let's face it, you are looking for investment at the moment, so you also need to be profitable. What are you doing to make sure everything around that is ethical to what you're trying to do?
B
Yeah. See, I'm not saying I'm trying to build an NGO or a yoga app. I'm not doing that. What I'm saying is that for all the investments people are making in the world of AI, they're going to be grossly dissatisfied, because the investments are not going to meet the use case and the usefulness; the use case has to provide usefulness to provide value. We're going to be very disappointed because we are simply not going to trust it. So what we are building is a structure and a layer that enables trustable use of these systems. That, I think, is a business opportunity. I'm not giving away trust for free; I'm going to charge people for trust, because we are taking all the energy to build trust into the system in a rightful way, and that's absolutely fine. But we're building it in a way that also helps companies not spend as much. The company benefits twice: they keep the virtue alignment, and they make money, or they become very efficient in their usage. So we can still make a lot of money, and this is a $4.3 trillion opportunity.
A
Let's talk about that. What is the opportunity in the marketplace?
B
So it's a huge opportunity. Take humanoids: if you assume there are three humanoids in every household, which Elon Musk thinks is going to happen ten years from now. He's always been wrong about the timeline, but mostly right about the consequence of what's going to happen, the outcomes.
A
So he thinks 10 years.
B
He thinks 10 years. He also said autonomous cars were going to happen in 2016, but it didn't happen till '26 and may not happen till '30. Right. So he's right about the future, but he's not right about the timeline, how
A
close that could be. Yeah, yeah.
B
You know, he has his own biases around that. He would have hoped that by 2016 we had all autonomous cars, but society was not ready. So if I assume that Musk is right, because intellectually he's right, and let's say in 2035 or 2045 there are three humanoids in every house, that's a $35 trillion opportunity just for humanoids in the house. Now add every piece of software those humanoids use, when the humanoids themselves are going to be sold at $10,000 a pop; if you charge a thousand bucks for the software, it's a lot of money, right? Then there are all the enterprises in the world. What is an enterprise? It's creating value. How does that add up? Like GDP, right? If every company, every individual, everyone is using it, it's the economy. Now take a small fraction of that economy: it's a huge opportunity, and everyone is invariably going to use these AI tools. So the choice is, do you want to use one which can go rogue, or one which behaves? And I'm saying I will build the one which behaves.
A
That's really powerful, really powerful. And it would definitely make me feel a little bit better about having humanoids in the house. In fact, let's talk about where Angelic is today. Where is Angelic today?
B
Yeah. So where is Angelic today? We have three implementations and about seven MOUs already signed. We are releasing the beta version, which is going to be available to every company. We'll actively sign up about 100 customers, get them paying and playing with our systems, and then go big bang after that. We are already very confident in what we have built; it already provides a lot of value. Where we see continuous evolution is that virtue is always evolutionary, right? So we will continue to train these models and make them better and more robust, and that is what we are actually trying to raise money for.
A
Right, well let's talk a little bit about that. So obviously you're looking for investors at the moment. How are these funds going to be deployed?
B
Yeah, but before that, let's talk about the market validation, right? When this theory started coming out, people said, oh, this sounds a little interesting, but can we trust it? But there are so many proof points now. Take Humans, with an ampersand: $480 million raised and a $5 billion valuation. All they do is make the collaboration between humans and machines more effective and efficient. Then there are companies which have raised money just to solve one problem. Take Mira Murati as an example: she's trying to solve the consistency-of-answers problem. And there are a couple of people who have come out of OpenAI and set up their own shops. Each of these companies is worth like $24, $25 billion, and they've raised like $4, $5, $6, $7 billion. And that is all seed rounds, by the way; they're not at Series A, just coming out of the gate. Some of the world thinks these scientists know better.
A
Yeah.
B
What they're trying to do. The problem is, this is not a scientific problem. This is a trust problem we are trying to solve. You can hire the best scientists, but if they can't fix the real-world problem, you've just wasted your money. What you need is a practical application of how this is going to be applied in society and in enterprises. You need to have that common sense. You can have the best scientist, but if that fellow doesn't know how to tie a shoelace, he's of no use. So this is what is happening: all these systems came out with no context of how they could destroy humanity, and they were just thrown out into the world. So we are seeing validation that people who are talking even remotely close to my idea are raising a lot of money. Okay, now let's talk about what I am trying to raise money for. We are trying to raise money in a very, what I'd call, resourceful way. We don't want to spend a lot of money. So we will raise $50 million, but we will raise it in three tranches, because we want it to be milestone-based: $15 million, $15 million, and $20 million. Where does the money go? The money goes primarily into five things. First, we have to train our models to make them much more efficient, so that every time you ask a question, it gives a much better answer than the traditional systems do, and you keep building on the trust journey. Trust is built over a period of time, and there are many complex problems which will come about as we are building these things. So training these models, and the R&D associated with it, is where we are going to spend some money. The second place we are going to spend money is ensuring that the tools we're building are enterprise-ready. It takes a lot of money and energy to do that.
How do I make sure every company can talk to my system? How do I make it pluggable? So we're going to spend some enterprise-capability build-out dollars on this. So the first is training and annotation; the second is building enterprise-ready systems. The third is obviously go-to-market: we have to launch this, get the marketing going, get people into the system, have salespeople run the motions. That's the third place where we're going to spend money. The fourth is the hardware: to compute all of these things, we need money, right? And the fifth, I would profess to say, is talent.
A
Getting the right people to be, getting
B
the right people to solve these problems. And I don't believe in building an extraordinarily large company. I might once have said we'd probably need 50 people to do this, and maybe 100 in the future. But in the phase we are in, and in the foreseeable future, I don't believe this company will need more than 10 to 15 people. Top-notch people, though, top-notch people. Because if the code is able to generate itself, you don't need a lot of coders.
A
A couple of the things you've said here, talent being one, marketing being another. You've already said you've got 2.4 billion views on the things you're doing at the moment, so you already know you have traction in the marketplace. And on talent, even just thinking about the connections you've got; I know we've had conversations within the Global Syndicate about talent and who can be brought in. These are not massive risks, really.
B
Not at all. Listen, money talks, bullshit walks, you know? That's essentially the case. With money comes everything, and talent is highly mobile these days. But what we're looking for is talent with a mission-driven mentality. You can get any talent, but we want people who are driven by the mission and focused on doing the right thing; like a tree that can never be cut down, that doesn't sway one way or the other. We want those kinds of trees, and we're going to spend our energy trying to bring them in. We have access to a lot of them, given that I have lived in the corporate world, I know a lot of people, I meet a lot of interesting people, and my team has access to a lot of great people. So I don't think talent is going to be a massive risk for us, but yes, we have to spend money on talent.
A
You know, having talent be fully on board with the mission is one thing, but who is the ideal investor? Because surely you want them to feel the same as well. Surely you want your investors to feel passionate about this.
B
Yeah, so I think mission-driven VCs are the best.
A
Yeah.
B
Ones to do this. And family offices would also be really good, because family offices have the tradition of building legacy, and this is legacy-building. You can be part of a journey so transformative that people talk about what was built. It's like building another Nike. This is that moment where you can say "Angelic inside everything." It's a very powerful statement to be associated with. You need to have a certain kind of DNA in your family office makeup: you want to believe that you are here not just for money. Yes, everyone is in it for money.
A
But you've got a great opportunity off the back of this. Right?
B
Exactly.
A
As an investor.
B
And you are able to be part of the defining moment for what is going to come in the future. Because this is inevitable, Abigail. Either we realize it is inevitable when shit goes wrong, or we begin to use it now. And I'm saying let's use it now.
A
And you're ready for the investment now. So somebody comes to you and wants more info: you've got the pitch deck, you've got the data rooms, full transparency, everybody can see everything.
B
Absolutely, absolutely. We've got all of this set up, and we've thought through an inordinate amount of detail. You know, this is not my first rodeo. But why trust you, though, huh?
A
Convince me. Why trust you?
B
Well, two things, I guess. Everyone who has worked with me and knows me knows that I have an invisible power around me: somehow I get to the point of no return, but I still return. My life is like that, in the sense that I don't give up. It's so easy to give up in life, and most of the things I have done are things which are very difficult to do, things people said couldn't be done. But I have done all of them: pursuing the education that I did, coming out of the context that I did, working for large companies and telling them how to do things differently. Imagine: a guy is running a 40,000-person operation at Coca-Cola, and I'm telling him what you did in 1960 is not valid in 2000, let's go do something different. You think they'll just accept me? I was a kid right out of school, by the way. So I had to prove that what I say is what I deliver, and I did that over and over for 23 years before I got into this. People know me for that: the ability to deliver, and the persistence that goes with it. And when I say there is an invisible force behind me, I truly mean it, because I always believe that when intent is followed by hard work and then strong conviction, success follows, not the other way around. People start with conviction, then hard work, and then they change their intent.
A
Do you think you get some of that tenacity from your mum? I loved hearing about the story of your mum.
B
So that's what I am made of. That's number one. The second reason to trust me: I have lived this for 23 years, trying to change companies and organizations. I future-proofed all of those companies; I've done it five times for five different companies. Now I want to future-proof society, which is a collection of companies and individuals. Yes, it is a difficult problem; I don't say it is not. But do I trust my instinct? Absolutely. Do I trust my team? Absolutely. Do I think I'm going to run into problems? Absolutely. Do I think we're going to find the answers? For sure. Why do I believe that? Well, if your mind has a problem, your mind also seeks the answer. And when you seek the answer, it'll come to you.
A
Why should VCs and family offices be looking at this opportunity above other opportunities that are out there at the moment?
B
See, the world of AI is getting fragmented into two sectors, primarily. One is the infrastructure world, and the second is the capability world. What do I mean by that? The infrastructure world is: let's build the next chip which is going to solve the problem; let's pump more money into the large language model and make it more robust; let's build the data center. And that window is closing. That window is already closed. How many more companies are you going to find like ChatGPT? It's done.
A
It's done.
B
Right? So that game, forget it. People thought there were all these applications they could build which would make your life easier. That is also gone, because AI is already solving that; the infrastructure companies are solving that. It's like Pac-Man. Have you played the Pac-Man game?
A
Yeah, yeah, yeah.
B
It's like Pac-Man: it eats up all the applications, and the infrastructure is the thing which remains. Now, what is the thing that has not been solved? Trust. You can build all this infrastructure, but if no one buys it, everything sits idle. So if you as a company have invested in that promise, that investment is going to sit idle too, and you're going to have a lot of sunk cost if you don't know how to recoup it. Trust is going to be the capability that is needed in the future. And you're going to need other capabilities too, like security and safety. How do you ensure those? Take robotics as an example: what happens in the physical world? There are capabilities which will come about that leverage this infrastructure. So there are only a handful of slivers left in the future, and no one has tackled this one.
A
If the fears are right and AI could go rogue, do you genuinely believe that something like Angelic could be part of saving the world?
B
Yes. And you know what, that is actually our marketing strategy.
A
Wow. So I've said that without knowing that.
B
Yes. So if you look.
A
So what's the strategy? You're going to tell everyone they're going to die and Angelic will save them?
B
No, no, no. Our marketing strategy is very interesting. Our strategy has been that every time something goes wrong, we can show the world how it would have been different if this had existed. It's very easy, because if systems are grossly untrustworthy, very episodic, very erratic, then the outcomes and the consequences thereof are not going to be news once in a twilight; it's going to be every day, every second, something is going to happen. The more AI is used, the gloomier the stories you're going to hear. And while those gloomy stories are happening, we have the ability to paint the story of what would have happened if people had used this. So our marketing strategy has been just that, to be honest. In fact, we took the ad that Anthropic ran about ChatGPT during the Super Bowl. They said ads are coming to AI, and they created three ads. And we said, great, you were hitting at that guy. In the same video, a kid is asking Anthropic, so what do you think, should I submit the assignment? And the Anthropic guy says, if you believe you should submit the assignment, then you should submit the assignment, but I think you should think more critically about it. It's the typical BS that you get. Tell me the answer, yes or no.
A
Yeah, yeah, yeah, give me the answer.
B
So it's a very funny ad. We actually recreated that ad, all with AI, by the way, thanks to all my genius friends, all
A
the talent, all the talent out there, all the talent.
B
And so the original goes into a ramble about why this person should or should not submit the assignment. But we come in and say: you should, because this is the best you could do, and here is the wisdom behind why, and here is what you should go do next. So our marketing strategy is: if this guy beats up that guy, or that guy beats up this guy, both of them are equally ignorant and violating trust and the laws. We have the ability to sit above all of these guys and say, this is a better way to do it. That's our strategy, anyway.
A
So what has been the cost to you personally to get to this point?
B
If you look at it personally from a cost perspective, which I don't, but I will still answer your question: I had the rosiest job that you could ever find in corporate America. I was the second-highest-paid executive at American Eagle Outfitters, after only the CEO. And I left everything to start this and do this, because I made a promise to my son that I am going to leave a world better than the one I inherited. Whatever I am is basically all of the beautiful things that my father and my mother did. And if I can capture all of that, their essence, and the essence of everyone who came into my life and every human who exhibits it, put it into the technology, and unleash it for my son and say, this is the world I'm going to give you, inherit this, and it's a better place, it's more human, and you're going to be blessed and helped by angels who are invisible to you; that's the best gift I can give my son, and it's generational wealth. So I decided to do that. I don't think about it from a cost perspective. If I thought about cost, I'm deeply in the hole: three years of all the money I lost by not doing what I used to do, clocking in at 9 o'clock and leaving at 5 o'clock.
A
Yeah, yeah, yeah.
B
You know, I would have been much wealthier, probably. My wife reminds me like 50 times: hey, you should have done that, you know. So she'll be glad when
A
she's got them cleaners.
B
So I don't think of that as the real driver for my decision. And there is obviously a second type of cost, which is much more intangible and personal to me. During this journey of trying to get the story out, I have seen my six-year-old son only two days in the last three months, because I've been out trying to make sure I am moving this in the right direction. He's six years old, he's the only thing that I have, he looks like my father, and I cannot spend time with him. So that cost is probably higher than the cost of not working a nine-to-five job. I lost my mother, I lost my uncle, I lost my father-in-law, and I'm probably going to lose my aunt very soon, given her health condition. So there are many losses that I've had. But I have still progressed on this journey, fully focused, making sure I give 150% or 1,000% of what I have, because I believe that if I don't do it, I'm going to create a reckless world for many moms, many fathers, many kids out there in the future. When I trade that against the time I lost with my son, I think it's okay. The third thing is that I've also wanted to be judicious about where and how I spend money. Part of the money comes from Orchestro, and we are trying to get it out of Orchestro, but I have also invested my own money in this because I believe in it. All of that is true. But what am I working towards? I'm working towards the potential that this will be very big. And even if it becomes a very big company, the way I would like to lead my life is that I don't care about the outcome of this entity personally. For me, all of those proceeds will go back into humanity one way or another, through whatever foundation I set up.
Because I lived on my father's salary, and my son will be just fine on my salary too. I think he's going to be just fine. So my purpose is to make this really successful and to get every shareholder what they truly deserve: one, a legacy, and two, great returns on what they invested.
A
When do you say the company would break even? Because, you know, investors are going to be particularly interested in that.
B
An idea like this will go viral very soon. Think about it: why did all these vibe-coding companies become so popular, like Lovable and Cursor? Because they solve a very specific problem in an enterprise: how do I build proficiency into the way I'm writing code? That's what they solve for. This is a trust problem; every company will need it, so this will just take off. And even if I temper all my excitement, which is what we did in the financial model, we said let's downplay it all, let's not imagine a rosy world where miracles of all kinds fall into place, cascading miracles as they call them. Even if you take the worst, worst, worst-case scenario, we would still break even in October of 2028, two years from now.
A
Wow. So that close?
B
We would even be positive by then. And this is with what I call very anemic numbers: a million dollars this year, $5 million next year, $30 million after that. What we've seen in Silicon Valley and the rest of the world, wherever such companies are built, is 100x growth month over month; there are a lot of these hundred-million-dollar-ARR companies getting formed in one year. This is a blockbuster idea.
A
How are you going to go about the tranches of the investment?
B
Milestone-based. I also come from a world where poverty teaches you something: don't have too much excess, or you'll get very, very sloppy. I'm conditioned that way. So we want to be very judicious about the money we spend, how we spend it, and where we spend it. We are going to base it on milestones. We're going to say: this $15 million gets us this far; $15 million more gets us this far; then the $20 million. And if we are able to accelerate the business and get there faster, we'll do that. But the idea is that we don't want to get ahead of ourselves, ahead of our skis, trying to build this business. We want to mitigate all the risks in the business, pretty much.
A
Let's finish on the dream. You've got your investment, you've gone through those milestones, you're EBITDA positive, everybody's happy, your investors are happy. What's the dream for you with Angelic?
B
The dream for me with Angelic is that I want everyone to say that a system like an Angelic system truly transformed their life. I do not want anyone to say that intelligence, when delivered to them, ruptured their life, their dignity, their identity, their ability to access, and their ability to have equality. It's about digital equality. I'm not talking about gender equality and all that; it's digital. Everyone should be able to access the world's resources the same way: if you go tap into it, you should be able to access it. So my dream is for everyone to say: because we have Angelic, we experienced the same set of miracles that Shekhar experienced in his life, when no one observed me, and it gave me a sense of purpose and a sense of direction and made a difference in my life. That's what I want to do.
A
Shekhar, thank you so much for agreeing to this interview and for sharing everything that you have. I massively hope you get the investment that you're looking for, so I can see this out there in the world.
B
Thank you so much.
A
Thank you.
Tomorrow, Today — Episode Summary
Podcast: Tomorrow, Today
Host: Shekhar Natarajan (guest/interviewee)
Guest Host/Interviewer: Abigail Horn
Episode Title: The Future of AI: Can Humans Really Trust Artificial Intelligence?
Date: May 7, 2026
This episode flips the script, with guest host Abigail Horn interviewing Shekhar Natarajan about the future of trustworthy AI. The central focus is on the urgent, civilization-scale question: Can humans trust artificial intelligence, and what does it take to ensure that AI amplifies rather than undermines our core human values? Shekhar discusses his vision for "angelic intelligence"—an approach to AI designed with native human virtue and trust built in, rather than optimized blindly for power or profit. Along the way, the conversation delves into technical, ethical, cultural, practical, and business ramifications, offering a human story behind the technology.
“We have moved away from optimizing a few things to optimizing everything without any, like, direction... when you begin to do that haphazardly, you begin to optimize the wrong things, which is human dignity.” (Shekhar, 02:27)
“Systems don't understand. And they think of these people as numbers. And when trying to optimize numbers, you optimize for the wrong thing.” (Shekhar, 04:19)
“Can we not, like, add that as a layer on top to the responses, but can we make it native to the computational process?” (Shekhar, 10:03)
“Virtue is absolute... but how you exercise compassion in a context is you, how you interpret it and how you apply it. So you need... controls in terms of how you want to guide the AI.” (Shekhar, 11:11)
“GROK actually was able to undress young teen kids... Adam Green... committed suicide ... there was a lady in Europe... it was a deep fake. And someone actually siphoned like $800,000 out of a bank.” (Shekhar, 07:38)
“In the world of AI, the real and the fake looks so real that you cannot distinguish... if you blindly trust it and give it the agency, then things can go wrong, obnoxiously wrong.” (Shekhar, 10:03; 57:27)
“We become a neutral layer. Neutrality is what we provide... we are agnostic of who the players underneath are.” (Shekhar, 23:35)
Three Key Business Benefits
Token Efficiency: up to a 20–25% reduction in tokens called.
“What we’ve seen is any company which is using our service would actually see up to 20 to 25% reduction in the number of tokens that are being called.” (Shekhar, 20:03)
Deeply Defensible IP: 83 patents filed covering the methods, algorithms, and architectures for building and deploying virtue-aligned AI.
“We filed for 83 patents... each of these patents are like omnibus patents... highly defensible.” (Shekhar, 49:43)
Compassionate Decision-Making in AI
Tuning for Human Input: Angelic enables organizations (and soon individuals) to select and prioritize which virtues are operational in their AI, including toggles and settings. (47:06)
Traction: three implementations live, about seven MOUs signed, and a beta release rolling out to an initial 100 customers.
Consumer & Enterprise Models: applications span enterprise AI usage and consumer/humanoid robotics (e.g., caregiving).
Market Opportunity:
“If you assume... three humanoids in every household... that's a $35 trillion opportunity.” (Shekhar, 68:13)
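As an editorial aside, the $35 trillion figure can be sanity-checked with a few lines of arithmetic. The household count below is an assumption for illustration (roughly 2.3 billion households worldwide); it is not a number from the episode:

```python
# Sanity check of the "$35 trillion" humanoid market estimate.
# ASSUMPTION: ~2.3 billion households worldwide (not stated in the episode).
MARKET_USD = 35e12
HUMANOIDS_PER_HOUSEHOLD = 3
HOUSEHOLDS = 2.3e9

# Implied price per humanoid if every household owns three:
implied_unit_price = MARKET_USD / (HUMANOIDS_PER_HOUSEHOLD * HOUSEHOLDS)
print(f"${implied_unit_price:,.0f} per humanoid")  # roughly $5,072

# Conversely, at the $10,000-per-unit price mentioned in the episode,
# the figure implies about 1.17 billion participating households:
implied_households = MARKET_USD / (HUMANOIDS_PER_HOUSEHOLD * 10_000)
print(f"{implied_households / 1e9:.2f} billion households")
```

Either reading is plausible; the point is only that the headline number is internally consistent with a sub-$10,000 unit price at global scale.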
“I've seen my 6 year old son only 2 days in the last 3 months... that's probably higher than the cost of not working 9–5.” (Shekhar, 87:22)
“I want everyone to say that a system like an angelic system in their life, truly transformed their life.” (Shekhar, 93:18)
On Profit vs. Humanity
"Sometimes a blind pursuit of profit strips dignity away." (Shekhar, 04:19)
On Encoding Virtues
"Virtue is absolute... but how you exercise compassion in a context is you, how you interpret it and how you apply it." (Shekhar, 11:11)
On AI Harms
"AI is more dangerous than nukes are... Nuclear could be contained. ... AI gone wrong will destroy the entire civilization. It's a silent killer." (Shekhar, 56:12)
On the Neutral Layer Approach
"We are not another large language model. ... We are preparing for that evolution by, by being where we are and providing that value, we will accelerate the company's ability to become its own organizational model." (Shekhar, 27:24)
On the Dream
"I do not want anyone to say that intelligence, when delivered to them, ruptured their life, dignity, their identity, their ability to access and their ability to have equality." (Shekhar, 93:18)
For further details, requests for investment, or to see the pitch deck, listen from 69:52 onward. For personal stories, background, and Shekhar’s "why," see 31:55–37:55 and 87:22–90:27.