
A
Welcome to Coruzant Technologies, home of the Digital Executive Podcast. Welcome to the Digital Executive. Today's guest is Ed Watal. Ed Watal is the founder and principal of Intellibus, an Inc. 5000 Top 100 software firm headquartered in Reston, Virginia. He serves as a trusted board advisor to some of the world's largest financial institutions, where C-level executives rely on his expertise in IT strategy, enterprise architecture, and digital transformation. One of his flagship initiatives is Big Parser, an ethical AI platform and global data commons dedicated to transparency and responsible AI development. A seasoned entrepreneur, Ed has successfully built and sold multiple tech and AI startups. Before founding Intellibus, he held leadership roles at major global financial institutions including RBS, Deutsche Bank, and Citigroup. Well, good afternoon, Ed. Welcome to the show.
B
Good afternoon, Brian. It's great to be here.
A
Absolutely, my friend. I appreciate you making the time and I understand you're in Jamaica right now, which is awesome. I know that's not normally where you're at, but I'm so jealous. Right now I'm in Kansas City, sweltering in this humidity. So thank you again. And if I could, I'm going to jump right into your first question. You've advised C level executives at some of the world's largest financial institutions. What are the top digital transformation priorities you're seeing in finance today?
B
Within the finance industry, fraud has been one of the biggest challenges, and AI has of course created the possibility of enormous fraud because you can create deepfakes. We recently had Sam Altman talking about how scary it is that you call a bank and you ask for a significant size wire transfer and all they ask you is to speak a code on the phone, and that could be easily deep faked. So the entire financial industry is grappling with this risk, and that is definitely one of the biggest challenges today.
A
Thank you. Absolutely. You know, for the last almost two years here on the podcast, it didn't matter who I had on, we talked about AI, deepfakes, fraud, et cetera. But you're certainly up for the challenge, working in finance with AI. It levels the playing field for the good guys when it comes to creating amazing solutions for the world. But on the flip side, when you've got bad actors using deepfakes to steal money, that's a whole other level of challenge that we need to address. So I appreciate your insights, Ed. Big Parser is described as an ethical AI platform and global data commons. What inspired you to create it, and how does it differ from traditional AI platforms?
B
Interestingly enough, almost 20 years ago I had this epiphany, or for lack of a better term, a dream, that there had to be a better way for humans to contribute data in an ethical, responsible manner to create something like ChatGPT. Obviously, this was 20 years before ChatGPT existed, and my hypothesis was that eventually someone would end up creating something like ChatGPT. I was a big fan of the movie Iron Man, which I think a lot of Marvel fans would appreciate, and as I watched something like JARVIS come to life in the movie, I would always imagine what it would take for society to create something like that. ChatGPT is not JARVIS, but it comes pretty close to a lot of the things you would expect a JARVIS-like AI to do. My hypothesis was that for something like that to happen, all the data on the Internet would have to be essentially fed into an AI engine. And what was blocking me from doing that was obviously an ethical concern. A lot of people would argue that OpenAI, when they put all the Internet data into ChatGPT, crossed that ethical boundary.

Big Parser was an alternative approach to solving the same ChatGPT problem, which was largely to say we would follow the Wikipedia approach. We would collect and organize all human data on the Internet, much like Wikipedia has done, except we would store it in a data store, like a database, and then we'd feed that database with all the good clean information on the Internet, what we'd call the data commons, into an AI engine, much like a transformer model or an LLM. Back then there were other models, not transformers, but that was the original idea. The hard part was collecting the data. So we really focused on building a community of individuals, which included everyone from high school kids to the head of AI for the Pentagon. People would come in and sit in a workshop for several hours, and we had several US Marines come and do that, and they would feed data into grids, like how people feed data into Wikipedia. Except it took us almost a decade to collect what I would call a fairly insignificant amount of data compared to what exists on the Internet. And at some point the ship had sailed: someone had taken data from the Internet and ChatGPT existed. So we moved past the idea of collecting that data ourselves. But we've come up with an alternative approach to solving the problem, which I think is still solvable. We don't have to take it as a foregone conclusion that human data has to just be taken off the Internet without permission.
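[Editor's note: to make the data-commons pipeline described above more concrete, here is a minimal, hypothetical Python sketch. It is not Big Parser's actual implementation; the record fields and filter rule are illustrative assumptions showing how community-contributed data with explicit consent and provenance might be screened before joining a training corpus.]

```python
from dataclasses import dataclass

@dataclass
class ContributedRecord:
    """A single community-contributed entry in a hypothetical data commons."""
    contributor_id: str    # who supplied the data
    source_url: str        # where the content originated
    consent_granted: bool  # explicit permission for AI training use
    license: str           # e.g. "CC-BY-SA-4.0"
    text: str              # the contributed content itself

def build_training_corpus(records):
    """Keep only records with explicit consent and a traceable origin."""
    corpus = []
    for rec in records:
        if rec.consent_granted and rec.source_url and rec.license:
            corpus.append(rec.text)
    return corpus

# Example: only the consented, attributed record makes it into the corpus.
records = [
    ContributedRecord("u1", "https://example.org/article", True, "CC-BY-SA-4.0",
                      "Curated explanation of photosynthesis."),
    ContributedRecord("u2", "", False, "", "Text scraped without permission."),
]
print(build_training_corpus(records))  # -> ['Curated explanation of photosynthesis.']
```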
A
Thank you. I appreciate the backstory on that. I think we're all kids at heart with big imaginations. And you mentioned Iron Man, which is obviously one of my favorite movies and storylines as well, going back to the comic book days. You had that dream to create something that was really powerful, yet ethical. And right now, in my opinion, ethics and guardrails are not keeping up with the acceleration of AI. As you know, all these companies are leapfrogging each other, and, as they say, we're right at the cusp of having artificial general intelligence, which is amazing. But at the same time, I'm a little bit scared about what could happen if we don't have these guardrails in place. But I love the fact that you're on the right track with your ethical AI and your global data commons idea, and you're working to bring that into play as AI continues to evolve. So I appreciate that. Ed, having founded and exited multiple tech and AI startups, what do you think are the critical ingredients for building scalable, responsible, and ethical AI companies?
B
One of the foundational guardrails in building an ethical AI company is to think about where and how you're sourcing the data that you're feeding into an AI engine. Let's say you're in the business of training a model. We can really put companies in the world into two big buckets: the companies that are creating models and the ones that are using models. For the companies that are creating models to be ethical and responsible, they have to think about where they're sourcing the data from. If they're sourcing their data from an open ethical source like a data commons, which is human-curated data, for example, Wikipedia or Big Parser, then it is definitely ethical and responsible, versus you're taking data from any other website. I don't want to name organizations and make them feel that they're doing something unethical, but there are several other organizations that have these stores where you can go get AI training data. How do you know where that data came from? What was the source? How did it originate? Where did it originate from? That's a question you must ask, because that's foundational.

Then on the other side of the coin, let's say the model has been created, using ethically or unethically sourced data, and you're using the model. What are you using the model for? If you're using the model for summarizing information that you've created, it's probably an ethical use. If you're using the model for making the world better, in some sense it's ethical. The moment you start using the model to do things like creating deepfakes or trying to game or manipulate society, that's when it becomes challenging. And there are several companies trying to do that.

Now, there's a really important ethical question there, which is around jobs. Almost everyone's afraid. Of course, some people are afraid AGI will come and robots will take over the world, but that's a doomsday scenario, and I'm not a big proponent of that line of thinking. The line of thinking that I do care about deeply is how AI will be used in the context of jobs. Jobs are what keep the economy running, and with AI there's a significant fear that a lot of jobs will be lost, lost lock, stock, and barrel. For example, a call center in Manila laid off 800 workers. Not that it's a huge layoff; there are layoffs happening in the US at times with tens of thousands of people. But that 800-person layoff was purely because someone replaced the work they were doing with an AI agent, and those things will happen more and more in society. So you could make a lot of money as a company, as an AI company, trying to get rid of jobs, because that's obviously possible. But you could also make a lot of money by investing in creating jobs. And there are companies that are investing in things like that. For example, can we take AI models and make anybody a software engineer, even if they don't know how to write code, empowering and democratizing technology? That's on the other side of the fence, where people are using AI to create jobs. So I think if you're on the model-using side, you want to consider that: are you taking jobs away or are you creating jobs?
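[Editor's note: as a rough illustration of the "two buckets" framing above, here is a hypothetical Python sketch. The function names, source fields, and disallowed-use list are my own assumptions, not any company's policy or API; the point is simply that model creators check data provenance while model users check intended use.]

```python
# Bucket 1: model creators -- is every training source traceable and consented?
def check_model_creator(data_sources):
    """Return True only if all sources have a known origin and explicit consent."""
    return all(src.get("consent") and src.get("origin") for src in data_sources)

# Bucket 2: model users -- is the intended use outside clearly harmful categories?
DISALLOWED_USES = {"deepfake generation", "social manipulation"}

def check_model_user(intended_use):
    """Return True if the stated use is not on the disallowed list."""
    return intended_use.lower() not in DISALLOWED_USES

# Example checks for each bucket.
sources = [{"origin": "data commons", "consent": True}]
print(check_model_creator(sources))                 # True
print(check_model_user("document summarization"))   # True
print(check_model_user("deepfake generation"))      # False
```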
A
Thank you, I really appreciate that. You know, I have some good takeaways there. Ethics is really near and dear to me, with AI and just in general. You broke it apart early in your answer: where and how you're sourcing your AI data. You've got the companies that are creating models and the companies that are using them, and at the end of the day it's really foundational to ask what you're using them for. And I do want to switch to the economy a little bit. An MIT professor whose course I took said, yes, a lot of jobs are going to be eliminated, but so many new jobs that have never existed before will be created in this AI era that we're in. So it's certainly an interesting topic, and we could go for hours on it. Ed, one last question for the day. Looking ahead, how do you envision the role of ethical AI in digital governance evolving in the face of rapid advancements in generative AI and autonomous systems?
B
That's actually a very interesting question. Digital governance is a topic that's very dear to my heart. I've invested a lot of time thinking about it, building solutions in that space, and investing in efforts around it. One of those efforts, which you might be familiar with, is the World Digital Governance effort that I'm pretty closely involved with; that's wdg.org. As part of that effort, the key questions being laid out, or rather proposed, are: how do we accelerate AI, and what are the guardrails that we need? Because if we think of digital governance as hurdles, as means to slow AI down, then it is not net productive for society. There are so many cures for diseases you could find, and find them quickly, so many vaccines you could create, so many good things you could do with AI. And therefore there is a need to accelerate AI. But acceleration of AI without guardrails could be complete chaos and mayhem. So what are those guardrails is the key question. And those guardrails are based on some foundational principles, so what are those principles is another key question. Often governance is about policy or regulation, but we're asking people to peel that onion back and say: whatever your regulation or policy is, what guardrails is it really enforcing, and what principles are those guardrails based on? Those are some key questions that we're asking people to contemplate.
A
Thank you. I think that's very profound. It really comes back to the question you posed: how do we accelerate AI, and what guardrails do we need? I totally agree. I'm looking for the positive in AI; there's so much potential to do good, to do so many things, and to cure diseases. But as you said, acceleration without guardrails will be chaos, so we need to step into this logically and think it through. I think we can really have the best of both worlds here, with the ethics that keep AI from getting out of control. As you know, it's just a matter of time before superintelligence is here, and that will be a game changer. Hopefully we all have our ethics in place by then. Ed, thank you so much. It was such a pleasure today, and I look forward to speaking with you real soon.
B
Likewise, Brian. Thank you for having me.
A
Bye for now.
Guest: Ed Watal, Founder & Principal at Intellibus
Host: Brian (Coruzant Technologies)
Release Date: August 1, 2025
Topic: Building Ethical AI and the Future of Digital Governance
This episode features Ed Watal, a seasoned technology entrepreneur and founder of Intellibus, as he discusses the urgent challenges and responsibilities facing the financial industry in the AI age. The conversation spans the ongoing threat of AI-facilitated fraud, Watal’s journey in building ethical AI platforms like Big Parser, and the evolving landscape of digital governance. Both host and guest underline the crucial need for ethical frameworks, data stewardship, and adaptive digital policies as generative and autonomous AI systems advance rapidly.
[01:28]
Fraud and Deepfakes: Watal emphasizes that fraud, exacerbated by AI capabilities such as deepfakes, is the biggest challenge the financial sector currently faces.
Example: AI-generated voice cloning can trick financial institutions, as referenced by Sam Altman’s warning.
"You call a bank and you ask for a significant size wire transfer, and all they ask you is to speak a code on the phone, and that could be easily deep faked."
(Ed Watal, 01:32)
The industry urgently grapples with redefining digital identity and verification.
[02:36]
Inspiration: Watal envisioned Big Parser nearly 20 years ago, inspired by fictional AI like Iron Man’s JARVIS—a system powered ethically by collectively contributed human data.
Ethical Data Commons: Instead of aggregating data from the open internet without consent (as he critiques OpenAI for doing), Big Parser aimed to use a wiki-like, community-sourced approach.
Challenges: Significant hurdles existed in curating enough data to compete with the sheer scale of data collected indiscriminately for other models.
“Big Parser was an alternative approach... We would collect and organize all human data on the Internet, much like Wikipedia has done...and then we'd feed that...into an AI engine.”
(Ed Watal, 03:26)
Shift in Strategy: After realizing the scale advantage of models like ChatGPT, Watal’s focus shifted toward alternative ethical methods in data curation—stressing that large-scale data scraping without permission should not be accepted.
[05:58]
Data Sourcing: The ethicality of AI begins with the data sources—knowing both the origin and consent status.
“If they're sourcing their data from an open ethical source like a data commons...then it is definitely ethical and responsible versus you're taking data from any other website.”
(Ed Watal, 06:23)
Model Creation vs. Model Usage: Ethical responsibility differs by bucket; model creators must vet where their training data comes from, while model users must weigh what the model is being used for.
AI’s Impact on Jobs:
“You could make a lot of money as a company, as an AI company, trying to get rid of jobs... But you could also make a lot of money by investing in creating jobs.”
(Ed Watal, 08:18)
[09:46]
Digital Governance Principles:
“If we think of digital governance as hurdles, as means to slow AI down, then it is not net productive for society... Acceleration of AI without guardrails could be complete chaos and mayhem.”
(Ed Watal, 10:14)
Guardrails vs. Acceleration: The conversation centers on finding a balance—ensuring AI development is responsible, without unduly hampering its incredible potential for societal benefit, such as in healthcare and research.
On Ethical Data Collection:
“We'd feed that database with all the good clean information on the Internet, what we'd call the data commons, into an AI engine, much like a transformer model.”
(Ed Watal, 03:41)
On the Industry’s Dilemma:
“We don't have to take it as a foregone conclusion that human data has to just be taken off the Internet without permission.”
(Ed Watal, 04:51)
On the Role of Governance:
“What are those guardrails is the key question. And those guardrails are based on some foundational principles.”
(Ed Watal, 10:40)
The episode maintains a conversational, forward-looking tone, blending Watal’s technical insight with real-world urgency. Both host and guest are candid about threats but optimistic about AI’s ethical and creative potential, emphasizing the need for clear principles and a collaborative approach to digital governance. Watal’s seasoned perspective frames AI ethics not as an afterthought but as a foundational step for future-ready digital societies.