
Jack Kokko
A single deep research report that our AI produces now gives them the same 10 pages that they spent three weeks producing with a team of people. So it just gives you this incredible efficiency and breadth in what you can do, allows you to do much more diligence and ultimately be more confident in your decision making, and still be a lot faster. So it's going to address the quality and the speed and the confidence all at once.
Ted Seides
I'm Ted Seides and this is Capital Allocators. My guest on today's show is Jack Kokko, co-founder and CEO of AlphaSense, the market intelligence platform often described as Google for finance. The company's 6,000 customers canvass 90% of the top asset management firms, all the world's leading investment banks, and over half of the Fortune 500 companies. Our conversation covers Jack's early frustration as an investment banking analyst that sparked the idea for AlphaSense, the evolution of the business from a simple semantic search tool to an AI-powered research platform, the promise and perils of LLMs in high-stakes decision making, and Jack's vision of an always-on intelligence machine that will transform how business gets done. Jack offers a fascinating glimpse at the intersection of technology, data, and investment decision making.
Before we get to Ted's interview: it's football season, which in my house also means it's indoctrination season. Because let's face it, young minds are malleable. And when you've got kids, you've got a once-in-a-lifetime chance to wire them the right way with your favorite football teams. Just ask my 4 year old.
Child
Go Dawgs! Sic 'em! Woof, woof, woof!
Ted Seides
Now that's an easy one. The Georgia Bulldogs are a college football powerhouse. Three national championships in recent years, tons of glory. Who wouldn't want to be a Dawgs fan? But on Sundays?
Child
Here we go, Brownies! Here we go!
Ted Seides
That one's just mean. The Cleveland Browns are famous not for winning, but for testing your character year after year, heartbreak after heartbreak. And yes, I made her a Browns fan. Anyway, some might call that cruel. I call it parenting. That's the thing about young minds: they believe what you repeat. So just like forcing your kids to cheer for your favorite football teams, now's the time to plant another seed. Share the Capital Allocators podcast with friends, family, and colleagues in their formative years. Because if you get to them early enough, they'll be lifelong fans too. Thanks so much for spreading the word.
Please enjoy my conversation with Jack Kokko.
Jack, thanks so much for joining me.
Jack Kokko
Thanks for having me.
Ted Seides
I'd love for you to take me back to the background that led to this entrepreneurial journey.
Jack Kokko
If I go all the way back to where I grew up, that was in Finland. I studied electrical engineering, thought I was going to be building mobile phones, and during that time I really got enamored by finance, so I studied both in parallel. Then one summer late in those studies I spent a few months working at a startup in Brussels called E Stack, where we were building what was going to be the European version of Nasdaq. It never really took off, but I was talking to US investment banks trying to raise money for that business, and I also made some connections to firms in London, where I got an interview at Morgan Stanley and ended up working as an analyst, my first job out of college, in London. I first joined during the telecom boom, then convinced them that where I wanted to be, my dreamland, was Silicon Valley, and I ended up in San Francisco working on tech deals during the dot-com boom. I spent the first weekend working on a billion-dollar deal, and things were moving so much faster than in Europe. It was very exciting at the time, but certainly also a stressful job for someone who wanted to do a good job, and the tools just weren't up to it. And that ultimately planted the seeds in my mind for what I'd one day want to go and build with AlphaSense.
Ted Seides
What were your frustrations at the time doing the work you were trying to do without the tools needed?
Jack Kokko
I had a pretty strong work ethic. I'd work the usual all-nighters trying to do a really good job on the analysis, but the pace was really fast and you'd end up not having enough time to do a really good job. I remember some client boardrooms where I'm sweating and barely awake, but also afraid of what I missed and what I'm going to be called on by the CFO or CEO of that company because I just missed something in my analysis. And that feeling has stuck with me. Frankly, still every day when I walk into a boardroom I have flashes from those situations. That really was because of the lack of technology to help an analyst who needed to consume so much information: trying to catch up on a new industry you didn't know, new companies you didn't know, and the cross-sectional information across different industries that you really should have known to be smarter with your analysis and your viewpoints, and to be able to talk to some really experienced business professionals about to bet billions on a deal. Certainly we had the data terminals that every finance professional still has today. That was a big part of the frustration: you could go and manually look for data, but it was very hard to consume it at the scale and speed that was needed. And back in those days you already started to have technology for consumers, where I had Google to search the Internet and so forth, and we could get our hands on information very efficiently. But this wasn't available for professionals who desperately needed that every day and every minute on the job. That was a big part of the frustration, and it has stuck with me. I know a lot of people still today are doing their jobs with fairly manual tools.
Ted Seides
What was the vision for what you first wanted to build in the early days?
Jack Kokko
We even called it, and I forget if it was us or our clients first calling it, Google for finance or Google for analysts. The vision was really to build a semantic search engine that would understand what an analyst is really looking for when they are reading a financial filing or an earnings call or a research report from Wall Street. We built that system so that it understood millions of terms and linked them to core concepts. It understood that revenue is the same as top line, across the whole vocabulary of finance. So we built a system that was able to do all that and look at an earnings call happening in Japan or an SEC filing in the US, all the information around the world, all companies using different vocabulary, and reliably find every single data point about every single topic or theme that an analyst was researching. That was somewhat revolutionary at the time; it just wasn't available. People were still Ctrl-F searching for individual terms one at a time in PDF reports. Shocking, but that was how work was done. And we had magnified the speed and efficiency and reliability of finding that information. That was the vision we started to execute on. And the product we built got pretty quick traction in the hedge fund world, where there was this strong thirst for information and efficiency and speed to insight that I had experienced as an analyst, and that we were now able to bring to that market.
Ted Seides
Without going too deep into the technology, what you're describing now, someone would say, oh yeah, I'll just go into ChatGPT and figure that out. What was it like bringing these pieces together into a technology platform before a couple of years ago?
Jack Kokko
Today language models can be trained to inherently understand a lot of that. Not all of it, but a lot of it. There's still a lot of value in real specialization and refinement, in that industry-level deep-dive understanding. But back then the tools were more basic. We did have an AI-first vision from day one; the AI of that day was just a lot simpler. We used AI and trained models to classify information. They would be able to do things like understand the sentiment of a given statement in a paragraph of text in a document and say, is this good or bad. That was something we were really proud of. It took years to build. We had a team of dozens of people in India tagging very large volumes of those statements to be able to train those models to do that one classification. But it does it so well that even today's LLMs still struggle to do that kind of thing at that deep industry level of understanding. And there were many other classification tasks: taking in feeds of documents and understanding, is this a broker research report, is this an initiation, is it a change in price target; recognizing companies in text; being able to do all these things that would help users slice and dice information efficiently and ask questions that would cut through millions of documents and get to just the right insights, the right documents, and the right paragraphs and commentary that you'd be able to very efficiently go through. So it was much more work to do these individual things that now, for a lot of it, come with large language models, which are trained as really universally capable models, where back then they were very targeted, narrow models.
Ted Seides
When you dive into how you go about implementing this, at that first layer of data sources, what's the core set of information that you've wanted to train on? And then where are the alternative data sources you've accumulated over the years?
Jack Kokko
We started from the information that was hiding in plain sight, hiding because there was so much of it that it was hard to get to the insights, even though they were available to every professional in the market. So all the SEC filings, global filings from every country with a stock exchange, earnings call transcripts, presentations at investment bank conferences, and then of course press releases, news, and then broker research, getting Wall Street research on a platform where you could now compare what the company is saying and what an analyst is saying about any topic, any company. That was still information people could get access to on other platforms. One big step for us was acquiring a company called Stream, where they had built an expert transcript library that allowed us to start scaling and generating high-value proprietary content that you couldn't get anywhere else. And we could really point that system to generate information on specific companies: what are their customers saying, what are their suppliers, partners, former employees, and executives saying about things that really matter? Before this you had to rely on what the company is saying, what they are putting out there in press releases or saying in public forums and filings, but you really had to go talk to management to question that or get alternative points of view. And the expert interviews started to add this invaluable additional perspective. We started to double down on that. That of course led to our much bigger acquisition of Tegus, where they were the market leader: they had built by far the largest expert transcript library. And it was a really incredible operation, working with a thousand buy-side firms out there who were doing calls on Tegus, across the public equity buy side as well as private equity and venture capital. And you had public company content and private company content. And that started to expand the diversity of what we were able to offer to the market.
There wasn't much qualitative research on private companies out there, but these expert interviews started to really pull that in. And today we feel like we've got the richest source of insights on private companies. We do keep on adding content all the time, but that is where really the core of the focus is, because we see so much unique proprietary incremental value that we can add.
Ted Seides
So what you described are some building blocks of quantitative publicly available information. Then you have company reporting information and then this huge library of what's called expert opinions. If you're in that use case, the hedge fund analyst, how would you describe to someone who hasn't used this system what it is they're seeing so that they can pull out whatever information they want to pull out?
Jack Kokko
Today the easiest comparison point is something like ChatGPT, where you're asking a language model a human-language question. The system is now able to precisely understand what you're looking for from your prompt. And now it goes across all the half a billion documents in our system, is able to find the most relevant ones, and then digs deep into them and asks those same questions of every single document: is the answer here, is it here? It does that hundreds of times, thousands of times, for the most relevant documents for the user to look at, and provides a narrative-format answer that is granularly cited to those documents. A crucial difference from what people are used to with the chatbots we all use as consumers is that we focus on taking users to those underlying documents. Our users are serious professionals who care about reading and getting deep into the context that is stated in an SEC filing or research report or expert interview. And we have made our user interface such that it's very easy, on the same screen, to see all those citations in the narrative-format answer, but also dig deep into the underlying document, really understand the context, and go deeper and lodge additional queries from there. So you can get very strong confidence in what you're reading, because you know where it's coming from. It's coming from high-quality sources. You see exactly what company it came from, what analyst wrote it, or what expert provided an opinion, and then judge for yourself. Not just trust it, but get the whole 360-degree view of looking at all those different sources, and multiple instances of each one on that same topic, to gather more of the mosaic of information. And we've made that really efficient, so you can very quickly step through all the different breadcrumbs that lead you to that mosaic to then be able to draw conclusions. And of course the machine is now able to give us those conclusions in that LLM-provided narrative answer.
You don't have to draw those connections yourself. You can question what the machine is giving you and ask different questions from different angles, but at least you get a narrative answer that is very intelligent through chain-of-thought reasoning, where the machine has already done so much work, probably a lot more than you would have been able to do as an analyst. Very few of us have time to spend weeks researching a project, and we often hear from clients that a single deep research report that our AI produces now gives them the same 10 pages that they spent three weeks producing with a team of people. So it just gives you this incredible efficiency and breadth in what you can do, allows you to do much more diligence and ultimately be more confident in your decision making, and still be a lot faster. So it's going to address the quality and the speed and the confidence all at once.
Ted Seides
As you talk to your hedge fund clients and give that example of a 10-page report produced faster and deeper than what would have taken a lot longer, how do they think about the alpha component? Meaning, in the past you would do all that hard work and other people weren't going to do it. Now all they have to do is become a client of yours and they can get that work done. What have you heard from your clients about where they can drive an edge on the market? And where does the information that you've created become some type of table stakes?
Jack Kokko
We are raising the bar for sure, as does any technology that is introduced into the investment process. Now everybody's able to do things much more quickly and efficiently and move in a more agile way, because the research can be applied to so much more that in the past you just had to ignore. It becomes a question of who's asking the right questions, and how are you asking them, and from what angles, and how do you look at cross-industry impacts and read-throughs from this company to that company or this industry to that industry. It's also about early and late technology adopters: how are you able to adapt to these new solutions, and how well can you deploy them? And people spend a lot of time trying to be great prompters. How do I write a prompt that the system really understands well? We see customers asking how we can really help them, how we raise everybody to the same level so they're very good at asking the machine. And this is even hard for us humans. You ask your colleague a question and you have to think, did I give a good enough prompt that I can trust the answer? The same applies to these machine models. But this machine automation really just does the work that nobody wanted to do. The work becomes more interesting, and it's ultimately easier to do the value-adding work when the machine does the heavy lifting and collects the information for you.
Ted Seides
What are some of the initial responses you give to someone who's asking you how to create a better prompt in AlphaSense?
Jack Kokko
I'd go back to how you'd ask an analyst that's working for you. How do you make sure that you convey all the information that the analyst needs to know, so that you can be sure your request has been understood? It's the same thing with the machine. If you keep it too vague, it might misunderstand the question, or it may make assumptions that you don't like. So if you want to be very clear about what you're looking for, it pays to be detailed in what you're asking about. But you can also be iterative. We've built our system so that you have different modes. You can ask questions in fast-mode generative search, where the average answer comes back in six seconds. There's a mode where you let the machine think longer; it takes maybe one or two minutes of chain-of-thought reasoning, runs dozens of searches, synthesizes an answer, and brings it back. And then there is deep research, where you let it work for 10 or 15 minutes and it comes back with a 10-page report, and it's going to have gone much deeper. When you're doing that longer-cycle work, you're going to be more careful with your prompts. You don't want to wait and then realize that you weren't precise enough. But when you're iterating quickly on a six-second cycle, it's cheap to ask lots of questions. So if you don't think the first question got there, you can ask again. We feel it's our job to make sure we understand the user and what they're looking for. We're constantly refining this and trying to make sure that the system has the intuition to understand what you didn't say. This is really the value of specialization, where we're trying to really understand our financial professional users, our knowledge professional users, and their different roles and what they're looking for, and allowing the system to take that into account to understand their prompt the way a colleague would in that same seat, same industry, same company.
Ted Seides
The rise of LLMs in the last couple of years, starting with ChatGPT: I'm curious how that has changed the trajectory of what you'd been doing for a long time before they were around.
Jack Kokko
It's been an incredible breakthrough from a vision perspective. I remember telling our team five, six years ago about how our system is going to be this oracle that you can ask any question in human language; it'll understand your question, go do the research, and come back with a great answer. Frankly, I had no idea how and when we were going to get there. When large language models actually started to be able to do that, that was incredible. It just felt like a Cambrian explosion of opportunity of what we could build with it, now that the system can understand what's on our users' minds so much better than when you just had to express it in keywords. It's really hard to deduce what someone is looking for when they put in a couple of keywords. But when they put in a full sentence, a full prompt, you have a really strong chance of getting to exactly what they were looking for. So it was an incredible opportunity, and of course it has allowed us to do a lot since then, and it'd be hard to go and cover all of it. Almost everything we've done in the past, we've redone with language models. Although, as I mentioned with the sentiment piece, the sentiment model we built with the prior generation of technology is still running, and it's doing a job that's better than what we see LLMs doing. But mostly LLMs can now just raise the game on everything, every part of what we're doing, and open up new opportunities. One thing that we built and released just weeks ago was a new AI interviewer for our Tegus expert transcript library. That library is created by buy-side firms doing interviews with experts, and an analyst at a hedge fund or VC firm or private equity firm has to get on a phone and talk to the expert for half an hour, 45 minutes. There's a lot of friction in that process. But what we now have is an AI interviewer that does a pretty good job of that.
We're now able to create this new system that is much more scalable, where we can point it in any direction, and the AI interviewer will hop on the phone anytime the expert is available. It's also able to help us generate new content sets where buy-side analysts maybe wouldn't be available to do repeated work. For example, we're doing channel checks covering dozens of industries, talking to the same expert every month to understand what is going on with pricing and demand, supply dynamics, market share shifts, and so forth in a granular industry, getting a pulse on that industry every single month from the same person, doing that across dozens of industries, and getting a pulse on the whole economy that way. It'd be hard to get buy-side analysts to commit to doing that every single month as a repeated process, but AI will do whatever we ask it to do. This is a very exciting new product that we were able to launch on the back of AI being able to do something that just wasn't at all possible even months earlier, and now suddenly is possible. There were signs of this being feasible in the past, but now it's able to have a human-language conversation with a true expert with high technical and market expertise, someone who wouldn't talk to an AI unless it was really able to have a conversation, respond well, and ask smart questions. I was quite shocked at what I was seeing in the first interviews, where it's talking to an expert in the high-bandwidth memory market and memory chips, able to talk about the various technologies and pricing dynamics and have a very fluid conversation. That is an incredible advancement that we think is game changing. As soon as we announced it, we started hearing from clients asking, hey, can we offload the calls that we like to do ourselves to this AI? There are many people that perhaps don't want to be recorded, and who would love to hand off those calls and say, hey, here's what I'm trying to find out.
Can you recruit the right experts and have your AI do the work and just send me compliant transcripts back? That's another big improvement to the industry's workflows, where nobody wants to be spending hours and hours preparing for expert calls. It's a dreaded job, and suddenly AI can do a lot of it. I think the industry will be very happy about that. And we can still go and get the same insights, and people can now focus on what they do with those insights. Now, of course, the vast majority of this is still happening by people getting on the phone. They do want to do it: the ones that really care about what insights they need to get will do that themselves. But there's a portion of those calls that get offloaded to AI now, and that's pretty exciting.
Ted Seides
As you worked with the LLMs and as an example of building the AI interviewer, what have you seen as some of the challenges that you had to overcome to make these work the way you wanted them to?
Jack Kokko
There are lots. If we're thinking about technical challenges, it starts from what is the leading-edge LLM for the particular task we're trying to solve, and can we actually get it to do that work, and what is the right combination. Can it absorb enough context? In this case, the AI interviewer needs to read a whole deep research report, maybe dozens of pages in some cases, to really get expertise on a topic. Can it hold that in its memory and also take in the speech of the expert it's interviewing? And when it hears something unexpected, can it do quick research, adjust what's in its context, pivot, and ask a new, better question? That took a lot of iterating. You've got to test different models, you've got to test different configurations. We built our system as kind of an LLM-agnostic orchestrator that works with just about every one of the leading-edge models in the market, and we're deploying them in different ways. Depending on the task at hand, you'll end up using this model now and maybe another model in a few weeks when a new breakthrough happens. So the team has had to build these capabilities of tracking and testing and very effectively staying up to speed on every new breakthrough, to understand what the right combination of these things is. There isn't one perfect model out there; there are models that have been optimized for given tasks better than others.
Ted Seides
What's the process for figuring out which model is the right one for the right task?
Jack Kokko
It's a very systematic, engineering-led process where you're really just testing all of them all the time. We have teams of people evaluating outputs, and we have LLMs evaluating outputs. So there's a system that keeps running standard checks, and when you have a new model, you can compare it to the baseline and see, okay, what does this new black box deliver? You can't evaluate it until you've tested it. You can look at the external test results, and some of those metrics are real and valid and some aren't; it's hard to really draw conclusions from what you read. You have to just deploy the models and test them and see what the effective performance is. We try to test numerical metrics and see how often, or what percentage of the time, human analysts agree with the output. But ultimately there's also a style test. Do I like the way this LLM speaks? Is it too verbose? Does it speak specifically in a finance-type language? We've even ended up giving financial services and the buy side one model and the corporate world a different model, because we've learned that there are different stylistic preferences as to how you want the LLM to be speaking to you. So there are lots of aspects you have to test, and some of them are qualitative in that way.
Ted Seides
What's been most surprising to you in this process of testing the LLMs?
Jack Kokko
What's been hard is knowing where the cutting edge is, knowing what each model is capable of in practice. Even the system itself: you can't really just design it, build it, and it's ready. No, it's a very iterative process. You have to keep evolving it and seeing how much human evaluation you can do and how much LLMs can actually effectively evaluate each other, and that changes as their capabilities evolve. It's a process that keeps you on your toes. You can't claim to master it at any time, or if you feel like you've mastered it, some new breakthrough happens and now you have to re-challenge your assumptions again. We've learned that we just have to have teams that keep on doing this and have to be ready to pivot when something changes. That readiness to pivot and that flexibility is perhaps one of the bigger learnings from this: you just have to be on top of it all the time and invest the time. Including myself as the CEO, I've got to understand what's going on. These are critical choices for the product, and the product is what adds value to users. I feel like I need to be reading a lot and trying to have tentacles through our team to make sure I'm up to speed. And that applies to everybody that's part of that chain. It feels like an around-the-clock, very intense process of staying up to speed with all the development.
Ted Seides
So the business started mostly with hedge funds, and you did mention there's a little different use case for corporations. How has it evolved from that initial user base to who your customer base is today?
Jack Kokko
We always had the idea that one day the corporate world should like this too. They need information; they're making big decisions, deploying capital, acquiring companies, making investments, launching new products, entering new markets. They should need the same information. That was the thesis, and we learned that it played out well, at first in investor relations. They were hearing from hedge funds that they were using this new great tool called AlphaSense, and we started to really spread like wildfire through word of mouth in the investor relations community across public companies. Then we started to map out all the other big pockets of knowledge workers in those companies, from corporate strategy to competitive intelligence to corporate development to strategic marketing, product management, even engineering these days, and then back to the CFO's office and across the C-suite. It's really gotten across dozens of personas within corporations, where they are just trying to be as much on top of the information as the investment world is, but more narrowly focused on their industry and the different forces affecting where that industry is going. That's today a pretty broad, diverse landscape of corporate users. And of course then there's everybody else in the knowledge worker universe, from consultancies to bankers. They are similarly now users, dozens of different knowledge worker personas, but everybody's doing the same kind of work. Take an M&A deal as an example. When that deal breaks, there's probably been usage in all of these areas: private equity bidders analyzing it, corporate bidders, hedge funds maybe trading on rumors, the investment bankers who worked on the deal, the consultants who worked on the deal, and so forth. So dozens or hundreds of people have been using AlphaSense from all the different angles around that same deal, because it impacts all of them.
It's become kind of an ecosystem that uses information about industries, markets, and companies, and we're able to serve it from all these directions.
Ted Seides
I'm really curious about the unit costs of the business, in the sense that some of the data sources you talk about are probably data sources you have to buy, and then on the other side you're curating and providing information that's essential for huge decisions. How do you figure out what someone's willing to pay for that?
Jack Kokko
The biggest variable today is how much intelligence you apply and what that costs. The more steady pieces of it apply to any data business. But the raw LLM intelligence and chain-of-thought reasoning, if you take that to the maximum of where it is heading, you're going to have a system running 24/7, running processes both initiated by users and initiated by APIs that clients are running. Clients are building agents for their own internal workflows, triggering these generative search and deep research reports through those APIs. And then the system is initiating work to generate more information. Where that is heading is recognizing what information gaps exist, going through our expert network, finding experts, launching calls, and bringing new information back into the system. So there's this sort of intelligence factory that processes millions of tokens for every decision that happens in these financial and business workflows. We are having to look at what benefits the overall system, what we can amortize across all customers, and where the token usage is so high for an individual client that we need a pricing model where they're leveraging the API in massive volume and getting a lot of value. That is a very much evolving world right now, where you have to do whole new kinds of math and estimation on what that intelligence is going to cost and how that cost is evolving over time. So it's hard to give a very precise answer, but I'd say that's another very quickly evolving game in the intelligence factory business that we are in.
Ted Seides
So do you look at that today? Is there a minimum fixed cost and then a variable usage cost above that to the customer?
Jack Kokko
The vast majority of our business is still fixed in terms of how we price. We try to make it predictable and easy. But when clients do want to start to experiment themselves, that's when you have to give them more flexible models where they can leverage the platform in different ways. It has traditionally been a user based model. We've started to go much more into enterprise pricing where clients are adopting the system wall to wall, they're incorporating their internal content, senior teams get involved, they suddenly say, okay, how do we make this something that our whole organization can use? Let's forget about this per user model. Let's think of us as one big client. It has meant that our pricing models and packaging models have evolved in the same way.
Ted Seides
You've done a series of acquisitions along the way. How does that fit into the business strategy?
Jack Kokko
For AlphaSense, there is no specific acquisition focus. We don't feel like we need to do acquisitions. Everything is somewhat opportunistic as to what is out there. Now, if there was another Tegus-like asset, I'd be very excited, but these are few and far between. That one was quite perfect: it made a lot of sense, and it was a huge bet for us, but we had a lot of confidence that it was the right thing to do. We made a much smaller investment at first, and then we were able to make this very big investment, almost $1 billion, that we deployed in the Tegus deal. We felt that as this sort of intelligence factory, the more content and proprietary insight you feed it, the more value you're able to deliver to customers. So it felt like, if we have a great user interface, and it's the better interface to deliver this qualitative content, then acquiring and plugging that content and data into the system is going to make it so much smarter that the combined value proposition is one plus one equals five. And that's what we have seen. So that is the thesis. I'd say we're forced to be opportunistic, because these kinds of fantastic content assets aren't available all around. They are pretty rare.
Ted Seides
As you look at the business today and where you're headed, what are the most important metrics that you're reviewing to gauge your progress?
Jack Kokko
There are the standard financial SaaS company metrics, where recurring revenue is probably the primary one that you stare at and set big targets around. Beyond that, we're fortunate that we've been able to build a business with great SaaS metrics that we feel both private and, ultimately, public market investors will really appreciate. So it allows us to use that spectrum of metrics and make sure we're gradually turning the right knobs to improve performance. But there isn't anything where we feel, oh, we've got to really, really focus on this. We have good gross margins, unlike some companies leveraging language models that have big challenges there. We're able to deliver enough value that we can continue investing a lot in intelligence and token capacity, and still absorb that cost without worrying about our overall metrics. Beyond revenue and growth, which is primary: to me, it's all about growth. We're accelerating our growth every quarter, and that's pretty rare. It's great to see in this environment, where language models have both created a lot more client demand and created a lot more value in our product. So there's this great tailwind and momentum. If I was going to crystallize it down to one thing, it's growth, because we're trying to build something really big, and getting there faster is the primary objective.
Ted Seides
How do you go about leading a team to achieve the kind of growth that you're hoping to achieve?
Jack Kokko
One big change that I felt the need to make is to get even closer to the product. As a founder, having built the original product and having had the vision for what you wanted to build for the market, you're naturally going to be close to the product. But as a result of language models, and also acquisitions and doing a lot of things at once, across the acquired companies, the companies they had acquired, content assets, data assets, technology, and so many different pockets, I've put myself much closer to all of that. I've effectively taken the role of the ultimate head of product and taken on many more direct reports to be closer to what's happening and closer to every decision, to make sure that I can unblock obstacles for people. I can give them aggressive, ambitious goals and then figure out, okay, what's preventing you from achieving them? Well, let's go and take that out of the way. At our size of organization, over 2,000 people, you get to the point where people assume that the things blocking them are not changeable. When I see priority projects that need to move fast, and I hear that they're not moving fast for some reason, I feel like everything can be moved, everything can be changed. But it's hard to do that unless you're really close to all those critical product decisions. So I've put myself right in the mix and the flow of where it's happening, and I feel that's the one critical thing. It certainly keeps my Slack channels busy, but I feel it's still the right thing to do to drive that growth.
Ted Seides
You started this business hoping to solve the frustration you had with the efficiency of gathering all the financial information for the decisions you needed to make. So much has evolved over the last 17 years that you've been on the front foot of increasing that efficiency. What excites you about what will happen over the next 17 years?
Jack Kokko
It's certainly hard to see that far, but it's a very exciting time, given what this technology revolution with AI and language models has enabled. It's an incredible time for creative building. As a startup founder and CEO, and I'm sure many in the same position would share this, the most fun thing to do is just to wake up every morning and think about what I can create next. There's just this incredible sandbox of what we can do to create the next-generation intelligent machine that the whole financial and business world can use to be smarter, make smarter decisions when making big bets, and be more agile, with better confidence and better data. So it's a very fun place to be building all that. As I see it, not that far forward but into the next months and years, it's about building this always-on machine that is working for all of the investment firms, public and private markets, banks, consultancies, and corporations across every industry. Think about every user having a thousand analysts in their pocket; think about AlphaSense that way. What if all our clients had that available through our system? That means our system is churning through its machine intelligence day and night, every minute, running things on their behalf when they ask, and even when they don't ask. If you're a buy-side firm, you've got a portfolio. Well, our system can be monitoring that portfolio and analyzing what's happening with every one of those companies. With every new piece of information that comes out, it can proactively go figure out: what's the meaning of that? What other impacts does it have in other industries? What are the read-throughs? All of this can be automated. The system can be thinking 24/7 and then informing our clients more proactively about what they should know right now. And that's a really exciting thing to be building, to me.
That's the next vision. Once we're able to build the oracle using LLMs, the next step is making the system really run around the clock and do this work proactively. And when it doesn't find the information, it can recognize that autonomously and go generate expert calls, have the AI do more expert calls, bring back the information, and tell you what this new thing means. When the next DeepSeek-like event happens and nobody knows what it means, our system can figure out that this is a problem: let's find the right experts in our network, let's go bring back the information, and hours later it can provide really high-confidence information on this thing. People don't have to be scrambling around anymore.
Ted Seides
Fantastic. Well, Jack, I want to make sure I get a chance to ask you a couple of fun closing questions. What was your first paid job, and what did you learn from it?
Jack Kokko
My very first paid jobs didn't pay much, but I was delivering newspapers on a bicycle on cold, icy Finnish roads as a teenager. I was selling magazines door to door. I remember spending one summer at a steel plant, cleaning furnaces in a rubber suit with a gas mask and a rock drill. From all of these things, I learned the value of trying to do your best in whatever you're doing and working hard, and something good will come out of it. I still remember, or maybe now I can remember fondly, those experiences. I still feel it's valuable to keep trying to do your best job, and maybe today I get to do more fun work since it's more creative. That is one thing I still think about.
Ted Seides
What was the best advice you ever received?
Jack Kokko
One thing I was told, and dismissed at first because it sounded fluffy: follow your passion. If you're going to be an entrepreneur and launch a company, do something you're passionate about. I've actually learned that that was incredibly good advice, having spent a decade and a half on this company. If that wasn't true, if I was building some, I don't know, some accounting software, I can't imagine being equally passionate about it. But I'm doing something I wake up excited about every day, and I have this personal passion to go solve this problem that I still feel in my bones thinking back to my analyst days. Feeling that I can help the whole industry do these things so much better and more efficiently keeps me going. So following your passion has actually turned out to be really good advice, not at all fluffy as I thought at first.
Ted Seides
What brings you the greatest joy?
Jack Kokko
This probably applies to a lot of entrepreneurs: just being able to think about what doesn't exist yet, what would help a lot of people, what could scale and become big, and how I can have impact doing it. So I'd say it's that creativity and building.
Ted Seides
What life lesson have you learned that you wish you knew a lot earlier in life?
Jack Kokko
One thing I grew up with was learning the value of self-reliance, not just from my parents but from the culture I grew up in. You had to do your best, work really hard, and believe you can crush through any obstacle, and that with perseverance you can get there. I think I've over-indexed on that and pushed too far into the idea of self-reliance. As an entrepreneur, you tend to believe, hey, I can just go and do this, it's all possible. What I have learned is that there's so much you can do by working with others that surpasses what you could do on your own, however hard you work and however persistent you are. Humans are the dominant species here because we are able to collaborate. Tapping into that and asking for help doesn't come easily to me. That's been one learning I wish I knew earlier, but I think I've been able to deploy it a little more lately, and it's been a big thing for me.
Ted Seides
All right, Jack, last one. If the next five years are a chapter in your life, what's that chapter about?
Jack Kokko
I think it's about playing in this sandbox. And this might sound a little geeky, like it's all about technology and what it can enable, but it's such a Cambrian explosion of opportunity that these language models and generative AI have given us. It feels like we can reinvent everything. I come up with business ideas every day; if I wasn't doing this, I'd be pursuing one of those ideas that come to mind every day. So it feels like a great personal mission I can be excited about. And of course, I'm building what feels like the exact right sandbox for me, which is AlphaSense. So just deploying that creativity and enjoying what I really find that techy, geeky enjoyment in: that's what my next chapter will be about for sure.
Ted Seides
Well, Jack, thanks so much for sharing this incredible application of what you've done with AlphaSense.
Jack Kokko
Thank you.
Ted Seides
Thanks for listening to the show. To learn more, hop on our website at capitalallocators.com, where you can join our mailing list, access past shows, learn about our gatherings, and sign up for premium content, including podcast transcripts, my investment portfolio, and a lot more. Have a good one and see you next time.
Podcast Disclaimer
All opinions expressed by Ted and podcast guests are solely their own opinions and do not reflect the opinion of Capital Allocators or their firms. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Capital Allocators or podcast guests may maintain positions in securities discussed on this podcast.
Date: September 25, 2025
Host: Ted Seides
Guest: Jack Kokko, Co-founder and CEO of AlphaSense
This episode features a conversation with Jack Kokko, the entrepreneurial mind behind AlphaSense—a platform often called "Google for Finance." Ted and Jack cover the founding journey from analyst frustrations to building an AI-powered research platform used by top asset managers, banks, and corporations. The discussion explores the evolution of search and intelligent research in finance, the advent of Large Language Models (LLMs), integration of expert opinion data, and Jack’s vision for an “always-on” intelligence machine that could revolutionize business and investment decision-making.
Early Career & Motivation
Analyst Frustrations with Research Tools
Semantic Search Vision
Incorporation of Proprietary and Alternative Data
How AlphaSense Works Today
Impact on Professional Research & the Quest for Alpha
The LLM Revolution and Product Transformation
AlphaSense AI Interviewer
Technical and Operational Challenges
From Hedge Funds to a Broad Client Ecosystem
Pricing and Intelligence ‘Token’ Costs
Acquisition Strategy
Key Company Metrics
Jack’s Role and Culture
Vision: Always-On Intelligence Machine
On Analyst Life and Tools
On Research Efficiency
On LLMs as a Breakthrough
On AI Interviewer
On Prompt Engineering
On Building the Future
This episode offers a rich, candid exploration of the intersection of AI, information access, and investment research. Jack Kokko details the technical evolution, business strategy, and pioneering culture at AlphaSense, while Ted Seides draws out lessons relevant for entrepreneurs, allocators, and anyone engaged in knowledge work. The conversation highlights not just the tools, but the philosophical and practical shifts underway as AI turns financial research into an “intelligence factory.”