Transcript
Dharmesh Shah (0:00)
So that's my advice: every day, you should be in ChatGPT. I don't care what your job is. You could be a sommelier at a restaurant, and you should still be using ChatGPT every day to make yourself better at whatever it is you do.
Sam Altman (0:18)
Can I ask you about the story really quick? You have a list of stuff here that's all amazing, and it's very actionable. But the reason I want to ask you about the story is for the listener: Dharmesh founded HubSpot, a $30 billion company. You're the CTO, and you're an OG of Web 1.0 and Web 2.0. One of your first rounds was funded by Sequoia, and your partner Brian is an investor at Sequoia. So you are an insider. I believe you may not acknowledge it, but you are an insider. The cool part is that you're accessible to us. When did you first see what Sam was working on, and how long have you felt that this was going to change everything?
Dharmesh Shah (0:59)
So I actually knew Sam before he started OpenAI, and I got access to the GPT API. It was effectively a toolkit for developers to build AI applications. And so I built this little chat application that used the API, so I could have a conversation with it. I actually built that thing that night; it was a Sunday. I had a full chat transcript two years before ChatGPT came out.
Sam Altman (1:25)
So that's four years ago.
Dharmesh Shah (1:27)
It was 2020, so five years ago. Wow. Okay. This summer. And even then, you sort of have that moment, the same one all of us had with ChatGPT. I just had it two years earlier. And then I'm showing everyone: Brian, you are not going to believe this. I have this thing through this company called OpenAI, and watch me type stuff into it and see what happens. And we would ask it strategic questions about HubSpot, like, who are our top competitors? And they were shocked. Even then, two years before ChatGPT, it was shockingly good.

But the thing you sort of have to understand about the constraints of how a large language model actually works is this. To use a physical analogy: imagine a sheet of paper that can only fit a certain number of words on it. That number of words includes both what you write on it, the part that says "I want you to do this," and the response, which also has to fit on that same sheet of paper. That sheet of paper is what, in technical terms, is called a context window. And you'll hear this tossed around: oh, ChatGPT has a context window of whatever, or this model has a context window of whatever. That's what they're talking about.

So why does anybody care about the context window? Well, sometimes you want to provide a large piece of text and say, summarize this for me. In order for it to do that, the text has to fit in the context window. So if you want to take two books' worth of information and say, summarize this in 50 words, those two books' worth of information have to fit inside the context window for the LLM to process it. Most of the frontier models are roughly 100,000 to 200,000, measured in tokens, where a token is about 0.75 of a word. But that's about a book's worth.
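The sheet-of-paper idea above can be sketched in a few lines of code. This is a rough budget check using only the heuristic mentioned in the conversation (one token is about 0.75 of a word); it is not a real tokenizer, and the function names and the 128,000-token default are illustrative assumptions, not part of any actual API.

```python
# Rough context-window budget check.
# Heuristic from the conversation: 1 token ~= 0.75 of an English word,
# so tokens ~= words / 0.75. Real tokenizers differ; this is an estimate.

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a piece of text from its word count."""
    words = len(text.split())
    return round(words / 0.75)

def fits_in_context(prompt: str, expected_response_tokens: int,
                    context_window: int = 128_000) -> bool:
    """Prompt and response must share the same 'sheet of paper'."""
    return estimate_tokens(prompt) + expected_response_tokens <= context_window

# A long book is on the order of 90,000 words, about 120,000 tokens.
book = "word " * 90_000
print(estimate_tokens(book))                              # 120000
print(fits_in_context(book, expected_response_tokens=50)) # True at 128k window
```

Under this heuristic, one long book roughly fills a 128,000-token window on its own, which is why "summarize these two books" can exceed the limit.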
