A
Anders Hejlsberg is one of the biggest living legends in the tech industry. He created Turbo Pascal, Delphi, C# and TypeScript. The impact he has had on programming languages and developer tools is immense. Today with Anders, we discuss how C# might not have been born if it was not for the Sun vs Microsoft lawsuit over Java, the behind-the-scenes story of TypeScript and why open sourcing it was a huge deal inside of Microsoft, and what he's learned from 40 years of designing languages, including why IDEs and programming languages go hand in hand, and much more. If you want a behind-the-scenes look at how three of the most used programming languages in history got built, and how AI might change our usage of programming languages, this episode is for you. Before we start, I'd like to introduce our presenting sponsor for the season, Antithesis. A definite trend that I'm seeing across the industry is a lot more focus on testing, unsurprisingly, thanks to AI. We know that software is hard to test, and we also know that AI is making it worse, thanks to producing increasingly more code and more complex code. The bottleneck is becoming reviewing the code, testing it and trusting it. Or is it? We tend to think that code reviews are the bottleneck because you cannot scale human reviewers with token spend. But the problem actually goes beyond code review. Really, it's about verification. We know that AI cannot verify itself. To verify the correctness of AI-generated software, we would need to catch issues that traditional tests miss, including issues that we did not even think of, in a codebase that is changing at superhuman speed. Oh, and we need to do all of this before deploying to production. The only way to verify that software works is to run it with realistic faults. And this is exactly what Antithesis does. You can bring the system you work on under test and verify that it works as it should.
Teams at etcd and Jane Street are doing just this, and I'm also starting to use Antithesis to test real-world systems. Over the season, I'll be sharing a lot more on how it works and how it can help verify that a system is bug-free. In the meantime, check out antithesis.com/pragmatic to learn more. Anders, welcome to the podcast.
B
Thank you.
A
It's brilliant to meet you. You've created so many widely used programming languages, including the one that I first learned to program with, Turbo Pascal. How did you get into programming?
B
Well, I was lucky enough to attend a high school, this is back in Copenhagen, that offered students access to a computer. It was one of the first high schools in Denmark to do so. We're talking mid to late 70s now. And I sort of got bitten by it then. You know, just this idea that you could program this machine and make it do things, and the wonder of figuring out how it was put together. Of course, it was completely ancient by modern standards. It was this HP 2100 with 32K of ferrite core memory. You could literally open it up and see the ferrite cores. I mean, it was amazing, you know, paper tape reader, and then we got a 1 megabyte 14-inch hard drive and that was just state of the art. The bootloader was on paper tape because there was no ROM in the machine. So it started up and knew nothing, and you had to type in the instruction sequence to load the bootloader that would then load the OS off of the hard drive.
A
And as, as a kid, what did you start to program on it? What captured your imagination?
B
Well, this was a Hewlett-Packard, so it had Fortran, which I found to be very quirky. It had a very slow BASIC interpreter. But then it had Algol, Hewlett-Packard's version of Algol, which was an interesting compiler implementation because it didn't support recursion, which is kind of bizarre. You know, the call instruction of that machine would store the return address in the first word of the subroutine and then just execute, and to return you would jump indirect through that word. So if you called yourself, you'd just be gone forever. And of course there were no debuggers or anything to help you figure this out, so you just had to be real careful about which algorithms you used. But it was compiled to machine code and it ran, you know, and you could build games, which is what we mostly did, like Lunar Landers and what have you. So yeah, it was fun back then. Things were so simple, right? You could see all the way to the bottom. I mean, there was just no layering, nothing. It was right on top of the hardware.
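The call convention Anders describes here is worth seeing concretely. Below is a minimal TypeScript simulation (the machine model is heavily simplified and all names are invented for illustration) of why a subroutine that keeps its return address in a single slot cannot recurse:

```typescript
// Simulate the HP 2100-style call convention Anders describes:
// the call instruction stores the return address in the
// subroutine's first word, and returning jumps indirect through
// that word. One slot per subroutine means recursion clobbers it.

type Sub = { returnSlot: number | null };

const trace: string[] = [];

function call(sub: Sub, returnAddr: number, depth: number): void {
  // The call instruction overwrites the single return slot.
  sub.returnSlot = returnAddr;
  trace.push(`enter depth=${depth}, returnSlot=${returnAddr}`);
  if (depth < 2) {
    // A recursive call clobbers the slot: the original
    // return address (returnAddr) is now lost.
    call(sub, 999, depth + 1);
  }
  trace.push(`ret depth=${depth} jumps to ${sub.returnSlot}`);
}

const sub: Sub = { returnSlot: null };
call(sub, 100, 1);
console.log(trace);
```

After the inner call, the only return slot holds 999, so both "returns" jump to 999 and the outer caller at address 100 is gone forever, exactly the failure mode Anders describes.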
A
I guess you needed to see all the way to the bottom as well.
B
Pretty much.
A
Right back then.
B
Well, I mean, it was just so simple that you could, right? And that was the beauty of those early machines. That was true with the eight-bit micros and even the early PCs and whatever, right? And then we've just added more and more layers over time. We have a lot of layers right now, that's for sure.
A
And how did you go from building games to actually building your first ever compiler? And what was that compiler?
B
I started in 79 at the Danish Technical University. At that time also, you know, this was right when eight bit micros were starting to become available.
A
Eight bit microprocessors, right?
B
Yes, exactly. And it was usually these kits that you bought that you had to solder together yourself. And then of course they didn't work, and then you had to figure out why they didn't work. So you learned a lot about the hardware too. And I bought a British Z80-based kit computer called a Nascom and started learning assembly programming on that one. And then I also met some college buddies and we ended up founding a company, and we had the first computer store in Copenhagen where you could walk in and buy a computer, one of these kit computers. And later we sold Apple IIs and VIC-20s and Commodore 64s and so on, all of those different ones, right? TRS-80s. So I did a lot of programming on those and found that programming was really the thing that I enjoyed. And of course they all came with Microsoft's ROM BASIC, which was slow, but it allowed you to write programs. But I always missed having a real programming language, something like the Algol that I had been taught, right? And then my buddy, the guy that I founded the company with, he was like, well, there's this new thing called Pascal, you ought to check it out. And it's even supposed to be simpler than Algol, which was actually true of every language Wirth created: they got increasingly simpler as time went on. And Pascal was not that hard to implement. And so I got interested in trying to do that, and then wrote a little compiler that fit into a 12K ROM that would compile a subset of Pascal. And you could then yank out the Microsoft ROM BASIC and stick in our ROM instead. And then when you booted your machine, you were in this little environment where you could type in Pascal programs and run them. And that was sort of the early precursor of Turbo Pascal, if you will.
A
Many years later you joined Borland, and I think it was in 1989, and there you created Turbo Pascal, the programming language, but also the IDE. Right?
B
Well, there's a little more to that story. That company that we had back in Denmark ended up eventually writing a full implementation of Pascal for 8-bit CP/M-80. And then we ended up doing a joint venture with Borland, which was also a Danish company, originally founded in Denmark, and we made a royalty contract where they would sell our compiler on a royalty. And that's how I got involved in Borland. And we shipped that first product in 83, the first version of Turbo Pascal. And then that took off more than any of us had expected. And eventually that ended up being the thing that I did full time.
A
Why was it called Turbo Pascal? I understand you added things on top of the Pascal that was there.
B
Well, I mean, it was called Turbo Pascal because it was fast. Back then, this was when Audi had their Quattros and their Turbos and whatever; Turbo just meant fast, right? And this thing was fast and super interactive. And so Turbo Pascal it was.
A
When Turbo Pascal became big, was it big just because of the compiler, or also because there was an IDE, a dedicated IDE for Turbo Pascal, right?
B
Yes, yes, that was always the idea. And that goes back even to the predecessor of Turbo Pascal, this idea that it's not just a compiler, it's an experience, right? I mean, you don't just compile your programs, you also edit them, you also run them, you also debug them, you also have a runtime library. It all has to fit together, you know what I mean? And so Turbo Pascal was always about building that whole cycle and trying to make it as interactive as Basic was as an interpreted language.
A
Right.
B
But giving you the performance of a compiled language and the better, you know, semantics and syntax of Pascal versus Basic. And so that was sort of the idea from day one, you know, focus on the whole cycle.
A
And so when you were building the compiler, you were already thinking of ways the IDE, for example, could make sense or could have helpful features, be that for editing or debugging, for example?
B
Oh, absolutely. You know, the first versions of Turbo Pascal didn't have a debugger. You would just use writeln statements and then you'd just see what happened, right? But often if you had some error and it blew up with a runtime error, we would print out the address of the runtime error, which is where the program counter was at that point. And then we had a mode in the compiler where we would say: compile, but stop at this address. And the compiler was real simple, it would just produce object code, and once it hit that address, it would just say, well, whatever I'm syntactically looking at right now, that must have been around where the error was. So that was how you could go to the line where the error had occurred, you know what I mean? It's not like we had line maps or debuggers or any of that stuff. We just had the compiler, and it was just easy to make it stop at a certain address in the object output and then show you where it was in the source code.
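As a hedged sketch of the trick Anders describes (not the actual Turbo Pascal implementation): a deterministic one-pass compiler can map a runtime-error address back to a source line simply by re-compiling and noting where its output position crosses that address. The toy "code generator" below fakes byte counts as one byte per character:

```typescript
// Toy one-pass "compiler" that halts when the emitted object-code
// address reaches a given runtime-error address and reports
// whatever source line it is currently looking at.

function findErrorLine(sourceLines: string[], crashAddr: number): number {
  let addr = 0; // current object-code address
  for (let line = 0; line < sourceLines.length; line++) {
    // Pretend each statement compiles to a few bytes of code;
    // here faked as one byte per source character.
    const emitted = sourceLines[line].length;
    if (crashAddr < addr + emitted) {
      return line + 1; // 1-based: the error was around here
    }
    addr += emitted;
  }
  return sourceLines.length;
}

const program = [
  "x := 1;",       // addresses 0..6
  "y := x div 0;", // addresses 7..19, runtime error lands here
  "writeln(y);",
];
console.log(findErrorLine(program, 12)); // → 2
```

Because the compiler is deterministic, re-running it reproduces the exact same addresses, so no line map needs to be stored alongside the object code.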
A
Why do you think Turbo Pascal was so popular? I remember, again, this was my first programming experience; it was at schools, it was outside of schools for production software. And you said yourself that it spread like wildfire.
B
It was just better than all the competition. It was faster, it was smaller, it was more interactive, and it was also cheaper. So it was like 10 times better at a tenth of the price of the competition, right? Compilers back then used to cost $500 and they were just compilers, and then you had to have an editor, and blah, blah, blah. And it was this whole long-winded cycle of inserting different disks with compiler passes 1 and 2 and what have you. And here was this thing that just made it all go away. And you could get it for $49.95, and for $49.95, I mean, heck, that was worth it just to get the manuals that came with it, right? So there was very little piracy because it was so cheap. Although, speaking of piracy, we always had the joke about the Russian site license: how we sold one copy to Russia and then that got copied everywhere.
A
But after Turbo Pascal, you built Delphi, which was an even bigger step up in many ways. This was now an integrated environment for Windows development. How did you evolve ideas from Turbo Pascal to Delphi?
B
The big thing that happened there between Turbo Pascal and Delphi was the advent of the graphical user interface, right? We switched from running DOS in text mode to running Windows in a GUI. And that meant a new kind of application, right, that you had to create. And at the same time, competitively, Microsoft had created Visual Basic, which was a very impressive product, but still had some of the very same flaws that we knew how to compete with, right? Interpreted versus compiled, and not extensible versus ours, which had classes and object orientation and blah, blah, blah. First we set out to build a Visual Basic competitor, but then we also realized that that's not really enough of an angle. And there was this other phenomenon that was happening at the time, which was called client-server applications, and there were a whole bunch of 4GL application development tools for database-connected client-server apps. And so we set out to build a tool that was as interactive and rapid-application-development as Visual Basic, but with a compiler behind it, targeted also at client-server enterprise apps. And that was what Delphi sort of was about, right? It worked out really well. That product to this day is still being used actively by a whole number of programmers.
A
I was very surprised when I worked at Skype right after Microsoft bought it. The Skype application, you probably know this, was built in Delphi. In 2012 or 2013 there was a plan to rewrite it and move it onto something else. That rewrite stopped midway, a year in. So I'm guessing it stayed Delphi until the end of that Skype application, which was decommissioned maybe a year ago.
B
It's amazing, isn't it? I mean, Delphi was and is in some ways a wonderful way of building Windows desktop apps. It had a great, you know, the VCL, the Visual Component Library, that allowed you to inherit components and install them on the palette and make drag and drop work in your forms designers with components that you had built, and whatever. It was pretty cool.
A
Yeah. And we already heard the Microsoft link with Visual Basic. So you joined Microsoft in 1996, you worked on J++ and then later C#. But can you take us back to that moment in time? What was the programming environment like?
B
Well, the environment, particularly around the time where I joined Microsoft, the mid-90s: Java had happened. Well, the browser had happened first of all. And JavaScript. But no one really paid attention to JavaScript, because that was just this little whatever thingy in the browser, you know, and it was slow and it was like, yeah, no one uses that. But then there was this Java thing that allowed you to create applets. Oh my God. Applets.
A
Oh, yeah, fantastic.
B
And run in the browser and everywhere, supposedly. And this language that was simple, yet had object orientation and bytecodes and was platform independent. I mean, everyone was running around like with their heads cut off thinking this was the end of languages, you know, Java is going to flatten the universe and we're all just going to be writing Java and Java applets and that's it. And I actually came to Microsoft ostensibly to be the architect of Microsoft's Java development tools and worked on Visual J++ 6.0. The version that they had at the time I joined was Visual J++ 1.1, which was basically: take Visual C++, yank out the C++ compilers, stick in a Java compiler and call it good. But it wasn't interactive, it wasn't rapid application development, and whatever. And I came with a whole host of knowledge of how to build interactive development tools, and that's what we set out to do with Visual J++ 6.0. And we also of course knew that, hey, people are going to be running on Windows and they're going to want to be able to build Windows desktop apps. And so we built a class library that allowed you to do that. This was the precursor, WFC I think it was called, but it was the precursor of WinForms, in some ways.
A
How did the development of J++ go, and eventually how did it lead to the idea of, okay, let's do something else completely different?
B
Well, you know, development of J++ went great until the big Sun vs Microsoft lawsuit got in the way. And that is, I mean, now we're talking business and whatever, it had nothing to do with the technical side. But it effectively meant that Visual J++ was never going to be a product that companies would make a bet on, because they full well knew that you're not going to write your app in a language that has been enjoined by a judge in San Jose, or whatever. And so we kind of realized at that point too that maybe it's not a great strategy to place your development platform bet on technology that's licensed from a competitor. And that, in turn, along with the sort of dev situation at the time. I mean, Microsoft's main development products at the time were in two camps. There was Visual Basic: rapid application development, loved by everybody because it was so easy to build apps, right, but performance-wise it had problems, and extensibility-wise it wasn't so great; to write new components, you had to write them in C++. And then we had C++ with MFC: power and expressiveness. But really what people wanted was both; they wanted something that rolled both of those up, right? And then they also wanted modern things like garbage collection that, say, Java had, for example, and exception handling, and a more object-oriented, component-oriented way of building your apps. And all of that was part of the genesis that led to .NET and to the C# language.
A
So which one came first, .NET or C#, inside of Microsoft?
B
Well, they were simultaneous, I would say, because we knew we wanted to build a runtime that was language independent: we knew that we wanted to run Visual Basic on it, and we wanted a way of running C++ on it, and we wanted the ability for other languages to host themselves on this runtime. But we also knew that we needed to build a language that would appeal to both Visual Basic and C++ users and give you sort of that golden thing in the middle, right? And, to be frank, something that could compete with Java, right? And so that's why we started out building C#.
A
And then when you started out building C#, what were your design goals? You mentioned a few things, like garbage collection or exception handling. But how did you come up with, okay, what will this language be?
B
Well, like I said, the overarching thing was the power and productivity of C++ with the ease of use of Visual Basic, in a sense.
A
Right.
B
But what it also meant was we knew we wanted to build an object-oriented language. We wanted managed code, or bytecode, so we could target different runtime environments. We wanted garbage collection and exception handling, but also things like a unified object system, and that's true in C#: anything can be assigned to an object, and if it's a value type, we box it and it's a self-describing object. So reflection: you can ask an object, what are you? And you can get all of the facts about it at runtime, and you can dynamically manipulate it in ways that just don't exist in a lot of other environments. We knew we wanted to go there. We wanted a language that made this new model of properties, methods and events first class, because that was how components were built, as opposed to just functions and procedures and even objects, right? And then we actually also wanted to create a language that was standardized. We wanted to give this language to a standardization committee and try to level the playing field there. And all of those things were sort of what was rolled up in C#.
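To illustrate the "properties, methods and events" component model Anders mentions: C# made all three first-class language constructs. The sketch below emulates the same shape in TypeScript; the `Signal` helper and `Button` class are hypothetical, invented purely for illustration, not any real API:

```typescript
// A minimal event helper, standing in for C#'s first-class
// `event` keyword: consumers subscribe, the component raises.
class Signal<T> {
  private handlers: Array<(arg: T) => void> = [];
  subscribe(h: (arg: T) => void): void { this.handlers.push(h); }
  raise(arg: T): void { for (const h of this.handlers) h(arg); }
}

class Button {
  // A property with a backing field, C#-style get/set.
  private _label = "";
  get label(): string { return this._label; }
  set label(v: string) { this._label = v; }

  // The "event" that consumers of the component wire up to.
  readonly clicked = new Signal<string>();

  // A method: the third leg of the component model.
  click(): void { this.clicked.raise(this.label); }
}

const log: string[] = [];
const b = new Button();
b.label = "OK";
b.clicked.subscribe(l => log.push(`clicked: ${l}`));
b.click(); // log is now ["clicked: OK"]
```

The point of making these first class in C# was that a forms designer could enumerate a component's properties and events mechanically, rather than relying on naming conventions over plain functions.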
A
You definitely did it. C# was my first professional language; I worked with it for about five years, I think. And I've seen both the tooling and the capabilities of the language, and I still think to this date that in many ways that old version of C# was ahead of some languages today. So it's very interesting to see how rich that language was when it came out. And of course the developer love for it followed. But can you take us back: what did it take to build a language like this? Again, for the software engineers listening: we're used to building SaaS apps, backend services, you know, certain projects, but we are not familiar with what it takes to build a language, especially something with such large ambitions inside Microsoft, where ideally millions of developers would be using it. How did you get to this? How did you come up with the roadmap? How big or small a team needs to work on this?
B
I think early on we decided that we wanted to have a team of people design this language, not just one. I was sort of the guy who ran the group of designers, but we put together a group of six people or so, six, seven people. And we got in a room three times a week for two hours and just started the design, you know, literally from the top. And these were all people who had built or worked on programming languages before, right, and had seen all of the things you're supposed to do and all the things you're not supposed to do. And quite honestly, language design is 90% the same and 10% new for pretty much every language. Every language you build still has to have a compiler, and a compiler is still built pretty much the same way. And of course, as time marched on, people demand more and more: you have to have IDEs, you have to have frameworks, you have to have blah, blah, blah. So there's a lot of experience you want to pull in, and there's a lot of work that you're doing that isn't really per se new, but every time around you try to fix the problems that you've been exposed to. This language design group worked together for years on end, and it was lovely to come into work with a new idea and then immediately have five or six people that you could sit down and have a deep discussion with, without first having to spend an hour level setting. Do you know what I mean? And that worked really, really well, because we could just jump right in and have two hours of technical discussion. And everyone was cognizant of: okay, if someone comes up with a new idea, now it's our job to try to shoot it down. What's wrong with this idea? And if it could stand the test of that, then it was probably a decent idea. And so that was kind of how we ran the design.
And then I wrote the specification of the language in parallel with our design meetings, and then we had a group that was implementing the compiler in parallel, implementing it in C++, or rather C, because we didn't use all of the C++ features in that compiler implementation. But it wasn't until the Roslyn project that we self-hosted the C# compiler
A
and Roslyn meaning that the compiler is in C#? Right, exactly.
B
Yes, yes. That was a project that came later, to build the compiler in itself. And also, early on, you've got to remember, back then IDEs were not really all that fancy. I mean, we had syntax colorization; statement completion was kind of like, well, some IDEs were starting to dabble in it, but it wasn't really a norm. So we built, in a sense, a classic compiler, but then we also built this mini language-service-y thing that sort of cut some corners and whatever, but could do some rudimentary statement completion and syntax coloring. But in a sense we had two implementations that we had to evolve in parallel. And over time that became quite a drag, right? Because as we added generics and other features and LINQ and whatever, it was like, oh my God, now we've got to go implement all of these features twice: in the real compiler and in the language service. And so that ultimately led us to this project called Roslyn, where we built a single compiler that really is both. It's a compiler that can both function as a command-line compiler and as an interactive service inside the IDE. TypeScript is built that same way also. And there's a lot of learnings from doing it that way that are still not being taught in school.
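A toy sketch of the Roslyn/TypeScript architecture described above: one shared analysis core behind two front ends, a batch compiler and an interactive language service. The "language" here is a made-up stand-in (a non-empty line must end with a semicolon), purely for illustration:

```typescript
interface Diagnostic { line: number; message: string }

// The single shared core: same analysis code path for both
// front ends, so features are implemented once, not twice.
function check(source: string): Diagnostic[] {
  return source
    .split("\n")
    .map((text, i) => ({ text: text.trim(), line: i + 1 }))
    .filter(({ text }) => text !== "" && !text.endsWith(";"))
    .map(({ line }) => ({ line, message: "missing ';'" }));
}

// Batch front end: compile once, report an exit status.
function compileBatch(source: string): number {
  return check(source).length === 0 ? 0 : 1;
}

// Interactive front end: re-check on every edit, caching by
// document content so unchanged text is never re-analyzed.
class LanguageService {
  private cache = new Map<string, Diagnostic[]>();
  diagnosticsFor(source: string): Diagnostic[] {
    let d = this.cache.get(source);
    if (!d) { d = check(source); this.cache.set(source, d); }
    return d;
  }
}

const src = "let x = 1;\nlet y = 2";
console.log(compileBatch(src)); // → 1 (one diagnostic, on line 2)
console.log(new LanguageService().diagnosticsFor(src));
```

In a real compiler the service side also needs incrementality and error tolerance, which is where most of the "not taught in school" learnings live; this sketch only shows the single-core idea.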
A
There are many useful things not being taught in school, even when they are useful to learn about. One of these really useful tools that I must mention is our seasonal sponsor, TurboPuffer. TurboPuffer is a ridiculously scalable, fast and cheap vector and full-text search engine built by an engineering team that I really like. The first time I heard about them was when I was talking with one of Cursor's co-founders about how their vector database could not keep up with the number of codebases that they were adding. This was back in 2023. Cursor did something seemingly risky: they took a bet on what was a little-known and relatively new product at the time, TurboPuffer. But it paid off. Cursor moved their semantic search workload over, and TurboPuffer was indeed able to handle Cursor's massive, ever-increasing load. The reason has everything to do with smart engineering. TurboPuffer is built on top of object storage with smart caching on NVMe SSDs. Cursor's active codebases get loaded into the cache so searches are fast, and inactive codebases fade into object storage. Cursor has so many good things to say about TurboPuffer: they cut their semantic search cost by 95% when they switched; they think of TurboPuffer engineers as an immediate extension of their team in Slack; and TurboPuffer is one of the few pieces of infrastructure that they have not had to worry about as they scaled. Today, TurboPuffer indexes over 4 trillion documents for vector and full-text search and is used by the likes of Anthropic, Notion, Linear, Ramp, and many others. I'm getting to know the TurboPuffer team and we'll share more about some of the cool things that they do behind the scenes throughout the season. If you need vector search or full-text search at scale, think TurboPuffer. Check it out at turbopuffer.com/pragmatic. When it comes to useful tools, I need to also mention our seasonal sponsor, WorkOS.
One theme of today's episode with Anders is how he has thought about developer productivity on a time frame that most of us do not: decades, not months or quarters. WorkOS takes the same kind of long-term view on enterprise infrastructure: SSO, SCIM, RBAC, audit logs. They spent years getting these right so you do not have to spend weeks implementing them. That's why the fastest growing AI companies trust WorkOS. Visit workos.com to learn more. And with this, let's get back to building languages with Anders. When you're building a language, as your product was, I guess, the language itself, how did you get feedback? Of course you had, as you said, the design group to criticize it. Did you have internal beta testers? Because, again, for a backend service you would typically have dogfooding, alpha testing, beta testing, and then you go public at some point. But this is not your average software-as-a-service, for sure.
B
Yeah, I mean, luckily we had internal clients. The .NET Framework team very quickly started implementing in C#. They had sort of used a hacked-up version of C++ to implement it, which was kind of odd, because it was targeting bytecodes, but not really. So they switched to C# and that helped a lot. And then we had other internal teams using it, and so we got a bunch of feedback that way. And then, you know, the cycle was not that long, right? I think we started in late 98, and by the PDC of 2000 we had signed up, I mean, we basically gave away beta copies, right, and got tons of users onto it.
A
Now, C# introduced a lot of features that were net new, I think, to programming languages. LINQ is certainly one of them. But one thing that might have been one of the most influential parts that other languages adopted as inspiration was the async/await setup. Looking back, what do you think you got right with this design, and why did it become so widely copied across languages like JavaScript, Python, Rust and others?
B
Well, a lot of languages are built around cooperative multitasking in the sense that they have an event loop that sits and dispatches events and then, you know, you handle the event and then you yield back to the event handler loop. And it all runs in a single thread cooperatively.
A
Right.
B
The problem with that is, if you then want to do some long-running work: how do I stop in the middle of this piece of long-running work and yield back to the event loop cooperatively, right? And then when my result is ready, I can come back and continue executing here. Well, in order to do that in an inverted architecture like that, you have to build a state machine. And state machines are notoriously hard for people to implement, because you've got to move all of your state off of the stack into objects, you've got to remember where you were, and then you have this big case statement that envelops your entire logic, and it's a nightmare to figure out, right? But the transformation from serially executing code into a state machine, a continuation-passing-style translation, is actually one that you can do in a mechanical fashion. You can have the compiler write the state machine if you introduce syntax that allows you to indicate where you want to yield. And that's what await is. Await is basically saying: I want to yield here, and I want to yield this promise, and then when the promise completes, I want you to come back here and continue executing. And then the compiler writes a state machine around it. It actually turns it into this big switch statement, and moves all of the state that survives across the await into something that's heap allocated so it can be brought back. And doing all of that work is something that compilers are great at. And so that was sort of the idea: we have this new style of programming where we're using promises, or the equivalent of promises, and the ability to yield, and then we have callbacks. But trying to write your program in that style, that's also what JavaScript suffered from a lot: all this callback-style stuff.
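To make the lowering concrete, here is a hedged TypeScript sketch: the same two-await computation written the way you would write it, next to the switch-based state machine a compiler conceptually generates for it. The `fetchPart` helper is an invented stand-in for real asynchronous work:

```typescript
// A stand-in for real async work (I/O, a network call, etc.).
const fetchPart = (n: number) => Promise.resolve(n * 10);

// The code you write:
async function totalAsync(): Promise<number> {
  const a = await fetchPart(1);
  const b = await fetchPart(2);
  return a + b;
}

// Roughly what the compiler writes for you: a big switch
// dispatching on a state variable, with locals that survive an
// await hoisted off the stack into the closure (heap).
function totalStateMachine(): Promise<number> {
  let state = 0;
  let a = 0; // survives across the second await
  return new Promise((resolve) => {
    function step(value: number): void {
      switch (state) {
        case 0:
          state = 1;
          fetchPart(1).then(step); // yield; resume at case 1
          return;
        case 1:
          a = value;
          state = 2;
          fetchPart(2).then(step); // yield; resume at case 2
          return;
        case 2:
          resolve(a + value);
          return;
      }
    }
    step(0);
  });
}

totalAsync().then(console.log);        // → 30
totalStateMachine().then(console.log); // → 30
```

The two functions behave identically; the difference is who wrote the state machine, you or the compiler.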
And with async/await you get sort of the illusion that you're just writing normal sequential code, and then the compiler does the painful transformation for you, and that turns out to be really useful. Now, arguably an alternative way of doing this is to use threads in the OS, but the problem with threads is that they come with preemptiveness: the OS has the ability to preempt you at any point in time, and that's not necessarily what you want. And now you have to be multithreaded in your UI, and all sorts of other problems come along with that. Plus, threads are heavyweight and typically not well suited for lightweight tasks like you could do with async functions. So there are pros and cons. Async/await introduces this notion of function coloring, which is unfortunate, where you have two kinds of functions: async functions and regular functions. And all the red functions can call the blue functions, but the blue functions can't call the red functions. And that means once you want a red function, now everything above it has to be red. If you want a sync function to call something async, well, then you've got to turn this function into an async function, and its caller has to be as well, et cetera, et cetera, right? So that's unfortunate. And that's why some environments, like Go, for example, have goroutines, green threads, which are really language-emulated lightweight threads that kind of do what I'm talking about, but at a much lower cost, and you avoid the function coloring. So there are a bunch of different trade-offs. But for an environment that already exists, like JavaScript, or like C# and the Windows event loop and whatever, this was the right solution.
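The function-coloring problem Anders describes can be shown in a few lines of TypeScript (all function names here are invented for illustration): a plain "blue" function can never extract the value from a Promise, only pass it along, so the async "red" color propagates up the call chain:

```typescript
// A "red" leaf function: pretend it awaits some I/O.
async function readConfig(): Promise<string> {
  return "mode=fast";
}

// A "blue" (non-async) caller cannot use await. The best it can
// do is hand the Promise onward, so its own callers inherit the
// problem: the color propagates upward.
function describeSync(): Promise<string> {
  return readConfig().then(c => `config: ${c}`);
}

// A "red" caller awaits directly and reads sequentially, but is
// now itself async, and so on all the way up the call stack.
async function describeAsync(): Promise<string> {
  const c = await readConfig();
  return `config: ${c}`;
}

describeAsync().then(console.log); // → "config: mode=fast"
```

Green-thread runtimes like Go's avoid this by making every function implicitly suspendable, which is the trade-off Anders contrasts against retrofitting an existing event-loop environment.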
A
Speaking of JavaScript: as C# was becoming really popular across startups, enterprises and so on, it was exploding in popularity, in games as well. To this day it's very popular in game development. But JavaScript was starting to become more popular too. Can you take us back to your observations on how JavaScript went from being, in the mid-90s, a scripting language no one really took seriously, to just exploding in popularity?
B
I think it was sort of a confluence of a number of things that happened in the early 2000s. Right. First of all, the JavaScript execution platform matured a lot. Like, Google did their excellent work on V8, and all of a sudden that made JavaScript a fairly performant programming language. HTML5 got ratified, and we were getting to a point now where you could actually build real UIs in JavaScript. And there was this device revolution that the iPhone set off, and all of a sudden we have all of these different form factors. It's not just Windows PCs on the desktop anymore. It's all sorts of diverse devices, but lo and behold, they all run browsers with JavaScript. And lo and behold, the real cross-platform language isn't Java, it's JavaScript. Who would have thought? Exactly. And so the world started opening its eyes to that and started building larger and larger applications in JavaScript. And we saw that externally but also internally. And one of the trigger events was when the Outlook.com team came to the C# team and asked us whether we would pretty please productize this thing called Script#. And we go, well, what is Script#? It's this cross-compiler that allows you to cross-compile C# into JavaScript, such that you can basically treat JavaScript as an instruction language and run your C# apps in a browser. And I'm like, well, why would anyone want to do that? Well, because then you can get a grown-up programming language with grown-up tooling. You can use Visual Studio, you can have projects, you can do all of these wonderful things, you know, that you can't do with JavaScript, because JavaScript is just a scripting language with shitty tooling. And we were like, wow, really? Well, gosh. Well, perhaps a better approach would be to fix JavaScript. I mean, surely you're not going to be best of breed in the JavaScript ecosystem by telling people to write in a different programming language.
Although plenty of people were like, remember CoffeeScript and all of these other languages that targeted JavaScript, right?
A
Yes. So that was a programming language, right, which generated JavaScript, but it wasn't JavaScript itself.
B
Yes, yes, it was super popular. I mean like so many different things did that. But JavaScript is actually a pretty decent little language. There are just some things missing. You gotta give credit there to Brendan Eich. I mean he understood functional programming and
A
Brendan Eich, the creator of JavaScript.
B
he got functions as first-class objects right in JavaScript, which is a godsend and beautiful. But it doesn't have a type system. And we knew from experience that you cannot build good tooling without a type system. You can build decent tooling, but it's never going to scale, never going to scale to large teams, because you can't describe your intent in the code. There's no way of formalizing any of this stuff, and there's no way of analyzing it, and there's no way of using it in an IDE to give you statement completion and refactoring and go-to-definition and find-all-references and blah blah blah, all of that stuff. Right. That germinated the idea of, hey, we could create a superset of JavaScript that adds a type system, and then we could just compile it away. But then we have the foundation for great tooling, and we could build great tooling on top and actually create a wonderful development experience, right? That was sort of what we set out to do.
A
When you set out to do this, you not only set out to do this, but you set out, for some reason, to do it as open source. Which took everyone outside of Microsoft by surprise, because the old Microsoft under Steve Ballmer was notoriously perceived as anti-open-source back then, with Windows and C# back in the day.
B
Of course. I'm talking about, you know, Microsoft was slowly waking up to the fact that open source was not going to go away, and open source was where developers wanted to be, and they were voting with their feet. Yet there's a collective DNA, you know, that has been trained to pull you in the other direction. Right. And so we were right in the center of that battle, and we full well knew that there was absolutely zero chance that we would appeal to the JavaScript ecosystem with a proprietary programming language from Microsoft. No, no one was going to come. It had to be open source. There were just no two ways about it, right? But getting that off the ground inside Microsoft took some pulling, and we paid some taxes. We did eventually get the okay to do open source, because we had two technical fellows, myself and Steve Lucco, who was the other co-inventor of TypeScript, insisting that that was what we had to do. And so, okay, people weren't going to debate that, but of course you have to pay the tax and be on Microsoft's open source repository called CodePlex, where exactly no one was. And so we were there for the first two years, and it kind of was crickets, you know. And it wasn't until 2014, when we moved onto GitHub, that things really started to get moving with adoption. And also, honestly, it totally changed our workflow. You know, there's open source and there's open development, and we were technically open source in the beginning, but it was not open development. We would sort of lob the source code out into its repository and scrape the issues off of that and put them into our internal issue tracker. But once we switched to GitHub, the entire workflow moved to open development also. And I love that workflow. We've been there now for over a decade, and it's been fantastic, and it's what made the product as good as it is.
A
Just over a decade later, then: the language moved to GitHub in 2014, and in November 2025 the GitHub Octoverse report revealed that TypeScript became the most popular language across GitHub. What do you think made TypeScript this popular? And of course we've had other languages, Python being the other very popular one. But what captures developers' preferences this well?
B
Well, I think, you know, it didn't just happen overnight. If you look back, you know, all of a sudden we surfaced as number 10, and then we climbed slowly over the years and sat next to JavaScript. Right. And of course, if you added JavaScript and TypeScript together, then we were already number one. It's just, which syntax were you using, type annotations or not? And more and more people over time just decided to adopt that. I mean, some early on were using JSDoc, you know, these types-in-comments that we also supported. But gradually I think people just realized, hey, this is the right way to do it. And the reason they came, I think, is absolutely because of the better tooling. And I think we were totally right there: adding an erasable type system and then using that to enable great tooling is really where the programmer productivity boost is realized.
A
And I guess this is where we cannot not mention VS Code, which shipped that great tooling, also as free to use for most people, or at least initially for most people, which also made a big difference.
B
Absolutely. Yeah, that's our sister project, which is written in TypeScript. They were one of our earliest adopters, and we've worked pretty closely with them to this day. That whole interplay, in turn, is also what led to the invention of LSP, the Language Server Protocol, that now pretty much every tool vendor uses to enable interactive services in the IDE. Oddly, it isn't until this port to Go now that we're switching to LSP. We had our own precursor of LSP, because LSP didn't exist when we first integrated TypeScript into Visual Studio Code. But there were a lot of learnings from that. So it's been an incredibly symbiotic and fulfilling experience to build these two projects in parallel in open source. And I think it has totally changed people's view of Microsoft in the developer ecosystem.
A
For us developers who, again, are not as familiar with compilers: of course I use TypeScript, and I'm aware that there's some compilation going on. Could you give us a brief overview of what the TypeScript compiler pipeline looks like, in terms of its parts, and which parts you specifically focus on more?
B
Sure. It's in many ways a fairly typical compiler, and in many ways not. Pretty much every compiler has, you know, what's known as a lexer or scanner that takes text and turns it into tokens. And then typically on top of that you have a parser that takes the tokens, checks their sequencing, and then makes abstract syntax trees, which is, you know, a tree that you can navigate that effectively is a map of the source code, broken into syntactic primitives, and checks that syntactically, or grammatically, everything is correct. So those are the first two stages of the pipeline. Then we have one extra pass that we call the binder, which is, you know, once we have the parse trees, then we bind symbol information to them, where we find all of the declarations of variables and whatever, and build symbol tables and attach them, such that we can then later look up names effectively. And in the binder we also build a control flow graph, and I can talk about what that helps us do. And then we have the type checker, which is the largest part of our pipeline. And that's the thing that checks semantically that your program is correct. It's the thing that figures out types and checks that the types relate correctly, and that you're assigning the right thing to the right thing, and that, you know, you're calling something that actually exists, and so forth. And then we have an optional stage at the end called our emitter. And normally the emitter infrastructure in a compiler is also quite big, because that's where you go from intermediate representation to machine code or bytecode. Now, in our case, we just erase types, if you will. Well, we kind of do two things in our compiler, actually. Early on it was very much about, A, erasing the types, but B, also down-leveling your code.
So we would take newer ECMAScript features that weren't yet supported by the runtimes, for example classes, and then we would down-level them to constructor functions and whatever. And so we would rewrite the code. And that was a very popular feature early on. Now pretty much every browser is evergreen and, you know, ECMAScript features are caught up, and so that's not as important anymore. So our emitter is effectively, you know, a thing that just erases type annotations and spits out the JavaScript code that can run unannotated, and it can also spit out declaration files, which are summaries of your modules and so forth. But those are sort of the stages. Now, the thing that's interesting about the compiler, though, is that it's built in a manner where it can function in a highly interactive mode, which is what the IDE uses. Normally, you know, command-line compilers just run through these stages, and the output is just whatever gets emitted, or some error messages. Right. But in an IDE, the compiler is a service, and what we do in that service is we basically take a program that is perpetually broken, because you're typing, and yet we try to syntactically and semantically analyze it. Because we need to know, when you press dot here, what could come next? Well, that means we need to know what is the type of the thing you dotted on. In order to figure that out, we may have to resolve stuff; we may have to look at ASTs over here and whatever. And all of that has to happen within 200 milliseconds, or else people think the IDE is slow. Right. Well, what if you have 500,000 lines of code? You can't compile all of those in 200 milliseconds. So you've got to be super, super deferred and interactive, and do minimal amounts of work. And that's how our compiler is built. For example, say you have 500,000 lines of code in, let's say, 500 files.
Well, we could build the ASTs for 499 of the files and just sit on them. We don't have to rebuild those, because you're not editing in those files. We just have to update the AST of the current file you're in. So that goes 500 times faster, right, than if we had to do all of it. And then we don't actually have to figure out all of the types in there either. We can just start where you're at, and then resolve just enough to answer the question that you're needing an answer for right now. And so everything is lazy and deferred and functional and reusable inside the compiler. And it's a very different way of writing compilers than what the textbooks will traditionally teach you.
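To make the stage names concrete, here is a toy sketch of scanner, parser, checker, and emitter. This is nothing like the real TypeScript codebase; it handles only the single statement form `let <name>: <type> = <number>;`, purely to illustrate what each stage consumes and produces, including the emitter's job of simply erasing the type annotation.

```typescript
// A toy compiler pipeline (illustrative only, not the real implementation)
// for the single statement form `let <name>: <type> = <number>;`.

type Token = { kind: "let" | "ident" | "colon" | "equals" | "number" | "semi"; text: string };

// Stage 1, the scanner: text in, tokens out.
function scan(src: string): Token[] {
  const tokens: Token[] = [];
  const re = /\s*(let\b|[A-Za-z_]\w*|:|=|\d+|;)/y; // sticky walk over the source
  let m: RegExpExecArray | null;
  while ((m = re.exec(src)) !== null) {
    const text = m[1];
    const kind =
      text === "let" ? "let" :
      text === ":" ? "colon" :
      text === "=" ? "equals" :
      text === ";" ? "semi" :
      /^\d+$/.test(text) ? "number" : "ident";
    tokens.push({ kind, text });
  }
  return tokens;
}

// Stage 2, the parser: tokens in, a (tiny) syntax tree out.
interface LetStatement { name: string; typeAnnotation: string; initializer: string }

function parse(tokens: Token[]): LetStatement {
  const [letKw, name, colon, typeName, , initializer] = tokens;
  if (letKw?.kind !== "let" || name?.kind !== "ident" || colon?.kind !== "colon")
    throw new Error("syntax error");
  return { name: name.text, typeAnnotation: typeName.text, initializer: initializer.text };
}

// Stage 3, a trivial "checker": the initializer must match the declared type.
function check(stmt: LetStatement): void {
  if (stmt.typeAnnotation === "number" && !/^\d+$/.test(stmt.initializer))
    throw new Error(`'${stmt.initializer}' is not assignable to type 'number'`);
}

// Stage 4, the emitter: the type annotation is simply erased from the output.
function emit(stmt: LetStatement): string {
  return `let ${stmt.name} = ${stmt.initializer};`;
}
```

For example, `emit(parse(scan("let x: number = 42;")))` yields plain JavaScript, `let x = 42;`, with the annotation gone. The binder and the lazy, incremental machinery Anders describes are exactly the parts this toy leaves out.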
A
Yeah, because I guess these are now interactive compilers, if you will. Right. It sounds like it's more than a compiler, a much more difficult problem to solve.
B
The same engine is there, but you've got to build it in a manner where it can be very interactive, and that was not typically important for compilers, you know.
A
And so TypeScript is a superset of JavaScript. What are some features you would try to add if only JavaScript would allow it, or if you were able to influence JavaScript's roadmap? What is something that you feel could make TypeScript a lot better? But of course, there's a constraint there.
B
We track the ECMAScript committee, and new language features that get developed in ECMAScript we implement once they reach stage 3 or 4 in the standardization committee. We've sort of been on that train ever since the beginning. So there is a pipeline that supplies new language features in a standardized manner. We sort of see it as our purview to define the type system on top of that, right? So that is, if you will, our playground. Now, I still have things that I wish I could have in the language itself. I mean, I like functional programming. I like functional programming languages, and key to them is that everything is an expression. There's really no distinction between statements and expressions. And so one of the features that JavaScript lacks, in my estimation, is the ability to give symbolic names to temporary results in expressions and then reuse them. This is the "let x equal whatever in some expression" that functional programming languages, you know, like OCaml and whatever, all have. And it's nice, because you can just stay in an expression context, and you can just dot things together, or whatever, and do this more fluent style of programming. But then all of a sudden you need a name for something you want to reuse, and now you've got to pop out and declare a variable, or turn it into state. Anyway, you know, that's one thing that I would like to fix. There's something called do expressions that may or may not happen at some point, but it's taking a long time. So anyway. But I mean, generally speaking, I think JavaScript is a nice little language. It just has some issues, you know. And I think we're very good at teasing them out with our type checker, right? And so once you have a checker that can warn you, hey, you're about to do something stupid here, then it's not so bad.
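A small sketch of the gap Anders is describing. Today you either pop out of expression position to name an intermediate result, or fake "let ... in" with an immediately invoked arrow function; the do-expression syntax in the final comment is from a still-pending TC39 proposal and is hypothetical, not valid JavaScript today.

```typescript
// Option 1: pop out into a statement to name the intermediate result.
function labelA(n: number): string {
  const doubled = n * 2; // forced out of expression position
  return `${doubled} (${doubled > 10 ? "big" : "small"})`;
}

// Option 2: fake "let ... in" with an immediately invoked arrow function,
// which stays in expression position but is noisy:
const labelB = (n: number): string =>
  ((doubled: number) => `${doubled} (${doubled > 10 ? "big" : "small"})`)(n * 2);

// With the proposed (hypothetical, not yet standard) do-expression syntax,
// this could remain a single expression:
//   const label = do { const doubled = n * 2; `${doubled} ...`; };
```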
The thing that makes it interesting, I think, and unlike pretty much any other programming language, is the gradual typing, this notion that you can have types but you don't have to have types. Other languages force you to type everything, because they in turn use that information to generate machine code based on what the type is: different instructions for float versus int versus whatever. Whereas in TypeScript, the types are there purely for the development experience and the checking. When the program runs, they're all gone. Now, of course, there are still types, but they're all dynamically computed. But that's kind of interesting, because that means in the language we don't necessarily have to prove 100% correctness. And for a lot of language features that we have, we can't 100% prove correctness. Like, in a structural type system with recursive types, there are just cases that you can't analyze, because the types are infinitely recurring. The more you try to relate two types, the deeper you go, and you're just staring into the recursive abyss, you know what I mean? But you can kind of go, well, we've proven it to four levels, that's probably good enough, we're just going to say it's good enough. And if everything else works out, we're going to go, sure. That you can't do if you were generating machine code, because then you would have indeterminate behavior, right? But JavaScript has a runtime where everything is well defined already. So if we're checking 99% instead of 100%, well, heck, that's better than the 0% that JavaScript checks, right? And it gives you language features that no other languages can provide, because they can't get to 100%.
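A classic example of the kind of recursive structural type this enables is a JSON type defined in terms of itself. The checker cannot exhaustively decide every relationship involving such a type, so, as described above, it relates types to a bounded depth and treats "no mismatch found so far" as good enough; since the annotation is erased at runtime, nothing depends on a complete proof.

```typescript
// A recursive structural type: JSON values defined in terms of themselves.
// Checking assignability against this type can recurse indefinitely, so the
// checker stops at a bounded depth rather than proving it to the bottom.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

const doc: Json = { name: "Anders", tags: ["compilers", "languages"], stars: 104000 };

// const bad: Json = { when: new Date() }; // rejected: Date is not assignable to Json
```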
A
It's interesting how constraints lead to innovation, or even limitations can lead to more innovation. Speaking of innovation, one of the biggest innovations that is everywhere is AI agents, the AI coding tools that most software engineers are increasingly using. As someone developing languages on a more, I guess, niche team, what kinds of AI tools are you using, or how is AI helping your language development work, be that TypeScript or C#, day to day?
B
I work on TypeScript, and I can certainly talk about how we've been in the process of moving TypeScript to native code for the last year and a half or so. In the beginning of that project, AI was nowhere near as capable as it is now, and therefore we could not really use much of it in the beginning. At this point, though, I'd say we're using AI fairly well. Obviously we're on GitHub. We use AI to code-review pull requests. That in the beginning was not all that great, but now it's actually getting a lot better. We use AI to implement issues or fix issues, simple issues, and it succeeds some of the time. In this port that we're doing, because we snapped a copy of the source code from a year and a half ago and then ported it, we have a backlog of PRs that need to be moved onto the new native compiler. And so we're using AI to help us move those pull requests, and that's actually going fairly well at this point. And then we use it for a bunch of drudgery work, like, okay, here's this feature, please write me some tests in the same style as these other tests. Right? And kaboom. No one likes writing tests. AI loves writing tests, and it'll just pump out more tests, and great, you know. So we're trying to use it to get rid of all the toil that otherwise we would spend our time on. Right. But I would say we're not at a point where it absolves us from understanding what we're doing. Not at all, no.
A
Well, plus, at your level of the stack, if you will, because you're building a language, one might argue that someone really needs to understand it: at least one person, ideally the whole team, needs to understand those fundamental parts, right?
B
Oh, absolutely. And languages are interesting in the world of AI. Like, take this conversation: we wouldn't be having it if it wasn't for languages. Because how would AI get to determinism without programming languages? Right? I mean, AI is by design stochastic and indeterminate. It might give you a different answer the next time you ask it the same question, either just because of randomness, or because there's a new model, or whatever. There's no determinism there. Yet we can't build applications if they have non-deterministic behavior. I mean, what would a banking app look like if it decided to hallucinate, or whatever? Right. So you have to have something where the rubber meets the road, something you can reason about and where you can replicate the behavior: every time you run the app, the same thing happens.
A
Absolutely. I mean, I even see it in a bunch of tools. I think almost all AI agents or tools these days, when you ask them to do something with data, oftentimes they will start writing a Python program. Because I think the AI designers figured out that at some point you want to turn something non-deterministic into something deterministic. And what is the thing that we know does that most efficiently?
B
Yeah, don't ask it for the answer, ask it to write a program that
A
computes the answer and you will know that that will be deterministic.
B
Yes, exactly. Yes.
A
Yes, it's very interesting. But speaking of languages for AI, a question that comes up, of course, because AI is everywhere, generating a lot more code: what is your take on either modifying existing languages for AI usage, based on the patterns you're seeing, or potentially coming up with something new? Would it make any sense to come up with a language that is more suited for AI agents to use?
B
Well, my flippant answer there is, you know, that the language that's most suited for AI is the language that AI has seen the most of in its training set. Right. And that's why, you could argue, AI does really well on JavaScript and TypeScript and Python, because it's seen an awful lot of it, and there's an awful lot of it still. And that just reinforces itself. Right. And you could argue, well, the reason TypeScript and JavaScript are popular, that's mostly to do with the browser, not so much to do with AI. Right. But it's interesting to look at why AI targets TypeScript versus just JavaScript. And there I think the types actually help guide the AI to producing better programs. And I think our combination of the ability to type something when there's no context, but also our ability to infer it when there is context, is just the right combination. Because if you were to force AI to write a type annotation on everything, then it would probably get it wrong more often, because now it has to keep track of all these types, and it has to just repeat itself over and over and over. Right. And so types are important where there's no context, but inference is super important for the DRY, or don't-repeat-yourself, principle. Right. And fewer tokens generally makes AI more efficient. And so I think we have a very nice combo there, in how you can just sort of type the outermost parameter and then everything flows from there on in.
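The "type the outermost parameter and let everything flow" combination can be sketched in one small function:

```typescript
// Annotate only at the boundary; let inference flow inward. `words` needs a
// type because there is no context at the function boundary, but `sum` and
// `w` below are inferred (number and string respectively), so there are no
// repeated annotations for a human, or an AI, to keep in sync.
function totalLength(words: string[]): number {
  return words.reduce((sum, w) => sum + w.length, 0);
}
```

Fewer annotations means fewer tokens to generate and fewer places for a generated annotation to drift out of sync with the code, while the boundary annotation still gives the checker enough to catch misuse.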
A
Right. Now, one thing where AI is already making a difference (again, the GitHub team shared stats, so this is also open data) is that AI agents are just generating a lot more code. They're both quick to generate, and they also sometimes like to be verbose. Knowing that we are already seeing a lot more code pushed everywhere, at a project level and at an aggregate level, what language characteristics do you think could become more important in this world of just a lot more code, oftentimes generated by machines?
B
I mean, you could argue that we were already past peak truth on the Internet, right? And now there's just more and more garbage, and every day it gets harder and harder to suss out the stuff that you do want to include in the training set in order to actually make something more intelligent. So, I mean, I'm sure people are working on it, but I could see that gradually becoming problematic. As for languages that are suitable for AI, like I talked about, types and inference, I think both of those are important. I think locality is also important.
A
Locality.
B
Well, what I mean by that is, don't have a bunch of global stuff where AI has to grok the entire product. Oh, these pound-include files that are, oh my God, well, who knows what they bring into scope, and how do I put that in the context window or not? And do I burn a gazillion tokens trying to include them? But if you have good locality, where you're clearly stating what you're importing and whatever, then you can analyze just a single source file and from that extract its protocol to the outside world without having to know anything deeper. Do you know what I mean? I think those are important aspects, simply to reduce the size of the context window and also to make it easier to summarize each module in a program. Right?
A
This is so fascinating, because I remember, this was probably 15 years ago, when PHP was very much critiqued for its globals. And early on, as a young developer, I didn't understand why that was a big deal. I was just hacking around in PHP, until something was not working, and it turned out that something imported had overwritten a global. And suddenly you realize that when things are defined across the code base, a global could be anywhere, and there's no way for you to know what someone else is doing to yours. Now we're back to the state problem which you just talked about.
B
Original JavaScript suffered from this problem. There were no modules, right? Everything was global, and anyone could just monkey-patch anything else. And it was impossible to know, really, what am I sitting on top of here? But now, with ECMAScript modules and whatever, we're moving towards sanity, and more and more the world is written that way in the JavaScript ecosystem, and that's a good thing. And I think that will help us down the line with AI. AI is just starting to become aware of the existence of language services. Agents today like to use grep and awk and whatever to find all the places where you reference a certain thing. But it's not semantic search, right? And so if you have a common name for a property, like count or address or whatever, well, it's going to find a whole bunch of properties named address, and then that's not going to work so well, because now you don't know that you're renaming the right one. Right. But this is where language services come in, and semantic search. And I think that's going to become increasingly important with AI. And really, these are services that are already provided by LSP implementations, but they may need some tweaking in order for them to be more accessible to AI. AI likes command-line tools, you know, and they're not really command-line tools.
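A small sketch of why text-level search is not semantic search: a purely textual rename cannot tell the binding we mean from an unrelated property that happens to share the name, which is exactly what a language service's semantic rename avoids.

```typescript
// A snippet of code, held as text, with two unrelated uses of "address":
const source = [
  'const address = "1 Main St";',                // the binding we want to rename
  'function label(user: { address: string }) {', // unrelated property, same name
  '  return user.address;',
  '}',
].join("\n");

// A grep/sed-style rename hits every occurrence, silently breaking the
// property access as well; split/join here stands in for a global replace.
const textualRename = source.split("address").join("location");
// textualRename now contains "user.location", which no longer matches the
// original property. A language service, knowing which symbol each
// occurrence binds to, would rename only the const.
```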
A
And I also wonder if, for example, performance will be interesting, because we know that these things can run faster, so faster feedback loops will clearly be helpful. Which brings us back to one of the reasons TypeScript was so popular: you mentioned the 200 milliseconds for getting feedback, right?
B
But there are ways of doing this, you know, where you could imagine, like, a server keeping a project hot and giving you LSP services, so that the AI can ask semantic questions and whatever, and then once the AI stops asking, after 10 minutes the server just dumps it, you know. There are ways of putting this together, I think, where we can make some progress, because the ability for AI to semantically validate the code that it's generating, as it's generating it, will increasingly become important.
A
What about the software craft? You've been in this industry for many decades, but it's hard to unsee that these tools are coming into everyday use, similar to how at some point graphical IDEs came, and before that, I guess, higher-level languages. Knowing that AI agents and AI tools will be part of the craft, what parts of the software engineering craft do you think will become less important, and what might become more important?
B
In a sense, we're all turning into project managers, right? And we can have an army of junior programmers called agents that will just spit out reams of code, but someone's got to have the big picture and review all of that. And so increasingly our craft is going from one of writing the code to one of reviewing the code and building the architecture of the code and overseeing the work, if you will. It's a different kind of craft. It's a different kind of enjoyment. I've always liked writing the code; to me, that was the fulfilling part, seeing it work. Do you know what I mean? And in a way, AI robs a little bit of that, right? Because I am less interested in reviewing code. But I think we could also make the process of reviewing code much more interesting than it is today, right? I mean, today you see a list of diffs in alphabetical order, and now it's up to you to make heads or tails of it. There are more pedagogical ways of presenting that, and you could have commentary generated by the AI that tells you what the changes are and tries to guide you along. Do you know what I mean? So that symbiotic relationship, I think we need to work on that more, to keep the enjoyment in there. But I think it's foolish to think that AI will just eliminate programmers, because ultimately, you know, vibe coding is wonderful as long as it works. And then the minute it goes off track, you have no idea what's going on, and you can't convince the AI to fix it. So what do you do? You can't absolve yourself from understanding what's going on. And ultimately also, you know, the responsibility for a program does not lie with the AI, it lies with the programmer. You're not going to go back to the AI and say, shame on you, I'm going to fire you. What does that even mean? Right? I mean, now you have nothing. No, you need someone to have that function of being responsible. And so ultimately, AI is a tool to enable us to become more productive, I think. But it will change the way that we write our programs, for sure. I mean, there's no point in sitting there and typing in stuff that AI could type a hundred times faster.
A
You know, having created three very widely used programming languages, what have you learned about developers? What do they care about when it comes to programming languages, and what is the stuff that maybe they don't care too much about, don't even think about, but you might have to think a lot about?
B
You know, I think at the end of the day, developers care about being productive. They care about being in the zone, where they feel like, oh yeah, this thing is just clicking for me, it's doing just the right thing, it's right there, it's an extension of my fingertips. Right. So as a language designer, I'm never just looking at the language. You've got to look at the whole picture, the whole experience, because really what you're doing is creating an experience, an experience that programmers will spend the majority of their working life in, which is why programmers become so attached to their tools and their languages. Right. I mean, it's almost a religious thing, which language and which tool you're using, because it's so ingrained in your workflow, and it so enables you to be in the zone.
A
Right.
B
So that, I think, is the key to focus on, and that's what I've tried to do with the work that I've done over the years.
A
And it sounds like this is why, from the very beginning, you also focused on the IDE, the tool where developers spend their time.
B
Yeah. You can't have one without the other. Well, you can, but it's just not nearly as effective. Yeah.
A
One question we're starting to see, or it's more of a question mark, is: how much are we going to be in the IDE all day versus these new interfaces? Which might be agents, where you can manage multiple things, or the command line, which is again just somewhere we found that agents can work asynchronously. But I think we're still figuring out as an industry what will come next.
B
Yeah, I don't know that we can see the steady state at this point, because it's evolving so much. But I still believe that programmers are going to be relevant in this equation. You know, I fundamentally believe that.
A
What about performance and efficiency? Early on in your career, you just mentioned your first computer and how many kilobytes it had, and how you fit your compiler into 12 kilobytes. These days it's very hard to even create a text file that small. Right. A few decades ago, writing efficient programs was important, and over time my perception is that it's become less of a focus. What is your take on that? Do you think it's fine for us developers to forget about efficiency? Are we just allowed to do that because we have more resources, or maybe this will change?
B
I think it's a case of it depends. There are certain classes of apps for which efficiency is absolutely key. I mean, the kind of programs that my group works on, like compilers and tooling, yeah, people do care. That's why we're spending a year and a half moving to native code. Or inference in the cloud, or, I mean, oh my God, you know, like financial fast trading, it's all about perf, right? At the speed of light, trying to move your trade faster than the other guy's. So there are lots of places where perf is king, but there are increasingly also places where perf doesn't really matter, because it's so fast anyway that even if it were 10 times slower, you still couldn't detect a difference. And so it's just not worth optimizing there anymore. It depends, I think, on the kind of app you're building.
A
It's a good reminder that not all use cases are born equal. I'm interested, what is your personal development setup like these days?
B
Well, I'm an old Windows guy. Windows is still my desktop. I have a Lenovo P1. You know, I like keeping everything portable, so I don't have a big screen or whatever. This is, what, a 15, 16 inch laptop with a nice OLED screen and a nice keyboard, and that's what I do my coding on, pretty much exclusively.
A
And what tools do you use?
B
Oh, VS Code. VS Code.
A
VS Code all day, every day.
B
VS Code and GitHub all the time. Yes, yes, yes.
A
And for AI coding assistance?
B
It's mostly the GitHub and VS Code stuff. Which means, well, you get to choose your LLM there, right? But it's that workflow. Generally speaking, I think it's limited how much we've been able to use LLMs in implementing our compiler and implementing new language features. It's good at surfacey stuff, but when it comes to getting the big picture, how types and symbols and binding and parsing all relate, and what's the most efficient data structure here, it's not quite at that level.
A
I'm also wondering if, the lower you go in the stack, whether that's very high performance code or very concise code where all these things matter, maybe the applicability starts to drop. We already see this with LLMs: they're amazing for greenfield work. When you have an existing large application, they're useful, don't get me wrong, but not nearly as useful.
B
Yeah. And we are one big brownfield, because we already have a huge code base, right? And it's got to fit in there. Plus, to be honest, there are only so many compilers in the training sets of AI, whereas there are a gazillion GUI apps written in React and whatever. Right? So no wonder it's good at those.
A
I was interested in reflecting a little on your career. You've now been at Microsoft for 30 years, and you've been working on programming languages for 40, which is a lot to even say. In this industry it's pretty common for people to change jobs every three to five years or so. What has kept you at one company for so long, and in a similar area for even longer?
B
Well, there's just something about developer tools and programming languages that is what I love to do, you know. They're algorithmically complex problems to solve, and for some reason I like that they have fewer dependencies on other things. You're building from the bottom up yourself, you know what I mean? You don't have to sit on top of someone else's framework and swear at them when it doesn't do what you want it to do. Right. So that kind of works for me. But the thing is, doing programming languages, you come to realize it's a long play. If you look back at the stuff I worked on, it goes in 10 year cycles at least. TypeScript, for example, or C# for that matter. It takes 10 years. You know, version one is great, but it has all sorts of issues, and you've got to do version two, and then it's not until version three that it really starts to be great. But then you've got to convince people to actually adopt it. It's a long play, and you've got to be willing to do the long play. And I think being at a company like Microsoft has been great, because to be put in a position where a company like Microsoft is putting their might behind your efforts to create a programming language, that's not an opportunity you get in a lot of places. Right. And that has been fantastic. But also the fact that Microsoft is fundamentally a developer focused company, and they always have been.
A
Yeah, that's how they started.
B
Developers matter. It's not advertisers who are paying the bills, it's developers and enterprises, you know. And I like that, where you feel like you're doing an honest day's work and people are paying you for it. It's good stuff.
A
You know, as closing: what is a book that you would recommend, and why?
B
I always recommend the same book, which is Niklaus Wirth's Algorithms + Data Structures = Programs. It's actually available online now. It was written in the 70s, but it was a revelation for me to read this book. This is how I learned about hash tables and how to construct a small compiler and whatever. It was just wonderful. It's very light on symbolism and very rich on examples. I was always an engineer, and that book just appealed to me, and I think it's still in a lot of ways super relevant today.
A
The basics have not changed too much, have they?
B
No, no, certainly not. And particularly when it comes to programming languages, heck, it's a well established discipline, quite honestly. Yeah, it's been around for 50 plus years.
A
Well, Anders, thank you so much for this in-depth conversation.
B
Oh, my pleasure. This was a lot of fun.
A
I hope you enjoyed this rare conversation with Anders as much as I did. An interesting part I keep thinking back to is how Anders said that programming language design is a 10 year cycle at minimum: version 1 has issues, version 2 fixes them, version 3 is finally great, and then you have to convince people to actually adopt it. Most of us devs are used to thinking in quarters and sprints; this is certainly a different time frame. I also found it surprising to hear how small the C# language design team was and how lean they worked: six to seven people, three meetings per week, two hours each. All of them were people who had built languages before, and they criticized each other's ideas. The ideas that survived the criticism were the ones considered good enough to work. Just a good reminder that standout technical work more often comes from small teams than it does from committees. Finally, I really liked how Anders said that IDEs and languages go hand in hand, from Turbo Pascal in the 1980s to TypeScript and VS Code today. Anders says that the compiler is not the product; the product is the whole edit, compile, run, debug cycle. This is a good reminder to any and all of us building software: the product is the complete way that your customers use it, not just the screens or parts that you are responsible for. If you'd like to go deeper on Microsoft's developer tool roots and operating systems, check out the related Pragmatic Engineer Deep Dives linked in the show notes below. If you've enjoyed this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you also leave a rating for the show. Thanks, and see you in the next one.
Host: Gergely Orosz
Guest: Anders Hejlsberg
Date: May 13, 2026
In this episode, Anders Hejlsberg—creator of Turbo Pascal, Delphi, C#, and TypeScript—joins Gergely Orosz to share four decades of programming language design. The two dive deep into the evolution of compilers, the importance of tooling and IDEs, the genesis and inside stories of several iconic languages, how open source changed Microsoft from within, and the impact of AI on the software craft.
This episode is packed with unique stories and technical insights valuable for both software developers and engineering leaders, offering inspiration and practical wisdom for anyone interested in how modern programming environments—and their productivity "magic"—were invented.
First exposure to programming (02:11):
"You could literally open it up and see the ferrite cores. I mean, it was amazing..." (02:27, Anders)
From games to compilers (03:13):
First compiler and entrepreneurial ventures (04:50):
"We had the first computer store in Copenhagen where you could walk in and buy a computer, one of these kit computers..." (05:08, Anders)
Birth of Turbo Pascal (07:01):
Why it succeeded (10:25):
"It was just better than all the competition. It was faster, it was smaller, it was more interactive, and it was also cheaper." (10:25, Anders)
IDE as a core product (08:17):
Transition to GUI and Delphi (11:38):
Legacy:
"Delphi was and is in some ways a wonderful way of building Windows desktop apps..." (13:34, Anders)
Why join Microsoft (14:10):
Sun lawsuit as a turning point (16:13):
"Development of J went great until the big Sun Microsoft lawsuit got in the way... it had nothing to do with technical [issues]..." (16:13, Anders)
Designing C# & .NET (18:06):
How a language gets designed at Microsoft (21:13):
"Language design is 90% the same and 10% new for pretty much every language..." (21:13, Anders)
Compilers and IDEs must be co-designed (08:17, 23:43):
Feedback cycles (27:32):
"The transformation from serially executing code into a state machine... is actually one that you can do in a machine based fashion..." (29:04, Anders)
JavaScript's rise (33:16):
TypeScript genesis (35:36):
Open source at Microsoft (37:18):
"Once we switched to GitHub, the entire workflow moved to open development also. And that I love." (38:21, Anders)
Why TypeScript became so popular (39:50):
TypeScript compiler pipeline (42:17):
Standard stages: lexing, parsing, binding (symbol table), type checking, emission (erasing types/down-level code).
Interactive, incremental, and fast for IDE use—"lazy and deferred" updates.
"Everything is lazy and deferred and functional and reusable inside the compiler..." (45:54, Anders)
Compiler optimizations for IDEs (46:58):
How AI tools are changing language and compiler development (51:46, 53:22):
AI and determinism (53:35):
"AI is by design stochastic and indeterminate...We can't build applications if they have non-deterministic behavior." (53:35, Anders)
Type systems and inference help AI write better code (55:31):
AI flood of code: future language characteristics (57:10):
From code writing to orchestration (62:21):
"We're all turning into project managers, right? And we can have an army of junior programmers called agents that will just spit out reams of code, but someone's got to have the big picture and review all of that." (62:21, Anders)
Ownership and responsibility remain human:
What developers actually want (65:05):
Productivity and "being in the zone"—the IDE/compiler is the real product, not just the language syntax.
"It's the whole experience... You're creating an experience that programmers will spend the majority of their working life in, which is why programmers become so attached to their tools and their languages." (65:05, Anders)
The IDE/language symbiosis:
Longevity at Microsoft and in language design (70:58):
"It takes 10 years to get to... version three that it really starts to be great. But then, now you gotta convince people to actually adopt. It's a long play. You got to be willing to do the long play." (70:58, Anders)
Microsoft as a developer-first company (72:30):
Book recommendation: Niklaus Wirth's Algorithms + Data Structures = Programs (72:47): formative influence, full of worked examples and practical advice, not just theory or equations.
"It was a revelation for me to read this book. ...I think it's still in a lot of ways super relevant today." (72:47, Anders)
This episode offers a rare, behind-the-scenes account from one of software engineering’s true pioneers. From soldering kit computers to influencing the entire developer ecosystem, Anders shares stories that highlight how continuity, focus on end-to-end experience, openness, and deliberate language design yield tools that define generations of programmers.
The takeaways are clear: Productive developer tools require co-design of languages and IDEs; true adoption and innovation happen in small, expert teams; and the rise of AI changes the craft but only reinforces the value of thoughtful design and human expertise in the loop.
For more, check out The Pragmatic Engineer Deep Dives.