
John Hennessy
John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. Hennessy served as President of Stanford University from 2000 to 2016, and co-founded MIPS Computer Systems and Atheros Communications. Currently he serves on the board of the Gordon and Betty Moore Foundation and is Chair of the Board of Alphabet. John received the 2017 Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. In this episode, he joins Kevin Ball to talk about his life and career. Kevin Ball, or KBall, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action discussion group through Latent Space. Check out the show notes to follow KBall on Twitter or LinkedIn, or visit his website, KBall LLC.
Kevin Ball
John, welcome to the show.
John Hennessy
Thanks. Delighted to be here.
Kevin Ball
Yeah, I'm excited to dig in. So, I mean, somebody with your background, you get introduced all the time. It kind of speaks for itself and you've got so many things you could say. I'm actually curious, if you were designing your own introduction, how would you introduce yourself to an audience?
John Hennessy
That's a good question. I'd say I have been extremely fortunate to have entered the computer field in its early days and to be able to do incredible things because of the remarkable advances that have been made in the field. And that's been just incredibly exciting. And I'm so glad I decided to be a computer person.
Kevin Ball
It has definitely been a wild time in the computer world. Though, interestingly, you started early, but RISC is still running. I mean, with RISC-V, that's kind of the hot topic now. What are your thoughts on, like, what is our continued bandwidth in the RISC space?
John Hennessy
Yeah, I think what happened is interesting. In the end, what really made the RISC ideas take off was the demand for more efficiency. And that comes in a number of different ways. A lot of the devices we use now are battery powered, not plugged into the wall, so energy efficiency is really important, and RISC is much better at that. But also, computing has gone ubiquitous and there are computers everywhere, right? Look at how many computers are inside a brand new car; there are 50, 100 microprocessors in there. So the price does matter. All of a sudden we're not just building chips that cost several hundred dollars each, we're building chips that cost $10 or $20 each. So the whole efficiency thing won out with RISC. And now even in the large data centers, you see the hyperscalers building out of RISC chips, because energy consumption is a big part of the bill they pay in their data centers. So they worry a lot about this energy efficiency issue. In the end, that was the key insight of RISC: we knew how to build processors that were much more efficient in their use of silicon area and their use of power. And that's been a winning combination now for probably the last 15 or 20 years, as we switched to a new computing world from the old world of desktops and things plugged into the wall.
Kevin Ball
Yeah, well. And as you highlight, our constraints may have shifted, but efficiency is still super important.
John Hennessy
Yeah.
Kevin Ball
So we've been on this long run for a really long time. Moore's Law carried us for so long, with each generation of chips getting smaller. Even if the pace is slower, it's still mind-boggling how far we've come. But I feel like we're kind of seeing the end coming, and we're having to embrace something different.
John Hennessy
Yeah, we're plateauing. First of all, Moore's Law isn't a law; it's a kind of objective for the industry to scale against. But we see it slowing down now. Let me point out how it's slowing down. If you look over the last 50-plus years since Gordon Moore made his prediction, we've scaled by a factor of about 10 million, and we're off from Moore's projection by about a factor of 25. But the gap is getting bigger; it's really in the last few years that it's opened, and it's opening more and more. And so that's going to demand that we rethink computation, that we think about efficiency, that we think about different ways of doing things well.
Kevin Ball
And I think one of the things it's pushing people towards is more heterogeneous computing, right? Less...
John Hennessy
Oh, absolutely. Look at the Apple chips: there are multiple processors, but there's a high-performance processor, there's a low-power processor, there's an AI processor, there's a signal processor. So we're moving more and more toward that, and it's again this drive for efficiency; using the silicon and the power efficiently both matter.
Kevin Ball
Yeah. So I'm interested in your thoughts on what that ends up looking like for a software development team, right? This is Software Engineering Daily, so we're writing software, not just the hardware piece of it. How does that heterogeneity play out in the tools we use to write software?
John Hennessy
Yeah, I think it basically requires more work on behalf of the programmers to really get a good fit between the application and whatever processor they're using in a heterogeneous world. For better or worse, that problem got pushed off to the software, beginning when we went to multicore. The reason we went to multicore is that we didn't know how to build faster single-thread processors. We didn't have any idea how to do it; we were at a dead end. Over a period of 15 or 20 years we used up all the good ideas, mostly instruction-level parallelism, and they ran out of steam. So then we had to go to multicore. Of course, when we go to multicore, the programmers have to find the parallelism and decide what threads to run where. And now as we've gone heterogeneous, as you alluded to, things get even more tricky, because you've got to figure out not only which threads you can run in parallel, but which thread should run on which processor. So I think, for better or worse, programmers are going to be responsible for more of the efficiency going forward. It's funny: many years ago I was talking to Maurice Wilkes, who was the last living pioneer from the golden age, from the ENIAC age in the post-World War II era. And I said to him, Maurice, what's going to happen if we can't continue to build hardware that's faster and faster? We've been going one and a half times every year; what's going to happen when this slows down? He said, programmers are going to have to get a lot smarter and a lot more careful about the code they write. And I think he's right. That's what we're seeing now.
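The shift described above, where programmers rather than the hardware have to find the parallelism and decide where work runs, can be sketched in miniature. This is an illustrative toy, not code from the episode; the function names and the chunking strategy are my own, and real heterogeneous scheduling is far more involved:

```python
# Toy illustration: the programmer, not the hardware, finds the
# parallelism by splitting work into chunks and choosing worker counts.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum_of_squares(chunk):
    # The compute kernel; hardware no longer speeds this up "for free".
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Explicit decomposition: how to split the data, and how many workers
    # to use, are now the programmer's decisions. (Threads are used here
    # for simplicity; CPU-bound Python would use processes to sidestep
    # the GIL.)
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum_of_squares, chunks))
```

The split factor and worker count here are explicit programmer choices, which is exactly the responsibility the multicore era pushed onto software.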
Kevin Ball
In some ways it actually reminds me a little of what happened when RISC came in, where you were saying: this used to be in the hardware, you have these complex instructions doing all this stuff, and, well, let's make software do it.
John Hennessy
Yeah, I think that's right. I think there is a parallel. Part of what drove RISC, at least from my research group, was the notion that you should never do anything at runtime if you can do it at compile time. And a lot of what was going on were things we could do at compile time. So rather than reinterpret complex instructions, compile down: get rid of a layer of microcode and compile right down to the hardware primitives. Nowadays that's changed in that the processors have gotten a lot more complicated, and the memory hierarchies are getting more complicated. If you look at GPUs or TPUs or anything like that, there's a lot more focus on controlling the memory system from the software rather than the hardware. Today that happens with a combination of smart compiler tools and people who understand how to write their algorithms so that they compile well for those kinds of machines. It's that combination. So it requires, I think, a level of understanding of the underlying hardware mechanisms to become a good programmer who can program something efficiently.
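The rule of thumb above, never do at runtime what you can do at compile time, can be transposed into a small Python sketch. Python has no real compile time, so module-load-time table construction stands in for it; the names here are hypothetical, chosen for illustration only:

```python
# Precompute once, ahead of time: the analogue of compiling work away.
def _build_popcount_table():
    # One-time cost, paid before any runtime query arrives.
    return [bin(i).count("1") for i in range(256)]

_POPCOUNT = _build_popcount_table()

def popcount32(x):
    # Per-call work shrinks to four table lookups and three adds,
    # instead of looping over all 32 bits on every call.
    return (_POPCOUNT[x & 0xFF]
            + _POPCOUNT[(x >> 8) & 0xFF]
            + _POPCOUNT[(x >> 16) & 0xFF]
            + _POPCOUNT[(x >> 24) & 0xFF])
```

The same trade, moving interpretation work from every execution to a single earlier step, is in the spirit of what dropping a layer of microcode in favor of compiled hardware primitives bought RISC.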
Kevin Ball
Yeah. When single-threaded performance just kept getting better and better, we didn't have to worry about it. You could almost completely disconnect the hardware teams working on their side from the software teams working on their side; so long as you end up generating the bytecode, it's going to work and it's going to keep getting faster. I don't think we're in that world anymore.
John Hennessy
We're definitely not in that world anymore. You can't just rely on the hardware guys to make things faster, because it's not going to happen. There were a lot of incentives not to rewrite software, because a year later it was going to run 50% faster. Well, no more; now a year later it runs 5% faster, if you're lucky. So you're going to have to find ways to rethink that interface. And I think it's interesting, because it's really about how you think about the interface between the hardware and the software system. How do they come together? How much does the programmer have to know? What's the compiler responsible for? How does that all fit together to deliver performance?
Kevin Ball
I saw in one of the talks that you gave that when the first RISC revolution was happening, one of the challenges was that the tooling didn't exist. And in fact, the tooling was being generated inside of academia, because companies weren't doing it. What do you feel is the missing layer of tooling for this generation, now that we're moving into the heterogeneous world?
John Hennessy
You know, I think we still have this gap when you move to these domain-specific architectures, things that are tailored for particular classes of algorithms. Today that's lots of machine learning things, obviously, but a wider range of things too: graphics clearly has this, and lots of signal processing has this special-purpose aspect that can be captured. The key thing is to figure out: can you build an architecture that does really well on these kinds of applications, but is sufficiently flexible to allow a wide range of applications? And then figuring out how to get that match between what the hardware can do well and what algorithm the programmer really wants is still an open issue. We've got it for some things. But if you look at lots of the things we run, whether they're on graphics units or on something doing machine learning, they're doing linear algebra problems, and they're comparatively well structured. Even sparse linear algebra is comparatively well structured compared to a random piece of code you want to run. So figuring out how to align these things, how general these architectures can be, how wide a range of things they can run, is still a critical open problem. And the tools will determine that to a large extent: how to get that interface between the hardware and the software to work.
Kevin Ball
Now, certainly when you're shipping graphics units and you're manufacturing tons and tons of these, you need that level of generality. But another thing that's kind of interesting is that with cloud FPGAs, you can sort of create your own architecture for your problem space. Is the efficiency good enough there, or when you go to an FPGA, is that still leaving too much on the table?
John Hennessy
Yeah, you can do this. There's an efficiency loss that's pretty significant. But if there's a lot of gain from the flexibility that's achieved, and you can really use that flexibility, change the structure of the FPGA to do some other problem, you can imagine situations where it makes sense, particularly when the algorithms are changing quickly. Rather than build an architecture that's adapted to a particular class of algorithms, it might be smarter to go to an FPGA structure that allows the algorithms to continually evolve and still match pretty well to the new algorithm. Some people at Microsoft have done experiments with this kind of approach; they've probably moved the furthest. But lots of hardware developers use FPGAs as starting points now anyway, to get something reasonable that works, before they go to a more customized design that's going to cost not only a lot more to design, but a lot more to fabricate as well.
Kevin Ball
Yeah. Another area here that I think is interesting, going off of something I saw you talk about in 2023, is essentially treating machine learning as a way of programming: you're still producing software, but you're programming it with data rather than with code. I'm curious how you think about that in relation to efficiency. On the one hand, it's almost as flexible as you can get: give one of these LLMs some text and you'll get text out, it's amazing; or you train it on some other data domain. But it's also massively expensive. So how are you thinking about the role of programming with data in this ecosystem?
John Hennessy
Yeah, you're right that programming with data is the right way to think about it. You've shifted to the use of data for programming. But then of course the cost is the training, particularly if it's a large data set you need to train on; that's what's really costly, depending on how big the model is. One of the interesting things we've seen is that some of these smaller models, trained more carefully and inspired by a large model, have achieved incredible results. Okay, so the giant models do these incredible things, but a model that's a lot smaller, say a billion parameters versus 500 billion parameters, is able to do pretty well for lots of applications. So one of the things I think we're going to see is models for endpoints. For example, on my phone, I want a machine learning model that'll help me with text and search and some other things, but I'm not going to put a model on it that has 500 billion parameters. So I'm going to have a small model on the phone that does a lot of things, and probably one of the outputs of that model is: I'm not sure, call the big model in the cloud. We're going to have to figure out how to make that work in a way that's appropriate, seems smooth, and works well for people. But I think we're going to see more and more of that, particularly smaller LLMs adapted to particular domains, whether it's inside a camera, inside a phone, or inside some other device that may be on a lot of the time.
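The endpoint pattern Hennessy sketches, a small on-device model whose possible outputs include "call the big model", could look roughly like this. Everything here is a stand-in: `run_small_model` and `run_big_model` are hypothetical functions, not any real API, and a real system would use calibrated model confidence rather than this toy heuristic:

```python
def run_small_model(prompt):
    # Stand-in for an on-device model: returns (answer, confidence).
    # Toy heuristic: pretend short prompts are "easy" for the small model.
    confidence = 0.9 if len(prompt) < 20 else 0.3
    return f"small:{prompt}", confidence

def run_big_model(prompt):
    # Stand-in for a large model running in the cloud.
    return f"big:{prompt}"

def answer(prompt, threshold=0.7):
    # Route: answer locally when confident, otherwise defer to the cloud.
    ans, confidence = run_small_model(prompt)
    if confidence >= threshold:
        return ans
    return run_big_model(prompt)
```

Making the handoff smooth, in Hennessy's words, is the open problem: the threshold, the latency of the cloud round trip, and the user experience of the switch all have to be tuned together.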
Kevin Ball
So, yeah, I think that's a fascinating domain. And the more you can constrain the problem, the more you can fine-tune the model to do exactly that. I love that. So, curious: in that same interview I'm thinking of, you predicted that LLM-enabled technology would be truly useful in a year or two, and I think this was the end of 2023. I feel like this was the year that LLMs broke through for software development, right? Coding assistants with LLMs have gone from a niche that a few people were exploring to just exploding.
Kevin Ball
What other domains are you seeing that type of breakthrough in?
John Hennessy
Well, coding is certainly one, you're right. And coding is amazing because you wouldn't code anymore without an LLM assistant of some sort, right? You wouldn't do it, because the leverage you're going to get from it for lots of code is just so high. Things like abstract data types and various forms of polymorphism delivered lots of programmer productivity; well, this is delivering another big jump in productivity. I think we're seeing it in writing. We're seeing it around things that help you digest complex and large documents. I'm thinking of something like NotebookLM, where you can ask it: tell me the key things I need to understand in this hundred-page manuscript. What are the key insights? And you get reasonably good answers out of these things. Amazingly good. And for college instructors, you can say to it: design me five test questions based on this material. And you can get great things out of it. So I think we'll see a lot of help there. One of the things instructors generally hate to do is grading. All teachers hate the grading part; they like to see their students succeed, but the drudge work of grading, nobody likes. I think now we've seen some systems based on LLMs that can grade as well as people, and I think that'll be a big improvement. I'm very big on this idea of using machine learning and AI to eliminate human drudgery. We're not going to completely replace jobs, but we're going to replace some of the stuff that people really don't like doing in their jobs, the parts that are more rote, more straightforward, that we could do with an LLM. And I think we're going to see more and more of this.
Kevin Ball
Yeah, I completely agree. And I think it's a really interesting problem domain, because sometimes you have to completely reshape how you're thinking about it. Take the coding-with-LLMs example.
John Hennessy
Right.
Kevin Ball
You have to shift how you're attacking your software problems, but what you get out of it is the elimination of a lot of drudgery. I think one of the key problems, and I'm curious where you're seeing this, is almost what you talked about there: when does the small model call out to the big model? Similarly, we need to answer the question of when the big model calls out to the person and says: you know what, I can't do this, I need you to get involved.
John Hennessy
Yeah. I think one thing we're going to have to do in all these LLM-based systems is tune them so that they can say: I don't know. Not "my best answer is X," but "X might be my best answer, but I don't have a high degree of certainty in that." And we've got to get there. There are these examples you hear about periodically of people using LLMs for writing, and the LLM making up citations to things that don't exist. It should never do something like that, just as it shouldn't write a piece of code that it really doesn't have high confidence is the right way to write the code. And as my colleague Dan Boneh pointed out, one of the problems with these coding tools is that they'll sometimes write a piece of code that has a big flaw in it, and they won't know it's got a flaw. Of course that's tricky, because as a programmer, reading somebody else's code, whether it's another person's or a machine's, and figuring out whether it's right is a hard task. But that's the sort of thing we're going to have to navigate through, and try to make the systems better at being more cautious when they don't have high confidence in what they're predicting.
Kevin Ball
Absolutely. So text and LLMs and coding have been getting a lot of buzz, and image and video get a lot of buzz. But in some ways I'm more excited about things like AlphaFold, or other things like it that are not in the text domain. I saw that one of the DeepMind founders won the Nobel Prize for Chemistry this year, and I think the Nobel Prize for Physics was also in a machine learning-related domain. I feel like those are the dimensions that are going to completely change the world. And I'm curious.
John Hennessy
Yeah, science, I think, is going to change dramatically. I think these machine learning tools are going to be the new tools of science, as important as microscopes have been, as important as the various tools for looking at the structure of molecules and DNA have been. And this is already happening. The chemistry example is a great one: AlphaFold has discovered more protein structures than 50 years of protein structure work discovered. That's an amazing result. So I think we're going to see more and more of this. People are attacking all kinds of problems that are computationally intractable if you do them from basic scientific principles, but where a machine learning system can be used to reduce the search space so dramatically that you can get the answer. You're still doing the kind of physics simulations we traditionally do in much of science, but you're using them over a much smaller domain than you would have before. You figure out the basic structure of the protein by knowing what other proteins with similar molecules, similar atoms, look like, and then you use that to guide the process of getting the detailed structure. It results in significant improvements in performance, in the ability to do much, much more. We're seeing this in lots of science. We're seeing it in astrophysics, where people look at the structure of galactic systems and understand how they're evolving. One of the things I thought was amazing: there are people working on using this to understand turbulent flow, one of the hardest computational problems we do. Solving that problem is extremely difficult from basic principles. On the other hand, you might be able to use these tools to get the basic structure and then use simulation to get the accuracy you really need. Or look at weather prediction. An amazing result: the DeepMind people have beaten the best weather prediction system out there, one developed over a period of 20 years, in terms of computational ability; they're able to outperform it. So I'm really excited about what this is going to do for science.
Kevin Ball
Yeah. So there's something you talked about there that I think is a really interesting big-picture theme, which is that these generative systems don't have to get to the right answer; they just have to narrow the search space. We have all sorts of domains in which we have formal validation that works when you have an answer, or a small number of answers, but where exploring the entire search space is totally intractable. If you can use one of these systems to narrow you in, you can then dump the result into formal validation. I think mathematics is another interesting area here: we have formal validation checkers, but proof generation is hard, right? So use an LLM of some sort to generate candidate proofs, narrow your search space, and then dump them into a formal validator. With that model in mind, I'm curious, and you probably know more than I do, about what the different domains are. We talked about a few: weather, chemistry and protein folding. What are some other domains this can open up for us, in terms of narrowing the search space down so that we can dump it into either formal validation or human validation?
John Hennessy
Well, lots of classic problems are NP-complete, right? We don't know how to do them efficiently. If you can narrow the search space, you can come up with an answer. It may not be the optimal answer, but it may be very close to optimal, and that could certainly be appropriate. There are lots of interesting problems that reduce down to these very fundamental computational problems. For example, generating test patterns for software or hardware, generating sets of tests that will test everything: that's a really hard problem if you have to do it completely. But if you were guided by a system, you might be able to narrow the range of it, so you could get a reasonable number of tests that would adequately test the system. And I think we'll see other examples like that where, as you said, you narrow the search space. You're doing a complex optimization problem, but if you can narrow the search space, then you can get to something that is close to the optimal solution, if not perfectly optimal, very quickly.
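The narrow-then-verify pattern works on NP-complete problems because they have exactly this shape: finding a solution is expensive, but checking one is cheap. A minimal sketch, using subset sum and a random proposer as a stand-in for a learned model (all names here are hypothetical, for illustration only):

```python
import random

def verify(nums, subset, target):
    # Checking a candidate is polynomial, even though search is exponential.
    return sum(nums[i] for i in subset) == target

def propose(n, rng, k=500):
    # Stand-in for a learned proposer that narrows the search space;
    # here it just yields random candidate subsets of indices 0..n-1.
    for _ in range(k):
        yield [i for i in range(n) if rng.random() < 0.5]

def solve(nums, target, seed=0):
    # Generate candidates, keep the first one the cheap checker accepts.
    rng = random.Random(seed)
    for candidate in propose(len(nums), rng):
        if verify(nums, candidate, target):
            return candidate
    return None
```

A real system would replace `propose` with a model trained to emit promising candidates; the polynomial-time checker is what makes a generator that is not always right safe to use.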
Kevin Ball
Yeah, I love that as a kind of idea generator for domains to attack: anywhere you have an NP-hard problem but can validate any particular solution, this might be a useful technology to try applying.
John Hennessy
Yeah, agreed.
Kevin Ball
All right, so bringing this back around to software engineering and the tech industry: we're obviously in a very tumultuous time, with lots of things changing. How do you see all of these breakthroughs impacting the tech industry and the world of software development over the next few years?
John Hennessy
So I think one of the things we're seeing is a kind of back-to-the-future evolution in the tech industry, in the following sense. If you look at the tech industry prior to about 1985 or 1990, there was a lot of vertical integration. IBM did everything: they designed their own chips, they designed their own disks, they were vertically integrated all the way up through the entire software stack. And then, particularly with the PC and the emergence of shrink-wrap software, the industry moved to a very horizontal structure. You had Intel down at one layer, with the disk guys over here at another; on top of that you had Microsoft, and on top of that you had the application layers. Now, all of a sudden, because of the need to vertically integrate much more, to get the applications closer in touch with the hardware, we're seeing a reintegration in the vertical direction. Microsoft has certainly done this; Google has done this. There's vertical integration now across those layers, and even a company like Nvidia integrates CUDA, and the software work around CUDA, into the hardware and the design of next-generation GPUs. So there's a lot more communication up and down that stack, which I think is changing the way we think about programming and the industry going forward. And I think it's fascinating, because it leads to a level of collaboration across these boundaries that keeps the field interesting and exciting.
Kevin Ball
Yeah.
Kevin Ball
Do you see startups also doing that level of vertical integration?
John Hennessy
I think so. A bunch of the startups are trying, to the extent that a small company can do much of anything, because it has to focus, but they are certainly taking advantage of that integration across the stack to try to achieve something. And I think we'll see more and more of that. The number of startups is just insane right now, partly driven, obviously, by this AI revolution that's occurred, the discontinuity it's created, and the opportunity people see. But I think that's an exciting thing about our industry: we're constantly reinventing ourselves, and new things are coming along and changing the industry. That's what's made it a really fascinating field to be in.
Kevin Ball
So we've talked a lot about machine learning, and about some of the things I've seen you talk about in the past. I'm curious, looking forward, as we enter 2025: what are you most excited about that's coming into the industry right now?
John Hennessy
So I think this switch in how we think about programming models is a really crucial one, along with how we think about its applications. This is still relatively new technology; it's still going and changing at an amazing rate. So I think there's a lot of excitement there, but there's still a big gap to close. If you look at the effort that has to go into training a new model, the computational cost and energy that go into it, and compare it to how a baby learns to talk, for example, the amount of energy consumed to train an LLM versus to train a baby is gigantic. So there's obviously a large gap that we still don't understand. The big breakthrough in machine learning happened because we realized that creating intelligence was about learning; it wasn't about memorizing facts, it was about learning things from data and experience. We've learned that, but we're still not building learning machines that are terribly efficient, at least if we compare them with what's in our cranium up here; we're much more efficient learning machines. Now, can we get more inspiration from the structure of human brains that we can use in these systems? I think lots of people are playing at the edge of this. I don't think anybody's gotten a breakthrough yet, but we'll see; somebody may.
Kevin Ball
Yeah, that is very interesting. I think there are a couple of immediately interesting threads to pull on there. One: we are continuously learning, rather than separating learning and inferring. I don't know what that looks like in the machine world, but I think that's an interesting difference.
John Hennessy
Yeah. And the amount of data we use to train these systems is far more than what people end up training on. If you look at AlphaZero, which plays chess, it learns from just understanding the movement of the pieces, with no strategy. It had to play 90 million games to get up to a really superb level. No human player has to play nearly that many games to reach master level. Now, it's a bit of an apples-and-oranges comparison, because the way it learns is very different from the way our brains learn. But maybe we can get some inspiration from the way our brains learn that will improve the way we train and create these machine learning models.
Kevin Ball
Yeah, well, and I think to your point, we've shifted into learning instead of writing down rules and memorizing facts. However, as humans, we do create rules in ourselves, and we then operate at a higher level with those rules. And I wonder what that looks like in the machine learning world. Maybe it's not at the level of the model; maybe it's at the level of the system the model is embedded in. I've seen some fascinating things with using LLMs to generate tools for themselves, which they then learn how to use, so you can mix unstructured learning with structured code or logic. But yeah, it's a fascinating...
John Hennessy
Domain. And it's a different domain, in that if you look at the famous book Thinking, Fast and Slow, which talks about how brains operate: we have the ability to do certain things very quickly, and other things we've got to calculate, we've got to do through a more deliberative process. But our LLMs are not at that level. They have kind of one way to do it, right? They take this model, which in many cases is really big, they throw the data in, and they get the answer out. But a lot of the time they probably wouldn't need that complex a model. Now, whether you can build some kind of system that operates the way the brain operates, in that it only has to use a small amount of its capacity to do certain things, and calls on a deeper, more complex model but integrates it in a way that it recognizes internally, right, which is what we do in our brains, maybe something like that could work.
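The routing idea Hennessy describes here, use a small, cheap model for most queries and escalate to a larger one only when needed, is sometimes called a model cascade. A minimal sketch of the pattern follows; the models, function names, and confidence threshold are all hypothetical stand-ins, not anything from the episode:

```python
# Hypothetical "thinking fast / thinking slow" model cascade:
# a cheap model answers most queries, and low-confidence queries
# escalate to an expensive model. Both models are stub stand-ins.

def small_model(query: str) -> tuple[str, float]:
    """Fast, cheap stand-in model: returns (answer, confidence)."""
    if query in {"2+2", "capital of France"}:
        return ("known answer", 0.95)   # easy query: high confidence
    return ("guess", 0.30)              # hard query: low confidence

def large_model(query: str) -> str:
    """Slow, expensive stand-in model: always produces an answer."""
    return f"deliberated answer to {query!r}"

def cascade(query: str, threshold: float = 0.8) -> tuple[str, str]:
    """Route a query: keep the fast answer if its confidence clears
    the threshold; otherwise escalate to the slow model."""
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return ("fast", answer)
    return ("slow", large_model(query))

print(cascade("2+2"))            # handled by the fast path
print(cascade("prove P != NP"))  # escalates to the slow path
```

The hard part in practice, which the sketch glosses over, is exactly what Hennessy points at: getting a reliable confidence signal so the system recognizes internally when it needs the deeper model.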
Kevin Ball
All in all, a fascinating time to be alive and in the tech industry. One other area related to this I'm curious about your thoughts on. LLMs have dramatically scaled up the amount of code any individual software engineer can write, and I've been asking the question: what does the software industry look like now? Is it still a great place to work? Are there still going to be lots of programmers in 10 years? All of these different dimensions. You're seeing this at the scale of an Alphabet or a Google and how they're navigating it. What is your view on the future of a career in software in a world where we have these models to write software for us?
John Hennessy
Yeah, I think it's a good question. Here I draw on the lesson of history. If you look at how much more productive a programmer was even without LLMs, say in the era just prior to LLM copilots, and ask how much more productive that programmer was than programmers 50 years earlier, let's go back to the 1960s, when they were writing in assembly language: programmer productivity improved by leaps and bounds, certainly more than an order of magnitude, maybe as much as two orders of magnitude over that time. But the number of programmers in the world went up by a lot, because there was a creation of lots more things that we could do with computers. And I think that's what will be key here. If we can be creative about creating new things, then the demand for programmers will continue to go up. Now, programming skills will change, and how programmers work will change, and individuals are going to have to learn new ways to do that work and get efficient with new tools. But I think the industry will still be an exciting place to be. There are other parts of the employment sector where LLMs are probably going to reduce employment over time, in the same way that lots of people used to be typists or data-entry clerks and we don't have many people doing that anymore, because that process has been automated. So there will obviously be some tasks that get automated, where there isn't an obvious demand for that skill anymore. And the challenge there is going to be how we retrain and prepare people for new careers when that's necessary.
Kevin Ball
If you were to advise people early in their careers in software, and you said, okay, what it means to be a software engineer is going to change, it's going to look different, where would you recommend they focus their time and energy now?
John Hennessy
Well, I've always been a believer that building a good core foundation is a good starting point. And in the software industry, in computer science, building a strong foundation is crucial, because some of the problems are not going to change. How do you test? How do you debug? How do you know the code is really well written? How do you think about all the software engineering techniques that we use? How do you think about issues like security, which has become so much more important than it was in an earlier time? So I think building a strong foundation is really going to be crucial. The tools that you learn initially, say while you're a student going to school, those are going to change, and they change dramatically. Students who graduated 20 years ago are programming in something completely different from what they were using then. So there's going to be that kind of evolution, and mastering it means you have to be able to learn new things. I think part of a good education is that it teaches you how to be a lifelong learner. And in a field that moves this quickly, you have to be able to learn new things.
Kevin Ball
Absolutely. Well, we've covered a lot of different things. We're getting close to the end of our time together. Is there anything we haven't talked about that you would like to touch on for folks?
John Hennessy
What do I worry about? I do worry that while there's lots of good to be done with this new generation of tools, there are also ways in which they can be misused. Software is malleable; it can be used for lots of different things. How do we as a society really ensure that the technology we're developing does good in the world, really does the things we want it to do, and constrain, to the extent we can, misuse of that technology? I think we're going to have to worry about that. I worry that we've become so cyber-centric in our lives that we have to worry a lot more about security and protection in our cyber systems, and that's going to require a level of diligence by software programmers who understand these things. I think that's really different. But I think it's an exciting time. One of the amazing things, when I think about being in this field for 50-plus years, is getting to see it reinvent itself all the time. Something new comes along, new ideas come along, and we see this burst through. I mean, this AI revolution is amazing. People were working on these various AI technologies for a long time; they were making progress, but they were making slow progress. And then all of a sudden, boom, a breakthrough. We've seen that a number of times in the history of the field, and I think it's what's kept it so interesting as a discipline and a field in which to work.
Kevin Ball
Awesome. I think that's a great close. Let's call that a show.
John Hennessy
Yeah. Okay.
Podcast Summary: Software Engineering Daily - Turing Award Special: A Conversation with John Hennessy
Release Date: April 3, 2025
In this special episode of Software Engineering Daily, host Kevin Ball engages in an enlightening conversation with John Hennessy, a renowned computer scientist, entrepreneur, and academic. John Hennessy, a Turing Award laureate, shares insights from his illustrious career, delving into topics ranging from computer architecture to the evolving landscape of software engineering in the age of artificial intelligence.
RISC (Reduced Instruction Set Computing) has remained a cornerstone in modern computing, and John Hennessy provides a comprehensive overview of its continued relevance.
Energy Efficiency and Ubiquity: Hennessy emphasizes that the core strength of RISC lies in its efficiency, making it ideal for battery-powered devices and ubiquitous computing environments.
"RISC is much better at energy efficiency, which has been a winning combination for the past 15 or 20 years as we transitioned to a new computing world." [02:26]
Cost-Effectiveness in Mass Production: The affordability of RISC chips, costing significantly less than their predecessors, has facilitated their widespread adoption in various industries, including automotive and data centers.
Heterogeneous Computing: The conversation touches upon how RISC principles influence modern heterogeneous computing architectures, integrating multiple processor types to optimize performance and efficiency.
As Moore's Law slows, the industry faces new challenges that necessitate a pivot towards diverse computing approaches.
Moore's Law Revisited: Hennessy notes the deceleration in the pace of Moore's Law, highlighting that the gap between predicted and actual scaling is widening.
"Moore's Law isn't a law; it's an objective for the industry to scale against. We're now seeing it slowing down, especially in recent years." [04:19]
Embracing Heterogeneity: The shift towards heterogeneous computing involves integrating various processor types (e.g., high-performance, low-power, AI processors) within a single system to meet diverse application needs efficiently.
"We're moving more and more towards heterogeneous systems, driven by the need for efficiency and the diverse requirements of modern applications." [05:08]
Impact on Software Development: This transition demands greater effort from programmers to optimize applications across different processor types, increasing the complexity of software development.
"Programmers are going to be responsible for more efficiency, figuring out not only parallelism but also optimal processor allocation." [05:45]
The evolution toward heterogeneous architectures introduces significant challenges in software tooling, which John Hennessy explores in depth.
Historical Parallel with RISC: Drawing parallels with the initial RISC revolution, Hennessy points out that the lack of existing tools historically was mitigated by academic efforts.
"Back when RISC was emerging, tooling was being generated inside academia because companies weren't doing it." [09:50]
Current Tooling Gaps: Today, as domain-specific architectures proliferate (e.g., GPUs, TPUs), there's a pressing need for advanced compiler tools and programming frameworks that can effectively bridge the gap between complex hardware and software demands.
FPGA Flexibility vs. Efficiency Trade-off: The discussion includes the role of FPGAs (Field-Programmable Gate Arrays), which offer flexibility at the cost of efficiency, making them suitable for rapidly evolving algorithms but less so for mass production.
"There's an efficiency loss with FPGAs, but their flexibility makes sense when algorithms are changing quickly." [12:00]
John Hennessy delves into the transformative impact of machine learning (ML) and Large Language Models (LLMs) on various domains, including software development and scientific research.
Programming with Data: Hennessy discusses the paradigm shift from traditional coding to programming with data, highlighting both its flexibility and associated costs.
"Programming with data is the right way to think about it, but the cost of training large models is massive." [13:48]
LLMs in Software Development: The conversation highlights the surge in using LLMs for coding assistance, which has dramatically increased programmer productivity by automating routine tasks.
"Coding is amazing because you wouldn't code anymore without an LLM assistant of some sort." [15:20]
Broader Applications in Science: Hennessy underscores how ML tools like AlphaFold revolutionize scientific fields by narrowing search spaces in complex problems, enabling breakthroughs in protein folding, weather prediction, and astrophysics.
"Machine learning tools are the new tool of science, enabling us to tackle problems that were computationally intractable before." [20:17]
Looking ahead, Hennessy provides a visionary perspective on the tech industry's trajectory and offers advice for aspiring software engineers.
Vertical Integration Resurgence: Contrary to the horizontal integration trend of the past few decades, there's a reintegration towards vertical integration in the tech stack to optimize performance and collaboration across layers.
"We're seeing a reintegration in the vertical direction, leading to closer collaboration and more integrated systems." [25:13]
Sustained Demand for Programmers: Drawing from historical trends, Hennessy asserts that despite automation in certain tasks, the demand for creative and skilled programmers will continue to grow.
"If we can be creative about creating new things, then the demand for programmers will continue to go up." [33:32]
Advice for Aspiring Engineers: Emphasizing the importance of a strong foundational knowledge, Hennessy encourages continuous learning and adaptability as essential traits for future software engineers.
"Building a strong foundation is crucial. Mastering how to learn new things is part of a good education." [35:31]
In his concluding remarks, Hennessy expresses concerns about the ethical implications of rapidly advancing technologies.
Misuse of Technology: He warns about the potential for misuse of powerful tools like LLMs and the importance of societal measures to ensure technology serves the greater good.
"There's lots of good to be done with these new tools, but there are also ways to misuse them. We need to ensure technology does good in the world." [36:50]
Cybersecurity Vigilance: With increasing reliance on cyber systems, Hennessy highlights the necessity for heightened security measures and diligent programming practices to protect against cyber threats.
"We have to worry a lot more about security and protection in our cyber systems, requiring diligence from software programmers." [36:50]
Optimism for Innovation: Despite the challenges, Hennessy remains optimistic about the field's ability to continuously reinvent itself and drive meaningful advancements.
"We've seen the field reinvent itself multiple times, and this AI revolution is another exciting chapter." [38:31]
John Hennessy's conversation with Kevin Ball offers a profound exploration of the current and future state of software engineering and computer science. From the enduring principles of RISC architecture to the transformative potential of machine learning and LLMs, the discussion underscores the dynamic interplay between hardware innovation and software development. As the tech industry navigates challenges like the plateauing of Moore's Law and the complexities of heterogeneous computing, the emphasis on foundational knowledge, continuous learning, and ethical responsibility emerges as pivotal for shaping the future of technology.
For those interested in the evolving landscape of software engineering and the integration of advanced computer architectures with modern software practices, this episode provides invaluable insights from one of the field's foremost experts.