
Loading summary
A
You're listening to TechTank, a BI weekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias and algorithms to the future of work, TechTank takes big ideas and makes them accessible. Welcome to the Tech Tank Podcast. I am co host Nicole Turner Lee, the Director of the center for Technology Innovation at the Brookings Institution, founder of the AI Equity Lab, and editor in Chief of the Tech Tank blog. It's been three years since the release of Chat GPT and everybody's trying to figure out what to do. Probably you that's listening and institutions of higher education are still navigating both the opportunities and challenges of AI for their students, administrators and faculty. What's interesting, I think, is that over 85% of bachelor's, masters and doctoral students use AI for educational purposes. Things like research or homework assistant or statistical analysis or outlining, even though many of them report low AI literacy levels. That's interesting, right? They say they're using it, but maybe not to the maximum ability and perspectives in academia towards AI vary widely. Professors have taken approaches towards AI in the classroom, with some integrating it into assignments and others outright banning them. And skeptics have raised concerns about academic integrity and the erosion of students critical thinking skills. Which is why we're having this conversation today. Because I think there needs to be this broader conversation of one how higher education is instituting this into their ecosystems, but more importantly a conversation around why we do need to actually make sure this AI fluency for workforce readiness as well as personalized learning. So there are a lot of colleges and universities developing AI tools and guidelines, but I know one and one individual in particular that's doing a heck of a job on it, and that's my friend Lev Gonik. He is the Chief Information Officer at AB Arizona State University and he's here to discuss his recent work designing and managing digital enterprise infrastructure at asu. Lev also served and this is kind of how I knew him, dating way back when we were both in the digital divide space. He was a co founder and CEO of Digital C, a nonprofit focused on digital opportunity and innovation as well as in a prior role he was a Chief Information Officer at Case Western Reserve University. He's an ultimate great guy, really clear about how to do this. I'm going to make sure he tells you a lot but doesn't give away the store because he is one to put on your list of people to watch. Welcome Lev. Thanks for joining me my friend.
B
Great to be with you Nicole. Great to be with you.
A
I mean, I always appreciate seeing you in person when I come out to asu and I always appreciate hearing your voice. So, you know, thanks for taking the time. So, you know, I laid out a lot and I probably gave you more compliment that you actually expected to receive, but that's me. And what I love about this conversation we're going to have today, you know this landscape better than anybody else from both the enterprise side as well as the community side. Because as I mentioned, you and I have done a lot of work to sort of surface digital equity as one of the prime reasons why we both stay so involved. And then technology. I laid out what the landscape looks like to most of the world that's sort of thinking about AI integration into higher education. But what are you seeing when you think first and foremost about AI within these institutional contexts? So give me the broad scope and then I'm going to have you brag about some of the stuff you're doing at asu.
B
Maybe the place for me to start is where you and I connect. I think we have both been interested in and committed to digital inclusion and trying to accelerate digital inclusion. And both you and I started even before there was broadband access. We were all involved in actually just trying to make sure communities had access to computing. And I was up in Canada teaching at the time in Waterloo, and these new fangled computers in the early 90s showed up and they were these PCs that no one had really seen before. And my first instinct was to build a community resource so that community members in Waterloo who had never used a personal computer would have access to it. And then if we fast forward into the late 90s as the Internet became something that mostly started in the university world, began to create an itch and a curiosity for people as to how they might be able to use it there again. I found myself in Southern California at that time working on trying to make sure that the community had access to early Internet enabled computing environments. We actually helped a local school system in Pomona and Southern California acquire and renovate an abandoned shopping center and turned it into a education village which had the first Internet enabled computing environment for folks. And of course that was part of a national program all over the country. You and Chicago and folks in Cleveland, as you mentioned, in my adopted hometown, lots of folks were working in that space as well. And if you fast forward to today, I think we're in the exact same situation. I think we're talking about AI today. It is in many ways my commitments here at ASU are informed by the same set of principles, which is that we need to make sure that all of the tools that allow for participation in society, in the economy, in education, in training and workforce, all of those need to be made available to the largest number of participants as we can possibly make happen. And I think that that is the greatest digital inclusion, digital equity challenge of the last 50 years is actually the one that we're leaning into right now, which will be around AI, I think. Love to chat with you, maybe to underscore just why that is so. And here again, I think it's so important that we make sure that asu, which is so committed to the broadest number of students from as diverse backgrounds, all with an interest in and a commitment to achieving success. That's our mission here at asu. But it's also not just about the students who come to us on our campuses or online. It's also our commitment to the community around us. And so for me, that's the thread that I've managed to basically pull from for the last almost now 30 years.
A
Well, I love the way you kind of put it back into our history. Right. And you're up my alley because I've been trying to tell people, you know, we've got to talk about the history of the Internet, the history of, you know, why many of us have been in the space for so long. Before I go into artificial intelligence in particular, like, what have you seen in terms of connectivity for students that are at the university? Because I think this is sort of a primer for, for the conversation we're going to have and what you've been able to do at ASU to make AI much more ubiquitous. Right. Than we've seen traditional COD activity.
B
Yeah. I think, Nicole, for me, what I've seen in a very affirmative way is as we traverse and try to deal with the global pandemic back in 2020, we realized here at ASU that having access to technology was a table stakes conversation. It wasn't, I mean, it was something kind of in the air before that. But obviously this became, as I say, a kind of table stakes conversation. And the university really did, I think, an extraordinary job in making sure that all students, you know, we have the largest number of students from tribal nations across the land. Many of them, when they went home during COVID needed to have computing devices and Internet access. And the university leaned in to make that happen. You know, two thirds of the, of, of the land here in Arizona is rural and we have huge numbers of, of students whose homes are in Rural Arizona. And lo and behold, you know, there was very little Internet connectivity available for a lot of interesting historic reasons here in Arizona. And again, the university stepped in to provide computing and Internet devices where we could and partnered with others to make sure that that happened. And we even supported our international students who needed to go home during COVID or chose to go home during COVID to provide them with access as well. So the principle of access to education is inextricably linked to access to technology, and technology that makes the access to the education resources that we make available. And in that same context, you know, ASU accelerated the work that we're doing in AI because we borrowed from our experience, our faculty adoption and use of AI, which again, oftentimes, if you sort of just hold out the. The, you know, for a moment, sort of what happened during COVID I think there's a general sentiment that faculty are slow to adopt. Not at asu. It's unbelievable what's happening in the AI space. There's been literally a call early on for professional development for our faculty. More than 3,000 faculty here at ASU have actually availed themselves of asynchronous curriculum content, professional development content with no incentives, as you know, again, there were no incentives during COVID when we helped to flip the nation's largest university to an online experience. Again, not all of it necessarily the best out of the starting baits, but here again, I think our faculty leaned in and have seen this as an opportunity to address the mission of making AI available to all and realizing their responsibility to become literate in that regards. But also, I think, an incredible amount of innovators, DNA here at asu, entrepreneurial DNA here, and so lots and lots of faculty use. And we opened again conversations very early on, literally three years ago, as your intro indicated, conversations that we had with OpenAI to try to help shape the ways in which, and perhaps answer the question, why should OpenAI is an R and D facility at the time, lean into the education space as not only a principle that was important to do, but also as a way to actually shape market forces, to engage and again, use the platform of ASU for as large and diverse a learning community as we are, as an opportunity to work together. And in January of the year, in the January immediately following the November release of Chat GPT, we began to actually make their tools available. And we were, you know, and are among those universities that make that product which has now got an education wrapper to it, which we helped, which we helped to shape as a. As A requirement for the security and the privacy of our students. We have had a chance to make those tools available through licensing to all of our students here at ASU, all 200,000 of us.
A
Wow. And that's what I love about your story, because I think you came into this understanding the need to ensure ubiquitous access to technology. But now, as we're flipping into the artificial intelligence age, making sure the same happens. Now, before I want to go into this faculty professional development, I just wrote a paper about this Lev. But before I do that, just give us a taste of what A.I. you know, or where A.I. is showing up on the university campus for students. And then faculty and administrators, we nice to sort of get some landscaping of what you've been able to bring in this area.
B
Well, yeah, I mean, the formula, if you want to kind of call it that at asu, is to try to do three, three things at once. One is essentially developing these communities of practice, broad communities of practice. And some of the professional development work was early on, these were conversations with faculty first about essentially putting out on the table aspirations as well as obviously, considerations. There has been from the beginning a very significant commitment to. From our faculty commitment to making sure that there's an ethical framework for the ethical use of these tools at asu. And so when I heard that set of concerns, broadly from across the campus, from our humanities, social science, professional schools, law, journalism, there was lots of faculty who were part of these early conversations. And I said, that's terrific. I'd love to stand up a faculty ethics committee to help guide the work that ASU was going to embark upon. And that's been a central part of our commitment to dialogue with the faculty. And the series of questions that came up following were like, we also need professional development and some literacy training about at the time, you know, a whole new vocabulary of things that were not easy to immediately pivot towards and how to have conversations about again. Early challenges in the maturity curve of the. Of the technology, with lots and lots of conversations, immediately flipping to conversations around shortcutting and hallucinations and other kinds of things that, again, there were lots of opportunities for conversations. And then the professional development has led to literally dozens and dozens of communities of practice where faculty within the disciplines are having not only conversations, but we have developed a series of ASU platform technologies and tools which we call ASU Create AI as a creative effort, as a generative effort to actually create platforms that support secure private garden walls where we can protect either the intellectual property of the faculty members or certainly the personal and health related identities of our students and staff along the way. And that set us on our way. The first leg was really around communities of practice. The second piece was really around trying to figure out ways that we could catalyze experimentation and innovation. And we did this actually with OpenAI. We created an internal grant program where we asked faculty to start with, and then faculty and staff, and then finally faculty, staff and students to respond to an internal grant program to actually put out impact responses. How by leveraging these tools, do you think you could have an impact on the teaching and learning, the research or the service of the institution to the communities around us? And we thought again that we would initially catalyze a couple dozen projects. And again, at this point in time, as we're talking, Nicole, we have currently 600 projects in flight simultaneously responding to these grant activities, which has given people a chance to propose projects that range from sort of individual solo projects to whole classes working on a wide range of issues, research related issues, transformation of language experiences, Persona based education for healthcare professionals, conversations on ways of which engaging students in philosophy, the law, transformation of the entire STEM undergraduate education experience. All of these have been essentially generated out of the idea of how to catalyze interest in and then continued reflection. And then the third piece of it is setting up these. The third leg of the whole project has been to sort of set up a futures environment, a series of sandboxes where we can support and respond to the needs of the campus community as this very early immature technology that we, you know, has sucked all the air out of the room in almost every conversation continues to evolve. We wanted to create sandboxes. And that at ASU has evolved into, you know, a significant effort around this create AI platform, which now has literally tens of thousands of faculty and students using the platform to actually experiment, to actually do, to actually experiment with, try new things. There are 4,000 AI experiences that are in a library here at ASU, not developed by Central IT, not developed by my group, in fact, not usually developed by anybody in it. These are low code, no code solutions that subject matter experts and of course students are leaning into and basically are sharing with one another and applying the access to now over 50 of the large language models and vector databases, vectorized databases of content, all the curriculum at the university, lots and lots of research activities. That gives you a sense, Nicole, of the breadth of the work underway here at asu.
A
Yeah, I mean, this is so interesting to me because I think you've like flipped the script on what most people think about when we start introducing AI into education. Right. A lot of that shifts into this conversation left on guidance and documents and you know, what the policy is, the protocols and it sounds like what you've tried to do.
B
Not here.
A
Yeah, I mean, tell me, right, little conversation about policy.
B
Because we, we felt that we had all the policies we needed. Yeah, we have an academic integrity policy. There's a whole bunch of practices that we knew and know are going to continue to evolve. And we use the North Star of the existing policy framework, which is the balancing act between the key to the long standing set of practices as to how knowledge gets created and how we validate and verify knowledge through peer review and how we maintain a commitment to academic integrity and how we correct and self correct for when there are challenges in that area. And all of that allows us to apply it to AI without us spending two years in conversation about the strategic plan and about the policy collection of activities. We have a significant bias to action here at asu, but it is guided by, it's guided by things like the faculty ethics committee and we have a faculty research and AI committee where again, these are largely around what kinds of guardrails, what kinds of considerations need to be put on the table. So our part of the university community, that is to say the central services support teams across the breadth of the university, our instructional designers, our technologists, the people who support teaching and learning for neurodiverse learners here on the campus, everybody needs to have a voice at the table in helping to shape that work. But we didn't need to invent a whole bunch of new processes that would have us spinning and basically finding the challenge of just getting out of the starting gates, which is still significantly a challenge not only in higher education, but it's certainly one that where we are in many, many ways challenged to unlock value. And I think we're very, very lucky to be at ASU where there is something in the water here in which we manage to have this, I think significant bias to action and to experimentation and to iteration, to design, build. And I think that is not only in this AI moment. You know, ASU has had that in the DNA now of the institution for more than 20 years. And it is what helps to differentiate ASU from all the other great research universities in the country.
A
Well, and that's so interesting, right, because again, I'm going to go back to this guidance piece. I hear a lot of like ctos like yourself sort of err on the side of guidance because of like the security concerns. Right. There's this whole thing like, we can't bring these technologies to our universities because they'll have like security breaches, particularly at the enterprise level. I mean, as a CTO that is sort of trying to reimagine what this looks like. And it sounds like. What I love about your conversation so far is like, take agency over where these tools show up. So it kind of shifts the conversation from, no, we can't have AI because it's leading to critical skill erosion or it's, it's, it's not having young people learn. Sounds quite opposite at asu, right? That it's engaging various communities, various disciplines, various hierarchies that sort of come to the table to learn together. But as a cto, where do you start injecting like some of the concerns about the cybersecurity breaches and privacy concerns?
B
We take those extraordinarily seriously. And we have a very large robust cybersecurity infrastructure here at the university. And we have again a series of principles related to our AI architecture here, which your listeners can view for themselves at AI ASU Edu. And in that environment, we have a series of commitments and principles that then inform our architecture, which then informs our code and our DevOps environment. And it starts with two key principles. One is privacy by design and the other is security by design. And the entire ASU Create AI platform is developed to support and to deter the injection of poisonous tools that have and already begun to create challenges in a number of other platform technologies that are out there. And certainly we have been and regularly testing. In fact, we have AI in real time testing against those kinds of insertions and that kind of what we see out there in the broader environment where certain sequence of documentation and or code can be inserted into and help and can poison some of the language model and ways in which our databases get utilized. In the real experience that students and faculty have, that can create bias, that can create ethical considerations. We also have a. We also coming out of our commitment to privacy and to security, we've actually have an ethics and bias engine that allows for both real time, if it's enabled by humans and or for testing of folks using and creating solutions. You can literally run your environment or you can auto run your environment through the ASU bias engine again, which is an effort to not only take a look at the fidelity of the language models, but also to make sure that again, you have a security agent working for you to make sure that there has been no successful efforts to insert malware or otherwise nefarious code. Into the fairly complex data systems that are informing research in our labs and the like. So again, I think here the piece to understand is that these are almost never binary issues. The idea that we're, you know, this is not like it's secure or it's not secure. This is whether or not you have a commitment to security, in which case you want to take ownership agency of your own organization and bring your cyber professional colleagues into the conversation as early as possible so as to inform it so that they don't become the office of no, it can't be done or it shall not be done because of security issues. And rather they become the enablers of how to make the environment safer. There is no way, no how, that there is any guarantees. In fact, they'll never be any guarantees of 100% secure environments for AI, because there are none, none of those guarantees anywhere on the network, anywhere, ever. These are all ways of essentially protecting the downside risk that face all large, complex organizations. And again, I think a lot of this has to do with mindset in our institutional environments. And cybersecurity is not just another one of those important stakeholder groups at asu. It's one of the two principal factors, again, along with our commitment to privacy, which we take to be a central foundational commitment to everything that we do in the technology space here at asu.
A
No, and I appreciate that because I think that's where a lot of higher education institutions sort of struggle because they're sort of looking at the technology and because of it being new even to the CTO in many instances. Right. They're trying to figure out, like, how does this align with traditional cybersecurity efforts? But there's this, like, other piece, Lev, that I love for you to kind of opine on, which is, you know, there will be. Okay, I'll put it into, like, cases. There are students that will reject AI in many instances. I'm finding that young people have these questions around, like, compute power and data centers and environment. And then there's the obvious, you know, other case, which is, we do need these data centers, compute power and environmental remediation accelerants to be able to generate the type of activity that you're talking about on your campus. I just wrote a paper about this on data centers. I'm just curious, sort of injecting this into the conversation. How do you deal with student expectations and I guess, their AI activism and then to, like, how do you manage the resiliency of the network that you're creating given these Conversations we're having now about this, you know, extensive demand of compute
B
we invite. In fact, I have over the last several weeks, had multiple meetings with student leadership here at asu. We have a council of student presidents I've met and listened as part of the ongoing commitment to be engaged with and try to understand, learn from our student leaders here. And you're right, there is continuing AI interest as well as an activism that you would expect from student leaders along the way. And those are leading to really important conversations, one of which is what is the university's general view of the realities that we are building hyperscale data centers all over the country. But that's not just out there. Here in Phoenix, we have 43 hyperscale data centers either lit or under construction. That's speaking to the impact on the local economy and the local groundwater and the local power grid and on and on. And so those are really important conversations to be having. And obviously, like most everything else, these are not binary choices. These are about conversations and then shifting the question from sort of the broad sort of high level conversations of the world out there to really turning it inwardly to ASU and what can we do in that environment? And so I've mentioned a couple of times that we have this fantastic platform technology which we've designed, built, manage and operate here, which is called the ASU Create AI platform. And when students, faculty or staff want to invoke different language models, they can actually select filter for the models that have proven to be the most environmentally sustainable. Typically they're smaller language models, but they actually get to run and actually have a chance to do comparisons in terms of not only the cost of the model, not only the latency of the model, but actually of the sustainability of the models. And those are then symbolized in the template infrastructure of the platform with a series of green leafs or lack of green leaves that are out there. So it is like the whole effort that developed during the time you and I were students, which was around personal responsibility for recycling. There's a personal way that you can actually choose to use, for example, some of the nanoscale and discrete language models. And those are ways in which you can practice what you believe to be really important. We have a whole series of efforts here which I'll share, which we called EDGE AI. This is all in response to sort of that central question, which is, you know, what is the responsible way to be thinking about our relationship to the environment around us? So EDGE AI is actually offline AI, and that is ways in which these are use cases that started many of them with being able to deliver language model libraries to refugee camps, to folks living on tribal reservations where there is not significant Internet access. In some cases we even have a fantastic project here which assumes that in fact there are places around the world where there is no power at all. And so this is solar. We got a project called Solar Spell, which is a computing device with a Raspberry PI compute with a solar panel form factor which allows you to have access to libraries of educational content, including language models, generative AI language models that allow you to use any number of languages, have dialogues and conversations as much as you and I would experience. But it's sitting on a form factor that basically is using solar energy. It's these language models are sub 1 billion parameter. They sit as a card in, you know, on the same form factor plugged into the Raspberry PI. And now we're even doing it for offline use of mobile phones that is simply SD cards that can be utilized when you're offline and only. So my point in all this is if that if the issue is how to be have an ethical approach to the ways in which we want to support the environment and not instead of or and we want to continue to use these powerful tools, then let's lean into and create entrepreneurial ways of creating robust solutions that meet the needs of students who have got that conviction and commitment as well as the use cases where humans can be can participate in the education journey by actually unlocking this kind of value.
A
No, and I think that's important, right? Because you know, because of these concerns, it is, I think, you know, what you're telling us is like it is important to have a participatory model at the university, particularly since you're going to have a range of learners, a range of disciplines and a range of opportunities and concerns of the technology. But I also like how you're talking about, you know, ways in which you can sort of disaggregate the computing power in ways that make sense for the application or the community that's using it. I mean, this sort of gets me back to this question that we started with, which is the Internet, you know, digital access, more traditional digital access in fact, and AI, right, Because I always look at AI LEV as like these application layer where you're able to engage in some unique opportunities in various sectors and various functions, et cetera. But at the end of the day you still need Internet access to connect to the devices to be able to run some of these models. Am I correct about that or I'm not because I'm technical, but I'm just trying to think about how we still deal with this Internet that is still somewhat tethered right to AI because at some point it'll be disaggregated and more singular, I assume. But just curious how you deal with that earlier challenge that we talked about.
B
Yeah, so again, I mean, this is sort of Lev the technologist chatting with you for this one, Nicole. This is very much the moment that we're in. Again, I just want to underscore how early on we are in the AI economy and the AI technology stack. We are starting from massive compute infrastructure that is obviously consuming all the air in the room about conversations, but is also consuming massive amounts of power, energy and obviously the circulation of dollars to support this environment. But it is very much informed by big Iron. Think about that in our era around mainframe computing and so, you know, you had to connect to the mainframe in order to get into the computing environment today. You have to connect through very big pipes running at speeds that, you know, would boggle the mind. For the training models that are out there for the frontier providers, these are actually being measured not in megs, not in gigs, but in petabytes effort, sorry, terabytes, rather terabytes of throughput in order to make these things happen. So what will happen as sure as day follows night, because this is the way the technology evolves, there will be not only edge, that is to say offline models. So for example, we've been working with Nvidia right now to take a product that they call Spark, which is DCX Spark from Nvidia. It is basically the size of a brick. It is offline, it supports very substantial size language models and it can again support what is basically a supercomputer on your desktop, connected or not connected to the Internet. And that is the beginning of a whole new generation which is not going to supplant the big Iron activities, but will develop into this sort of spectrum of ways in which AI is going to get utilized to the point where again, I think it's going to be all the way down to a small SD card that sits in either a Raspberry PI or on your mobile phone that is there. And so Internet connectivity is hugely important in the overall scheme. But what is going to happen is there will be, I think, all kinds of core capabilities being built into appliances in our homes. Televisions are already beginning to see it, obviously other home appliances, our cars are already been here, you Know, in Phoenix and elsewhere, you know, four or five other cities around the country, you know, we have physical AI already in play here with Waymo, a Google company that provides literally there are 1200 Waymo cars, autonomous, no driver cars, racing all around the valley here every day. Yes, there is Internet connectivity for those, but actually, that is not actually how physical AI is working. Physical AI is working with LiDAR and supercomputing capabilities built right into the automobile. So it's a very rich, very robust, very generative moment in which the Internet part of AI will be important for 70% of the market use cases, and the other 30% will be all these other fantastic opportunities, including physical AI that will be out there, some of which will have little Internet requirements, some of them will probably have to have much more robust Internet access, but some of them will have no Internet access at all as part of the experiences that we will be having. And again, those are very quickly creating, I think, value and hopefully value for the community, whether, again, you know, you're a senior citizen and you don't drive and you still want to be able to get to the doctor's office or otherwise, or my special needs child wants to get to work. Jumping in a Waymo is one way of continuing to maintain your autonomy in ways that we are seeing real uptake here.
A
Yeah, and I think that's going to be maybe a conversation, Lev, we should have about a joint conference. Right. Because we just had something similar to this at Brookings on the future of the Internet in the age of AI. I mean, I guess to your point, I also predict a world where there'll be just many more disaggregated points of access, like you said, your refrigerator, your autonomous vehicle, etc. Or is this one in which we need to make sure, like you're doing it asu, that there are communities of practice so that we train more people on how to use these in their relevant sectors.
B
Right.
A
So if I want to do law, I'm now at asu, I'm learning how this works in law. So I'm not necessarily, you know, dislocated or outsized out of this opportunity. If I'm a philosopher or in the humanities, I actually see a place for AI in the work that I do. I mean, is that like your ultimate, like, goal of doing all this stuff to ensure that it aligns with learners aspirations and sort of make sure that they stay included? Because I do worry, Lev, I'm not going to lie. I mean, part of what we went through was more of like an access challenge to the hardware and all this other stuff, if you remember right, it seems like with AI it's really going to be a knowledge barrier if you do not have these skills.
B
The central issue, which again, I think invites us to have more conversation, Nicole, and again, convenings about this is actually related to the workforce.
A
Yes.
B
Whether you're preparing for the first job, which is part of what we do in our part of the supply chain, if you will, in terms of our role in education mostly, or you're returning to education as a way to upskill or retrain. The disruption that is going to, that is already beginning to unfold and will continue to significantly inform the challenges and the opportunities going forward, is a program that connects institutions like mine to the communities around us in the most profound way in our era. When you and I began in this work, we understood how important the Internet was going to be for workforce needs as well as essentially for a basic idea of being literate and a citizen. In the Internet age, in the age of AI, this takes on a much, I would say a much more urgent call for community engagement and for solidarity work and for building allies across the full diversity of the community around us. Because the disruption is not simply going to be in one or another layer, in the sort of socioeconomic, sort of realities of our society, many, many different parts, whether you're a blue collar worker or whether you are a white collar worker or any number of other different parts of the workforce. And this then is an opportunity for us to engage in our commitments to poverty alleviation, to opportunities to unlock value and to unlock opportunities for folks, to help returning veterans, to help refugees get resettled and gain access not just to the physical tools required to be an informed participant in the 21st century, but also to have access to the literacies required to be an active participant and have a meaningful engagement with the leading edge parts of the economy. Because the divide between those who get on the AI economy and those who are left behind or are made to be left behind in terms of being underdeveloped as an action that divide, I fear, is going to be greater than anything we've ever seen in the digital divide debate and the digital divide advocacy work that needs to be done. And for that, I think organizations like ASU have a fantastic opportunity to create a proof point on the art of the possible.
A
Yeah, I mean, I agree. I mean, this has been fantastic. I mean, part of what you're doing is like, hey, let's bring agency back into higher education, but let's do it in a way where it makes sense for our ultimate goal of training the next generation of leaders. And most importantly, let's have it as a participatory model, which I think is so interesting in terms of the conversations that we have been in, you and I around like bias and where there are breakdowns and some of the consequential fears. I mean, don't get me wrong, we still have those fears. But I think what you're laying out is a framework for just more engagement in a critically important and safe manner, which is not always discussed among your colleagues. Just saying. So with that, I want to thank you, Lev, for joining me for the Tech Tank podcast. Before we end this, also let people know where they can find out more information about the ASU enterprise technology, particularly the program that you put together, ASU CreateAI.
B
Yeah, please, anytime, take a look at AI ASU EDU. It is a window into everything that we've discussed today. It's also a window into hundreds of stories of the ways in which our students, faculty and staff are engaged with AI across the full breadth of the institution.
A
I love it. Well, thank you so much, Lev for joining us.
B
Thank you, Nicole, as always for inviting us and I look forward to the next opportunity.
A
I know I hope to see you soon in Arizona because it's cold out here in Washington, dc. So listen folks, Please explore more in depth content on tech policy issues at Tech Tank on the Brookings website, which is available at www.brookings.edu. your feedback matters to us about the substance of this episode. So leave a comment, let us know your thoughts, share it with someone else, and suggest other topics. We're going into 2026. We want to hear from you. This concludes that another insightful episode where we make bits into pal bites. And until next time, my friends, thank you for listening. Thank you for listening to Tech Tank, a series of roundtable discussions and interviews with technology experts and policymakers. For more conversations like this, subscribe to the podcast and sign up to receive the Tech Tank newsletter for more research and analysis from the center for Technology Innovation at Brookings.
Episode: Universities tackled digital inclusion—now they are accelerating AI use
Date: January 19, 2026
Host: Dr. Nicol Turner Lee (Brookings Institution)
Guest: Lev Gonick (Chief Information Officer, Arizona State University)
This episode explores the evolution of digital inclusion in higher education and examines how universities, exemplified by Arizona State University (ASU), are advancing into the AI era. Host Dr. Nicol Turner Lee and guest Lev Gonick discuss the challenges and opportunities of integrating AI into university life for students, faculty, and administrators, drawing lessons from decades of digital equity work.
Historical Perspective:
Pandemic Acceleration:
Communities of Practice:
Catalyzing Innovation:
Futures Environment/Sandboxes:
Practical Policy Approach:
Cybersecurity and Privacy:
Responsive to Student Voice:
Regular meetings with student leaders address AI skepticism, environmental impacts of data centers, and expectations for responsible tech use.
Platform lets users choose models by “environmental sustainability” (green leaves icon); students can select energy-efficient, smaller models if they care about environmental footprints.
Edge Technology & Access Innovation:
AI and Connectivity:
Current AI demand strains university (and society-wide) network/computing infrastructure.
Real-world examples include autonomous Waymo vehicles in Phoenix running much of AI “onboard” without persistent connectivity.
AI Fluency as the New Literate Divide:
On the new digital divide:
“The greatest digital inclusion, digital equity challenge of the last 50 years is actually the one that we’re leaning into right now, which will be around AI.”
— Lev Gonick (06:21)
On ASU’s approach to AI:
“We have a significant bias to action here at ASU… guided by things like the faculty ethics committee.”
— Lev Gonick (19:26)
On workforce urgency:
“The disruption… is already beginning to unfold and will continue to significantly inform the challenges and the opportunities going forward… The divide between those who get on the AI economy and those who are left behind… I fear, is going to be greater than anything we’ve ever seen in the digital divide debate.”
— Lev Gonick (41:10 & 42:25)
“It’s also a window into hundreds of stories of the ways in which our students, faculty and staff are engaged with AI across the full breadth of the institution.” (45:18)
Tone & Takeaways:
The episode is collaborative, forward-looking, and pragmatic. Both host and guest emphasize agency, participatory policymaking, and inclusion—urging institutions to move confidently into the AI era, while centering ethical concerns, security, and broad community benefit.