
A
We're also going to look at it from the perspective of how can I use these techniques to help to address the problem that you've created. So it's a bit of a cyclical problem domain.
B
Yeah. This reminds me of the name of my show, Embracing Digital Transformation. Right? Because instead of trying to keep it at bay, we embrace it. Not meaning adopt it fully, but we embrace the understanding of it so that I can now manipulate it and use it for what I want to use it for. Welcome to Embracing Digital Transformation, where we explore how people, process, policy, and technology drive effective change. This is Dr. Darren, Chief Enterprise Architect, educator, author, and most importantly, your host. This episode: Transforming Learning, the Role of AI in Education, with returning guest Dr. Carm Taglienti. All right, Carm, welcome to the show.
A
I am super happy to be here. Thanks, Darren. Looking forward to the conversation.
B
Yeah, you know, we talk every Friday. Well, most every Friday, except when I'm traveling, which lately has been a lot. And you've been traveling. Indeed. But you're a returning guest on the show. This is what, your second or third time?
A
Second, I believe. But yeah, it kind of feels like more because we talk all the time. But I'd love to share our thoughts with your audience. For sure.
B
Yeah. So, Carm, give a little bit more of your background. I know you've talked about it before, but let's talk about your background with respect to education, because that fits in really well with what we want to talk about today.
A
Sure, yeah, absolutely. And it's kind of interesting, because I grew up more of a classical technologist: undergraduate in computer science, then into systems engineering, very focused on creating computer systems and programming and those kinds of things. But a little bit later in my life, I decided I wanted to be an educator. So I started teaching at the graduate level at Northeastern University, where I still teach today, along with a couple of other universities. And I decided to go and get a doctorate in education, believe it or not. So I got my doctorate in education, and it was very rewarding for me. I focused on how people learn within the organizational setting as it relates to disruptive forces like technology. It's a really fascinating area, and I've found that it was the perfect dissertation and research area. In addition to that, I'm an academic director at Wake Forest University, where I focus on a curriculum around strategy and innovation. And you are graciously one of my board members there, and I'm super excited to have you on my board as well. And certainly, as I told you, I teach at Northeastern around AI topics and cybersecurity, as well as at the National University of Singapore and the Asian Institute of Technology. So I don't do a lot in the education space. I'm just kidding. It's definitely a passion of mine, an area I love to teach in, and I love to learn about how people learn. I'm sure we'll talk about it, but I know it's one of your passions too.
B
It's fascinating because you got your dissertation right before all of this unfolded. Right? This whole generative AI thing, which is completely changing education. Anyone who doesn't think so is living under a rock.
A
That's right. That's right.
B
So how did that prepare you for what you're seeing happen? And did you ever think anything like what has happened would happen when you were doing your research? Because there have been other times when we've had technology introductions that have been disruptive. But have you seen anything this disruptive, or is this just par for the course?
A
Well, you know, it's a great question, because what really prompted me to pick my research topic at the time was that I'd been in the industry for 20 years, ish, before I decided I would do this research. And what really bothered me was the fact that many organizations would look at technology and try to change their business, call it a disruption or an innovation, and they would fail. And I really wanted to know: why was this a failure? It wasn't the technology's fault, and it wasn't the individual's fault, but it was sort of a combination. It was the organizational element, in addition to maybe some missed expectations with respect to the technology, et cetera. And so I came up with this framework called the Determinants of Successful Innovation: what are the factors that impact it? Now, it's not a recipe that says if you have three of these and two of those, then you're going to succeed. It's more about what those influencers are. And I think at that time I was really just trying to look at it from, I'll call it, an engineering perspective: there must be some kind of strategy to associate with successful innovation or successful transformation. That's what the research was all about. Now, fast forward into the world of AI. All of a sudden, this big disruption happens, and the same elements exist. And to answer your question, this probably is the biggest disruption I've seen, at least in my career. The cloud was pretty big, and the introduction of the phone was pretty big, but this one is at a much larger scale. But I also think, and maybe we can talk a little bit about this, that some of the ways we can address the disruption as part of successful innovation is by leveraging the capability itself, which I think is another interesting thing.
So instead of just saying, hey, you caused this big problem for me, we're also going to look at it from the perspective of: how can I use these techniques to help address the problem that you've created? So it's a bit of a, you know, cyclical problem domain.
B
Yeah. This reminds me of the name of my show, Embracing Digital Transformation, right? Because instead of trying to keep it at bay, we embrace it. Not meaning adopt it fully, but we embrace the understanding of it so that I can now manipulate it and use it for what I want to use it for.
A
And we're in the middle of this right now, so I wouldn't say I've got an answer, but I certainly use AI techniques with some of my dissertation materials and research results, and I use that to try to help me understand how organizations can succeed at AI transformation as they go through this. It really allows us to move a little bit faster, I think, in some ways, because there are so many complex moving parts. And last time I checked, I can't keep 37 things in my brain at one time. So it really helps to keep us honest, and it helps us do things like, I'll give you one good example of something we use: it's a tool that helps us understand which use cases within an organization are the best ones, the ones that could provide the highest return on investment, or at least meet their expectations, kind of what I was talking about. And you can approximate that pretty easily, because if you set a context and identify what some of the measurable areas of success or criteria might be, an AI system using standard LLM technology can help you create prioritized lists, giving you not the answer, but at least a relative assessment: these are good things you can do, or these are realistic things that can provide value to you as an organization. And so we can move so much faster than we used to, because historically it was: let's do three prototypes and see if it works. And it's like, oh no, it didn't work, we just blew, whatever, $200,000, let's go back to the drawing board. So I think in that way AI can help us realize the disruptive effect but also help address it. So it's pretty cool, I think, in the grand scheme.
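The prioritization exercise Carm describes, scoring candidate use cases against weighted, measurable criteria to get a relative ranking rather than "the answer", can be sketched in a few lines. This is a hypothetical illustration only: the criteria names, weights, ratings, and use cases below are made up, and in practice an LLM would help draft the ratings rather than having them hard-coded.

```python
# Hypothetical weighted-criteria scorer for ranking AI use cases.
# Criteria, weights, and ratings are illustrative, not from the episode.

def score_use_case(ratings, weights):
    """Weighted average of per-criterion ratings (each on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Assumed criteria and weights for the organization's context.
weights = {"expected_roi": 0.40, "feasibility": 0.35, "data_readiness": 0.25}

# Assumed candidate use cases with per-criterion ratings.
use_cases = {
    "invoice triage":   {"expected_roi": 4, "feasibility": 5, "data_readiness": 4},
    "churn prediction": {"expected_roi": 5, "feasibility": 3, "data_readiness": 2},
    "support chatbot":  {"expected_roi": 3, "feasibility": 4, "data_readiness": 5},
}

# Rank highest-scoring use cases first.
ranked = sorted(use_cases,
                key=lambda name: score_use_case(use_cases[name], weights),
                reverse=True)
for name in ranked:
    print(f"{name}: {score_use_case(use_cases[name], weights):.2f}")
```

The point is the shape of the exercise, not the numbers: a transparent, criteria-driven relative assessment that an organization can argue about and refine before committing prototype budget.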
B
Yeah, I agree with you there. I think it can act as an augmenter or an accelerator to solve the very problems that it causes.
A
Right, right.
B
It's its own medicine. I don't know if that's the right metaphor or not.
A
Agree.
B
Now, what is this doing to education specifically? Because.
A
Right.
B
I know my approach to this and what I've been thinking about doing, but let's hear it from someone other than Darren.
A
Yeah, yeah. You're gonna get the same answer. No, I'm kidding. We have talked about it. But I agree with your approach and I love what you're doing. There are certain elements I totally agree with, and a few where I take maybe a different approach, so I can get into that in a second. But before we even get there, the one thing I will mention, to tie all of this together, is that in order for us to truly transform, we have to make sure we understand and educate our communities. I was talking more in the business context, but there is an element of this that is, I'll call it, education as it relates to the workforce, not so much classical education as we think of it in the academic context, like university-level education. So we may draw the distinction between the two, but I think they're interrelated; I don't think they're completely separate as we think about education. But to that point, let me get into classical education, and you can almost see the inferences into the way we learn within the corporate world; we could talk a little bit about that too. In a classical education setting, which you and I are exposed to all the time, we have to remove ourselves from the standard testing regimens, or pedagogy, if you want to call it that, in terms of how people have learned historically. And it's so easy for many of us to fall into the same ruts again. I'm designing three different courses and curricula right now, and it's always the same. It's like, where's your quiz? Where's your essay? You're going to do a midterm and a final. And I was like, whoa, whoa, whoa. Within this new world, those are not really good ways to truly assess learning. So this is where we can take advantage of the disruptive effect of AI and think a little bit more about it.
We have the liberty to move faster, or take the next step in terms of critical thinking, or truly define what we mean by understanding. And so the definition of understanding becomes: not only do you know a technique, or have been taught a theory or a concept, but can you apply it? That's really where we're going with this. So some of the things I know you are doing, and some of the things that I do, are more related to how you can apply what you've learned in a real-world setting. And that's how I think we need to transform the educational process. I'm trying to do this more effectively in the current curriculum, and I think I've come close to figuring out how. I'm teaching students about a particular type of, say, AI infrastructure architectural design technique. And then I'm going to do what I like to call a workshop style, call it a speed Kaggle competition or a coding competition, where it's like: here's your problem, you have a half hour to solve it. So do you know how to approach the problem? And can you effectively describe to me how you would approach it? I'm trying to do this more from the walk-me-through angle, like a whiteboard, an individual exercise. Sometimes we would call these panels, where we would interview an individual. But instead of actually writing a paper for me, I want you to tell me your thought process and how you apply the concepts in a real-world situation. Because that, to me, indicates true learning, true understanding. So that's one way to do it. Another way, and we could probably get away with this for a little while, is maybe make recordings: create a slide deck, explain the concepts to me. I will listen to you describe what you're trying to do and what it means to you.
And if you can't explain it very well, that's another indication that you probably don't really know it. And will people read stuff? Maybe. Will people, you know, they're probably not going to use avatars yet, but you get the idea. So I think, yeah, they could use
B
deepfakes or things like that to do the recordings.
A
Right, right. So we're, you know, we're trying to figure out how to do that.
B
You would use this technique to do assessments, to see if they have learned the material you're trying to teach. Because ultimately we're trying to instill some skills in these students, right? We're trying to get them to, what was the term I heard the other day? Hardened skills, core skills like critical thinking, collaboration.
A
Right.
B
There's some. I was at a CSU board meeting this last week, the Workforce Acceleration Board, which I'm part of. And I don't consider myself an educator, because I've only been teaching a year and I didn't take any courses on how to teach. None. Zero. Right? So I'm a technologist. So these guys are throwing around words I've never heard of before, and I'm thinking, okay, I don't get it. I don't know what you're trying to say. But you would know what they're saying, because you're educated in that. But the approach I'm taking, like you mentioned, is very different. My end goal is: do my students understand what I'm trying to get across to them? And I've put in there what my objectives are for the semester. I want my students to understand these key principles.
A
That's right.
B
And be able to apply them in the real world. And that's it. So if they put in the effort, then I should put in the effort too, I guess. I don't know what the right answer is.
A
Yeah, I believe that that is the right answer. And I think it stems from, well, there are two different ways to look at it: call it theory and maybe engineering. It's okay to look at it from the theoretical perspective. If I were a theoretical mathematician, I'd be trying to explore different ways to think about mathematical principles and representations of the real world as it relates to mathematical proofs and theorems and axioms, et cetera. Take imaginary numbers, for example: I can represent a phenomenon with an imaginary number. I don't necessarily have to prove it, because that's not what I'm interested in. But an engineer is something slightly different. As an engineer, I'm looking at it from the perspective of having to make practical, principle-based decisions in the real world. Or even a physicist, but let's go with engineering, because maybe that's more near and dear to our hearts. In those two realms, I think learning is slightly different in each one. If I'm in the mathematical world, the academic or theoretical world, I use proofs to demonstrate understanding. And if I can prove something, then I've learned that particular topic area, or at least I'm able to talk about it in a way other people can understand. If I'm an engineer, I can do the same thing, but it's a principled approach: I apply the principles in a way that's sound and applicable in the real world. That's another way to demonstrate understanding. And personally, I err on that side, more toward practical experience, the more engineering-focused approach.
And I think what you're doing is exactly that. This has been an argument I've had for a long time: when somebody wants to learn about AI, I don't need to teach you linear algebra and calculus and statistical derivations and probability at a real theoretical level. I'm just going to show you techniques you can use to solve a particular problem: which algorithms work, why they work, how you apply them, what the expected result would be. I don't have to know all the theory to do that. It's really nice to know if you're, you know, geeky like us, but it's not required. So I draw that distinction with approximation. Maybe approximation is not a good word, but it's sort of: I just need to know enough to be able to use these skills, and I have to have enough confidence to know that when I do apply them, I'm going to get a result that makes sense. And for our students, just like you said, because you hit it 100% on the head: I want to make sure you have these practical skills that you can apply. I'm setting the expectation for you that will make you successful. But you, the student, need to step up to the plate and demonstrate to me that you have these skills before I'll give you a passing grade in my course. And I think that's exactly what we need to do for most of the population. Because even I, and I like to consider myself a bit of an academic, maybe not as deep as most academics, but in the grand scheme of things, I won't be writing my own neural network algorithms, or working with Richard Sutton on the next version of the reinforcement learning algorithms. I'd love to do that, but I'll never do that.
But I understand them, and that's about as good as I'm going to get. But I can use that to help explain to other people why these are the right techniques. So, anyway, it's a long answer, but I think you're spot on.
B
Now, do you think if we only take this engineering approach, which we won't, but because of generative AI, do you think some of the more theoretical academia is going to start falling by the wayside? Because of the fear we may have in education of moving away from the theoretical to the more practical, because that's how I can actually create some value in society? Because we're going to relinquish some of the deeper thinking or theoretical stuff to an AI, since an AI can't do real things. Not yet.
A
Right.
B
Where a human can produce real things. Do you think there's a potential there to lose that kind of academic rigor?
A
I don't think so. And here's why. Because if you look at, let's just take a statistical measure or percentages: how much of the current academic community, even before AI appeared, were true theoreticians or not?
B
Not many.
A
Right. True academics. Or even look at it from this perspective: how many people have PhDs? It's some crazy small amount. So I think having the expectation that a large part of the population will, say, in the case of AI, really understand AI and how it works, oh, how does a convolutional neural network work? It's like, you'll never need to know that. So it's maybe crazy for us to believe that's true for many, many people. And maybe there's a missing middle in there somewhere. There are the engineers that understand true application and why things work, not so much how they work but why they work. And then there's that missing middle, which is sort of what we're trying to address. You don't have to be the expert engineer, or the person who can execute on these concepts, but you should at least have a decent understanding of why things work, with enough confidence to know that these are founded in rigorous principles and solid, I'll call it, intellectual or academic rigor. Who's doing the rigor is probably a smaller part of the population. It's been that way for a long time. Like, why did I always hate math when I was growing up? I love math now, but why did I hate it? Because it's so hard, and not too many people did it. But you get the idea. So no, I think at the end of the day there's probably the standard stratification, if you will, in terms of the population: who knows things at the theoretical level, who knows things at the deep engineering level, and who knows things someplace in between. But I think the real problem we're having today, and we could talk about this too, is that there are a lot of fakers out there now.
Like, I really think we have to do a better job of people truly understanding or misperceptions, maybe. I won't call it fakers, but I
B
mean, no, they're AI fakers.
A
No, I know. Sometimes I go to listen to a talk and people are trying to tell me how AI works, or not tell me personally, but the group. And I was like, that's not right. That's not right either. And no, no, no, that's not right either. And I was like, do I stop them and correct them, or do I just let it go? I guess because I don't want to disrupt the whole thing, I let it go. But it's like, oh my God, we really do need to learn more about how things actually work. And this is an area that could possibly be, well, yeah, I will say harmful, harmful to our society, because I think in a lot of ways misperception and misunderstanding could equate to problems for us, you know, like deepfakes or whatever.
B
You know, it's interesting that you brought that up. Because there are a lot of people out there talking about AI who don't know what they're talking about.
A
That's right. That's right. Yeah.
B
Right. Well, because it's a big buzzword. I think AI is a magnifier, right? So if someone was out there already pretending, AI is going to help them pretend better. If someone out there is highly theoretical, they're going to become even more theoretical than they were before. Do you see the stratification widening because of that? Meaning the gaps between them?
A
Yeah.
B
That it actually may cause some harm to our society as a whole, in segregation of thought. That AI will magnify those segregations of thought.
A
Yeah.
B
Am I making any sense? This is just spewing out of my head.
A
No, yeah, I completely agree with you, because I do think there'll be a widening gap at some point, where it's going to be those that know and those that don't. But I think the harmful element comes in with those that don't but say that they do. And you can say it with confidence, because the rest of the population doesn't know. Like, if I were to tell you something about, I don't know, the surface of Venus, and you didn't know anything about it, and I did it confidently, you'd be like, sure, Carm, whatever you say. And I think we have that element, unfortunately. So people would be like, that sounds plausible, in the AI space, and I'll believe you. But I think we just need to do a better job of making sure we understand what's real, what's not, and how to verify what's real. And this gets all the way back to the critical thinking you were talking about earlier. As an aside, I think this also maybe becomes our charter as educators: to say, hey, question the obvious, or question the unknown. And if you have a good critical thinking strategy, then you'll ask the right questions, or you'll try to contextualize it in terms of things you already know, to say, that doesn't make sense to me, or that does. And students that have come through our classes have already been through that. It's like, I made you, quote unquote, go through the process of critical thinking to demonstrate to me that you understood the concepts I gave you. So therefore, please do apply that in your professional life and in your personal life.
And then I think we just increase, you know, an educated society. We won't avoid that problem, because it will exist, but maybe we can quell it a little bit and take the next step as a society. Now I'm getting all universalist here and talking about the crazy future, but I do think it's a good opportunity for the human race generally, you know, and again, oh,
B
I think so too. I think so too. We as educators, though, have to change the narrative.
A
That's right. That's right.
B
When I first started teaching, I took over a class from someone who was a brilliant educator. I mean, he's written books, he's incredible. And his course was set up for non-AI usage. His course was: if you do these things, you're going to learn these principles. If you go step by step by step. Right? So it was very prescriptive. And what I found toward the end of that semester teaching that class was that the kids could cheat, they could game it, because it was so prescriptive that an AI could do it. And so I had to go back to the drawing board and say, wait, my goal is to teach them these specific principles, and critical thinking is one of the skills they need. And there was no room in the curriculum for critical thinking.
A
Right.
B
Does that make sense? Instead, it was all about teaching principles, but no critical thought was going into, well, why did I use dependency inversion? Instead it was: this is what dependency inversion looks like, this is what it is, and this is how to use it, instead of: why would I use it, and what are the trade-offs of using it? So I had to completely rewrite the courses that I'm teaching, because I didn't feel good about the end goal. I didn't feel like it was giving them the best opportunity to understand the principles and to build up that strong critical thinking muscle that I want them to have.
A
Right. Yeah, I love that. And I also think, and this is something from when I was younger: I didn't come from a very educated upbringing. I grew up outside of Buffalo, blue-collar family, and education was just sort of, oh, you're the one that gets to go to college. So I didn't really know how to learn. And I remember getting into college, and I would literally have a dictionary right beside me, because when I didn't know something, I had to look it up. That was how I was going to make up for the fact that I didn't have an amazingly good, and I'm not criticizing my education, I'm just saying it was not as good as maybe others have had, but whatever, woe is me, I'm just teasing. But now, to your point: if I can teach you what critical thinking is, then to be a good critical thinker, how do you answer the why? I have a two-and-a-half-year-old grandson now who asks why all the time. And now AI can help with the why, right? If you are a good critical thinker and you're trying to understand a concept, and you're trying to really understand the why, and your professor's not around, it's not in your book, or you'd have to read a whole other book to get there, well, I have a great vehicle now to learn these concepts at whatever level I need, so I can get to the point where I understand them and continue my learning process. And that is amazing. That is game-changing across the board. So that, I think, is another thing we have to teach students: how do I leverage the assets available to me today to become a better learner, so that I can continue the process?
And to your point, we can't just say, if you pass these three tests, you get a degree. We can't do that anymore. It's like: you have all the tools, humanity, go figure out how to truly learn something, and use the definition I gave before. Learning means being able to not just explain the theory, but also to understand how to apply it. That's real understanding. That is what we need to help with. And if you can't do it on your own, go let AI help you as a copilot or as an advisor. The onus is really on us to get there.
B
Well, Carm, I always have fun talking to you, but we're out of time for the show.
A
That's great.
B
But we could go.
A
We should do one of those long forms. We could fill up an hour or two.
B
Yeah, we could fill up a couple hours. But I think our audience may fall asleep if we do that.
A
I think you're probably right.
B
Right. But Carm, you'll be coming back on the show. We're probably going to have you on monthly to see how things are progressing, because.
A
Yeah, great.
B
We're in the middle of this. So this is an exciting time. Thanks for coming on the show.
A
Well, thank you so much, and thank you to your audience for listening to us. Hopefully we didn't put anybody to sleep, but very, very fun, very exciting. And thank you so much for what you do, Darren.
B
Hey, thanks, Carm. Thanks for listening to Embracing Digital Transformation. If you enjoyed today's conversation, give us five stars on your favorite podcasting app or on YouTube. It really helps others discover the show. If you want to go deeper, join our exclusive community at patreon.com/embracingdigital, where we share bonus content and you can always connect with other changemakers like yourself. You can always find more resources at embracingdigital.org. Until next time, keep embracing the digital transformation.
Host: Dr. Darren Pulsipher
Guest: Dr. Carm Taglienti
Date: February 24, 2026
In this insightful episode, Dr. Darren Pulsipher sits down with returning guest Dr. Carm Taglienti to dive deep into how AI, particularly generative AI, is fundamentally reshaping education and learning practices. Drawing from both business and academic perspectives, the discussion centers on practical approaches to modernizing curricula, fostering critical thinking, and the nuanced challenges and opportunities AI introduces within education. Both speakers, experienced technologists and educators, offer candid reflections on what effective transformation looks like for learners and institutions alike.
(For the full conversation or to explore more, search for "Embracing Digital Transformation" episode #328: Transforming Learning: The Role of AI.)