Episode Notes

Ready for a serious look at the future of your law firm? In this Lawyerist Podcast episode, Zack Glaser sits down with a panel of law school professors for a wake-up call about something that will fundamentally reshape your practice: the tech-savvy generation of law students entering the field. This isn’t a distant trend; it’s happening now, and it demands your attention. 

We explore how these future lawyers are already operating differently. You’ll hear firsthand about their strong preference for the Google ecosystem – and why that directly impacts your firm’s current reliance on tools like Microsoft Word. This isn’t just about software; it’s about the shifting expectations and workflows these digital natives are bringing with them as they become your colleagues. 

Here’s where it gets interesting for you: we explore how you can leverage this change to your firm’s advantage. The professors share insights on how these students are uniquely positioned to drive AI adoption within your practice. Think of them as an untapped resource, ready to experiment with and implement AI-powered services that can elevate your firm’s capabilities and even open doors to new service areas. 

And let’s be clear, you can’t afford to ignore the rising importance of tech fluency, especially in AI. The professors don’t mince words: a lack of understanding in this area will directly impact your ability to serve clients effectively. In a world swimming in digital data, from car sensors to smart devices, your firm’s relevance depends on it. 

This episode is your essential guide to: 

  • Staying ahead of the curve in a rapidly evolving legal landscape 
  • Unlocking the potential of the next generation of legal talent 
  • Building a future-proof firm that thrives on innovation 

Tune in!

If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? Check out our coaching community to see if it's right for you.

  • 06:21. Teaching AI in Law School: An Overview
  • 25:25. Is AI Dumbing Down Legal Education?
  • 52:19. The Future of Law Firms and AI Expectations

Transcript

Zack Glaser: 

Hi, I’m Zack. 

Stephanie Everett: 

And I’m Stephanie. And this is episode 561 of the Lawyerist Podcast, part of the Legal Talk Network. Today we’re doing something a little different because Zack is hanging out with a team of law school professors and they’re talking about how law students today are using technology and what disruptions they may bring just in a few short years when they join the industry. 

Zack Glaser: 

I love this. I love this episode. I always think about how quickly the kids in high school who aren’t using Office 365, who are native to Google, are going to be lawyers. That’s seven years out, but these professors are getting people who are a year out, two years out. And so I think this is a fascinating one. 

Stephanie Everett: 

I think it’s a good point about Google. I mean, Google did something super smart when they launched Google Classroom and basically have addicted our kids to the Google environment, right? 

Zack Glaser: 

The kids that I work with don’t want to use Microsoft Word, and a lot of times they don’t need all the heavy-hitting stuff, the redlining of Microsoft Word. But we do as lawyers, for now. 

Stephanie Everett: 

We’ll see. 

Zack Glaser: 

We’ll see what happens. 

Stephanie Everett: 

But it is interesting because, sorry, I’ll date myself: when I was in law school, WordPerfect was the king of kings, right? Still to this day, I love Reveal Codes. Let’s be honest, WordPerfect had its features. 

Zack Glaser: 

Yeah. I mean, it’s a Betamax type thing. It’s a Betamax-versus-VHS type thing, where WordPerfect had, and still has, some features that make it a better product, but you kind of have to go with what everybody else is using. 

Stephanie Everett: 

And so, I mean, I guess job security for our friend Baron, because all these kids are going to be graduating and have never used Word, so he’s going to have a lot of work to do here in the future. Or, I know lawyers are starting their own firms and they’re like, hey, I’m just going to go with the Google environment. 

Zack Glaser: 

It could go the other way. Yeah, and that’s one of the things we talk about in this, and it’s fascinating, but we certainly don’t get into everything we could get into. That’s the thing with all these professors: I could do a single podcast, or multiple podcasts, with each of them individually. 

Stephanie Everett: 

And I mean, obviously we’ve been joking about tools, but it’s also just that mindset. And probably, I mean, I know I’m super excited to hear this conversation. I know you guys focused on technology, but if we really go broader, it’s about the skills. So one thing we know: today’s future lawyers have never written emails. They communicate over text with emojis and shorthand, and now we’re going to have to teach them how to have conversations, and we’re going to have to teach them how to write a professional message. Our training for our young lawyers has to change. We can’t expect 

Zack Glaser: 

Them to show up with the same skill sets that even we had, in a more basic way. That’s a really, really good point, because I communicate with my nephew via Instagram direct messages instead of even texting him, and I can’t tell you how many times kids on my team call me bruh. I’m like, okay, fine. And then when they send messages in our major group chat platform, I’m like, what is this? I don’t feel like I’m that old, but it’s all these very shorthand things. And so you’re right. We’re going to have to, well, we’re going to adjust a little bit. I think that’s the thing: we all think we’re going to teach them how to do this. We’re not 

Stephanie Everett: 

Well, because our clients are going to be their age too, so then the clients are going to come in with different 

Zack Glaser: 

They are, yeah. 

Stephanie Everett: 

Yeah. I don’t think judges are ready for us to call them bruh instead of “your Honor,” let’s be clear. 

Zack Glaser: 

There’s going to be some sort of shorthand. It’ll be like yh or something. I don’t know. 

Stephanie Everett: 

Yeah. 

Zack Glaser: 

Okay. Well, 

Stephanie Everett: 

Let’s dig into this conversation. So many things that you guys could cover, and I hope if you’re listening, maybe hang on and get ready because changes are still coming. 

Dennis Kennedy: 

Hi, I am Dennis Kennedy, and I am currently the director of the Michigan State University Center for Law, Technology and Innovation, where I teach a class in AI and the law, or actually two classes. I also teach a class called Legal Technology Literacy and Leadership at the University of Michigan Law School. I’ve been a practicing lawyer, in-house counsel, and at large firms, and I’ve been involved in legal technology for a long time, long enough to have received the Lifetime Achievement Award at the American Legal Technology Awards last year. 

Nicole Morris: 

Hi, I am Nicole Morris, and I am a professor at Emory University School of Law in Atlanta, where I run our TIGER program, which is a tech innovation program teaching law students how to think like entrepreneurs and help innovators. I am also the director of our legal tech and innovation initiative. I’m a patent attorney by training and practiced for many years before I came over to the law school, and I focus my pedagogy on the legal implications of technology. 

Tracy Norton: 

I am Tracy Norton, and I am a professor at Louisiana State University’s Paul M. Hebert Law Center. I teach analysis and writing courses, advocacy courses in the first year and upper level, and also professional responsibility. And I have been working in the space of technology-leveraged law practice and legal education since 1997. 

Zack Glaser: 

Well, thank you. Thank you so much for being with me. It is awesome to be able to put the three of you in one podcast together. I could easily have y’all individually on podcasts talking about some other things as well, and I look forward to doing that in the near future. At least two, or maybe all, of you may have already had an episode on here. Dennis, you are on plenty of your own podcasts, so I really appreciate y’all’s time. But I ran into all of y’all at the Women in AI Summit that was put on at Vanderbilt University this year, and we got to talking about incorporating artificial intelligence into what y’all are teaching, or how you’re teaching your students in law school. And quite frankly, it’s fascinating to me to think about hitting AI in law school head on instead of defensively, instead of saying, how do we keep these people that are about to be attorneys from using artificial intelligence? How do we get them to use it better, more responsibly, and honestly use it to teach our classes better? So what I want to do really quickly is have each of you give us a brief description of how y’all are incorporating this into your classrooms, realizing that the description probably is not going to suffice because we don’t have the time for it. But let’s start with Dennis real quick. How are you incorporating AI into your classroom up there in Michigan? 

Dennis Kennedy: 

I sometimes like to say, and it is probably true, that I’m probably the most aggressive user of AI you’re going to run into. So I teach a class in AI in the law, and we’re doing a lot of different things. I use it in connection with the teaching, with group exercises. We do a lot of simulations. I do some prompting training; we had two prompting projects in addition to the final paper. I also show ways to use AI. We looked at frameworks for how to choose AI tools, and had a legislative aide from the Michigan legislature come in and talk to the class and get their feedback about how the state of Michigan might legislate certain AI tech. So we have a lot happening. We also do something called AI Studio, which is a hands-on prompting class that’s sort of voluntary for people. So I’m trying to do a lot of things, very experimental, and as you said, I really advocate hard for the students to use AI tools in ways that make sense, and I try to illustrate to them where it works well and where it doesn’t, and the types of things that they should use it for and might not want to use it for. 

And that’s been fascinating, and it’s also been changing, I would say pretty drastically even in the last three or four weeks. 

Zack Glaser: 

Oh yeah, I honestly hadn’t thought about that, about how this is a moving target that you guys are dealing with, and I think that’s kind of amazing. We’re all dealing with a moving target, but y’all are trying to make sure that you’re doing it responsibly and teaching people how to deal with the moving target. Well, Nicole, let’s move to you. How are you, in a basic sense, incorporating it into your work and with your students? 

Nicole Morris: 

Yeah, I would describe how I’m teaching and introducing concepts with AI in two buckets. One would be AI governance, the broadest, where we explore the governance framework issues for deploying these tools and the privacy implications. To give you a more concrete example, one of the courses I teach in the spring has a module that’s like a seminar, and we’re reading Your Face Belongs to Us by Kashmir Hill. She’s an investigative journalist who looked at the startup Clearview AI and how, through the rollout of its facial recognition product, it ran into all kinds of legal walls and skirted privacy issues. It was helpful because in the course that I’m using this book for, we’re working with innovators trying to find startups or help build startups with the technology. So it shows how just a piece of technology that can be the basis for a startup company can have a lot of deep-seated implications when you start incorporating AI tools. 

So one framework is a governance framework. I’ve had guest lecturers come talk about privacy law and information privacy. I had a researcher from Google talk about how Google uses AI, and that was really cool because they talked about responsible AI and things like that in that lecture. And then on the other side, I encourage my students to use it as a research tool, to supplement Lexis and Westlaw or just the random internet at large. There are some really interesting applications, and we’re fortunate because we’re here in Atlanta and Georgia Tech is not too far. So I have a guest speaker from Tech come each year, and he demoed a tool that some researchers at Georgia Tech developed to help find licensing targets. You input some prompts, and it’ll tell you different publications or academic institutions or companies who are working on technology related to your keywords and your prompt. And that was mind-blowing to me; I’d never seen that before. So I encourage them to use whatever they want to use. I mean, ChatGPT is good for factual research and vetting. We’re doing a PFAS, forever chemicals, project this semester, and sometimes it’s super technical, so the AI tools are helpful for defining the related industries and the companies working on different technologies. As a research tool, I find it really helpful to show law students how they can still ethically use AI tools to get deeper factual information while using the trusted legal databases to find relevant case law and investigate legal and regulatory issues. And what they’ve told me, at least, is, you’re one of the few professors that encourages us to use AI. And I’m like, we have a lot to do in a short amount of time. My engineering brain is more like, it’ll take you weeks to get an answer that might take you a day if you use the proper AI tool. 
So I liken it to training them to think more efficiently, particularly if you’re under a billable hour rubric, where not all your time is going to get counted if you’re not efficiently doing the research and writing and things like that. So that’s what I’m doing, in a nutshell. 

Zack Glaser: 

Yeah, that’s a great point. And I like that the concept there is broadening even the idea of what research is. We’re not talking about just legal research; we’re talking about research as a practitioner, and knowing how to do that with whatever tool is there. But it also makes me think about recognizing what good sources are, and we have to do that in teaching anyway, teaching people to go find good sources. Now there are so many products out there that are getting you tons and tons of information, and we have to learn how to use them to drill down into those appropriate sources. Yeah, I love that. Well, Tracy, how are you guys incorporating it down there in Louisiana? 

Tracy Norton: 

I would not speak for the entirety of educators in Louisiana, but 

Zack Glaser: 

That’s why we got you here, because you’re special, you’re different. 

Tracy Norton: 

Or even the entirety of professors at the LSU Law Center. I am very AI-forward with my teaching. There are a lot of different views among faculty members about the use of AI and whether it’s responsible, et cetera. I’ve been around legal education for nearly 30 years, more than that if you count the time I was in law school, and I have seen technology come along and become integrated into legal education and the practice of law over the last three decades. And what I’m seeing here is really a version of what I’ve seen every single time. There are a lot of folks who think this is a flash in the pan: this isn’t going to last, people are going to find out that you can’t use this for legal education or law practice, and it’s going to go the way of the dinosaur. Take laptops in the classroom. That was a raging debate, and at some law schools it still is, whether students should be allowed to have laptops in the classroom. And I’m like, they have in their hands a little three-by-five- or six-inch portal to all knowledge everywhere. Let ’em use laptops or don’t, but they’re using technology. 

Zack Glaser: 

They got those phones. Yeah. 

Tracy Norton: 

So there has been resistance. With every wave of technology there is resistance, and I’m seeing the same range of reactions with this wave. The difference is that with other technology, practitioners and professors could tap out. They could say, you know what? This is too much. I’m not learning this one. I’m not going to find out what this is about. Practitioners could say, we may use it in the firm, but I’m not using it, and they could tap out on it. 

The other thing that’s different about this is the speed with which it changes, which Dennis just alluded to. I do a ton of CLEs for practitioners, for lawyers, and I also do a lot of presentations for general business professionals. I’ve been doing this since ChatGPT released its widespread version in November of ’22, and I have to update that presentation about once a month. I’ve never had a topic like that, where I have to constantly update it because it gets outdated within about a month. It’s different in that way. And so that presents a particular challenge to educators, and so I really encourage educators to jump into this. The way I use it in my classrooms is, I want students to approach it with curiosity. So I start at the very beginning, and I have colleagues who are very adamant that first-year law students should not be introduced to this in law school. To me, this is a little bit like education around sex and drugs. You can keep it from ’em; they’re still going to do it. They’re still going to find out about it. Do you want ’em to find out about it from you, or do you want ’em to find out about it from the kids on the street? 

Zack Glaser: 

All those kids on the street talking about legal 

Tracy Norton: 

AI, AI, AI, AI. 

Zack Glaser: 

Selling you AI on the corner. 

Tracy Norton: 

Yeah, right. Trying to sell you an LLM outside the 7-Eleven. 

Zack Glaser: 

This one’s got the latest. 

Tracy Norton: 

So I encourage my students, from the very first time I meet them in the very first class of the first year, to start using it for just whatever. Just ask it a question. Just tell it, I’m in law school, what can you do for me? And just start approaching it with curiosity. Then I start to slowly integrate it in a formal way into classes, and there are two primary ways I do that. First of all, before we get into any of it formally in the class, they have to watch a video that I’ve created for them on AI literacy, because they need to know what it is, what it does, and what it doesn’t do. Once they have that framework, they’re going to think of things to do with it. I find the same thing with lawyers: once lawyers understand what this technology is, they will start to think of ways they can use it that apply to their practice, and they’ll start to apply the rules of professional responsibility themselves in a really sensible way. You don’t really need an expert in AI to tell you how to use this responsibly if you know what it is and what it does and doesn’t do. So I start with AI literacy. And then I have shifted my classroom more, and because I teach skills courses, this is a little bit easier for me to do. I always focus on process over product anyway. 

So I’m less interested in what their final product looks like than in the process they use to get to that final product, and a lot of my assignments are evidence of the process that they’re using. So I’m moving more of those into the classroom. I’ve taken some of the things that used to be in the classroom that were more explanatory out of the classroom, and we spend more time in class with them actually doing things with pencil and paper. To me, one of the best tools for effective use of generative AI is this: 

Zack Glaser: 

Pencil and paper, 

Tracy Norton: 

Get yourself a pen or a pencil and a sheet of paper, and you can really make AI sing for you. So I have them write things in class that they can then upload to AI and maybe get feedback on, or maybe work on throughout the process of creating a product. The other kind of assignment I give them is something that was created with generative AI on a topic that they have researched, and their job is to edit it to a final product. So I moved their role from original creator to editor. And I think any person who is a writer or teaches writing will tell you, you learn more about writing from editing anyway. So I don’t necessarily need them to create that crappy first draft, as long as I’ve had them engage in the process along the way. We can skip that first-draft phase, we can let AI do that, and we can jump straight into editing, which is where they learn the most about writing anyway. 

And I’ve been really happy with how they engage that way. It’s funny, because they feel like they’re working with AI, but they’re just doing plain old screen-and-keyboard editing. Another way that I use AI that has been really exciting for me is using prompt architecture as a framework for teaching good writing, because prompt writing is very much like legal writing in that it’s got to be very nuanced, and you’ve got to plan out what you’re going to say in advance. That’s why I tell students, we’re going to sit down, you’re going to get out a sheet of paper and a pencil, and we’re going to plan out your prompt. I have a prompt framework based on the rhetorical situation. We work through the rhetorical situation, and then I teach them how to build prompt stacks on each part of the rhetorical situation, and they end up with this really robust, specific prompt. Because what I want them to understand is, you’re sending a question out into the void with technology that has processed, for all intents and purposes, everything on the internet. 

Well, your prompt has got to narrow down all of the data that the AI has processed, and a really good, precise prompt is going to get the AI right where you want it to answer your question. So the two ways I use it are: I have them integrate AI into their process, with a lot of pen-and-paper writing along the way, and I have them use prompt writing as a framework for learning how to write with nuance and precision. 

Zack Glaser: 

So, again, just hitting it head on and saying, look, we are using artificial intelligence in here, but we’re going to use it in the way that it needs to be used. So the first question that I think of broadly, and this is for all of you, and I think attorneys out there are listening and thinking it too: oh, we’re using artificial intelligence in law school. Is this going to dumb down research? Is this going to dumb down writing? Is this going to make it so students don’t even have to… I used books in the library, as opposed to Lexis or Westlaw or what have you, because it was thought we needed to know how to KeyCite. What do you guys think about that? Are we jumping ahead? Are we dumbing down research? Are we dumbing down writing? 

Dennis Kennedy: 

Well, are we recognizing where the world now is and dealing with that? I mean, how many people have read a law review article for fun in the last 50 years? How many people read legal briefs for fun? What is this writing that we’re trying to preserve? That’s part of the question: what is this experience that we think we’re trying to preserve? And what is this sort of apprenticeship type of approach, which many people have terrible, terrible stories of being harassed and abused as part of learning law? Is that what we’re trying to preserve? Or do we have an opportunity to take a bigger swing at this? When Tracy was talking, it was interesting to me, because when she talked about handwriting on a piece of paper, well, I made some notes earlier during this podcast and I don’t know that I can read them, so it’s bad enough for me. So that is kind of an issue. And then I’ve been talking with people who are sort of newer to AI, and what’s interesting to me is that they see AI as a voice assistant, 

Not a text tool. And that was a mind-blower for me. I think it’s ultimately right. I think it’s a good thing with AI; I wish they would’ve called it, instead of generative AI, conversational AI. I think that if you have this conversation, where actually talking might help you, that becomes really interesting, and those people might get really, really good at that. So what I always like to say about AI is that I think it forces us to the most fundamental questions really quickly. We say we’re going to do AI. Then we say, well, how does that affect the experience of learning to be a lawyer? And then we really have to start to drill down and say, what is it that we’re doing? What is it that we want to preserve? Could AI help us? Or is it just identifying some of the problems that we have? 

Just two quick points I want to make before I finish. One is that I was at an event with admitted students who will start in the fall at Michigan State, and one of them came up to me and said, I want to know if there are certifications in AI I should be sure to get before I come to law school in the fall. I think we’re already starting to have this group for whom AI is just part of what they do. And then the other question I have, which I resisted for a while but am now starting to think might be right, though I’m still not totally convinced: at the beginning of the internet, I reached this point where I said, wait a second, we are in a time period of historical importance, like the invention of the printing press, 

And what should I do, and what should we do, in an event of that historic magnitude? And now my feeling is that we’re raising a question: is this current generation of AI tools like the beginning of the internet? And if so, what does that mean for each of us as we consider its implications? Is our vision of what AI could do for us too small? If we don’t get involved in the decision making, are we likely to make the mistakes we made at the beginning of the internet, especially on some things? And then, to go back to the Women in AI Summit, one of the notes I brought back with me, and I can’t remember if somebody said it or I just wrote it down, was that we need to approach AI with a sense of wonder, not a sense of fear. And I think that all comes into my thinking about what we’re doing with legal education, but legal education is one small slice of everything that AI is hitting. 

Zack Glaser: 

Why is AI part of legal education, though? And I’m just playing devil’s advocate here, in a sense. With the question of the student saying, what certification should I get in artificial intelligence before I come into law school, why then are we not saying, okay, well, law school, we’re still just going to teach you law. We’re just here to teach you law. We don’t worry about AI; you can learn about AI somewhere else. 

Tracy Norton: 

I’m happy to jump in on that if 

Zack Glaser: 

Yeah, yeah, please, Tracy. 

Tracy Norton: 

Yeah, so the reason is because it’s about the way that we gather and process and interact with information. Law schools don’t exist outside of some sort of reality; law schools have to deal with the reality that we all live in. Could we have taught, and can we teach, law without using the internet? Yeah, we can, but there’s no reason to do that. The law doesn’t exist outside of the internet. It’s a tool that we all have. Just like in the early days of legal education: did you arrive at law school with a car that could go a hundred miles an hour? No, you didn’t. Does that mean that we shouldn’t allow students to drive to school with cars that reach high speeds? No; they exist in the real world. And so AI is just something that impacts everyone in the real world. It’s becoming ubiquitous. We can’t exist outside of that, and we do our students and the legal profession a disservice when we pretend like we can. And so I would say to lawyers who are concerned about the dumbing down: first of all, that’s not what I’m hearing from lawyers. What I’m hearing from lawyers is, can I hire two fewer associates this year if I learn how to use AI? 

Believe me, the general feedback I’m getting from lawyers is not, oh my God, this is really going to impact critical thinking in the profession. What I’m hearing is, can I do more with fewer people if I know how to use AI? But the other thing is, using AI requires expertise. Just because you give someone a tool doesn’t mean they know what to do with it. You can give me all of the materials to build a house and all of the tools necessary, plunk them down on a lot, and say, okay, go to it, Norton, build that house. And if you check back a month later, you’ll still have a lot full of building materials that I have not touched, because I don’t know what to do with that stuff. 

Zack Glaser: 

And 

Tracy Norton: 

So just having the tools doesn’t mean that all of a sudden you are imbued with some sort of magical powers. We have to figure it out; it’s a new tool. And I think people in the legal profession, especially those of us in the tech space, recognized early on: hey, this is going to make a difference. This is a tool that could be useful. So we’re still in the part where we’re figuring out what it can be used for and how it can be useful. But I think the first part of that, for anybody, is understanding that this isn’t going to take the place of a legal education or legal expertise. You still have to have that to be able to evaluate outputs well, and inputs too. 

The other thing is, I have always believed, for my entire career, that the tech I have invested my time in, the tech that I have developed, has all been not about tech for tech’s sake, but about making time for the things that require human capacity. I believe that the highest and best use of technology is to make more efficient the things that don’t require our humanity, so that we have more time for the things that do. And so what I’ve been telling lawyers is, if you pine for the days when you had time to have a relaxed conversation with a client or colleague, you want to learn about AI, because this is going to make your workflow more efficient, and it’s going to take care of all that knucklehead stuff that you feel keeps you from doing the real work of lawyering. So if you hanker for the old days of relationships and thoughtful conversations, AI is how you’re going to get there. 

Zack Glaser: 

I like that. One of the friends of the program, Gyi, we’ve had him on here a good bit, and I think you guys are probably familiar with him. He says, do what the bots can’t. And I think that’s a good point. What that got me thinking about, though, and Nicole, this is for you, is how do we then grade that? How do we figure out if somebody’s doing a good job with that? Theoretically, if I do a really good job of prompting something, I can get a good response and potentially turn that in. How do we grade that? Because, A, is that plagiarism? Is there plagiarism in there? B, like you said, Tracy, if I’m doing really good prompt architecture, maybe I’m doing a good job. 

Nicole Morris: 

Yeah, no, that’s a great question. My answer’s multifaceted, and it may not flow logically. One, the assignments need to change. So in my mind, I start with: the assignments need to be different. You need to open up your rubric, to assume a certain percentage, or just require use of AI as part of the assessment for everyone, and therefore the baseline AI response is now the starting point. And then you want some do-what-the-bots-can’t-do type questions or assessments or analysis within the assignment, where that’s really what you’re grading. So you’re grading what comes from a bot or a gen AI tool, plus some additional work requirements or work product requirements. As Tracy was saying with the building materials, just because you have access to the tools does not mean you’re going to get a good result. 

You may not know how to put in the right prompt to get even that baseline information. So as you’re grading, there’s a huge assumption in the question that everyone’s going to put a question into ChatGPT, they’re going to get the perfect paper, and they’re going to submit this paper. Not everyone’s actually going to do a good prompt to get said perfect paper. And depending on what you’re asking, which is why I say you’ve got to change the nature of the assignment, there may be no ability for the gen AI tool to give you a perfect paper. It’s going to require some additional human tweaking and input. So the assignments need to change. What I was also starting to think about was the fact that our legal tech vendors, the primary ones in legal education being LexisNexis and Westlaw, are using AI now. So to the extent that you want your students to learn how they can use these databases to get better, faster work product and results, that should also be in the calculus of what we think about as far as the grading. To go back to your previous point on dumbing things down, just a couple of thoughts I had: we’re not a rocket ship in legal education. We are like a container ship. None of this stuff is changing overnight. 

We are, for the foreseeable future, doing traditional learning, except no one’s going to the library stacks. Sorry, Tracy. We’re doing it all from our laptops. There’s no book source anymore, but there’s still a ton of traditional legal writing education, 1L contract law, casebook education. But then, and I’ve said this before on other podcasts, schools that start incorporating AI in the first-year curriculum and open up courses for 2Ls and 3Ls are going to slowly move ahead of schools that are just sticking to “no one learns AI here.” So you’re going to see the market. People are going to make choices. I love the student approach, Dennis. Students are going to make choices: I’m going to the schools where the things that I want to do with my life and my career are accented by the institution. So I think one of the reasons you saw the question about AI certification is that employers are saying, we want people trained in AI before we hire ’em. 

So students are like, okay, if I need to be trained in this, I’m going to get a certification such that I can be employable. So if you make that a minimum employment requirement, at some point legal educators and the institutions will need to sort of consider that. The last thing I’ll say about all of it, dumbing it down and what are we doing: eventually, your clients are using AI. I just taught a class, or a review session, today, and there are legal rules that say, for copyright protection, the creator must be a human. So you can’t have AI create works of authorship. It’s important for you to know that if you’re wanting to represent artists. Similarly on patents, the patent statute says that an inventor must be a human. So that’s another delineation. Patent law is a little bit more open than copyright, where you can have AI-assisted inventions, but then you have to be real clear with your inventor: how much of this is the AI? We still need human input. AI is working its way into workflows that lawyers are going to touch and concern, and depending on what kind of law you want to go into, it’s going to be very important that you understand how these things are working together. 

Zack Glaser: 

Yeah. Well, that gets to the guts of what Tracy was saying, and what all y’all were saying, of just that AI literacy. Just knowing what it is that AI does, I think, if you’re not teaching that, you’re probably doing your students a disservice at some point. Well, so I want to redirect just slightly. Instead of how you’re incorporating artificial intelligence into how the students create work product, are you able to use AI to enhance the classroom experience itself? Are you all using artificial intelligence to help do your jobs, in a way? 

Dennis Kennedy: 

Yeah, my students probably think that they’re inside a laboratory every time they come into class with me, trying these new things. One thing I’ve been doing a lot this year: there’s an approach in business that’s been around for a while called scenario planning. And I really love this approach, but I found I can use AI prompts to generate, as a series of these things, what could take a business a year-long complex project. It’s a way of looking at possible futures and saying, if we have more regulation or less regulation, if AI adoption is slow or fast, we do these two-by-two quadrants, give each quadrant a clever, memorable name, and then describe what’s in there. Well, I use that, and I divide my class into four groups and say, rather than just asking what you think about legislation of AI, let’s look at these four different possible worlds, and each group will discuss one of ’em and we’ll come back and do that. We’re jumping ahead. We’re sort of leapfrogging into a much deeper approach. 

I’ve had some conversations with people where I’m kind of like, I send my students, if we’re doing AI and intellectual property, my assignment is: go to Practical Law and get up to speed. That’s what you’re going to do in practice, especially if you’re a transactional lawyer, 

And to say, can I get you to use AI to come into class, so we can potentially start the conversation and the discussion in class at a much higher level? So the question would be, do I really care if you read the whole assignment, if you had instead used AI to say, what are the five key points here, or what questions might I ask? Because if that turns the outcome of that class into a much higher-level discussion, I think that’s great. And then I’ve also done some things where I would say, here’s the issue. Let’s see what AI says are the most important things, and why do we agree or disagree with that? So again, it’s thinking critically about what the AI does. And then I’m incredibly transparent about the prompting that I use, to say, here’s what we’ve done. Could we criticize these prompts? 

And then I also do a lot of, hey, this sounds really great, this sounds thorough compared to what you might think, but as a practicing lawyer, here are the things that it just left out. So how do we think about those things? And then I’ve done group assignments and simulations where I’ve basically prompted AI to act as a group and produce its own output. And then after the students talk about what they’ve come up with, I say, now let’s compare what AI said on this. Do we think it did a better job or a worse job, or did it help us see things that we didn’t? So I use it a lot. I mean, I think the key is just transparency, to say, here’s how I’m using it, and how do we think critically? But I like the idea that we can jumpstart conversations. If there is this commitment to in-person classes, and students just come into a class and you lecture to them while they’re sitting in a seat, I don’t know what we gain by having them in person. So I think we have to change how we teach and what the assignments are. And then, as Nicole and Tracy said, we definitely have to change the assessments. So those are the approaches that I take. 

Zack Glaser: 

Yeah, I love that idea, and I think all three of y’all have hit on this, of leapfrogging that base level of knowledge or base level of work and getting students further into the issue much more quickly. That seems to be the key. Okay, I could talk to you guys forever about this. I know y’all are really thinking about these things. The last thing I want to hit is: we’re training these young attorneys to think like this, to do this. How is that going to affect them coming into the practice of law? And more specifically, what is mentorship going to look like? Because I can see mentorship being a back and forth at this point, of like, hey, I’ll teach you how to not yell at a clerk if you teach me how to use AI. But yeah, the concept of mentorship as these students are coming out into practicing law, 

Tracy Norton: 

One thing that’s really exciting about AI is its use for first-generation lawyers. I find that AI is an incredible leveler when it comes to norms. AI itself is an incredible mentor, and I use it that way. I’m first-generation: college, law school, the whole nine yards. Even at this point in my career, I still run into situations where I’m like, I’m not sure how you’re supposed to handle that. And I use it for sticky situations. I had a situation with a couple of students who were really struggling with how to behave appropriately in a situation, and I wasn’t sure. I’m not a therapist, I’m not a life coach, I’m not any of those things. But it was happening in my class and I had to address it, and I didn’t want to just punt and say, you guys can’t act this way, cut it out. So I went to the AI and I said, here’s the situation, here’s what’s going on. I’d like to have this kind of conversation. I want a compassionate, straightforward, professional mentoring conversation. Give me five talking points. And it gave me five talking points. And I was like, oh my God. I could recognize, yes, that’s going to work, but I would never have come up with that on my own. So it was a good mentor for me, especially in a profession where, yes, you’re expected to have a mentor, but there are different levels of that relationship. Sometimes it’s just very formal: you have been assigned to be my mentor, and I will ask you the questions I think it’s appropriate to ask you, 

But there are always those questions that we want to ask that we don’t even know if we could ask or should ask. And I find that the AI is incredible, incredible for that. So I like it as a sort of supplemental mentor for really anybody. I mean, my youngest daughter is 11, she’s autistic, and she really struggles in social scenarios. And I was out of ideas of what to tell her. So I started going to the AI. I was like, this is the situation. She’s got level-one autism. These are her characteristics, this is what she’s concerned about. What advice can I give her? And it came up with some great advice. And when I gave it to her, she was like, Mommy, you’re the best. I knew you would know what to do. And I’m like, yeah, I know what to do. I go to the AI. So I think it’s going to change mentorship, like you said, that teach-me-the-tech-and-I’ll-teach-you-the-old-school-stuff dynamic that’s been going on for 25 or more years. But I think it’s going to change the access that more people have to some advice. 

Zack Glaser: 

I love when people blow my mind. I had not even almost thought of that. Yeah, that’s fascinating and awesome. 

Tracy Norton: 

Yeah, 

Zack Glaser: 

That’s going to be brilliant. Tracy, 

Tracy Norton: 

For you, Zack: I’m interviewing this person, here’s their bio from the web. What should I ask them? Give me the five questions they would most expect to be asked. 

Zack Glaser: 

That’s right over there. It’s right over there on my screen. Absolutely. Well, so yeah, I’m a second-generation attorney, and so I was lucky enough to roll into an operating practice and for my father to introduce me to other attorneys in the area. And when I went into court for the first time, I knew exactly what it was going to look like, because I had actually tagged along with him in court many times in many places. And I knew who to ask. I had built-in mentors. But the idea of asking artificial intelligence, what’s the general idea here, what’s the generally accepted practice? That’s awesome. That’s brilliant. 

Tracy Norton: 

I ask it what to wear to events. 

Dennis Kennedy: 

Yeah, yeah. What’s the meaning of business casual? So Zack, what you’re saying has come up in some of the things I’m doing with self-represented litigants: if I’m a self-represented litigant and I have access to some AI, that could just kind of help me navigate, 

Zack Glaser: 

Yeah, navigation, 

Dennis Kennedy: 

And know what to expect and how to think things through. That’s one of the areas I’ll be focusing on over the summer, self-represented litigants. But yeah, it’s interesting. I can’t remember whether it was Women in AI or where else I heard this, but one of the examples I’ve heard is AI as the advisor: if you’re the single dad who has to answer questions about the first period for your daughter, the AI could be a lifesaver. 

Zack Glaser: 

Right? Oh, man. Yeah, yeah. Absolutely. Absolutely. 

Nicole Morris: 

I was thinking, first of all, Tracy, I took notes, because clearly I’m not AI. So now I will be thinking of it as my counselor and parenting tool. But in legal practice, in terms of students coming out, there’s again a multifaceted response. I think the students coming into law school, maybe even as soon as fall ’25, definitely fall ’26 and ’27, are going to be way more AI savvy.

They’re going to know more and expect continued learning on that. So that’s one. Second, students who do get a fair dose of these tools, when they get into a law firm or in-house, are going to want to keep using them; you can’t unlearn what you’ve learned. So they’ll be teaching their partners or senior associates or whomever in their organization is unfamiliar. It’s sad to me, because I think they’ll hit a wall in many places where it’s like, you cannot use any generative AI within the firm. But there are several firms building their own internal large language models, so hopefully students will find a place to continue their skills there. But there’s clearly going to be reverse mentoring. I think there’s also an expectation from the client side that not all of my work is getting billed out in a traditional way. If you can save time and money, more importantly the money part, by using a tool, I’m going to leave this law firm and go to the firm that’s doing that for me. So that’s going to get forced down. 

Zack Glaser: 

Yeah, that’s a very good point. That’s a very good point. Well, unfortunately, we’re kind of out of time here. I really appreciate this conversation. I have learned a lot, and I think really what’s come out of this for me is that AI is here. I think you’re not necessarily even going to have to be teaching straight-up AI literacy in two years. 

Nicole Morris: 

Yeah, maybe not. 

Zack Glaser: 

What AI does is going to be taught in high school, elementary school soon. And then the other is, could you imagine coming out of law school, using ChatGPT, doing all these things, and then going to an office that doesn’t allow that? So to me, that’s a call to action to the firms out there listening: are you going to be able to hire people when you’re saying, we’re not going to use artificial intelligence? It kind of goes back to something Tracy was saying early on: you can’t tap out of this. You cannot tap out of this. And I like to say, you cannot retire out of this. You can’t retire fast enough to not use AI. It’s just not going to happen. 

Dennis Kennedy: 

You haven’t talked to some of the lawyers approaching retirement and tenured faculty that I have, who are planning to tap out of this completely, including one I was just texting with before we got on here. But yeah, I think the clients are going to drive a lot of this. It’s interesting. There was always this thing that big clients were not going to pay for first-year associate time. 

I’m hearing that’s moving out to second-year associate time as well, partially because of AI. But if you think about it, there’s this fantastic opportunity for law firms. If clients aren’t going to pay for that time, then you have the ability to use these law students to build out the AI product-type services, to try things and do the experiments, because you can’t bill for them anyway. And they’re the ones who are willing to do this. I have a number of students, including one in my class now, who worked at a firm over the summer, is working there still, and is part of the AI committee at the firm he’s at right now. And I’ve had people say, we’re looking forward to these new students coming out; they can help us navigate these waters. And that’s another thing that I feel we in legal education need to look at. But from the client side, I think that if we aren’t training lawyers who can help clients navigate all that’s happening in AI, I don’t know that those clients are going to be able to find representation that actually helps ’em. 

And that’s been a concern of mine for a while. If a lawyer says to me, I don’t know much about AI, I don’t know that I can hire them for almost anything these days, including traffic accidents, if they’re not aware of what sensors might be in the cars, the phones, the watches. So it’s a fascinating time. And as we said, it’s moving quickly too. 

Zack Glaser: 

Yeah. Well, I think that’s a good one to end on here. Dennis, Nicole, Tracy, thank you guys so much for your time. I really enjoyed this conversation and appreciate you guys lending your expertise. 

Dennis Kennedy: 

Yeah, you should have us back. This was really fun. 

Zack Glaser: 

I’m going to advocate for that. Absolutely. Thank you guys. 

Dennis Kennedy: 

Thanks for having us.

Your Hosts

Zack Glaser

is the Legal Tech Advisor at Lawyerist, where he assists the Lawyerist community in understanding and selecting appropriate technologies for their practices. He also writes product reviews and develops legal technology content helpful to lawyers and law firms. Zack is focused on helping Modern Lawyers find and create solutions to help assist their clients more effectively.

Featured Guests

Tracy L. M. Norton

Tracy L. M. Norton is a law professor at LSU’s Paul M. Hebert Law Center and a leading voice in legal writing and AI in law. With over 30 years of experience, she has authored influential publications and speaks nationally on technology’s role in legal education and practice. She is passionate about equipping lawyers with the skills to thrive in a tech-driven profession. She is the co-developer of The Law Profs’ AI Sandbox. 

Dennis Kennedy

Dennis Kennedy is an information technology lawyer and legal technology pioneer based in Ann Arbor, Michigan who is well-known for his role in promoting the use of technology in the practice of law. A professional speaker and an award-winning author with hundreds of publications to his credit, Dennis wrote the legal technology column for the ABA Journal for many years, has co-authored several books (most recently, The Lawyer’s Guide to Collaboration Tools and Technologies: Smart Ways to Work Together, Second Edition (ABA 2018)) and contributed to others, and co-hosts the long-running podcast, The Kennedy-Mighell Report, on the Legal Talk Network.

Nicole N. Morris

Nicole Morris is a Professor of Practice at Emory University School of Law and the inaugural Director of the Innovation & Legal Tech Initiative as well as the TI:GER® Program Director.

Last updated May 22nd, 2025