Episode Notes
AI is here to revolutionize your law firm, moving beyond simple prompts to actively handling your tasks. This Lawyerist Podcast episode is your essential guide. Zack Glaser sits down with Dr. Charreau Bell, a senior data scientist and assistant professor at Vanderbilt University, who’s helping law students and faculty use AI in truly meaningful ways.
This isn’t just theoretical—you’ll get real clarity on what it means to use AI well in your firm, right now. We break down the crucial difference between merely using an AI model and actually “training” one, so you understand what’s happening behind the scenes. You’ll also learn when it’s safe (and risky!) to send client data into cloud-based AI tools, and how you can even run powerful AI models right on your own computer for ultimate privacy.
But here’s where it gets really exciting: we explore how AI is evolving beyond simple prompts into “agentic systems” that can plan, reason, and act on your behalf. Think of AI not just as a tool, but as a proactive assistant that can break down complex tasks and execute them using various “tools” you provide. This isn’t just about automating simple tasks; it’s about offloading work that previously required significant expertise, freeing you to focus on more creative and complex legal challenges.
If this episode sparks questions about the data in your firm and how you can leverage it, check out the free Small Firm Scorecard. It’s a quick assessment to help you get a data-driven view of your firm and identify areas for smart improvements as you embark on your AI journey.
If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? Check out our coaching community to see if it's right for you.
- 03:20. How Data Science Helps Lawyers
- 13:25. Secure AI: Running Models Locally and with Cloud Providers
- 43:46. The Future of AI: Automating the Unwanted
Transcript
Zack Glaser:
Hi, I’m Zack, and this is episode 562 of the Lawyerist Podcast, part of the Legal Talk Network. Today I talked with Dr. Charreau Bell about how artificial intelligence is showing up in legal work, from local private LLMs that protect client data to powerful agentic systems that can do real legal tasks, and what small firm lawyers need to understand to stay ahead of the curve. So let’s be honest, AI is everywhere right now, but that doesn’t mean most lawyers feel like they know what to do with it, right? Maybe you’ve messed around with ChatGPT or heard someone mention local LLMs or AI agents, but it still feels like another tech trend that might not apply to your day to day. So this episode is going to change that. Dr. Charreau Bell is a senior data scientist at the Vanderbilt Data Science Institute, an assistant professor of the practice of computer science, and the director of the Data Science minor at Vanderbilt.
More importantly for us, though, she’s deeply involved in helping law students and faculty use artificial intelligence in real, meaningful ways, from pulling structured data out of case law to running secure local language models that respect privacy. In our conversation, we talk about what it actually means to use AI well in your firm. We break down the difference between training a model and just using one. Then we talk about when it’s safe or risky to send client data into cloud-based tools, and we explore how AI is evolving beyond simple prompting into something far more powerful: agentic systems that can plan, reason, and act on your behalf. Now, if that sounds technical, I promise you this is one of the most accessible explanations you’re going to hear on the topic. Charreau has a gift for making this stuff just click. You’ll come away from this episode not just with ideas, but with real clarity about what’s possible today and what is coming fast. And if this episode sparks a thought, like, what kind of data do I even have in my firm? What could I track or automate? Check out our free Small Firm Scorecard. It’s a quick assessment tool that helps you get a data-driven view of how your firm is doing and potentially where you can make smart improvements. Think of it as a baseline for your AI journey. Alright, let’s get into it. Here is my conversation with Dr. Charreau Bell.
Dr. Charreau Bell:
Hi, I’m Charreau Bell. I am a senior data scientist at Vanderbilt’s Data Science Institute. I’m also an assistant professor of the practice of computer science and the director of the data science minor at Vanderbilt.
Zack Glaser:
Thank you for being with me, Dr. Bell. I appreciate you being with us. That’s a lot of titles, that’s a lot of education. But for the most part, you study data science, you study computer science, right?
Dr. Charreau Bell:
Yeah.
Zack Glaser:
Let’s tee up some things here a little bit. What does that kind of entail day to day? Let’s pretend, let’s go into a fantasy land where I don’t know what that means. Give me a little bit about what you study and what you look into.
Dr. Charreau Bell:
So in my current role as a senior data scientist at the DSI, a lot of my work focuses on helping students, faculty, and industry collaborators understand how they can use data science and how they can use AI for whatever particular purpose they’re looking for in their work. Some people are really interested in doing things in a more standardized way, or in a way that helps facilitate more efficiency in their processes. And some people are looking for novel implementations, doing things differently and in unexpected ways, new ideas. But whatever the case is, we’re always excited to see them, and we’re always excited to help and train people on what it means to use data science and what it means to use AI.
Zack Glaser:
Okay. So you have really kind of been in the trenches with a lot of different types of people, a lot of different types of thoughts, a lot of different types of use cases kind of brought people along to say, this is how we can use information, this is how we can use the information that you’re getting, or this is how you can get more information. What are kind of the things that you’re looking at if we can get into a little bit of specificity?
Dr. Charreau Bell:
That’s a really good question. So I think what we normally see is people who have a lot of documents. Let’s say that you are trying to figure out something like labeling. What is labeling? It means you have a document with some information in it, and maybe you’re trying to pull some information out of it. So you have maybe a scan of a contract, or you have big, long leases like we’re talking about, and we’re trying to pull information out of them. Normally what you would have is someone sit down and read through this really long contract and say, all right, well, the contract is with Johnny, and their address is 1213 Midtown Lane or whatever. And you have so many of these, or so many questions or pieces of information that need to be pulled out of these documents, that it would take someone a really, really long time. And a lot of times the expertise of the person who would do something like that would be better used toward other things.
Zack Glaser:
And
Dr. Charreau Bell:
So people want to know, I have this task that really, really needs to be done. So much depends on it downstream. I would like to have the person who would be doing this focused on other tasks. And so that’s where AI tends to come into the picture a lot. And this is because especially this day and age, AI is good at these sorts of tasks.
Zack Glaser:
And yeah, that’s kind of the stuff like you said, that may be a first year law student or associate or intern or something like that might be looking into, and you specifically see this, you work with Vanderbilt Law School and the students and the professors there to kind of run down some of these things in those areas as well, right?
Dr. Charreau Bell:
Yeah, for sure. So in the law school, they have such great insight and great understanding of how AI can move law forward and be used in law. They look across such a wide variety of different tasks, from tracking legislation that’s ongoing and using AI to search for information like that, to looking at different kinds of court cases and pulling information out of those cases. So AI is quite helpful in being able to do those things, and it lightens a lot of the load for the RAs who would be doing them, so that they can focus on, one, the research that they’re actually trying to accomplish, and two, reviewing, instead of having to sit there and type out the thing they’re trying to pull out. Instead, they can look at the document, look at the results from the AI, and review those things, make sure that they’re accurate, and not spend all their time actually generating these things to get them into this form.
Zack Glaser:
One of the things that I’m kind of pulling out of this, and you and I were talking about this before we started, is that we’re at kind of a second level of using artificial intelligence here. And this is not anything technical, but if I’m going into ChatGPT and saying, hey, help me suss out this issue, that’s kind of one level of messing with AI, and I’m really dumbing this down. But then there’s a second level here, and I think this is the dream of a lot of attorneys: to say, okay, I know there’s a tool out there, some way that I can point artificial intelligence, this big mass that is the concept of artificial intelligence, at my own data or at some other data and get information out of it. What is that? What is the tool? What are we physically doing there in using artificial intelligence to, let’s say, look at legislation and maybe do some research on legislation?
Dr. Charreau Bell:
I love that question, and I’m probably going to go in 16 different directions because I just want to talk about it from every single angle. But even just thinking about what generative AI is in particular:
These platforms are these mathematical models that are just trained on huge swaths of data. They’re trained, really foundationally, to just predict the next token, and we’re going to call this thing that they’re predicting a token. It’s really just a sub-piece of a word, or some part of a word, that allows the model to understand and be able to model things. So we have this thing that is trained on huge amounts of data for really long periods of time. It has seen so much text, it has extracted the semantic meanings of words and sentences and concepts. And when it’s done that, now it has an understanding of, generally, the English language, or things about the English language. So we were just talking about leases, and because we’re on this particular podcast, we understand that we’re probably talking about something relating to housing. If this was a car podcast, we might be talking about car leases. And because of the amount of information these LLMs, these AI models, have been trained on, the AI would be able to understand, if it was trying to make a summary right now, that the leases we’re talking about are probably leases in a legal, housing sense, as opposed to if it was transcribing the same conversation on the Car-urist, I don’t know, podcast, where we would be talking about cars.
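[Editor’s note: the next-token prediction Dr. Bell describes can be sketched with a toy bigram model. This is illustrative only; real LLMs use neural networks over sub-word tokens, not word counts, and the tiny corpus below is made up.]

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words most often follow it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, or None if the word was never seen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the tenant signs the lease and the tenant pays the rent"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "tenant", the most frequent follower
```

The same idea, scaled up to billions of parameters and vastly more text, is what lets a real model guess that “lease” on a legal podcast probably means housing, not cars.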
Zack Glaser:
We need to create the Car-urist podcast now. Now we’ve got to go do that.
Dr. Charreau Bell:
But training like that is really what’s powering the behavior of these LLMs. And so one thing that we have really come to love with the development of AI is the idea of what we like to call context length. This is how much information the model is able to think about, or compute upon, at the same time. That is such an incredible piece of progress in AI, because I don’t know if you remember back in the day when we could put in just maybe a couple of sentences and it could complete that sentence; now we’re able to put in even a whole book and then ask questions about that book. We’re able to put in a set of leases and a book, and it’s able to think about all the things in the book alongside all the things in the leases. And maybe you ask a question about the leases and it references some information in the book. And so with that augmented context length, being able to compute over all of those things at the same exact time, we have a really powerful way of putting together foundational knowledge, like things that would be in a book, with things that we might be working with every day, and then using those things at the same exact time.
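[Editor’s note: a rough sketch of what “context length” means in practice, using the common rule of thumb of about 4 characters per token for English text. The numbers and heuristic are illustrative, not from any specific model.]

```python
def approx_tokens(text):
    # Rough heuristic: roughly 1 token per 4 characters of English text.
    return max(1, len(text) // 4)

def fits_in_context(docs, context_length):
    """Can all the documents be 'thought about' at the same time?"""
    total = sum(approx_tokens(d) for d in docs)
    return total <= context_length, total

book = "x" * 400_000   # a short book, roughly 100k tokens
lease = "y" * 40_000   # a long lease, roughly 10k tokens
ok, total = fits_in_context([book, lease], context_length=128_000)
print(ok, total)  # prints: True 110000
```

If the total exceeds the model’s context length, the extra material simply cannot be computed on at once, which is why growing context windows matter so much.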
Zack Glaser:
So I guess I’ve got two questions there, one, and I’m going to show how uninformed I am with both these questions. One is where are we putting these things? That’s such a good question. Where am I putting these things? And two, is that training, is that the model or is that extra on top of something or does the information have to go into the model in order to be considered training of it?
And does that matter?
15 different questions. I’m going to hit you with 11 questions at the same time. It’s a really good interviewing tactic right there.
Dr. Charreau Bell:
I love both of those questions. So in my current role, a lot of what we also do is training. And we use these platforms that help people understand the breakdown of something like a ChatGPT-type platform, how these platforms are actually put together. What we always see in our schematic, or our block diagrams, of how these systems work together is this block that’s always called LLM or AI model. And when I say LLM, I mean large language model. This is the thing that powers GPT from OpenAI: large language model, because it’s really large and it’s a model of language. So LLM, large language model. But what happens is we can see the data moving to a point where it gets to the LLM, and then we get a response back. So I think your first question is, what’s that magic right there? And usually, if you’re using a particular provider, that magic is you sending data up to that provider. You’re sending information to OpenAI, you’re sending it to Grok, what have you. You’re sending this information to them, they’re putting it through their LLM or their AI system, and they’re sending it back to you. And so even in thinking about that, we know that it’s really important that we don’t upload things that we don’t want to be just elsewhere. If there are things that have privacy associated with them, or laws associated with how you can handle them, then we’re not putting those into those systems unless we have existing agreements. But yeah, it’s going elsewhere.
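[Editor’s note: the “data goes up to the provider” flow can be sketched as building a request payload. The payload shape below mirrors common chat-completion APIs but is generic, and the redaction step is a toy illustration of the privacy concern Dr. Bell raises, not a substitute for a real data-handling agreement or review.]

```python
import re

def redact(text):
    """Strip obvious personal data before it leaves your machine (toy sketch only)."""
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)       # SSN-shaped numbers
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.\w+\b", "[EMAIL]", text)  # email addresses
    return text

def build_request(user_message, model="some-hosted-model"):
    """Shape of a typical chat-completion payload sent up to a cloud provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": redact(user_message)}],
    }

req = build_request("Client John, SSN 123-45-6789, email john@example.com")
print(req["messages"][0]["content"])  # prints: Client John, SSN [SSN], email [EMAIL]
```

The point is that everything in `messages` travels to someone else’s servers; anything you do not want elsewhere has to be removed (or the request not sent at all) before this step.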
Zack Glaser:
Certainly. And you and I were talking about this earlier too: you certainly don’t want to send client information into a public LLM, you probably don’t want to send it into Grok. And we won’t necessarily get into this, but there are ways to make those kinds of models private, to bring them down onto your own hardware, I believe. Well, actually, yeah, let’s get into that. I lied. I lied. Let’s not gloss over that. So that is that box there that says input in, and then a miracle occurs,
Dr. Charreau Bell:
A miracle
Zack Glaser:
Output. So if I want to protect that data, I could have that at my office, or on an AWS machine or something like that.
Dr. Charreau Bell:
So there are ways to do it with cloud providers. Like AWS: they have their own way of making your access to your AI system very private to you. The same way that your data might live on AWS with privacy controls associated with it, you can have that same sort of behavior for AI with providers like AWS or Azure. A second thing is, I’m just beyond excited that you mentioned even working locally on your own computer. With the advances in technology, and the advances in computers and their hardware, it is absolutely possible to run some pretty big, powerful models on regular consumer-grade hardware.
For example, this is the computer that I’m talking to you through right now. This is a Mac; I think it has maybe an M2 processor. But it has so much, I’m not going to try to get too technical with it, but it has so much memory available that you can download a model and run it, and it’s performant to the point of something like an o1 or o3 model from OpenAI. And so what that means is that we can get some amount of reasoning, we can get very powerful types of intelligence, locally. Often it is just a little bit different. Sometimes you do have to have a little bit more actual training, and I know we’re talking about that in a second, but you might have a little bit more training or a little bit more prompting that you might go through. But after that, when you think about it, once you’ve got your hardware system and you’re able to download these models, and when I say download the model, again, it’s this mathematical thing, a set of numbers; you download it, you turn off your internet, you disconnect your internet, and then you run the models. I mean, it’s completely private to your computer.
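[Editor’s note: a back-of-the-envelope sketch of why local models fit on consumer hardware. The formula counts only the weights, ignoring activations and runtime overhead, and the 8-billion-parameter and 4-bit-quantization figures are illustrative examples, not a specific product’s numbers.]

```python
def model_memory_gb(n_params_billion, bytes_per_param=2):
    """Rough RAM needed just to hold the model weights.

    bytes_per_param: 2 for 16-bit weights, ~0.5 for 4-bit quantized weights.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# An 8-billion-parameter model, 4-bit quantized (~0.5 bytes per parameter):
print(model_memory_gb(8, bytes_per_param=0.5))  # prints: 4.0 (GB of weights)

# The same model at 16-bit precision needs four times that:
print(model_memory_gb(8))  # prints: 16.0
```

This is why quantized open models in the single-digit-billions range run comfortably on a Mac with unified memory: the weights themselves fit in a few gigabytes of RAM.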
Zack Glaser:
One of the things that I think of right there is translation, translation of client documents. We’ve been doing translating for a long time; it doesn’t take the latest, greatest model to do that. But I do not want to send my client documents out into the ether. And so I can envision bringing that down onto one of my computers and, okay, well, let’s translate this into Spanish.
Dr. Charreau Bell:
Yeah, absolutely, absolutely. And so many models are now just multilingual. We were talking about the Lawyerist and the Car-urist and being able to understand even that joke in different languages, because we have the relationship between the word car and the word car in Spanish and the word car in Greek or whatever. So you can do a really nice translation, as opposed to just something word for word. Yeah, that’s definitely an interesting use case.
Zack Glaser:
Okay, so talking about specific use cases for attorneys, because I can see gears turning in listeners’ heads right now, like, oh man. Because we’ve got a lot of tinkerers out there, and they know that they’d be dangerous even with the model out in the ether, sending stuff to OpenAI or whatever. Do I need a lot of information? Let’s say I’ve brought a model onto my computer, and I have a lot of client files on my computer. Let’s say I have 20 leases. I feel like that’s not going to be enough information to train a model on to then be able to spit out, like, hey, I want a lease, but here’s the scenario that’s happening. Please make my lease with the changes that would be needed for this situation.
Dr. Charreau Bell:
That is a really interesting point. So let me roll back, then, to the question that you asked about training. What is training a model? You will know when you’re training a model; it’s a whole framework of things that you have to do. For example, you would be preparing a dataset, potentially in a very specific way, so that the model can receive it in a way that is compatible with its previous training. You would actually be doing what we call changing the weights of the model, the weights of the model being the values that essentially define what the model is. So if you think about, I don’t know, Nashville, which is where Vanderbilt is, if you think about Nashville being defined as something that’s like country music and Westerns, and then Vanderbilt and medical systems and hospitals: when we’re training a model and changing model weights, the change of a weight might be that we now think of Nashville as less associated with Western music or country music, and more associated with all kinds of music. Or we might have it less associated with just specifically Vanderbilt, and now also associated with the Vanderbilt Medical Center, or with Baptist or Ascension. And so we’re growing the actual information that’s in the model.
So if that wasn’t overly helpful, the way I also think about it, and this is not going to be overly accurate, but this is actually how I think about it, and maybe the lawyers will really appreciate this: whenever you’re studying for something, you’re trying to learn something and put it into your brain in a memorable way. As far as I know, there’s no list or set of texts that exists in your brain as the actual text, or even the number, or the chemical signals of that actual text. You’re somehow assembling it across your neurons so that the information is represented as you take it in. And then when you go to sleep, all that information is kind of being put together. They say that if you study well before a test and you sleep on it, instead of just studying through the night, then because of the way your brain works, it rebuilds those brain connections. It’s like retraining your neural network so that it holds the information that was taken in.
So this is what training a model is. You have to go to sleep and have your brain restructure, re-weight, so that it represents that information.
Zack Glaser:
Okay,
Dr. Charreau Bell:
This is different from what we’re talking about with our AI systems when we’re putting in a whole bunch of data at the same time. That’s more like when you get to the test. You’ve had your sleep, you’ve already assembled the information because you studied, you get to the test, and the professor gives you the document, and now you’re essentially just computing on that document. So you’re using your brain, the weights that you trained because you studied and learned things, and you have this new input, and you’re just processing that new input based on what your brain knows, and producing an output. You’re not really changing the configuration of anything in the brain part. You’re just using it as a computation on the input to produce the output. And this is basically what we’re talking about with LLMs. If you download an LLM, you’ll know if you’re training that LLM. But with most platforms, what you’re doing is uploading information and using the model in the state where it’s been trained. It’s done its studying, so you’re putting information into it, it’s computing on that based on what it has already learned, and it’s producing an output. We’re not changing anything about that model.
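[Editor’s note: the training-versus-inference distinction Dr. Bell draws can be shown with a one-weight toy model. Training nudges the weight (the “sleep and re-weight” step); inference just computes on new input with the weight frozen. The single-weight model and learning rate are purely illustrative.]

```python
def forward(w, x):
    """Inference: compute on the input with a fixed weight. Nothing changes."""
    return w * x

def train_step(w, x, y_true, lr=0.1):
    """Training: nudge the weight to reduce squared error (this changes the model)."""
    y_pred = forward(w, x)
    grad = 2 * (y_pred - y_true) * x   # derivative of (w*x - y)^2 with respect to w
    return w - lr * grad

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, y_true=3.0)   # teach the model that f(1) should be 3

print(round(w, 2))      # weight has converged toward 3.0
print(forward(w, 2.0))  # inference on new input; calling this never changes w
```

Uploading documents to a chat platform is the `forward` call: the model computes on your input, but its weights stay exactly as they were.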
Zack Glaser:
Okay. Okay. I love how you brought sleeping on studying into this, because I always figured it was Big Mattress who was telling us that we had to sleep before tests. It was
Dr. Charreau Bell:
Maybe
Zack Glaser:
It’s a big cabal of Big Mattress being like, you’ve got to go to sleep in order for all this information to get in there. But yeah, that makes sense: you’re training it, so it’s learning; then you’re using it, so it’s computing on something. Let’s then bring it around, if you don’t mind, to when I am training it. How much data do I need? I think about reCAPTCHA, the years and years and years of us training the freaking models to read text or to find bicycles amongst the bushes or whatever. And that’s a lot of data. For people who don’t know what I’m talking about: Google’s CAPTCHA thing that says “I’m not a robot.” When you’re saying, oh hey, this whatever-it-is looks like chicken scratch, and you’re like, that’s “mc five BD,” well, you’re helping train the model, or helping the model learn what an M looks like and what the parameters around an M are. That’s a lot of information that was pumped in over those years of us doing that. Can I train on just... if I’m just a solo attorney, do I have enough stuff to really even make it relevant and worth it?
Dr. Charreau Bell:
So one thing about the AI models that we’re talking about is that, depending on exactly which one, you can actually train them further. Depending on that relationship you have, that enterprise relationship someone has with a particular platform, you can upload your actual documents,
If you upload them in a particular way, you actually can have a model trained for you, even if you don’t have that many documents. And the reason for this is how the models are created. So maybe 2018, 2019, there was this idea of transfer learning, where someone out there has the money and the computational power to train a really large model on tons and tons and tons of data. And after that is completed, it has a good understanding, as we were saying with large language models, of language and concepts and how those things relate together.
And so the way that I think about it is, after a model has been what we’re going to call pre-trained, so trained on all of this data, there’s a differential in going from knowing some things, being maybe a high school student with that amount of knowledge and reasoning, to being specialized: now I am an intern at a hospital, and so I can do intern hospital thingies as a high school student. Some high school students go and get internships at hospitals, for example, I guess also at law firms, to understand what it’s like. But that incremental information that they need to know, that is how you’re training the model. When you first get your high school student in the role, you might give them some things to read, you might give them, I don’t know, some training documents, some training software.
So they have this differential of being specialized in the role that you want, but you didn’t have to teach them how to talk, you didn’t have to teach them how to walk, you didn’t have to train them in English or whatever language. And that is what pre-training does. Pre-training, someone doing this thing with lots of computation, gives you essentially the high school student. But then you, with a smaller amount of information, remember, what we gave our high school student to become an intern was some training documents, not even that many, maybe some software, some experiences, but with that smaller amount of information, you can train a model so that it has a specialization in this area. And that is how we can use these pre-trained models even more effectively. Again, if you have a pretty good system, and it could be consumer grade like the Mac that I’m talking about, you can train a smaller model based on the documents that you have. There are various open-source models that come in different sizes, and you can take your very few leases and train a model to better represent or better understand the domain of whatever your leases are. So maybe it’s an expert in this very narrow space of lease law, lease law for your particular law firm, very narrow. But this is something that you could definitely do. And again, the highest amount of training has already been done if you have this pre-trained model.
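[Editor’s note: the transfer-learning point, that fine-tuning a pre-trained model takes far less data and effort than training from scratch, can be illustrated with the same toy one-weight model. Starting near the target (the “pre-trained high school student”) converges in far fewer steps than starting from zero. The numbers are illustrative only.]

```python
def train_step(w, x, y_true, lr=0.1):
    # One gradient-descent update on a single weight (toy model).
    return w - lr * 2 * (w * x - y_true) * x

def steps_to_fit(w, target, tol=0.01):
    """How many updates until the model fits the new task?"""
    steps = 0
    while abs(w - target) > tol:
        w = train_step(w, x=1.0, y_true=target)
        steps += 1
    return steps

# Training from scratch (w = 0) vs. starting from a "pre-trained" weight
# that is already close to the new, specialized task:
print(steps_to_fit(0.0, target=3.0))  # from scratch: many steps
print(steps_to_fit(2.9, target=3.0))  # fine-tuning: far fewer steps
```

This is the mechanic behind training a small open-source model on 20 leases: the expensive general learning is already done, and your documents only have to cover the last, narrow differential.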
Zack Glaser:
Okay, that makes sense. It took a ton of information to teach this model how to read, essentially. Okay, well, it’s been taught how to read. I don’t have to go teach it how to read, and the thing that I’m teaching it isn’t of the level and heft of teaching it to read, or to translate, or to do math in the first place. We’re talking about: it needs to be able to number my leases appropriately or know what clauses are coming in, and it already knows how to read, it already knows how to contextually figure out what’s going on. Okay. Okay. So now my brain is going all over the place with stuff that I could potentially do inside my office. I want to get a little bit away from the LLM side of AI and talk about agentic AI. Because there are two things that I think of in a law firm when I think of doing things. One is doing research, or acting upon a document, or getting information out of somewhere. And the other is making it do something for me, making it do something based on an action I took. Now, I’ve built so many Zaps and Power Automates and little automations, but with these sorts of agentic AI, we’re not talking about me needing to actually put in the “if this, then that.” How do I approach that? How do I connect with that in my office?
Dr. Charreau Bell:
So this is such an interesting question. There have been such incredible advances in this area when it comes to agentic systems. Agentic systems, you can think of... I mean, you of course know what I’m talking about. You were already talking about Power Automate.
Zack Glaser:
Pretend I don’t know. Always pretend I’m an idiot. Go ahead and explain it to me.
Dr. Charreau Bell:
Okay, so these agentic systems. A lot of the terminology around them represents ideas that we already know about. So agency: agency is when you can make the decisions and take actions; you can decide what to do in pursuit of some overall goal. The thing about agents is that an agent is often taking multiple steps to achieve a purpose. When we think about these LLM AI systems, generally you’re thinking, hey, could you write me an email? Then it writes the email. With an agentic system, what we might be thinking is, hey, could you find out more about this particular, I don’t know, company online, and then use that in combination with these documents that I have to write a really good email to someone to do something. And so instead of just doing the one thing you asked, writing the email, it’s able to break down the tasks that you’re looking for and then do those tasks in order to complete the overall objective.
And so this is a very nice thing, and it’s become very, very powerful with the introduction of reasoning models, because at their baseline, reasoning models are generating these types of plans, these thoughts, for how they should approach doing the thing that the user wants. That has been a major push forward in the area of agents. But coming back to agents: they usually are more than what we would think of as LLMs, but they’re increasingly being powered by LLMs, because of the capacity of LLMs to lay out a plan or deal with certain amounts of ambiguity. And so our LLMs are beginning to generate this plan. What we do is give them access to certain tools, and giving them access to certain tools means that we do not allow access to other tools. What is a tool? It is basically exactly what it sounds like: anything the model can grab and say, hey, you know what, I need to search the web. Let me grab that tool of Google, let me grab that tool of DuckDuckGo search, and let me figure out the search terms that I need to put into this tool so that I can get the result back.
So it has access to tools like this. Maybe web search, or what OpenAI would call Code Interpreter, which is something that executes code so that you can generate different plots or graphs or figures. If you put in data, it can actually execute the code and return figures and plots and images for you to put in a report.
Other tools might even be able to access things on your own computer. So let’s say that you have a directory of contracts, and you want it to look up a particular contract or a set of contracts that are relevant. Then this can become a tool. Maybe you’re asking it to write about the company Mouse Maker, and it has access to the tool of being able to pull files from this particular directory. And so it’s able to look and say, oh, I see a file, Mouse Maker. And I also see a file, Keyboard Maker, and maybe Keyboard Phone Maker. Those might be helpful as well. Let me grab all of those and see what we can do. So these agentic systems are able to do more than just generate a single text output: they’re able to form plans and then execute those plans alongside any tools that you give them access to, so that they can complete the overall objective.
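The plan-and-execute loop Dr. Bell describes can be sketched in a few lines of Python. This is a vendor-agnostic illustration, not any real product’s API: the tool functions and the `call_llm` callback are hypothetical stand-ins for a real search API, a real file reader, and a real model call.

```python
# Minimal sketch of an agentic tool-use loop (hypothetical, vendor-agnostic).
# The LLM proposes a step; if the step names a tool, we run it and feed the
# result back into the conversation; otherwise the loop ends with an answer.

def web_search(query: str) -> str:
    # Stand-in for a real search tool (e.g. a DuckDuckGo API call).
    return f"search results for: {query}"

def read_file(path: str) -> str:
    # Stand-in for pulling a contract out of a local directory.
    return f"contents of {path}"

TOOLS = {"web_search": web_search, "read_file": read_file}

def run_agent(goal: str, call_llm) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(10):               # cap the number of steps
        step = call_llm(history)      # e.g. {"tool": "web_search", "arg": "..."}
        if step.get("tool") in TOOLS:
            result = TOOLS[step["tool"]](step["arg"])
            history.append(result)    # tool output becomes new context
        else:
            return step["answer"]     # no tool requested: we're done
    return "step limit reached"
```

The important design point is the loop itself: the model is called repeatedly, and each tool result is appended to the context before the next call, which is how a single prompt turns into a multi-step plan.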
Zack Glaser:
That’s exciting. More than exciting, though, it scares the shit out of me, because what if I haven’t limited the tools appropriately? What if one of those tools is my phone, or my texting, or my credit card to buy something on Amazon? But it excites me, because that’s the type of thing I could potentially have said was too difficult for an entry-level position in my office, so I had to hire somebody with skills to do it. We’re not talking about just taking over answering the phones or filling out the labels on my folders. It’s been so long since I’ve used actual physical folders, I couldn’t think of the word. When I started practicing with my father, we had a high school kid come in, and they would type out the labels on an electronic typewriter. This was not that long ago. An electronic typewriter! And they’re like, what is this thing? But this is beyond even that type of task. So we’re talking about, is AI coming for your job? This is the place where, to me, it’s coming for jobs. It’s coming for tasks, a hundred percent.
Dr. Charreau Bell:
Yeah.
Zack Glaser:
So how do I interact with this? Can I just type into ChatGPT or something like, hi, please do these things for me? I couldn’t get it to do that. Thankfully, I’ve got to get some other tool, right?
Dr. Charreau Bell:
It depends on what you’re trying to have it do. Especially if you use something called Claude Desktop, because they have this thing called the Model Context Protocol, MCP. I just think it’s awesome, and I never really thought about it outside of it being MCP.
And so what that is, is a way of setting up tools that Anthropic’s Claude models can have access to on your computer, or whatever tool it is that’s been created. So let’s say you want Claude Desktop to be, I don’t know, looking at your Outlook calendar and then planning out your day, how you spend your time. Or maybe you want it to look at all of your Slack messages and then figure out how you fit those in with your Outlook calendar, because you have these crazy things going on in Slack. Really, the advent of MCP opened up the doors for a lot, even if you don’t have these really fine-grained, specialized systems. I will say, I think we’re still in the pretty early stages of agentic generative AI, agents powered by generative AI and LLMs, but I think we’re going to be moving quick.
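As one concrete (and hedged) illustration of what this setup looks like: Claude Desktop reads a JSON configuration file listing the MCP servers it is allowed to launch. The server name and path below are made up for the example; the actual file, `claude_desktop_config.json`, lives in a platform-specific location.

```json
{
  "mcpServers": {
    "contracts": {
      "command": "python",
      "args": ["/path/to/contracts_server.py"]
    }
  }
}
```

Each entry tells Claude Desktop how to start a local process that exposes tools over the protocol (say, "list the contracts in this directory"); Claude can then call those tools during a conversation.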
And the reason we’re going to be moving quick is especially the power of those reasoning models. I remember when o3 first came out, and it was just incredibly powerful, and then later they added on the tool-use part of it. So instead of just thinking or reasoning around these particular tasks that should be done, it introduced the ability of the model to actually do those things: go out and search, or go out and execute this code. That differential, in that really short timeframe, I mean, that’s building things quickly that are really amazing. And that’s kind of the pace of AI. It is just driving forward, and agent systems are going to be, wow. I mean, we have reason to be concerned, and of course we should, not be critical exactly, but just have our eyes open to what is happening in AI and how it can affect us. But boy, is it an interesting time. It’s an interesting time.
Zack Glaser:
Well, okay, just really quickly, because I know you talked to me before we got going about making sure we talked about security. With something like Claude, even with the desktop app, that is an app that is sending information out to Anthropic, out in the world. And so we wouldn’t want to give it access to client files or sensitive data. But I can envision a scenario where it has access to enough things that it’s worth it, and we could also, I assume, have some sort of product that had appropriate security agreements.
Dr. Charreau Bell:
Yes.
Zack Glaser:
Some business agreements and things like that.
Dr. Charreau Bell:
I was going to say really quick that the ability to deploy this context protocol, this MCP thing, again ties back into the local models. Although there’s not a really easy commercial product to make it happen, if you’re sort of a tinkerer, and although we’re talking about servers, there are also servers that run locally. You turn off your internet, you go in a closet ten miles below the earth, and it’s not calling out to anything. It’s just using whatever you have on your computer. Now, if the tool is internet access, that’s not going to work. But if it’s anything on your computer that doesn’t need the internet, again, you haven’t done anything awful, so...
Zack Glaser:
Or anything wrong. Okay, so I’ve got two things before we wrap up here. One is, I have spent years telling people to clean and sanitize and organize their data. But I feel like there’s a point here where we can get a little messier with our data now, we can have messier data. The comparison I use is, growing up using computers, we always had to nest information underneath folders. That’s how you organized stuff. This newer generation, or, let’s not do a generational thing: now you can simply search for things. And so folders, from my perspective, become a way to secure things as opposed to a way to organize them. Is that type of thing also happening with data as it relates to being able to use it intelligently? I feel like artificial intelligence could either quickly help you clean up your data, or be able to kind of reason through the mess that you have on your floor of printouts, or whatever.
Dr. Charreau Bell:
This is such a fascinating question, man. I’ve said that every single time: what a good question, what an interesting question. I just love them all. They’re just so good. But honestly, it’s such an interesting question, because it is the age of being able to put things into LLMs.
I was looking the other day at a website, because someone was asking me about creating this platform that could search the web to find out information from a particular organization, but the performance of the model was just not good, not good at all. And there just seemed to be no reason for it. Then I looked at the structure of their website, where things were just nested, nested, nested. And even looking at the actual code behind it, it was not really accessible at all to this AI system. So having the information there in a really nice way that’s digestible for LLMs matters.
Even what you’re talking about, pulling things out of directories to be able to put them into the context of the LLMs, this really is a thing that people are doing, and it’s really a great way to work with LLMs. We were just talking about MCP; one of the things they say on their website is, Hey, here you go, here’s this really big, long text document of all of this information about how you can code with our platform. And it’s just this big old long document that you’re supposed to put in when you’re working with Claude. Just put this huge, long text file into Claude, and then you can just ask questions, because the source of the answer is actually right there with the question. And I think that’s really interesting. There are a lot of development platforms that, of course, give you the way you can connect with their platform programmatically. They give it to you in a structured form, because as humans, we’re used to that and we have to be able to navigate it. But then they’ll also give it to you in the big, long LLM form, and it’s usually called llms.txt. And so it’s really for putting into the LLMs if you want to create a platform and you want to use AI to help you write the code. Tinkerers out there, this might be your moment, but it’s definitely a thing.
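The llms.txt pattern described here is simple enough to sketch: you prepend the platform’s entire plain-text documentation to your prompt, so the source material for the answer travels with the question. The function name and prompt wording below are illustrative assumptions, not any platform’s actual format.

```python
# Sketch of the llms.txt pattern: prepend a platform's full plain-text
# docs to the prompt so the model can answer questions from them directly.

def build_prompt(question: str, docs: str) -> str:
    # docs would normally be the downloaded llms.txt file, read as one string.
    return (
        "Reference documentation:\n"
        f"{docs}\n\n"
        f"Question: {question}\n"
        "Answer using only the documentation above."
    )

# In practice: docs = open("llms.txt").read(), then send build_prompt(...)
# to whatever chat model you use.
```

The point is that nothing clever happens at retrieval time; the whole reference fits in the model’s context window, which is exactly why vendors publish it as one long flat file rather than a nested website.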
Zack Glaser:
So visually structuring things for AI may not be that important, but the structure of things, how we put data into things, where they live next to each other, might provide context, might provide more information for these models. Okay, so listeners out there: continue to sanitize and organize your data in a decent way, because it’ll probably give better answers, at the very least. I always envision my AI models getting mad at me. Come on, man, clean your room. And then it’s like, I’m not going to give you as good of an answer. I’m going to kind of phone this one in. Which is really bad, because I use ChatGPT throughout the day, and I envision it as an expert who’s sitting next to me on their phone, not paying attention to me. They’re just giving me answers. And so when I need a real answer, I have to be like, Hey, hey, pay attention to me. Okay, is this real? Did you double-check it? What are your assumptions? And it’s like, oh, sorry. It’ll literally say things like, sorry, I didn’t realize that you wanted me to be that specific. I’m like...
Dr. Charreau Bell:
And a lot of times, the better the data you give to the system, the less it has to be kind of distracted by nonsense. I will say the more advanced models are kind of tolerant of these unwanted ways that we might structure data. But when it comes to structuring data, I mean, there are other things that we do with data. Things are in a directory; you might be pulling them out into a SQL table. So it’s always nice to have this structure. But LLMs, like I said, have tools available to them. They can just pull out all of the files that are in that directory, and then it works for both purposes. Right.
Zack Glaser:
Okay. So, looking into our crystal ball, and at the speed of artificial intelligence I may be asking what you’re looking forward to in the next five minutes, but what are you most looking forward to as artificial intelligence and these things progress?
Dr. Charreau Bell:
I’m going to tell you the full, absolute truth. The thing that I am truly most looking forward to is the automation of things that we don’t actually want to be doing. Looking at society and the way that people are, the kinds of ideas they have and the things they want to move forward, there’s so much creativity and innovation and excellence out there, and these people are sometimes in jobs where they can’t use any of it, right? I am looking forward to the day that AI is able to automate the things that maybe we don’t care about at all, so that people are able to really be creative and build new things altogether and really enjoy their existence with creation and each other. Let AI do the things that maybe are, one, easy for it to put together and do. But I’m really looking forward to the day that people can really write that book they’ve always wanted to, and get down in the weeds of the things they always really wanted to address, or create that different kind of representation of video so that now you can have a kind of experience that no one has ever had before, create that movie that no one has ever thought of or conceived of. Is AI good at some of these things? Is it pretty good at some of them? Yes. But I think there is this additional little piece, this additional little element that comes from humanity, that is just brilliant. And so I want to see more of this brilliance. I want to see all of it. I want to see all of the things that people love to do, and I want them to be able to spend time doing them. This is what I want.
Zack Glaser:
I love that. That’s promising, too. That’s the promise of AI to me. I just want artificial intelligence to be able to create requests for admissions for me. That’s it. I just want it to create requests for admissions. I need 30 questions. That’s it.
Dr. Charreau Bell:
Maybe. So
Zack Glaser:
Once it can do that for me, I’ll be happy, and I’m done. I’ll never advance with AI again. That’s it. Oh my goodness. Well, y’all, we’ve got to wrap up here on my ridiculousness. I really appreciate you being with me and talking to us about all things AI. I learned a ton here, and I hope our listeners did too. So yeah, thanks for being with us, Dr. Bell.
Dr. Charreau Bell:
Yeah, for sure. Thanks for having me.
Your Hosts
Zack Glaser
is the Legal Tech Advisor at Lawyerist, where he assists the Lawyerist community in understanding and selecting appropriate technologies for their practices. He also writes product reviews and develops legal technology content helpful to lawyers and law firms. Zack is focused on helping Modern Lawyers find and create solutions to help assist their clients more effectively.
Featured Guests
Charreau Bell
Charreau Bell is faculty director of Vanderbilt’s undergraduate data science minor. Bell also is a senior data scientist at the Vanderbilt Data Science Institute. In that role, she leads highly interdisciplinary teams of faculty, staff and students in answering research questions by leveraging data science tools and approaches. Her work focuses on training and empowering researchers and students across all disciplines in data science methods and facilitating scientific discovery and innovation across the university.
Last updated May 29th, 2025