Alex Shahrestani | Unlock AI’s Potential in Your Legal Practice

Unlock the possibilities of AI in the legal field while overcoming the challenges of technology adoption that most attorneys face today…

But how can lawyers harness artificial intelligence to elevate their practice without succumbing to overwhelm or ethical pitfalls?

In this episode, Alex Shahrestani, managing partner of Promise Legal PLLC, delves deep into the integration of AI and law, offering innovative insights into navigating this technological terrain effectively.

You’ll discover:

  • What AI really is and how it can transform legal practices.
  • The importance of learning to draft effective prompts in AI chatbots.
  • How attorneys can train AI to tailor solutions for their unique needs.
  • The ethical concerns and privacy issues lawyers should be mindful of when using AI.
  • Why big law firms might face challenges, and small firms rise, in the age of AI.


Transcript

Alex Shahrestani: When I’m working with people on setting up their systems, I often say, don’t put yourself in a box here. Like, what are the things that frustrate you in your practice? There’s almost certainly a way you can apply AI to make that a little bit better.

Voiceover: You’re listening to The Texas Family Law Insiders podcast, your source for the latest news and trends in family law in the state of Texas. Now here’s your host, attorney Holly Draper.

Holly Draper: Today I am really excited to welcome Alex Shahrestani to The Texas Family Law Insiders podcast. Alex is the managing partner of Promise Legal PLLC in Austin, Texas, where he merges his computer science expertise with legal acumen to empower tech startups.

As founder of the Journal of Law and Technology at the University of Texas School of Law and a leader in the State Bar’s Computer and Technology Section, Alex is at the forefront of legal innovation. His dynamic approach offers valuable insights into the intersection of law and technology. Thank you so much for joining me today.

Alex: Yeah, I’m happy to be here, Holly. Thanks for inviting me.

Holly: So why don’t you just tell us a little bit about yourself?

Alex: Sure, I guess I’ll talk about the professional side of life. So I knew I wanted to be an attorney fairly young, and I remember when I was making my decision about a major I thought, well, how could I make sure that I’m lined up perfectly for the future of law? And I thought, well, it’s going to be technology or environmental regulation. I had a MySpace page back in the day, so I remember doing all kinds of styling stuff with the coding.

And I was like, well, I’ll try my hand at that. I always kind of like that, and then I ended up loving it. And so I never really let go of the computer programming aspect of my professional career, and that led to this excitement to practice in the area and understand new issues as they arose.

And my vision, right, was to graduate from law school and start a practice working on tech law. And at the time, I remember going to various admissions committees and talking with the recruiters and stuff, and I was like, oh, I want to do technology law. And they'd always say, you mean patent law.

And I'd be like, no, I want to work on technology issues in the law. And that has sort of been the tone of my whole trajectory: no, I really care about this stuff. It has evolved into me being sort of a voice for those emerging issues in technology, which led to the formation of the Journal of Law and Technology, which led to my participation in these various entities.

But I think most pertinent to our discussion today is, as I was developing my firm, I was trying to do best practices, and I always said my first hire was going to be a computer programmer because I think that will make my money go the furthest. But instead, because I enjoyed it so much, I became the computer programmer for my firm, and I hired more attorneys to do more of the legal work.

So that has led to me having this robust expertise in both AI and just software projects in general, and with the recent boom in generative AI, I couldn’t be more thrilled with my life decisions about technology and the law.

Holly: Yeah, and I think a lot of us middle-aged and older attorneys, and maybe some younger ones too, this is such a foreign concept for us, and we don’t want to get left behind. So hopefully we can help make that a little bit better for everyone today. How would you describe your current practice?

Alex: Yeah, so we primarily focus on high-tech startups. Traditionally, it’s going to be corporate practice, maybe a little bit of IP like trademark and copyright. You know, your run-of-the-mill, like business needs. Every company needs that. So that’s really the bread and butter. But what I really get excited for is when I get these novel questions, or if it’s a client who’s in a particularly highly regulated space.

So for example, we have a number of clients in educational technology, and so those laws are constantly changing. Which is more exciting. There's a question of what is personally identifiable information, for example, and that's a particular type of data that a customer holds. And there are particular rules about how it should be held, and all of these things that understanding computer technology makes a lot simpler.

And I think it makes me a more nimble attorney. So that’s really where we target our marketing. But we get mom-and-pop shops from time to time too. We’ve had a couple of breweries. Some of the stuff that maybe in my personal life I’m excited about, like, musicians and stuff, but primarily it’s high-tech enterprises.

Holly: So today we’re going to be talking about a topic that I really see popping up everywhere now, and that is artificial intelligence. And I know for me, I’ve gone to a few presentations about AI, and for the most part, I felt like it just went right over my head, and I didn’t have any idea what it is and how I could better understand it and learn to use it in my practice.

So there may be certain points today where I’m gonna say, talk to me like I’m a reasonably intelligent fifth grader, if it’s something that it goes way over my head. And hopefully, we can make this easily digestible information for other attorneys out there who want to understand it and start implementing it in their practice. So let’s start right at the beginning. What exactly is AI?

Alex: Okay, so before I answer, I’ll just like reaffirm what you said. If you have any questions, if I skip over something, please let me know. I try really hard to make it understandable for anyone, and then I try and make it understandable in a way that empowers you to go and explore these tools.

But by all means, if I say something and you're like, wait, I don't know what that is, please interrupt and let me know; I'm happy to go over it. Okay, so what is AI? So AI is all things to all people right now. When people say AI, you see a lot of marketing speak about it, and there's more or less fluff around those claims.

And some claims are referring to various kinds of AI that aren’t really what people are thinking about right now. So I think the first and most important thing is there are several kinds of AI, and there are lots of different ways to think about AI. So it’s easy to get wrapped up in researching AI and start to confuse yourself because whatever your source of knowledge is might not be laying out the type of AI they’re talking about.

The AI that has popped off in the past couple of years with ChatGPT is specifically generative AI. And it's a certain brand of AI that is predicting what an answer might look like. It's trying to guess what the right answer looks like based on the content you give it. Now, there are other kinds of AI out there trying to solve different kinds of problems.

So one example would be self-driving cars. That's not a generative AI system. It has a different endgame. But the thing that makes both of those things AI is what's called a machine learning layer. So what is going on underneath the surface is basically you are feeding the software a bunch of information.

So a simple example would be a million intake forms, right? So say you're building an AI for intake for your practice. You just give it a bunch of intake forms. What the AI does is it takes a look at all of those examples, and it makes guesses about mathematical relationships. So stuff that would never, ever occur to you as a person.

So trying to imagine what those mathematical relationships are is kind of a pointless exercise, but that is what the machine is trying to do. With generative AI, it’s converting groups of words into discrete numbers and drawing relationships in that way. And then what the machine learning system does is it takes what its guesses are, and then it tries to create an output that matches what it’s learned.

And then what you end up doing as a systems engineer is you take, say you've got these million intake forms, right? You use 750,000 of them to teach the AI. You reserve the other 250,000 of them to test the AI. So the AI will look at maybe the beginning of those intake forms, and try and guess what the rest of the intake form will look like.

And it’ll compare its output to previous results and say, well, that’s closer to these example documents that I’ve been given than the previous version. So this math is more correct. And it does that a million bajillion times. So it’s like basically guessing, comparing to the right answer, and saying, okay, this guess is closer and continually self-adjusting until it gets to a point where the systems engineer says, this is good enough for use with general purposes.
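The train-and-test loop Alex describes can be sketched in miniature with plain Python. Everything below is invented for illustration: the "intake forms" are four toy sentences, the "model" is a single most-common-completion rule, and a real machine learning system would instead fit millions of numerical parameters. Only the shape of the process (learn from one split, check guesses against the held-out split) is the point.

```python
from collections import Counter

# Toy "intake forms": pretend we have 1,000 of them.
forms = [
    "the client seeks a divorce",
    "the client seeks custody",
    "the client seeks a divorce",
    "the client seeks an adoption",
] * 250

split = int(len(forms) * 0.75)          # 750 to teach, 250 reserved to test
train, test = forms[:split], forms[split:]

# "Training": learn which completion most often follows the prefix.
prefix = "the client seeks"
counts = Counter(f[len(prefix) + 1:] for f in train if f.startswith(prefix))
best_guess = counts.most_common(1)[0][0]

# "Testing": how often does that guess match the held-out forms?
hits = sum(f == f"{prefix} {best_guess}" for f in test)
accuracy = hits / len(test)
print(best_guess, accuracy)
```

A real system would adjust its guesses and re-test many times, keeping whichever adjustment scores closer to the held-out examples; this sketch does a single pass just to show the split.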

So that is the portion that is across all AI is that machine learning layer of that process of taking in information, making guesses about what that means for new information, and just iterating over itself until it gets pretty good at it. So that’s AI in general. Speaking to generative AI.

What that's trying to do is take those relationships between the documents it's looking at, and trying to identify patterns. So I'll take a small step back to try and ease into this a little bit. So everyone who's listening is familiar with Google searches, Westlaw searches, Lexis searches, what have you.

With those searches, you’re putting in a few different words, and what the system is doing is it’s looking for variations of that meaning that you’ve put into the search bar. And it’s looking for pieces of information in a database that match those terms, and serving the entire file from the database to you. So you can think of it like a doctor’s office, right? So you might have those old-school client files.

You got a shelf with 1000 folders on it, and it's sorted by name and perhaps a year or something. And so when the doctor tells his assistant, hey, I need you to pull this particular document, the assistant goes and looks for those search identifiers, and then it pulls the file. And if there's more than one match, it might pull a couple of files, right, and then take that to the doctor.

The doctor is able to look at the entire file and get whatever information they were looking for from it. An AI functions differently. So it’s not ever returning just like the file. So if you imagine that that same folder system at that doctor’s office is being searched by a generative AI. The generative AI is not returning a whole file to you.

Instead, what the generative AI does is it looks for patterns among all of the folders, and then it tries to guess what the correct answer is based on the patterns across the 1000 folders of 1000 charts. So because everyone’s situation is different, every one of those patients might have a different name, date of birth, but then also, like, the actual nuts and bolts of their health care is different, those patterns aren’t going to match very well, right?

So some of the patterns will match. The name, date of birth, and the various other identifiers that are consistent across charts, however the system is set up, are going to be well represented in the generative AI's output. But the particulars of a particular person's healthcare plan will not be represented well, because it's trying to draw a pattern between everyone's health situation, and it generates an answer based on that pattern.

Another way to think about it is, whereas with a database of searches, you might have a stack of papers that you’re searching through for a particular paper to read, I kind of think of generative AI as those old high school and college transparency projectors. Do you remember those? Where they throw it up there and the light turns on and it’s up on the board.

Now, imagine you put two of those transparencies on top of each other, and what would be projected onto the board? It would be a bit messier. You would see a couple of different kinds of data represented. Now with AI, it’s kind of like throwing those 1000 charts or whatever onto that transparency as a big old stack. And it’s not going through each transparency looking for a piece of information.

It's looking at what's projected up onto the wall, and it's looking for patterns there. So certain things are going to stand out to the AI. So for example, the word "the" might be a common first word on any given document. So up on that whiteboard, you might see the word "the" kind of bold, like you can actually make it out among all the noise of the other data.

And the AI takes that and says, oh, if I have an input that is of the type of the stack, I should probably have the word "the" right there. And there are going to be other sorts of patterns that stand out. And that is sort of where the AI is drawing its information from: these patterns.

And the less static the information, the more the AI is going to be taking kind of a guess, right? So the word the on that transparency projector might be well defined, but the word right after it is probably muddled, and you would have to guess at what that might be, and that’s kind of what the AI is trying to do. So I’ll pause there because I know it’s complicated.
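The transparency-stack intuition can be made concrete with a few lines of Python. The four "charts" below are made-up sentences; overlaying them position by position shows how a shared word like "the" stands out crisply while the variable details get muddled, exactly as Alex describes.

```python
from collections import Counter

# Toy "charts" to stack on the projector; all invented for illustration.
docs = [
    "the patient reports mild pain",
    "the patient reports no symptoms",
    "the chart notes a follow-up visit",
    "the patient requests a refill",
]

# Overlay the documents word position by word position: for each slot,
# find the most common word and what fraction of documents share it.
overlay = []
for position in range(max(len(d.split()) for d in docs)):
    words = Counter(d.split()[position] for d in docs if len(d.split()) > position)
    word, count = words.most_common(1)[0]
    overlay.append((word, count / len(docs)))

print(overlay)
```

The first slot comes back as ("the", 1.0): fully agreed on, "bold" on the wall. Later slots agree less and less, which is where a generative model has to guess.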

Holly: So I know there are a lot of ways that AI can help attorneys and so let’s go through kind of, for those of us who might not know much about it or are not using it in any way, what’s a good place for us to start? Either trying to learn about it or starting to implement it?

Alex: Yeah, I think the best way to get started. To just start playing around with it. So ChatGPT is a perfectly fine option. If you have no idea what you’re doing, don’t start using client data there. It is safe if you know what you’re doing, but you know, just for simplicity’s sake, just use it for other stuff. And just see what sort of answers you get. See where the problems arise, and you’ll start to get a feel for it.

So a way to think about it is, back in the day when CGI was first becoming a thing, it was really cool. Like, you could tell it wasn’t real, but it looked realistic enough that you were like, wow, this is amazing. And as time has gone by, as you get better and better at recognizing those images, going back and watching those old movies, you’re like, wait, that doesn’t look real at all.

And there's sort of a progression of your expertise in recognizing what is being done. And that sort of holds true for AI as well. So a good example is AI-generated images. There are the commonly known sort of factors that indicate that there's an AI-generated image at play.

The extra fingers or the weird out-of-place objects, or the morphs of particular objects. But something else that you know I’ve noticed is held pretty true, is that AI really likes to have highlights across the whole image, right? So there’s always, like, bright sources of light reflecting off of everything.

That’s an AI thing that I wouldn’t have recognized when I was first getting started with AI. So the more you interact with it, the more you’re going to recognize where it has fidelity and where it struggles. So, yeah, so ChatGPT. Go play with that.

Holly: So I just went to ChatGPT earlier this week, and I know there’s the free option and then there’s the paid option. Is this something that we want to be paying for? And if so, why?

Alex: Okay, so if you’re just starting, I wouldn’t pay for it just yet. There’s no need until you’re confident in it. If you start running up to limitations where it says, oh, you’ve used your quota, or whatever the case might be, that would be a perfectly fine time to start paying for it. We pay for it.

We also pay for the API version, which is the programmer's version, but I like having the ChatGPT paid version, the $20 one, not the $200 one, because I run a lot of stuff through it. So a simple example of what I might want to do with a paid version of ChatGPT, which I'm pretty sure you could not do with free, is put client information in.

So in some cases, there are amounts of client information that I'm confident in putting into ChatGPT because you can actually set up agreements with OpenAI to make sure that your client data is not being used for training models. Anything that they do use is only stored for a minimum amount of time and then deleted off the servers.

And typically that usage is not related to training a model. It's like, is there a bug in the code? Let's keep a log of that so that way we can fix the problem. It's a different kind of retention. I'm actually fairly certain you could not do that with the free version. Another thing is that the latest and greatest models will not be available to you if you don't use the paid version.

So their latest one that they released to the paid version of ChatGPT is called the o1 model. I don't know if you've heard anything about this, but it's basically the same model as the one before, but they do this thing called chaining. The model before it is fairly performant. I think that previous model passed the bar exam.

But this newest model, it's really that old one, but it breaks down the questions that are being answered by the AI into discrete parts and then provides a better answer back. So that's chaining, or reasoning, is a way to think about it. So again, remember, AI is pattern matching, right? So it's like looking at this big transparency, and it's trying to find the right information.

If you ask a big, broad question, it’s going to have a much harder time, because it’s looking through a gigantic stack of transparencies, versus when you ask a very focused, specific question because then it’s only looking at those relevant documents to find those patterns. And it’s more likely to find something useful for you.

So with this newest model that’s on the paid version, it takes your question, breaks it down into discrete parts, answers each part separately, and then puts it back together. And so you’ll see better results from that, even though it’s technically not a new model, and that’s not something I think you can get with the free version of ChatGPT.

Holly: So aside from ChatGPT, are there other types of AI services out there that attorneys should be looking at?

Alex: Yeah, so first, there are other alternatives to ChatGPT, as it stands. So there’s Claude by Anthropic, there’s Mistral, there’s Llama by Open or not Open AI by Meta. Those are sort of the big chatbots. Now, if you really want to play in the space and see what other people are using, there’s a website called Hugging Face, and it’s not like some sketchy website, but you’d never come across it on your own.

If your listeners have ever heard of GitHub, it's kind of like a database for programmers to show off their work. So in the same way Instagram is for your pictures, GitHub is for projects, but there's also a project management component to it. Hugging Face is like GitHub, but specifically for AI.

So there's all these people out there who are either professional AI developers or perhaps they're just tinkerers, or just looking for a fun usage for AI. And so many of them make their model available for free. So a good example would be, there's this one on Hugging Face that's like one of the most popular ones, and it's a comic book generator.

So you can put in, like, a small prompt, and it'll generate the panels of the comic book along with the speech bubbles, which is really cool and neat and just fun to play around with. But if you're really trying to explore what people are doing with mostly generative AI, Hugging Face is a great place to look. I believe the URL is huggingface.co.

So if you come across it, it's got the emoji of like the hug. And I recommend that. I believe also, if you sign up for the free version, they have ways that you can actually use the other chatbots. So you can try out Claude, you can try out ChatGPT, you can try out Mistral. So they sort of try to make it more approachable to explore.

Now, there are a lot of things out in the markets that I am not really a fan of, and I think it's good that the legal tech industry is looking to implement AI in their services, but I don't think it's really there yet. Right now, AI is still advancing at such a fast clip that these services get outdated very quickly, and you don't want to be locked into several hundred dollars per seat for something that's not going to continue to meet your needs.

So I'm a big advocate for actually getting your own system set up, which, believe it or not, is actually not that expensive in the grand scheme. So if you go to use one of these providers for legal tech services, you're going to pay a few hundred bucks per month. You could pay somebody a couple of thousand dollars to set up a system that will cost you about 50 bucks per month, and it's always cutting edge.

So, I think what I would highly recommend is people seriously playing with these chatbots, getting comfortable with it, understanding limitations, figuring out what they want, and then finding someone to help them put these systems together. I think that is going to be more or less the way things move for a while.

Otherwise, I think there’s just a lot of risk. And if you want to try some of those other tools, I would look for ones that perhaps don’t have contracts so you can see if it’s like hitting the needs that you have.

Voiceover: This episode of The Texas Family Law Insiders podcast is sponsored by The Draper Law Firm. Providing family law appellate representation across Texas. For more information, visit draperfirm.com or call 469-715-6801.

Holly: So if your law firm sets up an AI system, what does that do for you?

Alex: Yeah, so there’s a lot of things it can do. And so when I’m working with people on setting up their systems, I often say, don’t put yourself in a box here. What are the things that frustrates you in your practice? There’s almost certainly a way you can apply AI to make that a little bit better.

The way we do that primarily is we look at our time spent. So we have time-tracking tools. We bill flat fee only as much as possible, just because it makes everyone’s life easier. But we still track time. And you know that can be useful for clients, but it’s primarily like, hey, where do we feel like we’re not getting stuff done, but we’re spending a lot of time?

So our big first one was email. So our email was sucking like 50% of our day away. And you know what it’s like. You respond to emails and it feels like you haven’t done anything, but you’re still doing it, right? And so we targeted that for an automation.

And so we have a couple of different tools related to email. We have one that is a draft response, right? So it’ll take the incoming message, it’ll say, okay, based on this, does it require a response? If yes, look at past answers from our emails and insert that in a draft as a response.

The other portion that we have is we have some predetermined templates for what our general information is. So we do a formation for a company here in Texas. Usually, they’re going to get a letter in the mail from the Comptroller, and everyone forgets about this letter from the Comptroller and they lose it.

And you need it for filing your franchise tax. So that’s like, a common thing that we have to send out. And, you know, it only takes a few seconds to type out, like, hey, keep an eye out for this. But it sort of adds up over time. So we have these templates that we just click a button and it inserts the template so we can just send it off.

It's nice, the client feels taken care of, and they have the information that they need. And we were able to get our email time down from 50% of our days to 25%, which is huge because that means we had a 25% gain, or maybe a little less than that, in efficiency. And as a flat-fee practice, that's an increase in our bottom line.

Holly: So what skills do you think lawyers need to develop to stay relevant with AI?

Alex: That’s a really good question. So I think it’s about an approach to technology. It’s not like math. It’s not like figuring out how to use your computer. I think it’s like a different flavor of thing to address because there are so many uses for it, and because there are so many different kinds of AI. What I would try to do is identify the boundaries of what AI is good at and what it’s not good at.

Because that is constantly what I’m coming back to when I’m working on AI projects, even from the very beginning, it’s trying to understand what it’s useful for. And along that journey, you sort of identify these gaps, right? And you’re like, okay, well, I guess it can’t do that, you move on to something else.

But then as you sort of develop this muscle for like, what can it accomplish, you start to think, wait, so it can’t do that exactly. But if I do this, and I add this other piece to it, maybe I can get pretty close. And that has been sort of an ongoing phenomenon in my journey with AI, and I think it’s something that other people can develop as well.

Again, think of it like the CGI fidelity thing, right? You need to be able to recognize what it might be able to do and what it might not be able to do, to really come up with creative solutions in your own firm. You're the person who's responsible for identifying these things, and if you go to an outside person, sure, they can walk you through it. But I think staying abreast of what is possible is gonna pay dividends in AI.

Holly: What tips do you have for writing prompts into these chatbots?

Alex: So we spoke a little bit about that. Basically, the more granular the question, the better the response is going to be. So let’s go back to the search engine example. And this is something that most people will be familiar with that are listening to the podcast.

So if you’re searching for, say, a particular type of case law or a template, or maybe you’re looking for a law firm’s guide on a particular area of law that you’re trying to brush up on, if you do a search, like just a plain text search, like, for example, liability for commercial trucks in car accidents.

If you search that, you’re likely to get a bunch of plaintiffs’ attorneys landing pages trying to convert plaintiffs into clients. And that usually is not the level of information that an attorney is looking for. They’re looking for something else. And so what you do is you end up adding these other little things to indicate what you’re looking for. You might say what section of the Texas code determines liability in a commercial car accident claim?

That type of question is far more likely to return what you're looking for than the general question that requires more work from the search engine, basically. And the same thing happens in Westlaw and Lexis, where I often feel like the biggest time suck on those platforms is going down rabbit holes of, is it this term that I'm looking for?

And figuring out what term is actually applicable, and then you search for that term to try and find the right result. It’s all the same idea. That process is applicable to AI. So even though it’s not returning a particular piece of paper, being very specific and sort of imitating the type of information you’re looking to find is going to improve your results.

So to sort of break that out: when you search for what Texas statute is controlling for this situation, that sort of mirrors the language you expect to see on a good search result. Like the lawyer is talking to other lawyers and saying, these are the statutes, and then it has a little section symbol, and there's like, different identifiers on the page that are not directed towards lay people.

A layperson doesn’t want to know all of that stuff. And so when you do a more generic general search, that’s why you get those lay people results. When you do the more specific search you’re getting the more specific results. With AI, it’s doing that pattern-matching thing, but it’s trying to narrow the universe of documents that it’s reviewing for patterns.

So if you add in these more niche sort of identifiers, you are more likely to pattern-match results that have those unique identifiers as well. So if in your ChatGPT prompt you use the section symbol, you're more likely to get an accurate result than just saying, what law governs car accidents with trucks? It's just how the system sifts through information.

So that's the first big one. The other one is to think about that chaining thing, where I said a lot of questions that we ask really assume a lot of information or rely on the answer to know the context of the question. AI is not very good at that right now. The o1 model from ChatGPT is trying to imitate the thought process that people are using.

Like breaking it down into the sub-questions. But for now, and even with o1, honestly, if you break your questions down into smaller pieces and walk it through the thought process you're following, you're gonna get a better result. And what I mean by walking it through is not telling it, okay, think about it this way, then this way, then this way.

Rather, you say, here’s part one of my question. What portion of the Texas code generally is related to injury law? And then from there, you might say, okay, is there a particular section that applies to cars? Is there anything about commercial injuries within that code? Sort of going through step by step, is going to narrow in the AI closer to what you’re looking for.
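The step-by-step prompting described here has a simple shape in code: each narrow question carries the accumulated question-and-answer context with it. The `ask` function below is a stand-in, not a real API; in practice it would be a call to whatever chatbot or API you use, and the questions are the ones from Alex's example.

```python
def ask(prompt: str) -> str:
    # Placeholder for a real chatbot/API call; echoes the current question
    # so the chaining structure is visible without any external service.
    return f"[model answer to: {prompt.splitlines()[-1]}]"

# Each step narrows the previous one, per the walk-through above.
steps = [
    "What portion of the Texas code generally relates to injury law?",
    "Within that, is there a particular section that applies to cars?",
    "Is there anything about commercial vehicles within that section?",
]

context = ""
for step in steps:
    # Prepend what we've learned so far, then ask the next narrow question.
    prompt = f"{context}\n{step}".strip()
    answer = ask(prompt)
    context += f"\nQ: {step}\nA: {answer}"

print(context.strip())
```

The design choice matters more than the code: because each prompt includes the prior answers, the model is pattern-matching against a much narrower slice of its training, which is exactly why narrower questions behave better.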

But despite that example, just a reminder that AI is not returning static information. So it can get that stuff wrong, but when you go through this process, you’re going to get a lot closer to the truth. And I’m actually going to step into the hallucination question a little bit.

So hallucinations, for those of you who don’t know, are when an AI says something with confidence that ends up being false. And what it’s doing there is it’s getting a false pattern. And again, you can think of the transparency stack example, and why you can sometimes ask the AI a question and get a correct answer versus an incorrect answer has a lot to do with the patterns that are identifiable.

And so a good way to think about it is basically the more well-known and static a fact is, the more likely the AI is to get it right. The less well-known or more changeable the fact is, the more likely it is to get it wrong. So for example, it could be a fact that the temperature right now is 60 degrees, but the AI is going to get that wrong, because there are a million pages out there that say the temperature is now, and then all kinds of numbers, right?

So that's why it can get some of those facts wrong. And then an in-between example, something that's kind of changeable rather than always changing, would be, who's the current president? It has a number of right answers over time, but the AI has no sense of that time. Instead, the AI is looking through for patterns, and it might get it right, it might get it wrong.

And then all the way at the other end is something like, what year was the US founded? 1776. It’s probably just going to know that one, because that is a well-known fact that doesn’t change over time, so the pattern matching that query is almost certain to surface the right answer.

Holly: So I was listening to a podcast recently. It was talking about AI and how to use it in your practice and all of that. And one of the things that they were talking about was training AI to get to know you and what you do, and all those things, and that the more AI gets to know you, the better outcome you’re going to have. How do you do that?

Alex: So there are a number of ways. There are ways people talk about it that I think are not perfectly accurate, but let’s talk about it in that sense of, like, it gets to know you. There are a number of ways you can accomplish that, and I think with the paid ChatGPT, the $20 version, you can basically try all of them.

So one of those functions is called memory. And memory is really just a little piece of information that the system constantly reminds the AI of. So memory might say, okay, remember the user’s name is Alex Shahrestani. Or remember that his practice is based in Austin, Texas, or something along those lines.

And then the next time you ask a question, it has this small database of memory that gives it the context around your question, so you don’t have to provide that context in the future. So one way that I use ChatGPT is to write a bio for a particular use. And rather than have to say, okay, I’m Alex, I’m at Promise Legal, these are the things I care about,

these are the things I’ve done, you can add all of that to your memory, and it will maintain it across all of your questions. The other way to do it is through information retrieval. So one of the things you can do, and what we often do when it’s something legal related (I mean, we always do when it’s legal related), is instruct the AI to only answer questions based on a given set of information.

So it might be a number of PDFs or docs or just instructions. It could be pretty much any piece of information. And whenever you ask a question, the AI will search those documents for the answer and provide the answer back to you. So that’s like another way that you might teach the AI who you are.
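The retrieval pattern just described can be sketched in a few lines. This is a toy illustration under stated assumptions: it assumes you have already extracted plain text from your PDFs or docs, and it uses simple word overlap for the search step where real tools use embeddings, just to keep the example self-contained. The document texts and the `retrieve`/`build_prompt` helper names are invented for the sketch.

```python
def retrieve(question, documents, top_n=2):
    """Rank documents by how many words they share with the question.

    A stand-in for real semantic (embedding-based) search.
    """
    words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

def build_prompt(question, documents):
    """Instruct the model to answer strictly from the retrieved text."""
    context = "\n---\n".join(retrieve(question, documents))
    return ("Answer ONLY from the excerpts below. If the answer is not "
            "in them, say you don't know.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}")

# Hypothetical firm documents, already converted to plain text:
docs = [
    "Firm policy: client files are retained for seven years.",
    "Office hours are 9 to 5, Monday through Friday.",
    "Billing is done in six-minute increments.",
]
prompt = build_prompt("How long are client files retained?", docs)
```

The key design point is the instruction in `build_prompt`: by telling the model to answer only from the supplied excerpts (and to say so when the answer isn’t there), you constrain it to your documents instead of its general training data.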

I think, if you’re starting out with AI, if you’re using ChatGPT, unless you’re using it in only one way, I would maybe not over-rely on that. Because if you over-rely on it, you’re not actually learning how to use the AI to the best of its ability. For example, if it always says, hey, I’m Alex Shahrestani,

I have Promise Legal, I’m an attorney, and this is what we do for our clients, and then I want to go talk about Alex Shahrestani the father, for example, I don’t need all of that other stuff included. And it’s actually going to get in the way of better results down the road. So you have to be a little bit cautious about that unless you’re truly using it for one purpose. Maybe a personal assistant kind of thing.

Help me brainstorm a gift idea for someone. If you’re only using it one way, go ahead and do that. There’s not going to be a problem. But if you start to implement it in law firm operations, relying on that is going to hurt rather than help. Because if you have anyone working for you in a different seat, their use of your tools is going to look different than yours, and that can lead to problems.

Holly: So next, I want to shift and focus a little bit onto the risks associated with AI. And I know we dabbled in our discussion already about some of these risks. But what are some ethical considerations that attorneys should have as they’re looking to go into this landscape?

Alex: I think not much has really changed, but I understand it’s sort of just not intuitive. You are still the one responsible for the piece of paper that you’re submitting to a client or the court or whoever. And so that’s something to really bear in mind. And I think it dovetails nicely with this idea of understanding what AI can and can’t do.

So there are going to be things that the AI might be really good at. It can probably summarize a page of a paper without any problem. But if you give it a whole document and ask it to summarize, it does a mediocre job at best. Maybe a first-year law student level of competence. So I think, first and foremost, it’s your license behind whatever the output is.

So pay attention to that, and I think you’re likely to be okay as far as practice goes. The other issues are related to privacy. So privacy is really important. I am really uptight about confidentiality with my clients. Even when I know I’m probably safe to share something, I just don’t.

Just as a rule of thumb, I am not comfortable with it. But I think a lot of the concerns around privacy with AI are really the surfacing of problems with any tech tool. A lot of the concerns people have with AI are really things that have been around for 20 years. And I think it’s important to talk about these things.

I’m like a data privacy guy, but a lot of times it gets misconstrued that AI is somehow a worse offender than other services. And so one thing to note is that your best protection against any sort of improper disclosure or breach of privacy or confidentiality is often going to come down to a contract.

Almost all services have access to your client data in some way, shape, or form. Otherwise, they would not be able to provide you with tech support. So, for example, you’re working on a document, the server crashes, you lose your progress, and you’ve lost your document.

Being able to call up someone on IT or customer support to retrieve that means that they, in some way have access to that document and are able to restore it.

The exception would be end-to-end encrypted technologies, and with those, there’s a lack of support, because the admins have no way to actually restore your stuff, aside from a backup, which is not always practical or part of the service or what have you. So that’s just sort of a caveat here.

When we talk about privacy and confidentiality in AI, you should really be thinking about this for all of your tools. Now, what you want to be sure of when you’re using an AI tool is that they’re not training their model on your data. Even if they were, I think it’s a minimal risk, but that’s just not a risk I would accept.

So I would avoid that. If something is free, almost certainly someone has access to it. They’re not mining it for your client’s information, but they are trying to serve you ads, and your data just gets swept up into the machine, and that can be an issue as well. Other than that, I think it’s really okay to use, right?

Your license is the one on the line. You need to be sure about what you’re sending to someone or confident that it’s an acceptable work product, and then you need to be sure that they at least promise not to review your data or use it in a certain way.

Holly: Yeah. I think everybody’s heard the horror story of an attorney writing a brief with AI and it citing cases that don’t really exist. And I can’t imagine ever being in a place where I’m going to rely on it so heavily that I’m not even going to review it for accuracy or make sure those cases are real before I file it.

Just in my everyday life, I love how AI has changed Google, because I can ask any question and it gives me a great answer. And oftentimes it’s work-related. If I ask about, you know, some rule or whatever and it tells me, I still always go check that rule.

Alex: Click the link, yeah.

Holly: Make sure that’s actually what it says, that it’s the correct rule, and all of those things. But I got there way faster than if I’d had to go comb through that rule myself.

Alex: For sure. Yeah, and I imagine, I don’t know those attorneys, I know nothing about them, but that kind of mistake makes me think there were other ethical issues going on already, or they were on track to have them. Because that level of just not caring seems egregious. But yeah, those things can happen. So you just need to be aware.

Holly: Will AI ever replace us as lawyers?

Alex: That’s not really my vision. I don’t think so. The way I see things going is that bigger firms probably won’t be sustainable. I don’t think you’ll need as many worker bees. I think you’ll need someone to oversee. And I sort of see it as a quarterback, right? I think attorneys are eventually going to be quarterbacks for a number of tech tools, making sure they all play nicely, reviewing the work, and essentially collaborating with AI.

I think that’s really the direction we’re going. I don’t think it’s going to fully take over. I see some of this playing out. It’s possible lawyers will be fully replaced, but I just don’t see it. And where you see AI actually replacing jobs, mostly, is not with layoffs and full replacements of people. It’s where a new skill set is needed on some very niche thing.

You know, in your practice, you might come up against a question you’ve never had to answer before. And so instead of going and hiring somebody who’s an expert on that question, you maybe build an AI tool to supplement your research ability, so that way, you can actually answer those questions a little faster and get up to snuff for your own practice.

And in that way, over time, you actually don’t need to hire somebody. So in that sense, I think AI is replacing jobs. But as far as attorneys are concerned, I think we’re going to see smaller firms handling more clients, and the practice of law is also going to become kind of an information management problem, which I think it kind of already is, but in a more real way.

Holly: So we’re just about out of time. But one thing I like to ask everyone who comes on the podcast is, if you could give one piece of advice to young lawyers, what would it be?

Alex: Oh, man, that’s a tough one. So about AI or just in general?

Holly: Whatever you want.

Alex: I think one that’s served me well, and not everyone is going to be this way, but I think as a profession, we attorneys are very risk averse. Being aware of that, and not necessarily letting go of it, but giving yourself a chance to try new things and explore, and not knee-jerk to no, I think, will serve you well. That has been my experience. You can’t toss caution out the window, but trying new things is often a good thing.

Holly: So where can our listeners go if they want to learn more about you?

Alex: So they can visit my blog at blog.promise.legal. That’s probably going to be the best place. I have a Discord. I don’t really use it that much, but there’s been some interest in this Discord coming back, so I’ll just let you know where it is, in case that’s your jam.

You can find it at txhq.org/discord, and that will give you the invite into the server. There are like 100 attorneys in there. It’s not very active, but that’s another place where you can ping me if you’re looking to get in touch.

Holly: Perfect. Well, thank you so much for joining us today. For our listeners, if you enjoyed this podcast, please take a second to leave us a review and subscribe so you can enjoy future episodes.

Voiceover: The Texas Family Law Insiders podcast is sponsored by The Draper Law Firm. We help people navigate divorce and child custody cases and handle family law appellate matters. For more information, visit our website at www.draperfirm.com.
