In today's episode, I'm speaking with Ellen Keenan-O'Malley, a commercial IP lawyer at EIP, about the legal challenges surrounding Artificial Intelligence and Conversational AI.
We discuss the current state of AI law, the implications of GDPR on Conversational AI, and the importance of understanding liability in AI solutions.
Ellen emphasizes the need for companies to consider legal aspects from the outset of AI development and highlights the evolving role of legal professionals in the tech landscape.
If you've got any questions around the topic of intellectual property and AI, I would strongly recommend reaching out to Ellen for a conversation via her corporate profile on EIP.com or her LinkedIn profile.
Takeaways
- AI law is currently ambiguous and evolving.
- Companies often approach lawyers at the last minute.
- Data protection is a significant concern in conversational AI.
- Existing laws like GDPR still apply to AI technologies.
- Indemnity provisions are crucial for AI vendors.
- Legal considerations should be integrated from the start of AI projects.
- Human intervention is necessary to mitigate AI risks.
- AI can enhance efficiency in the legal profession.
- The legal profession must adapt to the rise of AI.
- Engaging with legal experts early can save time and resources.
Chapters
[00:00:00] Hello and welcome to the Conversational AI News Podcast and today we are looking at the legal issues around AI and we are joined by Ellen. Ellen, you're very welcome. Could you introduce yourself, Ellen? Tell us about your background and can you give us an overview of EIP?
[00:00:15] Brilliant. Well, firstly, I feel really privileged to be here. Thank you so much for inviting me on. Yes, so I am the commercial IP lawyer at EIP and we are a specialist IP law firm predominantly in technology and also science. But a lot of the work that I'm dealing with on a day-to-day basis is in the technology space, so like software from development phase all the way to deployment.
[00:00:41] But actually prior to joining EIP, my background is I was working in-house for an international investment bank in the digital team. So I saw it go from not really a digital bank to one of the leading banks. So I've definitely seen the evolution of technology and pros and cons.
[00:01:02] Ellen, would you just give us an overview of the type of clients that you're servicing? No names, obviously, but how are you servicing them?
[00:01:09] What kind of issues are you seeing? And we'll get into some detail.
[00:01:14] Brilliant. Yeah. So it's a broad range of clients that I deal with, but especially from the development phase.
[00:01:25] So they might be integrating artificial intelligence into their actual offering, and that could be just that they want to offer an automated service.
[00:01:37] But the other end of the spectrum, which is actually sometimes more complex, is companies that want to implement technology they've just purchased, and I'll be reviewing that commercial arrangement.
[00:01:51] So they'll be thinking about indemnities and what risks they're taking on as the other side from the service provider.
[00:02:01] So I sort of see both sides of that type of thing. Yeah.
[00:02:06] But one of the things I've seen myself, working in a bank previously and working closely with the in-house legal teams and sometimes external counsel, is the absolute frustration when the business teams come and go, right, we're ready to go live.
[00:02:23] If you could just do the legal thing. I presume you sometimes see that? Hopefully not; hopefully people are a bit more organized.
[00:02:32] Would you say some words about that?
[00:02:34] Yes, I've definitely seen it both in private practice and in-house.
[00:02:39] I think everyone's really excited: they've got this new product and they're like, this is going to just change the organization's world.
[00:02:48] You have to have this. And then they just come to the lawyer at the 11th hour and you have to rubber-stamp it.
[00:02:55] And I used to always say, I'm not a blocker.
[00:02:58] I'm a facilitator.
[00:03:00] If you come to me early on, I can help you navigate the law and just make sure that you're protected from the outset.
[00:03:07] It's not that I'm going to say no.
[00:03:09] I mean, there might be times when I have to say no, but I'll have a valid reason.
[00:03:13] But yeah, I always think, don't think lawyers are the enemy.
[00:03:17] We're actually helpers.
[00:03:18] We want technology to come to market, and we're excited too.
[00:03:23] So we just want to work together.
[00:03:24] I think that's a key thing.
[00:03:27] Would you give us an idea of the type of issues you're seeing emerge in the context of AI, but also conversational AI?
[00:03:36] What are you seeing happening?
[00:03:37] I think with AI generally, across the whole spectrum, from a company developing AI all the way to purchasing it,
[00:03:49] the one overriding issue is what they can and cannot do.
[00:03:55] It's a very broad thing to say, but I think the reason is that people are unclear on what the law is, purely because AI law is not set in stone right now.
[00:04:07] So obviously you've got the EU AI Act, which is brilliant.
[00:04:09] And that's obviously come into effect.
[00:04:11] But in the US and the UK, we don't actually have a dedicated legal framework just yet.
[00:04:18] And we're all waiting for the case law to come through and to see the output.
[00:04:22] You know, obviously in the UK, the Getty Images case is going to be the biggest one.
[00:04:24] And we're very excited to see the outcome of that for both sides, from a tech perspective but also for the IP rights holders.
[00:04:31] So I think it's just this ambiguity.
[00:04:34] I think that's the main issue people are coming to us with: what can and can't we do?
[00:04:38] Because it's really important to say that we do have existing laws that address AI.
[00:04:46] Obviously, the EU AI Act in Europe has made everyone think that we don't have anything in place.
[00:04:55] But we do: we've got IP law, the Copyright Act, and data protection, the GDPR.
[00:05:02] So there are existing laws.
[00:05:04] But I think it's just a case of like, how does that apply in the context of AI?
[00:05:10] And then with conversational AI in particular, I think the biggest issue I'm seeing is actually from the GDPR perspective.
[00:05:19] There's a massive rise in risk and liability in conversational AI, because I think everyone's got used to conversational AI tools,
[00:05:33] chatbots generally, being part of our everyday life.
[00:05:37] You know, I was logging on to the AA driving school, for example.
[00:05:42] I'm doing a refresher course at the moment.
[00:05:44] And the first thing on there was the chatbot.
[00:05:47] Now, you have to put in your personal data.
[00:05:50] So I had to put my date of birth in.
[00:05:51] I had to put my name in there.
[00:05:54] And that's just a chatbot.
[00:05:56] And then you get given a live person to deal with your issue.
[00:05:59] Now, everyone's just so used to chatbots.
[00:06:03] But actually, that is one of the biggest risks we're seeing now.
[00:06:09] And the Danish regulator actually issued a big warning to a lot of companies, because ChatGPT and Copilot are the biggest causes of data breaches at the moment.
[00:06:21] And they've just said that people need to know about that and have to think about it.
[00:06:27] And what is it you do to try and mitigate that risk?
[00:06:30] So for me, data protection issues are probably the biggest thing, especially in conversational AI.
[00:06:37] I see this rush to do AI, which I think is wonderful and exciting, because I instinctively want to see innovation and new service innovation and so on.
[00:06:45] But I do worry that some institutions and vendors aren't really thinking through these implications that you're discussing.
[00:06:55] How would you be advising companies that perhaps maybe they've even put something live or they're very close to it?
[00:07:02] That must be quite a tension when you're talking to them.
[00:07:06] Yeah, I think that's a really good point.
[00:07:08] I think I would say GDPR is not a new thing.
[00:07:13] And this concept of privacy by design, you should be thinking about data protection from the outset.
[00:07:20] And I think AI is exactly the same.
[00:07:22] You should be thinking about all of the legal considerations right from the outset, because it will make your life a lot easier.
[00:07:29] But if, let's say, you've gone down that path and you're now, like you said, just about to deploy, and you're thinking, oh, actually, now I've heard this podcast,
[00:07:36] maybe I haven't thought about a few things.
[00:07:38] That's OK, too, because you can work through it with us. It might be that you just need certain wording in a privacy policy, or you might need an internal procedure that says, you know what?
[00:07:53] You're going to deploy this new technology, but you need to put some framework controls around it, to say you're only allowed to put X data into that type of conversational AI.
[00:08:06] And it's not a case of saying no, it's a case of doing a mapping exercise to think through what the issues could be and what type of data you're going to put into it.
[00:08:17] Will it have personal data or if not, does it have confidential data?
[00:08:21] Does it have trade secrets?
[00:08:22] Is that going to be a risk to the company?
[00:08:24] Your innovation, you've worked really hard.
[00:08:27] Is that going to be put at risk?
[00:08:29] It's just thinking through what could go wrong and then working back from that to protect yourself.
[00:08:35] One of the concerns I have when I'm talking with lots of conversational AI vendors and those around the ecosystem is perhaps sometimes a misunderstanding of liability, right?
[00:08:46] Because their system is generating something and then they're saying that's a service and they're billing that service to the client.
[00:08:51] Would you say a few words about this, the understanding of vendors and how they're working with liability?
[00:08:57] Yes, it's quite an interesting topic, actually, because I think you're seeing it come up more in the media.
[00:09:03] You know, we've seen Adobe and Microsoft start offering a blanket indemnity.
[00:09:11] So if you use certain AI tools, they're going to indemnify you.
[00:09:15] And I think you'll probably see that model happening more, especially for larger companies, because people are nervous to buy that technology.
[00:09:26] And then especially with the law being uncertain right now, that's how they feel.
[00:09:31] And if I'm honest, I would advise clients that if you're going to be purchasing software, make sure you have an indemnity, so that if someone comes along and says you're infringing, or there's been a data breach, things like that,
[00:09:50] you're indemnified against that loss. And I think that is probably the way it's going to go forward.
[00:09:57] So I think vendors need to be very much ahead of that and think, right, how do we want to frame that?
[00:10:05] What are the costings we want to do around it?
[00:10:08] And think through what it is that our customers are going to expect to see in there.
[00:10:18] And I think indemnity provisions, that is the big thing they're going to be looking for.
[00:10:23] And if you can say, I've already thought through that, this is what we're going to offer you.
[00:10:27] We'll provide on-site service for X period of time.
[00:10:33] And if you've got any issues, come to us, notify us within 30 days, and we can rectify it, things like that.
[00:10:39] That will give companies a lot of comfort.
[00:10:42] Do you see a risk that actually when clients come to get your advice, they actually stop doing what they were thinking of doing?
[00:10:52] But for good reason. But, you know, is that something you sometimes will see?
[00:10:57] I don't think we've necessarily seen people stop, if I'm honest.
[00:11:01] But I think what it has done is maybe slowed them down and made them think, actually, I hadn't thought about that risk, or I hadn't thought that I needed this internal policy.
[00:11:18] All they were caring about was the external-facing tick box.
[00:11:23] They know that they've got to have terms and conditions on their website.
[00:11:26] They've got to have the privacy policy. They've got to have a cookies policy, whatever.
[00:11:29] Like everybody sort of knows the generic stuff to put on their website.
[00:11:32] And half the time they'll come to the lawyer and go, yeah, we've done that.
[00:11:35] We don't need to think about it.
[00:11:36] But it's the other side of things, the internal side, that they've not really thought about.
[00:11:41] And that might slow them down and make them think, oh, actually, have we built in some sort of measure of human intervention?
[00:11:55] And I think that's the bit that people are missing.
[00:12:01] I really am a big supporter of AI.
[00:12:03] I think there are some amazing things being done, especially in the cancer diagnosis and treatment world.
[00:12:10] I really am so excited about it.
[00:12:13] But we've also seen a few things go wrong.
[00:12:17] AI is so fast, and it's becoming so advanced in what it can do, that our brains can't even imagine it.
[00:12:23] And with that comes risks that there might be some sort of inferences.
[00:12:29] And that could then be a breach of GDPR, for example, if it creates bias or some sort of unethical outcomes.
[00:12:39] And that's when you need to make sure that you're monitoring your tools and having human intervention checks and balances.
[00:12:46] So it's not necessarily that they'll stop, but they might think, OK, actually, what do we need to build into our supply chain?
[00:12:53] What do we need to build in to our framework to make sure that we're mitigating that risk?
[00:12:58] And we can reassure our customers that we've already thought about that, is what I would say.
[00:13:04] Some of the vendors I meet and talk with don't have in-house legal experts.
[00:13:10] Or if they do, it's a little bit of their time, and it's really about them and their contract.
[00:13:17] It's absolutely not necessarily product-driven.
[00:13:20] Would a recommendation be try and have someone like yourself in the idea stage before you get too focused on moving forward to help formulate the product offering to an extent?
[00:13:33] I definitely agree with that.
[00:13:35] So one of the things I would really be involved with when I was in-house:
[00:13:41] they had a proof-of-concept committee, and I was a key member of that.
[00:13:45] And I loved it, because you got to think through the different issues, not just from a legal perspective, but across all the different work streams in the organization.
[00:13:53] And I think the ICO, for example, now requires you, even with AI, to do a data protection impact assessment.
[00:14:01] So it's not just from a privacy perspective; for AI as well, you have to justify why you're replacing a human with an automated tool.
[00:14:12] And I actually think just doing that, even just going on and seeing what the ICO guidance is, helps.
[00:14:18] If you're thinking, actually, we don't have the budget for a lawyer,
[00:14:22] even just going on there to see what the ICO says and what's in that data protection impact assessment, that's there to help you.
[00:14:29] It's to map through the risks and, you know, even find out, actually, I'm going to be collecting personal data.
[00:14:35] It's only an email address.
[00:14:37] But actually, how do we secure that?
[00:14:39] What are the implications if that's compromised?
[00:14:41] And, you know, what are the requirements from the ICO?
[00:14:44] And actually, do I have a legitimate interest basis?
[00:14:48] So that I don't then need to get consent.
[00:14:49] And it's just at those beginning phases that it can make your life so much easier long term.
[00:14:56] And even just a preliminary chat, even just a 30 minute call to just think through it can save you long term.
[00:15:05] And that's why I would recommend it.
[00:15:08] And even just a chat often helps you think, oh, I had not really thought about that, actually; it's not really something in my remit,
[00:15:15] why would I think about that?
[00:15:16] Ellen, I can't let you go without asking your view on AI and the legal profession.
[00:15:22] I know it's a thorny question.
[00:15:24] But how are you adapting to AI at EIP?
[00:15:27] So I may be a controversial lawyer here.
[00:15:32] I'm really happy to see AI technology incorporated into an organization, I truly am.
[00:15:39] I think it's also not a new thing.
[00:15:43] We've seen it in litigation:
[00:15:44] we've been using technology, you know, for doc review.
[00:15:48] And that's a really good model for helping clients keep their costs down, and it also enables lawyers to really hone in on what the key information is.
[00:16:02] So I think it's a time-efficiency thing
[00:16:04] and a cost-saving thing for the clients.
[00:16:07] And therefore lawyers can actually do what they're really good at:
[00:16:11] honing in on the specialist topic and giving really helpful legal advice.
[00:16:17] So I actually see a lot of benefits.
[00:16:21] I think people are worried about this.
[00:16:22] There's often a frenzy of, oh, AI is going to replace us and take our jobs.
[00:16:27] But I trust that we add value, and if AI takes on the stuff we don't really want to do, the admin tasks, then I don't think there's any harm.
[00:16:40] Why should we be charging a client for admin when we could save them that money, actually add value, and let them see the fruits of their money?
[00:16:51] So I think there's a good thing and a bad thing.
[00:16:55] But I see a lot of pros personally.
[00:16:58] I'm with you, I'm with you.
[00:16:59] Now, Ellen, for those watching and listening and thinking, goodness me, we should have a further conversation, a further dialogue,
[00:17:06] what's the best way of engaging with you?
[00:17:09] So, yeah, feel free to reach out to me.
[00:17:11] You can add me on LinkedIn, or you can email me directly, and on the EIP website you can find my page.
[00:17:19] But yeah, always feel free, even just for a quick chat about something you're interested in.
[00:17:24] And I'm also interested if somebody sees something really interesting, like an article, and they're wondering, oh, Ellen, what are your thoughts on that?
[00:17:30] Feel free to get in touch, because I just think this topic is really interesting, and it's growing and moving really fast.
[00:17:38] So I love learning as well, and chatting about it.
[00:17:41] Fantastic.
[00:17:42] Thank you so much for taking the time.
[00:17:45] It's been wonderful to get your perspectives on this.
[00:17:47] I think it'd be very valuable for the listeners.
[00:17:48] Thank you.
[00:17:49] Thank you very much.