In today's episode, we're meeting Darren Ford, the Founder and Director of The CAI Company.
Darren and his team are amongst the most experienced conversational AI experts around, having driven strategies and implementations for some of the industry's largest players.
Our discussion is wide-ranging. We start with Darren's background and the origin story of The CAI Company. Then we get into some key use cases, he talks about how consumer acceptance is really changing (for the positive), and he highlights some of the key requirements for any successful implementation -- namely executive sponsorship and commitment, data-led decision-making, and the importance of talent development.
Reach out to Darren at his company website or via his LinkedIn Profile.
Chapters
- 00:00 Introduction to Conversational AI and CAI Company
- 02:47 Darren's Background and the Origin of CAI Company
- 04:10 Use Cases and Growth of Conversational AI
- 10:08 Consumer Acceptance and Chatbot Horror Stories
- 12:30 Generative AI and Executive Awareness
- 15:48 Talent Development in Conversational AI
- 18:44 Ethics in Conversational AI
- 20:59 Internal vs External Chatbots
- 23:46 Key Considerations for Executives in Conversational AI
Keywords
Conversational AI, CAI Company, chatbot, generative AI, voice technology, user experience, ethics, talent development, executive awareness, use cases
[00:00:00] Hello and welcome to The Conversational AI News Podcast and today I'm joined by Darren from The Conversational AI Company or The CAI Company. Darren, welcome.
[00:00:09] Great, thank you Ewan, nice to meet you. I look forward to the chat.
[00:00:12] Bring it on. Now did I get that right there Darren? What's the name of the company first of all?
[00:00:16] It is The CAI Company where CAI stands for Conversational AI.
[00:00:22] That's great Darren, you're very welcome. Can you, first of all, for those watching and listening along, can you give us a background to you?
[00:00:29] What's your background and then we'll come to The CAI Company?
[00:00:31] Great, thank you. I've been working in the field of professional services for global software companies through my career.
[00:00:40] So I understand services and working with organisations to get the most out of the applications that they've deployed.
[00:00:48] I've actually been in the conversational AI space for eight years.
[00:00:51] I started off with a company called Artificial Solutions and I was the VP of services globally there for the professional services, educational training and support function.
[00:01:03] And typically I was working with large enterprises, helping them with the strategy, design, implementation and support of their projects and deployments, so that they actually achieved the results they were looking for.
[00:01:22] So that was my background. I've worked with a great team of people through my career, and it is a really exciting space to be in. So that's me.
How did you come to found the company? Take us through that. What's the background? What's the origin story, I should probably have said.
[00:01:35] So back towards the end of 2020, I moved on from that company and established The CAI Company, together with some really experienced individuals in the consulting world for conversational AI, with masters and PhDs in computational linguistics and automation.
[00:01:56] And that became the focus of the company. So conversational AI is all we do and nothing else.
[00:02:03] And that's the DNA of the business. And together with the relationships I had with a number of these large enterprises, continue to work with them and other large enterprises.
[00:02:14] So we provide services only, as I mentioned, very much focus on the design development, coaching and training and strategy of conversational AI applications, typically applications that are automating interactions with humans, either over voice through the phone or web chat.
[00:02:32] And typically at large volumes and scale. So the sorts of companies that we've worked with: AT&T in North America, Skoda in Europe, HelloFresh in North America, Swisscom in Switzerland, Circle K as well, a large Swiss bank that I can't name and one of the world's largest technology companies that can't be named.
[00:02:53] So we typically support either the end customers directly or with partners and provide that very specialist conversational AI expertise to the party to help them achieve what they're trying to achieve.
[00:03:07] Would you give a few use cases of those types of projects? Because those sound like some, you know, fantastic marquee clients there. But give an example of things that really come to mind.
[00:03:17] So as I mentioned, the typical use cases are usually larger volumes. So mostly in a B2C type context where there's large volumes of conversations.
[00:03:30] And with AT&T, that was within their smart home and broadband application, automating conversations around troubleshooting in the home and engineer visits.
[00:03:41] Or it could be a virtual assistant that was at Skoda to help them interact with people coming to the website to try and figure out which vehicle they wanted and to organize test drives, etc.
[00:03:53] So it's usually higher volume, and it's usually use cases that require quite deep backend integrations to actually resolve an issue or a user's intent, rather than simple FAQ signposting.
[00:04:05] So that's a few of the use cases, typically large volume, typically highly integrated into backend systems.
[00:04:12] Would you say a little bit about the growth of Conversational AI? You say you've been doing this for eight years.
[00:04:18] A lot of people listening or watching are very new.
[00:04:22] I think it's fair to say very new to the space.
[00:04:25] How have you seen that develop?
[00:04:27] And how have the use cases changed?
[00:04:30] Did it start as simply just routing calls, for example, over audio?
[00:04:34] And then how have you seen things move?
[00:04:36] That's a really good question.
[00:04:37] Well, particularly in the last year or two, the underlying technology has improved significantly.
[00:04:44] What we've seen, probably over the last three or four years, is a greater adoption of voice.
[00:04:50] A lot of the voice technologies, the voice technology specifically that's transforming the speech audio to text has greatly improved.
[00:05:00] All the big players now, their speech models out of the box work really well.
[00:05:05] So we're seeing definitely an increase in people where maybe they were just focusing on text over web, utilising that same knowledge and experience over voice,
[00:05:15] because that speech technology now has become more commonplace and more accurate.
[00:05:20] We've definitely seen that.
[00:05:21] And we've definitely seen organisations, you know, if you go back eight years, there were probably more FAQ type bots in play.
[00:05:30] So people might not have a very good website.
[00:05:33] And, you know, the way to fix that was to put in a bot that could maybe answer questions as well.
[00:05:37] But definitely over the years, the capabilities of those virtual assistants have become much richer, much deeper and more capable.
[00:05:47] Whether that's integrating into an RPA system to enable that automation or exposing other APIs.
[00:05:55] So you've seen a move more from text to voice.
[00:05:59] And also you've seen much richer and deeper interactions with virtual assistants.
[00:06:05] How have you seen the demand for your services change then?
[00:06:09] That's interesting.
[00:06:10] Well, as a company, we are fortunate in that, because we just provide conversational AI services, people tend to find us.
[00:06:18] But what we have found is that, particularly working with some large enterprises, the work that we do is really quite sticky.
[00:06:26] You may start off with a solution for a particular brand in a particular market or region.
[00:06:32] You show the results, you get the data, you see where you can then grow to the next level to expand the use case and or the region to get more payback.
[00:06:43] And as you learn more, you grow and you learn more and you grow.
[00:06:47] So we typically have long engagements with our organizations as in our customers, as they grow and develop their use cases to multiple brands, multiple regions or multiple languages.
[00:06:59] And in each step, it's a data driven decision to see where they're going to get their biggest bang for their buck because you've got the data being collected in the system.
[00:07:08] So we've definitely seen that.
[00:07:11] We've seen more organizations over the last few years particularly become more interested in being more capable themselves in maintaining and doing the work rather than outsourcing it all.
[00:07:22] And so that's been really good as well as we try and coach people in the best methods and processes so they can continue to grow.
[00:07:29] How are you seeing vendors mature?
[00:07:32] I'm really interested to see what your perspective is on that maturity journey with vendors.
[00:07:37] That's a really good question as well.
[00:07:39] Well, I think it's worth maybe going back a bit further.
[00:07:43] Originally in the conversational AI space, a lot of the vendors had systems that meant you had to build more deterministic flows.
[00:07:51] And over the years, vendors have adopted new techniques. Initially it was machine learning and classifiers that helped with particular topics like intent recognition.
[00:08:01] So you didn't have to code every single potential utterance.
[00:08:04] And then more recently, the advent of large language models and what they can do.
[00:08:11] Now, the interesting thing with large language models, and even ChatGPT, is that you can't orchestrate and build a whole bot using them alone, because you still need to orchestrate the dialogue.
[00:08:24] And so today, vendors are focusing more on the orchestration of complex dialogues, while utilizing things like ChatGPT or other large language models or small language models to help make that deployment quicker and more fluid.
[00:08:41] Now, Darren, would you say a little bit about some of the horror stories that we've seen?
[00:08:46] I would imagine those chatbot horror stories may not have happened had those organizations had you as an advisor.
[00:08:57] So, these chatbot horror stories:
[00:09:00] Is that a good thing for the industry to help educate or do you think it's holding things back a little?
[00:09:06] I think five or six years ago, it was a bad thing for the industry, because there were so many poor applications out there that consumers' experience was regularly so bad they would turn off from using a chatbot given the opportunity.
[00:09:26] You know, you would regularly get people come into a bot and the very first thing they say is, I want to speak to an agent.
[00:09:32] I want to speak to an agent.
[00:09:33] Human.
[00:09:34] They don't even try.
[00:09:36] Right.
[00:09:37] And so that was bad for the industry, having such a large number of really bad implementations.
[00:09:47] Things have improved in recent times, and now consumers' expectations are shaped by the more publicly available solutions coming from the likes of OpenAI.
[00:09:57] So consumers and people have, you know, moved on a little bit as well in terms of their adoption of technology.
[00:10:03] And they're more open to having an automated conversation with a machine rather than speaking to a human.
[00:10:13] In fact, in some cases would prefer to.
[00:10:15] Yes.
[00:10:15] So there's a much greater acceptance of it.
[00:10:19] And in the scenario where we are now, a bad implementation is useful for that organization to see what went wrong and how to improve and do it right next time.
[00:10:29] So now we're getting good lessons from bad implementations, whereas previously they were all over the place and were impacting, you know, the willingness of people and consumers to use them.
[00:10:43] I was reading recently about one vendor that's been working with a debt collection agency.
[00:10:50] And they've been astonished.
[00:10:52] The agency, and I think both parties actually, have been astonished at how consumers who are being pursued for debts are reacting really, really well to a conversation with an AI, basically.
[00:11:06] Would you say a little bit about your attitude there?
[00:11:08] What are you thinking is happening there?
[00:11:10] Well, there's a psychological aspect to humans preferring to speak to non-humans when they're discussing sensitive topics.
[00:11:20] And that could be around personal debt, for example.
[00:11:24] Or it could be around certain medical issues or, you know, mental health related issues.
[00:11:34] In those scenarios, those bots need to be really clever and be really intelligent to be able to have a proper conversation with a person.
[00:11:43] So that requires a lot of investment to get that right.
[00:11:46] But it is human nature in those scenarios where people would prefer to not speak to another human.
[00:11:53] The other thing is that when you do have a virtual assistant, a bot that is well integrated, able to operate 24-7 and able to take significant peaks in load,
[00:12:09] consumers do prefer to self-help and resolve issues there and then rather than wait for a callback the following day or wait in a queue.
[00:12:17] But only if the system can actually resolve the issue.
[00:12:23] So that is becoming a greater preference for users if it's more than just a simple FAQ bot.
[00:12:30] Thank you, Darren.
[00:12:31] I've been in quite a few meetings where a senior exec has come into the room brandishing, for example, a copy of The Economist in their hand, saying we need to do generative AI.
[00:12:43] So I wonder about your philosophy and how things have changed, I would hope for the better, over the last couple of years since OpenAI and ChatGPT have entered the widespread consciousness, but also the executive consciousness.
[00:12:56] So from an executive consciousness point of view, the real positive around generative AI and the likes of OpenAI is that it has brought the topic of AI and the automation of conversations to the board level.
[00:13:13] And it's exposed the potential benefits at the board level rather than at a departmental head level.
[00:13:21] So that exposure has been great.
[00:13:24] One of the challenges, though, is the expectations that people have now that there is OpenAI.
[00:13:30] We'll just have one of those.
[00:13:31] Why can't it, you know, just work for us?
[00:13:34] You know, why can't it answer questions about someone's insurance policy?
[00:13:39] So it's great having that exposure at a board level, really bringing to the fore the potential opportunity and the potential ROI that you can get from automation of conversations.
[00:13:52] But on the other side of that, it has set expectations of how straightforward it is too high.
[00:14:00] So you will find organizations will explore, they will play with a particular large language model, they will look to build something.
[00:14:09] And over time, what I've seen over the last six to 12 months, actually, are people's expectations correcting a little bit.
[00:14:15] And then people perhaps thinking of the use of large language models as a toolkit within their AI ecosystem used for specific parts of the chain, the interaction, rather than thinking it can do everything for you.
[00:14:32] And so I think over the last 12 months, we're coming down a little bit in those expectations.
[00:14:37] And they're a bit more pragmatic in the use of large language models.
[00:14:41] In a moment, I want to ask your advice about executives and what should they be thinking top of mind.
[00:14:47] But before that, you mentioned education and talent as an aspect there.
[00:14:53] Could you say a few words about what you've been doing over the last period of time in helping organizations with their talent development, talent support and so on around conversational AI?
[00:15:04] That's another great question.
[00:15:34] One of the interesting things about people in this domain is the diversity of backgrounds.
[00:15:39] And by backgrounds, I mean their experience.
[00:15:41] So in our business, we have people who may have had a more traditional technology background, who have, you know, an affinity towards language and who enjoy the idea of getting involved in designing automated conversations.
[00:15:58] Or you might have somebody who was working for a magazine, you know, creating copy for articles, who also has an affinity towards language.
[00:16:09] But in addition to that, they like the technical aspects of it.
[00:16:12] So the diversity of people's background and experience is really broad, which makes the recruitment difficult because you're not just checking boxes on a CV.
[00:16:23] You know, for example, if you were building a technical development team outside of conversational AI, you may be looking for four years of Java experience or internet experience and particular things that you would then look for and maybe you would test for.
[00:16:39] It's different for conversational AI because your ideal candidate may or may not have a technical background, may or may not have a linguistic background.
[00:16:48] And so what we do when we're recruiting, we talk more about the role and what it is and the responsibilities and the sorts of things that people will be involved in.
[00:16:59] And we set those expectations.
[00:17:00] We recognize that the background can be quite diverse, but we go through a recruitment process that helps us identify how naturally somebody can get involved in the design and the development of conversations.
[00:17:14] And that really brings out, you know, the ideal candidate because we've worked with people who are, you know, really technical and they would rather try and make a piece of code perhaps more efficient in how it operates than think about the experience of the user.
[00:17:31] But that's the important thing in conversational AI.
[00:17:34] You need to think about the experience of the user.
[00:17:37] Speaking of the user, how much does the conversation around ethics come in, particularly when you're talking with executives or the like?
[00:17:47] I had an issue when I was introducing this stuff back in 2016, 2017, where there was deep concern about, you know, should we identify that this is a bot?
[00:17:58] And eventually we just said, look, yeah, you're chatting to the chatbot.
[00:18:01] You know, that was the first thing that the customer saw.
[00:18:03] But actually the customer didn't really mind, as you were saying earlier, if they got it done.
[00:18:07] You know, they got their tasks done effectively.
[00:18:09] What are you seeing and hearing in the ethical space about conversational AI?
[00:18:14] Well, there's two things on that, but I'll focus on the thing that you just mentioned first.
[00:18:20] We always say to all of our customers, speak like a human, but don't pretend to be one.
[00:18:27] So you want to make sure that the bot is intelligent.
[00:18:30] It can do the things a human can do.
[00:18:33] But right at the very beginning, make it clear that it's not human because consumers will feel duped and misled if they find out halfway in the conversation.
[00:18:43] You know, oh, is this a bot I'm speaking to?
[00:18:46] Even if it's fully capable, even if it's resolved the issue.
[00:18:49] So make it very clear at the outset, this is a virtual assistant, but you want to still try and speak and interact like a human.
[00:18:58] So that's really important for keeping the trust with your users in how they're interacting with you.
[00:19:06] But more broadly, on an ethical perspective, the persona of a virtual assistant, how it speaks and how it interacts really needs to represent the DNA of your business.
[00:19:19] And that's the important thing.
[00:19:21] And hopefully the DNA of your business is covering all the right ethical considerations.
[00:19:26] And you need to make sure that the virtual assistant is very much aligned to that.
[00:19:30] I know this might sound a little silly, but I am surprised by some organizations and how they deploy their virtual assistants.
[00:19:40] For example, in the United Kingdom, we have NatWest with Cora.
[00:19:44] So they've given it a name.
[00:19:46] Santander have Sandy.
[00:19:49] HSBC, I'm not quite sure, actually.
[00:19:51] I was trying to find out.
[00:19:52] Sometimes it's got a name, sometimes it's not.
[00:19:54] Lloyd's is very clearly a virtual assistant.
[00:19:56] There's some quite different examples there of how they're broaching this.
[00:20:01] Do you have a view or a preference?
[00:20:03] What do you advise your clients?
[00:20:05] So we believe that it's always good if you can, if it fits in with your brand, to give your bot an identity and a name.
[00:20:14] Because you want people, particularly internally, to think of it as a person that needs training, that is growing, that is developing.
[00:20:23] And for it to have a personality.
[00:20:25] Now, you wouldn't necessarily want to call your bot Dave or John or a very traditional, typical human name, because that goes towards maybe misleading the user.
[00:20:40] But it can be.
[00:20:41] But I think giving your bot a name and identity is really important, even more so for internal stakeholders and users.
[00:20:50] And so internal stakeholders and users will start referring to the bot as he, she, or they, whichever.
[00:20:57] And it becomes part of the family, part of the internal organization.
[00:21:03] And there might be a theme.
[00:21:05] So with HelloFresh, they have a number of brands.
[00:21:08] You've got Brie, which is the main HelloFresh brand.
[00:21:12] You've got Colby, you've got Olive.
[00:21:16] And that's great.
[00:21:22] So there's a theme, you know, which works well, but they're still names as well.
[00:21:22] And that helps internally establish the identity and persona of the brand and the bot.
[00:21:29] A quick one there on internal versus external.
[00:21:32] Do you do much with internal chatbots?
[00:21:36] We do.
[00:21:36] We do work with internal chatbots as well, whether that be around internal IT support, or facilitating organizations where they've got very operationally intensive activities.
[00:21:53] The key thing with internal virtual assistants is that typically the volume is lower than B2C.
[00:22:01] Yes, right.
[00:22:01] So the use case for internal virtual assistants tends to have less payback, although the cost per call may be higher.
[00:22:11] So you might have a typical call center may have a cost per call to a consumer of $10 per call, for example.
[00:22:18] But the internal cost, if you're helping somebody put together the terms and conditions for speaking to a customer in Asia about selling assets,
[00:22:28] then that internal cost might be $50 per call.
[00:22:30] So it comes down to the use case and, you know, the ROI based on that.
[00:22:38] Do you see very different metrics applied or, in some cases, no metrics,
[00:22:45] just feelings, applied by some of the senior execs or the teams that you're working with?
[00:22:49] More recently, metrics and business cases are more commonplace.
[00:22:54] Five or ten years ago, they weren't,
[00:22:58] as people were probably exploring the technology more.
[00:23:02] So there's two pieces to this.
[00:23:04] You either try something, build out a simple password reset use case internally because you want to try the technology.
[00:23:10] Does it work?
[00:23:11] What can we learn from it?
[00:23:12] But that isn't the real reason why you're doing it.
[00:23:14] The business case may not be there.
[00:23:17] But more recently, people are moving forward with a proper business case and, you know,
[00:23:24] an idea as to why they're doing it in the first place.
[00:23:27] Although we have come across scenarios and situations where an organization is not clear on why they're doing it and what the business case is.
[00:23:34] And, you know, we take a step back and make sure that that is understood first.
[00:23:39] One of the things I really wanted to ask you about, Darren, was, you know,
[00:23:42] you're interacting with lots of different companies and lots of senior executives at various different companies.
[00:23:48] What are the three things that you think executives should be thinking about, top of mind, when it comes to conversational AI at the moment?
[00:23:56] So the top three things, well, I would say the first one is actually what we were just discussing,
[00:24:01] which is around the business case.
[00:24:05] So there may be some cases, as I mentioned, where you're just exploring and you just want an excuse to play with a particular technology to see what it can do.
[00:24:16] How does it work?
[00:24:18] Could it fit within your organizational ecosystem, your technical ecosystem?
[00:24:24] But if you are looking to deploy something for real, then you must, must have a business case.
[00:24:30] And there's a number of reasons for that.
[00:24:33] One reason is there is lots of different technology out there and there's lots of different approaches to deploy that technology.
[00:24:41] And then there's lots of different people who have different viewpoints on how to deploy the technology.
[00:24:45] If you don't have a clear business case, a North Star, a shining beacon by which you can make these decisions, then you will flounder and you won't get the most out of it.
[00:24:59] I worked with Skoda previously; we were doing a pilot of a chatbot for their website.
[00:25:09] This was a joint project together with Skoda and Accenture.
[00:25:15] And they wanted to do a pilot in the Spanish region, in Spain.
[00:25:20] Right.
[00:25:21] And when speaking at the very beginning with the stakeholder, because initially it was a chatbot for the website, the question was, well, why are you doing this?
[00:25:32] And it was great, because he went into lots of detail: they had a problem in that the conversion rate for their website was really low, less than 1%.
[00:25:41] And in the automotive industry, the conversion rate means someone coming to your website and booking a test drive.
[00:25:49] Because then, as a consumer, when you get into the dealership, the salesperson comes along and, you know, a deal is struck.
[00:25:58] And so the conversion rate is somebody coming to the website, asking about a vehicle and booking a test drive.
[00:26:04] And it was a really low conversion rate.
[00:26:06] And this was said at the very beginning.
[00:26:08] And then there was a short project to develop this pilot, everything from the design, the look and feel, the integrations that were needed to inquire about a dealership stock and backend integrations to CRM systems.
[00:26:21] But every design decision, every discussion about whether we should implement something in a particular way, the team would ask themselves, well, will it improve the conversion rate?
[00:26:34] Will it increase the likelihood of somebody booking a test drive?
[00:26:38] And that became the North Star for the project.
[00:26:41] And it was a great success.
[00:26:43] I mean, the results increased the conversion rate by three hundred and something percent, because that was the pure focus, and we measured it.
[00:26:52] That was the focus of the virtual assistant. The alternative would have been, you know, "we want a chatbot on the website," and it would not have had that same impact.
[00:27:02] So the business case is really important.
[00:27:04] And it's really important at the beginning to establish the KPIs that you want to measure to see how you're doing with that business case.
[00:27:12] Gotcha.
[00:27:13] Then take, take me through.
[00:27:14] So that's one, the business case.
[00:27:16] What's, what's number two?
[00:27:18] Corporate commitment.
[00:27:19] Now by corporate commitment, I mean there being buy-in from the top about why you are deploying a conversational AI application.
[00:27:30] There's many conversational AI applications and initiatives that are initiated at a departmental level.
[00:27:37] So customer care or customer support, for example.
[00:27:41] And what you will find is that over time, you, you know, you might deploy a pilot that may well be successful within the boundaries of the customer support organization.
[00:27:50] But when you really want to drive it to the next level, if you don't have the buy-in and the commitment from product marketing, from sales, from corporate IT, you will hit a glass ceiling.
[00:28:06] The team will end up identifying workarounds because they can't get access to the right people within corporate IT, because it's not, you know, assigned as a responsibility for them.
[00:28:18] Yes, yes.
[00:28:19] So corporate commitment and making sure that the, the project team is able to, when needed, work across boundaries is really important to get the maximum ROI and the best results from a virtual assistant.
[00:28:37] Gotcha. Gotcha.
[00:28:38] Okay.
[00:28:38] Okay.
[00:28:39] I like that because I've seen the ramifications of not having that in place.
[00:28:42] By the way, Darren, what's, what's number three?
[00:28:44] So number three is definitely around data.
[00:28:49] Right.
[00:28:50] And it's in two areas, really.
[00:28:54] So when you have a virtual assistant and you're automating conversations with your consumers, you have access to all of that data.
[00:29:07] And until you analyze that data, you don't necessarily know exactly what the right scope to deploy is, or exactly how to tune it.
[00:29:17] So launch early, get the data and then improve and make sure you are responding to how your customers are really saying things and what they really want, rather than try and do that theoretically beforehand.
[00:29:31] So get the data and make data driven decisions on where you should focus, launch early and get the real data.
[00:29:40] And it's really important on an ongoing basis to measure the data and see where you need to improve, because otherwise you could be in a bad situation.
[00:29:48] If you're not making data-driven decisions, you might spend 80% of your time working on things that are only going to have a 20% impact.
[00:29:56] It's the reverse of the 80/20 rule.
[00:29:59] Whereas if you have the data and you know exactly what is going to have the impact in which areas, also aligned to the business case (you know, don't focus too long on an area that isn't associated with the business case),
[00:30:13] then you can, you know, spend 20% of the time that you were spending initially and have 80% of the impact.
[00:30:20] So data driven decisions and getting the data and understanding it is really important.
[00:30:25] And that needs to be thought about at the outset and it needs to be aligned to the business case.
[00:30:31] The second part of that, and this is perhaps the hardest part, is what data.
[00:30:39] Everybody's got Power BI reports or other data warehouses where there's lots of data going in, and it's quite straightforward to get quantitative data on how your bot may be operating.
[00:30:51] How many times did it escalate to an agent, for example, or how many times was the bot's safety net hit?
[00:30:57] But why? What is happening?
[00:31:00] How is it performing?
[00:31:01] Why did it do this?
[00:31:02] Why did it hand over 50% more than normal on Tuesday last week?
[00:31:07] And it's very hard to get that information from traditional reporting systems.
[00:31:14] And it often requires a team of people manually eyeballing conversations to figure out what was happening.
[00:31:22] And the problem with that is that you end up drawing conclusions that are not statistically relevant.
[00:31:28] So that's where you may end up wasting time fixing something that only happened twice in the last year.
[00:31:34] And so, you know, there are no out-of-the-box tools available today to do this qualitative analytics and understanding.
[00:31:42] So that's the hardest part is what data and how to get it.
[00:31:46] So it's something to think about early on.
[00:31:49] So you don't waste time working on things that are not going to have a big impact.
[00:31:54] Darren, for those thinking we need to have a conversation, what's the best way of engaging with you and the company?
[00:32:00] You can reach out to me on LinkedIn; it's probably the most straightforward way.
[00:32:04] My details are there on LinkedIn.
[00:32:06] You can reach me through yourself, Ewan, as well.
[00:32:09] That is entirely fine.
[00:32:10] So, yeah, I'm open and happy for any conversation and conversations with our team as well.
[00:32:16] Conversational AI is in our DNA.
[00:32:18] We love talking about it to anyone and everybody.
[00:32:21] And so we'd be happy to engage and have a conversation with somebody over a coffee, reach out.
[00:32:27] That's absolutely fine.
[00:32:28] So you heard it here, dear listener.
[00:32:31] Darren says get in touch.
[00:32:32] Excellent.
[00:32:33] Darren, that's been fantastic.
[00:32:34] I really appreciate you taking the time.
[00:32:37] Thanks for joining us.
[00:32:38] Thank you very much.
[00:32:38] It was a pleasure.