Tech ONTAP Podcast: Episode 408 – AI in Business (with Kamales Lardi)


Note: Post (mostly) generated by ChatGPT based on text transcript

This week on the Tech ONTAP Podcast, we’re joined by global AI and digital transformation strategist Kamales Lardi for a real-world, business-aligned discussion on responsible, ethical, and scalable AI adoption — with data strategy and human enablement at the core.

For fun, I asked ChatGPT to rate the accuracy of this discussion and this is what it came up with:

During our post-episode review, the insights shared were evaluated as highly aligned with widely accepted enterprise AI research, ethics guidance, and adoption trends, earning an overall accuracy rating of 4.5 out of 5 stars. The themes, risks, and recommendations discussed were well-supported, with only minor caveats around variable success metrics and anecdotal claims.

So if AI agrees with us, we’re golden, right? Right??? 🙂

In this episode, we unpack:
• Why AI is not new — but enterprise adoption is scaling
• Why many pilots stall without data governance, business alignment & change management
• Ethical, legal & regulatory challenges, including IP and hallucinations
• The importance of human-centered transformation strategies
• Why AI should augment — not replace — people and processes

Learn more about Kamales Lardi:
🌐 https://kamaleslardi.com

Finding the podcast

Check it out here, like and subscribe and all that jazz (now hosted on the NetApp YouTube channel!)

Note: YouTube episodes may lag behind audio in publishing.

If you prefer audio only, I also still offer that:

You can also find the Tech ONTAP Podcast on:

Transcription

The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.

Ep 408 – AI Ethics Kamales Lardi edit
===

Justin Parisi: [00:00:00] I’m here in the basement of my house and with me today I have Kamales Lardi to talk to us all about AI and business.

So, Kamales, what exactly is it that you do? Tell us a little more about yourself and where to reach you.

Kamales Lardi: It’s a pleasure to join you here today, Justin. I really enjoyed our last session, so it’s great to be back. I’m an AI and digital transformation expert, so what this means is basically I help companies across the world understand the value of technology and how they can leverage it for their digital future.

And I’ve worked with companies across industries and sectors as well as across regions over the last 25 years. And one of the key things I focus on is really implementing technology and driving transformation in organizations that’s centered around people. So, a human-centric approach.

Justin Parisi: Okay. And how do we reach you?

Kamales Lardi: I am on most social media channels, LinkedIn blue Sky, Instagram, but you can reach me on my website as well. kamaleslardi.com or lardipartner.com.

Justin Parisi: So you cover AI and businesses, so let me ask you, did AI find you [00:01:00] or did you find AI?

Kamales Lardi: Oh, it’s hard not to find AI these days, isn’t it?

It’s the new shiny thing that everyone’s talking about. It’s actually quite fascinating to see, because a couple of years ago it was all about the metaverse. Before that it was blockchain. So we are kind of going through these technology cycles. I have to say, since I’ve been in the space for over 25 years, I started my career off as a coder back in the late nineties.

AI has always been a topic of discussion in the tech field. It’s not a new field. It’s been around since the 1950s, and research and development in the area has continued over decades. What’s different today is really the accessibility of the technology, the ease of use.

The interfaces have significantly developed, as well as the foundation technology behind it, right? So the unlimited computing power, the unlimited access to data with cloud computing and so on. We’ve got the right foundations in place. Plus we have a much more sophisticated global audience, with people becoming so used to [00:02:00] using social media and online technologies that we are now able to adopt artificial intelligence-based technologies fairly easily. So all of these elements have come together to create this sort of perfect storm that just accelerated artificial intelligence a couple of years ago. And we are seeing it everywhere. It’s hard to escape, to be honest.

Justin Parisi: So, out of blockchain, Web3, metaverse, NFTs, and AI, rank the top three for staying power.

Kamales Lardi: Staying power. I would definitely say AI. I am a big fan of virtual and augmented reality, so I think that’s something that’s going to pick up over the next six to 10 years and is gonna be part of our daily lives. And also blockchain, right? Blockchain creates a certain foundation that can transform the way we interact with one another, the way we build trust between individuals as well as with businesses and with corporations. So we’re a bit of a ways off. I think we are still at that nascent level where there’s still some development and some maturity to be achieved, but [00:03:00] these technologies are here to stay, to be honest. What I feel is more interesting is not the individual technologies themselves,

but the convergence of the various technologies, right? So if you combine, for example, the kind of computing power that AI brings, the insights and the capabilities, the functionalities that AI-based technology brings, and the transparency that blockchain could offer, the data transparency, the platform transparency, and the trust that you can build, combining these technologies could actually create a very, very powerful system of trust where we can utilize these technologies for transformative outcomes. And then if you add to that the layer of, for example, VR and AR, you create something that’s extremely immersive and easy to use, that can embed people within the systems. And so the potential is there.

I think there’s this significant, I feel inspiring, potential for us as human beings, but also for organizations, to explore these technologies and move towards the digital future. [00:04:00]

Justin Parisi: Yeah, I think when you look at things like AI or blockchain as an individual thing, it becomes less interesting than when you start to combine them.

Hmm. Outside of NFTs, that is. They have no use, NFTs.

Kamales Lardi: Well, we have a virtual office that we use for global client meetings or trainings and presentations. It creates a certain wow factor, I have to say, and it allows us to interact on a very personal level with people who are not in the room with us.

Justin Parisi: Yeah.

Kamales Lardi: So we do see some very good use cases with it.

Justin Parisi: You wrote a book about AI and business. I’m guessing you covered the main topic of how AI is changing how businesses operate and the challenges they are facing. So let’s expand upon that a bit.

Kamales Lardi: Definitely. So one of the things I noticed when the publishers reached out to me to write this book is the fact that there wasn’t any publication in the market, or, let me say it another way, most publications focused on the technology side of AI, the capabilities of the tech, or focused on elements around [00:05:00] regulation and those capabilities.

Now, what I felt was missing in the market was this playbook for business leaders, where business leaders could fully understand what this technology is and how it applies within their business environment, what kind of commercial outcomes could be realized, and what are some of the key elements, prerequisites, and risks that need to be addressed.

And so with the book, I did try to create this guide for business leaders to understand the technology in its depth. Not from a technical perspective but from a layman’s perspective, and also a business perspective. And I think one of the things that I found quite interesting to do was, in chapter one, I readdressed the question: what is intelligence?

And I think there’s a misconception in the market when we say artificial intelligence. Oftentimes people assume this means that the technology itself is incredibly intelligent and functions like human intelligence, whereas there are certain definitions for what intelligent human beings are and how these technologies mirror those capabilities.

And so understanding this then sets the foundation [00:06:00] for how companies can actually leverage that artificial capability, and where they should leverage that capability and where they shouldn’t. And so I really enjoyed delving into that space as well.

Justin Parisi: Yeah, the intelligence factor that you’re talking about, I’ve run into that several times when attempting to use it.

I think about when you grade somebody on a reading level, right? So in America, we say, oh, you read at a second grade level. You read at a 10th grade level. Where would you say AI is currently reading at? What grade level?

Kamales Lardi: That’s a great question. And to be honest, the different models work at different levels and capacities, and they have different purposes, right?

Mm-hmm. Some of them, for example ChatGPT, I personally find the most interesting and the most effective for the use that I have. And I would say probably about sixth to seventh grade level, depending on the task. There are certain tasks that it really excels in, and then certain tasks where you go, I’m way better at this.

And I think that the one key element that I have to highlight is it comes down to the user at the end of the day: how well the user knows [00:07:00] the technology, how well the user is trained to use the technology, and how well you know your own field. So if you’re trying to do a certain task or execute a certain task with the help of the AI model and you know your field really well,

I think it’s fairly quick that you can catch how the system is not able to deliver what you need or is not being as accurate as you need, and you can kind of understand, okay, this is how I can use the technology and this is how I can’t. So to give you an example: while writing an article, I may wanna do a little bit of research on a certain topic.

I could go into ChatGPT and say, give me all the relevant bullet points on XYZ topic. If I know that topic really well, and I’ve read a lot of background information on the topic, I can very easily catch where it’s being accurate and where it’s not in terms of feeding back that information.

And I think this is where people forget that systems like ChatGPT are not simply doing the research and throwing that research back at you; they are doing the [00:08:00] research, pulling out information, adding it to the information they already hold within their own system, and then reflecting that back, which means there could be information in there that’s inaccurate or simply made up.

And so it is really, really important to have that human oversight when we work and interact with these systems.

Justin Parisi: Yeah, it does really well when it’s trying to pull information from rote, right? So basically, define this word for me. Where it struggles, I think, is when you want it to infer information, when you ask it a question and you want it to actually think for you.

The challenge then is what data is it actually pulling from? How many guardrails are in place for that data? Is it spanning an entire data set that maybe is inaccurate, or is it focusing directly on the stuff that’s pertinent to that question? And that’s why you see such a variance in models like ChatGPT or OpenAI.

And then you have really niche types of things like Gemini, which does the imaging. You have Claude, which does more of your coding. [00:09:00] So when you are dealing with AIs, you’re not just dealing with a monolith, you’re dealing with a lot of different niche areas of different AIs.

Kamales Lardi: Yeah. And I think understanding, first of all, which platform is best for which use is really important. But also building up our own knowledge in terms of how we write a prompt in a way that gives us the result that we are looking for. And we have to be super aware when we develop prompts as well that the systems are built for confirmation bias. If it sees in your request that there’s a certain direction you’re heading towards, it’ll double-confirm that direction for you. And so one of the things that I find important, to train myself as a user, is to look at the prompts that I’m creating to ensure that I’m not indicating certain biases or showing it a certain direction to go, and really trying to write a neutral prompt to get a neutral response, to train the bias out of myself as well when I’m using these tools.

Some of the challenges that I’m looking at, or I’ve been [00:10:00] experiencing as well, are around, I would say, copyright of the information that’s being used. One of the things that I posted a few weeks ago on LinkedIn is the fact that one of these models has actually taken a full copy of my book that was published by Wiley, and it’s now in their database.

And I only found this out because I went and did a proactive search and found out that my book was in that library. So there are lots of questions around: is this ethical, is this the right thing to do? And how is that information going to be used? Is my intellectual property going to be regurgitated somewhere else and utilized in some other way?

Justin Parisi: It’d be nice if the AI would just pay you every time it referenced you.

Kamales Lardi: Oh, that would be amazing. I think I would be excited.

Justin Parisi: That’s not gonna happen. There’s no incentive for that right now. No. And we’ll talk about why that is when we talk about regulations here. But before we get to that, a lot of these AI use cases start to trickle down into the anxiety of the employee, right?

Mm-hmm. So as an employee, is my [00:11:00] job going away? Is AI gonna take it? And I’ll give my opinion: in my use of AI, I don’t think we’re in danger right now, but as we train the AI, maybe. But what is your take on that?

Kamales Lardi: I think the situation has been worsened by leadership teams and specific CEOs who’ve gone public saying, we are not hiring unless AI can’t do this particular role.

And, famously, Shopify’s CEO did this, and we have Amazon announcing something similar. So I think that’s the challenge. We are living in an environment now where many of the leadership teams that I interact with are assuming that these technologies are intelligent enough to replace human beings one-to-one,

at a cheaper, more productive level, which is entirely not true. So there’s a fallacy around that. There’s a misplaced trust in the technology, where people are assuming the tech is really that intelligent and can be used in that way. What I’m seeing on the ground is implementation [00:12:00] challenges. Once that human oversight has been removed, you are actually cutting out or gutting certain parts of the process from your organization. So you’re gutting workflows and processes and trying to replace that with an artificial system that hasn’t learned the kind of tacit knowledge, the experience, the intricacies of the process.

And so you are left with something that doesn’t work as well. And you are also left with challenges around risks, opening yourself up to certain breaches and things like that, which humans used to be trained to address.

There are three key elements that I’m seeing on the ground that challenge us on a human level. One is the shifting of roles, right? With any transformative technology that’s being implemented, and we’ve seen this since the first Industrial Revolution, there’s a shift in roles.

People will perform their roles differently. Some roles get replaced or removed or become redundant. Other roles become more efficient. New roles develop, new tasks, new types of things. So we always see this shift happening, and I think this is a natural evolutionary [00:13:00] process. It’s a natural digital evolution that we’re seeing over the years.

And there’s this push within organizations to quickly upskill people. And if you’re not upskilling yourself, then you will become redundant, outdated, and so on. And with this push to train, there’s still a gap, because we’re seeing the training programs being implemented and driven and so on, but the number of people who are actually getting trained is not keeping up with the pace. There’s also a knowledge gap at the leadership level, where a large number of C-level executives and board members that I work with have never used one of these systems on their own.

They’ve never tried the system themselves, but they’re making decisions about how these technologies are used in the company. And I wrote about this recently. It’s creating this digital Darwinism where you’re seeing this gap, this inequity, grow.

And it’s not about the locations where you have technology and don’t have technology. Even within organizations, we’re seeing the AI literates and the people who aren’t keeping up with the tech, [00:14:00] and that divide is growing. This digital Darwinism, this new environment, is creating a significant amount of uncertainty. Most people are very much tied to the jobs that they do. They identify themselves with the jobs that they do. And now we are in a situation where your job, your identity, is being threatened by a technology. So there’s this huge amount of uncertainty and unhappiness being created across various organizations.

It’s creating a sense of fear, a loss of control. And it’s also impacting loyalty towards brands. It’s impacting the way people show up at work and the level of productivity they bring. And I think this is a psychological impact of technology that we are not paying enough attention to as business leaders.

So organizations really need to start looking at how this impacts the people within the organization, the people who are being forced to leave, the people who are being forced to stay, and where the technology can actually be positioned within their structures, right? My personal belief is that technologies like [00:15:00] artificial intelligence should enable and augment human beings, not replace them. There’s a place for them, but it’s not at the top of the food chain.

Justin Parisi: Right now the best use case I see for AI is that mundane task, the data crunching, populating data sets. That’s where the value is. It’s not so much in doing the actual end-to-end work. I really look at this whole AI push very similarly to the DevOps revolution. There was a lot of consternation then from administrators of, do I have to learn to code now? Is automation gonna replace me? And now we’re redoing it again with AI.

Kamales Lardi: I do see a huge case for replacing software engineers with these technologies, though. Some of these systems are incredibly powerful in developing software, but we don’t know enough about the risks that come with automated development work. I do see this cycle happening, as you said, with DevOps. We are [00:16:00] constantly going through the cycle of change. Do I need to upskill? And I think that’s, to a certain extent, natural. What is different now is the rate of change that we’re seeing. If we just take one category, gen AI, the rate of change in terms of how quickly the tools and platforms are developing, changing, updating, and constantly improving on themselves makes it harder for people to keep up. As soon as you master using one tool, it has the next version coming out, or a new tool coming out that does something similar but better.

And I think that’s where people are getting a little bit overwhelmed in terms of, I can’t keep upskilling myself and I can’t keep up with the tech development. There’s a recent study that came out that basically showed over 95% of AI application use cases are bringing zero returns.

And I think that comes down to a fundamental element where we’re seeing high adoption. We’re seeing high appetite across enterprises for these technologies. But there’s very little strategy [00:17:00] around how the tech applies within the business environment, and low transformation, right? So high adoption, low transformation, which basically refers to the fact that you’re not looking at how the entire organization needs to transform in order to take full advantage of the tech, these end-to-end capabilities.

We are looking at pockets of application, and pockets of application have limited results being driven. So you stay in this pilot environment and not really full-scale implementation. And many of these projects, which I find really disappointing, are tech driven.

They’re really just focused on the tangible: let’s implement this tech, let’s spend X amount and get the tech, and then we can promote that we’ve implemented the latest version of XYZ. And so we are not really seeing this kind of holistic transformation happening.

Justin Parisi: Honestly, I think the AI part is gonna be more of a trickle than it is gonna be a tidal wave. Hmm. And you kind of already see it now with chatbots in knowledge base scenarios, where it’s replacing a support engineer unless you need [00:18:00] actual deeper analysis, where it’s trying to tackle the higher-level problems. And that gives you 24/7 support.

But there’s another aspect to that: it’s not just the side of the person it might be replacing, but also the side of the person interacting. And a lot of people get frustrated when they deal with a chatbot. So what are your thoughts on the other side of the human impact of this, where the people have to interact with the AI themselves?

Kamales Lardi: So I think there are several aspects to that, right? From one perspective, I think organizations need to really consider what kind of interactions. So if I am interacting with an AI chatbot for, recommend me something XYZ from your web shop that meets my needs, and it spits out recommendations, it’s a simple use case. It’s low risk. If I’m wanting to speak to someone about my banking information or my investments or whatever that may be, something highly critical, I don’t wanna have to speak to anything. Even going through a telephone [00:19:00] automated answering machine really gets on my nerves.

Justin Parisi: I hate those.

Kamales Lardi: It’s horrible. God, but most companies have those, right? And it’s even worse if I have a slight accent, and then you have to scream the words out. And I think this is the challenge: you have to understand, from the customer perspective, what kind of interaction the customer is looking for.

And of course, what I’ve also seen, if you go on our website, lardipartner.com, you’ll find that we have a digital human employee who helps guide you through the website and answers certain questions. And there you have a very humanized face and person who can read your facial expressions and understand your tone of voice. That creates a certain hyper-personalized experience in an otherwise 2D environment. There are certain use cases where I believe they can be very positive and supportive, and other cases where they shouldn’t be used. I think the governance around these technologies is not there, and not enough. We’re seeing challenges with AI chatbots that are interacting with minors, for example. This was in the news, and as a parent, this is [00:20:00] incredibly alarming, to know that there are chatbots out there that are specifically targeting younger people with voices of celebrities that they’ve stolen. Literally just targeting and exploiting people who are underage. And this has a significant impact on the psychology of a child, but also on society as a whole. And of course, there are the cases around chatbots driving people to harm themselves and so on.

So I think that the governance is just not enough for us to say mass adoption is here. And this is where human oversight comes into play. This is where organizations are prioritizing commercialization with these technologies rather than good or impactful use for society. And so that’s, for me, a bigger challenge that needs to be addressed before we as business leaders can say these technologies are great for customer interaction.

Justin Parisi: Yeah, it’s basically a perfect storm of the slowness and ineffectiveness of government mashed with the speed and agility of tech. Because we’ve seen this over and over [00:21:00] again with new techs that come out, whether it’s the internet as a whole, mm-hmm, or social media, or now deepfakes in AI, where the governments don’t keep up with regulations.

They don’t act fast enough, and by the time they do act, it’s too late. It’s already out in the wild.

Kamales Lardi: I think this catch-up game, as you said, has always existed. What is different again now is the pace of development. Keeping up with the internet, or with regulating websites and then eventually social media, that was a catch-up game, but it wasn’t as drastic as the tech development we’re seeing today, and the catch-up game is just not working. Whereas you see certain regions, like Europe, driving a very regulated approach to AI with the EU AI Act, which is one of the most comprehensive regulatory frameworks for AI.

There’s a trade-off, right? You’re seeing a trade-off where caution overrides the speed of innovation, and so certain sectors are falling behind in the region. So I think from a government and regulator perspective, it’s also a challenge to [00:22:00] understand how we can balance this.

We don’t wanna get left behind. We don’t wanna be the region that’s not progressing with AI, but on the other hand, we need to prioritize human protection and data privacy. Where’s the line? We definitely need a new way to regulate. The traditional way of regulating tech is not working with the new tech, and so we need to find a different approach for it.

Justin Parisi: I think it starts with eliminating the whole idea of regional and country regulation and becoming more global. Everybody has to be on the same page. Similar to when you do environmental regulations, where everybody signs a global pact, like, oh, we’re not gonna do this, or a nuclear nonproliferation pact: hey, we’re not gonna have this many nuclear weapons.

It’s gotta be the same with AI and tech or it’s not gonna work.

Kamales Lardi: I agree. I think to a certain extent, and this might be controversial to say, we should view AI like nuclear weapons, because the level of harm that could be done at a mass scale could be quite significant for society. However, I am not sure it’s so realistic or even [00:23:00] possible to get every country in the world on board.

Justin Parisi: Oh, it absolutely is not. Just because we need to do it doesn’t mean we’re gonna. We’re gonna eliminate ourselves because we can’t agree. That’s gonna be what ends the world.

It’s not gonna be AI, it’s not gonna be nuclear weapons. It’s gonna be a disagreement.

Kamales Lardi: Yeah. It is gonna be interesting to see how things progress, because some regions are accelerating forward. I think the US and North America is probably one of the regions, but also the Middle East.

We are seeing significant investments go into these technologies, and it’s about global competitiveness. It’s about diversifying their economies. We have to go in this direction in order to be competitive at a global scale. And on the other hand, there’s the human factor, where the regulators are moving at one pace, tech companies and then enterprises at another pace.

And then you have the individual that’s adopting these technologies as quickly as they come out and not being as aware as they should be of the risks that come with that, particularly younger people that are using the tech. So I think mine was probably the last generation [00:24:00] that grew up without tech, Gen X. And every generation after that has had technology as part of growing up. The risks that are evident to us are maybe not as evident to people who grew up with these technologies.

Justin Parisi: You grew up with AOL CDs.

Kamales Lardi: I did SQL coding. I grew up with this stuff.

Justin Parisi: You grew up with the nascent phases of this.

Now another issue you’re running into, I think, with regulation is the people making the policies. Either they don’t want to listen to experts, or they’re not experts and they think they are. That becomes problematic with how the policies are written. And then it varies from region to region as well, which is another reason why we have to have more of a global approach to this.

I think you can actually transpose that over to CEOs as well, where CEOs that are making these decisions maybe don’t have the expertise they need to make these decisions.

Kamales Lardi: So just for the [00:25:00] first part, I don’t think it’s regulators not listening to tech companies. I think it’s more of a knowledge gap that needs to be addressed.

And this is something I deal with a lot working with senior executives and leaders. It can be overwhelming to have to understand something so completely new and transforming so quickly, and to have to learn these new things, but I think it’s so necessary. Especially as a regulator, you have to have mandatory training programs.

And make sure that there’s some kind of certification at the end that says: if you’re a regulator of tech, you have to know that tech. Maybe not from a technical perspective, but at the very least, understand how it works, why it does what it does, how it impacts different areas of business and society, and what the risks are that come with that. There should be some sort of certification that goes with that. From the business leader side, I think the challenge is slightly different. If I think of the executives that I work with or have worked with in the past, a key element is how they’re [00:26:00] measured for success. Most leadership teams within organizations are measured based on tangible sales growth elements. And these technologies can deliver those at scale, exponential results. They can drive exponential financial and business outcomes. And so there’s very little incentive to stop and think, well, how does this affect society as a whole?

And am I doing good in the world? At the end of the day, many of these executives are measured on the wrong things, and that doesn’t drive more sustainable thinking. I’ve had conversations with executives who’ve said, well, by the time something like that happens, I won’t be in this role. I’ll be retired or I won’t be around, if we’re talking about singularity. And I don’t think that’s a huge threat at the moment; I think there are more pressing threats to deal with. But I’ve had one executive say to me, well, by the time singularity hits, I won’t be in this world. This sort of short-term thinking is also a little bit of a challenge. There should be more sustainable thinking, and there should be more incentives that drive more balanced thinking about societal good [00:27:00] and commercial outcomes.

Justin Parisi: I’ve often heard people say that out of all the jobs that AI could replace, CEO could probably be one of the first.

I don’t know what your thoughts are on that, but if that were to happen, do you think the CEOs would treat AI a little differently?

Kamales Lardi: I think that could happen. I’m not gonna say no. There actually is a company in Poland that has an AI-developed CEO, and she’s in charge of certain aspects of growth and supply chain management and so on.

I don’t think it’s so far out to think that there could be a technology that could be a CEO sounding board, something that could replace certain aspects of the CEO role. Not the human management aspects, though. If you’re a good people manager and you’re a good leader in your organization, I think that’s very difficult to replace with a tech. I definitely think CEOs would start treating AI differently, although I do know many CEOs who think they are irreplaceable and that they’re the best of the best.

Justin Parisi: I think that’s the biggest issue, right? It’s that hubris, which an AI would [00:28:00] not have. And honestly, if CEOs are replacing people with AI, wouldn’t an AI just do that as well?

Kamales Lardi: You know what would be worse though, Justin? If we developed a CEO AI based on data from the CEOs, from their data and behaviors and characteristics. Then you would have this super CEO that’s probably not the best.

Justin Parisi: All the jokes about AI taking over, and Skynet, that would be Skynet right there.

Like the, the uber-CEO

Kamales Lardi: Yeah.

Justin Parisi: Strictly motivated by the bottom line. So, pivoting a little bit: what sort of tools are you seeing businesses using when they’re trying to implement AI? I mean, we’ve already talked about ChatGPT and that sort of thing. Anything else that sticks out to you when we talk about AI tools?

Kamales Lardi: We’re seeing marketing, communication, and customer touchpoint type tools. So gen AI is definitely one of the big adoptions that we’re seeing across the board, around knowledge work as well as creative work and communication elements.

I think those are definitely something that goes without saying. I’m also seeing a lot [00:29:00] of tech teams, IT teams, and operations teams looking at agentic AI, implementing it, testing it out. I think that in itself creates a certain challenge. I absolutely love the fact that you can automate so much and create independent capabilities.

But on the other hand, you’re opening yourself up to certain breaches and certain kinds of threats and challenges that are not so evident right away, until something bad happens. You’re essentially giving decision power to an autonomous system, and you’re giving internal access, data access, and customer touchpoint access to systems that you then don’t have control over anymore,

’cause it’s autonomous. We’re seeing such a huge uptake in this area, but also challenges that are coming, and then you have agent hijacking and things like that that we’re seeing in the market. The other piece, which I feel is maybe a lot more grounded and where I would see a lot more success, is these industry- or sector- or function-specific applications of AI.

So for example, financial fraud detection or predictive [00:30:00] maintenance applications. As you said earlier, crunching the numbers, the analytics, those are use cases that are delivering results already. Those are use cases that we have a depth of understanding for.

And those are also use cases that have existed for over a decade. They’re not new, but they are solid and they are delivering results. There’s a range of elements there: if we can maybe think of a combination of these solid, functional or industry-specific applications, and then a layer of these generative capabilities that allow for ease of use and make them more accessible to people, there might be better solutions in the market from that.

Justin Parisi: I’ve recently found a new use case for AI: fantasy football. To help you... I don’t know if you play fantasy football, but it’s really useful for just helping me pick out who I want to pick for my teams. And this could apply to any fantasy sport, whether it’s fantasy hockey or fantasy soccer, whatever. But it does lie to you. It lies to you [00:31:00] constantly and it pulls the wrong data. So you really have to be careful. And this is a lesson across all AI.

AI will lie to you. Yeah. It’ll be really nice to you too. AI is a people pleaser. It’ll be like, oh, you’re the best person ever, I’m so glad you pointed that out to me. I’m gonna lie to you again.

Kamales Lardi: It’s a people pleaser. You never trust a people pleaser.

Justin Parisi: It’s an abusive boyfriend.

Kamales Lardi: Narcissistic.

Justin Parisi: So like we talked about earlier, you have a book, and you also have another book that touches more on the human side of things. So where would I find those two books?

Kamales Lardi: So the new book is AI for Business. It’s published by Kogan Page, so you can get it on their website.

It’s available on Amazon as well. Or you can go to my website, lardipartner.com or kamaleslardi.com, and both books are listed there. And the AI for Business book is, I think, also available on most online book retailers, I would say.

Justin Parisi: Or I could just use an AI, apparently: give me Kamales’ latest book.

Kamales Lardi: No, the new book is not on there. The previous book.

Justin Parisi: Oh, the new book’s not on there. It only grabbed the old one.

Kamales Lardi: Yeah. [00:32:00]

Justin Parisi: Okay. That’s...

Kamales Lardi: It’s unfortunate.

Justin Parisi: Yeah, it is unfortunate.

Kamales Lardi: And Switzerland doesn’t do class action.

Justin Parisi: There’s a new term going around for ai. It’s clanker. Have you read this?

You dunno what I mean? It’s basically like a derogatory term for AI. It’s almost like you’re trying to insult the AI, but the AI doesn’t care ’cause it’s still gonna compliment you and tell you how beautiful you are. But clanker is the latest term. So you could just refer to this AI that’s stealing your book as the clanker.

Kamales Lardi: A clanker. Great. Okay. Is that something I should avoid saying when I’m talking to clients and on stage or, you know...

Justin Parisi: yeah. I don’t know how politically correct it is these days to call something a clanker. I’m sure it’ll eventually not be a great term to use, but right now I give it my blessing.

Kamales Lardi: Awesome. Okay. Alright.

Justin Parisi: Alright. Well, Kamales, thank you so much for joining us today and talking to us all about AI and your book. And hopefully we’ll see some new books from you soon. I’m guessing there’s some stuff in the works from you.

Kamales Lardi: I am hoping so as well, definitely around cognitive neuroscience and AI, so look out for that.

But thank you for having me. It’s definitely been a pleasure.

Justin Parisi: Absolutely. [00:33:00] Thank you for coming on.
