Tech ONTAP Podcast Episode 397 – Navigating the New EU AI Regulations (w/ Adam Gale)


This week on the Tech ONTAP podcast, Adam Gale joins us to discuss the new EU AI regulations and how they may impact you and your business.

Summary of the podcast below, courtesy of ChatGPT:

The Importance of Cybersecurity and Redundancy

Adam highlighted the critical role of cybersecurity within the EU AI regulations, particularly Article 15, which emphasizes accuracy, robustness, and cyber resilience. Organizations must implement technical redundancies to ensure continuous operation, especially for high-risk AI systems like those used in critical infrastructure, such as rail transport.

For instance, if an AI system controlling train schedules fails, a backup solution, like a secondary site using MetroCluster, can mitigate risks. This is where NetApp can shine by providing robust data protection solutions that ensure systems remain operational and secure.
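
To make the redundancy idea a little more concrete, here is a minimal Python sketch of an application-level "prefer the primary site, fall back to the secondary" health check. The endpoint URLs are hypothetical placeholders, and this is not how MetroCluster itself works (that's storage-level synchronous mirroring); it's just an illustration of the kind of technical fallback Article 15 expects.

```python
import urllib.request

# Hypothetical endpoints for a primary AI scheduler and a standby instance at a
# secondary (e.g. MetroCluster-mirrored) site. Names are illustrative only.
PRIMARY = "https://scheduler-primary.example.internal/health"
SECONDARY = "https://scheduler-secondary.example.internal/health"

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_active_endpoint() -> str:
    """Prefer the primary site; fall back to the secondary if it is down."""
    if healthy(PRIMARY):
        return PRIMARY
    if healthy(SECONDARY):
        return SECONDARY
    raise RuntimeError("Both sites unavailable; fall back to manual operation")
```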

Addressing Data Poisoning Risks

A significant concern raised in the regulations is the risk of “data poisoning,” where malicious actors could manipulate AI training sets. Adam stressed that protecting training data is vital to maintaining the integrity of AI systems. NetApp’s capabilities can help create immutable copies of training sets and provide logging solutions to track changes, ensuring transparency and accountability.
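
As a purely illustrative sketch (not a NetApp feature), the "prove what the training set was" idea can be as simple as fingerprinting every file in a training directory and re-checking those fingerprints later; any poisoned or altered file then shows up as a hash mismatch. The directory layout and file names below are assumptions.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(training_dir: str) -> dict:
    """Record a SHA-256 fingerprint for every file in the training set."""
    manifest = {}
    for path in sorted(Path(training_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_manifest(training_dir: str, manifest_file: str) -> list[str]:
    """Return the paths whose contents no longer match the recorded hashes."""
    recorded = json.loads(Path(manifest_file).read_text())
    current = build_manifest(training_dir)
    return [p for p, digest in recorded.items() if current.get(p) != digest]

# Typical flow: build the manifest once, store it (ideally on immutable storage),
# then run verify_manifest() before every training run and investigate any hits.
```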

The Need for Human Oversight

Another essential aspect of the EU AI Act is the requirement for human oversight, outlined in Article 14. Adam joked about the idea of a literal “big red button” to halt AI operations, but he emphasized the necessity for a failsafe mechanism in high-risk AI systems. This kind of oversight ensures that AI operations can be interrupted if they veer off course.
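
For illustration only, a software "stop button" can be as simple as an action loop that refuses to apply the next AI-proposed action once a human operator sets a stop flag. The flag path and the callables below are hypothetical placeholders, not anything prescribed by the Act.

```python
import time
from pathlib import Path

# Hypothetical "big red button": an operator creates this file (or flips a flag
# in a control system) and the loop halts before the next action is applied.
STOP_FLAG = Path("/var/run/ai-scheduler/STOP")

def run_scheduler(plan_next_action, apply_action, poll_seconds: float = 1.0):
    """Apply AI-proposed actions only while the human stop flag is absent."""
    while not STOP_FLAG.exists():
        action = plan_next_action()   # placeholder: ask the model for its next move
        apply_action(action)          # placeholder: actually carry it out
        time.sleep(poll_seconds)
    print("Stop flag detected: halting AI-driven actions, reverting to manual control")
```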

The Role of AI in Cybersecurity

The conversation also delved into how AI is used to bolster cybersecurity. Given the shortage of skilled professionals in this field, AI can automate monitoring and threat detection, helping organizations keep pace with evolving cyber threats. Adam pointed out that NetApp is already leveraging AI for ransomware detection and protection, demonstrating a proactive approach to safeguarding customer data.

Penalties for Non-Compliance

Adam outlined the penalties for violating the EU AI regulations, which mirror the structure of GDPR. Companies could face fines up to €35 million or 7% of their global annual turnover for serious infringements. This high-stakes environment underscores the importance of compliance for businesses developing or deploying AI technologies.
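
For a quick sense of the arithmetic, here is a tiny Python sketch of the "€35 million or 7% of worldwide annual turnover, whichever is higher" cap described above.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements:
    EUR 35M or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with EUR 2B turnover faces a cap of EUR 140M, not EUR 35M.
print(f"{max_fine_eur(2_000_000_000):,.0f}")
```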

Public Perception of AI Regulations

Interestingly, Adam mentioned that while 61% of Europeans view AI and robotics favorably, 88% believe that these technologies require careful management. This duality of perception indicates a societal desire for innovation while also recognizing the need for accountability and oversight.

The Future of AI at NetApp

Looking ahead, Adam expressed optimism about integrating AI into various business processes, even sharing his personal experiences with AI tools. While he may not use AI extensively yet, he acknowledged its potential to enhance productivity, particularly in summarizing information and streamlining workflows.

For more information:

Finding the podcast

You can find this week’s episode on Soundcloud here:

There’s also an RSS feed on YouTube using the new podcast feature, found here:

You can also find the Tech ONTAP Podcast on:

Transcription

The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.

Tech ONTAP Podcast Episode 397 – Navigating the New EU AI Regulations (w/ Adam Gale)
===

Justin Parisi: I’m here in the basement of my house and with me today, I have a special guest to talk to us all about AI and regulations in the European Union. And to do that, none other than Adam Gale. Adam, what do you do here at NetApp and how do we reach you?

Adam Gale: I am an enterprise architect and you can reach me at Adam.Gale, that’s g-a-l-e, @netapp.com.

Or you can search for me and find me on LinkedIn. I generally look at things like regulation, and I also look at our cloud design workshops. This is a bit of a passion for me, and as you know, last time I spoke with you, we were discussing DORA. That got some great traction, I spoke to a lot of people, and I thought, let’s have a look at the EU AI Act.

So here I am today.

Justin Parisi: And it is very relevant and topical because AI is top of mind for most everyone, including NetApp. NetApp has recently announced that we’re doing more for AI. We’re trying to get more into that workspace. This brings up some interesting points about AI in general.

And one of those things is the use of AI and the ethics behind the use of AI, because now we’re seeing a lot of intellectual property being, I guess, repurposed, for lack of a better term, by these AI training models, and we’re starting to see that in some cases it’s not okay. So what do you feel about that?

Adam Gale: I think in certain respects, the cat’s already out of the bag. A lot of models have been trained using copyright material and harm has already been done and putting that back in is going to be difficult. But acts like the EU AI Act do go some way to answering this.

And other countries are also struggling with this big question. You have yourselves in the US with the executive order and the AI Training Act. And you also have things like China with the Ethical Norms for New Generation Artificial Intelligence, or India with the Digital India Act. Every country, every continent is looking at this, and they’re all thinking the same things, and they’re all writing something.

And the speed at which this has been addressed is quite phenomenal, really. I always think to myself, when you see regulation or legislation or laws or anything that’s published this quickly, there’s one of three things happening. One, there’s going to be a huge power shift, which I think we’re already seeing.

Or two, a lot of money is going to be spent. And I think that’s definitely true. The third thing I think is someone’s going to get hurt. Generally, regulation is published with the intent to stop damage or stop people getting hurt. And that’s what we find here in the EU AI Act. It’s trying to limit things or limit damage being done.

So, hopefully, as we delve in today, we can pick up a few of those bits.

Justin Parisi: I find that government in general moves very quickly when their self interests are at stake.

Adam Gale: Yeah, definitely.

Justin Parisi: And that’s not to say this isn’t the right move, but I think that the speed of what we’re seeing is driven by a couple of things.

One is probably previous experience. And we’ll talk a little bit about that later, but also because the AI ML portion of this is going to threaten a lot of their livelihoods or even their reelections, right? So we start to get into this gray area of ethical and legal. So where is the use of AI breaching ethical use and where is it actually committing crimes?

Where is it taking away the whole legality of what you’re supposed to be doing? So where do you see that line? How do you draw the line between something that’s just kind of sketchy to do versus actually illegal?

Adam Gale: Well, that’s a good question. I personally think that the EU’s approach to this is the right one.

They break it down into four areas. Unacceptably risky AI, high risk, limited, and then minimal. Unacceptable risk. I think that traverses the questions you’re asking here. What’s unethical? What shouldn’t we be doing? And they deliberately call out some things here which are unethical and are banned basically.

Banned with the EU AI Act, such as cognitive behavior manipulation. So I can’t go ahead and develop a toy for children that manipulates their behavior to do dangerous activities. Like I know we all wanted to build an AI Chucky, right? But unfortunately the EU’s banned that. We can’t do that.

Justin Parisi: Aw man!

Adam Gale: I know I was desperate for one of those.

Justin Parisi: Aw murder dolls!

Adam Gale: That’s the only schedule. Yeah, but it doesn’t mean to say you can’t develop it somewhere else. And I’ll tell you what’s interesting about this. This only applies to the consumers. This does not apply really to defense. Defense has its own area you can play in.

So this is just to us. This act does not cover those. Also, things that are banned are social scoring. So classifying people based on their socioeconomic data, personal characteristics, anything like that. Banned. Can’t do it. We also can’t do real time remote biometric identification systems, such as facial recognition.

Banned. Now again, that comes with an exception. And I believe this was fought for because of the Olympic Games. You can get an exception and do real time remote biometric identification for something like a missing person or a potential terrorist attack. I like to think of this in the same way as those old-fashioned movies where you get a wiretap.

The same sort of process needs to happen to do this. You need to justify, you need to go through a regulatory body to get permission to do these things. But I guess coming back to your question, these things are ethically wrong in our consideration, and they’re absolutely banned. And they fall into the first category, which is unacceptable risk.

What do you think should be banned?

Justin Parisi: Yeah, so I think banning it, as far as committing a crime goes, would come down to something that can create bodily harm, right? Or mental harm in some cases. So like your training of children with the Chucky doll.

Adam Gale: Yeah, yeah.

Justin Parisi: As far as the ethical side of it, something recently in the news happened where Scarlett Johansson was approached by Sam Altman to donate her voice or give her voice for this new AI chatbot.

Right. And she was like, nah. And then a little while later, this chatbot comes out and it sounds an awful lot like Scarlett Johansson. And she’s like, whoa, whoa, whoa, whoa. I said no. And they’re like, oh no, no, no, we didn’t do anything. And then they asked for the training data and they were like, oh, we’ll just take it down.

Adam Gale: You see, I know this right, and I’ve seen “Her,” because Sam even quotes “Her,” like he posts something on Twitter saying her, doesn’t he? So it’s a nice lead-in to the release of this, but the voice doesn’t sound anything like Scarlett Johansson to me, like absolutely nowhere near her. So maybe my ears are a bit broken.

Justin Parisi: I don’t know. I feel like ethical becomes this thing where it’s like when weird nerds want to act out their fantasies. That’s where the ethical line gets drawn for me. And that is exactly what happened with Sam Altman. That’s the unethical part: I used your voice without your permission. Yeah. Then the legality comes in if you’re making money off of it, I think, where it’s like, okay, I’m cashing in on your voice, now it becomes an intellectual property type of thing, but it’s just stuff like that. And it’s a really weird gray area for a lot of this stuff because you don’t want to stifle innovation, but at the same time, I don’t want a bunch of people just taking other people’s work and using AI and then calling themselves artists.

Right. I think that’s just messed up.

Adam Gale: Yeah. I think this is going to be an interesting one to follow because it’s so high profile. We’ll get a flavor for what’s to come and what the outcome will be of it. Are they just both going to walk away or are they actually going to take this to court? And we’ll probably see off the back of it a sort of precedent set.

So it’s going to be quite interesting. But just to touch on something I think you said then, which was quite interesting. You were talking about innovation. And now that is something that’s quite close to my heart, because I read a lot of regulation and I speak to a lot of Americans and I’ve heard this before, people say to me, America innovates and the EU regulates, and that’s why we don’t have some of the larger companies or the fast moving technologies that America does.

And I think there is some truth in that statement in certain regards and I believe that the EU have taken this on board. Because within this EU Act, to stop us from falling behind, they are looking to enhance innovation. So rather than constrain everybody and say, look, you can’t do this, you can’t do that, they actually want to promote the use.

So they have these key areas to help us promote AI within the EU. They have sandboxes. So the Act calls for regulatory sandboxes to be created. So if you are developing an AI, I want to know that it complies with all the testing and all the reporting. I can check it in this sandbox, play with it and test it.

They’re also setting up a number of AI hubs as well, and digital innovation hubs for experimentation and that sort of thing. I believe that’s also similar to the executive order that’s been written recently as well; they’re somewhat in sync. But they argue that if you can abide by these rules, you will get access to a bigger market and users will trust what you’re positioning because it abides by those rules.

So just to circle back there, I think they’ve taken on board some of the previous criticisms of recent regulation being released, and they’ve really gone ahead and thought about it and tried to foster innovation rather than clamp down, which I hope we’ll see in the future.

Justin Parisi: Yeah, and just like there’s a fine line between ethical and legal, there’s also a fine line between responsible and irresponsible innovation.

Like, I don’t know if you’ve ever seen the Planet of the Apes movies.

Adam Gale: Yes, I have.

Justin Parisi: Right, so the latest iteration was based on a virus being created to try to cure Alzheimer’s, and as an example of irresponsible innovation, they decided to just go ahead and start testing it out without actually, you know, doing any research.

They just wanted to get to market as fast as possible. Right? So you have to take that into account. What is the greater good and the greater harm that’s going to come out of this innovation?

Adam Gale: Yes, you do. And I guess some of the biggest risks have some of the biggest payoffs, don’t they?

Justin Parisi: Yeah, but with the approach of a sandbox, having the playtime happen in a sequestered area where you’re not just deploying into production immediately, right?

I think that’s a responsible way to do it, because then you can start to see, oh, well, maybe this wasn’t such a great idea, because we’re human. We think things and we have to see it actually happen before we get it. You see this kind of thing in the AI space now with facial recognition technology not recognizing certain races, right?

It’s because we didn’t think about that. We’re like, Oh, I designed this for what I look like. I didn’t design this for what everyone else looks like. That’s kind of benign in terms of harm, but it is a real problem.

Adam Gale: It’s a phenomenal problem because I think that it’s almost impossible to create something with a blank slate.

Because everything we create is a reflection of ourselves. Because we know no different. We can’t comprehend or envisage anything that isn’t inflected by ourselves. So we’ll never ever create anything blank. It always reminds me of Iain M. Banks, who’s an author, a sci-fi author, I think he’s Scottish, one of the best sci-fi authors of our generation.

And he writes about AIs creating AIs, and the AIs are given a task to create a blank AI, completely uninfluenced by humans or anything like that. And that AI is unrecognizable. It won’t communicate. It won’t interact with us because it’s so obscure. It’s so blank. It doesn’t understand us and we don’t understand it.

So I guess what I’m trying to say there is whatever we create will be a reflection of ourselves. And we’ll see that in bias or unconscious bias, particularly in areas like you mentioned, such as recruiting. And if we’re going to recruit somebody and we’re looking at the best attributes to fill that position, those will be naturally biased by what we think is the best.

So I think it’s an interesting subject area.

Justin Parisi: And that’s where regulation is trying to, I think, address some of these things and these concerns before they actually become widespread. But the question now is, is it too late already? Is the toothpaste already out of the tube?

Adam Gale: Well, I don’t know. I think your guess is probably as good as mine in that one.

I do think, like I said earlier, a certain amount of this is already just happening, and I think it would probably be foolish to think that governments aren’t developing AI off radar, particularly in the defense sector, unregulated, because our counterparts will be doing it.

Yep.

And it’s just going to be a race.

So I do believe that’s absolutely happening. But consumer grade AI, the things that we play with on a day to day basis, I do believe that regulation will tame those, or at least push them in the right direction. But the other side of it, I don’t think, yeah, it’s not something I really want to think about, honestly, I think it’s quite scary.

Justin Parisi: Well, you’re on to something with governments already developing it, because they’ve already kind of started using it. And you think about election interference stuff that’s happened across the world, whether it’s in France, the US, or wherever, where these AI, ML models are grabbing voter data, they’re grabbing profiles of people, and they’re tailoring propaganda campaigns for these people. If you think about deepfakes and how far they’ve come and how few people can recognize the difference between deepfake Joe Biden or real Joe Biden, right, that becomes a real problem because now you’re fighting this battle against this person you don’t see, and that is very effective because you’re playing off of the deepest emotions of the person on the other end.

Adam Gale: And what scares me the most is, before we started this, I mentioned my mother, who’s 70 years old, and for 70, she’s incredibly tech savvy, she’s got an iPhone, she had Facebook, she had Instagram, all these things, and my mother was consuming a significant amount of propaganda through Facebook.

And I didn’t know this, because I’m not really on Facebook, and then I was playing with her phone, she asked me to fix something for her, and she was interacting with bots, and propaganda, and AI generated images, and when I queried her, she insisted they were genuine. She actually thought these were real news articles in the local area.

And it just dawned on me, I was like, there’s a whole subsection of society which is extremely vulnerable to this. And it’s in my own home. It’s my own mother. And so it kind of dawned on me that there were a lot of things that she’d been saying in the past which weren’t true. And I just brushed them aside, but.

Here it was. The evidence of it, it works. It generally works.

Justin Parisi: It does. Basically, with that generation, they have been raised on this notion that the media, the news, is infallible. It’s where you get your information: I have to watch the news, I have to watch the news. So when you’re getting news now, it’s interesting because that same generation is very distrustful of the mainstream news, the CNNs or whatever, but they are fast to latch on to whatever inherent biases they have.

If they find an article that supports a predisposed notion, they’re going to dive right into it and trust it wholeheartedly. It could have been written by some AI robot out in like Ukraine, right? It’s the reality of it. They have this disconnect where it’s, I trust the news, but I don’t trust it.

I only trust the news now that I like.

Adam Gale: Yes. And I think this is ironic considering this is the generation that raised us and told us, don’t trust everything you read on the internet. Or said be careful who you’re talking to on the internet when AOL chat rooms first came about. And here they are now on the internet trusting everybody and listening to everybody.

And I’m having to replay that advice back to them. Mm hmm. Luckily, fingers crossed, neither of my family, my mum or dad, have fallen prey to any big scams. But that is a genuine fear because they’re extremely sophisticated. I’ve heard of situations where, this is a great one, my wife, who works in finance, she was at a conference recently, and they showed a video of an AI generated CFO ringing up a company, this was a video call on Zoom, and authorizing multi-million pound transactions, and it was played, and they were like, that was fake.

It was a recreation of a real life scenario. I believe it was in China. And they conned a business out of money, and I looked at it and I was like, I couldn’t tell the difference. That would have got me. So I don’t know how we’re going to combat that sort of thing. I think in the EU AI Act, they talk a lot about using AI to stop disinformation, because you can fight AI with AI.

Justin Parisi: Yeah, AI can detect AI and that’s where Skynet happens, by the way.

Adam Gale: Yeah, that’s where Skynet happens. Oh, great. Yeah. Well, at least we’ll see.

Justin Parisi: Everybody thinks that we’re already in Skynet. No, no, no. It’s when the AI is combating the AI and it becomes a battle. And then they both realize. Wait a minute, why are we fighting each other? It’s the humans that are the problem.

Adam Gale: Yeah. You know, I think I’m getting tired of big changes in my lifetime. I wish everything would just settle down for a little bit, you know? Yeah. No more big changes, everybody. Let’s all just enjoy what we’ve got for a bit.

Justin Parisi: Speaking of big changes, the regulations in the EU, we’ve kind of hinted at what those are, so let’s get a little more into the weeds of what these are.

I know that you’ve got a passion for this, so tell me about these new AI EU regulations and what that means for people.

Adam Gale: So yeah, this is something that I’m really interested in, as I said, and the crux of it is that what they’re going to do is break down AI into these four categories, which I’ve already mentioned.

The unacceptable, banned, risky stuff. High risk, limited, and then minimal. And I think the one here which is of real interest to us, particularly NetApp, is the high risk AIs. So when you think of high risk AIs, think of anything that’s critical infrastructure. Things that support citizens, like rail infrastructure, transport, or education, for example, scoring exams.

Or safety components, so robotically assisted surgery. Employment, CV writing, or recruitment processes, which we’ve already mentioned. All these things, and there’s more, by the way, law enforcement, immigration, administration of justice, all these things are considered high risk because they can significantly affect you and significantly affect society.

So, this is where rules and guidelines are coming into place and you have to adhere to a set of standards. Now, you can definitely see, for example, with critical infrastructure, why this would be required. If I’ve got an AI assisted railway optimization AI, which is, I don’t know, picking the best paths or the best routes for my trains, and it makes a mistake and two trains collide, there needs to be accountability.

And there also needs to be a log of why it made that decision. Which human interacted and said, yes, that decision is the correct decision. Because there is a tendency here for AI bias. I mean, I am already doing this. I am already assuming that ChatGPT is correct. I just bash in my question, it spits it out, and then I’m like, yeah, that’s the answer, because it looks right, because I’ve got an emotional bias for yes.

Everything for me is a yes. And we can’t have that. We have to question everything. We have to check it. So, if you’ve got one of these AIs, or you’re developing one of these AIs, you are going to be required to do a few things. One is to step through a process of an adequate risk assessment, then you have to do a conformity test and register it as an AI within the EU database, and then declare it and it gets a kind of mark.

I won’t go into the details of all that because that’s not really interesting stuff. The bit that I find fascinating are the articles. So, like when we discussed DORA a little while ago, there’s these articles they’re all cunningly named like Record Keeping or Transparency and Provision of Information to Users or Human Oversight, and each one has got something really cool in it.

So let’s pick article 12 for example, Record Keeping. It says, and I’m just going to read a bit to you here, High risk AI systems shall be designed and developed with capabilities enabling automatic recording of events or logs. Now, what they want here is a level of traceability of the AI system. So, the use time, the date, the reference database, what input was used, who verified the results.

And if I go back to that example of train automation software or route planning, if there’s an accident, we need to know why it made that decision. Because logs tell us a story. So, there are requirements around that. And by the way, I think this is something that NetApp could and should be helping with.

This should be something we are discussing with our customers. If they are developing an AI, and it is creating logs, and it should be if it’s a high-risk one, those logs need to be kept, and depending on what industry it is, it might be applicable to HIPAA, for example, or maybe a finance regulation.

They need to be kept for a certain amount of time. Those logs fall under that too, and we need to make them immutable, and we need to keep them, and make sure no one tampers with them. And that feeds nicely into some of the tools and some of the systems we have within NetApp. So I absolutely think we should be at the forefront of this conversation, discussing with our customers NetApp’s capabilities when looking at AI, in particular high risk AI.
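
For readers who want to picture what that kind of Article 12 traceability record might look like, here is a minimal, illustrative Python sketch of an append-only, hash-chained decision log. The field names and file layout are assumptions, not an EU-mandated or NetApp schema; in practice you would land records like these on immutable/WORM storage rather than a plain file.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision_record(log_path, *, input_ref, decision, verified_by, prev_hash):
    """Append one traceability record (time, input, decision, human verifier),
    chained to the previous record's hash so later tampering is detectable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_ref": input_ref,      # e.g. a reference to the data the model saw
        "decision": decision,        # e.g. the route the optimizer chose
        "verified_by": verified_by,  # the human who signed off on the decision
        "prev_hash": prev_hash,      # hash of the previous record in the chain
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```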

Justin Parisi: So, you’ve tied it into the NetApp story. Let’s talk about that. Where do you see NetApp, not just ONTAP, but, you know, StorageGRID or Cloud Data Sense or BlueXP, where does that all fit into the AI regulation story?

Adam Gale: So, obviously, there’ll be a certain amount of regulation, I believe, upon ourselves, because as we develop our own AIs or use AIs within our own systems, there’ll be that, but I can’t really speak to that today.

That’s not really my sort of specialist area, but I can talk about it when it comes to our customers using our systems. So, I see us fitting in here brilliantly. For example, the best area, I think, is cyber security for us. Article 15 deals with accuracy, robustness, and cyber security. And it talks a lot about the appropriate level of accuracy, robustness, and cyber security and performing it consistently.

And also providing thorough technical redundancy solutions. So let’s go back to that example again of a train system. If it is affecting train schedules on a day-to-day basis, and all of a sudden it goes away, so our AI just crashes, or we lose a bit of hardware or something, we need to have a redundant system.

Now that could be pen and paper, someone who knows how to do this manually, but obviously we’d prefer another IT solution because it would be quicker. So, we need to be able to build a technically redundant solution. And this is where I think we fit in great. We could have, for example, a backup or a failsafe.

A secondary site, connected with something like MetroCluster, and we can have that ready to go should we have a failure. I think, cyber security wise, this is a conversation we should definitely be involved in. It goes one step further, actually. Now I’m going to quote again from the text here.

“The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate training set data,” or they call it data poisoning. Now this is cool. I think what they’ve done here is they’ve identified that AIs could be nefariously interfered with if you mess with their training set data.

Now that’s quite obvious, isn’t it? I mean, if I go mess with it and make my AI biased in some way, or maybe I put something sleeper in there so that if it sees a certain command, or if it’s in this particular scenario, it does something nefarious, we could all imagine a scenario there. I think this is where NetApp can help.

And what we can do here is create protected sets of data or immutable copies. Or at least we can do some logging to prove the training sets that we used. And say, that was our training set, this augmented the LLM or something along those lines. We prove it was operating at 99 percent efficiency and creating no bad scenarios at that point.

And if someone’s made a change subsequently to the training set data and re-ran it, we will then know. And we can point to where the issue started. And then we have all our other tools which we should be layering on top of there, such as MAV, Immutable Storage, FPolicy, DARE, and Ransomware Protection. But yeah, one of the key areas here is Article 15.

Justin Parisi: Yeah, there’s a lot in that portfolio with NetApp, with the snapshots and the ransomware protection, that really lends itself to a variety of use cases that involve protection of data.

And, you mentioned the data poisoning aspect of this, and that is very important because the AI training is only as good as the data, and if the data is bad, then the AI training is bad.

Adam Gale: Yeah, that’s correct. So, I think here, data poisoning, they’ve called it out, and it’s a good place to start a conversation about how you’re protecting that training set.

Another one which you might like, because we did just reference Terminator a little while ago, is Human Oversight, Article 14. And I’ll just jump straight to it. Basically, they want a great big red stop button, a failsafe. So, be able to intervene on the operation of a high risk AI system or interrupt the system through a stop, in quotes, button or similar procedure.

Now, I think NetApp should develop a great big red button on the front of our NetApp boxes, and you punch it, and it just turns everything off. Right? Or, no, we could have it in a glass case. Lift up the glass case, punch it, AI shuts off. Obviously I’m joking, we probably shouldn’t do that. It’s way too open to abuse. But, we need to be able to stop these things.

Justin Parisi: Yeah, and as reasonably as possible. And you mentioned earlier how AI is used to try to combat AI and NetApp’s doing a bit of that now with use of AI and things like the ransomware detection and protection.

Doing things in BlueXP. So, trying to implement that as much as possible because it is hard to keep up as an administrator when you have so many things going on externally, and if you’re trying to combat an AI, you’re not going to win.

Adam Gale: Absolutely. And I think we’re also just running into the human constraint here: there literally aren’t enough people we could employ to do this work for us.

There’s not enough skilled cyber security professionals out there to look at every box, to look at every transaction, to make sure me or you are doing the things we say we should do. So we absolutely have to use AI. It’s the only way to meet the requirement.

Justin Parisi: So I know that with other regulations in the EU, if you mess up your GDPR, if you violate that, then you’re facing heavy fines, right? DORA, I’m sure, has heavy fines as well. What are they doing for penalties if you violate these regulations?

Adam Gale: So, they’ve actually gone with a very similar mechanism to GDPR and to DORA. This seems to be a tried and tested mechanism here. They go with worldwide turnover. So, for example, it’s up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher, basically, for infringements on prohibited practices or non-compliance.

So just to really simplify that: €35 million if you do something that we tell you not to do. If you go ahead and develop Chucky, we’ll fine you 35 mil. And then it goes down into a different category for the lesser regulated AI areas, which is 3 percent or €15 million.

And that’s for things including infringements on the rules for general purpose AI models, those sorts of things. But as you mentioned GDPR, they’ll also be creating an area where you can report on this. With GDPR, one of the things that was incredibly successful was that if you felt that your data was being inappropriately used or collected, you could go to a commission or a portal somewhere and say, please look at this.

They will create the same thing for artificial intelligence. So if you feel like a company is abusing artificial intelligence or using your data in an inappropriate way with artificial intelligence, you can then report it. So there are mechanisms for that, which is really good.

Justin Parisi: That’s good. Because the regulations are only as good as the penalties; you have to feel real pain for violating a regulation or you’re just not going to comply.

Adam Gale: Absolutely. What’s interesting here as well is that 61 percent of Europeans look favorably on AI and robots. They just do. So I mean, most of us are quite favorable. I particularly am, because Copilot is awesome. ChatGPT has pretty much rewritten everything I’ve ever done for me. But 88 percent say that these technologies require careful management.

And being able to report on these things falls under the bracket of careful management. So this is wanted. This is wanted by the European Union and its citizens. We see the benefits, we also see the potential pitfalls, so we want some management of it. I’ll just quickly bring this up: if you operate in the financial sector, or if you operate in the healthcare sector, and you insert AI into those areas, you also need to take into consideration existing legislation. And a number of countries, I believe, particularly the U.S., are making their regulatory bodies report back on what they should be doing on AI. So, for example, your medical industry or your healthcare providers will be reporting back in and saying, this is what we see, here’s how we’re going to regulate it. So, this isn’t just the be all and end all. There is more to come in each individual sector.

Because each individual sector is going to use it a little bit differently. Actually, that leads me to a question I wanted to ask you. How do you see AI affecting your day to day? With your creation of things like this, and your job, do you use it? Are you making use of it?

Justin Parisi: I don’t use it a ton. I do use software to transcribe the podcast and help me edit it better. So I guess I wouldn’t say that’s AI. It’s more like speech to text. I think it probably feeds into an AI training model at some point to improve their speech to text operations. But as far as my day to day job, I have not clicked on the Rewrite with AI button yet.

Like, LinkedIn has that, and I was like, I don’t really need to do that. But I can see where it would come in handy. Like if I’m trying to figure out a script and AI can just do it for me easily. The one time I did use AI to try to look for an answer, it came back with a very confident response that was incorrect.

And I was like, you know what? No, if I didn’t know any better, I’d be like, Oh yeah, this is great. And I started reading it like, no, this is nonsense. This doesn’t make any sense. This isn’t how this works.

Adam Gale: That’s absolutely me. Whatever it spits out, I’m like, yep, that’s the truth. Let’s go with that.

Justin Parisi: And it sounds very convincing. I was like, oh wow, this is really cool. I didn’t know you could do this. And then I started looking at like, no, you can’t. Where did this come from?

Adam Gale: I always thought AI would be working in your editing suite. So if I’m speaking to you now and you’re like, I want to cut this bit, you could just look through your speech to text and say stuff like, go to that bit where Adam’s talking now, remove it.

And it would shunt everything up for you. And particularly if we had video as well to go with this. Like, I wonder if you could use it to just squash the video and do a nice cut between scenes or something.

Justin Parisi: Yeah, the software I use does stuff like that, but I don’t think it’s AI driven. It does recognize filler words. So I’ll kill those right away. So that we don’t have so much of that. And then I basically am going through manually and it’s text, so I can basically edit the text, but sometimes it’s not quite right with the transcription, so I have to be real careful not to cut out actual words, so I’m still listening to it.

I’m still cutting it, but it makes it a little easier to do overall, but it does not save me a ton of time.

Adam Gale: Okay, so I guess anyone listening to this, by the way, I did err a hell of a lot during this. This has all just been cut out, so this isn’t how I normally speak, I get it.

Justin Parisi: No, Adam was perfect. He didn’t make any mistakes.

Adam Gale: I did not, no. That was absolutely brilliant, yeah. Oh, fascinating. Well, if you do use AI, I’d love to hear about it. Like I said, I’ve been using a little bit of Copilot myself, and the ability to summarize emails, or email chains, that sort of thing, is just phenomenal.

I often get copied into email chains which are 10, 20, 30 emails long, and it just says, see below, and that frustrates me for a start, it’s like, don’t just say see below, tell me what you need me to do. And then you can get AI to scan it and just basically look for actions that involve you, or summarize the whole thing, and that saves so much time.

Or using it in a virtual meeting, where it records minutes and then summarizes the meeting for you. And what’s scary is, I’ve been sat in a two-hour meeting and it summarized it in one sentence, and I thought, oh my god, did we just sit there for two hours talking nonsense? And the AI just summarized it.

Justin Parisi: The meeting that could have been an email will be a meeting that could have been AI.

Adam Gale: Yeah. So this could save us a lot of time.

Justin Parisi: It could. I mean, I’m not poo pooing it. It’s just for me personally, I haven’t found a giant use case. I do have a buddy of mine that I do a podcast with on the side, and he uses the AI piece for logo design and that sort of thing. So like DALL-E or whatever, right. Otherwise, yeah, I haven’t implemented it a ton on my own.

Adam Gale: Yeah. Okay. Well, I look forward to it. I hope if you do it in the future, I’ll keep an eye out.

Justin Parisi: Yeah. Yeah. Okay. So if we wanted to find more information about the new EU AI regulations do you have a place we can go to do that?

Adam Gale: I do. I’ve developed a presentation, which I will provide you with. It’s got a bunch of frequently asked questions at the end where I go into great detail to explain some of the things I discussed. Or you can just contact me directly. Obviously, this is a publicly available act, too. It’s over 100 pages of size 12 text, so unless you’re a little bit boring like me, you’re probably not going to read it.

But you definitely can if that’s your interest and your persuasion. But otherwise contact me and I’ll share this with you, and I’ll send it to you if you want to attach it to the podcast.

Justin Parisi: All right, excellent. We will attach that contact information as well as the presentation to the blog that goes along with the podcast. Again, Adam, thanks so much for joining us and talking to us all about AI in the EU.

Adam Gale: Wonderful. Thank you so much for having me. It’s a real pleasure.
