25Minutes: Insights. Expertise. Impact.

11 - Kristina Jovanovska: Good AI, Bad AI, Cancel Culture, DEI & the AI Debate – What now?

Eliel Mulumba

Kristina is back on 25Minutes! In this episode, we talk about how AI has changed over the past three years and what makes AI “good” or “bad” in practice. Kristina shares how companies can use AI in a responsible way and why this is more important than ever.

We explore how artificial intelligence has evolved over the past three years: What makes AI helpful or harmful? How can companies use it responsibly? And what role does human decision-making still play in an AI-driven world? Kristina explains why responsible AI isn’t just a technical topic – it’s a leadership challenge that affects us all. 

We also dive into a topic that’s shaped many conversations in 2025: the backlash against Diversity, Equity & Inclusion (DEI) and the rise of so-called cancel culture. Kristina shares her personal view as a woman in cybersecurity and tech leadership – and why she believes diverse teams are not just “nice to have,” but essential for building better, safer solutions. 

 👉 Note: I was on the road while recording, so there’s a slight echo on my side. It may sound like I’m talking over Kristina in some moments – but I promise, I wasn’t! Thanks for your understanding.

Important note: The views and opinions expressed in this episode are solely those of the individuals involved and do not necessarily reflect those of any organization, employer or affiliation.

Our Guest: 

LinkedIn: https://at.linkedin.com/in/kristinajovanovska

25 Minutes Podcast

Hosted by: Eliel Mulumba

Audio editing & mastering: Michael Lauderez

Join the conversation on LinkedIn: www.linkedin.com/in/eliel-mulumba-133919147

Join the conversation on X: @ElielCyber


Such a pleasure to welcome you again on our show, 25 Minutes. We received amazing feedback from our last session, and many people reached out. But when you and I connected, we thought there were a couple of topics we hadn't really been able to address. So we again have the pleasure, and this time we want to focus a bit more on AI, and especially on the voluntary work that you're doing around it, because you're getting quite some insights into how different nations are approaching AI in a responsible way. I myself, of course, have a lot of exposure to AI, not only in my day-to-day work but also privately, exploring a lot of the tools that are out there. We have seen really amazing companies and tools showing up in the last couple of months.

It's now roughly two and a half to three years since the hype started with GPT. We now also have Llama and Gemini, and DeepSeek, the Chinese response to GPT. And I would be curious to know more about the Global Council that you are supporting right now.

What is it actually about? Why do we need a global council for AI, and what are you doing exactly?

So the Global Council for Responsible AI is really dedicated to providing an inclusive platform for all to discuss, influence and shape AI going forward. We want to play a significant role in defining ethical principles, contributing to governance frameworks and policies, and moving everything in the direction of responsible AI.

The whole organization is built around several key principles. One of them is fairness: avoiding bias and discrimination, because as humans we of course carry a lot of this, and we have spent many years trying to unlearn some bad habits.

But then AI came and, like looking at ourselves in a mirror, all the mistakes we had made over the years were suddenly in front of us all over again. So fairness is a very big topic, and it's really important that we get this right.

The second key principle is transparency. I think many people underestimate how decisions are made with artificial intelligence, especially people coming from areas of expertise that are not IT-related. They tend to be amazed by AI and what it can do; they find it so fascinating, and they trust it blindly.

And there are so many examples where companies have used AI for years to make quick decisions, even for recruiting and many other purposes, and never understood how discriminatory those decisions were, because they never fully understood how the system works.

So transparency is really a big thing that we stand for, one of our key principles. Another one is protecting personal data. In the Global Council we have people from many different countries, so our data is protected in different ways. And finally, AI needs to be secure and reliable.

Those are the key principles around which all of our activities and initiatives are built. This whole movement was started by Carmen Marsh, who is a really amazing woman. We met for the first time in London in February this year, and since then a lot of people have joined. To be more exact, we have at the moment 520 representatives from 71 countries. I find that amazing, because it shows how many people see the importance of doing this right. Nobody debates whether AI is something we should use and whether it's going to help us. Certainly, yes, and we have a lot of benefits from it. But we really need to do this right, and we really need to focus on the less shiny side.

We have to make it fair and transparent, our data needs to be used only where we want it to be used, and it needs to be really secure and reliable. That's very important.

You made the statement that we need to do this in the right way and that it needs to be fair and transparent. And I'm wondering, when it comes to this organization, the Global Council for Responsible AI, who are you actually targeting with the initiative? Are we talking about consumers, private persons like you and me?

Are we looking more at institutions or at business organizations? Are these recommendations, or are you also trying to look at AI providers and developers, to make sure they apply some of those principles you were just explaining?

So it's a combination of different types of initiatives. We have colleagues who are very much on the research side, so they contribute with additional research in this direction. They find and share examples for improvements, and examples of how it's done in a good or a bad way in particular places.

So that's more on the research side. Then there are other colleagues who are more on the industry side, and they showcase how it is implemented for them and what their learnings are. So it's mainly there to collect knowledge from all the different angles somebody could think of.

The whole thing is still at a stage where we have many different initiatives and we are trying to put all of the information together, to really have one place for all of the learnings, and then to see how we can make a difference and have impact, one step at a time.

So all of the things that you mentioned will be tackled, and hopefully we will really contribute to changing policies and giving better advice to companies on how to do this. But as I said, at the moment we are really working on understanding the situation in each and every country, and that is also the reason why we have local chapters. I'm part of the chapter in Switzerland, together with another amazing woman, Elsin Beran; we are president and vice president of the Swiss chapter. There are also people who are regionally active, and global ambassadors working across countries and continents to understand the situation and how we can best help.

So right now it's really about getting the stats together and understanding what the situation is everywhere, and then slowly making a difference. Personally, I'm hoping to make a difference in the German-speaking region. That is my target, and then we'll see where the journey takes us.

This sounds really exciting. You just mentioned the local chapters that you have in different countries, and I would be curious to understand a bit more about the global adoption of AI in different regions. When we look at the US, the new president is announcing billions of investments in AI. We have seen Europe also making billions of investments, especially France with a lot of commitments, investing millions, if not billions, into AI technology. But then there are other regions in the world, for example the African continent, where you don't hear that much about it.

I know a bit about one country, the Democratic Republic of Congo, where the situation is actually that 75% of the population doesn't even have access to the internet. So how can they really adopt AI holistically? I would be curious to understand what major differences you have seen so far in leveraging AI, not only for corporates but also within society.

Mm-hmm. Yes. From my working experience, I can tell that we have limited possibilities; our learning curve is not as steep as the one in the US. When I say "our", I mean in Europe. The reason for that is that we are, of course, a much more regulated environment. And while I'm really happy that I live in Europe and that my private data is not public property, at the same time I see that, in order to be protected, we need to know what we are dealing with. I remember sitting with my team, coming up with a lot of hypotheses that we wanted to try out, but we could only try very limited scenarios in very limited demo environments, and that doesn't replicate reality.

Then when I speak with my US colleagues and tell them, oh, we thought of this and we thought of that, they immediately say: well, this is going to work, this is not going to work, this is how it's done, this is what you can correlate. And then you realize that they have tried it out. For sure, I would say, they are better prepared for AI attacks, which makes me quite nervous.

In Europe we have so many brilliant engineers, such brilliant minds, and why should we be stopped from trying things out, learning, and making ourselves more cyber resilient? That is something that personally bothers me. I would say the best approach would be somewhere in the middle: a balanced approach where there is regulation and data cannot be misused, but our playground is bigger, so that we have active participation in this change.

We should be part of this change and transition, because at the moment we are passive watchers who just see the change. We are users, but we are not ready to be resilient against attacks that are AI-generated; we are not getting a chance to get ready. And that, for sure, makes me nervous as somebody who works in cyber.

I understand. You see the US as very far ahead when it comes to adoption, and also playing around with a bunch of use cases, with Europe somewhere near the forefront. What is your take on other regions in the world, different developing countries, I don't know, Brazil, Japan, China? What kind of insight do you have based on the exchange with your peers?

I really don't have data for this.

That's all good. We were just talking about AI, right? And when it comes to what is good, what is responsible, what the ethical guidelines are, we figured out some things: fairness, transparency, but also making sure that personal data is transferred and secured in the right way. I would like to know from you: what do you consider bad AI or good AI?

Bad AI or good AI.

In the context of the examples that you mentioned, I think you referred to a team having a few examples of adequate guidelines, saying: okay, this is good, this is not good. Maybe there's something in that context of good and bad AI. Just to give you some food for thought: if I were asked the same question, intuitively, bad AI would of course be leveraging it for things like port scanning of a corporation, using it in a more criminal way. And good AI could be something you leverage for society, maybe for people that have a language barrier, or something you use in the medical space.

These are just a few examples off the top of my head, but you could have different ones.

Hmm. Give me a second to think about it. Okay, I can put it a bit in a cyber context. Yeah.

Go.

Sounds good.

Do you wanna ask it again or should I just speak?

I can start by asking you again. So Kristina, when it comes to using AI, what would you consider bad AI or good AI?

Yes. When speaking about AI, especially in the context of cybersecurity, it's really a double agent. On the defense side is obviously the positive side, the good AI, right? We can do a lot of advanced threat detection; we can identify threats in real time and mitigate them much faster than we ever could.

The automated responses we can do are also much more advanced and much more autonomous, because the data available is much bigger, and we can be more comfortable with the decisions. That has massively changed, and that has helped us on the positive side of AI. And the final thing is that prediction is now on a different level.

Predicting potential vulnerabilities, predicting potential attacks, and noticing anomalies happens way faster than we used to be able to. That's obviously something that has reached a whole new level in cybersecurity thanks to AI. Then there's the offense side, and how AI has brought us some challenges.

I mentioned the AI-driven attacks: automated phishing and deepfake creation have also reached whole new levels. I find this especially difficult because I remember, in the past, regarding phishing emails, we were training colleagues from other departments to watch out for bad grammar, for weird language, for emails addressing a general audience.

None of this holds any longer, right? The language is perfect, the emails are very personalized, because it's just so easy and so fast to do that. So the only way to train people is to tell them: please be paranoid and always think that maybe something is wrong.

It has gotten much more difficult to distinguish, also for deepfakes. All of us who work in very big organizations often get updates on how the year has been through videos, either of our CEO or of somebody who is responsible at a global level, and we are used to getting a video of them explaining the news to us.

Of course, those are also people you can find on the internet easily; you can surely find a podcast with them and get their voice, or you can find other small videos, and you can easily create a deepfake. So for sure this type of bad AI has shaken things up.

Another thing is adaptive malware. There was a huge POC with malware that, each time it was executed, was a completely different, mutated version. This broke the traditional way we approach security, where we see something, we search for it to find it and address it.

That was no longer possible, right? We could never find it a second time in the same fashion and in the same state, so it was impossible to catch. That is also a development of, as you mentioned, bad AI in a way. And finally, exploitation of AI systems. This is something that organizations rarely treat as a high priority. We have clients that feel comfortable relying on AI, and I often raise the question: okay, what if your AI gets hacked? Because that's also an engine, you know, and it can also go wrong. And then you often hear: oh well, we will figure it out, or we will then go back to manual.

But for that you also need a very clear plan, just like for everything else where we make a clear plan for what happens if something is breached. Not having this for the AI system is something that can bring a lot of challenges, and it's a bad way of using, or rather misusing, AI.

I'm definitely with you on this. I think potential solutions I have also seen being discussed are around creating awareness, critical thinking and media literacy. By educating the public on the dangers of deepfakes and how to spot them, we can reduce the impact of such malicious campaigns.

But I believe there's still a way ahead of us. The last thing I would like to discuss with you is DE&I. There have been tremendous changes introduced by global organizations, following government changes we have seen in the world. When I look around myself, as a cyber professional who spends most of his time in manufacturing and industrial environments, with SCADA and DCS systems and IoT devices running production 24/7, in many of those meetings and workshops I don't see that much diversity. It's just a fact in this industry, or in cybersecurity, particularly on the industrial side. Maybe I'm lucky or unlucky, but I would like to know your take on this entire DE&I constellation that has happened globally.

Yeah, well, diversity in the cybersecurity space is a topic very close to my heart. I have done a lot of work with Women4Cyber to change this, to attract women to cybersecurity and also to keep them here. So obviously this topic touches me a lot on a personal level. I am very happy that there are companies that still stand by these values and keep them as a priority.

And I'm very disappointed in companies who have decided to start dropping this, because this is not something we should do because it used to be popular; we should do it because it's the right thing to do. I strongly believe that having different perspectives in the room is the only way to succeed.

I strongly believe that diverse teams are very good for business and extremely good for cybersecurity. In my own diverse team I have a lot of people from many different countries, both men and women, and I can see that they come up with different ideas.

They bring different points to the table, and I really enjoy it. I also want to say, because the organization is called Women4Cyber, maybe many people think that, oh, we are aiming for more women-dominated teams, and I want to say that I don't find that good either.

A really diverse team is the way to go. When we speak about attacks and incidents, I find it especially fascinating how they come up with such different ideas. I have daily proof that it's really a good combination to have, and I find it the winning combination.

So I hope that going forward this will be picked up again as a priority by everybody, and that we'll have inclusive teams, because it's really the right thing to do.

Thank you very much, Kristina. It was such a pleasure to welcome you again on 25 Minutes.

Thank you very much.
