
25Minutes: Insights. Expertise. Impact.
In just 25 minutes, I deliver concise and thought-provoking conversations with top minds in technology, cybersecurity, business, culture and entrepreneurship. Whether you're a technologist, an executive, a culture enthusiast or someone passionate about growth, each episode explores the trends, strategies and ideas that shape success.
For those with limited time but unlimited ambition, 25Minutes offers actionable insights and fresh perspectives where they matter most. Your time is valuable. Your 25 minutes. Your advantage.
Contact: eliel.mulumba25@gmail.com
5 - Adrian Ott: AI, AI, AI - Reinforcement learning, DeepSeek, Gemini, China, future of work, EU AI Act, Switzerland
After our first conversation with Adrian, a forensic expert with over 18 years of experience in financial services, where we explored how AI is transforming industries, automating processes and reshaping business strategies, we knew we had to continue the discussion. In that episode, we talked about the most impactful AI use cases, the biggest automation opportunities, and the mistakes companies make when implementing AI. Now, we're taking things a step further.
In this second episode, we dive deep into some of the most critical and complex AI topics that will define the future. We break down Large Language Models (LLMs): how they work, how they differ and why they are revolutionizing industries. We explore the role of reinforcement learning, its unique challenges, and how it enhances AI's decision-making. We also discuss the concept of Chain of Thought reasoning and why it's a game-changer in AI's ability to process information more effectively.
One of the biggest shake-ups in the AI world has been DeepSeek, and we examine how it has sent shockwaves through the global AI landscape. What does it mean for businesses, and how does it compare to existing models? Beyond the technology, we tackle one of the most pressing business concerns: How will AI redefine the workforce? Which jobs will change, what new skills will be required and how can leaders prepare for the shift? Finally, we take a close look at AI regulation in Switzerland, debating whether regulatory frameworks are an innovation killer or a necessary safeguard for responsible AI development.
If you found our last conversation insightful, you won’t want to miss this one. We’re going even deeper into AI’s evolution and its impact on industries, businesses and the workforce.
Our Guest:
LinkedIn: https://www.linkedin.com/in/adrian-ott-517759143/
https://www.moneycab.com/it/adrian-ott-chief-ai-officer-ey-schweiz-im-interview/
25 Minutes Podcast
Hosted by: Eliel Mulumba
Audio editing & mastering: Michael Lauderez
Join the conversation on LinkedIn: www.linkedin.com/in/eliel-mulumba-133919147
Hi Adrian, welcome again to 25 Minutes. It's such a pleasure to be here. We welcome you again as a guest on our show. We had a great discussion last time about how you actually entered the AI game and what recommendations you would give to your younger Adrian, but also to others who want to embark on the AI journey today. This time, we would like to discuss a bit more the technical aspects around AI, but also ethics and compliance.
And to start with, let's really try to break it down to the basics. When we talk about AI, we often hear about AI models, language models like GPT, Gemini, Llama, Claude, that are evolving rapidly. I think there's not a month where we don't see a new major release, and I would like to understand from you: what are actually the key differences between them, or are there actually differences between them?
Yeah, hello. I think that's an interesting question. How I would maybe segment them a little bit: on one hand, you have the closed-source models, like you mentioned, Gemini, Claude, ChatGPT, so the OpenAI models. These are models that are trained, but the only way to use them is basically to go to the official website, or you have an API key that you can connect your own solution to; you ask questions and you receive the answers. But it's very intransparent what data they were trained on. Often the training process itself is also not that clear: you don't know how the reinforcement learning from humans was done, so how the model was guided to approach certain questions. That's, for example, something that is always discussed in politics, because these large language models are not 100 percent smart and able to answer every question without some human intervention that actually guided them in how they should approach certain questions.
And then you mentioned Llama, and also DeepSeek, for example, or Mistral. These are open-source models. Open source may actually be the wrong word; it's open weights. These are models that you can download on your computer, and if you have a powerful computer, you can further train them on your own data. You can also change the weights of the models, and you can use them without having to go through a provider.
So you can ask your questions there, you can fiddle around with the model yourself, train it how to answer certain questions, and even jailbreak it if you feel it is too restrictive in its answers. So these are the open-source models. And every model has different strengths.
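To make the two access paths concrete, here is a minimal sketch in Python (assuming the openai and transformers packages; the model names are illustrative examples, not recommendations): a closed-source model is only reachable through the provider's API, while an open-weights model can be downloaded and run on your own machine.

```python
# Closed-source model: reachable only through the provider's API.
from openai import OpenAI

client = OpenAI()  # requires an API key from the provider
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "What is reinforcement learning?"}],
)
print(resp.choices[0].message.content)

# Open-weights model: download the weights and run them locally;
# you can fine-tune them or change the weights, with no provider in the loop.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"  # example open-weights model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tok("What is reinforcement learning?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100)
print(tok.decode(out[0], skip_special_tokens=True))
```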
All of them represent a very large training effort. They build a base model, which is very costly: you basically have data that is scraped from the internet, and a lot of digital books have been used, but the providers of these models are often not very transparent about what they trained them on. You can imagine that everything that is somehow out in the open was used to train their models. And they vary a little bit in their size and in their smartness, in logical reasoning: how smart are they when answering a more complex question? There, I think OpenAI with its latest model, o3 Pro, is one of the smartest models.
Today Grok 3 came out, this being the model from Elon Musk, and on the benchmarks it is now the new standard for the smartest model: it's bigger, it has been trained for longer and with more compute than, for example, Gemini, the Google model. So there are differences, but they all try to achieve the same thing:
understand the text input, which is often a question, and give you the best output.
I mean, you just talked about some players, and one of those players is new and has recently entered, or rather crashed, the AI party: I'm speaking here about DeepSeek. We know the OpenAIs of the world, we know the Microsofts, we know the Googles, but now we have a Chinese company, and with it a geopolitical angle, that has entered the scene.
I read a few white papers from very smart researchers, and I would be curious to hear your take on DeepSeek. Nvidia has been losing a lot of market value in recent weeks, and there are people saying the market is overpriced, overhyped, overrated with regard to certain costs.
So now this Chinese company throws something out there, puts it on open source, provides full transparency, and claims they did it for much less cost than what we know from the other big players. I would be curious to hear your take on that.
Yeah, I mean, I agree, it was definitely a shocker, because if you believe the team there, they used much, much less, let's say, training power to build their model. It's a new foundational model, and I agree it is, from a smartness level, almost up there with the largest models you can get,
for example GPT-4o or o3, the Gemini models, or Claude. And everybody was shocked because it was always said, and I think Sam Altman said that at a conference two years ago: if you want to build a new foundational model that should compete with us, just forget about it, you will never succeed.
It's just too expensive; you need a lot of infrastructure to train these models. And DeepSeek made some changes in the training process and in the whole pipeline, in how information was digested and how the model was built. They say they can do it much, much cheaper and reach, in the end, the same intelligence as some of the best models. And it's true.
Huh. Uh huh.
They actually used much, much fewer GPU hours and training hours than the big models. However, they most likely also used the smartest models that exist to help train their model. There is some reinforcement learning where you test the model with certain questions, rate the answers and improve them; this is often done by humans as well before you release a model. So they used some of the smartest models, but their training process led to a model that is just very smart, and that is obviously a shocker. To be honest, I think it will always be cheaper to build a model that is almost as good as the smartest models.
Mm-hmm
But to create a really new model that surpasses all the existing knowledge, I think that's where most of the training effort will be required in the future. And here we talk about self-learning AI models that test hypotheses, that try to update their own weights based on what they learn, that go back to their fundamental information and try to generate new meaning.
I think this is where most of the time and the energy costs will go, and not necessarily into building a smaller smart model.
Mm. Mm-hmm
It was a shock. And it was also a shock because, I mean, obviously you can say: do you want to use ChatGPT and send your data there, or do you want to use DeepSeek and send your data to China? But they also made it open source.
That means you can download it on your computer at home, or on your company server, and the model is completely functional offline. So no data needs to leave; you can have a completely offline instance of a smart model.
Mm-hmm
This was a bit of a shocker.
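The "test with certain questions, rate the answer, improve the answer" loop Adrian mentions, using a stronger existing model as the judge, can be sketched roughly as follows (the judging prompt, model name and scoring scale are illustrative assumptions, not DeepSeek's actual pipeline):

```python
from openai import OpenAI

client = OpenAI()

def judge_answer(question: str, answer: str) -> int:
    """Ask a strong 'teacher' model to rate a candidate answer from 1 to 10."""
    prompt = (
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        "Rate the correctness and helpfulness of the answer from 1 to 10. "
        "Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # stands in for 'one of the smartest existing models'
        messages=[{"role": "user", "content": prompt}],
    )
    return int(resp.choices[0].message.content.strip())

# Candidate answers that score highly can be kept as training data for the
# student model; low-scoring ones are regenerated or discarded.
```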
Mm-hmm. And I mean, you have been talking about reinforcement learning, where human feedback is also necessary, and about training and optimization. I was wondering if you could break down a bit the concepts of pre-training versus post-training, and why both are actually crucial.
So in pre-training, you just ingest data from the internet, from books and so on. There is also a process for how you curate the data that you have collected from the internet and from different books, and then you ingest it into a new model.
So you train it, and that already is a very, I would say, compute-intensive process. But what you get from that is basically nothing else than a token generator, a word generator. If you took a base model and asked it the question "What is the best audit firm in the world?", the answer would not be "EY" as the right answer; it would be "What is the best audit company in the US?". It would just follow up with a new question, because it hasn't learned yet what a question is and what an answer is. It will just generate the words that are most likely to come after the words that have already been written down.
You can also see that it's not able to answer your questions. That's why you then go into the post-training stage, where you start teaching it what a question is and what an appropriate answer to that question would be. You start teaching it to deal with question and answer.
You also teach it how to refuse. And in the latest models, it also learns how to think: not just give an answer, but have an internal monologue where it actually thinks about the question before it starts answering. This is usually done in post-training, and that is also where there is some human influence, because somebody needs to tell the model what good answers to questions are. But by no means does that mean that every question you ask had to be answered by a human: you just teach it what a question and an answer are, and it can abstract from that learning to completely new questions as well.
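As a rough illustration of the difference (a minimal sketch using the Hugging Face transformers pipeline; the model choices are examples, and small models will give rough results): a base model only continues text with likely next tokens, while an instruction-tuned model has been post-trained on question/answer pairs and treats the input as a question.

```python
from transformers import pipeline

# Base model: a pure token generator; it continues the prompt with the
# most likely next words, which may well be another question.
base = pipeline("text-generation", model="gpt2")
print(base("What is the best audit firm in the world?", max_new_tokens=20))

# Instruction-tuned model: post-trained on question/answer dialogues,
# so it treats the input as a question and produces an answer.
chat = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
messages = [{"role": "user", "content": "What is the best audit firm in the world?"}]
print(chat(messages, max_new_tokens=60))
```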
And I mean, when it comes to those advanced AI techniques, have you tried DeepSeek already?
It is. So we use DeepSeek hosted in Microsoft Azure; we are not using the China version, but that's just from a company standpoint. We have a prototype that wants to tackle complex legal questions. Something that already works quite well in our solution: you ask a legal question, and we have a database in the background with all the legal documents, for example for Switzerland. Then you need a research agent that will actually look up the relevant bits and pieces from all these legal documents that are needed to answer the question. So you retrieve just the right information. And in the last step, you need to tell the model: okay, here are all the bits and pieces that are relevant to that question, this was the original question, and now think step by step, what is the final answer? We tested this against the OpenAI models and against Gemini, and DeepSeek has the best accuracy for that last step of coming up with the right answer. We tested it in tax, in legal, and in different areas.
And it's great. It's great.
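The retrieval-plus-reasoning flow described here can be sketched roughly like this (everything is illustrative: the toy document list, the keyword retrieval, the prompt wording and the model name; a real system would use embeddings and a vector store, and could swap in DeepSeek or Gemini behind the same interface):

```python
from openai import OpenAI

# Toy corpus standing in for the legal-document database.
LEGAL_DOCS = [
    "Art. 321a CO: Employees must perform their work with due care.",
    "Art. 336 CO: Notice of termination is abusive in certain listed cases.",
]

def retrieve_passages(question: str, top_k: int = 2) -> list[str]:
    # Hypothetical 'research agent': rank documents by keyword overlap.
    words = set(question.lower().split())
    ranked = sorted(LEGAL_DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:top_k]

def answer_legal_question(question: str) -> str:
    context = "\n".join(retrieve_passages(question))
    prompt = (
        f"Relevant excerpts from the legal documents:\n{context}\n\n"
        f"Original question: {question}\n"
        "Now think step by step, then give the final answer."
    )
    client = OpenAI()  # any chat-completions endpoint works the same way
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the team compared DeepSeek, OpenAI, Gemini
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```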
Indeed, "great" is actually the word that I had in mind when I played around a bit with DeepSeek. And something that was really... I don't find the words to describe this... is the chain-of-thought reasoning R1 has. I would like to understand: how does this actually improve AI's problem-solving abilities in the future?
So chain of thought is very important, and it's still an area of research at the moment. But you have to imagine: if you ask an early version of ChatGPT a question, it will give you the most likely answer based on all the questions and all the texts it has learned from the internet, and you can trick it very easily. For example, there are riddles that are quite famous and were on the internet in many different variations. You ask it such a riddle, but you change one key fact that is maybe a bit abstract, and it will still answer and ignore that one fact, because the most likely answer is actually the riddle that it knows. It learned the pattern of the question, and it will provide the most likely answer based on the learned pattern. Now, when you ask it to think step by step and critically, to look at the question again and rule out all the possible pitfalls, then it starts generating a lot of text: "Oh, there is a change that I have never seen before,
I need to make sure that I incorporate that into the answer." So it will actually change the way it then drafts the answer, because it realized during its monologue that something in the question is not as it learned it from the pattern. Maybe an easier example is in mathematics. There is a famous study: you give ChatGPT different mathematical tasks that have one right answer, and you ask it to provide just the final answer right away. Researchers do that because when you only have one number or one answer, it's easier to automatically rate the results. Then it got only maybe 20 percent of the questions right. But when you ask the same mathematical question and ask it to think step by step and really calculate it step by step, it got like 80 percent of the questions right. And it's like we humans do: if I ask you what's five times five, you don't need to calculate, you just say 25. But if I ask you what's 24 times 12, you actually have to think in your brain and start calculating, and by going through your monologue in your internal brain, you will maybe come up with the right answer. Large language models work the same: they need to think first and digest the question before being able to answer.
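The direct-answer versus step-by-step contrast from the maths example can be tried with two prompts (a minimal sketch; the model name is illustrative, and the 20 versus 80 percent figures come from the study Adrian cites, not from this snippet):

```python
from openai import OpenAI

client = OpenAI()
QUESTION = "What is 24 times 12?"

def ask(instruction: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": f"{QUESTION}\n{instruction}"}],
    )
    return resp.choices[0].message.content

# Direct answer: the model must produce the result in one shot.
print(ask("Provide only the final answer, nothing else."))

# Chain of thought: the model writes out intermediate steps first,
# which tends to improve accuracy on multi-step arithmetic.
print(ask("Think step by step and calculate it step by step, then give the final answer."))
```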
You have just been talking about an example from a working environment, and there are many people asking questions around AI and the future of work. Many people actually fear AI will replace jobs. What's your take on this? Do you think that some roles will disappear, and which new ones will actually emerge?
I think... I mean, definitely there will be changes in the roles, and maybe also some jobs that disappear. Everything that has a lot to do with finding the right information, curating information and knowledge: I think this is something that current implementations of AI are already quite strong at, so research on certain information-based tasks. I think that's something that will change. It becomes more about creating the right information, making sure that the models have access to the right information, that it is readable and that it doesn't drift away from the core information. I would not be that afraid, because I see parallels to the industrialization. Back then you could ask the same: machines are taking over human jobs. And back then it was very hard to imagine what new jobs would be created; you only saw what would be gone, and it was very hard to imagine what the future would be. I feel it may be similar today. It is maybe a bit broader and a bit more extensive, but I feel humans will find a way to keep themselves busy. There will be changes, obviously, and there are also people who say we need a universal income and things like that. But I'm more in the direction of: let's go step by step. Definitely stay curious, understand AI and how you can benefit from it, and don't just blindly look away to the other side.
And what role should governments and organizations play in regulating AI, while also maintaining innovation?
Um, so what, sorry, what team or what role?
Yeah, no worries. The question was: what role should governments actually play in regulating AI? So trying to set some boundaries, but also making sure that organizations and nations still have the possibility and room for innovation.
I think inside the company, you usually have a risk department, you have a legal department, you have data protection; maybe it's also the same person or the same team sometimes,
Mm hmm.
and you definitely need to see what data you can use for AI. For example, from a legal perspective, you cannot just use all the client information you have.
You maybe want to check your contracts first, also when you build your use cases. And you also need to think through what the risks are when something goes wrong, because the responsibility still lies with the people and with the company itself.
Mm hmm.
You cannot blindly trust it; you need to embed it into the processes within your company. And I think it's a C-level task to ensure that,
Mm hmm.
and not just leave it to some teams. You need to be very aware of what you are going to use AI for and what risks could occur when something goes wrong. And you mentioned, I think, also the country level.
Exactly. We see AI acts coming for organizations, and standards and regulations. Should governments be more proactive there and try to set boundaries, or should we give companies and startups sufficient freedom to evolve in this field?
Yeah, I think the EU AI Act is a big topic at the moment for our clients, and we try to help them set up a structure so they can make sure they comply with the EU AI Act without actually limiting innovation, because there's a lot of room for interpretation, especially when you go into the details and when not everybody really understands AI.
Mhm.
So you get some bad ideas of what you can't do, and often I see companies in shock because they just don't know how to deal with it and who will approve what. With regulation, I think you need to be very, very careful about what you want to regulate from a country perspective. We have an AI conference where we had the creator of the EU AI Act
Mhm.
share his initial thoughts. So it was really the person who drafted it. And his story is that he actually doesn't work with the EU anymore, because the EU AI Act is much too strict and much too broad.
Mhm.
He is afraid, and I share that a little bit: if you restrict your country from creating new foundational models or cutting-edge AI technology,
Mhm.
you are always dependent on the bigger ones that don't have this restriction, which is maybe China and the US. For Switzerland, I would hope that we don't restrict what we can do too much.
I think some regulations maybe are necessary to protect data,
Mhm. Mhm.
and the right of individuals to have certain information forgotten. But beyond that, we should not limit ourselves. We can keep pace with the big players, because in Switzerland we have the know-how and we also have the infrastructure to do some very interesting projects.
Mhm. I mean, thank you very much for that, Adrian. It was such a pleasure again to listen to the current AI topics you're focusing on, but also to reflect on the journey we discussed last time. And I was wondering if there is actually a personal anecdote from your AI journey that you would like to share with us.
Oh, um, yeah, there is. I think what stuck with me the most is: when you try to have these large language models, this intelligence, solve certain tasks, it can be quite funny when it goes wrong. In the early days, we started creating different AI agents. For example, we had a legal agent, a risk agent, and a business agent.
Mhm.
We wanted these different AI agents to discuss the wording of an email that goes out to a client, and they needed to make sure it adheres to all the different legal standards, risk standards and so on.
Mhm.
They just started fighting, and it never stopped; they had a very intense dialogue, and we had to stop it after a few minutes because they just didn't come to a conclusion. In the end, they were just fighting against each other.
Mhm.
It reminded me of what you sometimes see in companies; it was almost a little bit too human. And we also learned there: okay, with AI you need to know how to use it and how to implement it so that it actually fulfills its purpose.
It's by no means just something you roll out and it will be magic. So you need to invest some engineering into it.
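One piece of that engineering is an explicit stopping rule, which is exactly what the fighting agents were missing. A minimal sketch (the roles, prompts, approval convention and model name are illustrative assumptions, not the actual system):

```python
from openai import OpenAI

client = OpenAI()
ROLES = ["legal reviewer", "risk reviewer", "business reviewer"]
MAX_TURNS = 6  # hard cap so the discussion cannot run forever

draft = "Dear client, please find attached our updated assessment."
for turn in range(MAX_TURNS):
    role = ROLES[turn % len(ROLES)]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"You are the {role} for an email going out to a client.\n"
                f"Current draft:\n{draft}\n"
                "If the draft is acceptable, reply with exactly APPROVED. "
                "Otherwise reply with an improved draft only."
            ),
        }],
    )
    reply = resp.choices[0].message.content.strip()
    if reply == "APPROVED":
        break  # consensus reached; without this the agents can argue indefinitely
    draft = reply
```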
Thank you very much for sharing all those insights again.