
106 Maximilian Gahntz: AI is not (just) tech!

Update: 2025-01-15

Description




We sat down with Maximilian Gahntz, the Mozilla Foundation’s AI Policy Lead, who works on questions of AI regulation and governance around the world. Previously, he also led work on data governance and platform accountability.





Before he joined Mozilla, he was a fellow of the Mercator Fellowship on International Affairs, working on the EU’s AI Act at the European Commission. 





We talked about current developments in AI policy, the problems with media representations and political interpretations of AI issues, and more.





Transcript of the episode:










00:00:11 Domen Savič  / Citizen D 





It’s the 9th of January 2025, but you’re listening to this episode of Citizen D Podcast on the 15th of January of the same year. With us today is Maximilian Gahntz from the Mozilla Foundation; he’s the AI Policy Lead, working on questions around the regulation and governance of AI around the world.





Previously, he has also led work around data governance and platform accountability, and before that he was a fellow of the Mercator Fellowship on International Affairs, working on the EU’s AI Act at the European Commission. Happy New Year, welcome, and so good of you to drop by.





00:00:48 Maximilian Gahntz / Mozilla Foundation 





Happy New Year, thanks so much for having me.





00:00:51 Domen Savič  / Citizen D 





Well, to jump right into the gist of it, we’re obviously going to talk about artificial intelligence, the AI Act, the regulatory frameworks, and maybe as an opening question: how will the EU AI Act help prevent, or fail to prevent, the big tech capture of the AI field that we are facing today?





00:01:19 Maximilian Gahntz / Mozilla Foundation 





It’s a good question, and I think it’s important to also take a step back and think about what the AI Act actually is and what it isn’t, because in the end it’s a product safety law, not competition law. So the primary intention of the legislators here was never to make sure that there is no concentration of power or to, you know, curb the power of big tech.





The intention was essentially to make AI products in the EU safer and prevent harm first and foremost, so I think that’s just important context here, because obviously there are some implications. There are some tiered rules that might apply to bigger companies but not to smaller companies, and there are some exceptions, for example when it comes to open source.





But ultimately, it’s a product safety law, so if we want to talk about concentration of power and market power here, we need to look at this in a broader context, because there are other levers we can pull, and that regulators in the EU, in the UK, and in the US are trying to pull as well. And in general, when it comes to the success or failure of the AI Act, I think it’s also important to say that it’s too early to know, really, because there are so many vague provisions in the AI Act and so much implementation work to be done: standardization, secondary legislation, and a big code of practice process currently underway for developers of large language models.





So, in the end, what the rules the AI Act puts forward will look like, we’ll only know in a few months, a few years, and then how these rules are enforced is actually the long game here. So it’s too soon to judge how well the Act is working out at this point… we’ll actually have to wait a few years and then also, as activists and civil society organizations, be very vigilant about how these rules are implemented, how oversight authorities are enforcing the law, and then what companies are doing in the market.





00:03:39 Domen Savič  / Citizen D 





So currently, the broad AI landscape, if you look at it through the eyes of, let’s say, a regular digital user, looks like you have a couple of big companies battling for market dominance, and then you have some localized or individual tools that are being used by smaller actors.





Would you say that assessment of the AI landscape is correct or are there other players who we as activists should pay attention to when we’re looking at this situation? 





00:04:31 Maximilian Gahntz / Mozilla Foundation 





Yeah, I think naturally a lot of the attention right now is on the big AI models and their providers. If you look at what the press is writing about and what people are talking about, it’s ChatGPT and Gemini, and what those tools can and cannot do and how they’re implemented into different services. But I think it’s important to keep in mind that, sorry, this is the tip of the iceberg of the AI value chain, which is pretty long and complex. If we’re talking about the distribution of power and concentration here, it’s important to go down, or go up, the value chain and think about who holds dominance in these different markets that are really important for AI.





Because in the end, obviously, what end users, consumers, people are going to be using or affected by is the applications and how these tools are put into practice and deployed, whether it’s in a consumer setting or even by governments. So naturally a lot of the attention is there.





But if you think about what you need to build these applications, there’s a lot that goes in there. You have the big model providers, Google, OpenAI, Anthropic, and they train large language models, if we’re talking about generative AI here, using a ton of data and a ton of computing and raw processing power. And to do that, they need a lot of data. So, who has the benefit here?





The companies that already collect a lot of data from other services, for example, or that have the capacity to enter into licensing deals with publishers or other rights holders, or that have the capacity to build a web crawler that can crawl large parts of the web, download that data, and feed it into the model.





The other part is computing power, and there, if you look at who the big cloud providers are that provide that computing power, it’s roughly the same companies, or many of the same companies, that you also see being very active in the AI industry. It’s AWS from Amazon, it’s Google… so they actually have a lot of market power in the cloud market as well. Then there are the chip providers, where, when it comes to specialized AI chips, NVIDIA controls large parts of the market; I think it’s somewhere in the 80 to 90% market share range.





So, the further up you go, you still see market concentration, and you can’t really look at any one of those levels in isolation… you do need to look at each one of those in combination, because only that way will you actually see, if you want to put it in a poignant way, who holds power over AI.





00:07:48 Domen Savič  / Citizen D 





You previously worked on data governance and platform regulation… How does the area of AI, and maybe more specifically generative AI, differ from the situation we’ve had with digital platforms, with social media, with trying to regulate them for the past few years? Listening to you naming all of these actors, it would seem to me that it’s just a little bit of history repeating, right?





00:08:28 Maximilian Gahntz / Mozilla Foundation 





I think in parts yes, and in parts no. Obviously the problem space is still somewhat different, because, for example, if you’re talking about safety and bias in AI, it’s a bit different with many AI products compared to online platforms, social media platforms, which have their own problems of bias that’s deeply enmeshed in content moderation and recommendation algorithms.





And obviously this is a very salient topic right now, again, because two days ago Meta announced that they’re going to change their content moderation practices. So I think some of it is similar, but obviously, if we’re talking about content moderation, there’s been a lot of debate in the past 10 years around the limits and, you know, liberties of free expression and the value of preserving free expression, and that’s been politicized in different ways.





It’s probably something that will come up in the AI debate as well at some point, and we’ve already had some discussion around, like, do chatbots display political biases, for example, but that’s not the core of the issue.





So I think there are definitely lessons to be learned, and we should look at the different experiences we’ve had in different digital policy and other policy fields in the past ten, twenty, thirty years to inform new debates on AI. But it’s not going to be, you know, the same thing all over again, because there are just different equities, different changing political circumstances, obviously.





What the European Commission and the new US government and the new UK government are going to be talking about in the coming years is going to be somewhat different from how political focus areas have been set in past years.






