(Un)trustworthy AI
Panel
Friday, 07 Mar 2025
“The danger isn’t that AI destroys us. It’s that it drives us insane.” – Jaron Lanier
Artificial intelligence is not a neutral advancement but a battleground. It is not developed simply to make our lives easier; it is shaped by the world's most powerful corporations, which often pursue interests that conflict with societal needs and planetary boundaries. In a world where AI systems can spread deception and lies on an unprecedented scale, the focus should not be blind trust in so-called "Trustworthy AI", a term already co-opted by Big Tech PR campaigns, but a critical questioning of the infrastructure and rules that shape these systems.
This panel brings together experts challenging uncritical trust in AI and the power structures behind it. Irma Mastenbroek, an expert on global AI standards and advocate for queer and feminist tech policies, calls for responsible development and adherence to technical standards that promote social justice. Lajla Fetic, a tech governance specialist, analyzes AI’s societal impact and demands clear rules for transparency and accountability. Alfred Ongere, founder of AI Kenya, highlights how AI is being used in East Africa as a tool for education, democracy, and anti-corruption efforts, emphasizing the importance of involving local communities in shaping technology.
Moderated by Christoph Weiss, aka Burstup, the discussion will explore key questions about AI trustworthiness and the people developing it: How do we define "trustworthy" in a technological context? How do perceptions of AI and approaches to its development differ around the world? What mechanisms can ensure transparency and justice in a technology-driven world? And is trust in technology even the goal, or should we instead trust the people, processes, and values that shape it?