Episodes

  • Democratic governance of technology: myth or reality? With Dr Mar Hicks from the University of Virginia School of Data Science
    Nov 3 2025

    Welcome back to the last episode of this special series exploring the relationship between power and technology!

    We now know that power and technology are intertwined, and that this entanglement has led to the rise of a techno-oligarchy in the United States. Given the power structures that allowed this to happen, along with Big Tech's intense lobbying, its wealth, and the inevitability rhetoric it pushes, I wondered whether democratic governance of technology is even possible. Beyond that, I wanted to talk with an expert about the impacts on gender minorities, such as women and non-binary people, as well as on other marginalized communities.

    To answer these questions and tell us some dad jokes, I am welcoming Dr Mar Hicks, a tenured associate professor in the School of Data Science at the University of Virginia. Dr Hicks is a historian of technology, gender, and modern Europe, working on the history of women in computing and on technology-powered inequalities. Dr Hicks has published two books, both available everywhere: the first is titled “Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing”, and the most recent is “Your Computer Is on Fire”, a book on the inequality, marginalization, and biases in our technological systems. I hope you enjoy this episode.

    Get Dr Mar Hicks’s books: https://mitpress.mit.edu/author/mar-hicks-23807/

    27 m
  • The TESCREAL to fascism pipeline with Adrienne Williams from the Distributed AI Research Institute (DAIR)
    Oct 7 2025

    Reading about TESCREAL feels like reading a bad sci-fi storyline written by a man with a god complex. Unfortunately, it’s real: a movement whose proponents use the threat of human extinction to justify expensive or harmful projects and to demand billions of dollars to save us from these "existential threats". Sounds familiar, doesn’t it? These are very real aspirations, and some of the tech billionaires setting the rules of the tech industry even call themselves “creators”. That’s even worse than “godfather”, if you ask me, though it’s fairly close. We’ve seen it, for example, with OpenAI asking for billions to “save us” from the very AI systems it is still, somehow, trying to build, and more generally with AI gurus working on how to save us from killer robots or conscious AI systems instead of addressing hunger, homelessness, inequality, or environmental issues in their own country (even just in their own city would have more impact than taking money under the excuse of saving people who don’t exist yet from evil AI systems that also don’t exist yet). But it’s not about helping, or “AI for Humanity” as they like to call it: it’s about power, influence, and money. And the pipeline from these delusions to far-right ideology and technofascism is pretty straightforward.

    Adrienne Williams, researcher at the Distributed AI Research Institute (DAIR), joined me to talk about TESCREAL, neocolonialism, policymaking, and everything in between.

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    33 m
  • Back to basics: Technofascism 101 with Emile Dirks from the Citizen Lab
    Sep 16 2025

    After Trump was elected and tech oligarchs like Elon Musk, an unelected individual, began gaining decision-making power at the White House, the topic of technofascism became very popular. But this political trend of leveraging technology to empower fascist ideologies is nothing new. A lot has been written about technofascism in the 21st century, but I wanted to go back to basics: what was fascism before technology? How do fascist movements use technology to take power, and what kind of power dynamics does this create? How can we resist the rise of technofascism?

    To answer these questions, I welcomed Dr Emile Dirks. Emile is a Senior Research Associate at the Citizen Lab at the University of Toronto, where he explores Chinese politics and digital authoritarianism.

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    28 m
  • The datafication of refugees: humanitarian agencies & biometrics with Zara Rahman from the Superrr Lab
    Feb 18 2025

    Biometrics – our fingerprints, faces, irises, for instance – are increasingly used to verify identity. But what happens when this data collection is applied to vulnerable populations, like refugees and asylum seekers, in ways that can remove agency rather than offer them protection? In the humanitarian space, organizations justify biometric data collection as a way to increase efficiency, yet stories have shown that such mechanisms can be weaponized: data handed over to oppressive governments, misidentifications leading to life-altering mistakes, and accountability often falling on the very people humanitarian programs claim to help. Beyond survival depending on data-driven systems, racial capitalism also plays a critical role by reinforcing the same global inequalities that force people to migrate in the first place. Who benefits from implementing biometric data collection in a humanitarian context, and who bears the consequences when it fails?

    To answer these questions and more, I had the pleasure of talking with Zara Rahman, author of “Machine Readable Me: The Hidden Ways That Technology Shapes Our Identities”, Strategic Advisor at the SUPERRR Lab, and Visiting Research Collaborator at the Citizens and Technology Lab at Cornell University. Zara is a researcher, writer, public speaker, and non-profit executive whose interests lie at the intersection of technology, justice, and community. For over a decade, her work has focused on supporting the responsible use of data and technology in advocacy and social justice, working with activists from around the world to support context-driven and thoughtful uses of tech and data.

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    36 m
  • Mapping for justice: from cartography to GIS, with Cathy Richards from the Open Environmental Data Project
    Feb 7 2025

    If cartography, the ancestor of GIS, already displayed colonial patterns and racist stereotypes back in the day, why would the digital legacy of maps be any different? Maps carry authoritative value and hold power by representing the world from the perspective of whoever creates them. However, communities are often excluded from their design, leading to the misrepresentation or omission of important landmarks and third places. In this episode, Cathy Richards explains why it is critical for communities to have the tools to paint their own stories through mapping, what role communities play in the development of tech-powered solutions that include GIS, and what risks come with excluding those communities.

    Cathy is the Civic Science Fellow and Data Inclusion Specialist at the Open Environmental Data Project. Previously, she was the Associate for Digital Resilience and Emerging Technology at The Engine Room where she advised civil society organizations on their use of technology and data. As a Green Web Fellow, she investigated the benefits, ethical questions, and security risks associated with using GIS for environmental justice. Cathy holds a Bachelor's degree in International Relations from Boston University, an MPA from the Monterey Institute of International Studies, and she comes from beautiful Costa Rica.

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    38 m
  • How MyBranz fights fake online reviews to save the planet — and your wallet with Janani Kumar, Founder of MyBranz
    Jan 31 2025

    A few months ago, in August 2024, the Federal Trade Commission (FTC) announced a final rule banning fake reviews and testimonials, one that will help deter AI-generated fake reviews by prohibiting, for instance, fake or false consumer reviews, consumer testimonials, and celebrity testimonials.

    Now, if you have ever shopped online, you know that this decision is pretty groundbreaking. How many of us have been deceived by fake reviews before buying a product? Not only is that a waste of money; in the midst of a climate crisis, it also creates additional waste that is very detrimental to our environment. Joining us today is Janani Kumar, the founder of MyBranz.

    Janani knew something had to be done well before the FTC even made a move. MyBranz is an online platform that promotes transparency at every step of the consumer journey, leveraging AI to aggregate verified reviews from across the web and help users find the best brands and products based on real, lawful feedback, saving money and helping the environment at the same time. In this episode, we explored the sources and impacts of fake online reviews, consumer trust, and what the FTC ruling means for users.

    https://www.mybranz.com/

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    24 m
  • Quit Clicking Kids: protecting child influencers through policy with Chris McCarty, founder of Quit Clicking Kids
    Jan 24 2025

    Since #KOSA, protecting kids online has remained a very hot topic. However, we often overlook the influence industry that also affects kids online, for instance through the emergence of thousands of YouTube family channels. Horror stories of behind-the-scenes abuse have surfaced in recent news coverage, on top of a serious lack of protection for these children's privacy and against their financial exploitation. Kids cannot give informed consent to becoming part of the family influence industry, which, unlike child acting careers, is barely regulated, if at all: to date, only three US states have signed this type of protective legislation into law, though many more have bills in the works. To talk about this topic, I welcomed the amazing Chris McCarty.

    At 17, Chris founded Quit Clicking Kids, an advocacy organization, to safeguard the rights of children who grow up on monetized family social media accounts after discovering that child social media stars lacked the same rights and protections as child actors. Since then, they have worked with legislators across the United States to introduce protective legislation. In addition to leading advocacy efforts at Quit Clicking Kids, Chris is a junior at the University of Washington majoring in Political Science. Their work has been featured by The New York Times, CNN, NBC News, Teen Vogue, and they recently made the Forbes List of 30 under 30 in the social media category.

    For more information on Quit Clicking Kids:

    https://quitclickingkids.com/

    https://www.instagram.com/quit_clicking_kids/

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    30 m
  • How existing safety mitigation safeguards fail in LLMs with Khaoula Chehbouni, PhD Researcher at McGill and MILA
    Jan 17 2025

    Large Language Models, or LLMs, may be the most popular type of AI system. They are often used as an alternative to search engines, even though they should not be: the text they generate merely resembles and mimics human speech and is not always factual, among many other issues discussed in this episode.

    Our guest today is Khaoula Chehbouni, a PhD student in Computer Science at McGill University and Mila (Quebec AI Institute). Khaoula was awarded the prestigious FRQNT Doctoral Training Scholarship to research fairness and safety in large language models. She previously worked as a Senior Data Scientist at Statistics Canada and completed her Master's in Business Intelligence at HEC Montreal, where she received the Best Master's Thesis award.

    In this episode, we talked about the impact of Western narratives on which LLMs are trained, the limits of trust and safety, how racism and stereotypes are mirrored and amplified by LLMs, and what it is like to be a minority in a STEM academic environment. I hope you’ll enjoy this episode.

    Created, hosted and produced by Mélissa M'Raidi-Kechichian.

    37 m