Machines that fail us

By: Philip Di Salvo, University of St. Gallen
  • Summary

  • From educational institutions to healthcare professionals, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess and decide upon various aspects of our lives. But are these systems truly impartial and just when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies paradoxically harbor systemic biases and inaccuracies, particularly in the realm of human profiling. The Human Error Project has investigated how journalists, civil society organizations, and tech entrepreneurs in Europe make sense of AI errors, and how they negotiate and coexist with the human rights implications of AI. With the aim of fostering debate between academia and the public, the “Machines That Fail Us” podcast series hosts the voices of some of the most engaged individuals in the fight for a better future with artificial intelligence. “Machines That Fail Us” is made possible thanks to a grant provided by the Swiss National Science Foundation (SNSF)’s “Agora” scheme. The podcast is produced by The Human Error Project team. Dr. Philip Di Salvo, the main host, works as a researcher and lecturer at the HSG’s Institute for Media and Communications Management. https://mcm.unisg.ch/ https://www.unisg.ch/
    Copyright 2024 University of St. Gallen, Philip Di Salvo
Episodes
  • Machines That Fail Us #5: "The shape of AI to come"
    Jul 4 2024

    The AI we have built so far comes with many shortcomings and concerns. At the same time, the AI tools we have today are the product of specific technological cultures and business decisions. Could we simply do AI differently? For the final episode of “Machines That Fail Us”, we are joined by a leading expert on the intersection of emerging technology, policy, and rights. With Frederike Kaltheuner, founder of the consulting firm new possible and a Senior Advisor to the AI Now Institute, we discuss the shape of AI to come and what our lives with it might look like.

    30 mins
  • Machines That Fail Us #4: Building different AI futures
    Jun 13 2024

    We don’t necessarily have to build artificial intelligence the way we’re building it today. To make AI truly inclusive, we must look beyond Western techno-cultures and beyond the framing of technology as either utopian or dystopian. How could our AI future look different? We asked Payal Arora, Professor of Inclusive AI Cultures at Utrecht University.

    34 mins
  • Machines That Fail Us #3: Errors and biases: tales of algorithmic discrimination
    May 16 2024

    The biases, discriminatory outcomes, and errors of artificial intelligence systems, as well as their societal impacts, are now widely documented. But how is the struggle for algorithmic justice evolving? We asked Angela Müller, Executive Director of AlgorithmWatch Switzerland.

    27 mins
