• Rogue AI and Catastrophic Risk

  • Nov 29 2023
  • Duration: 31 min
  • Podcast



  • Summary

  • Nico Andreas Heller in conversation with Yoshua Bengio.

    What if artificial intelligence (AI) continues to progress towards and beyond our abilities in areas where it could become dangerous, and what if our regulations are not 100% foolproof, opening the door to seriously harmful misuse by bad actors, historically unprecedented concentrations of power and existential threats to our collective future? Even if these were low-probability events, given the high stakes, should we not have a plan B?

    With a view to minimising those risks, Yoshua Bengio proposes the creation of a multilateral network of non-profit and non-governmental labs, collaborating on the defence of democracy and human rights, and on protection against a possible rogue autonomous AI. His proposal hinges on avoiding a single point of failure and excessive concentrations of power (economic, political and military), and on establishing strong, democratically mandated international governance mechanisms.

    In this Reboot Dialogue we talk both about the various ways in which AI potentially poses an existential threat to democracy and human rights, and about what it would take, and what kind of governance architecture we would need to put in place, to protect us against AI-driven catastrophic risks.

    Yoshua Bengio is professor of computer science at the Université de Montréal, founder and scientific director of Mila–Quebec Artificial Intelligence Institute, and senior fellow and codirector of the Learning in Machines and Brains program at the Canadian Institute for Advanced Research. He won the 2018 A.M. Turing Award (with Geoffrey Hinton and Yann LeCun).

    More information about Yoshua Bengio is available via democracyschool.com/reboot-contributors. His paper on AI and catastrophic risk is now out at journalofdemocracy.org/ai-and-catastrophic-risk.

    Note to listeners: Unfortunately, this dialogue was disrupted (cut short), as we temporarily lost our internet connection during the live stream - hence the edits/cuts. Since Yoshua had scheduled a follow-on interview with The Guardian newspaper, we had to finish earlier than planned and were therefore not able to cover everything we wanted to. We apologise for this disruption and the abrupt ending, and promise to continue this conversation next year (2024) with a follow-up dialogue, focusing on Yoshua’s proposal for a globally distributed AI governance architecture.

