• Artificial Intelligence

  • Nov 16 2018
  • Duration: 42 min
  • Podcast
  • 5.0 out of 5 stars (1 rating)

  • Summary

  • An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

    Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.



What listeners say about Artificial Intelligence

Average customer ratings
  • Overall: 5 out of 5 stars (5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
  • Performance: 5 out of 5 stars (5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
  • Story: 5 out of 5 stars (5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
