• AI and the Alignment Challenge

  • Mar 11 2024
  • Length: 16 mins
  • Podcast


  • Summary

  • We dive deep into the intricacies and ethical considerations of AI development, focusing on OpenAI's ChatGPT and GPT-4. Join us as we discuss how OpenAI approached the alignment problem, the impact of reinforcement learning from human feedback (RLHF), and the role of human raters in shaping ChatGPT. We also revisit past AI mishaps such as Microsoft's Tay and explore their influence on current models. The episode covers OpenAI's efforts to address ethical concerns, the debate over universal human values in AI, and the differing perspectives of users, developers, and society at large. Finally, we tackle the critical issue of employing workers from the Global South for AI alignment work, examining the ethical implications and the need for better support. Tune in to uncover the complexities and breakthroughs in the evolving world of AI!

    Our guest is Dr. Joel Esposito, Professor in the Robotics and Control Engineering Department at the Naval Academy, where he teaches courses in robotics, unmanned vehicles, artificial intelligence, and data science. He is the recipient of the Naval Academy's Rauoff Award for Excellence in Engineering Education and the 2015 Class of 1951 Faculty Research Excellence Award. He received both a Master of Science and a Ph.D. from the University of Pennsylvania.

