Clearer Thinking with Spencer Greenberg

By: Spencer Greenberg
  • Summary

  • Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Each week we invite a brilliant guest to bring four important ideas to discuss for an in-depth conversation. Topics include psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. We focus on ideas that can be applied right now to make your life better or to help you better understand yourself and the world, aiming to teach you the best mental tools to enhance your learning, self-improvement efforts, and decision-making.
  • We take on important, thorny questions like:
    • What's the best way to help a friend or loved one going through a difficult time?
    • How can we make our worldviews more accurate? How can we hone the accuracy of our thinking?
    • What are the advantages of using our "gut" to make decisions? And when should we expect careful, analytical reflection to be more effective?
    • Why do societies sometimes collapse? And what can we do to reduce the chance that ours collapses?
    • Why is the world today so much worse than it could be? And what can we do to make it better?
    • What are the good and bad parts of tradition? And are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead?
    • How can we move beyond zero-sum, adversarial negotiations and create more positive-sum interactions?
Episodes
  • Separating quantum computing hype from reality (with Scott Aaronson)
    May 1 2024

    Read the full transcript here.

    What exactly is quantum computing? How much should we worry about the possibility that quantum computing will break existing cryptography tools? When will a quantum computer with enough horsepower to crack RSA likely appear? On what kinds of tasks will quantum computers likely perform better than classical computers? How legitimate are companies that are currently selling quantum computing solutions? How can scientists help to fight misinformation and misunderstandings about quantum computing? To what extent should the state of the art be exaggerated with the aim of getting people excited about the possibilities the technology might afford and encouraging them to invest in research or begin a career in the field? Is now a good time to go into the field (especially compared to other similar options, like going into the booming AI field)?

    Scott Aaronson is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and founding director of its Quantum Information Center, currently on leave at OpenAI to work on theoretical foundations of AI safety. He received his bachelor's from Cornell University and his PhD from UC Berkeley. Before coming to UT Austin, he spent nine years as a professor in Electrical Engineering and Computer Science at MIT. Aaronson's research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, the Tomassoni-Chisesi Prize in Physics, and the ACM Prize in Computing; and he is a Fellow of the ACM and the AAAS. Find out more about him at scottaaronson.blog.

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Alexandria D. — Research and Special Projects Assistant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr and 19 mins
  • Should we pause AI development until we're sure we can do it safely? (with Joep Meindertsma)
    Apr 24 2024

    Read the full transcript here.

    Should we pause AI development? What might it mean for an AI system to be "provably" safe? Are our current AI systems provably unsafe? What makes AI especially dangerous relative to other modern technologies? Or are the risks from AI overblown? What are the arguments in favor of not pausing — or perhaps even accelerating — AI progress? What is the public perception of AI risks? What steps have governments taken to mitigate AI risks? If thoughtful, prudent, cautious actors pause their AI development, won't bad actors still keep going? To what extent are people emotionally invested in this topic? What should we think of AI researchers who agree that AI poses very great risks and yet continue to work on building and improving AI technologies? Should we attempt to centralize AI development?

    Joep Meindertsma is a database engineer and tech entrepreneur from the Netherlands. He co-founded the open source e-democracy platform Argu, which aimed to get people involved in decision-making. Currently, he is the CEO of Ontola.io, a software development firm from the Netherlands that aims to give people more control over their data; and he is also working on a specification and implementation for modeling and exchanging data called Atomic Data. In 2023, after spending several years reading about AI safety and deciding to dedicate most of his time to preventing AI catastrophe, he founded PauseAI and began actively lobbying for slowing down AI development. He's now trying to grow PauseAI and get more people to take action. Learn more about him on his GitHub page.

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Alexandria D. — Research and Special Projects Assistant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr and 1 min
  • What should the Effective Altruism movement learn from the SBF / FTX scandal? (with Will MacAskill)
    Apr 15 2024

    Read the full transcript here.

    What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement?

    William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. He also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, which together have moved over $300 million to effective charities. He's the author of What We Owe The Future, Doing Good Better, and Moral Uncertainty.

    Further reading:

    • Episode 133: The FTX catastrophe (with Byrne Hobart, Vipul Naik, Maomao Hu, Marcus Abramovich, and Ozzie Gooen) — Our previous podcast episode about what happened in the FTX disaster
    • "Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? – three theories and a lot of evidence" — Spencer's essay about SBF's personality
    • Why They Do It: Inside the Mind of the White-Collar Criminal by Eugene Soltes

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Alexandria D. — Research and Special Projects Assistant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    2 hrs and 2 mins

What listeners say about Clearer Thinking with Spencer Greenberg

Average customer ratings
  • Overall: 5 out of 5 stars (2 ratings, both 5 stars)
  • Performance: 5 out of 5 stars (1 rating, 5 stars)
  • Story: 5 out of 5 stars (1 rating, 5 stars)
