80,000 Hours Podcast

By: Rob, Luisa, Keiran, and the 80,000 Hours team
  • Summary

  • Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
    All rights reserved
Episodes
  • #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
    Jul 26 2024

    "If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

    Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

    Links to learn more, highlights, video, and full transcript.

    Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

    Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

    But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

    The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

    Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

    In addition to all of that, host Rob Wiblin and Vitalik discuss:

    • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
    • Vitalik’s updated p(doom).
    • Whether the social impact of blockchain and crypto has been a disappointment.
    • Whether humans can merge with AI, and if that’s even desirable.
    • The most valuable defensive technologies to accelerate.
    • How to trustlessly identify what everyone will agree is misinformation.
    • Whether AGI is offence-dominant or defence-dominant.
    • Vitalik’s updated take on effective altruism.
    • Plenty more.

    Chapters:

    • Cold open (00:00:00)
    • Rob’s intro (00:00:56)
    • The interview begins (00:04:47)
    • Three different views on technology (00:05:46)
    • Vitalik’s updated probability of doom (00:09:25)
    • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
    • Fear of totalitarianism and finding middle ground (00:22:44)
    • Should AI be more centralised or more decentralised? (00:42:20)
    • Humans merging with AIs to remain relevant (01:06:59)
    • Vitalik’s “d/acc” alternative (01:18:48)
    • Biodefence (01:24:01)
    • Pushback on Vitalik’s vision (01:37:09)
    • How much do people actually disagree? (01:42:14)
    • Cybersecurity (01:47:28)
    • Information defence (02:01:44)
    • Is AI more offence-dominant or defence-dominant? (02:21:00)
    • How Vitalik communicates among different camps (02:25:44)
    • Blockchain applications with social impact (02:34:37)
    • Rob’s outro (03:01:00)

    Producer and editor: Keiran Harris
    Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
    Transcriptions: Katy Moore

    3 hrs and 4 mins
  • #193 – Sihao Huang on the risk that US–China AI competition leads to war
    Jul 18 2024

    "You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang

    In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

    Links to learn more, highlights, video, and full transcript.

    They cover:

    • Whether the US and China are in an AI race, and the global implications if they are.
    • The state of the art of AI in China.
    • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
    • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
    • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
    • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
    • And plenty more.

    Chapters:

    • Cold open (00:00:00)
    • Luisa’s intro (00:01:02)
    • The interview begins (00:02:06)
    • Is China in an AI race with the West? (00:03:20)
    • How advanced is Chinese AI? (00:15:21)
    • Bottlenecks in Chinese AI development (00:22:30)
    • China and AI risks (00:27:41)
    • Information control and censorship (00:31:32)
    • AI safety research in China (00:36:31)
    • Could China be a source of catastrophic AI risk? (00:41:58)
    • AI enabling human rights abuses and undermining democracy (00:50:10)
    • China’s semiconductor industry (00:59:47)
    • China’s domestic AI governance landscape (01:29:22)
    • China’s international AI governance strategy (01:49:56)
    • Coordination (01:53:56)
    • Track two dialogues (02:03:04)
    • Misunderstandings Western actors have about Chinese approaches (02:07:34)
    • Complexity thinking (02:14:40)
    • Sihao’s pet bacteria hobby (02:20:34)
    • Luisa’s outro (02:22:47)

    Producer and editor: Keiran Harris
    Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
    Additional content editing: Katy Moore and Luisa Rodriguez
    Transcriptions: Katy Moore

    2 hrs and 24 mins
  • #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US
    Jul 12 2024

    "Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire.

    "That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie Jacobsen

    In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

    Links to learn more, highlights, and full transcript.

    They cover:

    • The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
    • What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack.
    • The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
    • The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
    • How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
    • And plenty more.

    Chapters:

    • Cold open (00:00:00)
    • Luisa’s intro (00:01:03)
    • The interview begins (00:02:28)
    • The first 24 minutes (00:02:59)
    • The Black Book and presidential advisors (00:13:35)
    • False alarms (00:40:43)
    • Russian misperception of US counterattack (00:44:50)
    • A narcissistic madman with a nuclear arsenal (01:00:13)
    • Is escalation inevitable? (01:02:53)
    • Firestorms and rings of annihilation (01:12:56)
    • Nuclear electromagnetic pulses (01:27:34)
    • Continuity of government (01:36:35)
    • Rays of hope (01:41:07)
    • Where we’re headed (01:43:52)
    • Avoiding politics (01:50:34)
    • Luisa’s outro (01:52:29)

    Producer and editor: Keiran Harris
    Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
    Additional content editing: Katy Moore and Luisa Rodriguez
    Transcriptions: Katy Moore

    1 hr and 54 mins

What listeners say about 80,000 Hours Podcast

Average customer ratings
  • Overall: 5 out of 5 stars (2 ratings)
  • Performance: 5 out of 5 stars (1 rating)
  • Story: 5 out of 5 stars (1 rating)

Reviews
  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Brilliant

For anyone who's interested in audiobooks, especially non-fiction work, this podcast is perfect. For people used to short-form podcasts, the 2-5 hour range may seem intimidating, but for those used to the length of audiobooks it's great. The length allows the interviewer to ask genuinely interesting questions, with a bit of back-and-forth with the interviewee.

1 person found this helpful