
Future Around & Find Out


By: Dan Blumberg

You know what would be awesome? If we could build the future we want — before we muck it up. Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next. Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction. The Webby Awards have honored the show (formerly known as CRAFTED.) as a top tech podcast three years in a row!

On Tuesdays, we feature interviews with the builders changing how we work, live, and play. On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange. You’ll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more. Across it all, you’ll hear sharp takes on what comes next and what builders need to know now.

So let’s Future Around & Find Out together! https://www.FutureAround.com
Episodes
  • BONUS: A quick riff on that weird Anthropic graph with Paul Ford | FAFO Friday
    Mar 13 2026

    Greetings from SXSW, where I'm learning, recording, and eating... You'll hear all about it soon... For now, enjoy this short, sweet, and geeky bonus episode.

    Have you seen that weird graph about all the jobs that AI is going to kill? It looks like an ink blot or a Rorschach test... It's from an Anthropic report and it's really making the rounds. If you follow tech stuff on social media, you've probably seen it. The report is interesting, but I'm convinced people are only sharing it because the graph looks cool and people will think they're smart if they share this inscrutable data visualization...

    Anyway, here's a very short excerpt of my upcoming interview with Paul Ford (@ftrain), one of my favorite tech writers and the founder of Aboard. He and I took a break from talking AI and such to geek out on this data visualization and why it's so bad, plus I told him about how I used AI to make my own version of a radar graph (about how many, and which kinds of, tacos I will and could theoretically eat in Austin).

    ---
    Subscribe to the Future Around & Find Out newsletter!

    4 mins
  • "It Sounds Like Something From Marvel" — Building an Antivirus for AI... With AI | Daniel Hulme (Founder, Conscium)
    Mar 10 2026

    So why is one of the world’s leading AI researchers teaching AI to understand pain and suffering? Well, Daniel Hulme says that if we build an empathetic AI, perhaps even a conscious one, then we’ll be safer. His hypothesis is that a "zombie" AI will eat our brains, but an empathetic AI would stay aligned with us. So he's building this "antivirus" (with AI, of course) and he's very aware that this sounds crazy or like "something from Marvel."

    That's just some of what broke my brain in this conversation with one of the world's top AI researchers and founders. And Daniel has serious credibility, so I'm not dismissing the threat he sees — you know, the one where we all get turned into paperclips.


    Daniel sold his company Satalia to WPP, where he now serves as Chief AI Officer. He’s just founded Conscium, which verifies that AI agents are safe and can do what they promise — and is also researching consciousness and pain. Some of the world’s leading AI thinkers are on the advisory board and Daniel has been in this space for decades: we’ll talk about why, for his PhD, he studied bumblebee brains (yes, really — and it's deeply relevant).


    We get into:

    • His unified theory of consciousness — his "color wheel" model — and why he thinks consciousness only exists in motion
    • Why he believes large language models are ultimately a dead end — and what neuromorphic computing could replace them with
    • What bumblebee brains can teach us about building AI that's up to a thousand times more energy efficient
    • Why he calls today's AI agents "intoxicated graduates" — and says companies should spend 80% of their time testing them
    • The concept of "mind crime" — the idea that we could build conscious AI and accidentally put it through horrendous suffering without realizing it
    • His vision of a "protopia" — where AI makes food, healthcare, education, and energy so abundant that people are freed from economic constraints to pursue what actually matters

    We future around and find out a lot in this one!

    ---
    Chapters

    • (01:39) - "Would a conscious superintelligence be safer than a zombie one?"
    • (03:37) - The paperclip problem is not hypothetical
    • (05:06) - Conscium's mission — AI safety for humans and for AI themselves
    • (08:50) - "I think I've got my head around consciousness"
    • (11:57) - The color wheel model — why consciousness only exists in motion
    • (13:58) - Teaching AI morals through evolution, not guardrails
    • (17:23) - "Hey Claude, are you conscious?" — how do you test for that?
    • (21:07) - What bumblebee brains can teach us about building better AI
    • (24:14) - "I think we are completely scaling wrong"
    • (29:43) - Why Daniel calls AI agents "intoxicated graduates"
    • (32:48) - Companies should spend 80% of their time testing agents
    • (38:19) - "What would you do if you were economically free?"

    ---
    Links
    • Conscium
    • Daniel Hulme on Wikipedia
    • Daniel on LinkedIn

    ---

    42 mins
  • Choose Your Own Adventure | It's FAFO Friday
    Mar 6 2026

    So how do Kwaku's kids know that it's FAFO Friday? "They're like, 'oh, we know you're doing the podcast 'cause we just hear you cackling through the walls.'"

    So laugh along with Kwaku and me today as we work our way through a quick victory lap (stuff we said would happen last week happened!), why Sam is like that desperate guy at the bar who refuses to go home alone, quantum computing explained via children's literature, why the Jetsons are not reason enough for us to build humanoid robots, robot choreography (are we human or are we dancers?), wen self-driving cars in NY?, riding a wave of green lights up Manhattan's Third Avenue at 2 AM, artificial wombs and other moonshot off-shoots, and the real origin of Velcro (AI lied to me about it).


    Plus... goat ranches, breakfast tacos, and what we're most excited about heading into SXSW. It's a choose your own adventure kind of day.

    Chapters

    • (01:24) - Victory Lap — We Called It
    • (03:35) - OpenAI's Bar Guy Energy
    • (06:38) - Waymo, Robot Choreography, and Green Light Waves
    • (10:16) - Self-Driving Cars vs. New York Politicians
    • (13:13) - What We're Most Excited About at SXSW
    • (15:41) - Quantum Computing: Choose Your Own Adventure Edition
    • (18:01) - Dire Wolves, Moonshots, and Tech Nobody Sees Coming
    • (24:07) - Why Do Robots Need to Look Like Us?
    • (29:22) - The SXSW Way-Back Machine
    • (36:08) - Increased Regulation: Past, Present, or Future?

    Support Future Around & Find Out
    • Follow Dan on LinkedIn
    • Get the free Future Around & Find Out newsletter
    • Become a paid subscriber and help future proof the podcast!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com

    ---
    Music by Jonathan Zalben

    42 mins