Episodes

  • AI Agents, Digital Twins, and the Future of Work, w/ Read.AI CEO David Shim
    Sep 11 2025

    What if “AI teammates” aren’t sci-fi at all, but the next mundane tool that quietly kills Monday dread?

    In this episode of AI-Curious, we sit down with David Shim, CEO of Read.ai, to unpack what workers actually want from AI, how teams are adopting agents from the bottom up, and what a practical “digital twin” might do at work—minus the Black Mirror vibes. We cover fast-path ROI (meeting notes → action items), the shift from “prompts” to ambient workflows, and why the most valuable corporate asset may soon be the storage of intelligence—the living record of how your organization thinks and decides.

    What we cover

    • Why 70% of workers say they want AI agents—and which basic tasks deliver real ROI now (one such task is sketched in code after this list)
    • A crawl-walk-run roadmap: note-taking → briefing → follow-ups → lightweight agents → digital twin
    • “Storage of intelligence” as a competitive moat (institutional knowledge that doesn’t walk out the door)
    • Guardrails, data separation, and how to treat privacy as non-negotiable
    • Bottom-up adoption: why employees are forcing IT’s hand—and how leaders should respond
    • The macro view: augmentation vs. replacement, and the provocative idea that AI replaces computers (as the interface)
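
    For the technically curious, the "meeting notes → action items" fast path is the easiest piece to picture in code. Below is a minimal, hypothetical sketch in plain Python: the `llm` callable, the prompt wording, and the `ActionItem` structure are all illustrative assumptions, not Read.ai's actual implementation or API.

    # Hypothetical sketch: turn raw meeting notes into structured action items.
    # `llm` is a stand-in for whatever completion API you already use.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ActionItem:
        owner: str
        task: str

    def extract_action_items(transcript: str, llm: Callable[[str], str]) -> List[ActionItem]:
        """Ask a model for 'owner: task' pairs, then parse them into structured items."""
        prompt = (
            "List every commitment in this meeting transcript as 'owner: task', "
            "one per line.\n\n" + transcript
        )
        items: List[ActionItem] = []
        for line in llm(prompt).splitlines():
            if ":" in line:
                owner, task = line.split(":", 1)
                items.append(ActionItem(owner.strip(), task.strip()))
        return items

    if __name__ == "__main__":
        # Stub model so the sketch runs end to end without any external service.
        fake_llm = lambda _prompt: "Dana: send the Q3 deck\nLee: schedule the follow-up call"
        for item in extract_action_items("(raw transcript here)", fake_llm):
            print(f"- {item.owner}: {item.task}")

    The same loop, pointed at a calendar or CRM instead of stdout, is roughly the "lightweight agents" step of the crawl-walk-run roadmap above.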

    If you find this useful, we’d love a rating and a quick share with a teammate who’s piloting AI at work.

    Read.AI:

    https://www.read.ai/



    42 m
  • How AI Could Help Solve Climate Change, w/ Climate Tech Expert Josh Dorfman
    Aug 28 2025

    AI is often framed as a climate problem—energy-hungry data centers, ballooning carbon emissions, and talk of nuclear power just to keep the servers running. But could AI also become part of the solution?

    In this episode of AI-Curious, we sit down with Josh Dorfman—climate tech entrepreneur and host of Supercool—to explore how artificial intelligence might help tackle climate change. Josh doesn’t offer hand-wavy promises. Instead, we dive into concrete examples where AI is already making a difference.

    What we cover:

    • [4:17] Josh’s background at the intersection of technology, climate, and business.
    • [8:18] How AI data centers are impacting energy use—and why fossil fuels can’t scale to meet demand.
    • [12:30] The role of nuclear, geothermal, and solar-plus-storage in powering AI sustainably.
    • [23:25] AI-optimized school buses: how Oakland electrified its fleet with fewer vehicles.
    • [27:44] BrainBox AI and smarter buildings: cutting emissions through predictive HVAC optimization.
    • [31:42] AI in waste management: from pneumatic trash tubes to AI-driven sorting of recyclables.
    • [41:17] Big-picture futures: AI efficiency, plummeting solar costs, and the possibility of “trivially cheap” energy.

    The conversation blends realism with optimism—grounded in the challenges of energy demand, yet hopeful about AI-driven solutions in transportation, buildings, waste, and renewable power.

    If you’ve ever wondered whether AI can be more than an energy drain—and instead help drive sustainability—this episode offers both perspective and inspiration.

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser


    45 m
  • Can AI Be Funny? With ComedyBytes’ Eric Doyle
    Aug 14 2025

    Can artificial intelligence actually be funny, or is humor still a human stronghold? We explore that question with Eric Doyle, co-founder of ComedyBytes, a Brooklyn-based multimedia comedy show where AI and humans face off in roast battles, dating games, and other interactive formats. Doyle combines the craft of stand-up with the tools of generative AI, building AI characters like “AI Kanye West” or “AI Sarah Silverman” that deliver pre-scripted jokes in real time.

    In this episode of AI-Curious, we dig into:

    • [0:52] The story behind ComedyBytes and its AI-powered format
    • [3:46] How AI roast battles work, from concept to stage mechanics
    • [7:53] Using tools like ChatGPT, Claude Sonnet, and Gemini AI to write jokes
    • [12:55] The art of prompting for humor and boosting the “funny hit rate”
    • [16:36] Why specificity matters in generative AI comedy
    • [23:43] Inside the “Data-ing Game,” an AI twist on the classic dating game
    • [25:58] Can AI really be funny—or just imitate the structure of humor?
    • [32:30] The triple, the listing technique, and other joke-writing structures AI can learn
    • [39:10] Advice for non-comedians using AI to add humor
    • [41:24] The future of AI in entertainment and its impact on creators

    From the structure and anatomy of a joke to the ethics of deepfake comedy, this conversation blends technology, performance, and the evolving role of AI in creative work. Whether you’re an AI enthusiast, a comedy fan, or simply curious about where these worlds collide, this is a look at AI and humor you haven’t heard before.

    40 m
  • The New Jobs That AI Might Create, w/ Robert Capps (NYT Magazine Contributor)
    Jul 24 2025

    Is Kant the new code? If AI can write, code, and even plan, which human skills suddenly become scarce—and valuable?

    In this conversation with Robert Capps (former Editorial Director of Wired, contributor to The New York Times Magazine), we dive into his widely shared NYT Mag feature, “AI Might Take Your Job. Here Are 22 New Ones It Could Give You.” We unpack the three big buckets of new work he sees emerging—Trust, Integrators, and Taste—and explore why philosophy majors, auditors, and “AI translators” may be the surprise winners. We also get frank about hallucinations, over-extrapolation, inequality, lethal autonomous weapons, and why Rob still comes out more optimistic.

    In this episode of AI-Curious, we:

    • Break down Rob’s three buckets of future AI jobs: Trust (auditors, ethicists, legal guarantors), Integrators (the translators who know both your business and the models), and Taste (the Rick Rubin-esque role of vision, judgment, and curation).
    • Talk about why Ethan Mollick refuses to let AI write his first drafts—and why that matters for your own thinking.
    • Examine how “the tools will be commodities, not the people,” and what that means for founders, creators, journalists, and scrappy upstarts.
    • Get into the very real risk of inequality and policy paralysis—and why UBI isn’t a satisfying answer.
    • Preview Rob’s documentary on AI weapons and the fight to keep humans in the loop.

    Takeaways

    • Trust work explodes. Expect a cottage industry of auditors, ethicists, and “legal guarantors” to ensure AI output is accurate, defensible, and compliant.
    • Integrators win inside companies. The most valuable people will be those who can translate between business reality and fast-moving model ecosystems.
    • Taste is leverage. Vision, taste, and editorial judgment—knowing what good looks like—become the human moat.
    • Beware first-draft capture. Letting AI write your first draft can quietly dominate your thinking (Mollick’s rule is worth adopting).
    • Inequality is the real threat. Most experts Rob spoke with fear a rapid widening of inequality more than mass permanent joblessness.
    • Tools, not people, become commodities. When everyone has Goldman-tier tools, expect disruption from the bottom, not reinforcement of the top.

    Rob’s NYT Magazine piece: “AI Might Take Your Job. Here Are 22 New Ones It Could Give You.”

    https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html

    52 m
  • AI and Education: Inside the AI Solution Partnering with Denver Public Schools, w/ Dr. Michael Everest
    Jul 18 2025

    Could AI actually improve public education? Not just automate it, but make it more personalized, more equitable — and even more human?

    We explore this possibility with Dr. Michael Everest, founder of edYOU, an AI tutoring platform being piloted in a Denver-area school district. While many worry that AI could become a shortcut for students to avoid real learning, Everest argues the opposite — that AI can reinforce understanding, boost confidence, and offer 24/7 support tailored to each student’s needs.

    In this episode of AI-Curious, we dig into the real-world mechanics of how this works — including partnerships with schools, how teachers interact with the platform, and what kind of results they’re seeing so far.

    We also ask the tough questions: What about data privacy? What about bias and hallucinations? Is there a risk we’re outsourcing critical thinking? And what does the future of education look like if every student has a lifelong AI companion?

    Topics include:

    • The promise and pitfalls of AI in classrooms
    • edYOU’s pilot program with Adams 14 School District
    • How the AI tutoring platform personalizes learning
    • The role of teachers in an AI-enhanced education system
    • Oversight, privacy, and academic integrity
    • The vision of a lifelong AI learning companion

    Whether you’re a parent, educator, technologist, or just curious about where education is headed, this conversation offers a grounded, hopeful — and at times provocative — look at the future of learning.

    48 m
  • AI's Impact on History Writing and Journalism, w/ The New York Times Magazine's Editorial Director Bill Wasik
    Jul 11 2025

    What happens when AI becomes a co-pilot for writers, researchers, and journalists — not in theory, but in practice?

    In this episode of AI-Curious, we speak with Bill Wasik, Editorial Director of The New York Times Magazine, who recently oversaw their special issue, “Learning to Live with AI.” We explore how AI is already transforming journalism, nonfiction writing, and historical research — and why the most interesting impacts may come not from content creation, but from how we discover, organize, and interpret information.

    We dig into the creative tension between AI and human storytelling, including how historians are using tools like NotebookLM to tackle research projects previously deemed impossible. Bill shares how AI can augment writing workflows without compromising editorial judgment — and why trust and authorship still matter in a world of fast content.

    We also cover:

    • The risks of over-relying on AI for research (19:45)
    • How AI might transform local journalism and accountability (41:30)
    • The evolving AI policies at The New York Times (29:40)
    • Whether AI could ever win the Booker Prize — and what that would mean (7:30)
    • Use cases from historians and academics using ChatGPT (26:00)

    Bill's (excellent) piece: "AI is Poised to Rewrite History. Literally."

    https://www.nytimes.com/2025/06/16/magazine/ai-history-historians-scholarship.html

    The NYT Magazine's Special Issue:

    https://www.nytimes.com/2025/06/16/magazine/using-ai-hard-fork.html

    49 m
  • The (Data-Driven) Top AI Trends, w/ the CEOs of HumanX and Read.AI
    Jun 27 2025

    What are the top minds in AI actually talking about behind closed doors?

    At the HumanX conference—arguably the flagship event in the AI ecosystem—hundreds of speakers (from CEOs to policymakers to Kamala Harris) shared their unfiltered thoughts on the state and future of artificial intelligence. But with so much happening at once, even attendees couldn’t absorb it all.

    So HumanX did something novel: they partnered with Read.AI to record and synthesize every single session. The result? A real-time AI copilot for the conference and a post-event report that reveals the key themes, trends, and tensions shaping the industry.

    In this episode, we speak with HumanX CEO Stefan Weitz and Read.AI CEO David Shim to unpack the insights from that report—what they signal for 2025, what business leaders should pay attention to, and what’s probably just noise.

    We talk about the rise of agentic AI, the shift from AGI ambition to ROI expectations, and the practical realities of implementing AI inside large organizations. We also dig into issues of trust, open source, industry-specific adoption, and how AI is starting to reshape roles from customer service to legal to healthcare.

    Whether you’re in strategy, ops, tech, or just trying to keep up, this conversation offers a data-driven pulse check on where enterprise AI is headed.

    Highlights & Timestamps:

    • [1:00] – How Read AI became the official AI copilot of the HumanX conference
    • [3:10] – “You can’t be everywhere at once”—the problem this tech solves at events
    • [6:15] – The most talked-about concept at HumanX: agentic AI
    • [7:45] – Why AGI hype is shifting toward practical use cases with agents
    • [8:58] – The fast hype-decay cycle of AI and the emerging focus on outcomes
    • [12:26] – Open source, cost savings, and why business leaders care about transparency
    • [14:19] – Trust as the “anchoring tenet” of enterprise AI adoption
    • [16:45] – Real ROI: how Read AI identified $10M in sales pipeline in 30 days
    • [20:03] – Why companies are hiding their AI wins from competitors
    • [22:43] – Cross-industry learnings: how healthcare patterns may apply to other sectors
    • [25:47] – The “put up or shut up” moment: 2025 as the year AI must deliver
    • [29:06] – What business leaders should do before launching AI agent initiatives
    • [35:03] – The #1 mistake orgs make with AI: failing to assign ownership
    • [37:09] – Predictions: personalization, interoperability, and privacy friction ahead
    • [42:28] – How Stefan and David personally use AI—for work, fun, and creative hacking

    Links & Mentions:

    • HumanX – Flagship AI conference co-founded by Stefan Weitz
    • Read AI – Productivity-focused AI platform by David Shim
    • Suno – AI music generation tool mentioned by Stefan
    • Replit – AI coding sandbox used by Stefan for strategy visualization
    • Veo by Google DeepMind – AI video generation tool referenced by David

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser

    44 m
  • Introducing "AUI": Artificial Useful Intelligence, w/ IBM's Chief Scientist Dr. Ruchir Puri
    Jun 12 2025

    What if we’re all chasing the wrong kind of AI? Dr. Ruchir Puri, Chief Scientist of IBM, argues that Artificial General Intelligence (AGI) is overrated—and that we should be focusing instead on AUI: Artificial Useful Intelligence. This is a pragmatic, business-focused approach to AI that emphasizes real-world value, measurable outcomes, and implementable solutions.

    In this episode of AI-Curious, we explore what AUI actually looks like in practice. We discuss how to bring AI into your organization (even if you’re just getting started), why IBM is betting big on small language models (SLMs), and how companies can move beyond hype toward real, trustworthy AI agents that do actual work.

    You’ll also hear:

    • Why AI usefulness is a function of both quality and cost [00:11:00]
    • The “crawl, walk, run” strategy IBM recommends for business adoption [00:14:00]
    • Internal IBM examples: HR systems and coding assistants [00:16:00]
    • Why SLMs may be a smarter bet than LLMs for many enterprises [00:37:00]
    • A breakdown of how agentic systems are evolving to reflect, act, and self-correct [00:41:00]

    Whether you’re leading a startup or an enterprise, this conversation will help you reframe how you think about deploying AI—starting not with hype, but with value.

    🎧 Subscribe to AI-Curious:

    • Apple Podcasts
    https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308

    • Spotify
    https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b

    • YouTube
    https://www.youtube.com/@jeffwilser

    47 m