• 🌶️ Cutting through the Responsible AI hype: how to enter the field | irResponsible AI EP2S01

  • Jun 4 2024
  • Length: 34 mins
  • Podcast


  • Summary

  • Got questions or comments or topics you want us to cover? Text us!

    It gets spicy in this episode of irResponsible AI:
    ✅ Cutting through the Responsible AI hype to separate experts from "AI influencers" (grifters)
    ✅ How you can break into Responsible AI consulting
    ✅ How the EU AI Act discourages irresponsible AI
    ✅ How we can nurture a "cohesively diverse" Responsible AI community

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    0:00 - Introduction
    0:45 - The 3 topics we cover in the episode
    1:52 - Is RAI all hype?
    4:14 - Who to trust and who to ignore in RAI
    10:35 - How can newcomers to RAI navigate through the hype?
    13:36 - How to break into responsible AI consulting
    15:56 - Do people need to have a PhD to get into this?
    18:52 - Responsible AI is inherently sociotechnical (not just technical)
    21:54 - Why we need "cohesive diversity" in RAI not just diversity
    23:57 - The EU AI Act's draft and discouraging irresponsible AI
    27:26 - We need Responsible AI not Regulatory AI
    29:03 - Why we need early cross pollination between RAI and other domains
    31:39 - The range of diverse work in real-world RAI
    33:20 - Outro

    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.


