Episodes

  • 🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode we discuss AI Risk Management Frameworks (RMFs) focusing on NIST's Generative AI profile:
    ✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for
    ✅ Unpack challenges of evaluating AI frameworks
    ✅ Explore how inert knowledge in frameworks needs to be activated through processes and user-centered design to bridge the gap between theory and practice

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    00:00 - What will we discuss in this episode?
    01:22 - What are AI Risk Management Frameworks?
    03:03 - Understanding NIST's Generative AI Profile
    04:00 - What's the difference between NIST's AI RMF and the GenAI Profile?
    08:38 - What are other equivalent AI RMFs?
    10:00 - How do we engage with AI Risk Management Frameworks?
    14:28 - Evaluating the Effectiveness of Frameworks
    17:20 - Challenges of Framework Evaluation
    21:05 - Evaluation Metrics are NOT always quantitative
    22:32 - Frameworks are inert; they need to be activated
    24:40 - The Gap of Implementing a Framework in Practice
    26:45 - User-centered Design solutions to address the gap
    28:36 - Consensus-based framework creation is a chaotic process
    31:30 - Tip for small businesses to amplify their profile in RAI
    31:30 - Takeaways


    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    35 mins
  • 🧐 Responsible AI is NOT the icing on the cake | irResponsible AI EP4S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode filled with hot takes, Upol and Shea discuss three things:
    ✅ How the Gemini Scandal unfolded
    ✅ Is Responsible AI too woke? Or is there a hidden agenda?
    ✅ What companies can do to address such scandals

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    0:00 - Introduction
    1:25 - How the Gemini Scandal unfolded
    5:30 - Selective outrage: hidden social justice warriors?
    7:44 - Should we expect Generative AI to be historically accurate?
    11:53 - Responsible AI is NOT the icing on the cake
    14:58 - How Google and other companies should respond
    16:46 - Immature Responsible AI leads to irresponsible AI
    19:54 - Is Responsible AI too woke?
    22:00 - Identity politics in Responsible AI
    23:21 - What can tech companies do to solve this problem?
    26:43 - Responsible AI is a process, not a product
    28:54 - The key takeaways from the episode

    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    31 mins
  • 🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI:
    ✅ Taylor Swift's deepfake scandal and what it did for RAI
    ✅ Do famous people need to be harmed before we do anything about it?
    ✅ How to address the deepfake problem at the systemic and symptomatic levels

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    0:00 - Introduction
    01:20 - Taylor Swift Deepfakes: what happened
    02:43 - Does disaster need to strike famous people for us to move the needle?
    06:31 - What role can RAI play to address this deepfake problem?
    07:19 - Disagreement! Deep fakes have both systemic and symptomatic causes
    09:28 - Deep fakes, Elections, EU AI Act, and US State legislations
    11:45 - The post-truth era powered by AI
    15:40 - Watermarking AI generated content and the difficulty
    19:26 - The enshittification of the internet
    22:00 - Three actionable takeaways


    #ResponsibleAI #ExplainableAI #podcasts #aiethics #taylorswift

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    23 mins
  • 🌶️ Cutting through the Responsible AI hype: how to enter the field | irResponsible AI EP2S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    It gets spicy in this episode of irResponsible AI:
    ✅ Cutting through the Responsible AI hype to separate experts from "AI influencers" (grifters)
    ✅ How you can break into Responsible AI consulting
    ✅ How the EU AI Act discourages irresponsible AI
    ✅ How we can nurture a "cohesively diverse" Responsible AI community

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    0:00 - Introduction
    0:45 - The 3 topics we cover in the episode
    1:52 - Is RAI all hype?
    4:14 - Who to trust and who to ignore in RAI
    10:35 - How can newcomers to RAI navigate through the hype?
    13:36 - How to break into responsible AI consulting
    15:56 - Do people need to have a PhD to get into this?
    18:52 - Responsible AI is inherently sociotechnical (not just technical)
    21:54 - Why we need "cohesive diversity" in RAI, not just diversity
    23:57 - The EU AI Act's draft and discouraging irresponsible AI
    27:26 - We need Responsible AI not Regulatory AI
    29:03 - Why we need early cross-pollination between RAI and other domains
    31:39 - The range of diverse work in real-world RAI
    33:20 - Outro

    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    34 mins
  • 🤯 Harms in the Algorithm's Afterlife: how to address them | irResponsible AI EP1S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode of irResponsible AI, Upol & Shea bring the heat to three topics:
    🚨 Algorithmic Imprints: harms from zombie algorithms with an example of the LAION dataset
    🚨 The FTC vs. Rite Aid Scandal and how it could have been avoided
    🚨 NIST's Trustworthy AI Institute and the future of AI regulation

    You’ll also learn:
    🔥 why AI is a tricky design material and how it impacts Generative AI and LLMs
    🔥 how AI has a "developer savior" complex and how to solve it

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Chapters:
    00:00 - What is this series about?
    01:34 - Personal Updates from Upol & Shea
    04:35 - Algorithmic Imprint: How dead algorithms can still hurt people
    06:47 - A recent example of the Imprint: LAION Dataset Scandal
    11:09 - How can we create imprint-aware algorithm design guidelines?
    11:53 - FTC vs Rite Aid Scandal: Biased Facial Recognition
    15:48 - Hilarious mistakes: Chatbot selling a car for $1
    18:14 - How could Rite Aid have prevented this scandal?
    21:28 - What's the NIST Trustworthy AI Institute?
    25:03 - Shea's wish list for the NIST working group
    27:57 - How AI is different as a design material
    30:08 - AI has a developer savior complex
    32:29 - You can move fast and break things that you can't fix
    32:40 - Audience Requests and Announcements

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    34 mins