Episodes

  • NYC AI Bias Law: One Year In and What to Consider
    Jul 1 2024
    Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates bias audits of AI tools used in hiring, ensuring fair and equitable practices in the workplace.
    Episode Highlights:
    • Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.
    • Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.
    • Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance, including the nuances of data sharing, audit requirements, and maintaining compliance.
    • Data Types and Testing: A detailed explanation of historical data vs. test data and their roles in bias audits.
    • Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.
    This episode is packed with valuable information for employers, HR professionals, and AI tool providers working to comply with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.
    🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.
    20 m
  • Understanding Colorado's New AI Consumer Protection Law
    Jun 3 2024
    In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.
    Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: enjoy 20% off all courses using the coupon code "BABLING20." Explore our courses here: https://courses.babl.ai/
    For a deeper dive into Colorado's AI law, check out our detailed blog post, "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law," and don't forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights. Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/
    Timestamps:
    00:21 - Welcome and Introductions
    00:43 - Overview of Colorado's AI Consumer Protection Law
    01:52 - State vs. Federal Initiatives in AI Regulation
    04:00 - Detailed Discussion on the Law's Provisions
    07:02 - Risk Management and Compliance Techniques
    09:51 - Importance of Proper Documentation
    12:21 - Developer and Deployer Obligations
    17:12 - Strategies for Public Disclosure and Risk Notification
    20:48 - Annual Impact Assessments
    22:44 - Transparency in AI Decision-Making
    24:05 - Consumer Rights in AI Decisions
    26:03 - Public Disclosure Requirements
    28:36 - Final Thoughts and Takeaways
    Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!
    31 m
  • NIST AI Risk Management Framework & Generative AI Profile
    May 6 2024
    🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.
    🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.
    📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode digs into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.
    🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion covers the implications of misinformation and disinformation campaigns fueled by generative AI technologies.
    🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.
    🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.
    🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!
    44 m
  • The EU AI Act: Prohibited and High-Risk Systems and why you should care
    Apr 8 2024
    In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, COO of BABL AI. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passage of the EU AI Act by the European Parliament and its implications for businesses and individuals alike.
    Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU. The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.
    Key Topics Covered:
    • Overview of the EU AI Act and its journey to enactment
    • Differentiating prohibited and high-risk AI systems
    • Understanding biases in AI algorithms and their implications
    • Compliance challenges and the importance of early action
    • How BABL AI supports organizations in achieving compliance and building trust
    Why You Should Tune In: Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.
    Don't Miss Out: Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.
    25 m
  • Live Webinar Q&A Recording: Finding Your Place in AI Ethics Consulting
    Mar 18 2024
    Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&A session on carving out a niche in AI ethics consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies.
    In This Episode:
    • Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI.
    • Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm.
    • Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics.
    • Live Q&A Highlights: Audience questions range from enrolling in AI ethics courses and the role of lawyers in AI audits to the importance of philosophy in AI ethics consulting.
    • Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one's niche.
    • Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits.
    • Building a Career in AI Ethics: Discussion of the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams.
    Key Takeaways:
    • The essential blend of skills needed in AI ethics consulting.
    • Insights into the challenges and opportunities in the field of AI ethics.
    • Practical advice for individuals looking to enter or pivot into AI ethics consulting.
    Don't miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you're new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey.
    Listeners can use coupon code "FREEFEB" to get our "Finding Your Place in AI Ethics Consulting" course for free; the link is on our website. Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% on all our course offerings.
    59 m
  • NIST, ISO 42001, and BABL AI online courses
    Feb 19 2024
    Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.
    What's Inside:
    1. BABL AI Joins the NIST Consortium: We kick off with the announcement that BABL AI has officially become part of the NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.
    2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry.
    3. Aligning Education with Innovation: We also explore how BABL AI's online courses align with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance.
    Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge aligned with the latest standards and frameworks.
    11 m
  • Navigating Global AI Regulatory Compliance
    Feb 5 2024
    Sign up for free for our online course "Finding Your Place in AI Ethics Consulting" during the month of February 2024.
    🌍 In this episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally.
    🔍 Highlights of This Episode:
    • EU AI Act: Your Compliance Compass - Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges.
    • Common Grounds in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes across global regulatory requirements.
    • Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences.
    • NIST's Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI.
    🚀 Takeaway: This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.
    👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable, and share it with your network to spread the knowledge.
    #AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence
    12 m
  • Exploring the socio-technical side of AI Ethics (Re-uploaded) | Lunchtime BABLing .07
    Jan 29 2024
    Sign up for free during the month of February for our online course "Finding Your Place in AI Ethics Consulting." Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting
    Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING." Link here: https://babl.ai/courses/
    🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.
    🎙️ Join our host, Shea Brown, as he welcomes a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.
    🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives. Link to the paper here: https://arxiv.org/abs/2209.00692
    💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics.
    👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.
    📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going!
    🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads.
    #LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics
    50 m