Episodes

  • Metrics to Detect Hallucinations with Pradeep Javangula
    May 2 2024

    In this episode, we’re joined by Pradeep Javangula, Chief AI Officer at RagaAI.

    Deploying LLM applications for real-world use cases requires a comprehensive workflow to ensure they generate high-quality and accurate content. Testing, fixing issues, and measuring impact are critical steps in that workflow, helping LLM applications deliver value.

    Pradeep Javangula, Chief AI Officer at RagaAI, will discuss strategies and practical approaches organizations can follow to maintain high-performing, correct, and safe LLM applications.

    59 mins
  • AI Safety and Alignment with Amal Iyer
    Mar 7 2024

    In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI.

    Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. However, this progress emphasizes the importance of aligning AI with human values to ensure its safe and beneficial societal integration. In this talk, we will provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness and interpretability.

    57 mins
  • Managing the Risks of Generative AI with Kathy Baxter
    Jan 23 2024

    On this episode, we’re joined by Kathy Baxter, Principal Architect of Responsible AI & Tech at Salesforce.

    Generative AI has become widely popular, with organizations finding ways to drive innovation and business growth. Adoption of generative AI, however, remains low due to ethical implications and unintended consequences that negatively impact organizations and their consumers.

    Baxter will discuss ethical AI practices organizations can follow to minimize potential harms and maximize the social benefits of AI.

    57 mins
  • Legal Frontiers of AI with Patrick Hall
    Dec 21 2023

    On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI.

    We will delve into critical aspects of AI, such as model risk management, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.

    59 mins
  • Building Generative AI Applications for Production with Chaoyu Yang
    Sep 29 2023

    On this episode, we’re joined by Chaoyu Yang, Founder and CEO at BentoML.

    AI-forward enterprises across industries are building generative AI applications to transform their businesses. While AI teams must weigh several factors, ranging from ethical and social considerations to overall AI strategy, technical challenges remain in deploying these applications to production.

    Yang will explore key aspects of generative AI application development and deployment.

    59 mins
  • Graph Neural Networks and Generative AI with Jure Leskovec
    Sep 2 2023

    On this episode, we’re joined by Jure Leskovec, Stanford professor and co-founder at Kumo.ai.

    Graph neural networks (GNNs) are gaining popularity in the AI community, helping ML teams build advanced AI applications that provide deep insights into real-world problems. Leskovec, whose work sits at the intersection of graph neural networks, knowledge graphs, and generative AI, will explore how organizations can incorporate GNNs into their generative AI initiatives.

    52 mins
  • Machine Learning for High Risk Applications with Parul Pandey
    Jul 26 2023

    On this episode, we’re joined by Parul Pandey, Principal Data Scientist at H2O.ai and co-author of Machine Learning for High-Risk Applications.

    Although AI is being widely adopted, it poses several adversarial risks that can harm organizations and users. Listen to this episode to learn how data scientists and ML practitioners can improve AI outcomes with proper model risk management techniques.

    55 mins
  • AI Safety in Generative AI with Peter Norvig
    Jun 30 2023

    On this episode, we’re joined by Peter Norvig, a Distinguished Education Fellow at the Stanford Institute for Human-Centered AI and co-author of popular books on AI, including Artificial Intelligence: A Modern Approach and more recently, Data Science in Context.

    AI has the potential to improve humanity’s quality of life and day-to-day decisions. However, these advancements come with challenges of their own that can cause harm. Listen to this episode to learn considerations and best practices organizations can adopt to preserve human control and ensure transparent and equitable AI.

    40 mins