
The AI Fundamentalists

By: Dr. Andrew Clark & Sid Mangalik
  • Summary

  • A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

    © 2024 The AI Fundamentalists
Episodes
  • Data lineage and AI: Ensuring quality and compliance with Matt Barlin
    Jul 3 2024

    Ready to uncover the secrets of modern systems engineering and the future of AI? Join us for an enlightening conversation with Matt Barlin, the Chief Science Officer of Valence. Matt's extensive background in systems engineering and data lineage sets the stage for a fascinating discussion. He sheds light on the historical evolution of the field, the critical role of documentation, and the early detection of defects in complex systems. This episode promises to expand your understanding of model-based systems and data issues, offering valuable insights that only an expert of Matt's caliber can provide.

    In the heart of our episode, we dive into the fundamentals and transformative benefits of data lineage in AI. Matt draws intriguing parallels between data lineage and the engineering life cycle, stressing the importance of tracking data origins, access rights, and verification processes. Discover how decentralized identifiers are paving the way for individuals to control and monetize their own data. With third-party cookies being phased out and a looming shortage of human-generated training data, we explore how systems like retrieval-augmented generation (RAG) and regulations like the EU AI Act are shaping the landscape of AI data quality and compliance.
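
    To make lineage tracking concrete, here is a minimal, hypothetical Python sketch of what a lineage record might capture: the data’s origin, who may access it, the transformations applied, and a checksum for verification. The class and field names are illustrative assumptions, not Valence’s schema or any formal lineage standard.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    import hashlib

    @dataclass
    class LineageRecord:
        """Hypothetical lineage record: origin, access rights, and verification."""
        dataset_id: str                                   # which dataset this describes
        source: str                                       # upstream system the data came from
        access_rights: list[str]                          # who or what may read the data
        transformations: list[str] = field(default_factory=list)  # processing steps applied so far
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )
        checksum: str = ""                                # fingerprint used for verification

        def verify(self, raw_bytes: bytes) -> bool:
            """Confirm the data we hold still matches what was recorded at ingest."""
            return hashlib.sha256(raw_bytes).hexdigest() == self.checksum

    # Usage: stamp a record when data is ingested, re-verify before training.
    data = b"age,income\n34,52000\n"
    record = LineageRecord(
        dataset_id="customers-2024-06",
        source="crm_export",
        access_rights=["analytics_team"],
        checksum=hashlib.sha256(data).hexdigest(),
    )
    assert record.verify(data)  # passes until the underlying bytes change
    ```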

    Don’t miss this thought-provoking episode that promises to keep you at the forefront of responsible AI.

    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    28 mins
  • Differential privacy: Balancing data privacy and utility in AI
    Jun 4 2024

    Explore the basics of differential privacy and its critical role in protecting individual anonymity. The hosts explain the latest guidelines and best practices for applying differential privacy to the data used to build models such as AI systems. Learn how this method helps keep personal data confidential, even when datasets are analyzed or breached.

    Show Notes

    • Intro and AI news (00:00)
      • Google AI search tells users to put glue on pizza and eat rocks
      • Gary Marcus on break? (Maybe, and only from X)
    • What is differential privacy? (06:34)
      • Differential privacy is a process for sensitive data anonymization that offers each individual in a dataset the same privacy they would experience if they were removed from the dataset entirely.
      • NIST’s recent paper SP 800-226 IPD: “Any privacy harms that result from a differentially private analysis could have happened if you had not contributed your data.”
      • There are two main types of differential privacy: global (which NIST calls central) and local
    • Why should people care about differential privacy? (11:30)
      • Organizations are increasingly motivated to intentionally and systematically prioritize the privacy and safety of user data
      • Speed up deployments of AI systems for enterprise customers since connections to raw data do not need to be established
      • Increase data security for customers that utilize sensitive data in their modeling systems
      • Minimize the risk of exposing the sensitive data you are entrusted with - i.e., don’t be THAT organization
    • Guidelines and resources for applied differential privacy
      • Guidelines for Evaluating Differential Privacy Guarantees (NIST SP 800-226)
      • NIST De-Identification
    • Practical examples of applied differential privacy (15:58)
      • Continuous Features - cite: Dwork, McSherry, Nissim, and Smith’s seminal 2006 paper, “Calibrating Noise to Sensitivity in Private Data Analysis,” which introduces the concept of ε-differential privacy (see the Laplace-mechanism sketch after these notes)
      • Categorical Features - cite: Warner (1965) introduced a randomized response technique in his paper “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias” (see the randomized-response sketch after these notes)
    • Summary and key takeaways (23:59)
      • Differential privacy is going to be a part of how many of us need to manage data privacy
      • Use it when data providers can’t give us anonymized data for analysis, or when anonymization alone isn’t enough for our privacy needs
      • Hopeful that cohort targeting takes over from individual targeting
      • Remember: Differential privacy does not prevent bias!
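
    For the continuous-features discussion above, here is a minimal Python sketch of the Laplace mechanism behind ε-differential privacy: noise is drawn with a scale equal to the query’s sensitivity divided by ε. The function names and clipping bounds are illustrative assumptions, not a production implementation.

    ```python
    import numpy as np

    def dp_count(values, epsilon: float) -> float:
        """Noisy count. Adding or removing one person changes a count by at
        most 1, so the sensitivity is 1."""
        sensitivity = 1.0
        return len(values) + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    def dp_mean(values, lower: float, upper: float, epsilon: float) -> float:
        """Noisy mean of values clipped to [lower, upper]. Clipping bounds each
        person's influence, which bounds the sensitivity."""
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)   # max shift from one record
        return float(np.mean(clipped)) + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Smaller epsilon -> more noise -> stronger privacy, lower utility.
    ages = [34, 45, 29, 61, 38, 52]
    print(dp_count(ages, epsilon=1.0))
    print(dp_mean(ages, lower=18, upper=90, epsilon=0.5))
    ```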
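
    And for categorical features, a minimal sketch of the coin-flip flavor of randomized response in the spirit of Warner (1965): each respondent sometimes answers at random, which gives individual deniability while the population rate remains estimable. The parameter values (e.g., p_truth = 0.75) are illustrative assumptions.

    ```python
    import random

    def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
        """With probability p_truth report the honest answer; otherwise answer
        uniformly at random, so no single response can be taken at face value."""
        if random.random() < p_truth:
            return truth
        return random.random() < 0.5

    def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
        """Invert the known noise process: observed = p_truth * true + (1 - p_truth) * 0.5."""
        observed = sum(responses) / len(responses)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    # Usage: 10,000 simulated respondents, 30% of whom would truthfully answer "yes".
    truths = [random.random() < 0.30 for _ in range(10_000)]
    noisy = [randomized_response(t) for t in truths]
    print(estimate_true_rate(noisy))  # close to 0.30, yet no individual answer is trusted
    ```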


    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    28 mins
  • Responsible AI: Does it help or hurt innovation? With Anthony Habayeb
    May 7 2024

    Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.

    Show notes

    Prologue: Why responsible AI? Why now? (00:00:00)

    • Deviating from our normal topics about modeling best practices
    • Context about where regulation plays a role in industries besides big tech
    • Can we learn from other industries about the role of "responsibility" in products?

    Special guest, Anthony Habayeb (00:02:59)

    • Introductions and start of the discussion
    • Of all the companies you could build around AI, why governance?

    Is responsible AI the right phrase? (00:11:20)

    • Should we even call good modeling and business practices "responsible AI"?
    • Is having responsible AI a “want to have” or a “need to have”?

    Importance of AI regulation and responsibility (00:14:49)

    • People in the AI and regulation worlds have started pushing back on Responsible AI.
    • Do regulations impede freedom?
    • Discussing the big picture of responsibility and governance: Explainability, repeatability, records, and audit

    What about bias and fairness? (00:22:40)

    • You can have fair models that operate with bias
    • Bias in practice identifies inequities that models have learned
    • Fairness is correcting for societal biases to level the playing field so that safer business and modeling practices can prevail (see the illustrative sketch after this list).
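
    As a loose illustration of the distinction discussed above (not Anthony’s method or any particular framework), here is a short Python sketch: measure the approval-rate disparity a model has learned across two groups, then apply one crude correction by adjusting per-group thresholds. All numbers are made up.

    ```python
    import numpy as np

    scores = np.array([0.62, 0.71, 0.55, 0.80, 0.40, 0.66])  # made-up model scores
    group = np.array(["a", "a", "b", "a", "b", "b"])          # made-up protected attribute

    def approval_rates(scores, group, thresholds):
        """Share of each group approved under per-group score thresholds."""
        approved = np.array([s >= thresholds[g] for s, g in zip(scores, group)])
        return {str(g): float(approved[group == g].mean()) for g in sorted(set(group))}

    # Bias check: one shared threshold reveals the disparity the model has learned.
    print(approval_rates(scores, group, {"a": 0.6, "b": 0.6}))   # group "a" favored

    # One crude fairness correction: per-group thresholds that narrow the gap.
    print(approval_rates(scores, group, {"a": 0.7, "b": 0.55}))
    ```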

    Responsible deployment and business management (00:35:10)

    • Discussion about what organizations get right about responsible AI
    • And what organizations can get completely wrong if they aren't careful.

    Embracing responsible AI practices (00:41:15)

    • Getting your teams, companies, and individuals involved in the movement towards building AI responsibly

    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    46 mins
