• Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24 2024
    Jul 19 2024

    When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

    Ali Alkhatib is a computer scientist and former director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on human-computer interaction and on why our technological problems are really social -- and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

    References:

    Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

    Fresh AI Hell:

    Hacker tool extracts all the data collected by Windows' 'Recall' AI

    In NYC, ShotSpotter calls are 87 percent false alarms

    "AI" system to make callers sound less angry to call center workers

    Anthropic's Claude 3.5 Sonnet evaluated for "graduate-level reasoning"

    OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence

    OpenAI's Mira Murati also says AI will take some creative jobs -- jobs which, she suggests, maybe shouldn't have existed in the first place



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/@alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hr and 2 mins
  • Episode 35: AI Overviews and Google's AdTech Empire (feat. Safiya Noble), June 10 2024
    Jul 3 2024

    You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.

    References:

    Blog post, May 14: Generative AI in Search: Let Google do the searching for you
    Blog post, May 30: AI Overviews: About last week

    Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Noble

    Fresh AI Hell:

    AI Catholic priest demoted after saying it's OK to baptize babies with Gatorade

    National Archives bans use of ChatGPT

    ChatGPT better than humans at "Moral Turing Test"

    Taco Bell as an "AI first" company

    AGI by 2027, in one hilarious graph



    1 hr and 2 mins
  • Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx, June 3 2024
    Jun 20 2024

    The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws.

    References:

    Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States

    Tech Policy Press: US Senate AI Insight Forum Tracker

    Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap

    Emily's opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” virtual roundtable

    Fresh AI Hell:

    Homophobia in Spotify's chatbot

    StackOverflow in bed with OpenAI, pushing back against resistance

    • See also: https://scholar.social/@dingemansemark/112411041956275543

    OpenAI making copyright claim against ChatGPT subreddit

    Introducing synthetic text for police reports

    ChatGPT-like "AI" assistant ... as a car feature?

    Scarlett Johansson vs. OpenAI


    1 hr and 4 mins
  • Episode 33: Much Ado About 'AI' 'Deception', May 20 2024
    Jun 5 2024

    Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at Hugging Face, break down why 'AI deception' is firmly a feature of human hype.

    Reference:

    Patterns: "AI deception: A survey of examples, risks, and potential solutions"

    Fresh AI Hell:

    Adobe's 'ethical' image generator is still pulling from copyrighted material

    Apple advertising hell: vivid depiction of tech crushing creativity, as if it were good

    "AI is more creative than 99% of people"

    AI generated employee handbooks causing chaos

    Bumble CEO: Let AI 'concierge' do your dating for you.

    • Some critique


    1 hr and 1 min
  • Episode 32: A Flood of AI Hell, April 29 2024
    May 23 2024

    AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily written sea shanty*, they tour the realms, from spills of synthetic information, to the special corner reserved for ShotSpotter.

    *Lyrics & video on PeerTube.

    *Surveillance:*

    • Public kiosks slurp phone data
    • Workplace surveillance
    • Surveillance by bathroom mirror
    • Stalking-as-a-service
    • Cops tap everyone else's videos
    • Facial recognition at the doctor's office

    *Synthetic information spills:*

    • Amazon products called “I cannot fulfill that request”
    • AI-generated obituaries
    • X's Grok treats Twitter trends as news
    • Touch the button. Touch it.
    • Meta’s chatbot enters private discussions
    • WHO chatbot makes up medical info

    *Toxic wish fulfillment:*

    • Fake photos of real memories

    *ShotSpotter:*

    • ShotSpotter adds surveillance to the over-policed
    • Chicago ending ShotSpotter contract
    • But they're listening anyway

    *Selling your data:*

    • Reddit sells user data
    • Meta sharing user DMs with Netflix
    • Scraping Discord

    *AI is always people:*

    • Amazon Fresh
    • 3D art
    • George Carlin impressions
    • The people behind image selection

    *TESCREAL corporate capture:*

    • Biden worried about AI because of "Mission: Impossible"
    • Feds appoint AI doomer to run US AI safety institute
    • Altman & friends will serve on AI safety board

    *Accountability:*

    • FTC denies facial recognition for age estimation
    • SEC goes after misleading claims
    • Uber Eats courier wins payout over ‘racist’ facial recognition app


    58 mins
  • Episode 31: Science Is a Human Endeavor (feat. Molly Crockett and Lisa Messeri), April 15 2024
    May 7 2024

    Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join Alex and Emily for a takedown of the hype around "self-driving labs" -- and explain why such misrepresentations also harm the humans who are vital to scientific research.

    Dr. Molly Crockett is an associate professor of psychology at Princeton University.
    Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book, In the Land of the Unreal: Virtual and Other Realities in Los Angeles.


    References:

    AI For Scientific Discovery - A Workshop
    Nature: The Nobel Turing Challenge
    Nobel Turing Challenge Website
    Eric Schmidt: AI Will Transform Science
    Molly Crockett & Lisa Messeri in Nature: Artificial intelligence and illusions of understanding in scientific research
    404 Media: Is Google's AI actually discovering 'millions of new materials?'

    Fresh Hell:

    Yann LeCun realizes generative AI sucks, suggests shift to objective-driven AI
    In contrast:
    https://x.com/ylecun/status/1592619400024428544
    https://x.com/ylecun/status/1594348928853483520
    https://x.com/ylecun/status/1617910073870934019

    CBS News: Upselling “AI” mammograms
    Ars Technica: Rhyming AI clock sometimes lies about the time
    Ars Technica: Surveillance by M&M's vending machine


    1 hr and 3 mins
  • Episode 30: Marc's Miserable Manifesto, April 1 2024
    Apr 19 2024

    Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

    Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that, she served as co-lead of Google's Ethical AI research team until December 2020, when she was fired for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

    References:
    Marc Andreessen: "The Techno-Optimist Manifesto"
    First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
    Business Insider: Explaining 'Pronatalism' in Silicon Valley


    Fresh AI Hell:
    CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says.
    The Markup: NYC's AI chatbot tells businesses to break the law

    • Read Emily's Twitter / Mastodon thread about this chatbot.

    The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
    The Guardian: Wearable AI: Will it put our smartphones out of fashion?
    TheCurricula.com


    1 hr and 1 min
  • Episode 29: How LLMs Are Breaking the News (feat. Karen Hao), March 25 2024
    Apr 3 2024

    Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications.

    References:

    Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform

    The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?

    Fresh AI Hell:

    Alliance for the Future

    VentureBeat: Google researchers unveil ‘VLOGGER’, an AI that can bring still photos to life

    Business Insider: A car dealership added an AI chatbot to its site. Then all hell broke loose.

    • More pranks on chatbots


    1 hr and 3 mins