The Sentience Institute Podcast

By: Sentience Institute
  • Summary

  • Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, sociology, and psychology.
    © 2024 The Sentience Institute Podcast
Episodes
  • Eric Schwitzgebel on user perception of the moral status of AI
    Feb 15 2024

    I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much, and think it's sentient, that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.

    • Eric Schwitzgebel

    Why should AI systems be designed so as to not confuse users about their moral status? What would make an AI system’s sentience or moral standing clear? Are there downsides to treating an AI as not sentient even if it’s not sentient? What happens when some theories of consciousness disagree about AI consciousness? Have the developments in large language models in the last few years come faster or slower than Eric expected? Where does Eric think we will see sentience first in AI if we do?

    Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.

    Topics discussed in the episode:

    • Introduction (0:00)
    • Introduction to “AI systems must not confuse users about their sentience or moral status” (3:14)
    • Not confusing experts (5:30)
    • Not confusing general users (9:12)
    • What would make an AI system’s sentience or moral standing clear? (13:21)
    • Are there downsides to treating an AI as not sentient even if it’s not sentient? (16:33)
    • How would we implement this solution at a policy level? (25:19)
    • What happens when some theories of consciousness disagree about AI consciousness? (28:24)
    • How does this approach to uncertainty in AI consciousness relate to Jeff Sebo’s approach? (34:15)
    • Introduction to “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (36:38)
    • How does the indicator properties approach account for factors relating to consciousness that we might be missing? (39:37)
    • What was the process for determining what indicator properties to include? (42:58)
    • Advantages of the indicator properties approach (44:49)
    • Have the developments in large language models in the last few years come faster or slower than Eric expected? (46:25)
    • Where does Eric think we will see sentience first in AI if we do? (50:17)
    • Are things like grounding or embodiment essential for understanding and consciousness? (53:35)

    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

    Support the show
    58 mins
  • Raphaël Millière on large language models
    Jul 3 2023

    Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do by interacting with the world, and so interactive learning, not just passive learning. You want something that's more active, where the model is going to actually test out some hypothesis and learn from the feedback it's getting from the world about these hypotheses, in the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet and testing whether that's part of their body or not, gradually and very quickly learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.

    • Raphaël Millière

    How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language? 

    Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.

    Topics discussed in the episode:

    • Introduction (0:00)
    • How Raphaël came to work on AI (1:25)
    • How do large language models work? (5:50)
    • Deflationary and inflationary claims about large language models (19:25)
    • The dangers of overclaiming and underclaiming (25:20)
    • Summary of cognitive capacities large language models might have (33:20)
    • Intelligence (38:10)
    • Artificial general intelligence (53:30)
    • Consciousness and sentience (1:06:10)
    • Theory of mind (1:18:09)
    • Compositionality (1:24:15)
    • Language understanding and referential grounding (1:30:45)
    • Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
    • Conclusion (1:47:23)


    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

    Support the show
    1 hr and 49 mins
  • Matti Wilks on human-animal interaction and moral circle expansion
    Jan 19 2023

    Speciesism being socially learned is probably our most dominant theory of why we think we're getting the results that we're getting. But to be very clear, this is super early research. We have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs, over one person. And this was children who were about five to 10 years old. So often when you look at biases in development, something like minimal group bias, that peaks quite young.

    • Matti Wilks

    What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptableness? What are some potential interventions for moral circle expansion and spillover from and to animal advocacy?

    Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.

    Topics discussed in the episode:

    • Introduction (0:00)
    • What matters ethically? (1:00)
    • The link between animals and digital minds (3:10)
    • Higher vs lower orders of pleasure/suffering (4:15)
    • Psychology of human-animal interaction and what that means for human-robot interaction (5:40)
    • Is speciesism socially learned? (10:15)
    • Implications for animal advocacy strategy (19:40)
    • Moral expansiveness scale and the moral circle (23:50)
    • Does expanding the moral circle dilute it? (27:40)
    • Predictors for attitudes towards species and artificial sentience (30:05)
    • Correlation between naturalness and acceptableness (38:30)
    • What does our understanding of naturalness and acceptableness imply for attitudes towards cultured meat? (49:00)
    • How can we counter concerns about naturalness in cultured meat? (52:00)
    • What does our understanding of attitudes towards naturalness imply for artificial sentience? (54:00)
    • Interventions for moral circle expansion and spillover from and to animal advocacy (56:30)
    • Academic field building as a strategy for developing a cause area (1:00:50)

    Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

    Support the show
    1 hr and 6 mins
