• [HUMAN VOICE] "On green" by Joe Carlsmith
    Apr 12 2024

    Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.

    (Warning: spoilers for Yudkowsky's "The Sword of Good.")

    Examining a philosophical vibe that I think contrasts in interesting ways with "deep atheism."

    Text version here: https://joecarlsmith.com/2024/03/21/on-green

    This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief text summaries of the essays that have been released thus far: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi

    (Though: note that I haven't put the summary post on the podcast yet.)

    Source:
    https://www.lesswrong.com/posts/gvNnE6Th594kfdB3z/on-green

    Narrated by Joe Carlsmith, audio provided with permission.

    1 hr and 15 mins
  • [HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen
    Apr 12 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    This is a linkpost for https://bayesshammai.substack.com/p/conditional-on-getting-to-trade-your

    “I refuse to join any club that would have me as a member” -Marx[1]

    Adverse selection is the phenomenon in which information asymmetries in non-cooperative environments make trading dangerous. It has traditionally been understood to describe financial markets in which buyers and sellers systematically differ, such as a market for used cars in which sellers have the information advantage, where resulting feedback loops can lead to market collapses.

    In this post, I make the case that adverse selection effects appear in many everyday contexts beyond specialized markets or strictly financial exchanges. I argue that modeling many of our decisions as taking place in competitive environments analogous to financial markets will help us notice instances of adverse selection that we otherwise wouldn’t.

    The strong version of my central thesis is that conditional on getting to trade[2], your trade wasn’t all that great. Any time you make a trade, you should be asking yourself “what do others know that I don’t?”
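    That claim lends itself to a quick numerical check. The sketch below is mine, not from the post, and all its numbers are illustrative assumptions: a lemons-style used-car market where quality is uniform on [0, 1], only the seller knows it, and you bid 0.5. Conditional on the seller accepting, the cars you actually receive average around 0.25, well below what you paid.

        import random

        # Illustrative lemons-market simulation (not from the post).
        # The seller knows each car's quality; you only know your bid.
        random.seed(0)
        N = 100_000
        BID = 0.5  # your offer for a car whose quality is uniform on [0, 1]

        all_cars, bought_cars = [], []
        for _ in range(N):
            quality = random.random()   # true value, seller's private information
            all_cars.append(quality)
            if quality < BID:           # seller trades only when your bid beats the value
                bought_cars.append(quality)

        print(f"Average quality of all cars:    {sum(all_cars) / len(all_cars):.3f}")
        print(f"Average quality of cars bought: {sum(bought_cars) / len(bought_cars):.3f}")
        # ~0.500 vs ~0.250: conditional on getting to trade, you overpaid.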

    Source:
    https://www.lesswrong.com/posts/vyAZyYh3qsqcJwwPn/toward-a-broader-conception-of-adverse-selection

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    22 mins
  • [HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman
    Apr 12 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface:

    For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus. I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology.


    Source:
    https://www.lesswrong.com/posts/6dd4b4cAWQLDJEuHw/my-phd-thesis-algorithmic-bayesian-epistemology

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    13 mins
  • [HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer
    Apr 12 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    This is a linkpost for https://twitter.com/ESYudkowsky/status/144546114693741363

    I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here.

    Source:
    https://www.lesswrong.com/posts/rYq6joCrZ8m62m7ej/how-could-i-have-thought-that-faster

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    3 mins
  • LLMs for Alignment Research: a safety priority?
    Apr 6 2024
    A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.

    This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming vs technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything.
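    The coding workflow described above is essentially a feedback loop. Here is a minimal sketch of that loop, not from the post: ask_llm is a hypothetical stand-in for whatever chat-model API you use, and run_tests for your own checks.

        from typing import Callable, Tuple

        def ask_llm(prompt: str) -> str:
            """Hypothetical stand-in: call whatever chat-model API you use."""
            raise NotImplementedError

        def iterate_on_code(task: str,
                            run_tests: Callable[[str], Tuple[bool, str]],
                            max_rounds: int = 5) -> str:
            """Ask for code, test it, and feed failures back until it works."""
            code = ask_llm(f"Write code for this task:\n{task}")
            for _ in range(max_rounds):
                ok, error = run_tests(code)
                if ok:
                    return code          # the code eventually works
                code = ask_llm(f"The code fails with:\n{error}\nFix it:\n{code}")
            return code                  # best attempt after max_rounds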

    When I try to talk to LLMs about technical AI safety work, however, I just get garbage.

    I think a useful safety precaution for frontier AI models would be to make them more useful for [...]

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:
    April 4th, 2024

    Source:
    https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority

    ---

    Narrated by TYPE III AUDIO.

    21 mins
  • [HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi
    Apr 5 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    Source:
    https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/scale-was-all-we-needed-at-first

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    15 mins
  • [HUMAN VOICE] "Using axis lines for good or evil" by dynomight
    Apr 5 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    Source:
    https://www.lesswrong.com/posts/Yay8SbQiwErRyDKGb/using-axis-lines-for-good-or-evil

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    12 mins
  • [HUMAN VOICE] "Social status part 1/2: negotiations over object-level preferences" by Steven Byrnes
    Apr 5 2024

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    Source:
    https://www.lesswrong.com/posts/SPBm67otKq5ET5CWP/social-status-part-1-2-negotiations-over-object-level

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    50 mins