Episodios

  • “Decomposing Agency — capabilities without desires” by owencb, Raymond D
    Jul 29 2024
This is a link post. What is an agent? It's a slippery concept with no commonly accepted formal definition, but informally the concept seems to be useful. One angle on it is Dennett's Intentional Stance: we think of an entity as being an agent if we can more easily predict it by treating it as having some beliefs and desires which guide its actions. Examples include cats and countries, but the central case is humans.

    The world is shaped significantly by the choices agents make. What might agents look like in a world with advanced — and even superintelligent — AI? A natural approach for reasoning about this is to draw analogies from our central example. Picture what a really smart human might be like, and then try to figure out how it would be different if it were an AI. But this approach risks baking in subtle assumptions — [...]

    The original text contained 5 footnotes which were omitted from this narration.

    The original text contained 7 images which were described by AI.

    ---

    First published:
    July 11th, 2024

    Source:
    https://www.lesswrong.com/posts/jpGHShgevmmTqXHy5/decomposing-agency-capabilities-without-desires

    ---

    Narrated by TYPE III AUDIO.

    24 m
  • “Universal Basic Income and Poverty” by Eliezer Yudkowsky
    Jul 27 2024
    (Crossposted from Twitter)

    I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty.

    Some of my friends reply, "What do you mean, poverty is still around? 'Poor' people today, in Western countries, have a lot to legitimately be miserable about, don't get me wrong; but they also have amounts of clothing and fabric that only rich merchants could afford a thousand years ago; they often own more than one pair of shoes; why, they even have cellphones, as not even an emperor of the olden days could have had at any price. They're relatively poor, sure, and they have a lot of things to be legitimately sad about. But in what sense is almost-anyone in a high-tech country 'poor' by the standards of a thousand years earlier? Maybe UBI works the same way [...]

    ---

    First published:
    July 26th, 2024

    Source:
    https://www.lesswrong.com/posts/fPvssZk3AoDzXwfwJ/universal-basic-income-and-poverty

    ---

    Narrated by TYPE III AUDIO.

    16 m
  • “Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon
    Jul 19 2024
    Eliezer Yudkowsky periodically complains about people coming up with questionable plans with questionable assumptions to deal with AI, and then either:

    • Saying "well, if this assumption doesn't hold, we're doomed, so we might as well assume it's true."
    • Worse: coming up with cope-y reasons to assume that the assumption isn't even questionable at all. It's just a pretty reasonable worldview.
    Sometimes the questionable plan is "an alignment scheme, which Eliezer thinks avoids the hard part of the problem." Sometimes it's a sketchy reckless plan that's probably going to blow up and make things worse.

    Some people complain about Eliezer being a doomy Negative Nancy who's overly pessimistic.

I had an interesting experience a few months ago when I ran some beta-tests of my Planmaking and Surprise Anticipation workshop that I think are illustrative.

    i. Slipping into a more Convenient World

    I have an exercise where I give people [...]

    ---

    Outline:

    (00:59) i. Slipping into a more Convenient World

    (04:26) ii. Finding traction in the wrong direction.

    (06:47) Takeaways

    ---

    First published:
    July 17th, 2024

    Source:
    https://www.lesswrong.com/posts/8ZR3xsWb6TdvmL8kx/optimistic-assumptions-longterm-planning-and-cope

    ---

    Narrated by TYPE III AUDIO.

    13 m
  • “Superbabies: Putting The Pieces Together” by sarahconstantin
    Jul 15 2024
    This post was inspired by some talks at the recent LessOnline conference including one by LessWrong user “Gene Smith”.

    Let's say you want to have a “designer baby”. Genetically extraordinary in some way — super athletic, super beautiful, whatever.

    6’5”, blue eyes, with a trust fund.

    Ethics aside[1], what would be necessary to actually do this?

    Fundamentally, any kind of “superbaby” or “designer baby” project depends on two steps:

    1.) figure out what genes you ideally want;

    2.) create an embryo with those genes.

    It's already standard to do a very simple version of this two-step process. In the typical course of in-vitro fertilization (IVF), embryos are usually screened for chromosomal abnormalities that would cause disabilities like Down Syndrome, and only the “healthy” embryos are implanted.

    But most (partially) heritable traits and disease risks are not as easy to predict.

    Polygenic Scores

    If what you care about is [...]
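
As a minimal sketch of the standard additive model behind such predictors (illustrative only; the variant IDs and effect sizes below are hypothetical, not taken from the post): a polygenic score is a weighted sum of an individual's genotype dosages, with per-variant weights estimated from genome-wide association studies.

    # Polygenic score as a weighted sum of genotype dosages.
    # Variant IDs and effect sizes are hypothetical, for illustration only.
    def polygenic_score(dosages, betas):
        # dosages: variant ID -> count of effect alleles (0, 1, or 2)
        # betas: variant ID -> estimated per-allele effect size
        return sum(betas[v] * d for v, d in dosages.items() if v in betas)

    dosages = {"rs1": 2, "rs2": 0, "rs3": 1}
    betas = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}
    print(polygenic_score(dosages, betas))  # 0.12*2 + 0.30*1 = 0.54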

    ---

    Outline:

    (01:16) Polygenic Scores

    (03:35) Massively Multiplexed, Body-Wide Gene Editing? Not So Much, Yet.

    (06:45) Embryo Selection

    (07:52) “Iterated Embryo Selection”?

    (09:33) Iterated Meiosis?

    (13:29) Generating Naive Pluripotent Cells

    (16:50) What's Missing?

    The original text contained 5 footnotes which were omitted from this narration.

    The original text contained 2 images which were described by AI.

    ---

    First published:
    July 11th, 2024

    Source:
    https://www.lesswrong.com/posts/2uJsiQqHTjePTRqi4/superbabies-putting-the-pieces-together

    ---

    Narrated by TYPE III AUDIO.

    19 m
  • “Poker is a bad game for teaching epistemics. Figgie is a better one.” by rossry
    Jul 12 2024
This is a link post. Editor's note: Somewhat after I posted this on my own blog, Max Chiswick cornered me at LessOnline / Manifest and gave me a whole new perspective on this topic. I now believe that there is a way to use poker to sharpen epistemics that works dramatically better than anything I had been considering. I hope to write it up—together with Max—when I have time. Anyway, I'm still happy to keep this post around as a record of my first thoughts on the matter, and because it's better than nothing in the time before Max and I get around to writing up our joint second thoughts.

As an epilogue to this story, Max and I are now running a beta test for a course on making AIs to play poker and other games. The course will be a synthesis of our respective theories of pedagogy re [...]

    ---

    First published:
    July 8th, 2024

    Source:
    https://www.lesswrong.com/posts/PypgeCxFHLzmBENK4/poker-is-a-bad-game-for-teaching-epistemics-figgie-is-a

    ---

    Narrated by TYPE III AUDIO.

    18 m
  • “Reliable Sources: The Story of David Gerard” by TracingWoodgrains
    Jul 11 2024
    This is a linkpost for https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin, posted in full here given its relevance to this community. Gerard has been one of the longest-standing malicious critics of the rationalist and EA communities and has done remarkable amounts of work to shape their public images behind the scenes.

    Note: I am closer to this story than to many of my others. As always, I write aiming to provide a thorough and honest picture, but this should be read as the view of a close onlooker who has known about much within this story for years and has strong opinions about the matter, not a disinterested observer coming across something foreign and new. If you’re curious about the backstory, I encourage you to read my companion article after this one.

    Introduction: Reliable Sources

    Wikipedia administrator David Gerard cares a great deal about Reliable Sources. For the past half-decade, he has torn [...]

    ---

    Outline:

    (00:55) Introduction: Reliable Sources

    (06:01) Gerard's Standards for Reliable Sources

    (13:53) Who Is David Gerard?

    (16:53) The Early Romantic Years

    (28:00) Gerard's fling with LessWrong in the twilight of the old internet

    (37:51) The bitter end

    (45:26) The Vindictive Ex

    (50:01) LessWrong

    (01:04:17) Effective Altruism

    (01:07:55) Scott Alexander

    (01:16:22) Conclusion

    (01:21:58) Companion article: A Young Mormon Discovers Online Rationality

    The original text contained 24 footnotes which were omitted from this narration.

    The original text contained 12 images which were described by AI.

    ---

    First published:
    July 10th, 2024

    Source:
    https://www.lesswrong.com/posts/3XNinGkqrHn93dwhY/reliable-sources-the-story-of-david-gerard

    ---

    Narrated by TYPE III AUDIO.

1 h 22 m
  • “When is a mind me?” by Rob Bensinger
    Jul 8 2024
    xlr8harder writes:

    In general I don’t think an uploaded mind is you, but rather a copy. But one thought experiment makes me question this. A Ship of Theseus concept where individual neurons are replaced one at a time with a nanotechnological functional equivalent.

    Are you still you?

Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you", or predictions about how whole-brain emulation tech might change the way we use pronouns.

    Rather, I assume xlr8harder cares about more substantive questions like:

    1. If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?
    2. Should I anticipate experiencing what my upload experiences?
    3. If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure?
My answers: [...]

The original text contained 1 footnote which was omitted from this narration.

The original text contained 7 images which were described by AI.

---

First published:
April 17th, 2024

Source:
https://www.lesswrong.com/posts/zPM5r3RjossttDrpw/when-is-a-mind-me

---

Narrated by TYPE III AUDIO.

    27 m
  • “80,000 hours should remove OpenAI from the Job Board (and similar orgs should do similarly)” by Raemon
    Jul 4 2024
I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but I prefer that discussion to be public.

    I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement services should do the same.

(I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear.)

I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into.)

    There [...]

    ---

    First published:
    July 3rd, 2024

    Source:
    https://www.lesswrong.com/posts/8qCwuE8GjrYPSqbri/80-000-hours-should-remove-openai-from-the-job-board-and

    ---

    Narrated by TYPE III AUDIO.

    13 m