Episodes
  • LW - Secondary forces of debt by KatjaGrace
    Jun 28 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Secondary forces of debt, published by KatjaGrace on June 28, 2024 on LessWrong.

    A general thing I hadn't noticed about debts until lately: Whenever Bob owes Alice, then Alice has reason to look after Bob, to the extent that doing so increases the chance he satisfies the debt. Yet at the same time, Bob has an incentive for Alice to disappear, insofar as it would relieve him. These might be tiny incentives, and not overwhelm, for instance, Bob's many reasons for not wanting Alice to disappear. But the bigger the owing, the more relevant the incentives.

    When big enough, the former comes up as entities being "too big to fail", and potentially rescued from destruction by those who would like them to repay or provide something expected of them in future. But the opposite must exist also: too big to succeed - where the abundance owed to you is so off-putting to provide that those responsible for it would rather disempower you. And if both kinds of incentive are around in wisps whenever there is a debt, surely they often get big enough to matter, even before they become the main game. For instance, if everyone around owes you a bit of money, I doubt anyone will murder you over it. But I wouldn't be surprised if it motivated a bit more political disempowerment for you on the margin.

    There is a lot of owing that doesn't arise from formal debt, where these things also apply. If we both agree that I - as your friend - am obliged to help you get to the airport, you may hope that I have energy and fuel and am in a good mood. Whereas I may (regretfully) be relieved when your flight is canceled. Money is an IOU from society for some stuff later, so having money is another kind of being owed. Perhaps this is part of the common resentment of wealth.

    I tentatively take this as a reason to avoid debt in all its forms more: it's not clear that the incentives of alliance in one direction make up for the trouble of the incentives for enmity in the other. And especially so when they are considered together - if you are going to become more aligned with someone, better it be someone who is not simultaneously becoming misaligned with you. Even if such incentives never change your behavior, every person you are obligated to help for an hour on their project is a person for whom you might feel a dash of relief if their project falls apart. And that is not fun to have sitting around in relationships.

    (Inspired by reading The Debtor's Revolt by Ben Hoffman lately, which may explicitly say this, but it's hard to be sure because I didn't follow it very well. Also perhaps inspired by a recent murder mystery spree, in which my intuitions have absorbed the heuristic that having something owed to you is a solid way to get murdered.)

    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
    3 m
  • LW - AI #70: A Beautiful Sonnet by Zvi
    Jun 28 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #70: A Beautiful Sonnet, published by Zvi on June 28, 2024 on LessWrong.

    They said it couldn't be done. No, not Claude Sonnet 3.5 becoming the clear best model. No, not the Claude-Sonnet-empowered automatic meme generators. Those were whipped together in five minutes. They said I would never get quiet time and catch up. Well, I showed them! That's right. Yes, there is a new best model, but otherwise it was a quiet week. I got a chance to incorporate the remaining biggest backlog topics. The RAND report is covered under Thirty Eight Ways to Steal Your Model Weights. Last month's conference in Seoul is covered in You've Got Seoul. I got to publish my thoughts on OpenAI's Model Spec last Friday.

    Table of Contents

    Be sure to read about Claude 3.5 Sonnet here. That is by far the biggest story.
    1. Introduction.
    2. Table of Contents.
    3. Language Models Offer Mundane Utility. I am increasingly persuaded.
    4. Language Models Don't Offer Mundane Utility. EU's DMA versus the AiPhone.
    5. Clauding Along. More people, mostly impressed.
    6. Fun With Image Generation. They are coming for our memes. Then Hollywood.
    7. Copyright Confrontation. The RIAA does the most RIAA thing.
    8. Deepfaketown and Botpocalypse Soon. Character.ai addiction. Am I out of touch?
    9. They Took Our Jobs. More arguments that the issues lie in the future.
    10. The Art of the Jailbreak. We need to work together as a team.
    11. Get Involved. AISI, Apollo, Astra, Accra, BlueDot, Cybersecurity and DOE.
    12. Introducing. Forecasting, OpenAI Mac App, Otto, Dot, Butterflies, Decagon.
    13. In Other AI News. OpenAI equity takes steps forward. You can sell it.
    14. Quiet Speculations. A distinct lack of mojo.
    15. You've Got Seoul. Delayed coverage of the Seoul summit from last month.
    16. Thirty Eight Ways to Steal Your Model Weights. Right now they would all work.
    17. The Quest for Sane Regulations. Steelmanning restraint.
    18. SB 1047. In Brief.
    19. The Week in Audio. Dwarkesh interviews Tony Blair, and many more.
    20. Rhetorical Innovation. A demolition, and also a disputed correction.
    21. People Are Worried About AI Killing Everyone. Don't give up. Invest wisely.
    22. Other People Are Not As Worried About AI Killing Everyone. What even is ASI?
    23. The Lighter Side. Eventually the AI will learn.

    Language Models Offer Mundane Utility

    Training only on (x,y) pairs, define the function f(x), compose and invert it without in-context examples or chain of thought.

    AI Dungeon will let you be the DM and take the role of the party, if you prefer.

    Lindy 'went rogue' and closed a customer on its own. They seem cool with it?

    Persuasive capability of the model is proportional to the log of the model size, says a paper. Author Kobi Hackenburg paints this as reassuring, but the baseline is that everything scales with the log of the model size. He says this is mostly based on 'task completion' and staying on topic improving, and current frontier models are already near perfect at that, so he is skeptical we will see further improvement. I am not. I do believe the result that none of the models was 'more persuasive than human baseline' in the test, but that is based on uncustomized messages on generic political topics. Of course we should not expect above-human performance there for current models.

    75% of knowledge workers are using AI, but 78% of that 75% (roughly 58% of all knowledge workers) are not telling the boss.

    Build a team of AI employees to write the first half of your Shopify CEO speech from within a virtual office, then spend the second half of the speech explaining how you built the team. It is so weird to think 'the best way to get results from AI employees I can come up with is to make them virtually thirsty so they will have spontaneous water cooler conversations.' That is the definition of scratching the (virtual) surface. Do a bunch of agent-based analysis off a si...
    1 h 12 m
  • EA - My Current Claims and Cruxes on LLM Forecasting & Epistemics by Ozzie Gooen
    Jun 27 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Current Claims and Cruxes on LLM Forecasting & Epistemics, published by Ozzie Gooen on June 27, 2024 on The Effective Altruism Forum.

    I think that recent improvements in LLMs have brought us to the point where LLM epistemic systems are starting to be useful. After spending some time thinking about it, I've realized that such systems, broadly, seem very promising to me as an effective altruist intervention area. However, I think that our community has yet to do a solid job of outlining what this area could look like or figuring out the key uncertainties. This document presents a rough brainstorm on these topics. While I could dedicate months to refining these ideas, I've chosen to share these preliminary notes now to spark discussion. If you find the style too terse, feel free to use an LLM to rewrite it in a format you prefer.

    I believe my vision for this area is more ambitious and far-reaching (i.e. not narrowly about one kind of forecasting) than what I've observed in other discussions. I'm particularly excited about AI-heavy epistemic improvements, which I believe have greater potential than traditional forecasting innovations. I'm trying to figure out what to make of this for our future plans at QURI, and I recommend that other organizations in the space consider similar updates.

    Key Definitions:
    Epistemic process: A set of procedures for doing analysis work, often about topics with a lot of uncertainty. This could range from "have one journalist do everything themselves" to a complex (but repeatable) ecosystem of multiple humans and software systems.
    LLM-based Epistemic Process (LEP): A system that relies on LLMs to carry out most or all of an epistemic process. This might start at ~10% LLM labor, but can gradually ramp up. I imagine that such a process is likely to feature some kinds of estimates or forecasts.
    Scaffolding: Software built around an LLM system, often to make it valuable for specific use-cases. In the case of an LEP, a lot of scaffolding might be needed (a toy sketch follows at the end of this description).

    1. High-Level Benefits & Uses

    Claim 1: If humans could forecast much better, these humans should make few foreseeable mistakes. This covers many mistakes, particularly ones we might be worried about now.
    Someone deciding whether to talk to a chatbot that can be predicted to be net-negative (perhaps it would create an unhealthy relationship) could see this forecast and simply decide not to start the chat.
    Say that a person's epistemic state could follow one of four trajectories, depending on some set of reading materials. For example, one set is conspiratorial, one is religious, etc. Good forecasting could help predict this and inform the person ahead of time. Note that this can be radical and perhaps dangerous; for example, a religious family could learn how to keep their children religious with a great deal of certainty.
    Say that one of two political candidates is predictably terrible. This could be made clear to voters who trust said prediction.
    If an AI actor is doing something likely to be monopolistic or dangerous, this would be made more obvious to itself and those around it.
    Note: There will also be unforeseeable mistakes, but any responses to them that are foreseeably high-value could themselves be predicted; for example, general-purpose risk mitigation measures.
    Claim 2: Highly intelligent / epistemically capable organizations are likely to be better at coordination. This might well mean fewer wars and conflicts, along with less of the corresponding military spending. If highly capable actors were in a prisoner's dilemma, the results could be ugly. But very often, there's a lot of potential value in not getting into one in the first place.
    Evidence: From The Better Angels of Our Nature, there's substantial evidence that humanity has become significantly less violent over time. One potential exception is t...
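    The "LLM-based Epistemic Process" and "scaffolding" definitions above are abstract, so here is a minimal, hypothetical Python sketch of what such scaffolding could look like. Nothing in it comes from the post: the decompose/estimate/aggregate structure, the `ask_llm` callable, and all function names are illustrative assumptions, and a real system would swap in an actual model client and a far more careful aggregation step.

```python
# A toy "LLM-based Epistemic Process" (LEP) scaffold: decompose a question,
# ask a model for sub-forecasts, then aggregate them. Purely illustrative;
# `ask_llm` is a placeholder for whatever model client you actually use.
from dataclasses import dataclass
from statistics import mean
from typing import Callable


@dataclass
class Forecast:
    question: str
    probability: float  # 0.0 - 1.0
    rationale: str


def decompose(question: str, ask_llm: Callable[[str], str]) -> list[str]:
    """Ask the model to split a vague question into crisper sub-questions."""
    reply = ask_llm(f"List three resolvable sub-questions for: {question}")
    return [line.lstrip("- ").strip() for line in reply.splitlines() if line.strip()]


def estimate(question: str, ask_llm: Callable[[str], str]) -> Forecast:
    """Ask the model for a probability and a rationale, formatted as '0.6; because ...'."""
    reply = ask_llm(f"Give a probability (0-1) and a rationale, separated by ';', for: {question}")
    prob, _, rationale = reply.partition(";")
    return Forecast(question, float(prob), rationale.strip())


def run_lep(question: str, ask_llm: Callable[[str], str]) -> Forecast:
    """The scaffolding itself: orchestrate decomposition, estimation, and aggregation."""
    subs = decompose(question, ask_llm)
    forecasts = [estimate(q, ask_llm) for q in subs]
    aggregate = mean(f.probability for f in forecasts) if forecasts else 0.5
    return Forecast(question, aggregate, f"mean over {len(forecasts)} sub-questions")


if __name__ == "__main__":
    # Stub model so the sketch runs offline; a real LEP would call an actual LLM here.
    def fake_llm(prompt: str) -> str:
        if "probability" in prompt:
            return "0.6; placeholder rationale"
        return "- sub-question A\n- sub-question B\n- sub-question C"

    print(run_lep("Will this intervention be net-positive?", fake_llm))
```

    The point of the sketch is only that the "epistemic process" lives in the orchestration code, while the LLM supplies individual judgments; ramping up the share of LLM labor, as the post describes, would mean moving more of that orchestration into the model itself.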
    40 m
