Episodes

  • EA - Number of EAs per capita by country by OscarD
    Jun 30 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Number of EAs per capita by country, published by OscarD on June 30, 2024 on The Effective Altruism Forum. I was surprised I couldn't find a graph like this already on the forum, so I made one and thought I would share it: The data is from the 2022 EA survey,[1] and here is my sheet. The main surprising thing to me is that English-speaking countries are less dominant than I expected in this per capita framing. My vague sense was that the EA community was notably more popular in the Anglosphere than even in other rich countries, but eyeballing this data makes me think I was wrong: Northern/Western Europe seems to have quite comparable rates of EAs. And what on earth is happening in Estonia? Perhaps some Estonian EAs can tell us all what you are doing that works so well! 1. ^ Maybe there are quite different response rates by country, and this could explain some of the variance, but I assume there isn't a large or systematic effect here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
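    A minimal sketch of the per-capita calculation behind a graph like this (the column names and counts below are hypothetical placeholders, not the post's actual survey data):
    ```python
    import pandas as pd

    # Hypothetical respondent counts; the real figures come from the
    # 2022 EA survey combined with standard population statistics.
    data = pd.DataFrame({
        "country": ["Estonia", "United Kingdom", "United States"],
        "ea_respondents": [20, 400, 1000],
        "population": [1_300_000, 67_000_000, 330_000_000],
    })

    # EAs per million residents: the per-capita framing used in the post.
    data["eas_per_million"] = data["ea_respondents"] / data["population"] * 1e6
    print(data.sort_values("eas_per_million", ascending=False))
    ```
    With numbers in this ballpark, a small country like Estonia can top the chart even with few respondents in absolute terms, which is the effect the post highlights.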
    1 m
  • EA - Analysis of key AI analogies by Kevin Kohler
    Jun 30 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Analysis of key AI analogies, published by Kevin Kohler on June 30, 2024 on The Effective Altruism Forum. The following is an analysis of seven prominent AI analogies: aliens, the brain, climate change, electricity, the Industrial Revolution, the neocortex, & nuclear fission. You can find longer versions of these as separate blogposts on my substack.
    0. Why?
    AI analogies have a real-world impact:
    - For better or worse, analogies play a prominent role in the public debate about the long-term trajectory and impacts of AI.
    - Analogies play a role in designing international institutions for AI (e.g. CERN, IPCC) and in legal decisions.
    - Analogies as mental heuristics can influence policymakers in critical decisions.
    - Changes in AI analogies can lead to worldview shifts (e.g. Hinton).
    - Having worked with a diverse set of experts, my sense is that their thinking is anchored by wildly different analogies.
    Analogies can be misleading:
    - Matthew Barnett ("Against most, but not all, AI risk analogies") & others have already discussed the shortcomings of analogies on this forum.
    - Every individual analogy is imperfect. AI is its own thing, and there is simply no precedent that would closely match the characteristics of AI across 50+ governance-relevant dimensions.
    - Overly relying on a single analogy without considering differences and other analogies can lead to blind spots, overconfidence, and overfitting reality to a preconceived pattern.
    Analogies can be useful:
    - When facing a complex, open-ended challenge, we do not start with a system model. It is not clear which domain logic, questions, scenarios, risks, or opportunities we should pay attention to. Analogies can be a tool to explore such a future with deep uncertainty.
    - Analogies can be an instrumental tool in advocacy to communicate complex concepts in a digestible and intuitively appealing way.
    My analysis is written in the spirit of exploration without prescribing or proscribing any specific analogy. At the same time, as a repository, it may still be of interest to policy advocates.
    1. Aliens (full text)
    Basic idea: comparison to first contact with an alien civilization, symbolizing AI's underlying non-human reasoning processes, masked by human-like responses from RLHF.
    Selected users: Yuval Noah Harari (2023, 2023, 2023, 2023, 2023, 2023, 2024); Ray Kurzweil (disanalogy - 1997, 1999, 2005, 2006, 2007, 2009, 2012, 2013, 2017, 2018, 2023)
    Selected commonalities:
    1. Superhuman power potential: Technologically mature extraterrestrials would likely be either far less advanced than us or significantly more advanced, comparable to our potential future digital superintelligence.
    2. Digital life: Popular culture often envisions aliens as evolved humans, but mature aliens are likely digital beings due to the advantages of digital intelligence over biological constraints and because digital beings can be more easily transported across space. The closest Earthly equivalent to these digital aliens is artificial intelligence.
    3. Terraforming: Humans shape their environment for biological needs, while terraforming by digital aliens would require habitats like electricity grids and data centers, which is very similar to a rapid build-out of AI infrastructure. Pathogens from digital aliens are unlikely to affect humans directly but could impact our information technology.
    4. Consciousness: We understand neural correlates of consciousness in biological systems but not in digital systems. The consciousness of future AI and digital aliens remains a complex and uncertain issue.
    5. Non-anthropomorphic minds: AI and aliens encompass a vast range of possible minds shaped by different environments and selection pressures than human minds. AI can develop non-human strategies, especially when trained with reinforcement learning. AI can have non-human failure modes such as ...
    31 m
  • EA - AI scaling myths by Nicholas Kruus
    Jun 29 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI scaling myths, published by Nicholas Kruus on June 29, 2024 on The Effective Altruism Forum.
    "So far, bigger and bigger language models have proven more and more capable. But does the past predict the future?
    "One popular view is that we should expect the trends that have held so far to continue for many more orders of magnitude, and that it will potentially get us to artificial general intelligence, or AGI.
    "This view rests on a series of myths and misconceptions. The seeming predictability of scaling is a misunderstanding of what research has shown. Besides, there are signs that LLM developers are already at the limit of high-quality training data. And the industry is seeing strong downward pressure on model size. While we can't predict exactly how far AI will advance through scaling, we think there's virtually no chance that scaling alone will lead to AGI..."
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
    1 m
  • EA - #191 - The economy and national security after AGI (Carl Shulman on the 80,000 Hours Podcast) by 80000 Hours
    Jun 29 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #191 - The economy and national security after AGI (Carl Shulman on the 80,000 Hours Podcast), published by 80000 Hours on June 29, 2024 on The Effective Altruism Forum. We just published an interview: Carl Shulman on the economy and national security after AGI. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
    Episode summary
    "Consider just the magnitude of the hammer that is being applied to this situation: it's going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It's just a very large change. You should also be surprised if such a large change doesn't affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society." - Carl Shulman
    The human brain does what it does with a shockingly low energy supply: just 20 watts - a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply? Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating. Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour. It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field. It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business. It's a world where the technical challenges around control of robots are rapidly overcome, leading to robots that are strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and a rush to build billions of them and cash in. It's a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is now driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some of which can do so in days), that occurring every few months could be a conservative estimate. It's a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own. As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.
    And with growth rates this high, it doesn't take long to run up against Earth's physical limits - in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals. This eventually creates pressure to move economic activity off-planet. There's little need for computer chips to be on Ear...
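    A quick back-of-the-envelope check on the "20 watts - a fraction of a cent worth of electricity per hour" figure (the electricity price below is an assumed illustrative value, not from the episode):
    ```python
    # A 20 W device running for one hour uses 0.02 kWh.
    power_watts = 20
    energy_kwh = power_watts / 1000 * 1

    # Assumed illustrative retail electricity price in USD per kWh.
    price_per_kwh = 0.15

    cost_per_hour = energy_kwh * price_per_kwh
    print(f"Cost per hour: ${cost_per_hour:.4f} ({cost_per_hour * 100:.2f} cents)")
    # -> Cost per hour: $0.0030 (0.30 cents), i.e. a fraction of a cent.
    ```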
    28 m
  • EA - Contra Acemoglu on AI by Maxwell Tabarrok
    Jun 29 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Acemoglu on AI, published by Maxwell Tabarrok on June 29, 2024 on The Effective Altruism Forum. The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: about a 0.06% increase in TFP growth annually. This stands in contrast to many predictions which forecast immense impacts on economic growth from AI, including many from other academic economists. Why does Acemoglu come to such a different conclusion than his colleagues, and who is right? First, Acemoglu divides up the ways AI could affect productivity into four channels:
    1. AI enables further (extensive-margin) automation. Obvious examples of this type of automation include generative AI tools such as large language models taking over simple writing, translation and classification.
    2. AI can generate new task complementarities, raising the productivity of labor in tasks it is performing. For example, AI could provide better information to workers, directly increasing their productivity. Alternatively, AI could automate some subtasks (such as providing readymade subroutines to computer programmers) and simultaneously enable humans to specialize in other subtasks, where their performance improves.
    3. AI could induce deepening of automation - meaning improving performance, or reducing costs, in some previously capital-intensive tasks. Examples include IT security, automated control of inventories, and better automated quality control.
    4. AI can generate new labor-intensive products or tasks.
    Each of these four channels refers to a specific mechanism in his task-based model of production:
    - Automation raises the threshold of tasks which are performed by capital instead of labor.
    - Complementarities raise labor productivity in non-automated tasks.
    - Deepening of automation raises capital productivity in already-automated tasks.
    - New tasks are extra production steps that only labor can perform in the economy; for example, the automation of computers leads to programming as a new task.
    The chief sin of this paper is dismissing the latter half of these mechanisms without good arguments or evidence. "Deepening automation" in Acemoglu's model means increasing the efficiency of tasks already performed by machines. This raises output but doesn't change the distribution of tasks assigned to humans vs machines. AI might deepen automation by creating new algorithms that improve Google's search results on a fixed compute budget or replacing expensive quality control machinery with vision-based machine learning, for example. This kind of productivity improvement can have huge growth effects. The second industrial revolution was mostly "deepening automation" growth. Electricity, machine tools, and Bessemer steel improved already automated processes, leading to the fastest rate of economic growth the US has ever seen. In addition, this deepening automation always increases wages in Acemoglu's model, in contrast to the possibility of negative wage effects from the extensive-margin automation that he focuses on. So why does Acemoglu ignore this channel?
    "I do not dwell on deepening of automation because the tasks impacted by (generative) AI are quite different than those automated by the previous wave of digital technologies, such as robotics, advanced manufacturing equipment and software systems."
    This single sentence is the only justification he gives for omitting capital productivity improvements from his analysis. A charitable interpretation of this argument acknowledges that he is only referring to "(generative) AI", like ChatGPT and Midjourney. These tools do seem more focused on augmenting human labor rather than doing what software can already do, but more efficiently. Though Acemoglu is happy to drop the "generative" qualifier everywhere ...
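    To make "small" concrete, here is the arithmetic of compounding Acemoglu's headline number over a decade (pure arithmetic on the paper's 0.06% estimate, not a calculation from the post):
    ```python
    # Acemoglu's estimate: AI adds roughly 0.06% to TFP growth per year.
    annual_boost = 0.0006

    # Cumulative effect after ten years of compounding.
    cumulative = (1 + annual_boost) ** 10 - 1
    print(f"Cumulative TFP gain after 10 years: {cumulative:.2%}")  # ~0.60%
    ```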
    9 m
  • LW - The Incredible Fentanyl-Detecting Machine by sarahconstantin
    Jun 29 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Incredible Fentanyl-Detecting Machine, published by sarahconstantin on June 29, 2024 on LessWrong. There's bound to be a lot of discussion of the Biden-Trump presidential debates last night, but I want to skip all the political prognostication and talk about the real issue: fentanyl-detecting machines. Joe Biden says:
    "And I wanted to make sure we use the machinery that can detect fentanyl, these big machines that roll over everything that comes across the border, and it costs a lot of money. That was part of this deal we put together, this bipartisan deal. More fentanyl machines, were able to detect drugs, more numbers of agents, more numbers of all the people at the border. And when we had that deal done, he went - he called his Republican colleagues said don't do it. It's going to hurt me politically. He never argued. It's not a good bill. It's a really good bill. We need those machines. We need those machines. And we're coming down very hard in every country in Asia in terms of precursors for fentanyl. And Mexico is working with us to make sure they don't have the technology to be able to put it together. That's what we have to do. We need those machines."
    Wait, what machines? You can remotely, non-destructively detect that a bag of powder contains fentanyl rather than some other, legal substance? And you can sense it through the body of a car? My god. The LEO community must be holding out on us. If that tech existed, we'd have tricorders by now. What's actually going on here?
    What's Up With Fentanyl-Detecting Machines?
    First of all, Biden didn't make them up. This year, the Department of Homeland Security reports that Customs and Border Protection (CBP) has deployed "Non-Intrusive Inspection" at the US's southwest border: "By installing 123 new large-scale scanners at multiple POEs along the southwest border, CBP will increase its inspection capacity of passenger vehicles from two percent to 40 percent, and of cargo vehicles from 17 percent to 70 percent." In fact, there's something of a scandal about how many of these scanners have been sitting in warehouses but not actually deployed. CBP Commissioner Troy Miller complained to NBC News that the scanners are sitting idle because Congress hasn't allocated the budget for installing them. These are, indeed, big drive-through machines. They X-ray cars, allowing most traffic to keep flowing without interruption. Could an X-ray machine really detect fentanyl inside a car? To answer that, we have to think about what an X-ray machine actually does. An X-ray is a form of high-energy, short-wavelength electromagnetic radiation. X-rays can pass through solid objects, but how easily they pass through depends on the material - higher atomic number materials are more absorbing per unit mass. This is why bones will show up on an X-ray scan. The calcium (element 20) in bones has higher atomic mass than the other most common elements in living things (carbon, hydrogen, oxygen, nitrogen, sulfur), and bones are also denser than soft tissue, so bones absorb X-rays while the rest of the body scatters them. This is also how airport security scans baggage: a cabinet X-ray shows items inside a suitcase, differentiated by density. It's also how industrial CT scans can look inside products nondestructively to see how they're made.
    To some extent, X-ray scanners can distinguish materials by their density and atomic number. But fentanyl is an organic compound - made of carbon, hydrogen, nitrogen, and oxygen, just like lots of other things. Its density is a very normal 1.1 g/mL (close to the density of water). I'm pretty sure it's not going to be possible to tell fentanyl apart from other things by its density and atomic number alone. Indeed, that's not what the scanner vendors are promising to do. Kevin McAleenan, the former DHS secretary who...
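    The underlying physics is the Beer-Lambert attenuation law, I/I0 = exp(-(mu/rho) * rho * x). A sketch with rough, illustrative attenuation coefficients (order-of-magnitude values near 100 keV, not numbers from the post) shows why organic powders of similar density look nearly identical to an X-ray scanner:
    ```python
    import math

    def transmitted_fraction(mass_atten_cm2_per_g, density_g_per_cm3, thickness_cm):
        """Beer-Lambert law: fraction of X-ray intensity that passes through."""
        return math.exp(-mass_atten_cm2_per_g * density_g_per_cm3 * thickness_cm)

    # Rough mass attenuation coefficients near 100 keV (illustrative values).
    # Light-element organics (C, H, N, O) all cluster near water's coefficient.
    benign_powder = transmitted_fraction(0.17, 1.0, 5.0)   # water-like organic, 5 cm
    fentanyl_like = transmitted_fraction(0.17, 1.1, 5.0)   # density 1.1 g/mL, 5 cm
    lead_sheet = transmitted_fraction(5.5, 11.3, 0.1)      # high-Z metal, 1 mm

    print(f"benign powder: {benign_powder:.3f} transmitted")   # ~0.43
    print(f"fentanyl-like: {fentanyl_like:.3f} transmitted")   # ~0.39
    print(f"1 mm of lead:  {lead_sheet:.3f} transmitted")      # ~0.002
    ```
    The two organics differ by only a few percent in transmission, while the high-atomic-number metal is nearly opaque: density and atomic number separate material classes, not specific compounds.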
    12 m
  • EA - A Research Agenda for Psychology and AI by carter allen
    Jun 28 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Research Agenda for Psychology and AI, published by carter allen on June 28, 2024 on The Effective Altruism Forum. I think very few people have thought rigorously about how psychology research could inform the trajectory of AI or humanity's response to it. Despite this, there seem to be many important contributions psychology could make to AI safety. For instance, a few broad paths-to-impact that psychology research might have are:
    1. Helping people anticipate the societal response to possible developments in AI. In which areas is public opinion likely to be the bottleneck to greater AI safety?
    2. Improving forecasting/prediction techniques more broadly and applying this to forecasts of AI trajectories (the Forecasting Research Institute's Existential Persuasion Tournament is a good example).
    3. Describing human values and traits more rigorously to inform AI alignment, or to inform decisions about who to put behind the wheel in an AI takeoff scenario.
    4. Doing cognitive/behavioral science on AI models. For instance, developing diagnostic tools that can be used to assess how susceptible an AI decision-maker is to various biases.
    5. Modeling various risks related to institutional stability. For instance, arms races, risks posed by malevolent actors, various parties' incentives in AI development, and decision-making within/across top AI companies.
    I spent several weeks thinking about specific project ideas in these topic areas as part of my final project for BlueDot Impact's AI Safety Fundamentals course. I'm sharing my ideas here because a) there are probably large topic areas I'm missing, and I'd like for people to point them out to me, b) I'm starting my PhD in a few months, and I want to do some of these ideas, but I haven't thought much about which of them are more/less valuable, and c) I would love for anyone else to adopt any of the ideas here or reach out to me about collaborating! I also hope to write a future version of this post that incorporates more existing research (I haven't thoroughly checked which of these project ideas have already been done). Maybe another way this post could be valuable is that I've consolidated a lot of different resources, ideas, and links to other agendas in one place. I'd especially like for people to send me more things in this category, so that I or others can use this post as a resource for connecting people in psychology to ideas and opportunities in AI safety. In the rest of this post, I list various topic areas I identified in no particular order, as well as any particular project ideas I had which struck me as potentially especially neglected & valuable in that area. Any feedback is appreciated!
    Topic Areas & Project Ideas
    1. Human-AI Interaction
    1.1 AI Persuasiveness
    A lot of people believe that future AI systems might be extremely persuasive, and perhaps we should prepare for a world where interacting with AI models carries a risk of manipulation/brainwashing. How realistic is this concern? (Note: although I don't think this sort of research is capable of answering whether AI will ever be extremely persuasive, I think it could still be very usefully informative.) For instance: How good are people at detecting AI-generated misinformation in current models, or inferring ulterior motives in current AI advisors? How has this ability changed in line with compute trends?
    Are people better or worse at detecting lies in current AI models, compared to humans? How has this ability changed in line with compute trends? How does increasing quantity of misinformation affect people's susceptibility to misinformation? Which effect dominates between "humans get more skeptical of information as less of it is true" and "humans believe more false information as more false information is available"? In which domains is AI most likely to b...
    30 m
  • LW - How a chip is designed by YM
    Jun 28 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How a chip is designed, published by YM on June 28, 2024 on LessWrong. Disclaimer: This is highly incomplete. I am not an expert in the field. There might be some unfamiliar terms. While I will try to explain things, explaining every single term would be beyond this post. You will usually be able to get a sufficient understanding by clicking the links or googling it.
    Introduction
    I think everyone, if they read about the chip industry long enough, has a moment where they have to put down a book or pause a podcast and simply remain stunned at the fact that it is possible to design and build something that is so incredibly impressive. The Apple A17 chip contains 183 million transistors per square millimeter, all placed in a coherent manner and produced with extremely high reliability. This is exactly why it is so fascinating to learn more about how it is actually done. On top of that, in a universe where compute is arguably the most important input in the AI production function, this knowledge is also crucial to effective AI governance. So what follows is a quick introduction to the process of getting a chip from a vague idea to sending your files to the manufacturer, also called the tape-out.
    Background Knowledge
    One of the most important decisions, a decision that significantly determines all the others, is what manufacturer will build your chip and what process they will use. There are companies that do both design and manufacturing (e.g. Intel), but especially when it comes to the most advanced logic chips, more and more companies are what is called "fabless" - they focus on the design and task a so-called "foundry" (e.g. TSMC) with the manufacturing. Nowadays many fabs and fabless companies work together very closely in what is called Design-Technology Co-Optimization (DTCO). In practice, there are quite significant limitations in chip design, and the fab will check design plans and inform designers what can and can't be manufactured. This collaborative approach ensures that chip designs are optimized for the specific manufacturing process, balancing performance, power, area, and yield considerations. DTCO has become increasingly important as the industry approaches the physical limits of semiconductor scaling, requiring closer integration between design teams and process engineers to continue advancing chip capabilities. The foundry sends the design company what is called the process design kit (PDK), which contains all the important specifics of the fab and the manufacturing process (also known as the technology node). One factor that in large part determines the profitability of a chip is the yield of the manufacturing process. The yield is the fraction of chips produced that work flawlessly and can be sold. Compared to other types of products, in the semiconductor industry the yield is quite low, sometimes moving significantly below 50% for periods of time, especially at the beginning of a new technology node. To improve yield, optimal manufacturability is taken into account at many stages of the design process in what is called Design for Manufacturability (DFM). Chips are also designed to be easy to test (Design for Testability, DFT). In this post we are focusing on the design process, not on the actual manufacturing steps or the details of a transistor.
    But it is important to know that in practice we are working with standard cells that are all equal in height and vary only in width, which makes design and manufacturing easier. Often the IP for the standard cells is licensed from third parties.
    The Design Process
    My stages follow the outline given by Prof. Adam Teman in this lecture.
    Definition and Planning
    This is the stage where you think about what you even want to build. What bus structure do you want? How many cores should it have? What amount of p...
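    As a concrete illustration of the yield figures above, here is a sketch using the classic Poisson yield model, yield = exp(-A * D). This is a standard textbook approximation rather than anything from the post, and the defect densities are illustrative:
    ```python
    import math

    def poisson_yield(die_area_cm2, defects_per_cm2):
        """Poisson yield model: fraction of flawless dies = exp(-A * D)."""
        return math.exp(-die_area_cm2 * defects_per_cm2)

    # Illustrative numbers for a 1 cm^2 die on a mature vs. a brand-new node.
    mature_node = poisson_yield(die_area_cm2=1.0, defects_per_cm2=0.2)
    new_node = poisson_yield(die_area_cm2=1.0, defects_per_cm2=0.9)

    print(f"Mature node yield: {mature_node:.0%}")  # ~82%
    print(f"New node yield: {new_node:.0%}")        # ~41%, below 50% as the post notes
    ```
    The exponential dependence on die area is also why large chips are disproportionately expensive to manufacture, and why DFM effort pays off.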
    10 m