
EA - Analysis of key AI analogies by Kevin Kohler
The following is an analysis of seven prominent AI analogies: aliens, the brain, climate change, electricity, the Industrial Revolution, the neocortex, and nuclear fission. You can find longer versions of these as separate blog posts on my Substack.
0. Why?
AI analogies have a real-world impact
For better or worse, analogies play a prominent role in the public debate about the long-term trajectory and impacts of AI.
Analogies play a role in designing international institutions for AI (e.g. CERN, IPCC) and in legal decisions.
Analogies as mental heuristics can influence policymakers in critical decisions. Changes in AI analogies can lead to worldview shifts (e.g. Geoffrey Hinton).
From working with a diverse set of experts, my sense is that their thinking is anchored in wildly different analogies.
Analogies can be misleading
Matthew Barnett ("Against most, but not all, AI risk analogies") and others have already discussed the shortcomings of analogies on this forum.
Every individual analogy is imperfect. AI is its own thing, and there is simply no precedent that would closely match the characteristics of AI across 50+ governance-relevant dimensions.
Overly relying on a single analogy without considering differences and other analogies can lead to blind spots, overconfidence, and overfitting reality to a preconceived pattern.
Analogies can be useful
When facing a complex, open-ended challenge, we do not start with a system model. It is not clear which domain logic, questions, scenarios, risks, or opportunities we should pay attention to. Analogies can be a tool to explore such a future under deep uncertainty.
Analogies can be an instrumental tool in advocacy to communicate complex concepts in a digestible and intuitively appealing way.
My analysis is written in the spirit of exploration without prescribing or proscribing any specific analogy. At the same time, as a repository, it may still be of interest to policy advocates.
1. Aliens (full text)
Basic idea
Comparison to first contact with an alien civilization.
Symbolizing AI's underlying non-human reasoning processes, which are masked by human-like responses from RLHF (reinforcement learning from human feedback).
Selected users
Yuval Noah Harari (2023, 2023, 2023, 2023, 2023, 2023, 2024)
Ray Kurzweil (disanalogy: 1997, 1999, 2005, 2006, 2007, 2009, 2012, 2013, 2017, 2018, 2023)
Selected commonalities
1. Superhuman power potential: Any extraterrestrials we encounter would likely be either far less advanced than us or vastly more advanced; a technologically mature alien civilization would be comparable to a potential future digital superintelligence.
2. Digital life: Popular culture often envisions aliens as evolved humans, but mature aliens are likely digital beings, both because digital intelligence escapes biological constraints and because digital beings can be transported across space more easily. The closest Earthly equivalent to such digital aliens is artificial intelligence.
3. Terraforming: Humans shape their environment to fit biological needs; terraforming by digital aliens would instead require habitats such as electricity grids and data centers, closely resembling a rapid build-out of AI infrastructure. Pathogens from digital aliens are unlikely to affect humans directly but could impact our information technology.
4. Consciousness: We understand neural correlates of consciousness in biological systems but not in digital systems. The consciousness of future AI and digital aliens remains a complex and uncertain issue.
5. Non-anthropomorphic minds: AI and aliens encompass a vast range of possible minds, shaped by different environments and selection pressures than human minds. AI can develop non-human strategies, especially when trained with reinforcement learning, and can have non-human failure modes such as ...