• Dead Internet Theory

  • Sep 29 2024
  • Duration: 12 min
  • Podcast

  • Summary

  • Today we’re discussing Dead Internet Theory, a conspiracy theory which posits that the majority of online activity is generated by automated bots rather than real people.


    This theory asserts that these bots are intentionally deployed by various actors, including governments and corporations, to manipulate users and spread misinformation.


    The sources explore evidence supporting the theory, including reports on bot traffic, the increasing use of large language models such as ChatGPT, and the proliferation of AI-generated content on social media platforms.


    They also discuss the potential consequences of this phenomenon, including the erosion of trust in online information and the manipulation of public opinion.


    ***

    What is the Dead Internet Theory?


    The Dead Internet Theory is an online conspiracy theory which claims that most current online activity and content are not generated by real people, but by artificial intelligence (AI) and bots. Proponents of the theory believe that bots are intentionally created to manipulate search algorithms and control the information people see online.


    ● The theory suggests that the internet as we knew it, full of genuine human interaction, is “dead”.

    ● The date given for this "death" is generally around 2016 or 2017.

    ● The theory has two main components:


    ○ Organic human activity has been displaced by bots.

    ○ State actors are coordinating this to manipulate the population.


    Arguments and Evidence


    ● Bot Activity: Reports show a significant increase in bot traffic online. For example, a 2016 Imperva report found that bots were responsible for 52% of web traffic.

    ● Algorithmic Curation: Social media algorithms often prioritize "relatable content," leading to the repetition of similar posts and a decline in original content. This is seen as evidence of a manufactured online experience.

    ● AI-Generated Content: The rise of sophisticated AI, particularly large language models (LLMs) like ChatGPT, has made it easier to create realistic-looking but artificial content. This includes text, images, and even videos, making it difficult to discern human from AI-generated content. Examples cited include:


    ○ "I hate texting" tweets: Repetitive tweets starting with "I hate texting" followed by an alternative activity, suspected to be from bot accounts.

    ○ "Shrimp Jesus" images on Facebook: AI-generated images combining Jesus and shrimp went viral, suggesting AI's ability to exploit algorithms for engagement.

    ○ AI-generated responses on Facebook: Facebook allows AI-generated responses to group posts, further blurring the lines between human and AI interaction.



    Hosted on Acast. See acast.com/privacy for more information.
