Latest in AI research

By: Fly for Points
  • Summary

  • Dive into the cutting-edge world of artificial intelligence with "Latest in AI Research," your go-to podcast for the most recent advancements, breakthroughs, and insights from the forefront of AI innovation. Each episode explores the latest research papers and dives deep into the technologies that are shaping the future. From natural language processing and computer vision to ethical considerations and the impact of AI on society, we break down complex topics into engaging and accessible discussions. Whether you're an AI enthusiast, a tech professional, or simply curious about the future of technology, this podcast is for you.
Episodes
  • A Society of AI Agents
    Oct 10 2024

    In this episode, the hosts discuss a research paper that explores how large language models (LLMs), like the ones used in chatbots, behave when placed in a simulated prison scenario. The researchers built a custom tool, zAImbardo, to simulate interactions between a guard and a prisoner, focusing on two key behaviors: persuasion, where the prisoner tries to convince the guard to grant extra privileges (such as more yard time) or to permit an escape, and anti-social behavior, such as being toxic or violent. The study found that while some LLMs struggle to stay in character or hold meaningful conversations, others show distinct patterns of persuasion and anti-social action. It also reveals that the personality of the guard (another LLM) strongly influences whether the prisoner succeeds in persuading them and whether harmful behaviors emerge, pointing to the potential dangers of deploying LLMs in power-based interactions without human oversight.


    Original paper:

    Campedelli, G. M., Penzo, N., Stefan, M., Dessì, R., Guerini, M., Lepri, B., & Staiano, J. (2024). I want to break free! Anti-social behavior and persuasion ability of LLMs in multi-agent settings with social hierarchy. arXiv. https://arxiv.org/abs/2410.07109

    10 mins
  • Making AI safer
    Oct 9 2024

    This podcast episode discusses a research paper focused on making large language models (LLMs) safer and better aligned with human values. The authors introduce a new technique called Data Advisor, which helps LLMs create safer and more reliable data by following guiding principles. Data Advisor works by continuously reviewing the data generated by the model, spotting gaps or issues, and suggesting improvements for the next round of data creation. The study shows that this method makes LLMs safer without reducing their overall effectiveness, and that it performs better than other current approaches for generating safer data.


    Original paper:

    Wang, F., Mehrabi, N., Goyal, P., Gupta, R., Chang, K.-W., & Galstyan, A. (2024). Data Advisor: Dynamic data curation for safety alignment of large language models. arXiv. https://arxiv.org/abs/2410.05269

    10 mins
  • Altering Body Shape with AI
    Oct 7 2024

    In this episode, we dive into the fascinating world of BodyShapeGPT, a breakthrough approach that turns simple text into realistic 3D human avatars. By using the power of LLaMA-3, a fine-tuned large language model, researchers have developed a system that can accurately shape a virtual body based on descriptive language. We'll explore how a unique dataset and custom algorithms make it all possible, and why this could revolutionize everything from gaming to virtual reality. Tune in to discover how technology is bridging the gap between words and virtual worlds!


    Original paper:

    Árbol, B. R., & Casas, D. (2024). BodyShapeGPT: SMPL body shape manipulation with LLMs. arXiv. https://arxiv.org/abs/2410.03556

    8 mins
