Episodes

  • Self-Rewarding Language Models
    Jan 8 2026
    In this episode, we discuss Self-Rewarding Language Models by Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston. The paper proposes training language models to give themselves feedback using a self-rewarding approach, bypassing the limitations of human-labeled reward models. By iteratively fine-tuning Llama 2 70B with this method, the model improves both its instruction-following and self-assessment abilities. The resulting model surpasses several top systems, demonstrating the potential for continual self-improvement in AI agents.
    9 m
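The self-rewarding loop described above can be sketched in miniature: the model generates several candidate responses per prompt, scores its own outputs (LLM-as-a-judge), and the highest- and lowest-scored candidates form a preference pair for the next round of fine-tuning. This is a toy sketch, not the paper's implementation; `model_generate` and `model_judge` are hypothetical stand-ins for calls into the same language model.

```python
import random

def self_reward_iteration(model_generate, model_judge, prompts, k=4):
    """One iteration of a self-rewarding loop (toy sketch): the model
    generates k candidates per prompt, judges its own outputs, and the
    best/worst pair becomes preference data for DPO-style fine-tuning."""
    preference_pairs = []
    for prompt in prompts:
        candidates = [model_generate(prompt) for _ in range(k)]
        scored = sorted(candidates, key=model_judge, reverse=True)
        # chosen = highest self-assigned reward, rejected = lowest
        preference_pairs.append((prompt, scored[0], scored[-1]))
    return preference_pairs

# Toy stand-ins: "generation" samples an integer, "judging" prefers larger ones.
random.seed(0)
pairs = self_reward_iteration(
    model_generate=lambda p: random.randint(0, 100),
    model_judge=lambda r: r,
    prompts=["p1", "p2"],
)
```

In the paper this loop runs iteratively, so both the generation policy and the judging ability improve together across rounds.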
  • On the generalization of language models from in-context learning and finetuning: a controlled study
    Jan 5 2026
    In this episode, we discuss On the generalization of language models from in-context learning and finetuning: a controlled study by Andrew K. Lampinen, Arslan Chaudhry, Stephanie C. Y. Chan, Cody Wild, Diane Wan, Alex Ku, Jörg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland. The paper compares the generalization and deductive reasoning abilities of large language models when learning through fine-tuning versus in-context learning, finding that in-context learning generally enables more flexible generalization. It introduces novel datasets to rigorously test these differences by isolating new factual information from pretraining knowledge. Additionally, the authors propose enhancing fine-tuning by including in-context reasoning traces, which improves the models' reasoning and generalization performance across multiple benchmarks.
    8 m
  • OpenThoughts: Data Recipes for Reasoning Models
    Dec 16 2025
    In this episode, we discuss OpenThoughts: Data Recipes for Reasoning Models by Etash Guha, Ryan Marten, Sedrick Keh, Negin Raoof, Georgios Smyrnis, Hritik Bansal, Marianna Nezhurina, Jean Mercat, Trung Vu, Zayne Sprague, Ashima Suvarna, Benjamin Feuer, Liangyu Chen, Zaid Khan, Eric Frankel, Sachin Grover, Caroline Choi, Niklas Muennighoff, Shiye Su, Wanjia Zhao, John Yang, Shreyas Pimpalgaonkar, Kartik Sharma, Charlie Cheng-Jie Ji, Yichuan Deng, Sarah Pratt, Vivek Ramanujan, Jon Saad-Falcon, Jeffrey Li, Achal Dave, Alon Albalak, Kushal Arora, Blake Wulfe, Chinmay Hegde, Greg Durrett, Sewoong Oh, Mohit Bansal, Saadia Gabriel, Aditya Grover, Kai-Wei Chang, Vaishaal Shankar, Aaron Gokaslan, Mike A. Merrill, Tatsunori Hashimoto, Yejin Choi, Jenia Jitsev, Reinhard Heckel, Maheswaran Sathiamoorthy, Alexandros G. Dimakis, Ludwig Schmidt. The paper presents the OpenThoughts project, which develops open-source datasets for training reasoning models to address the lack of publicly available data. Their OpenThoughts3 dataset, created through extensive controlled experiments, enables training of the OpenThinker3-7B model that outperforms previous state-of-the-art models on several reasoning benchmarks. All datasets and models are publicly released to support further research in reasoning AI.
    7 m
  • Nested Learning: The Illusion of Deep Learning Architecture
    Dec 13 2025
    In this episode, we discuss Nested Learning: The Illusion of Deep Learning Architecture by Ali Behrouz, Meisam Razaviyayn, Peilin Zhong, Vahab Mirrokni. The paper introduces Nested Learning (NL), a new paradigm framing machine learning as multiple nested optimization problems with distinct context flows, explaining in-context learning in large models. It proposes more expressive optimizers as associative memory modules, a self-modifying sequence model that learns its own update rules, and a continuum memory system to improve continual learning. Together, these contributions enable a continual learning module called Hope, which shows promise in language modeling, knowledge integration, and long-context reasoning tasks.
    8 m
  • ARC Is a Vision Problem!
    Dec 9 2025
    In this episode, we discuss ARC Is a Vision Problem! by Keya Hu, Ali Cy, Linlu Qiu, Xiaoman Delores Ding, Runqian Wang, Yeyin Eva Zhu, Jacob Andreas, Kaiming He. The paper reframes the Abstraction and Reasoning Corpus (ARC) tasks as an image-to-image translation problem using a vision-centric approach. It introduces Vision ARC (VARC), a model based on a vanilla Vision Transformer trained from scratch on ARC data, which generalizes well to new tasks via test-time training. VARC achieves a 60.4% accuracy on the ARC-1 benchmark, outperforming previous scratch-trained methods and approaching human-level performance.
    8 m
  • Solving a Million-Step LLM Task with Zero Errors
    Dec 9 2025
    In this episode, we discuss Solving a Million-Step LLM Task with Zero Errors by Elliot Meyerson, Giuseppe Paolo, Roberto Dailey, Hormoz Shahrzad, Olivier Francon, Conor F. Hayes, Xin Qiu, Babak Hodjat, Risto Miikkulainen. The paper presents MAKER, a system that achieves error-free execution of tasks requiring over one million steps by decomposing them into subtasks handled by specialized microagents. This modular approach enables efficient error correction through multi-agent voting, overcoming the persistent error rates that limit standard LLM scalability. The findings suggest that massively decomposed agentic processes offer a promising path to scaling LLM applications to complex, large-scale problems beyond individual model improvements.
    7 m
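The error-correction idea behind MAKER — many independent microagents attempt the same subtask and a majority vote decides each step — can be illustrated with a toy chain where one agent is faulty. This is a minimal sketch of the voting principle only, not the paper's system; the step functions here are hypothetical placeholders for microagent calls.

```python
from collections import Counter

def voted_step(agent_outputs):
    """Keep the majority answer across independent microagent outputs."""
    winner, _ = Counter(agent_outputs).most_common(1)[0]
    return winner

def run_task(step_fns, n_steps, state=0):
    """Decompose a long task into single steps; at each step every
    microagent attempts the same subtask and the majority result is
    kept, so one faulty agent cannot derail the chain."""
    for i in range(n_steps):
        state = voted_step([f(state, i) for f in step_fns])
    return state

# Toy: two reliable agents increment correctly; one faulty agent
# botches every third step. Voting still yields an error-free run.
good = lambda s, i: s + 1
bad = lambda s, i: s + (2 if i % 3 == 0 else 1)
result = run_task([good, good, bad], n_steps=30)  # → 30, zero errors
```

Because errors at each step are independent, adding voters drives the per-step failure probability low enough that even million-step chains can complete without a single mistake.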
  • DataRater: Meta-Learned Dataset Curation
    Dec 5 2025
    In this episode, we discuss DataRater: Meta-Learned Dataset Curation by Dan A. Calian, Gregory Farquhar, Iurii Kemaev, Luisa M. Zintgraf, Matteo Hessel, Jeremy Shar, Junhyuk Oh, András György, Tom Schaul, Jeffrey Dean, Hado van Hasselt, David Silver. The paper proposes DataRater, a meta-learning approach that estimates the value of individual training data points to improve dataset curation. By leveraging meta-gradients, DataRater optimizes data selection to enhance training efficiency on held-out data. Experiments demonstrate that filtering data with DataRater significantly boosts compute efficiency across various model scales and datasets.
    9 m
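The core DataRater idea — score each training point by how an inner training update on it affects held-out loss — can be shown on a one-parameter model. This toy sketch approximates the meta-gradient by finite differences rather than the paper's backpropagation-through-training; all function names here are illustrative, not from the paper.

```python
def inner_update(w, data, weights, lr=0.1):
    """One weighted SGD step on a 1-parameter linear model y = w*x."""
    grad = sum(a * 2 * (w * x - y) * x for a, (x, y) in zip(weights, data))
    return w - lr * grad / len(data)

def heldout_loss(w, heldout):
    return sum((w * x - y) ** 2 for x, y in heldout) / len(heldout)

def meta_gradient(w, data, weights, heldout, eps=1e-4):
    """Finite-difference estimate of d(heldout_loss)/d(weight_i):
    negative means up-weighting point i helps generalization,
    positive means the point hurts and should be filtered."""
    base = heldout_loss(inner_update(w, data, weights), heldout)
    grads = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps
        grads.append(
            (heldout_loss(inner_update(w, data, bumped), heldout) - base) / eps
        )
    return grads

# Toy: true relation is y = 2x; the third point is corrupted.
data = [(1, 2), (2, 4), (3, -6)]
heldout = [(4, 8), (5, 10)]
mg = meta_gradient(0.0, data, [1.0, 1.0, 1.0], heldout)
# The corrupted point gets a positive score (harmful), clean points negative.
```

Filtering or down-weighting points with positive scores is the mechanism by which DataRater improves compute efficiency during large-scale training.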
  • Mathematical exploration and discovery at scale
    Nov 15 2025
    In this episode, we discuss Mathematical exploration and discovery at scale by Bogdan Georgiev, Javier Gómez-Serrano, Terence Tao, Adam Zsolt Wagner. AlphaEvolve is an evolutionary coding agent that combines large language models with automated evaluation to iteratively generate and refine solutions for complex mathematical problems. It successfully rediscovered and improved known solutions across various math domains and can generalize results into universal formulas. When integrated with proof assistants, AlphaEvolve enables automated proof generation, demonstrating significant potential for advancing mathematical discovery and optimization.
    8 m
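AlphaEvolve's generate-evaluate-refine cycle can be sketched as a simple evolutionary loop: a proposer (in the real system, an LLM mutating code) produces candidates, an automated evaluator scores them, and the best survivor seeds the next round. This is a hill-climbing toy under stated assumptions, not the actual agent; `propose` and `evaluate` are hypothetical stand-ins.

```python
import random

def evolve(propose, evaluate, seed_candidate, generations=50, pool=8):
    """Evolutionary refinement loop (toy sketch): mutate the current
    best candidate, score each proposal with an automated evaluator,
    and keep the best survivor as the seed for the next generation."""
    best = seed_candidate
    best_score = evaluate(best)
    for _ in range(generations):
        for cand in (propose(best) for _ in range(pool)):
            score = evaluate(cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score

# Toy problem: maximize -(x - 3)^2 by random local mutation.
random.seed(1)
best, score = evolve(
    propose=lambda x: x + random.uniform(-0.5, 0.5),
    evaluate=lambda x: -(x - 3) ** 2,
    seed_candidate=0.0,
)
```

In the paper, the evaluator is a mathematical scoring program (and optionally a proof assistant), which is what lets the loop run at scale without human verification of each candidate.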