Episodes

  • Arxiv paper - OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization
    Jun 27 2025
    In this episode, we discuss OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization by Yiyou Sun, Shawn Hu, Georgia Zhou, Ken Zheng, Hannaneh Hajishirzi, Nouha Dziri, Dawn Song. The paper introduces OMEGA, a new benchmark to evaluate large language models' out-of-distribution generalization on math problems along three creativity-inspired axes: exploratory, compositional, and transformative reasoning. Evaluations reveal that state-of-the-art LLMs struggle more as problem complexity grows, especially on compositional and transformative reasoning. Fine-tuning improves exploratory skills but not the other two, highlighting challenges in achieving genuine mathematical creativity beyond routine problem-solving.
    7 m
  • Arxiv paper - Long-Context State-Space Video World Models
    Jun 25 2025
    In this episode, we discuss Long-Context State-Space Video World Models by Ryan Po, Yotam Nitzan, Richard Zhang, Berlin Chen, Tri Dao, Eli Shechtman, Gordon Wetzstein, Xun Huang. The paper introduces a novel video diffusion model architecture that uses state-space models (SSMs) to extend temporal memory efficiently for causal sequence modeling. It employs a block-wise SSM scanning scheme combined with dense local attention to balance long-term memory with spatial coherence. Experiments on Memory Maze and Minecraft datasets show the method outperforms baselines in long-range memory retention while maintaining fast inference suitable for real-time use.
    7 m
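The block-wise scanning idea described above can be illustrated with a toy scalar recurrence; this is an assumed sketch of the general pattern (a recurrent state carried across blocks, with local processing inside each block), not the paper's actual model or dynamics:

```python
# Toy sketch (assumed dynamics, not the paper's architecture) of a block-wise
# state-space scan: a recurrent state h carries long-range memory across
# blocks, while each block could additionally apply dense local attention.

def ssm_scan_block(h: float, block: list[float], decay: float = 0.9) -> tuple[float, list[float]]:
    """Run a scalar linear state-space recurrence over one block of inputs."""
    outputs = []
    for x in block:
        h = decay * h + x      # state update: long-term memory accumulates here
        outputs.append(h)      # per-step output (local attention would act here)
    return h, outputs

def blockwise_scan(sequence: list[float], block_size: int) -> list[float]:
    """Scan the full sequence block by block, carrying state across blocks."""
    h, all_outputs = 0.0, []
    for start in range(0, len(sequence), block_size):
        h, outs = ssm_scan_block(h, sequence[start:start + block_size])
        all_outputs.extend(outs)
    return all_outputs

# An impulse at step 0 decays geometrically, surviving across block boundaries:
# the state is handed from block [0, 1] to block [2, 3] rather than reset.
print(blockwise_scan([1.0, 0.0, 0.0, 0.0], block_size=2))
```

The key design point the sketch illustrates is that memory cost stays constant in the state `h` regardless of sequence length, unlike full attention whose cost grows with the number of past frames.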
  • Arxiv paper - From Bytes to Ideas: Language Modeling with Autoregressive U-Nets
    Jun 24 2025
    In this episode, we discuss From Bytes to Ideas: Language Modeling with Autoregressive U-Nets by Mathurin Videau, Badr Youbi Idrissi, Alessandro Leite, Marc Schoenauer, Olivier Teytaud, David Lopez-Paz. The paper introduces an autoregressive U-Net model that dynamically learns its own token embeddings from raw bytes instead of relying on fixed tokenization schemes like BPE. This multi-scale architecture processes text from fine-grained bytes to broader semantic units, enabling predictions at varying future horizons. The approach matches strong baselines with shallow hierarchies and shows potential improvements with deeper ones, offering flexibility across languages and tasks.
    9 m
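The multi-scale byte-to-semantic-unit idea in the summary above can be sketched minimally; here naive whitespace splitting stands in for the model's learned pooling, so treat this purely as an illustration of operating at two scales, not the paper's method:

```python
# Hedged sketch of the multi-scale idea: the finest scale is raw UTF-8 bytes,
# and a coarser scale groups them into word-like chunks. Whitespace splitting
# here is a stand-in for the learned, dynamic grouping the paper describes.

def byte_level(text: str) -> list[int]:
    """Finest scale: the raw UTF-8 byte sequence, no tokenizer required."""
    return list(text.encode("utf-8"))

def chunk_level(text: str) -> list[str]:
    """Coarser scale: word-like chunks (stand-in for learned boundaries)."""
    return text.split()

print(byte_level("hi"))                         # [104, 105]
print(chunk_level("from bytes to ideas"))       # ['from', 'bytes', 'to', 'ideas']
```

Working from bytes avoids committing to a fixed vocabulary like BPE, which is what gives the approach its flexibility across languages.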
  • Arxiv paper - Reinforcement Pre-Training
    Jun 20 2025
    In this episode, we discuss Reinforcement Pre-Training by Qingxiu Dong, Li Dong, Yao Tang, Tianzhu Ye, Yutao Sun, Zhifang Sui, Furu Wei. The paper introduces Reinforcement Pre-Training (RPT), a method that applies reinforcement learning to next-token prediction by rewarding correct predictions as a reasoning task. This approach leverages large text datasets without needing domain-specific annotations, improving language modeling accuracy and enabling strong foundations for further RL fine-tuning. Experimental results demonstrate that RPT scales effectively with compute, making it a promising paradigm for advancing language model pre-training.
    9 m
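The core RPT idea summarized above, scoring next-token prediction as a verifiable reinforcement-learning reward, can be sketched in a few lines; this is a minimal illustration of the reward signal, not the paper's training implementation:

```python
# Minimal sketch (not the paper's implementation) of the RPT reward signal:
# next-token prediction is treated as a verifiable RL task, so no
# domain-specific annotations are needed beyond the raw text itself.

def next_token_reward(predicted_token: str, ground_truth_token: str) -> float:
    """Reward 1.0 when the model's predicted next token matches the corpus."""
    return 1.0 if predicted_token == ground_truth_token else 0.0

def score_rollout(predictions: list[str], reference: list[str]) -> float:
    """Average verifiable reward over a sequence of next-token predictions."""
    rewards = [next_token_reward(p, g) for p, g in zip(predictions, reference)]
    return sum(rewards) / len(rewards)

# Example: three of four predictions match the reference continuation.
print(score_rollout(["the", "cat", "sat", "on"], ["the", "cat", "sat", "in"]))  # 0.75
```

Because the reward is computed directly against the corpus, any large text dataset becomes RL training data, which is what lets the method scale with compute.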
  • Arxiv paper - Token-Efficient Long Video Understanding for Multimodal LLMs
    Jun 18 2025
    In this episode, we discuss Token-Efficient Long Video Understanding for Multimodal LLMs by Jindong Jiang, Xiuyu Li, Zhijian Liu, Muyang Li, Guo Chen, Zhiqi Li, De-An Huang, Guilin Liu, Zhiding Yu, Kurt Keutzer, Sungjin Ahn, Jan Kautz, Hongxu Yin, Yao Lu, Song Han, Wonmin Byeon. The paper introduces STORM, a new architecture that incorporates a temporal encoder using the Mamba State Space Model to better capture temporal dynamics in video-based multimodal large language models. This approach enables effective token reduction, significantly lowering computational costs and latency while preserving essential temporal information. Experiments demonstrate that STORM achieves state-of-the-art performance on long video understanding benchmarks with substantial improvements in efficiency and accuracy.
    9 m
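The token-reduction idea the summary describes can be illustrated with simple temporal average pooling over frame embeddings; this is an assumed sketch of one reduction strategy, not STORM's actual mechanism:

```python
# Illustrative sketch (assumed shapes, not STORM's code) of temporal token
# reduction: averaging groups of consecutive frame embeddings shrinks the
# number of visual tokens the LLM must attend to, cutting cost and latency.

def temporal_average_pool(frames: list[list[float]], stride: int) -> list[list[float]]:
    """Average each group of `stride` consecutive frame embeddings."""
    pooled = []
    for start in range(0, len(frames), stride):
        group = frames[start:start + stride]
        dim = len(group[0])
        pooled.append([sum(f[d] for f in group) / len(group) for d in range(dim)])
    return pooled

# Eight one-dimensional frame embeddings pooled with stride 4 -> two tokens.
frames = [[float(i)] for i in range(8)]
print(temporal_average_pool(frames, 4))  # [[1.5], [5.5]]
```

A 4x reduction in tokens translates roughly into a 4x reduction in the LLM's per-layer attention cost over the visual sequence, which is where the efficiency gains come from.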
  • Arxiv paper - The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
    Jun 11 2025
    In this episode, we discuss The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar. This paper examines the reasoning abilities of Large Reasoning Models (LRMs) using controlled puzzles to analyze both their final answers and internal reasoning processes. It reveals that LRMs struggle with high-complexity problems, showing performance collapse and inconsistent reasoning despite sufficient computational resources. The study identifies distinct performance regimes and highlights fundamental limitations in LRMs' exact computation and use of explicit algorithms, questioning their true reasoning capabilities.
    5 m
  • Arxiv paper - Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models
    Jun 9 2025
    In this episode, we discuss Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models by Piotr Padlewski, Max Bain, Matthew Henderson, Zhongkai Zhu, Nishant Relan, Hai Pham, Donovan Ong, Kaloyan Aleksiev, Aitor Ormazabal, Samuel Phua, Ethan Yeo, Eugenie Lamprecht, Qi Liu, Yuqi Wang, Eric Chen, Deyu Fu, Lei Li, Che Zheng, Cyprien de Masson d'Autume, Dani Yogatama, Mikel Artetxe, Yi Tay. The paper introduces Vibe-Eval, an open benchmark and framework with 269 visual understanding prompts designed to evaluate multimodal chat models on everyday and challenging tasks. It highlights that over half of the hardest prompts are incorrectly answered by current frontier models, emphasizing the benchmark's difficulty. The authors discuss evaluation methods, demonstrate correlation between automatic and human assessments, provide free API access, and release all code and data publicly. Github: https://github.com/reka-ai/reka-vibe-eval
    6 m
  • Arxiv paper - How much do language models memorize?
    Jun 6 2025
    In this episode, we discuss How much do language models memorize? by John X. Morris, Chawin Sitawarin, Chuan Guo, Narine Kokhlikyan, G. Edward Suh, Alexander M. Rush, Kamalika Chaudhuri, Saeed Mahloujifar. The paper introduces a method to quantify how much a language model memorizes versus generalizes from data, defining model capacity as total memorization excluding generalization. Through extensive experiments on GPT-family models of varying sizes, the authors find that models memorize data until their capacity is full, after which generalization (or "grokking") increases and unintended memorization decreases. They establish scaling laws linking model capacity, data size, and membership inference, estimating GPT models have about 3.6 bits-per-parameter capacity.
    10 m
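The ~3.6 bits-per-parameter figure from the summary above supports a simple back-of-envelope capacity calculation; the constant comes from the episode summary, while the model sizes below are illustrative, not from the paper:

```python
# Back-of-envelope use of the paper's estimated ~3.6 bits-per-parameter
# memorization capacity. Model sizes here are hypothetical examples.

BITS_PER_PARAM = 3.6  # estimated memorization capacity per parameter

def capacity_bits(num_params: float) -> float:
    """Total memorization capacity in bits for a model of a given size."""
    return BITS_PER_PARAM * num_params

def capacity_megabytes(num_params: float) -> float:
    """Same capacity in megabytes (8 bits per byte, 1e6 bytes per MB)."""
    return capacity_bits(num_params) / 8 / 1e6

# A hypothetical 1B-parameter model: roughly 450 MB of raw memorized data.
print(capacity_megabytes(1e9))
```

Once the training set exceeds this capacity, the paper's finding is that per-example unintended memorization falls and generalization takes over, which is what the scaling laws formalize.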