Episodes

  • arxiv preprint - Human-like Episodic Memory for Infinite Context LLMs
    Jul 15 2024

    In this episode, we discuss Human-like Episodic Memory for Infinite Context LLMs by Zafeirios Fountas, Martin A Benfeghoul, Adnan Oomerjee, Fenia Christopoulou, Gerasimos Lampouras, Haitham Bou-Ammar, Jun Wang. The paper introduces EM-LLM, an approach that equips large language models (LLMs) with principles of human episodic memory and event cognition so they can manage very long contexts efficiently. EM-LLM segments token sequences into episodic events using Bayesian surprise and graph-theoretic boundary refinement, and retrieves them with a two-stage memory process. Experiments show that EM-LLM outperforms existing models on various tasks with significant improvements, and its event segmentation aligns well with human event perception, suggesting potential for interdisciplinary research between AI and cognitive science. A simplified sketch of surprise-based segmentation follows this entry.

    5 mins
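
    A minimal sketch of surprise-based event segmentation, assuming access to per-token log-probabilities from any causal language model; the running-statistics threshold below is a simplified placeholder, not the paper's exact Bayesian-surprise criterion or its graph-theoretic refinement.

      import numpy as np

      def segment_by_surprise(token_logprobs, gamma=1.0, window=64):
          """Split a token stream into 'episodic events' at high-surprise positions.

          token_logprobs: log p(token_t | tokens_<t) for each position, from any causal LM.
          A boundary is placed where surprise (-log p) exceeds a running mean + gamma * std,
          a simplified stand-in for EM-LLM's surprise threshold.
          """
          surprise = -np.asarray(token_logprobs)
          boundaries = [0]
          for t in range(1, len(surprise)):
              recent = surprise[max(0, t - window):t]
              if surprise[t] > recent.mean() + gamma * recent.std():
                  boundaries.append(t)
          boundaries.append(len(surprise))
          # Each (start, end) pair is one stored "event"; EM-LLM additionally refines
          # these boundaries with a graph-theoretic step, which is omitted here.
          return list(zip(boundaries[:-1], boundaries[1:]))

      # Example with fake log-probabilities
      rng = np.random.default_rng(0)
      fake_logprobs = rng.normal(-3.0, 1.0, size=500)
      events = segment_by_surprise(fake_logprobs)
      print(f"{len(events)} events, first: {events[0]}")
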
  • arxiv preprint - Learning to (Learn at Test Time): RNNs with Expressive Hidden States
    Jul 12 2024

    In this episode, we discuss Learning to (Learn at Test Time): RNNs with Expressive Hidden States by Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei Chen, Xiaolong Wang, Sanmi Koyejo, Tatsunori Hashimoto, Carlos Guestrin. The paper introduces Test-Time Training (TTT) layers, a new type of sequence modeling layer that combines the efficiency of RNNs with the long-context performance of self-attention. A TTT layer uses a machine-learning model itself as its hidden state and updates it with self-supervised learning steps, even on test sequences. The proposed TTT-Linear and TTT-MLP models match or outperform both strong Transformers and modern RNNs such as Mamba, with TTT-Linear proving more efficient in certain long-context scenarios. A toy sketch of the inner-loop update follows this entry.

    5 mins
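
    A toy sketch of the test-time-training idea in the style of TTT-Linear, assuming the hidden state is a linear "model" W updated by one gradient step of a self-supervised reconstruction loss per token; the real layers use learned input/label projections and sit inside a Transformer-like backbone.

      import numpy as np

      class ToyTTTLinear:
          """Hidden state is itself a linear model W, trained online as the sequence streams in."""

          def __init__(self, dim, lr=0.1):
              self.W = np.zeros((dim, dim))  # the "hidden state" is a weight matrix
              self.lr = lr

          def step(self, x):
              # Self-supervised inner loss: reconstruct x from a corrupted view of x.
              corrupted = x + 0.1 * np.random.randn(*x.shape)
              pred = self.W @ corrupted
              grad = np.outer(pred - x, corrupted)  # d/dW of 0.5 * ||W c - x||^2
              self.W -= self.lr * grad              # one inner-loop update ("learning at test time")
              return self.W @ x                     # output uses the freshly updated state

      layer = ToyTTTLinear(dim=8)
      outputs = [layer.step(x) for x in np.random.randn(32, 8)]  # same procedure at train and test time
      print(outputs[-1].shape)
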
  • arxiv preprint - Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting Region Captions
    Jul 11 2024

    In this episode, we discuss Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting Region Captions by Yu-Guan Hsieh, Cheng-Yu Hsieh, Shih-Ying Yeh, Louis Béthune, Hadi Pouransari, Pavan Kumar Anasosalu Vasu, Chun-Liang Li, Ranjay Krishna, Oncel Tuzel, Marco Cuturi. The paper introduces an annotation strategy called graph-based captioning (GBC) that uses labeled graph structures to describe images more richly than plain text. GBC combines object detection and dense captioning to build a hierarchical graph whose nodes and edges detail entities and their relationships. The authors demonstrate GBC's effectiveness by creating a large dataset, GBC10M, which significantly improves the performance of vision-language models, and they propose a novel attention mechanism that exploits the graph structure for further gains. An illustrative data-structure sketch follows this entry.

    5 mins
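
    A small sketch of what a graph-based caption could look like as a data structure; the node kinds, field names, and example labels here are illustrative assumptions, not the GBC10M schema.

      from dataclasses import dataclass, field

      @dataclass
      class GBCNode:
          node_id: str
          kind: str                  # e.g. "image", "entity", "composition" (illustrative labels)
          caption: str               # text describing this node's region or grouping
          bbox: tuple | None = None  # (x0, y0, x1, y1) region, when the node is grounded

      @dataclass
      class GBCEdge:
          source: str                # parent node id (e.g. whole image or a composition)
          target: str                # child node id (e.g. a detected entity)
          label: str                 # how the two are related, in text

      @dataclass
      class GBCAnnotation:
          nodes: list[GBCNode] = field(default_factory=list)
          edges: list[GBCEdge] = field(default_factory=list)

      # A toy annotation: the image node links to two entities plus a relation between them.
      ann = GBCAnnotation(
          nodes=[
              GBCNode("img", "image", "a dog chasing a ball in a park"),
              GBCNode("dog", "entity", "a brown dog mid-run", bbox=(10, 40, 120, 160)),
              GBCNode("ball", "entity", "a red ball on the grass", bbox=(130, 150, 170, 190)),
          ],
          edges=[
              GBCEdge("img", "dog", "contains"),
              GBCEdge("img", "ball", "contains"),
              GBCEdge("dog", "ball", "chasing"),
          ],
      )
      print(len(ann.nodes), "nodes,", len(ann.edges), "edges")
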
  • arxiv preprint - Evaluating Human Alignment and Model Faithfulness of LLM Rationale
    Jul 9 2024

    In this episode, we discuss Evaluating Human Alignment and Model Faithfulness of LLM Rationale by Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng. The paper investigates how well large language models (LLMs) can explain their decisions through rationales extracted from input texts. It compares two families of rationale extraction methods, attribution-based and prompting-based, and finds that prompting-based rationales align better with human-annotated rationales. The study also examines the faithfulness limitations of prompting-based methods and shows that fine-tuning models on specific datasets improves the faithfulness of both rationale extraction approaches. A small alignment-scoring sketch follows this entry.

    5 mins
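
    A minimal sketch of how human alignment of an extracted rationale can be scored, assuming both rationales are given as sets of token indices; the paper's exact metrics and the attribution- or prompting-based extraction steps themselves are not reproduced here.

      def rationale_alignment(model_tokens, human_tokens):
          """Token-level precision/recall/F1 between a model-extracted rationale and a human one."""
          model_set, human_set = set(model_tokens), set(human_tokens)
          if not model_set or not human_set:
              return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
          overlap = len(model_set & human_set)
          precision = overlap / len(model_set)
          recall = overlap / len(human_set)
          f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
          return {"precision": precision, "recall": recall, "f1": f1}

      # e.g. a prompting-based rationale vs. the human-annotated span for the same input
      print(rationale_alignment(model_tokens=[3, 4, 5, 9], human_tokens=[4, 5, 6]))
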
  • arxiv preprint - Detection and Measurement of Syntactic Templates in Generated Text
    Jul 8 2024

    In this episode, we discuss Detection and Measurement of Syntactic Templates in Generated Text by Chantal Shaib, Yanai Elazar, Junyi Jessy Li, Byron C. Wallace. The paper investigates syntactic features of text generated by large language models (LLMs), revealing higher rates of templated text in model outputs than in human-written text. It finds that a significant portion of these templates originates in the pre-training data and remains unchanged during fine-tuning. The study demonstrates that syntactic templates can distinguish between different models and tasks and serve as an effective tool for evaluating style memorization in LLMs. A rough template-detection sketch follows this entry.

    6 mins
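
    A rough sketch of detecting syntactic templates as repeated part-of-speech n-grams, which is the flavor of analysis the paper describes; NLTK's off-the-shelf tagger stands in here for whatever tagging pipeline the authors actually used, and the n-gram length and count threshold are arbitrary.

      from collections import Counter

      import nltk

      # Resource names differ across NLTK versions, so try both spellings.
      for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
          try:
              nltk.download(resource, quiet=True)
          except Exception:
              pass

      def pos_ngram_templates(texts, n=4, min_count=2):
          """Count part-of-speech n-grams across texts; frequent ones behave like syntactic templates."""
          counts = Counter()
          for text in texts:
              tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
              counts.update(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))
          return {gram: c for gram, c in counts.items() if c >= min_count}

      generated = [
          "The results demonstrate a significant improvement over the baseline.",
          "The findings demonstrate a notable improvement over the baseline.",
      ]
      print(pos_ngram_templates(generated))
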
  • arxiv preprint - From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data
    Jul 1 2024

    In this episode, we discuss From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data by Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee, Dimitris Papailiopoulos. This paper addresses the difficulty large language models (LLMs) have with long-context information retrieval and reasoning. The authors finetune LLMs on a synthetic dataset of numerical key-value retrieval tasks and observe significant improvements. Experiments show better performance on longer-context tasks without degrading general benchmark performance, unlike other long-context augmentation methods that can induce hallucination. A sketch of this kind of synthetic data generation follows this entry.

    6 mins
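
    A minimal sketch of generating the kind of synthetic numerical key-value retrieval example the paper describes, assuming a simple JSON-dictionary prompt format; the exact templates, key formats, and dictionary sizes the authors used may differ.

      import json
      import random

      def make_kv_retrieval_example(num_pairs=50, seed=None):
          """Build one synthetic long-context example: a dictionary of numeric keys/values
          plus a question asking for the value of a single key."""
          rng = random.Random(seed)
          keys = rng.sample(range(10_000_000, 99_999_999), num_pairs)
          kv = {str(k): str(rng.randint(10_000_000, 99_999_999)) for k in keys}
          target = rng.choice(list(kv))
          prompt = (
              "Here is a JSON dictionary:\n"
              f"{json.dumps(kv)}\n"
              f"What is the value associated with key {target}? Answer with the value only."
          )
          return {"prompt": prompt, "answer": kv[target]}

      example = make_kv_retrieval_example(num_pairs=5, seed=0)
      print(example["prompt"][:120], "...")
      print("gold answer:", example["answer"])
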
  • arxiv preprint - MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning
    Jun 27 2024

    In this episode, we discuss MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning by Xiangyu Zhao, Xiangtai Li, Haodong Duan, Haian Huang, Yining Li, Kai Chen, Hua Yang. The study presents MG-LLaVA, a multi-modal large language model that processes both low-resolution and high-resolution images together with object-centric features to improve perception. It adds a high-resolution visual encoder and a Conv-Gate fusion network that merges fine-grained details into the base features, and it strengthens object recognition with bounding boxes derived from offline detectors. Extensive benchmarking shows MG-LLaVA outperforming comparable MLLMs, with evaluations across language encoders ranging from 3.8B to 34B parameters. A hedged fusion-module sketch follows this entry.

    6 mins
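
    A hedged sketch of a conv-gate style fusion of low- and high-resolution visual features, assuming both feature maps have already been projected to the same shape; the module name and exact wiring in MG-LLaVA are not reproduced from the paper.

      import torch
      import torch.nn as nn

      class ConvGateFusion(nn.Module):
          """Fuse a base (low-res) feature map with a high-res one via a learned sigmoid gate."""

          def __init__(self, channels):
              super().__init__()
              # Gate is predicted from the concatenated features; a 1x1 conv keeps it cheap.
              self.gate = nn.Sequential(
                  nn.Conv2d(2 * channels, channels, kernel_size=1),
                  nn.Sigmoid(),
              )

          def forward(self, low_res_feat, high_res_feat):
              # Both inputs assumed projected to (B, C, H, W) with matching shapes.
              g = self.gate(torch.cat([low_res_feat, high_res_feat], dim=1))
              return low_res_feat + g * high_res_feat  # gate decides how much fine detail to inject

      fusion = ConvGateFusion(channels=256)
      low = torch.randn(1, 256, 24, 24)
      high = torch.randn(1, 256, 24, 24)
      print(fusion(low, high).shape)  # torch.Size([1, 256, 24, 24])
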
  • arxiv preprint - 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities
    Jun 26 2024

    In this episode, we discuss 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities by Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir. The paper presents an any-to-any model that significantly extends existing multimodal and multitask foundation models by training on tens of highly diverse modalities, including images, text, geometric data, and more. By discretely tokenizing each data type and co-training on large-scale datasets, the model can address three times more tasks and modalities than current models without sacrificing performance. The authors demonstrate this with a three-billion-parameter model and release the models and training code. A schematic tokenization sketch follows this entry.

    5 mins
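
    A schematic sketch of the any-to-any setup: every modality is mapped into a shared discrete token space, and training examples pair arbitrary subsets of source and target modalities; the "tokenizers" below are placeholder stubs, not 4M-21's learned VQ tokenizers.

      import random

      # Placeholder per-modality "tokenizers": each maps raw data to discrete token ids
      # in its own reserved id range (real 4M-style models use learned tokenizers, captions, etc.).
      TOKENIZERS = {
          "rgb":     lambda x: [1000 + (hash(v) % 512) for v in x],
          "depth":   lambda x: [2000 + (hash(v) % 512) for v in x],
          "caption": lambda x: [3000 + (hash(w) % 512) for w in x.split()],
      }

      def make_any_to_any_example(sample, rng=random):
          """Pick random source/target modality subsets and emit (input_tokens, target_tokens)."""
          modalities = list(sample)
          sources = rng.sample(modalities, k=rng.randint(1, len(modalities) - 1))
          targets = [m for m in modalities if m not in sources]
          encode = lambda ms: [tok for m in ms for tok in TOKENIZERS[m](sample[m])]
          return encode(sources), encode(targets)

      sample = {"rgb": [0.1, 0.5, 0.9], "depth": [2.0, 2.1, 2.2], "caption": "a cat on a mat"}
      inputs, outputs = make_any_to_any_example(sample)
      print(len(inputs), "input tokens ->", len(outputs), "target tokens")
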