Adaptive Foundation Models

The development of adaptive foundation models marks a significant shift toward AI systems that can continually learn, adapt, and evolve in response to new information, changing environments, and user preferences. Current foundation models are typically trained on static data, with limited ability to adapt through context post-deployment. Our goal is to enable foundation models to continuously absorb new knowledge and compress it into reusable representations for more up-to-date responses. This capability is also valuable for third-party customization, personalization, and safety alignment. We are interested in both the foundational study of sequential learning dynamics in large language models and practical applications that demand adaptive foundation models, such as personalized assistance and news forecasting.

Research in This Area


Context Tuning for In-Context Optimization

We introduce Context Tuning, a simple and effective method to significantly enhance few-shot adaptation of LLMs without fine-tuning model parameters.

Published: 2025-07-06

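The core idea can be sketched as follows: adapt a frozen model to a few-shot task by optimizing only a context vector, never the model weights. This is a minimal illustrative sketch, not the paper's method; the "model" is just a fixed linear map `W`, and all names are assumptions.

```python
import numpy as np

# Hypothetical sketch: adapt a frozen "model" W to one few-shot example
# by gradient descent on a context vector c; W itself is never updated.
rng = np.random.default_rng(0)
dim = 4
W = rng.normal(size=(dim, dim))       # frozen "model" parameters
x = rng.normal(size=dim)              # a few-shot input
y = rng.normal(size=dim)              # its target output

c = np.zeros(dim)                     # learnable context vector
initial_loss = float(np.sum((W @ (x + c) - y) ** 2))
for _ in range(500):
    pred = W @ (x + c)                # the context modulates the input, not W
    grad = 2.0 * W.T @ (pred - y)     # exact gradient of squared error w.r.t. c
    c -= 0.02 * grad

final_loss = float(np.sum((W @ (x + c) - y) ** 2))
```

Only `c` receives updates, mirroring the appeal of in-context adaptation: the expensive base model stays untouched, and each task is captured by a small, swappable context.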

Are LLMs Prescient? A Continuous Evaluation using Daily News as Oracle

Our new benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on pre-training data.

Published: 2024-11-13

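The construction can be sketched in a few lines: pair a dated news item with a forecasting question, then treat it as a "future" event for a given model only if it resolves after that model's pre-training cutoff. This is an illustrative sketch; the field names and question template are assumptions, not the benchmark's actual schema.

```python
from datetime import date

# Hypothetical sketch of a forecasting QA pair built from a dated news item.
def make_qa_pair(headline: str, event_date: date, answer: str) -> dict:
    return {
        "question": f"Will the following occur by {event_date.isoformat()}? {headline}",
        "answer": answer,
        "resolution_date": event_date,
    }

def is_future_for(model_cutoff: date, qa: dict) -> bool:
    # The event only tests prediction (not recall) if it resolves
    # after the model's pre-training cutoff.
    return qa["resolution_date"] > model_cutoff

qa = make_qa_pair("Candidate X wins the election", date(2024, 11, 5), "yes")
```

Because news arrives daily, pairs like this can be generated continuously, turning the benchmark into a rolling evaluation rather than a fixed test set.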

CoLLEGe: Concept Embedding Generation for Large Language Models

CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions.

Published: 2024-03-22

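The interface can be sketched simply: map a handful of example-sentence embeddings to a single embedding for the new concept. CoLLEGe learns this mapping with a meta-trained network; the mean-pooling below is only a stand-in baseline to show the input/output shape, and all names are assumptions.

```python
import numpy as np

# Hypothetical sketch: pool a few example-sentence embeddings into one
# unit-norm embedding for a new concept (a stand-in for a learned encoder).
def concept_embedding(example_embeddings: np.ndarray) -> np.ndarray:
    pooled = example_embeddings.mean(axis=0)
    return pooled / np.linalg.norm(pooled)   # normalize like a token embedding

examples = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])            # three example-sentence embeddings
emb = concept_embedding(examples)
```

The resulting vector can then be slotted into the model's embedding table so the new concept is usable as an ordinary token.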

Reawakening Knowledge: Anticipatory Recovery from Catastrophic Interference via Structured Training

We discover a curious and remarkable property of LLMs fine-tuned sequentially on a cyclically repeated sequence of documents: they exhibit anticipatory behavior, recovering from forgetting on a document before encountering it again.

Published: 2024-03-14


Learning and Forgetting Unsafe Examples in Large Language Models

We explore the behavior of LLMs fine-tuned on noisy custom data containing unsafe content and propose a simple filtering algorithm that detects harmful content based on the phenomenon of selective forgetting.

Published: 2023-12-20

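A filter of this kind can be sketched as follows: track each example's loss before and after further training, and flag the examples the model "forgets" fastest (i.e., whose loss rises most). This is an illustrative sketch under that one assumption; the function name, threshold, and loss values are all hypothetical, not the paper's algorithm.

```python
# Hypothetical sketch of a selective-forgetting filter: examples whose loss
# rises sharply after continued training are flagged as likely unsafe.
def flag_forgotten(loss_before: dict, loss_after: dict, threshold: float) -> list:
    flagged = []
    for example_id, before in loss_before.items():
        if loss_after[example_id] - before > threshold:
            flagged.append(example_id)
    return flagged

before = {"a": 0.9, "b": 0.8, "c": 1.1}   # per-example loss at checkpoint 1
after  = {"a": 0.7, "b": 2.4, "c": 1.2}   # per-example loss at checkpoint 2
flagged = flag_forgotten(before, after, threshold=0.5)
```

The appeal of such a signal is that it needs no content classifier: the model's own training dynamics do the labeling.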

LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos

LifelongMemory is a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval.

Published: 2023-12-07
