Adaptive Agents and Foundation Models

The development of adaptive agents and foundation models marks a significant shift toward AI systems that can continually learn, adapt, and evolve in response to new information, changing environments, and user preferences. Current AI models are typically trained on static data, with limited ability to adapt through context post-deployment. Our goal is to enable agents to continuously absorb new knowledge and compress it into reusable representations for more up-to-date responses. This capability is also valuable for third-party customization, personalization, and safety alignment. We are interested in both the foundational study of sequential learning dynamics in large language models and practical applications that demand adaptive agents, such as personalized assistance, multimodal learning, and news forecasting.

Research in This Area


Local Reinforcement Learning with Action-Conditioned Root Mean Squared Q-Functions

Action-Conditioned Root Mean Squared Q-Functions (ARQ) is a novel backprop-free value estimation method that applies a goodness function and action conditioning for local reinforcement learning.

Published: 2025-10-08

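The paper's exact formulation isn't reproduced here, but a minimal sketch can illustrate the idea of an RMS "goodness" acting as a Q-value: a single linear-ReLU layer, action conditioning via a one-hot appended to the state, and a standard one-step TD target. The `ARQLayer` name, dimensions, and update rule below are assumptions for illustration, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class ARQLayer:
    """One locally trained linear-ReLU layer whose Q-value for a
    (state, action) pair is the root-mean-squared activation
    ("goodness") of the layer -- no gradients flow between layers."""
    def __init__(self, in_dim, hidden, lr=1e-2):
        self.W = rng.normal(0.0, 0.1, (hidden, in_dim))
        self.lr = lr

    def q(self, x):
        h = np.maximum(self.W @ x, 0.0)              # ReLU activations
        return np.sqrt(np.mean(h ** 2) + 1e-8), h

    def local_update(self, x, td_target):
        q, h = self.q(x)
        # Gradient of 0.5*(q - target)^2 w.r.t. this layer's own weights:
        # dq/dh_i = h_i / (n * q); zero entries of h mask the ReLU correctly.
        g = (q - td_target) / (q * h.size)
        self.W -= self.lr * np.outer(g * h, x)
        return q

# Toy usage: two actions, conditioning via a one-hot appended to the state.
layer = ARQLayer(in_dim=4 + 2, hidden=32)
state, next_state = rng.normal(size=4), rng.normal(size=4)
reward, gamma = 1.0, 0.9

def sa(s, a):                                        # [state; one-hot action]
    return np.concatenate([s, np.eye(2)[a]])

target = reward + gamma * max(layer.q(sa(next_state, a))[0] for a in (0, 1))
layer.local_update(sa(state, 0), target)             # one-step TD-style update
```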

StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding

StreamMem is a query-agnostic KV cache memory mechanism for streaming video understanding: it compresses incoming visual tokens into a fixed-size KV memory without knowing the eventual question, enabling efficient question answering over long video streams.

Published: 2025-08-21

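As a rough sketch of what query-agnostic KV compression can look like: a fixed-budget cache that scores tokens by how much attention a few fixed, generic probe vectors pay them and evicts the rest. The probe vectors and scoring rule below are illustrative assumptions, not StreamMem's exact mechanism.

```python
import numpy as np

class QueryAgnosticKVCache:
    """Fixed-budget KV memory for a token stream: on overflow, keep the
    tokens that a small set of generic (query-agnostic) probe vectors
    attend to most."""
    def __init__(self, budget, dim, n_proxy=4, seed=0):
        rng = np.random.default_rng(seed)
        self.budget = budget
        self.proxy_q = rng.normal(size=(n_proxy, dim))   # fixed generic probes
        self.K = np.empty((0, dim))
        self.V = np.empty((0, dim))

    def append(self, K_new, V_new):
        self.K = np.vstack([self.K, K_new])
        self.V = np.vstack([self.V, V_new])
        if len(self.K) > self.budget:
            logits = self.proxy_q @ self.K.T / np.sqrt(self.K.shape[1])
            attn = np.exp(logits - logits.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)
            score = attn.mean(axis=0)                    # attention mass per token
            keep = np.sort(np.argsort(score)[-self.budget:])  # top-k, stream order
            self.K, self.V = self.K[keep], self.V[keep]

# Stream 20 "frames" of 16 visual tokens each into a 64-token budget.
rng = np.random.default_rng(1)
cache = QueryAgnosticKVCache(budget=64, dim=32)
for _ in range(20):
    toks = rng.normal(size=(16, 32))
    cache.append(toks, toks)
print(cache.K.shape)   # (64, 32): bounded memory regardless of stream length
```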

Context Tuning for In-Context Optimization

Context Tuning is a simple and effective method that optimizes a trainable prompt or prefix initialized from task-specific demonstration examples, significantly enhancing few-shot adaptation of LLMs without fine-tuning model parameters.

Published: 2025-07-06

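A minimal sketch of the underlying recipe: optimize only a learnable context against a few-shot loss while the model stays frozen. The toy mean-pooling "model" and the random context initialization below are placeholders (the method itself initializes the context from demonstrations).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, prompt_len = 16, 4

# A stand-in frozen "model": mean-pool [context; input] tokens, linear head.
W_frozen = torch.randn(d, 2)

def frozen_model(context, x):
    seq = torch.cat([context, x], dim=0)       # (prompt_len + T, d)
    return seq.mean(dim=0) @ W_frozen          # 2-way logits

# Few-shot demonstrations (random stand-ins for embedded examples).
shots = [(torch.randn(3, d), torch.tensor(i % 2)) for i in range(8)]

# Gradients flow only into the context; the model is never updated.
context = torch.randn(prompt_len, d, requires_grad=True)
opt = torch.optim.Adam([context], lr=5e-2)

for step in range(200):
    loss = torch.stack([F.cross_entropy(frozen_model(context, x)[None], y[None])
                        for x, y in shots]).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final few-shot loss: {loss.item():.3f}")
```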

Are LLMs Prescient? A Continuous Evaluation using Daily News as Oracle

Our new benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events using only knowledge acquired during pre-training.

Published: 2024-11-13

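A sketch of what a continuously generated forecasting item might look like; the field names and the true/false vs. multiple-choice split are assumptions about the benchmark's format rather than its exact schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ForecastQA:
    question: str          # e.g. "Will X happen by <resolution_date>?"
    answer: str            # resolved from the source news article
    resolution_date: date  # the day the article settles the outcome
    kind: str              # "true/false" or "multiple-choice"

def tests_forecasting(qa: ForecastQA, model_cutoff: date) -> bool:
    # An item genuinely probes prediction only if it resolves after the
    # model's pre-training cutoff; earlier items test recall instead.
    return qa.resolution_date > model_cutoff

qa = ForecastQA("Will the team win its next match?", "yes",
                date(2024, 11, 13), "true/false")
print(tests_forecasting(qa, model_cutoff=date(2023, 4, 30)))   # True
```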

CoLLEGe: Concept Embedding Generation for Large Language Models

CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions.

Published: 2024-03-22

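The spirit of the approach can be sketched as a small generator that encodes a handful of example sentences and pools them into one new-token embedding; the GRU encoder and mean pooling below are stand-ins, not CoLLEGe's actual architecture.

```python
import torch
import torch.nn as nn

class ConceptEmbedder(nn.Module):
    """Meta-learned generator: encode a few example sentences and pool
    them into a single embedding for a brand-new token."""
    def __init__(self, d=32):
        super().__init__()
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.proj = nn.Linear(d, d)

    def forward(self, example_sents):               # (n_examples, T, d)
        _, h = self.encoder(example_sents)          # (1, n_examples, d)
        return self.proj(h.squeeze(0).mean(dim=0))  # one new-token embedding

torch.manual_seed(0)
gen = ConceptEmbedder()
examples = torch.randn(4, 10, 32)                   # 4 sentences using the new word
new_row = gen(examples)

# Append the generated row to a (toy) LLM embedding table so the new
# token is usable immediately, without any gradient updates.
embedding_table = torch.randn(1000, 32)
embedding_table = torch.cat([embedding_table, new_row.detach()[None]])
print(embedding_table.shape)                        # torch.Size([1001, 32])
```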

Reawakening Knowledge: Anticipatory Recovery from Catastrophic Interference via Structured Training

We discover a curious and remarkable property of LLMs fine-tuned sequentially on documents in a fixed, cyclically repeated order: they exhibit anticipatory behavior, recovering from forgetting on documents before encountering those documents again.

Published: 2024-03-14

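The measurement protocol is easy to sketch: fine-tune on documents in a fixed cyclic order and record each document's loss just before it is revisited. The toy least-squares "documents" below illustrate only the protocol; the anticipatory effect itself is reported for large language models, not for a model this small.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, dim, lr = 5, 8, 0.1

# Toy stand-ins: "documents" are least-squares tasks, the "model" is one
# weight vector, and fine-tuning visits documents in the same cyclic
# order every epoch -- the structured-training setting of the paper.
docs = [(rng.normal(size=(16, dim)), rng.normal(size=16)) for _ in range(n_docs)]
w = np.zeros(dim)

def loss(w, doc):
    X, y = doc
    return float(np.mean((X @ w - y) ** 2))

for epoch in range(3):
    for i, (X, y) in enumerate(docs):
        pre = loss(w, (X, y))        # loss on doc i just BEFORE revisiting it;
                                     # anticipatory recovery = this pre-visit
                                     # loss dropping as the cycle nears doc i
        for _ in range(50):          # fine-tune on doc i only
            w -= lr * (X.T @ (X @ w - y)) / len(y)
        print(f"epoch {epoch}, doc {i}: pre-visit loss {pre:.3f}")
```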

Learning and Forgetting Unsafe Examples in Large Language Models

We explore the behavior of LLMs fine-tuned on noisy custom data containing unsafe content and propose a simple filtering algorithm for detecting harmful content based on the phenomenon of selective forgetting.

Published: 2023-12-20

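A minimal sketch of a forgetting-based filter under the paper's premise that unsafe examples are forgotten fastest during a subsequent round of safe fine-tuning; the threshold and the exact forgetting score are illustrative choices, not the paper's settings.

```python
import numpy as np

def forget_filter(loss_before, loss_after, threshold=1.0):
    """Flag examples the model 'forgets' during later safe fine-tuning.

    loss_before / loss_after: per-example losses on the noisy custom
    data, measured before and after fine-tuning on clean, safe data.
    Unsafe examples are expected to show the largest loss increase."""
    forgetting = np.asarray(loss_after) - np.asarray(loss_before)
    return forgetting > threshold          # True = likely unsafe, drop it

before = [0.40, 0.45, 0.35]
after = [0.55, 2.10, 0.42]                 # example 1 was forgotten sharply
keep = ~forget_filter(before, after)
print(keep)                                # [ True False  True ]
```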

LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos

LifelongMemory is a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval.

Published: 2023-12-07

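One way such a pipeline can be sketched: caption clips into a timestamped log, retrieve candidate captions for a query, and hand them to an LLM for the final answer. The keyword-overlap retrieval below is a stand-in for the framework's LLM-based stages.

```python
from dataclasses import dataclass

@dataclass
class ClipCaption:
    t_start: float   # seconds into the egocentric recording
    t_end: float
    text: str        # caption produced by a video captioning model

def retrieve(captions, query_terms, k=2):
    """Keyword-overlap retrieval over the caption log (illustrative)."""
    score = lambda c: sum(t in c.text.lower() for t in query_terms)
    return sorted(captions, key=score, reverse=True)[:k]

log = [ClipCaption(0, 30, "I pick up the red mug from the sink"),
       ClipCaption(30, 60, "I water the plants on the windowsill"),
       ClipCaption(60, 90, "I place my keys on the kitchen counter")]

hits = retrieve(log, ["keys", "counter"])
context = "\n".join(f"[{c.t_start:.0f}-{c.t_end:.0f}s] {c.text}" for c in hits)
# `context` would be packed into an LLM prompt so it can answer
# "Where did I leave my keys?" with a time-stamped response.
print(context)
```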