Agentic Learning AI Lab
Agentic Learning AI Lab is a research lab at New York University founded in 2022. We develop learning algorithms that enable future agentic AI to learn and adapt flexibly in the real world.

Recent Works

Replay Can Provably Increase Forgetting

We provide a theoretical analysis of sample replay in over-parameterized continual linear regression and show that replay can provably increase forgetting in the worst case, even though the model has the capacity to memorize all tasks.

Published: 2025-06-04

Learn more
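
For readers outside continual learning, the following generic notation (not taken from the paper) shows the quantity at stake: forgetting of an earlier task measured after sequential training, with or without replayed samples.

```latex
% Tasks t = 1, ..., T arrive sequentially with data (X_t, y_t); w_t denotes the
% parameters after training on task t, e.g. the minimum-norm interpolating
% solution in the over-parameterized regime (possibly fit with replayed samples).
\[
  \mathcal{L}_t(w) \;=\; \frac{1}{n_t}\,\lVert X_t w - y_t \rVert_2^2,
  \qquad
  F_t \;=\; \mathcal{L}_t(w_T) \;-\; \mathcal{L}_t(w_t).
\]
% The paper constructs worst-case task sequences on which the forgetting F_t is
% larger with sample replay than without it.
```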

Memory Storyboard: Leveraging Temporal Segmentation for Streaming Self-Supervised Learning from Egocentric Videos

Memory Storyboard groups recent past frames into temporal segments and provides an effective summary of the past visual stream for memory replay.

Published: 2025-01-21

Learn more
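
As a rough, hedged illustration of the idea (not the paper's implementation), the sketch below cuts a stream of frame features into temporal segments wherever consecutive frames become dissimilar and keeps one representative frame per segment for the replay buffer; the similarity threshold and the choice of representative are placeholder heuristics.

```python
import numpy as np

def segment_stream(features: np.ndarray, threshold: float = 0.8):
    """Split a stream of per-frame features into temporal segments.

    A new segment starts whenever the cosine similarity between
    consecutive frames drops below `threshold` (placeholder heuristic).
    """
    # Normalize features so dot products are cosine similarities.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    boundaries = [0]
    for i in range(1, len(normed)):
        if float(normed[i] @ normed[i - 1]) < threshold:
            boundaries.append(i)
    boundaries.append(len(normed))
    return [(boundaries[j], boundaries[j + 1]) for j in range(len(boundaries) - 1)]

def summarize_segments(features: np.ndarray, segments):
    """Keep one representative per segment: the frame closest to the segment mean."""
    representatives = []
    for start, end in segments:
        seg = features[start:end]
        center = seg.mean(axis=0)
        representatives.append(start + int(np.argmin(np.linalg.norm(seg - center, axis=1))))
    return representatives  # indices of frames to store in the replay buffer

# Example: 1000 frames of 128-d features from an egocentric video stream.
frames = np.random.randn(1000, 128).astype(np.float32)
segments = segment_stream(frames)
replay_indices = summarize_segments(frames, segments)
```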

Are LLMs Prescient? A Continuous Evaluation using Daily News as Oracle

Our new benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on pre-training data.

Published: 2024-11-13

Learn more
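
A minimal sketch of how a continuously updated, news-driven benchmark of this kind could be consumed; the QAPair layout and the answer_question hook are hypothetical stand-ins, not the Daily Oracle data format or API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QAPair:
    question: str          # generated from a news article
    answer: str            # ground-truth outcome reported in the news
    event_date: date       # when the event was reported

def evaluate(qa_pairs, answer_question, cutoff: date):
    """Score a model on questions about events after its pre-training cutoff.

    `answer_question` is a placeholder callable wrapping the LLM under test.
    """
    future = [qa for qa in qa_pairs if qa.event_date > cutoff]
    correct = sum(answer_question(qa.question).strip().lower() == qa.answer.lower()
                  for qa in future)
    return correct / max(len(future), 1)

# Example usage with a trivial baseline that always answers "yes".
pairs = [QAPair("Will team A win the match on 2024-11-10?", "yes", date(2024, 11, 10))]
print(evaluate(pairs, lambda q: "yes", cutoff=date(2023, 12, 31)))
```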

PooDLe: Pooled and Dense Self-Supervised Learning from Naturalistic Videos

We propose PooDLe, a self-supervised learning method that combines an invariance-based objective on pooled representations with a dense SSL objective that enforces equivariance to optical flow warping.

Published: 2024-08-20

Learn more
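
The sketch below shows the general shape of such a combined objective, assuming dense feature maps from two video frames and a precomputed optical flow between them; the warping and loss terms are simplified placeholders rather than PooDLe's actual formulation.

```python
import torch
import torch.nn.functional as F

def warp_features(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a feature map (B, C, H, W) with a flow field (B, 2, H, W) in pixels."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    grid_x = (xs[None] + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys[None] + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)          # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

def combined_ssl_loss(feat_a, feat_b, flow_b_to_a):
    """Pooled invariance term plus a dense term on flow-aligned feature maps (illustrative)."""
    # Dense term: frame A's features should match frame B's features warped into A.
    dense = F.mse_loss(feat_a, warp_features(feat_b, flow_b_to_a))
    # Pooled term: spatially pooled representations should be invariant across frames.
    pooled_a = feat_a.mean(dim=(2, 3))
    pooled_b = feat_b.mean(dim=(2, 3))
    pooled = 1 - F.cosine_similarity(pooled_a, pooled_b, dim=1).mean()
    return pooled + dense

# Example with random tensors standing in for encoder outputs and optical flow.
feat_a = torch.randn(4, 64, 32, 32)
feat_b = torch.randn(4, 64, 32, 32)
flow = torch.randn(4, 2, 32, 32)
loss = combined_ssl_loss(feat_a, feat_b, flow)
```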

ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation

ProCreate is a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction.

Published: 2024-08-05

Learn more
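
As a hedged illustration of the underlying idea of propelling samples away from a reference set during generation, the snippet below takes a gradient step on a repulsive energy; the energy function and the way it would be interleaved with a diffusion sampler's denoising steps are stand-ins, not ProCreate's actual procedure.

```python
import torch

def repulsive_energy(x: torch.Tensor, references: torch.Tensor) -> torch.Tensor:
    """Energy that grows as the sample x gets closer to any reference image.

    x: (C, H, W) current sample; references: (N, C, H, W) training images to avoid.
    """
    dists = (references - x.unsqueeze(0)).flatten(1).norm(dim=1)
    return (1.0 / (dists + 1e-6)).sum()

def propulsive_step(x: torch.Tensor, references: torch.Tensor, scale: float = 0.1):
    """One guidance step: move x along the negative energy gradient, away from references."""
    x = x.detach().requires_grad_(True)
    energy = repulsive_energy(x, references)
    grad, = torch.autograd.grad(energy, x)
    return (x - scale * grad).detach()

# Example: nudge a random "sample" away from two reference images between denoising steps.
sample = torch.randn(3, 64, 64)
refs = torch.randn(2, 3, 64, 64)
sample = propulsive_step(sample, refs)
```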

Integrating Present and Past in Unsupervised Continual Learning

We formulate Osiris, a unifying framework for unsupervised continual learning (UCL), which disentangles learning objectives that encompass stability, plasticity, and cross-task consolidation.

Published: 2024-04-29

Learn more
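
The sketch below mirrors only the high-level decomposition into plasticity, stability, and cross-task consolidation terms; the placeholder SSL objective, the frozen past model, and the loss weights are illustrative assumptions rather than Osiris's actual objectives.

```python
import torch
import torch.nn.functional as F

def ssl_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Placeholder self-supervised objective: pull two views' embeddings together."""
    return 1 - F.cosine_similarity(z1, z2, dim=1).mean()

def ucl_loss(model, frozen_past_model, current_views, memory_views,
             w_plasticity=1.0, w_stability=1.0, w_consolidation=1.0):
    """Illustrative decomposition into plasticity, stability, and cross-task terms."""
    cur1, cur2 = current_views      # two augmented views of current-task images
    mem1, mem2 = memory_views       # two augmented views of replayed past images

    # Plasticity: keep learning from the current task's data.
    plasticity = ssl_loss(model(cur1), model(cur2))
    # Stability: stay consistent with a frozen copy of the past model on memory data.
    with torch.no_grad():
        past_target = frozen_past_model(mem1)
    stability = ssl_loss(model(mem1), past_target)
    # Cross-task consolidation: apply the SSL objective jointly to current and memory data.
    consolidation = ssl_loss(model(torch.cat([cur1, mem1])),
                             model(torch.cat([cur2, mem2])))

    return (w_plasticity * plasticity
            + w_stability * stability
            + w_consolidation * consolidation)

# Toy usage with a linear encoder on flattened 8x8 "images".
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 32))
frozen = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 32))
views = lambda: (torch.randn(16, 3, 8, 8), torch.randn(16, 3, 8, 8))
loss = ucl_loss(encoder, frozen, views(), views())
```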

CoLLEGe: Concept Embedding Generation for Large Language Models

CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions.

Published: 2024-03-22

Learn more
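
A schematic, hedged sketch of the general recipe of encoding a few example sentences into a single new-token embedding for a language model; the pooling encoder and the way the embedding is appended to the embedding table are invented for illustration and do not reflect CoLLEGe's architecture.

```python
import torch
import torch.nn as nn

class ConceptEmbeddingGenerator(nn.Module):
    """Map a handful of example-sentence encodings to one new-token embedding."""

    def __init__(self, sent_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(sent_dim, embed_dim), nn.GELU(),
                                  nn.Linear(embed_dim, embed_dim))

    def forward(self, sentence_encodings: torch.Tensor) -> torch.Tensor:
        # sentence_encodings: (num_examples, sent_dim); mean-pool, then project.
        return self.proj(sentence_encodings.mean(dim=0))

# Example: turn 3 example-sentence encodings into an embedding for a new token,
# then append it to a (toy) LM input-embedding matrix so the new concept can be used.
generator = ConceptEmbeddingGenerator(sent_dim=384, embed_dim=256)
examples = torch.randn(3, 384)                 # stand-in sentence encodings
new_embedding = generator(examples)            # shape (256,)

embedding_table = torch.randn(1000, 256)       # stand-in LM input-embedding matrix
extended_table = torch.cat([embedding_table, new_embedding.unsqueeze(0)], dim=0)
print(extended_table.shape)                    # torch.Size([1001, 256])
```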

Reawakening Knowledge: Anticipatory Recovery from Catastrophic Interference via Structured Training

We discover a curious and remarkable property of LLMs fine-tuned sequentially on a fixed, repeated sequence of documents: they exhibit anticipatory behavior, recovering from forgetting on documents before encountering them again.

Published: 2024-03-14

Learn more
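
To make the setting concrete, here is a hedged sketch of the kind of protocol this describes: fine-tuning on the same ordered list of documents over several passes while logging each document's loss after every update, so that recovery before re-exposure can be observed. The toy model and loss are placeholders.

```python
import torch

def cyclic_finetune(model, documents, optimizer, loss_fn, num_cycles: int = 5):
    """Fine-tune on documents in a fixed order, repeated for several cycles,
    recording each document's loss after every step (illustrative protocol only)."""
    history = []  # history[step][d] = loss on document d after that step
    for cycle in range(num_cycles):
        for doc in documents:
            # One gradient step on the current document.
            optimizer.zero_grad()
            loss_fn(model, doc).backward()
            optimizer.step()
            # Evaluate forgetting/recovery on every document after the update.
            with torch.no_grad():
                history.append([loss_fn(model, d).item() for d in documents])
    return history

# Toy usage: "documents" are random batches and the "LM" is a linear probe.
model = torch.nn.Linear(16, 16)
docs = [torch.randn(8, 16) for _ in range(4)]
mse = lambda m, d: torch.nn.functional.mse_loss(m(d), d)
history = cyclic_finetune(model, docs, torch.optim.SGD(model.parameters(), lr=0.01), mse)
```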