agentic learning
ai lab
Agentic Learning AI Lab is a research lab at New York University, founded in 2022. We develop learning algorithms that enable future agentic AI to learn and adapt flexibly in the real world.

Recent Works

Memory Storyboard: Leveraging Temporal Segmentation for Streaming Self-Supervised Learning from Egocentric Videos

Memory Storyboard groups recent past frames into temporal segments, providing an effective summarization of past visual streams for memory replay.
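
The paper's segmentation and summarization details are not given here, but the general idea of grouping a frame stream into temporal segments and keeping one summary per segment for replay can be sketched as follows. The similarity threshold, the cosine-based boundary rule, and mean-feature summaries are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def segment_stream(features, threshold=0.8):
    """Group a stream of frame features into temporal segments.

    A new segment starts whenever cosine similarity to the current
    segment's anchor frame drops below `threshold` (an assumed,
    illustrative boundary rule). Returns (start, end) pairs, end exclusive.
    """
    segments = []
    start = 0
    anchor = features[0] / np.linalg.norm(features[0])
    for i in range(1, len(features)):
        f = features[i] / np.linalg.norm(features[i])
        if float(anchor @ f) < threshold:
            segments.append((start, i))
            start, anchor = i, f
    segments.append((start, len(features)))
    return segments

def summarize(features, segments):
    """One mean-feature summary per segment, e.g. for a replay buffer."""
    return np.stack([features[s:e].mean(axis=0) for s, e in segments])
```

A stream whose features jump abruptly at some point would be split into two segments at that boundary, with one summary vector kept for each.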

Published: 2025-01-21

Are LLMs Prescient? A Continuous Evaluation using Daily News as Oracle

Our new benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on pre-training data.

Published: 2024-11-13

PooDLe: Pooled and Dense Self-Supervised Learning from Naturalistic Videos

We propose PooDLe, a self-supervised learning method that combines an invariance-based objective on pooled representations with a dense SSL objective that enforces equivariance to optical flow warping.
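The two loss terms named above can be illustrated with a minimal numpy sketch: an invariance term on globally pooled features plus a dense term that compares one frame's feature map against the other's map warped by optical flow. This is a toy rendering of the objective's shape, not PooDLe's implementation; the warping scheme, pooling, and weights are all assumptions.

```python
import numpy as np

def warp(fmap, flow):
    """Backward-warp an (H, W, C) feature map by an integer-valued
    optical flow field of shape (H, W, 2), clipping at the borders."""
    H, W, _ = fmap.shape
    out = np.zeros_like(fmap)
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            sy = int(np.clip(y + dy, 0, H - 1))
            sx = int(np.clip(x + dx, 0, W - 1))
            out[y, x] = fmap[sy, sx]
    return out

def pooled_plus_dense_loss(f1, f2, flow, w_pooled=1.0, w_dense=1.0):
    """Illustrative combined objective (not the paper's):
    - pooled term: invariance between globally pooled features
    - dense term: equivariance of the dense map under flow warping
    """
    pooled = np.mean((f1.mean(axis=(0, 1)) - f2.mean(axis=(0, 1))) ** 2)
    dense = np.mean((warp(f2, flow) - f1) ** 2)
    return w_pooled * pooled + w_dense * dense
```

With zero flow and identical feature maps both terms vanish, while mismatched maps yield a positive loss, which is the basic behavior the combined objective relies on.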

Published: 2024-08-20

ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation

ProCreate is a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction.

Published: 2024-08-05

Integrating Present and Past in Unsupervised Continual Learning

We formulate Osiris, a unifying framework for unsupervised continual learning (UCL), which disentangles learning objectives that encompass stability, plasticity, and cross-task consolidation.

Published: 2024-04-29

CoLLEGe: Concept Embedding Generation for Large Language Models

CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions.

Published: 2024-03-22

Reawakening Knowledge: Anticipatory Recovery from Catastrophic Interference via Structured Training

We discover a curious and remarkable property of LLMs fine-tuned sequentially in this structured setting: they exhibit anticipatory behavior, recovering from forgetting on documents before encountering them again.

Published: 2024-03-14

Self-Supervised Learning of Video Representations from a Child's Perspective

We train self-supervised video models on longitudinal, egocentric headcam recordings collected from a child over a two-year period in their early development.

Published: 2024-02-01

Learning and Forgetting Unsafe Examples in Large Language Models

We explore the behavior of LLMs fine-tuned on noisy custom data containing unsafe content and propose a simple filtering algorithm that detects harmful content based on the phenomenon of selective forgetting.
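
The blurb does not spell out the filtering algorithm, but the core intuition of "selective forgetting" can be sketched generically: if certain examples are forgotten much faster than others after further fine-tuning, the size of the loss increase can serve as a flagging signal. The per-example loss arrays, ranking rule, and top-k cutoff below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def flag_by_forgetting(loss_before, loss_after, k=3):
    """Illustrative filter: rank examples by how much their loss
    increased after additional fine-tuning (i.e. how strongly the
    model "forgot" them) and flag the k most-forgotten examples."""
    forgetting = np.asarray(loss_after) - np.asarray(loss_before)
    return np.argsort(-forgetting)[:k]
```

In this sketch, examples whose loss rises sharply after further training are flagged as candidates for removal from the custom dataset.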

Published: 2023-12-20

LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos

LifelongMemory is a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval.

Published: 2023-12-07
