EgoMemReason: A Memory-driven Reasoning Benchmark
for Long-Horizon Egocentric Video Understanding

¹UNC Chapel Hill   ²NTU Singapore

*Equal contribution

Event Memory

Query Time: Day 4, 10:50:35

Question: Outside of mealtimes, what was the last group activity we did in the projector room?

A. Presenting slides   B. Learning dance   C. Chatting   D. Preparing food   E. Watching movies

Entity Memory

Query Time: Day 4, 13:27:25

Question: What other food have we eaten on this table before?

A. BBQ, pizza   B. Hotpot, pizza, KFC   C. Hotpot, pizza   D. Pizza, KFC

Week-long timeline: egocentric frames sampled from Day 1 through Day 7 (Day 1 14:17:23; Day 2 15:58:53 and 18:23:39; Day 3 12:01:20 and 21:22:33; Day 4 10:50:35 and 13:27:25; Day 5 12:07:58; Day 6 19:20:49; Day 7 12:08:24).
Behavior Memory

Query Time: Day 7, 13:00:00

Question: Where do we usually have meals together?

A. The table outside   B. Gingham table   C. The orange table   D. My desk   E. In restaurant

✅ Multi-type Memory ✅ Long-range Reasoning (week-long) ✅ Multi-evidence Aggregation

Figure 1: Illustration of EgoMemReason for week-long egocentric video memory. Given a query at a specific time, answering requires retrieving and aggregating evidence from multiple temporally distant observations across days. We categorize memory into three types: entity memory (tracking persistent objects and states), event memory (ordering and linking events), and behavior memory (inferring patterns). Together they support multi-type, long-range, multi-evidence reasoning.

Abstract

Next-generation visual assistants such as smart glasses, embodied agents, and always-on life-logging systems must reason over an entire day or more of continuous visual experience. In such ultra-long video settings, relevant information is sparsely distributed across hours or days, making memory a fundamental challenge: models must accumulate information over time, recall previously observed states, track temporal order, and abstract recurring patterns from past experience. However, existing week-long video benchmarks are still primarily designed for perception and recognition, such as locating a specific moment or summarizing global content, rather than reasoning that requires accumulating and integrating evidence across multiple days. To address this gap, we introduce EgoMemReason, a comprehensive benchmark that systematically evaluates week-long egocentric video understanding through the lens of memory-driven reasoning. EgoMemReason evaluates three complementary memory types: entity memory, tracking how object states evolve and change across days; event memory, recalling and ordering activities separated by hours or days; and behavior memory, abstracting recurring patterns from sparse, repeated observations over the entire week. EgoMemReason comprises 500 questions across three memory types and six core challenges, with an average of 5.1 video segments of evidence per question and 25.9 hours of memory backtracking. We evaluate 17 methods, spanning MLLMs and agentic frameworks, on EgoMemReason and find that even the best model achieves only 39.6% overall accuracy. Further analysis shows that the three memory types fail for distinct reasons and that performance degrades as evidence spans longer temporal horizons, indicating that long-horizon memory remains far from solved. We believe EgoMemReason establishes a strong foundation for evaluating and advancing long-context, memory-aware multimodal systems.

Long-Horizon by Design

EgoMemReason pushes both temporal certification (the time span one must search to locate all ground-truth evidence) and evidence per question well beyond prior week-long egocentric benchmarks.

EgoMemReason vs. prior week-long video benchmarks: temporal certification (hours) vs. evidence per question.
| Benchmark | Evidence/Q | Temporal Cert. (h) | Memory Types |
| TeleEgo | ~1 | ~5 | Single-moment |
| EgoLifeQA | ~1 | ~7 | Short-interval |
| MMLifelong-test | ~2 | ~8 | Retrieval-centric |
| EgoMem | ~2.5 | ~7 | Single-event |
| MA-EgoQA | ~3 | ~12 | Cross-event |
| EgoMemReason (Ours) | 5.1 | 25.9 | Entity / Event / Behavior |

Figure 2: Comparison with existing week-long video benchmarks. The x-axis shows the average number of distinct video segments needed to answer a question (evidence), and the y-axis shows temporal certification in hours (the total video duration one must search to locate all ground-truth evidence). Bubble size is proportional to the number of questions. EgoMemReason exceeds the strongest prior benchmark by 2× in evidence count and 2× in temporal certification.
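To make these two quantities concrete, the sketch below shows one way to compute them from a question's evidence timestamps and query time. The timestamps and field handling are illustrative rather than the benchmark's release format, and we read temporal certification as the wall-clock span from the earliest required evidence to the query time; the paper may count only recorded footage within that span.

```python
from datetime import datetime

def evidence_count(evidence_times):
    """Evidence per question: number of distinct ground-truth segments."""
    return len(set(evidence_times))

def temporal_certification_hours(evidence_times, query_time):
    """Temporal certification: how far back from the query time one must
    search to cover all ground-truth evidence. Measured here in wall-clock
    hours; the benchmark may instead count only recorded footage."""
    return (query_time - min(evidence_times)).total_seconds() / 3600.0

# Hypothetical question: asked on Day 4, with evidence on Days 1, 2, and 4.
fmt = "%Y-%m-%d %H:%M:%S"
evidence = [datetime.strptime(t, fmt) for t in
            ("2024-03-01 14:17:23", "2024-03-02 18:23:39", "2024-03-04 10:50:35")]
query = datetime.strptime("2024-03-04 13:27:25", fmt)

print(evidence_count(evidence))                                  # 3
print(round(temporal_certification_hours(evidence, query), 1))   # 71.2
```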

Three Memory Types · Six Core Challenges

We decompose week-long memory into three complementary types inspired by cognitive science, each operationalized into two tasks targeting distinct reasoning demands.

Entity Memory

Track how objects evolve

Re-identify entities across days as they appear, disappear, and resurface under different lighting, viewpoints, or locations.

  • Cumulative State Tracking: Track how an entity's location or condition changes across observations separated by hours or days.
  • Temporal Counting: Count distinct instances of a category up to a query time, distinguishing repeated occurrences from new ones.

Event Memory

Order and link events

Retrieve, temporally organize, and relate discrete events from a rich stream of activities that unfold across hours or days.

  • Event Ordering: Arrange events drawn from different days into the correct temporal order across large temporal gaps.
  • Event Linking: Identify the event matching a set of contextual constraints (location, activity, time-of-day).

Behavior Memory

Abstract repeated patterns

Distill higher-level priors from repeated observations — patterns that no single observation can reveal.

  • Spatial Preference Inference: Infer recurring spatial habits (e.g., where a person typically performs a given activity).
  • Activity Pattern Inference: Predict likely next states based on learned routines (e.g., where the person goes after lunch).

Figure 3: Overview of the six core challenges across three memory types in EgoMemReason. Within each example, the week-long timeline shows evidence frames sampled at different timestamps (e.g., D1, D2 denote days, and Q-D5 indicates the query timestamp on Day 5, highlighted by a dashed box). Green frames indicate relevant evidence and red frames indicate distracting observations.

Benchmark Construction

EgoMemReason is built on the EgoLife dataset through a four-stage pipeline that ensures every question is temporally grounded, visually verified, and genuinely challenging. Only 15% of initial candidates survive the combined filtering and human verification stages.

Stage 1

Evidence Preparation

Convert week-long video into structured evidence: clip-level object-centric captions plus hierarchical event summaries at three temporal granularities.

Stage 2

Memory-Centric QA Generation

Task-specific generators for entity / event / behavior produce candidate multiple-choice questions, each constrained to a designated query timestamp.

Stage 3

Automatic Filtering

Blind LLM tests reject questions that can be answered from text alone (text leakage); we additionally enforce visual grounding and a minimum 2-hour temporal gap across supporting evidence (a minimal sketch of this gap check follows Stage 4).

Stage 4

Human Verification

Six annotators review each surviving question (~20 min each), validating answers and iteratively refining distractors and visual grounding.
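To illustrate the Stage 3 temporal constraint, the sketch below keeps a candidate question only if its supporting evidence is spread over at least two hours. The function name, parameterization, and the exact gap criterion (overall span rather than pairwise gaps) are our own simplifications, and the blind-LLM text-leakage test is not reproduced here.

```python
from datetime import datetime, timedelta

def passes_temporal_gap(evidence_times, min_gap=timedelta(hours=2)):
    """Stage 3 sketch: require the supporting evidence to span at least
    `min_gap`. "Gap" is read here as the span between the earliest and
    latest evidence segments; the benchmark's exact criterion may differ."""
    if len(evidence_times) < 2:
        return False  # single-segment questions carry no long-horizon memory demand
    return max(evidence_times) - min(evidence_times) >= min_gap

# Example: evidence only minutes apart is rejected, cross-day evidence is kept.
fmt = "%Y-%m-%d %H:%M:%S"
close = [datetime.strptime(t, fmt) for t in
         ("2024-03-01 14:00:00", "2024-03-01 14:30:00")]
far = [datetime.strptime(t, fmt) for t in
       ("2024-03-01 14:00:00", "2024-03-03 09:00:00")]
print(passes_temporal_gap(close))  # False
print(passes_temporal_gap(far))    # True
```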


Figure 4: Dataset composition by memory type.

Dataset Composition

  • 500 multiple-choice questions
  • 200 Entity · 200 Event · 100 Behavior
  • 6 core capabilities, all human-verified
  • Avg. 5.1 evidence segments · 25.9 h backtracking
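To make the question format concrete, a single entry can be pictured roughly as the record below. The schema and field names are illustrative (not the released annotation format), the capability label is a guess, and the evidence timestamps are loosely adapted from the Figure 1 timeline; the ground-truth answer is intentionally omitted.

```python
# Illustrative question record; schema, field names, and labels are hypothetical.
example_question = {
    "memory_type": "entity",                         # entity / event / behavior
    "capability": "cumulative_state_tracking",       # one of the six challenges
    "query_time": "DAY4 13:27:25",                   # moment the question is asked
    "question": "What other food have we eaten on this table before?",
    "options": ["BBQ, pizza", "Hotpot, pizza, KFC", "Hotpot, pizza", "Pizza, KFC"],
    "answer": None,                                  # ground truth omitted here
    "evidence_segments": ["DAY1 14:17:23", "DAY2 18:23:39", "DAY3 12:01:20"],
}
```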

Results

We evaluate 17 systems spanning general-purpose MLLMs, video-specific MLLMs, and agentic video frameworks. The strongest model reaches only 39.6% overall — long-horizon memory is far from solved.
Submit your method to the public leaderboard: huggingface.co/spaces/Ted412/EgoMemReason.

17 methods evaluated · Best overall: 39.6% (Gemini-3-Flash) · Best single capability: 50.0% (Counting / Spatial) · Avg. memory backtracking: 25.9 h
| Method | Tracking | Counting | Ordering | Linking | Spatial | Activity | Overall |
| Random | 19.6 | 16.7 | 11.1 | 17.3 | 19.3 | 19.2 | 16.8 |

General MLLMs
| InternVL3.5-8B | 23.0 | 29.0 | 23.0 | 27.0 | 34.0 | 42.0 | 28.0 |
| Qwen-3-VL-8B | 35.0 | 28.0 | 23.0 | 21.0 | 40.0 | 42.0 | 29.6 |
| InternVL3.5-38B | 33.0 | 40.0 | 27.0 | 24.0 | 46.0 | 32.0 | 32.6 |
| Qwen-3-VL-30B-A3B | 36.0 | 48.0 | 25.0 | 26.0 | 40.0 | 30.0 | 34.0 |
| Qwen-3-VL-32B | 35.0 | 46.0 | 27.0 | 27.0 | 50.0 | 46.0 | 36.8 |
| GPT-5 | 29.0 | 42.0 | 20.0 | 18.0 | 32.0 | 28.0 | 27.8 |
| Gemini-3-Flash | 46.0 | 28.0 | 36.0 | 44.0 | 44.0 | 44.0 | 39.6 |
| Gemini-3.1-Pro | 40.0 | 26.0 | 44.0 | 33.0 | 40.0 | 48.0 | 37.4 |

Video-specific MLLMs
| LongVA-7B | 22.0 | 18.0 | 20.0 | 20.0 | 20.0 | 22.0 | 20.6 |
| StreamingVLM | 25.0 | 29.0 | 21.0 | 20.0 | 20.0 | 32.0 | 24.2 |
| InternVideo2.5-8B | 29.0 | 27.0 | 25.0 | 15.0 | 32.0 | 32.0 | 25.6 |
| VideoLLaMA3-8B | 23.0 | 31.0 | 27.0 | 32.0 | 38.0 | 36.0 | 30.0 |
| Molmo2-8B | 36.0 | 50.0 | 27.0 | 25.0 | 34.0 | 22.0 | 33.2 |

Agentic Video Frameworks
| SiLVR | 31.0 | 14.0 | 27.0 | 17.0 | 18.0 | 28.0 | 22.4 |
| Ego-R1 | 30.0 | 18.0 | 23.0 | 18.0 | 48.0 | 32.0 | 25.8 |
| WorldMM | 32.0 | 44.0 | 21.0 | 21.0 | 34.0 | 36.0 | 30.6 |
| AVP | 34.0 | 42.0 | 31.0 | 27.0 | 38.0 | 34.0 | 34.0 |

Table 1: Main benchmark results on EgoMemReason. Accuracy (%) across three memory types and six capability dimensions: Tracking (Cumulative State Tracking), Counting (Temporal Counting), Ordering (Event Ordering), Linking (Event Linking), Spatial (Spatial Preference Inference), and Activity (Activity Pattern Inference). The best result in each column is bolded and the second best is underlined.
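As a sanity check on how the Overall column relates to the six capability columns, most rows are consistent with a question-count-weighted average under the 200 / 200 / 100 split (100 questions per entity and event capability, 50 per behavior capability). This weighting is our inference rather than an explicit statement in the table; for example, the Gemini-3-Flash row reproduces its 39.6%:

```python
# Implied question counts per capability (inferred from the 200/200/100 split;
# this weighting is an assumption, not stated explicitly in Table 1).
counts = {"tracking": 100, "counting": 100, "ordering": 100,
          "linking": 100, "spatial": 50, "activity": 50}
gemini_flash = {"tracking": 46.0, "counting": 28.0, "ordering": 36.0,
                "linking": 44.0, "spatial": 44.0, "activity": 44.0}

overall = sum(counts[k] * gemini_flash[k] for k in counts) / sum(counts.values())
print(overall)  # 39.6, matching the Overall column
```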

Analysis

The three memory types fail for fundamentally different reasons, pointing to three orthogonal axes on which long-horizon video understanding must improve.

Entity — Fine-Grained Visual Grounding

Models are bottlenecked by perceptual precision combined with long-context retention. Text-centric models fall below 25% on Counting; pixel-grounded Molmo2-8B leads all 8B models on both Cumulative State Tracking and Temporal Counting.

Event — Long-Range Temporal Coherence

Even the strongest models stay below 45% on both Ordering and Linking. Several video-specific MLLMs are near random on Ordering — locating one event is solvable, relating many is not.

Behavior — Aggregation over Sparse Evidence

Best models stay at 50.0% (Spatial) and 48.0% (Activity). Strong global summarization does not imply the ability to abstract recurring patterns across many sparsely distributed observations.


Effect of Temporal Certification

Overall accuracy decreases as the temporal span of required evidence grows — with sharply different decay patterns across memory types. Event memory shows the sharpest, most monotonic decline.

| Cert. Length (h) | <8 | 8–16 | 16–32 | 32+ | Total |
| Entity | 28.5 | 33.9 | 32.1 | 30.3 | 31.5 |
| Event | – | 31.1 | 23.0 | 13.5 | 22.0 |
| Behavior | 43.7 | – | 37.0 | – | 41.0 |
| Overall | 40.3 | 33.7 | 32.5 | 23.2 | 29.6 |

Table 2: Effect of temporal certification length on accuracy (%) across memory types. Event memory shows the sharpest decline as the evidence span grows.
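A breakdown like Table 2 can be produced by bucketing per-question correctness by certification length. The sketch below assumes a list of (cert_hours, is_correct) pairs per memory type, which is our own framing rather than released evaluation code.

```python
from collections import defaultdict

# Bins matching the columns of Table 2.
BINS = [(0, 8, "<8"), (8, 16, "8-16"), (16, 32, "16-32"), (32, float("inf"), "32+")]

def accuracy_by_cert(results):
    """results: iterable of (cert_hours, is_correct) pairs for one memory type.
    Returns accuracy (%) per certification bin, skipping empty bins."""
    hits, totals = defaultdict(int), defaultdict(int)
    for cert, correct in results:
        for lo, hi, label in BINS:
            if lo <= cert < hi:
                totals[label] += 1
                hits[label] += int(correct)
                break
    return {label: round(100.0 * hits[label] / totals[label], 1)
            for _, _, label in BINS if totals[label]}

# Toy usage (numbers are made up, not drawn from the benchmark):
print(accuracy_by_cert([(5.0, True), (12.0, False), (40.0, True), (45.0, False)]))
# {'<8': 100.0, '8-16': 0.0, '32+': 50.0}
```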

Effect of Auxiliary Text

Captions and transcripts affect each memory type differently — no configuration meaningfully improves overall performance.

| Trans. | Caption | Entity | Event | Behavior | All |
| ✗ | ✗ | 31.5 | 22.0 | 41.0 | 29.6 |
| ✓ | ✗ | 29.0 | 23.0 | 46.0 | 30.0 |
| ✗ | ✓ | 29.5 | 21.0 | 45.0 | 29.2 |
| ✓ | ✓ | 31.5 | 19.0 | 45.0 | 29.2 |

Table 3: Effect of auxiliary text inputs (transcripts, captions) on accuracy (%). Behavior is the only type that benefits; Event is consistently hurt by captions.


Frame Input & Prompting Strategy

Performance does not improve monotonically with more frames, and chain-of-thought prompting hurts substantially — indicating that the bottleneck lies in how models encode and retrieve long-horizon visual information rather than in input scale or reasoning strategy.


Figure 6: Effect of input frames. No single frame budget is optimal across memory types; event memory is least responsive to frame scaling.


Figure 7: Effect of prompt strategies (Direct QA, ICL, CoT). CoT degrades performance across all memory types — explicit reasoning amplifies errors when the bottleneck is perception, not deliberation.

Citation

@misc{wang2026egomemreasonmemorydrivenreasoningbenchmark,
      title={EgoMemReason: A Memory-Driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding},
      author={Ziyang Wang and Yue Zhang and Shoubin Yu and Ce Zhang and Zengqi Zhao and Jaehong Yoon and Hyunji Lee and Gedas Bertasius and Mohit Bansal},
      year={2026},
      eprint={2605.09874},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2605.09874},
}