An R&D lab studying memory as a foundational substrate for intelligence, across language, vision, audio, and world models.

What we study

Six properties a memory layer must have: open research directions and constraints on the runtime.

[1] Persistent coherence

The same system, across sessions and model upgrades, should be able to refer to a single intelligible past. We study what makes a memory layer hold its shape under churn — and where the cache-as-history pattern silently fails.

[2] One substrate, many runtimes

We are looking for the minimum representation under which agents, workers, and simulation stacks can share the same memory. The hypothesis: most “retrieval pipelines” are reinventing the same substrate poorly.
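As a deliberately hypothetical illustration, here is roughly what a minimum shared representation could look like in Python. The field names, tier labels, and visibility rule are assumptions made for this sketch, not a published schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# A hypothetical minimal record that an agent, a worker, and a simulation
# stack could all read and write. Field names and tier labels are illustrative.
@dataclass
class MemoryRecord:
    record_id: str                        # stable identity across sessions
    content: str                          # what was learned or observed
    created_at: float                     # time, kept alongside meaning
    embedding: Optional[list] = None      # meaning, for similarity recall
    tier: str = "local"                   # "local" | "shared" | "durable"
    namespace: str = "default"            # scoping without a separate store
    provenance: dict = field(default_factory=dict)  # where the record came from

def visible_to(record: MemoryRecord, runtime_namespace: str) -> bool:
    """A record crosses runtimes only if the namespace matches or the tier is durable."""
    return record.namespace == runtime_namespace or record.tier == "durable"
```

The point of the sketch is that every runtime speaks the same type; only the access policy differs.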

[3] Compound intelligence

Memory should compound, not flatten. We study promotion, revision, and consolidation as first-class operators — borrowed in spirit from how biological memory keeps sharpening rather than degrading with scale.
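A sketch of what treating these as first-class operators could mean in code, with promotion, revision, and consolidation written as plain functions over a dict-shaped item. The tier ordering and field names are assumptions for illustration only.

```python
from copy import deepcopy

def promote(item: dict) -> dict:
    """Move an item up a tier (local -> shared -> durable) instead of re-storing it."""
    order = ["local", "shared", "durable"]
    out = deepcopy(item)
    idx = order.index(out.get("tier", "local"))
    out["tier"] = order[min(idx + 1, len(order) - 1)]
    return out

def revise(item: dict, correction: str) -> dict:
    """Replace content while keeping the prior version in the item's history."""
    out = deepcopy(item)
    out.setdefault("history", []).append(out["content"])
    out["content"] = correction
    return out

def consolidate(items: list[dict], summary: str) -> dict:
    """Fold several items into one sharper record that points back at its sources."""
    # Items are assumed to carry an "id" field; this is part of the sketch.
    return {"content": summary, "tier": "shared",
            "sources": [i.get("id") for i in items]}
```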

[4] Governed crossings

What crosses into a model’s context is a decision worth examining. We treat selection, budgeting, and explanation as part of the memory problem, not a layer bolted on top.
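One way to make that concrete is for selection to return an inspectable decision per candidate rather than a bare list of snippets. The shape below is a sketch under assumed names, not the runtime's actual interface.

```python
from dataclasses import dataclass

# Hypothetical shape for a single "crossing" decision: what was admitted,
# why, and what it cost against the context budget.
@dataclass
class CrossingDecision:
    item_id: str
    admitted: bool
    reason: str          # e.g. "within budget", "out of scope", "stale"
    tokens_charged: int  # cost against the context budget

def explain(decisions: list[CrossingDecision]) -> str:
    """Human-readable account of why the prompt contains what it contains."""
    lines = [f"{'+' if d.admitted else '-'} {d.item_id}: {d.reason} "
             f"({d.tokens_charged} tokens)" for d in decisions]
    return "\n".join(lines)
```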

[5] Trust at machine speed

Lineage and receipts are how memory becomes legible — to operators, to evaluators, and to the system itself. We are interested in what changes when a system can inspect its own recall.

[6] Prove it, then ship it

Memory failures are slow to surface and expensive to unwind. We build the evaluation harnesses we wish existed, and publish what we learn — including when the runtime is wrong.

Research Areas

Memory for intelligent systems, not just chatbots.

01 Memory for Agents (Shipping)

The most-studied memory in modern AI is the context window. We are asking what comes after it: a representation in which meaning, phrasing, and time live in the same space, and where what an agent learns can stack rather than dissolve at the next session boundary. The runtime is the place we test what we believe.

Open questions we are working on right now:

  • Compounding vs. flattening. When does adding more recall improve a system, and when does it make retrieval generic? We measure this directly.
  • Honest budgets. Token ceilings as a research instrument — what selection and packing decisions become visible only when capacity is finite? A minimal packing sketch follows this list.
  • Scoped isolation without fragmentation. How tiers (local, shared, durable) and namespaces can stay separate without splitting the substrate they sit on.
  • Receipts as a research surface. What can a system learn about itself when every crossing into the prompt is recorded and traceable?
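A minimal sketch of what an honest budget forces: greedy packing under a finite token ceiling, with a receipt recorded for every candidate whether or not it crosses. The scoring, token counts, and receipt fields are assumptions made for illustration.

```python
def pack_context(candidates: list[dict], budget_tokens: int):
    """Greedily pack scored candidates until the token budget is exhausted."""
    receipts, packed, spent = [], [], 0
    for item in sorted(candidates, key=lambda c: c["score"], reverse=True):
        cost = item["tokens"]
        admitted = spent + cost <= budget_tokens
        receipts.append({
            "item_id": item["id"],
            "admitted": admitted,
            "score": item["score"],
            "tokens": cost,
            "budget_remaining": budget_tokens - spent,
        })
        if admitted:
            packed.append(item)
            spent += cost
    return packed, receipts

# With an unlimited budget every candidate is admitted and nothing is learned;
# a finite budget is what makes the selection policy observable.
```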

02 Memory for Video Generation (Research)

Temporal coherence is a memory problem in disguise. We study what it takes for a generative video model to keep character identity, scene continuity, and physical state across hundreds of frames — without drifting into a different world halfway through. Early notes argue that frame-level conditioning is not enough; some form of persistent state is required. A rough sketch of that persistent state follows the list below.

  • Persistent entity representations across frame sequences
  • Scene-graph memory for spatial and relational consistency
  • Policy-gated injection into diffusion conditioning
  • Deterministic replay for debugging visual artifacts
  • Designed for Sora-class and open-source video pipelines
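A rough Python sketch of the kind of persistent state the notes point toward: entity attributes and scene relations that survive across frames, with a policy-gated slice injected into conditioning. The names and the conditioning hook are assumptions, not tied to any particular model.

```python
class SceneMemory:
    def __init__(self):
        self.entities: dict[str, dict] = {}                 # identity -> attributes
        self.relations: set[tuple[str, str, str]] = set()   # (subject, relation, object)

    def observe(self, entity_id: str, attributes: dict):
        """Update an entity's persistent attributes instead of re-deriving them per frame."""
        self.entities.setdefault(entity_id, {}).update(attributes)

    def relate(self, subject: str, relation: str, obj: str):
        """Record a spatial or relational fact about the scene."""
        self.relations.add((subject, relation, obj))

    def conditioning_for(self, frame_index: int, allowed: set) -> dict:
        """Policy-gated slice of state to inject into the model's conditioning."""
        return {
            "frame": frame_index,
            "entities": {k: v for k, v in self.entities.items() if k in allowed},
            "relations": [r for r in self.relations if r[0] in allowed],
        }
```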

03 Memory for World Models (Research)

Long-horizon agents — in games, simulators, and eventually robotics — need to maintain and update beliefs about an environment that outlasts any single rollout. We are interested in persistent state layers that survive resets, generalize across instantiations of the same world, and stay legible enough to debug. This is where memory, planning, and identity start to look like the same problem. One rough shape for such a belief-state layer is sketched after the list.

  • Belief-state memory that updates with new observations
  • Multi-scale temporal abstraction: frames, episodes, lifetimes
  • Governed state transitions with audit trails
  • Integration with reinforcement learning and planning loops
  • Targeting embodied agents and open-ended environments
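A minimal sketch of a belief-state layer under these constraints, assuming a placeholder exponential-moving-average update. The point is the split between episode-local and lifetime state, not the update rule itself.

```python
class BeliefState:
    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.lifetime: dict[str, float] = {}   # beliefs that survive resets
        self.episode: dict[str, float] = {}    # beliefs local to one rollout

    def observe(self, fact: str, evidence: float):
        """Blend new evidence into both temporal scales."""
        for scope in (self.lifetime, self.episode):
            prior = scope.get(fact, 0.0)
            scope[fact] = self.decay * prior + (1 - self.decay) * evidence

    def reset_episode(self):
        """End of rollout: episode beliefs are dropped, lifetime beliefs persist."""
        self.episode.clear()

    def query(self, fact: str) -> float:
        """Episode evidence dominates when present; otherwise fall back to lifetime."""
        return self.episode.get(fact, self.lifetime.get(fact, 0.0))
```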

Memory is what a system chooses to keep.

Storage is cheap; selection isn’t. The interesting research lives in the choice.