
Infrastructure debugging mode

When you're debugging infrastructure

Two runs. Same inputs. Different state. No replay, no diff, no explanation.

One of three operational modes of the same person: someone building an operating system for their own AI agents. The other two are Operating mode and Building pipelines mode.

You enter this mode when something breaks and you need to understand why. Your observability stack watches everything except the thing that actually matters: what the agent believed and why. The debugging process is reconstructing truth from scattered logs, guessing at state transitions, and hoping to reproduce the issue. You're a log archaeologist. Neotoma makes you a platform engineer with replayable state.

Escaping

Log archaeologist — reverse-engineering truth from logs

Into

Platform engineer with replayable state

Manual orchestration → declarative trust

Tax you pay

Writing glue (checkpoint logic, custom diffing, state serialization)

What you get back

Debugging speed, platform design time, sleep

Same question, different outcome

Without a state layer, agents return stale or wrong data. With Neotoma, every response reads from versioned, schema-bound state.

Pipeline reproducibility

without state layer

Replay yesterday's ingestion pipeline.

Pipeline completed. 3 entity conflicts unresolved.

with state layer

Replay yesterday's ingestion pipeline.

Pipeline replayed deterministically. State matches v47.

Same pipeline, different results

Two runs of the same pipeline with identical inputs returned different entity states. Without content-addressed versioning, there was no way to detect or prevent the drift.
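A minimal sketch of how content-addressed versioning surfaces this kind of drift: hash each run's final entity state in a canonical form, then compare hashes. The entity fields below are illustrative, not Neotoma's actual data model.

```python
import hashlib
import json

def state_hash(entity_state: dict) -> str:
    """Content-address an entity state by hashing its canonical JSON form.

    Sorted keys and fixed separators make the serialization deterministic,
    so identical states always hash identically.
    """
    canonical = json.dumps(entity_state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two runs over "identical" inputs (hypothetical entity):
run_a = {"entity": "acme-config", "rate_limit": 100, "region": "us-east-1"}
run_b = {"entity": "acme-config", "rate_limit": 100, "region": "eu-west-1"}

# One hash comparison detects the drift; no field-by-field log dig needed.
drifted = state_hash(run_a) != state_hash(run_b)
```

Because the hash depends only on content, it also certifies the happy path: two runs that really did converge produce byte-identical version identifiers.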

State mutation visibility

without state layer

What changed on entity acme-config since deploy?

No changes detected.

with state layer

What changed on entity acme-config since deploy?

2 mutations: field 'rate_limit' updated at 14:32, 'region' at 14:38.

Invisible overwrite, broken downstream

An upstream agent silently overwrote a shared entity. Downstream consumers read stale state and produced incorrect outputs. The mutation was invisible to observability tooling.

Compliance & audit

without state layer

Trace output of eval run 2841 to source.

Source data unavailable. Log retention expired.

with state layer

Trace output of eval run 2841 to source.

Output traces to observations #4091, #4092. Full chain available.

Missing provenance, failed audit

An evaluation harness needed to trace an agent's output to its source data. Without an immutable observation log, the audit trail had to be reconstructed manually from logs.
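When every output records the observation ids it was derived from, the trace is a lookup rather than a reconstruction. A sketch of that idea; the record shapes, ids, and source paths are hypothetical:

```python
# Illustrative immutable records: outputs link to observation ids,
# observations link to their raw sources.
OBSERVATIONS = {
    4091: {"source": "s3://bucket/raw/events-0412.json"},
    4092: {"source": "s3://bucket/raw/events-0413.json"},
}
OUTPUTS = {
    "eval-2841": {"derived_from": [4091, 4092]},
}

def trace(output_id: str) -> dict:
    """Follow an output back to the raw sources it was derived from."""
    obs_ids = OUTPUTS[output_id]["derived_from"]
    return {oid: OBSERVATIONS[oid]["source"] for oid in obs_ids}
```

The audit question "where did this number come from?" becomes a chain walk over durable records instead of a race against log retention.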

State reconstruction

without state layer

Reconstruct agent state at 03:12 UTC crash.

State unavailable. Last checkpoint: 22:00 UTC.

with state layer

Reconstruct agent state at 03:12 UTC crash.

State reconstructed from 847 observations. Timeline to 03:12 ready.

Can't reconstruct state after failure

A production agent crashed mid-run. The in-memory state was lost. Without an append-only observation log, the team had no way to reconstruct what the agent knew at the time of failure.
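This is what an append-only observation log buys you: fold every observation up to the crash timestamp and you have the state the agent held at that moment. A sketch with a tiny illustrative log (timestamps and fields are made up):

```python
from datetime import datetime

# Append-only log: records are immutable and timestamped; state is
# never stored directly, only derived by replay.
OBSERVATIONS = [
    {"ts": "2024-01-10T22:00:00+00:00", "field": "mode", "value": "ingest"},
    {"ts": "2024-01-11T01:45:00+00:00", "field": "cursor", "value": 812},
    {"ts": "2024-01-11T03:05:00+00:00", "field": "cursor", "value": 847},
    {"ts": "2024-01-11T04:00:00+00:00", "field": "mode", "value": "idle"},
]

def state_at(observations, as_of: str) -> dict:
    """Fold observations up to `as_of` into the state held at that time."""
    cutoff = datetime.fromisoformat(as_of)
    state = {}
    for obs in sorted(observations, key=lambda o: o["ts"]):
        if datetime.fromisoformat(obs["ts"]) <= cutoff:
            state[obs["field"]] = obs["value"]
    return state

# State at the 03:12 UTC crash: includes the 03:05 cursor update,
# excludes the 04:00 mode change that never happened for the crashed run.
crash_state = state_at(OBSERVATIONS, "2024-01-11T03:12:00+00:00")
```

In-memory state can vanish with the process; a log like this cannot, which is why reconstruction "to 03:12" is a query rather than a forensics project.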

Why this happens

× Cannot reproduce agent runs: same inputs yield different state

× State mutations invisible to debugging and observability tooling

× Debugging production failures requires manual log archaeology

× No provenance trail for state changes across pipeline steps

× No portable state layer; agent memory locked to one vendor's runtime

× Agent state routed through third-party services with no data residency or compliance guarantees

Failure modes without a memory guarantee

Non-reproducible agent runs in production

Invisible state mutation across sessions

No provenance linking outputs to source data

Ordering-sensitive state drift across orchestration steps

No proof of data residency or access control for compliance

State layer locked to one vendor; no portability across runtimes

Agent runs are not reproducible

Two runs of the same agent with identical inputs produce different results. State mutations between sessions are invisible; there is no versioned history to compare, no observation log to replay. Debugging means reading logs and guessing.
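One way to get reproducibility by construction is to fold observations in a canonical order rather than arrival order. The sketch below sorts by timestamp with a content-hash tiebreak; this is an illustration of the principle, not necessarily Neotoma's exact mechanism:

```python
import hashlib
import json

def obs_key(obs: dict):
    """Canonical sort key: timestamp first, content hash as tiebreak,
    so the fold order never depends on delivery order."""
    digest = hashlib.sha256(json.dumps(obs, sort_keys=True).encode()).hexdigest()
    return (obs["ts"], digest)

def fold(observations) -> dict:
    """Fold a set of observations into entity state. The same set
    always yields the same state, however the list was ordered."""
    state = {}
    for obs in sorted(observations, key=obs_key):
        state[obs["field"]] = obs["value"]
    return state

# Hypothetical observations delivered in two different orders:
a = [
    {"ts": "2024-01-11T03:05:00", "field": "cursor", "value": 847},
    {"ts": "2024-01-11T01:45:00", "field": "cursor", "value": 812},
]
b = list(reversed(a))
```

`fold(a)` and `fold(b)` are identical, which is exactly the property the guessing-from-logs workflow lacks.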

State mutations are invisible

Ad-hoc state management overwrites values in place. When an entity changes, the previous value is gone. There is no diff, no provenance, no way to know which agent or pipeline step introduced the change.
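Contrast that with versioned writes, where every change appends a new version and records who wrote it. A minimal sketch (the class and field names are illustrative, not Neotoma's API):

```python
class VersionedEntity:
    """Never overwrite in place: each write appends a version, so the
    previous value and the writer's identity both survive."""

    def __init__(self):
        self.versions = []  # list of (fields, written_by)

    def write(self, fields: dict, written_by: str):
        self.versions.append((dict(fields), written_by))

    def current(self) -> dict:
        return self.versions[-1][0] if self.versions else {}

    def diff(self, old: int, new: int):
        """Fields that changed between two versions, plus who wrote the new one."""
        before, _ = self.versions[old]
        after, author = self.versions[new]
        changed = {k: (before.get(k), v)
                   for k, v in after.items() if before.get(k) != v}
        return changed, author

entity = VersionedEntity()
entity.write({"rate_limit": 50, "region": "us-east-1"}, written_by="deploy-bot")
entity.write({"rate_limit": 100, "region": "us-east-1"}, written_by="tuning-agent")

changed, author = entity.diff(0, 1)
```

The diff answers both debugging questions at once: what changed (`rate_limit`, 50 to 100) and which writer introduced it.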

No audit trail for compliance or evaluation

Evaluation harnesses need to compare agent outputs against known-good state. Compliance requires tracing decisions to source data. Without an immutable observation log, neither is possible without rebuilding the audit trail manually.

State layer locked to one vendor's runtime

Each agent runtime provides its own memory abstraction: none portable, none interoperable. Migrating to a new orchestration framework means rebuilding state management from scratch. There is no standard state layer that works across vendors.

No data residency guarantees for agent state

Agent state flows through third-party APIs with no contractual guarantee about where it's stored, who can access it, or whether it's used for model training. For teams with SOC 2, HIPAA, or GDPR obligations, opaque provider memory is a compliance gap that manual audits cannot close.

Your application teams ship in tight cycles. Your state layer should too. If you can't replay an agent run, you can't debug it. If you can't debug it, you can't iterate. Neotoma makes agent state inspectable, diffable, and replayable, so your debugging cycle is minutes, not days of log archaeology.

AI needs

What you need from your AI tools, and what current tools don't provide.

How Neotoma solves this

Neotoma replaces the glue you've been hand-rolling (checkpoint logic, state serialization, custom diffing) with deterministic primitives. Append-only observations, versioned history, and replayable timelines become infrastructure you build on, not plumbing you maintain.

[

Deterministic state evolution

Every state transition is content-addressed and versioned. Same observations always produce the same entity state: no ordering sensitivity, no silent drift. Agent runs become reproducible by construction.

](/deterministic-state-evolution)[

Append-only observation log

Observations are immutable. Corrections add new data; they never overwrite. The full state can be reconstructed from the observation log at any point in time.

](/reproducible-state-reconstruction)[

Full provenance and replayable timeline

Every entity, relationship, and fact links back to the observation that created it. Replay the timeline to any historical state. Diff versions to understand what changed and when.

](/replayable-timeline)[

Schema-first validation

Entity types enforce schema constraints at write time. Malformed or invalid data is rejected before it enters the memory graph, preventing garbage-in-garbage-out failures across agent runtimes and orchestration layers.

](/schema-constraints)
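The schema-first validation guarantee above fits in a few lines of pseudocode-grade Python. Everything here is illustrative, including the entity type, field names, and error messages; it shows the write-time rejection pattern, not Neotoma's actual interface:

```python
# Hypothetical schema registry: entity type -> required fields and types.
SCHEMA = {
    "tool_config": {"name": str, "rate_limit": int},
}

def validate_write(entity_type: str, fields: dict) -> dict:
    """Reject malformed writes before they enter the memory graph."""
    schema = SCHEMA[entity_type]
    unknown = set(fields) - set(schema)
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    for key, expected in schema.items():
        if key not in fields:
            raise ValueError(f"missing field: {key}")
        if not isinstance(fields[key], expected):
            raise ValueError(f"{key}: expected {expected.__name__}")
    return fields
```

Rejecting at write time means downstream consumers never have to defend against garbage: anything readable from the store already passed the constraint.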

What actually changes

You stop writing glue. Checkpoint logic, state serialization, custom diffing, retry handlers. The guarantees you've been hand-rolling become primitives. You declare invariants instead of building safety nets.

When something fails, you query the timeline instead of reconstructing it from logs. Post-mortems take thirty minutes because provenance answers "what changed and when" directly. Your team stops treating agent state as a black box and starts treating it like any other part of the stack they can reason about.

The job shifts from reactive firefighting to proactive platform design.

Key differences

How your needs differ from Agent system builders:

  • Primary layer: infrastructure/platform (runtimes, orchestration, observability), not application workflows
  • Adoption motion: evaluate guarantees first, then standardize across teams
  • Decision buyer: platform or reliability leads with infrastructure budget and longer review cycles
  • Success metric: reproducible runs and auditability at the platform level, not just better agent outputs

Data types for better remembrance

The entity types you'll store most often.

agent_session

Session state with versioned context windows and accumulated facts

action

Agent actions with inputs, outputs, timestamps, and provenance

pipeline

Multi-step orchestration workflows with step-level state tracking

evaluation

Eval results, benchmarks, and regression tracking tied to entity state

audit_event

Immutable log of state transitions, entity mutations, and corrections

tool_config

Agent tool configurations, MCP server bindings, and runtime parameters

entity_graph

Resolved entities with typed relationships and temporal evolution

runbook

Operational procedures and agent behavioral rules with version history

When you don't need this

If your agents are stateless request-response (no accumulated context, no entity tracking), standard logging and tracing are sufficient. Neotoma is for when agents accumulate state across sessions and pipeline steps, and you need that state to be reproducible, traceable, and auditable.

Other modes

The same person operates in multiple modes. The tax differs; the architecture that removes it is the same.

[

Operating mode

Cross-tool operating

Every session starts from zero. You re-explain context, re-prompt corrections, re-establish what your agent already knew.

](/ai-native-operators)[

Building pipelines mode

Pipeline building

Entity resolution by inference. Corrections that don't stick. Memory regressions you absorb because the architecture won't.

](/agentic-systems-builders)

In infrastructure debugging mode, the tax is writing glue: checkpoint logic, custom diffing, state serialization. Neotoma removes that tax and gives you deterministic primitives to build on instead. The same architecture removes the tax in every mode.

Built because I hit every failure mode on this page while running a twelve-server agentic stack against a production monorepo.

Deep dive: Building structural barriers that incumbents can't copy

Install in 5 minutes · View architecture →