
Operating mode

When you're operating across tools

Every session starts from zero. You re-explain context, re-prompt corrections, re-establish what your agent already knew.

One of three operational modes of the same person: someone building an operating system for their own AI agents. The other two are building pipelines mode and infrastructure debugging mode.

You enter this mode every time you open Claude, Cursor, ChatGPT, or Codex to get work done. Nothing your agent learns in one session is guaranteed to be there in the next, and when it gets something wrong, there's no way to correct it that sticks. You're the context janitor — the human sync layer between every tool. Neotoma gives you continuity so you can steer instead of drive.

Escaping

Context janitor — human sync layer between tools

Into

Operator with continuity — steering, not driving

Turn-by-turn prompting → review-and-steer

Tax you pay

Re-prompting, context re-establishment, manual cross-tool sync

What you get back

Attention, continuity, trust in your tools

Same question, different outcome

Without a state layer, agents return stale or wrong data. With Neotoma, every response reads from versioned, schema-bound state.

Tasks

without state layer

What are my open tasks?

No tasks found.

with state layer

What are my open tasks?

3 open tasks. Next due: submit proposal by Friday.

Task created in Claude, invisible in Cursor

You told Claude to track a deadline. Later you asked Cursor for open tasks. The deadline didn't exist because each tool maintains its own disposable context with no shared state.

People & contacts

without state layer

Email the latest draft to Priya.

Sent to priya@oldco.com.

with state layer

Email the latest draft to Priya.

Sent to priya@newco.io.

Stale contact, wrong email sent

You updated a contact's email in one conversation. The next session used the old address because provider memory silently compressed or discarded the correction.

Financial records

without state layer

Find the Whole Foods receipt from Feb 8.

No receipts found matching that query.

with state layer

Find the Whole Foods receipt from Feb 8.

Whole Foods, Feb 8 ($47.32). Stored from conversation on Feb 8.

Receipt stored, then lost

You shared a receipt in a chat session. Weeks later, you needed it for an expense report. The AI had no record of it; conversation-scoped memory doesn't persist documents.

Events & commitments

without state layer

Do I have anything due this week?

Nothing scheduled.

with state layer

Do I have anything due this week?

Follow up with Kenji re: proposal, due Thursday.

Commitment forgotten between sessions

You told your AI you'd follow up with a client by Thursday. By Wednesday, neither tool remembered; the commitment was locked in a prior session's expired context.
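
To make "versioned, schema-bound state" concrete, here is a rough sketch of the kind of record the Tasks example above could read from. The field names, shape, and values are illustrative assumptions, not Neotoma's actual schema.

```ts
// Illustrative only: this shape is an assumption, not Neotoma's schema.
interface TaskEntity {
  id: string;                // stable entity id shared across tools
  type: "task";
  title: string;
  status: "open" | "done";
  due?: string;              // ISO date
  version: number;           // bumped by each superseding observation
  source: { tool: string; conversationId: string; turn: number };
}

const tasks: TaskEntity[] = [
  {
    id: "task_01",
    type: "task",
    title: "Submit proposal",
    status: "open",
    due: "2025-02-14",       // placeholder date
    version: 2,
    source: { tool: "claude", conversationId: "conv_88", turn: 12 },
  },
];

// "What are my open tasks?" resolves against the same store in every tool.
const openTasks = tasks.filter((t) => t.status === "open");
console.log(`${openTasks.length} open task(s)`);
```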

Why this happens

× No persistent state across sessions; every AI conversation starts from zero

× Fragmented document sources scattered across email, drives, and screenshots

× Repetitive context-setting in every new AI interaction

× Lost commitments and forgotten action items between sessions

× Personal data (receipts, contacts, preferences) stored in provider memory with no control over retention or training use

Failure modes without a memory guarantee

Lost commitments across tools

Tool-to-tool context loss

Silent state drift over time

Weak correction loop; no way to fix what the agent got wrong

Personal data in opaque provider memory with no deletion control

Memory locked to one vendor's ecosystem

Every session starts from scratch

You explain the same project context, preferences, and constraints in every new conversation. Provider-side memory is conversation-scoped at best; it doesn't follow you across tools, and it silently drifts as models compress or discard context.

Commitments vanish between tools

You tell Claude to remind you about a deadline. Later you ask Cursor for your open tasks. The deadline doesn't exist because each tool maintains its own disposable context. Action items created in one session have no guarantee of surviving to the next.

No way to correct what the agent got wrong

When an AI tool extracts the wrong date, associates the wrong contact, or misidentifies an entity, there's no correction mechanism. You can't tell the system "that's wrong" in a way that persists. The mistake reappears next time.

Your personal data in someone else's memory

Your receipts, contacts, health information, and financial records live in provider-hosted memory. There is no transparency into retention, no guarantee against training use, and no delete button. The data most personal to you is stored in a system you don't control.

AI needs

What you need from your AI tools, and what current tools don't provide.

How Neotoma solves this

Neotoma removes the tax you pay re-explaining your world to every tool. Every conversation, entity, and commitment persists as versioned state. Switch between Claude, Cursor, and Codex without losing context. Correct once and the correction sticks.

[

Cross-tool persistent state via MCP

Every agent connected to Neotoma reads from and writes to the same memory substrate. Store a task in Claude and retrieve it from Cursor. State is shared, not siloed.

](/cross-platform)
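
A minimal sketch of that flow, assuming the Neotoma MCP server exposes tools along the lines of `store_entity` and `search_entities`. The tool names, arguments, and helper below are placeholders, not the documented interface or a specific SDK.

```ts
// Stand-in for an MCP tool call. In practice each agent's MCP client
// (Claude Desktop, Cursor, Codex, ...) routes the call to the same
// Neotoma server instead of a per-tool memory.
async function callNeotomaTool(
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  // Transport details omitted; this is a sketch, not an SDK API.
  return { name, args };
}

async function main() {
  // Session 1, in Claude: store a task.
  await callNeotomaTool("store_entity", {
    type: "task",
    title: "Submit proposal",
    due: "2025-02-14", // placeholder date
  });

  // Session 2, in Cursor: the same task is visible, because both clients
  // read and write one shared substrate rather than per-tool memory.
  await callNeotomaTool("search_entities", { type: "task", status: "open" });
}

main();
```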

Automatic entity extraction on every turn

The agent loop extracts people, tasks, events, preferences, and commitments from every conversation turn and persists them as versioned entities before responding.
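
A sketch of what "extract, persist, then respond" can look like inside one loop iteration. The helper names and the trivial extraction rule are stand-ins, not Neotoma's API; a real implementation would back them with a model call and a versioned store.

```ts
type ExtractedEntity = {
  kind: "person" | "task" | "event" | "preference" | "commitment";
  value: Record<string, unknown>;
};

// Stub extraction: a real version would call a model. Illustrative only.
async function extractEntities(turn: string): Promise<ExtractedEntity[]> {
  return /\bby (monday|tuesday|wednesday|thursday|friday)\b/i.test(turn)
    ? [{ kind: "task", value: { title: turn, status: "open" } }]
    : [];
}

// Stub store: a real version would write versioned entities.
const store: ExtractedEntity[] = [];
async function persist(entity: ExtractedEntity): Promise<void> {
  store.push(entity);
}

async function generateReply(turn: string): Promise<string> {
  return `Noted: ${turn}`;
}

// One turn of the agent loop: extract and persist first, then respond,
// so the reply never runs ahead of stored state.
async function handleTurn(turn: string): Promise<string> {
  const entities = await extractEntities(turn);
  for (const e of entities) await persist(e);
  return generateReply(turn);
}
```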

[

Corrections that stick

Submit a correction once. It creates a new observation that supersedes the incorrect value. Same question, same answer, every time. The correction traces back to when and why it was made; a sketch of the record shape follows these cards.

](/deterministic-state-evolution)

[

Full conversation replay

Every conversation and turn is stored with provenance. Inspect what was known at any point in time. Diff state across versions.

](/versioned-history)
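
A sketch of the record shape a correction could produce, and of why versioned history supports replay and diff. The `supersedes` pointer and the other fields are assumptions for illustration, not Neotoma's actual format.

```ts
// Illustrative observation records; the shape is an assumption.
interface Observation {
  entityId: string;
  field: string;
  value: string;
  version: number;
  supersedes?: number;      // version this observation replaces
  reason?: string;          // why the correction was made
  recordedAt: string;       // provenance: when it was recorded
}

const history: Observation[] = [
  { entityId: "contact_priya", field: "email", value: "priya@oldco.com",
    version: 1, recordedAt: "2025-01-10T09:00:00Z" },
  // A correction is a new observation, not an in-place edit, so replay
  // and diff can still see what was believed before it.
  { entityId: "contact_priya", field: "email", value: "priya@newco.io",
    version: 2, supersedes: 1, reason: "user correction",
    recordedAt: "2025-02-01T14:30:00Z" },
];

// The current value is the latest observation that nothing supersedes.
const superseded = new Set(history.map((o) => o.supersedes));
const current = history
  .filter((o) => !superseded.has(o.version))
  .sort((a, b) => b.version - a.version)[0];
console.log(current.value); // "priya@newco.io"
```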

What actually changes

Without persistent state, you're the driver on every turn. Every prompt carries the full weight of what came before because the system won't hold it for you.

With Neotoma underneath, the pattern shifts. The agent arrives at each session already knowing what it knew last time. Your role moves from composing detailed instructions to reviewing what the agent already knows and correcting when it's off.

Less typing, fewer prompts, shorter sessions that accomplish more. You stop thinking about whether the system remembers and start thinking about what you're actually trying to do. Not "let me re-explain my situation"; "here's what changed since yesterday."

Data types for durable memory

The entity types you'll store most often. A schema sketch follows the list.

conversation

Persistent chat sessions with full turn history across tools

message

Individual conversation turns with role, content, and extracted entities

task

Commitments, reminders, and action items with status and deadlines

note

Captured thoughts, observations, and reference material

contact

People and their details (email, role, organization, preferences)

event

Calendar events, deadlines, and temporal commitments

preference

User preferences and configuration that persist across sessions

receipt

Purchase records, invoices, and expense tracking
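
A compact sketch of a few of these types, assuming every stored entity shares a small envelope of id, version, and provenance fields. The exact fields are illustrative, not a published schema.

```ts
// Shared envelope; field names are illustrative assumptions.
interface Stored<TType extends string> {
  id: string;
  type: TType;
  version: number;
  source: { tool: string; conversationId: string };
}

interface Task extends Stored<"task"> {
  title: string;
  status: "open" | "done";
  due?: string;              // ISO date
}

interface Contact extends Stored<"contact"> {
  name: string;
  email?: string;
  organization?: string;
  preferences?: Record<string, string>;
}

interface Receipt extends Stored<"receipt"> {
  merchant: string;
  total: number;             // e.g. 47.32
  currency: string;
  date: string;              // ISO date
}
```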

When you don't need this

For one-off questions, quick summaries, or single-document analysis, your AI tools already work fine. Neotoma is for when you need what you told one tool to still be true when you open another, and for when you need to know that a correction you made actually stuck.

Other modes

The same person operates in multiple modes. The tax differs; the architecture that removes it is the same.

[

Building pipelines mode

Pipeline building

Entity resolution by inference. Corrections that don't stick. Memory regressions you absorb because the architecture won't.

](/agentic-systems-builders)

[

Infrastructure debugging mode

Infrastructure debugging

Two runs. Same inputs. Different state. No replay, no diff, no explanation.

](/ai-infrastructure-engineers)

In operating mode, the tax is re-prompting, context re-establishment, and manual cross-tool sync. Neotoma removes that tax and gives you back the attention and continuity it was consuming. The same architecture removes the tax in every mode.

Built by someone who runs every workflow (email, finance, content, tasks) through the same agentic stack.

Deep dive: Agentic retrieval infers. It doesn't guarantee.

Install in 5 minutes

View architecture →