Neotoma vs database memory

A relational database (SQLite, Postgres) provides strong consistency, column types, and fast queries. But standard CRUD usage overwrites previous state on every UPDATE, losing history, provenance, and the ability to reconstruct past entity state. Neotoma uses a database as its storage backend but adds an observation/reducer architecture on top that delivers the memory guarantees CRUD alone does not.

Relational databases are the default tool for structured data. The question is not whether a database can store agent state - it can. The question is whether standard CRUD patterns deliver the guarantees that production agent memory requires.

Why not just use SQLite or Postgres for agent memory?

Database (CRUD)

A database with standard CRUD operations stores the current state of each entity as a row. UPDATEs overwrite previous values. There is no built-in audit trail, no observation log, and no mechanism to reconstruct historical state or detect conflicting writes from multiple agents.
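
A minimal sketch of the problem, using Python's built-in sqlite3 module (the table and values are hypothetical): after an UPDATE, the previous value is simply gone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO contacts (id, email) VALUES (1, 'old@example.com')")

# A second writer overwrites the row; the previous value is destroyed.
conn.execute("UPDATE contacts SET email = 'new@example.com' WHERE id = 1")

# Only current state survives. There is no way to ask
# "what was this contact's email yesterday?" or "who changed it?"
row = conn.execute("SELECT email FROM contacts WHERE id = 1").fetchone()
```

The database is consistent, but the question "what was true before?" is unanswerable from the data alone.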

Neotoma

Neotoma uses SQLite (or Postgres) as its storage backend but imposes an append-only observation log, deterministic reducers, schema validation, and field-level provenance on top. The database stores the data; the architecture delivers the guarantees.
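
The pattern can be illustrated in a few lines (this is a hedged sketch of the general observation/reducer idea, not Neotoma's actual API or schema): writers only ever append observations, and current state is a deterministic fold over the log that also records which observation last set each field.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    seq INTEGER PRIMARY KEY AUTOINCREMENT,
    entity TEXT, ts TEXT, source TEXT, payload TEXT)""")

def observe(entity, ts, source, payload):
    # Observations are only ever inserted, never updated or deleted.
    conn.execute(
        "INSERT INTO observations (entity, ts, source, payload) VALUES (?, ?, ?, ?)",
        (entity, ts, source, json.dumps(payload)))

def reduce_entity(entity):
    # Deterministic fold: replaying the same log always yields the same
    # state, and each field remembers which observation set it (provenance).
    state, provenance = {}, {}
    rows = conn.execute(
        "SELECT ts, source, payload FROM observations WHERE entity = ? "
        "ORDER BY ts, source", (entity,))
    for ts, source, payload in rows:
        for field, value in json.loads(payload).items():
            state[field] = value
            provenance[field] = (ts, source)
    return state, provenance

observe("contact:1", "2024-01-01", "crm", {"email": "old@example.com"})
observe("contact:1", "2024-01-02", "agent-a", {"email": "new@example.com", "name": "Ada"})

state, prov = reduce_entity("contact:1")
```

Note that nothing was lost: the "old" email is still in the log, and `prov` can say exactly which source asserted each current value.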

Guarantee comparison

| Property | Database (CRUD) | Neotoma |
| --- | --- | --- |
| Deterministic state evolution | No | Yes |
| Versioned history | No | Yes |
| Replayable timeline | No | Yes |
| Auditable change log | No | Yes |
| Schema constraints | Column types only | Yes, per observation |
| Silent mutation risk | Yes | No |
| Conflicting facts risk | Yes | No |
| Reproducible state reconstruction | No | Yes |
| Human inspectability (diffs/lineage) | No | Yes |
| Zero-setup onboarding | Yes | Yes |
| Semantic similarity search |  |  |
| Direct human editability | Yes | No |

When to use which

Use Database (CRUD) when

You have a single writer, do not need historical state reconstruction, and are comfortable with last-write-wins semantics. Standard application development patterns are sufficient for your use case.

Use Neotoma when

Multiple agents write to the same entities. You need to know what was true at any past moment, resolve conflicts deterministically, enforce schemas across writers, or prove provenance for audits. You want the guarantees without building the architecture yourself.

Common questions

Doesn't Neotoma use a database internally?

Yes. Neotoma uses SQLite locally and Postgres when configured. The guarantees do not come from the storage engine - they come from the architectural pattern on top: immutable observation log, deterministic reducers, schema registry, and field-level provenance tracking.
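
The payoff of the immutable log is that historical state is a query, not a reconstruction project. A hedged sketch (names and shapes are illustrative): "state as of time T" is the same fold, applied only to the log's prefix.

```python
# An append-only log of field-level observations (illustrative data).
observations = [
    {"ts": "2024-01-01", "field": "status", "value": "lead"},
    {"ts": "2024-02-01", "field": "status", "value": "customer"},
    {"ts": "2024-03-01", "field": "status", "value": "churned"},
]

def state_as_of(log, cutoff):
    # Fold only the observations at or before the cutoff; because the
    # fold is deterministic, every replay gives the same answer.
    state = {}
    for obs in sorted(log, key=lambda o: o["ts"]):
        if obs["ts"] <= cutoff:
            state[obs["field"]] = obs["value"]
    return state

feb_state = state_as_of(observations, "2024-02-15")
now_state = state_as_of(observations, "2024-12-31")
```

With CRUD, `feb_state` would be unrecoverable: the February value was overwritten in March.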

Can I add audit tables to get the same guarantees?

You can build audit triggers, history tables, and event logs on a database. If you also add deterministic merge logic, schema validation, provenance tracking, and idempotent observation handling, you will have rebuilt Neotoma's architecture. The question is whether that is the best use of your engineering time.
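
Here is what the first step of that DIY path looks like with SQLite (sketch; the schema is hypothetical): a trigger that copies old values into a history table. It captures overwrites, but note everything it does not give you: no merge logic, no schema validation across writers, no provenance, no idempotent ingestion.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE contacts_history (
    id INTEGER,
    old_email TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP);

-- Snapshot the old value before every overwrite.
CREATE TRIGGER contacts_audit BEFORE UPDATE ON contacts
BEGIN
    INSERT INTO contacts_history (id, old_email) VALUES (OLD.id, OLD.email);
END;
""")

conn.execute("INSERT INTO contacts VALUES (1, 'old@example.com')")
conn.execute("UPDATE contacts SET email = 'new@example.com' WHERE id = 1")

history = conn.execute("SELECT old_email FROM contacts_history").fetchall()
```

Each additional guarantee (deterministic reduction, schemas, provenance, idempotency) is another layer of this kind of machinery to design, build, and test.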

What about temporal databases or event-sourced schemas?

Temporal extensions (e.g. the temporal_tables extension for Postgres) and event-sourced patterns move in the right direction. Neotoma's contribution is the complete stack: content-addressed identity, deterministic reducers, schema-bound observations, and field-level provenance - integrated and tested as a single system.
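
Content-addressed identity, the piece temporal tables and most event stores lack, can be sketched as follows (an assumed design for illustration, not Neotoma's exact scheme): the observation's id is a hash of its canonicalized content, so re-ingesting the same observation is a no-op.

```python
import hashlib
import json

def observation_id(obs):
    # Canonicalize (sorted keys, no whitespace) so the same content
    # always hashes to the same id, regardless of key order.
    canonical = json.dumps(obs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

log = {}

def ingest(obs):
    oid = observation_id(obs)
    log.setdefault(oid, obs)  # duplicate delivery cannot double-count
    return oid

a = ingest({"entity": "contact:1", "field": "email", "value": "a@example.com"})
b = ingest({"value": "a@example.com", "field": "email", "entity": "contact:1"})
assert a == b and len(log) == 1
```

This is what makes observation handling idempotent: retries, replays, and duplicate pipelines all converge on a single log entry.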