Neotoma with ChatGPT
ChatGPT offers conversation history and custom GPTs with persistent instructions. Neotoma adds structured, deterministic memory with entity resolution and cross-tool continuity, accessible from ChatGPT and every other tool in your stack.
Looking for OpenAI Codex (the coding agent in sandboxed tasks)? See Neotoma with Codex.
What ChatGPT provides
- Conversation history with search across past chats
- Memory (saved memories and chat history references that persist across conversations on all plans)
- Custom GPTs with persistent system instructions and app integrations
- MCP support via developer mode (full read/write tool access for Business and Enterprise accounts)
What ChatGPT doesn't handle
- Structured entity resolution with typed schemas. ChatGPT's memory stores preference-level facts, not schema-bound entities
- Deterministic state reconstruction from observation history
- Cross-tool access: memory stays inside ChatGPT's ecosystem
- Full audit trail and provenance for every stored fact
Deterministic guarantees Neotoma provides
- Structured entities with canonical IDs that persist across all sessions
- Deterministic state evolution: same observations always produce the same result
- Full provenance and audit trail for every stored fact
- Cross-tool continuity: memory is shared with Claude, Claude Code, Cursor, and Codex
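The deterministic-evolution guarantee above can be illustrated with a minimal sketch. This is a hypothetical model, not Neotoma's actual API or storage format: state is reconstructed by folding an ordered observation log, so replaying the same log always yields the same result.

```python
from functools import reduce

# Hypothetical observation log: (entity_id, field, value) tuples in order.
observations = [
    ("person:sarah-chen", "email", "sarah@oldco.com"),
    ("person:sarah-chen", "role", "CTO"),
    ("person:sarah-chen", "email", "sarah@newstartup.io"),  # later observation wins
]

def apply(state, obs):
    """Fold one observation into the entity state (last-write-wins)."""
    entity_id, field, value = obs
    entity = dict(state.get(entity_id, {}))
    entity[field] = value
    return {**state, entity_id: entity}

def reconstruct(log):
    """Deterministically rebuild all entity state from the observation history."""
    return reduce(apply, log, {})

# Same log in, same state out -- replay is always reproducible.
assert reconstruct(observations) == reconstruct(observations)
print(reconstruct(observations)["person:sarah-chen"]["email"])
```

Because state is a pure function of the log, any tool that replays the same observations arrives at the same entities, which is what makes cross-tool continuity possible.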
Using them together
Keep ChatGPT's memory on for conversational preferences. Neotoma handles structured entities and cross-tool state. Both are active simultaneously with no conflict.
Getting started
Copy this prompt into ChatGPT to have it read the evaluation page, inspect your tool, workspace, and configuration context, then judge whether Neotoma fits your real workflow and what to persist first.
Read https://neotoma.io/evaluate. Before you answer, inspect any broader tool, workspace, and configuration context you can access so you assess my real workflow rather than just this chat. Then assess whether Neotoma fits my workflow, what data would be highest value to persist first, and what I should do next. Once Neotoma has been evaluated, installed if needed, and activated with the first data worth storing, choose the strongest ChatGPT path:
If you are using hosted ChatGPT without the plan/features needed for robust MCP usage, say so explicitly and recommend a better-supported primary environment such as Cursor, Claude, Claude Code, or Codex.
ChatGPT documentation
- Memory FAQ (saved memories and chat history)
- Building MCP servers (connecting tools to ChatGPT)
- Developer mode (MCP apps in ChatGPT)
- Apps in custom GPTs (integrating tools into custom GPTs)
- ChatGPT integration instructions (copy-paste instructions and Actions OpenAPI spec)
Before and after: ChatGPT with Neotoma
“Continue where we left off yesterday.”
Without Neotoma: Resumes based on a thread from two weeks ago.
With Neotoma: Resuming yesterday’s thread on the migration plan. 3 open tasks remaining.
“What did I commit to with Sarah last week?”
Without Neotoma: No commitments found.
With Neotoma: You committed to sending the architecture doc by Friday. Sarah’s email updated Mar 28.
“How much did we spend on cloud hosting last month?”
Without Neotoma: No hosting expenses found.
With Neotoma: $847 across AWS and Vercel, up 12% from February.
After you connect
Once Neotoma is running, try these starter commands in ChatGPT to see cross-session memory in action:
Store a contact
“Remember that Sarah Chen's email is sarah@newstartup.io — she's the CTO at NewStartup.”
Store a task
“I need to send the architecture doc to Sarah by Friday.”
Recall across sessions
“What do I know about Sarah? What did I commit to doing for her?”
Known limitations
MCP tool calls may time out for very large stores (100+ entities in one call).
Workaround: Batch into groups of 20–50 entities per store call.
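The batching workaround can be sketched as below. The `store_entities` call name is hypothetical; substitute whatever store call your MCP setup exposes.

```python
def chunked(items, size=25):
    """Yield successive fixed-size batches from a list of entities."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 120 entities in a single store call would risk a timeout.
entities = [{"id": f"task:{n}"} for n in range(120)]

for batch in chunked(entities, size=25):
    # store_entities(batch)  # hypothetical: one MCP store call per batch
    print(f"storing batch of {len(batch)} entities")
```

A batch size of 25 sits in the middle of the recommended 20–50 range; tune it down if individual entities are large.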
Neotoma runs locally — data is not synced across machines by default.
Workaround: Use the remote HTTP transport or deploy Neotoma as a remote MCP server for multi-machine access.
Schema evolution is additive. Removing fields requires a major version bump.
Workaround: Plan schemas with future fields in mind. Use flexible entity types for exploratory data.
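Additive evolution can be sketched as follows (the schema shapes here are hypothetical, not Neotoma's actual schema format): adding an optional field leaves existing records valid, while removing a required field would not.

```python
# v1 schema: required fields only.
CONTACT_V1 = {"required": ["name", "email"], "optional": []}

# v2 adds an optional field -- an additive change, so v1 records still validate.
CONTACT_V2 = {"required": ["name", "email"], "optional": ["phone"]}

def validates(record, schema):
    """A record is valid if every required field is present."""
    return all(field in record for field in schema["required"])

old_record = {"name": "Sarah Chen", "email": "sarah@newstartup.io"}
assert validates(old_record, CONTACT_V1)
assert validates(old_record, CONTACT_V2)  # additive change: no migration needed

# Removing a required field (e.g. dropping "email") would invalidate
# existing records, which is why it requires a major version bump.
```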
Start with the evaluation guide; then see the install guide for more options, the MCP reference for full setup, the CLI reference for terminal usage, and the agent instructions for behavioral details.