<!--
Full-page Markdown export (rendered HTML → GFM).
Source: https://neotoma.io/install/manual
Generated: 2026-04-28T13:34:21.271Z
-->
# Manual install
If you prefer not to use the installer, run these commands yourself on the host machine:
```
# 1. Install globally (required for agent harnesses — avoids per-repo PATH prompts)
npm install -g neotoma
# 2. Inspect state in one call (no shell introspection needed)
neotoma doctor --json
# 3. Run the full idempotent setup for the current harness
neotoma setup --tool claude-code --yes # or: --tool cursor | --tool codex | --tool openclaw
```
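Before moving on, it helps to confirm the global install actually resolved on PATH. A minimal sketch — the `on_path` helper is our own illustration, not part of neotoma:

```
# Check whether a binary resolves on PATH in the current (possibly
# non-interactive) shell; works for neotoma or any other `npm install -g` package.
on_path() {
  command -v "$1" >/dev/null 2>&1 && echo "found: $(command -v "$1")" || echo "missing: $1"
}
on_path neotoma
```

If this prints `missing: neotoma`, see the troubleshooting note below about shell version managers.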
### After installation
Once installation is complete, move on to activation. If evaluation already identified the first data to store, carry that forward; otherwise determine it now, then run the flow **detect context → discover → propose → preview → ingest → reconstruct → query → correct**. Once the first value is visible, configure the current tool for robust ongoing use.
**Troubleshooting: `which neotoma` fails after install?** Your version manager (mise, nvm, fnm) is probably not active in the agent's non-interactive shell. Run `neotoma doctor --json` and read `.neotoma.path_fix_hint` for the exact activation line. Common fixes to add to `~/.zshenv` or `~/.zshrc`:
```
mise: eval "$(mise activate zsh)"
nvm: source "$NVM_DIR/nvm.sh" # in ~/.zshenv for non-interactive shells
fnm: eval "$(fnm env)"
```
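When automating the fix, the hint can be read straight from the doctor output. A minimal sketch, assuming the `.neotoma.path_fix_hint` field referenced above; a canned payload stands in for a live `neotoma doctor --json` call, and the real schema may differ beyond that one field:

```
# Simulated doctor output (illustration only).
sample='{"neotoma": {"path_fix_hint": "eval \"$(mise activate zsh)\""}}'
# Extract the activation line; python3 stands in for jq to avoid an extra dependency.
hint=$(echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["neotoma"]["path_fix_hint"])')
echo "$hint"
```

The printed line is what you would append to `~/.zshenv` or `~/.zshrc`.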
1. **Preference selection** - if evaluation already established the priority data types and onboarding mode, carry them forward. Otherwise choose which data types matter most (project files, chat transcripts, meeting notes, financial docs, code context, custom paths) and pick a mode: quick win, guided, or power user.
2. **Discovery** - continue from any candidate data already identified during evaluation. If that work has not happened yet, the agent scans shallowly based on your preferences, groups results into domains (not file counts), and checks for chat transcript exports and platform memory.
3. **Propose and confirm** - for each domain the agent explains why it was selected, what entities it likely contains, and what timeline value it could unlock. You confirm per-folder or per-file before anything is stored.
4. **Ingest and reconstruct** - confirmed files are ingested and the agent reconstructs the strongest timeline with provenance - every event traced to a specific source file.
5. **Query and correct** - the agent surfaces a follow-up query against the reconstructed timeline and offers next actions, then asks whether the timeline is accurate and supports corrections (wrong merge, wrong date, source exclusion).
## Try it now
Once Neotoma is running, try these prompts in any connected tool to see it working:
**Store a contact**
“Remember that Sarah Chen's email is sarah@newstartup.io. She started there in March.”
Then in a different session or tool: “What's Sarah Chen's email?”
**Track a commitment**
“I told Nick I'd send the architecture doc by Friday.”
Later: “What did I commit to this week?”
**Test a correction**
“Actually, Sarah's email changed to sarah@acme.co.”
Then: “What's Sarah's email? Show me the history.” Both old and new are preserved with timestamps.
## Start the API server
The API server provides the HTTP interface that MCP and the CLI communicate through.
```
# Run API server (development)
neotoma api start --env dev
# Run API server (production)
neotoma api start --env prod
```
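Before connecting MCP or the CLI, you may want to wait until the API answers. A hedged sketch — the `wait_for_api` helper, the `/health` path, and the port are our assumptions, not documented neotoma behavior:

```
# Poll a URL until it responds or the attempt budget runs out.
wait_for_api() {
  local url="$1" tries="${2:-5}"
  for i in $(seq 1 "$tries"); do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "api: up"
      return 0
    fi
    sleep 1
  done
  echo "api: not reachable after $tries attempt(s)"
  return 1
}
# Example (URL is a placeholder; substitute whatever your dev server binds to):
# wait_for_api "http://127.0.0.1:8080/health"
```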
## Connect MCP
Add Neotoma to your MCP client configuration (Cursor, Claude, or Codex). This config connects to Neotoma over stdio:
```
{
  "mcpServers": {
    "neotoma-dev": {
      "command": "/absolute/path/to/neotoma/scripts/run_neotoma_mcp_stdio.sh"
    },
    "neotoma": {
      "command": "/absolute/path/to/neotoma/scripts/run_neotoma_mcp_stdio_prod.sh"
    }
  }
}
```
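A broken config file is a common reason an MCP server silently fails to appear, so it can be worth validating the JSON before restarting the client. A minimal sketch, using an inline copy of the config above (the paths remain placeholders you must replace):

```
# Inline copy of the client config for validation purposes.
cfg='{"mcpServers": {"neotoma": {"command": "/absolute/path/to/neotoma/scripts/run_neotoma_mcp_stdio_prod.sh"}}}'
# Confirm it parses as JSON, then list the configured server commands.
echo "$cfg" | python3 -m json.tool >/dev/null && echo "config: valid JSON"
echo "$cfg" | python3 -c 'import json,sys; [print(s["command"]) for s in json.load(sys.stdin)["mcpServers"].values()]'
```

You would also want to check that each printed script path exists and is executable (`test -x`) on your machine.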
**After adding MCP config:** restart your AI tool (Claude Code, Cursor, Claude Desktop, etc.) so it picks up the new server. MCP servers are loaded at startup.