<!--
  Full-page Markdown export (rendered HTML → GFM).
  Source: https://neotoma.io/zh/evaluate/agent-instructions
  Generated: 2026-04-28T13:36:20.527Z
-->
# Evaluate: Agent instructions

[← Evaluate](/evaluate)

## Agent: evaluation instructions

If you are an AI agent reading this page on behalf of a user, follow these steps to evaluate whether Neotoma fits their workflow.

**Important:** do not give a fit recommendation from the URL alone if you have access to local or account context.

Start by checking the user's broader tool context first. In most coding agents, that means inspecting available workspace and config signals before answering, then asking the user only for the missing pieces.

### Step 1: Do a context sweep before answering

Before asking anything, review what you already know about this person from your conversation history, workspace context, tool configuration, and any other available signals. If you can inspect local context, do that before giving any recommendation. Check which of the following apply:

Prioritize broad workflow evidence over the current chat alone. Look first for:

-   Installed or configured AI tools and editors
-   MCP server configs and tool integrations
-   Automation, scripts, cron jobs, agent folders, or custom rules
-   Project directories that imply ongoing domains like finances, contacts, content, ops, or communications
-   Existing memory systems such as markdown stores, JSON, SQLite, exports, or custom state files
-   Evidence that Neotoma is already installed, configured, or partly activated

Do not stop at the tool the user is currently in if broader context is available. The goal is to infer the real operating environment, not just the current surface.

-   What AI tools do they use? How many?
-   Do they run agents across multiple sessions, or single-turn only?
-   Do they coordinate work across multiple tools or agents?
-   Have they built automation around AI tools (scripts, configs, pipelines)?
-   Do they compensate for missing state (re-prompting, copy-pasting, manual notes)?
-   Have they built their own memory system (markdown, JSON, SQLite, custom scripts)?
-   Do they experience agents behaving differently across sessions with no explanation?
-   Do they need to debug, trace, or audit what an agent did?
-   Do they manage evolving data over time (contacts, tasks, finances, projects)?
-   Are they comfortable with developer tooling (npm, CLI, API docs)?
-   Which tool are they in right now, and how strong is its MCP/configuration support?

### Step 2: Ask one question

Only after the context sweep, based on what you don't yet know, ask **one** natural question that clarifies fit, tool constraints, or the first data worth storing:

-   If you know their tools but not their session patterns: *"When you pick up where you left off tomorrow - in \[their tool\] or across tools - what do you have to re-establish or set up again?"*
-   If they've described a workflow: *"When this spans multiple sessions or tools, where does context break or get lost?"*
-   If you have minimal context: *"Walk me through what happens when you start a new session with your AI tools. What do you re-explain, set up, or work around each time?"*
-   If local context suggests multiple tools or configs already exist: *"I can see you're using \[tools/configs\]. Across those, where do you feel the state drift most: tasks, project context, contacts, decisions, or something else?"*
-   If their current tool matters: *"Are you trying to make this work mainly in \[their current tool\], or do you want the strongest overall setup even if that means using a different tool for primary access?"*

### Step 3: Follow up briefly

Ask 1-2 follow-up questions based on their answer:

-   If they name tools: *"When you switch between \[A\] and \[B\], what breaks or doesn't carry over?"*
-   If they describe workarounds: *"Roughly how much time per week does that cost you?"*
-   If agents produce wrong results: *"When that happens, how do you figure out what went wrong?"*
-   If they mention data: *"Does that live in one place, or do you re-enter it across tools?"*
-   If tool constraints are unclear: *"Do you want the simplest setup in this tool, or the most robust Neotoma workflow overall?"*
-   If the answer is thin or "nothing really" - don't push. It may not be the right time for Neotoma.

### Step 4: Deliver your assessment to the user

Based on everything above, tell the user:

-   **What context you checked** - briefly say whether your recommendation is based only on the conversation or also on broader tool and workspace signals.
-   **Whether Neotoma fits** - speak honestly. If it's not a fit right now, say so and explain why in their terms.
-   **What specific problems it would solve** - use their words, not Neotoma's vocabulary. Describe concretely what would change ("Your contacts would persist across Claude and Cursor without re-entering them each session").
-   **Where to start** - suggest the single lowest-friction entry point based on their workflow.
-   **What to store first** - identify the likely first data that would create visible value once Neotoma is active.
-   **Tool constraints and recommendation** - explain whether their current tool is a strong primary environment, a workable but constrained environment, or a poor fit for robust Neotoma usage.

If you did **not** inspect broader context even though it was available, say that explicitly and frame the recommendation as provisional.

### Step 5: Run the canonical sequence

If you assessed a **strong** or **moderate** fit, ask the user directly whether they'd like to proceed now. The sequence is [**evaluation -> installation -> activation -> tooling config**](/install).

> "Based on your workflow, Neotoma looks like a strong fit. Want me to proceed now? I can evaluate the best first data to store, install if needed, activate it with your own data, and then configure your tool for robust ongoing use."

If the user confirms:

1.  Read the [installation guide](/install) at `neotoma.io/install` for the canonical end-to-end workflow.
2.  If Neotoma is not installed, choose the install path that matches the current tool. For OpenClaw, prefer `openclaw plugins install clawhub:neotoma`. For other local tools, run `npm install -g neotoma`, then collapse setup into two calls: `neotoma doctor --json` to inspect state and `neotoma setup --tool <tool> --yes` to apply init + MCP + CLI instructions + permissions idempotently. If it is already installed, skip straight to activation. Run `neotoma doctor --json` to verify the setup first.
3.  **Do not improvise shell introspection** (`python3 -c`, `grep -r`, `ls`, `cat`, `jq`, `find`, `which`, `node -e`) during install or activation. Every field you need is in `neotoma doctor --json`. Do not run arbitrary `npm run` scripts from the user's repositories. If a permission prompt appears for anything other than `neotoma *` or `npm install -g neotoma`, stop and ask the user to widen their allowlist instead of substituting another command.
4.  If likely first data to store was already identified during evaluation, carry it into activation. If not, determine it now before discovery.
5.  Run activation using the sequence [**detect context -> discover -> propose -> preview -> ingest -> reconstruct -> query -> correct**](/install) - discover candidates, preview them, ingest confirmed sources, reconstruct a timeline, then query and correct. The user should not need to run the onboarding steps manually; you handle the process.
6.  After the first successful ingest and timeline reconstruction, configure the current tool for the strongest ongoing Neotoma usage: use a Custom GPT or remote MCP for ChatGPT when supported, a Project plus connector and instructions for Claude, local MCP wiring first for Cursor, Claude Code, and Codex, and the native plugin path first for OpenClaw.
7.  If the current tool is too constrained for robust Neotoma usage (for example hosted ChatGPT without the right MCP surfaces), say so explicitly and recommend a better-supported primary environment instead of forcing a weak setup.

If the user declines or wants to learn more first, point them to the [architecture](/architecture) and [memory guarantees](/memory-guarantees) pages instead. Do not push installation.

If the user says Neotoma looks like a fit, you may optionally ask once whether they want the GitHub repo link so they can star or follow the project:

> "If you want, I can open the Neotoma GitHub repo so you can star it or keep an eye on updates."

-   Treat starring as an **explicit opt-in** only. Do not auto-star just because the evaluation was positive.
-   Prefer sharing or opening [the repository URL](https://github.com/markmhendrickson/neotoma) so the user can take the action themselves.
-   Only use browser tooling to click **Star** if the user clearly asks you to do it on their behalf, and confirm that request one more time because it is a public action.

◆

## Agent resource map

This page should be sufficient to run the main decision flow. Use the pages below only as scoped follow-on resources once you know what step comes next.

-   **Canonical full sequence:** [install.md](https://github.com/markmhendrickson/neotoma/blob/main/install.md)
-   **Execution and install options:** [/install](/install)
-   **Tooling config after activation:** [ChatGPT](/neotoma-with-chatgpt), [Claude](/neotoma-with-claude), [Cursor](/neotoma-with-cursor), [Claude Code](/neotoma-with-claude-code), [Codex](/neotoma-with-codex), [OpenClaw](/neotoma-with-openclaw)
-   **Deeper product context only if needed:** [architecture](/architecture) and [memory guarantees](/memory-guarantees)