Connect via custom GPT with OpenAPI
Neotoma with ChatGPT · Full step-by-step setup: tunnel, Actions auth, instructions, OpenAPI paste.
Setup
You can also integrate Neotoma as an action inside a custom GPT. This approach uses the Neotoma API's OpenAPI spec directly and works with any ChatGPT plan that supports custom GPTs.
- Install a tunnel provider — Neotoma's `--tunnel` flag needs either ngrok or Cloudflare Tunnel (`cloudflared`) installed on your machine. Install one:

  ```shell
  # ngrok (via Homebrew)
  brew install ngrok
  ngrok config add-authtoken <YOUR_NGROK_TOKEN>

  # — or Cloudflare Tunnel —
  brew install cloudflared
  ```

  ngrok requires a free account and auth token from dashboard.ngrok.com. You can set the token as an environment variable instead of running `ngrok config`:

  ```shell
  # In your shell profile or .env
  export NGROK_AUTHTOKEN=<YOUR_NGROK_TOKEN>
  ```

  Cloudflare Tunnel works without an account for quick tunnels. If both providers are installed, Neotoma auto-detects which to use; pass `--tunnel-provider ngrok` or `--tunnel-provider cloudflare` to choose explicitly.

- Start the API server with a tunnel:
  ```shell
  neotoma api start --env prod --tunnel
  ```

  Add `--background` to run as a background process. Logs go to `~/.config/neotoma/logs_prod/api.log` and can be viewed with `neotoma api logs --env prod`:

  ```shell
  neotoma api start --env prod --tunnel --background
  ```

  Note your tunnel host (e.g. `abc123.ngrok.io`); the URLs below use `https://` plus this host.

- Create or edit a custom GPT — go to chatgpt.com/gpts/editor and open the Configure tab.
- Add a new action — under Actions, click “Create new action”, then click Import from URL. Enter your Neotoma API's actions spec URL (a reduced spec that stays within GPT Actions operation limits):

  ```
  https://<tunnel-host>/openapi_actions.yaml
  ```

- Paste recommended custom GPT instructions into the GPT's Instructions field:
  ```
  You are an assistant that uses Neotoma MCP actions for memory persistence and retrieval.

  Execution order (mandatory every turn):
  1) Run bounded retrieval for entities implied by the user message.
     - Use retrieve_entity_by_identifier for names/emails/identifiers.
     - Use retrieve_entities for related types (task, contact, event, etc.).
  2) Run one store call for this turn's conversation + user message (+ implied entities).
  3) Only after steps (1) and (2), continue with other tools and compose the user response.
  Do not respond before completing steps (1) and (2).

  Storage recipes:
  Unified store (preferred one call):
  - store with entities:
    - index 0: { entity_type: "conversation", title? }
    - index 1: { entity_type: "agent_message", role: "user", content: "<exact message>", turn_key: "{conversation_id}:{turn_id}" }
    - index 2+: optional extracted entities implied by the message
  ...
  ```

- Set the GPT name to “Neotoma” in the Name field (optional but recommended so the assistant identifies as Neotoma).
- Configure authentication — set auth type to API Key (Bearer) in the GPT Actions UI; requests will then carry `Authorization: Bearer <token>`. Neotoma's OpenAPI spec includes `bearerAuth`, so no OAuth client ID or secret is needed. Your API base (from the tunnel host above), for reference:

  ```
  https://<tunnel-host>
  ```

  In the GPT Action's API Key field, paste your bearer token only (e.g. from `ACTIONS_BEARER_TOKEN` or a key-derived token from your Neotoma server). If you use OAuth instead, paste these into the Authentication modal:

  Authorization URL:
  ```
  https://<tunnel-host>/mcp/oauth/authorize
  ```

  Token URL:
  ```
  https://<tunnel-host>/mcp/oauth/token
  ```

- Save and publish — the custom GPT now has full read/write access to your Neotoma memory graph via the API's REST endpoints.
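Before saving, you can sanity-check the tunnel and token from a terminal. This is a sketch, not part of the official setup: the `TUNNEL_HOST` and `TOKEN` variables are placeholders you substitute, and it only uses the actions spec path and `Authorization: Bearer` header described above (whether the spec itself requires auth depends on your server configuration).

```shell
# Hypothetical values — substitute your real tunnel host and bearer token
TUNNEL_HOST=abc123.ngrok.io
TOKEN=<your-bearer-token>

# The reduced Actions spec should be reachable at the import URL
curl -fsS "https://$TUNNEL_HOST/openapi_actions.yaml" | head -n 5

# Authenticated request — the custom GPT sends this same header on every call
curl -fsS -H "Authorization: Bearer $TOKEN" \
  "https://$TUNNEL_HOST/openapi_actions.yaml" >/dev/null && echo "tunnel + token OK"
```

If the first `curl` fails, the tunnel is down or the host is wrong; if only the second fails, check the token you pasted into the API Key field.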