Neotoma

Connect via custom GPT with OpenAPI

Neotoma with ChatGPT · Full step-by-step setup: tunnel, Actions auth, instructions, OpenAPI paste.

Setup

You can also integrate Neotoma as an action inside a custom GPT. This approach uses the Neotoma API's OpenAPI spec directly and works with any ChatGPT plan that supports custom GPTs.

  1. Install a tunnel provider — Neotoma's --tunnel flag needs either ngrok or Cloudflare Tunnel (cloudflared) installed on your machine. Install one:
    # ngrok (via Homebrew)
    brew install ngrok
    ngrok config add-authtoken <YOUR_NGROK_TOKEN>
    
    # — or Cloudflare Tunnel —
    brew install cloudflared

    ngrok requires a free account and auth token from dashboard.ngrok.com. You can set the token as an environment variable instead of running ngrok config:

    # In your shell profile or .env
    export NGROK_AUTHTOKEN=<YOUR_NGROK_TOKEN>

    Cloudflare Tunnel works without an account for quick tunnels. If both providers are installed, Neotoma auto-detects which to use; pass --tunnel-provider ngrok or --tunnel-provider cloudflare to choose explicitly.
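    The auto-detection described above can be sketched roughly as follows. This is an illustrative guess at the selection logic (the function name and the preference for ngrok are assumptions), not Neotoma's actual implementation:

    ```python
    import shutil
    from typing import Optional

    def detect_tunnel_provider(explicit: Optional[str] = None) -> str:
        """Pick a tunnel provider the way a --tunnel flag might:
        honor an explicit --tunnel-provider choice, otherwise fall back
        to whichever binary is found on PATH."""
        if explicit is not None:
            return explicit  # e.g. from --tunnel-provider ngrok|cloudflare
        if shutil.which("ngrok"):
            return "ngrok"
        if shutil.which("cloudflared"):
            return "cloudflare"
        raise RuntimeError("No tunnel provider found: install ngrok or cloudflared")
    ```

    Passing --tunnel-provider explicitly skips the PATH lookup entirely, which is why it is the reliable choice when both binaries are installed.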

  2. Start the API server with a tunnel:
    neotoma api start --env prod --tunnel

    Add --background to run as a background process. Logs go to ~/.config/neotoma/logs_prod/api.log and can be viewed with neotoma api logs --env prod.

    neotoma api start --env prod --tunnel --background

  3. Create or edit a custom GPT — go to chatgpt.com/gpts/editor and open the Configure tab.
  4. Add a new action — under Actions, click “Create new action”, then click Import from URL. Enter your Neotoma API's actions spec URL (reduced spec that stays within GPT Actions operation limits):
    https://<tunnel-host>/openapi_actions.yaml
  5. Paste recommended custom GPT instructions into the GPT's Instructions field:
    You are an assistant that uses Neotoma MCP actions for memory persistence and retrieval.
    
    Execution order (mandatory every turn):
    1) Run bounded retrieval for entities implied by the user message.
       - Use retrieve_entity_by_identifier for names/emails/identifiers.
       - Use retrieve_entities for related types (task, contact, event, etc.).
    2) Run one store call for this turn's conversation + user message (+ implied entities).
    3) Only after steps (1) and (2), continue with other tools and compose the user response.
    
    Do not respond before completing steps (1) and (2).
    
    Storage recipes:
    
    Unified store (preferred one call):
    - store with entities:
      - index 0: { entity_type: "conversation", title? }
      - index 1: { entity_type: "agent_message", role: "user", content: "<exact message>", turn_key: "{conversation_id}:{turn_id}" }
      - index 2+: optional extracted entities implied by the message
    ...
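    The unified-store recipe above can be sketched as a small payload builder. The field names and index layout follow the recipe; the exact request shape expected by the store endpoint is an assumption for illustration:

    ```python
    def build_store_payload(conversation_id, turn_id, user_message,
                            extracted=None, title=None):
        """Assemble the entities list for one unified `store` call,
        following the index layout in the recipe above."""
        conversation = {"entity_type": "conversation"}  # index 0
        if title:
            conversation["title"] = title
        message = {  # index 1
            "entity_type": "agent_message",
            "role": "user",
            "content": user_message,  # exact user message, unmodified
            "turn_key": f"{conversation_id}:{turn_id}",  # one key per turn
        }
        # index 2+: optional extracted entities implied by the message
        return {"entities": [conversation, message, *(extracted or [])]}
    ```

    For example, build_store_payload("c1", 7, "Remind me to email Sam", extracted=[{"entity_type": "task"}]) yields a three-entity list whose message carries turn_key "c1:7".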
  6. Set the GPT name to “Neotoma” in the Name field (optional but recommended so the assistant identifies as Neotoma).
  7. Configure authentication — in the GPT Actions UI, set the auth type to API Key with the Bearer auth type; ChatGPT then sends Authorization: Bearer <token> with each request. Neotoma's OpenAPI spec declares bearerAuth, so no OAuth client ID or secret is needed. For reference, your API base URL is:
    https://<tunnel-host>
    In the GPT Action's API Key field, paste your bearer token only (e.g. from ACTIONS_BEARER_TOKEN or a key-derived token from your Neotoma server). If you use OAuth instead, paste these into the Authentication modal:

    Authorization URL: https://<tunnel-host>/mcp/oauth/authorize
    Token URL: https://<tunnel-host>/mcp/oauth/token
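    What the Actions runtime sends on each tool call can be sketched as a request builder. The host and token values here are placeholders; only the Authorization header format comes from the spec:

    ```python
    import os
    import urllib.request

    def build_request(host: str, path: str, token: str) -> urllib.request.Request:
        """Build (but do not send) a request carrying the bearer token,
        mirroring what the GPT Action does against the tunnel host."""
        return urllib.request.Request(
            f"https://{host}{path}",
            headers={"Authorization": f"Bearer {token}"},
        )

    # Placeholder host; in practice use your own tunnel host and token.
    req = build_request(
        "abc123.ngrok.io",
        "/openapi_actions.yaml",
        os.environ.get("ACTIONS_BEARER_TOKEN", "example-token"),
    )
    ```

    Sending req (e.g. via urllib.request.urlopen) should return the actions spec when the header is valid; a missing or wrong token yields an auth error instead.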
  8. Save and publish — the custom GPT now has full read/write access to your Neotoma memory graph via the API's REST endpoints.
