<!--
  Full-page Markdown export (rendered HTML → GFM).
  Source: https://neotoma.io/ar/neotoma-with-chatgpt-connect-custom-gpt
  Generated: 2026-04-28T13:35:36.459Z
-->
# Connect via custom GPT with OpenAPI

[Neotoma with ChatGPT](/neotoma-with-chatgpt) · Full step-by-step setup: tunnel, Actions auth, instructions, OpenAPI paste.

Looking for remote MCP (developer mode) instead? See [Connect ChatGPT via remote MCP](/neotoma-with-chatgpt-connect-remote-mcp).

## Setup

You can also integrate Neotoma as an action inside a [custom GPT](https://help.openai.com/en/articles/20001049-apps-in-custom-gpts-for-business-accounts-beta). This approach uses the Neotoma API's OpenAPI spec directly and works with any ChatGPT plan that supports custom GPTs.

1.  **Install a tunnel provider:** Neotoma's `--tunnel` flag needs either [ngrok](https://ngrok.com/download) or [Cloudflare Tunnel (cloudflared)](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/) installed on your machine. Install one:
    
    ```shell
    # ngrok (via Homebrew)
    brew install ngrok
    ngrok config add-authtoken <YOUR_NGROK_TOKEN>
    
    # or Cloudflare Tunnel
    brew install cloudflared
    ```
    
    ngrok requires a free account and auth token from [dashboard.ngrok.com](https://dashboard.ngrok.com/get-started/your-authtoken). You can set the token as an environment variable instead of running `ngrok config`:
    
    ```shell
    # In your shell profile or .env
    export NGROK_AUTHTOKEN=<YOUR_NGROK_TOKEN>
    ```
    
    Cloudflare Tunnel works without an account for quick tunnels. If both providers are installed, Neotoma auto-detects which to use; pass `--tunnel-provider ngrok` or `--tunnel-provider cloudflare` to choose explicitly.
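    For example, to force a specific provider when both are installed, the selection flag can be combined with the start command from the next step (a sketch assuming the flag is passed to `neotoma api start`):
    
    ```shell
    # Force ngrok (requires the auth token configured above)
    neotoma api start --env prod --tunnel --tunnel-provider ngrok
    
    # Or force a Cloudflare quick tunnel (no account needed)
    neotoma api start --env prod --tunnel --tunnel-provider cloudflare
    ```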
    
2.  **Start the API server with a tunnel**:
    
    ```shell
    neotoma api start --env prod --tunnel
    ```
    
    Add `--background` to run as a background process. Logs go to `~/.config/neotoma/logs_prod/api.log` and can be viewed with `neotoma api logs --env prod`.
    
    ```shell
    neotoma api start --env prod --tunnel --background
    ```
    
    The copyable URLs in the steps below use `https://<tunnel-host>` as a placeholder; replace `<tunnel-host>` with your tunnel's hostname (e.g. `abc123.ngrok.io`).
    
3.  **Create or edit a custom GPT:** go to [chatgpt.com/gpts/editor](https://chatgpt.com/gpts/editor) and open the **Configure** tab.
4.  **Add a new action:** under Actions, click “Create new action”, then click **Import from URL**. Enter your Neotoma API's actions spec URL (reduced spec that stays within GPT Actions operation limits):
    
    ```
    https://<tunnel-host>/openapi_actions.yaml
    ```
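    Before importing, you can verify the spec is reachable through the tunnel (a quick optional check, not part of the official setup; substitute your actual tunnel host):
    
    ```shell
    # Should print the first lines of the reduced OpenAPI spec
    curl -s https://<tunnel-host>/openapi_actions.yaml | head -n 5
    ```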
    
    If **Import from URL** is unavailable, you can instead paste the spec's contents directly into the action's schema field.
    
5.  **Paste recommended custom GPT instructions** into the GPT's **Instructions** field:
    
    ```
    You are an assistant that uses Neotoma MCP actions for memory persistence and retrieval.
    
    Execution order (mandatory every turn):
    1) Run bounded retrieval for entities implied by the user message.
       - Use retrieve_entity_by_identifier for names/emails/identifiers.
       - Use retrieve_entities for related types (task, contact, event, etc.).
    2) Run one store call for this turn's conversation + user message (+ implied entities).
    3) Only after steps (1) and (2), continue with other tools and compose the user response.
    
    Do not respond before completing steps (1) and (2).
    
    Storage recipes:
    
    Unified store (preferred one call):
    - store with entities:
      - index 0: { entity_type: "conversation", title? }
      - index 1: { entity_type: "conversation_message", role: "user", sender_kind: "user", content: "<exact message>", turn_key: "{conversation_id}:{turn_id}" }
      - index 2+: optional extracted entities implied by the message
    ...
    ```
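    The unified store recipe above can be sketched as a payload. The field names follow the instructions text; the IDs, titles, and message content are hypothetical:
    
    ```python
    # Hypothetical IDs for illustration; real values come from the conversation.
    conversation_id = "conv-123"
    turn_id = "7"
    
    payload = {
        "entities": [
            # index 0: the conversation container
            {"entity_type": "conversation", "title": "Planning chat"},
            # index 1: this turn's user message, keyed by conversation and turn
            {
                "entity_type": "conversation_message",
                "role": "user",
                "sender_kind": "user",
                "content": "Remind me to email Dana tomorrow.",
                "turn_key": f"{conversation_id}:{turn_id}",
            },
            # index 2+: optional extracted entities implied by the message
            {"entity_type": "task", "title": "Email Dana"},
        ]
    }
    print(payload["entities"][1]["turn_key"])  # → conv-123:7
    ```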
    
    
6.  **Set the GPT name to “Neotoma”** in the **Name** field (optional but recommended so the assistant identifies as Neotoma).
7.  **Configure authentication:** set auth type to **API Key** (Bearer) in the GPT Actions UI and pass `Authorization: Bearer <token>`. Neotoma's OpenAPI spec includes `bearerAuth`, so no OAuth client ID or secret is needed. Your API base, for reference:
    
    ```
    https://<tunnel-host>
    ```
    
    In the GPT Action's **API Key** field, paste your bearer token only (e.g. from `ACTIONS_BEARER_TOKEN` or a key-derived token from your Neotoma server). If you use **OAuth** instead, paste these into the Authentication modal:
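    To sanity-check the Bearer scheme outside ChatGPT, you can build the same authenticated request yourself. This is a sketch using Python's standard library; the host and token values are hypothetical placeholders:
    
    ```python
    import urllib.request
    
    # Hypothetical values; substitute your real tunnel host and bearer token.
    TUNNEL_HOST = "abc123.ngrok.io"
    BEARER_TOKEN = "example-token"
    
    def build_request(path: str) -> urllib.request.Request:
        # Same header the GPT Action sends: Authorization: Bearer <token>
        return urllib.request.Request(
            f"https://{TUNNEL_HOST}{path}",
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        )
    
    req = build_request("/openapi_actions.yaml")
    print(req.get_header("Authorization"))  # → Bearer example-token
    ```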
    
    Authorization URL
    
    ```
    https://<tunnel-host>/mcp/oauth/authorize
    ```
    
    Token URL
    
    ```
    https://<tunnel-host>/mcp/oauth/token
    ```
    
    
8.  **Save and publish:** the custom GPT now has full read/write access to your Neotoma memory graph via the API's REST endpoints.

[Back to Neotoma with ChatGPT](/neotoma-with-chatgpt) · [Install guide](/install) · [MCP reference](/mcp)