Peer sync lets two Neotoma instances exchange selected entity state directly, without a central hub. Each side stores immutable sync-originated observations and the reducer computes snapshots locally, so peers remain independent stores of truth that converge through signed webhooks. Available since v0.12.0.
Use peer sync when:

- A team operator wants to mirror selected entity types (issues, product feedback, runbook notes) from a personal instance to a shared "central" instance.
- Two self-hosted instances need to stay in step without a hosted broker.
- A hosted operator instance wants to receive guest-tagged entities from many personal instances over a signed webhook.
MCP tools
- `add_peer({ peer_id, peer_name, peer_url, direction, entity_types, sync_scope?, auth_method, ... })` — registers a remote instance. Returns a generated `shared_secret` when `auth_method: "shared_secret"` and no secret is provided.
- `list_peers()` — enumerates registered peers and their `last_sync_at`.
- `get_peer_status({ peer_id })` — peer snapshot plus `remote_health` (a `/health` probe and a semver compatibility verdict, the same gate the `neotoma compat` CLI uses).
- `remove_peer({ peer_id })` — deactivates a peer.
- `sync_peer({ peer_id, limit? })` — bounded outbound batch (default 200, max 500). Requires `NEOTOMA_PUBLIC_BASE_URL` and `NEOTOMA_LOCAL_PEER_ID`.
- `resolve_sync_conflict({ entity_id, strategy, sender_peer_url?, guest_access_token? })` — `prefer_remote` re-fetches the remote snapshot; `prefer_local` retains local state; `manual` flags the entity for operator review.
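The compatibility verdict in `get_peer_status` comes from the semver rules in `src/semver_compat.ts`, which are not reproduced here. A minimal sketch under the common semver convention (matching majors are compatible, and 0.x releases additionally require matching minors); `isCompatible` and `parseSemver` are illustrative names, not the real API:

```typescript
// Hypothetical compatibility gate. The real policy lives in
// src/semver_compat.ts and may differ from this sketch.
function parseSemver(v: string): [number, number, number] {
  const m = v.trim().replace(/^v/, "").match(/^(\d+)\.(\d+)\.(\d+)/);
  if (!m) throw new Error(`not a semver string: ${v}`);
  return [Number(m[1]), Number(m[2]), Number(m[3])];
}

function isCompatible(local: string, remote: string): boolean {
  const [lMajor, lMinor] = parseSemver(local);
  const [rMajor, rMinor] = parseSemver(remote);
  // Same major required; in the 0.x range, minor bumps are breaking too.
  return lMajor === rMajor && (lMajor !== 0 || lMinor === rMinor);
}

console.log(isCompatible("0.12.0", "0.12.3")); // true
console.log(isCompatible("0.12.0", "0.13.0")); // false
```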
HTTP routes
Environment
- `NEOTOMA_PUBLIC_BASE_URL` — public base URL of this API (no trailing slash). Used as `sender_peer_url` on outbound sync so the peer can fetch entity snapshots back.
- `NEOTOMA_LOCAL_PEER_ID` — stable id this instance presents as `sender_peer_id`. Must match the value the peer recorded as their `peer_id` for you.
- `NEOTOMA_HOSTED_MODE=1` — operator opt-in for hosted / multi-tenant deployments. When set, the inbound webhook handler rejects any `sender_peer_url` whose hostname resolves to a private, loopback, or link-local address. This blocks a hostile peer from naming `http://127.0.0.1` to coerce snapshot fetches against the host's loopback.
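The `NEOTOMA_HOSTED_MODE` guard can be sketched as follows. This checks only literal IPv4 hosts; the real handler also resolves hostnames through DNS before deciding, and the function names here are illustrative:

```typescript
// Illustrative SSRF guard for hosted mode: reject a sender_peer_url whose
// host is a literal private, loopback, or link-local IPv4 address.
function isBlockedIPv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false; // not an IPv4 literal; real code resolves DNS first
  const a = Number(m[1]);
  const b = Number(m[2]);
  return (
    a === 127 ||                         // loopback 127.0.0.0/8
    a === 10 ||                          // private 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // private 172.16.0.0/12
    (a === 192 && b === 168) ||          // private 192.168.0.0/16
    (a === 169 && b === 254)             // link-local 169.254.0.0/16
  );
}

function senderUrlAllowed(senderPeerUrl: string): boolean {
  return !isBlockedIPv4(new URL(senderPeerUrl).hostname);
}

console.log(senderUrlAllowed("http://127.0.0.1:4000"));      // false
console.log(senderUrlAllowed("https://central.example.com")); // true
```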
Example
Register a peer and run a bounded outbound sync:
```shell
# On the personal instance
neotoma peers add \
  --peer-id central \
  --name "Team Central" \
  --url https://central.example.com \
  --types issue,product_feedback \
  --target-user-id <central_user_uuid>

neotoma peers status central
neotoma peers sync central
```
The personal instance batches up to 200 changed observations (default limit), POSTs them to https://central.example.com/sync/webhook with X-Neotoma-Sync-Signature-256, and writes the resulting last_sync_at watermark back to its own peer_config row. Replays of the same signed payload are deduplicated by idempotency key so a flaky network does not produce duplicate observations.
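One plausible shape for producing and checking `X-Neotoma-Sync-Signature-256` is HMAC-SHA256 over the raw request body, keyed by the peer's `shared_secret`. The `sha256=<hex>` header format and the function names below are assumptions, not the documented wire format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumed scheme: hex-encoded HMAC-SHA256 of the raw JSON body, keyed by
// the shared_secret issued at add_peer time.
function signBody(body: string, sharedSecret: string): string {
  return "sha256=" + createHmac("sha256", sharedSecret).update(body).digest("hex");
}

// Receiver side: constant-time comparison to avoid leaking the signature
// through timing differences.
function verifySignature(body: string, header: string, sharedSecret: string): boolean {
  const expected = Buffer.from(signBody(body, sharedSecret));
  const got = Buffer.from(header);
  return expected.length === got.length && timingSafeEqual(expected, got);
}

const sig = signBody('{"observations":[]}', "s3cret");
console.log(verifySignature('{"observations":[]}', sig, "s3cret")); // true
```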
Selective sync
Set sync_scope: "tagged" on the peer and stamp eligible entities with sync_peers: ["central"]. Only those entities are delivered:
```json
{
  "entity_type": "issue",
  "title": "Reset MCP token after rotation",
  "sync_peers": ["central"]
}
```
Loop prevention via subscriptions
For steady-state propagation, create a substrate subscription with sync_peer_id: "central". The bridge skips events whose source_peer_id matches the destination, so replicated rows do not bounce back. Replicated writes carry observation_source: "sync" and rank below local sources in the reducer's default tie-breaking.
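The bridge's skip rule is a single origin comparison. A minimal sketch, assuming events carry the `source_peer_id` stamped on replicated rows (`shouldForward` is an illustrative name):

```typescript
interface SubstrateEvent {
  entity_id: string;
  source_peer_id?: string; // set on rows that arrived via sync
}

// Illustrative loop-prevention check: forward an event to a subscription's
// sync_peer_id only if it did not originate from that same peer, so
// replicated rows never bounce back to their sender.
function shouldForward(event: SubstrateEvent, destinationPeerId: string): boolean {
  return event.source_peer_id !== destinationPeerId;
}

console.log(shouldForward({ entity_id: "e1", source_peer_id: "central" }, "central")); // false
console.log(shouldForward({ entity_id: "e2" }, "central"));                            // true
```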
Full reference
docs/subsystems/peer_sync.md covers the full inbound / outbound paths (sync_webhook_inbound.ts, sync_webhook_outbound.ts, peer_sync_batch.ts, full_sync.ts, conflict_resolver.ts), the semver compatibility rules in src/semver_compat.ts, and the seeded peer_config schema.
See also: substrate subscriptions, security hardening, API reference, and MCP reference.