MCP server

Status: live. Works with Claude Code, Claude Desktop, Cursor, Windsurf, and any other host that speaks the Model Context Protocol.

Install

pip install 'oramemory[mcp]'

The [mcp] extra pulls in the official mcp Python SDK and installs a new console script:

oramemory-mcp      # stdio MCP server — run by your AI host, not by you

Configure

Each host loads MCP servers from a JSON config file. Point it at the oramemory-mcp command, with your API key in the env block.

Claude Code / Claude Desktop

~/.config/claude/claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "oramemory": {
      "command": "oramemory-mcp",
      "env": {
        "AM_API_KEY":  "om_live_...",
        "AM_AGENT_ID": "claude-code"
      }
    }
  }
}

Cursor

~/.cursor/mcp.json — same shape:

{
  "mcpServers": {
    "oramemory": {
      "command": "oramemory-mcp",
      "env": { "AM_API_KEY": "om_live_...", "AM_AGENT_ID": "cursor" }
    }
  }
}

Windsurf / other MCP hosts

Any host that spawns an MCP server as a subprocess over stdio works with the same command + env pattern.
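
As a sketch, the generic config shape looks like this (key names can vary slightly between hosts, and the agent id here is just an example — check your host's MCP documentation):

```json
{
  "mcpServers": {
    "oramemory": {
      "command": "oramemory-mcp",
      "env": { "AM_API_KEY": "om_live_...", "AM_AGENT_ID": "windsurf" }
    }
  }
}
```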

Tools exposed

Tool             What it does
memory_read      Semantic search — call BEFORE starting non-trivial work
memory_list      Paginated list (newest first)
memory_get       Fetch one memory by id
memory_add       Store a memory (decision, preference, fact)
memory_update    Edit fields — a content change is auto-versioned
memory_delete    Remove a memory
memory_history   Full version log of one memory
memory_usage     Plan, ops this month, memories stored
am_help_json     Tool manifest for agent self-discovery

Environment variables

Var          Purpose
AM_API_KEY   Required. Your om_live_… key.
AM_AGENT_ID  Default agent id attached to every tool call (e.g. claude-code). Fallback: mcp-host.
AM_BASE_URL  Override the API base URL (for self-hosted deployments).
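
The precedence described above can be sketched as follows — a minimal illustration of the documented defaults, not the server's actual code:

```python
import os

def require_api_key() -> str:
    # AM_API_KEY has no default — fail fast if it is missing.
    key = os.environ.get("AM_API_KEY")
    if not key:
        raise RuntimeError("AM_API_KEY is required")
    return key

def resolve_agent_id() -> str:
    # AM_AGENT_ID wins when set; otherwise use the documented fallback.
    return os.environ.get("AM_AGENT_ID", "mcp-host")

os.environ.pop("AM_AGENT_ID", None)
assert resolve_agent_id() == "mcp-host"

os.environ["AM_AGENT_ID"] = "claude-code"
assert resolve_agent_id() == "claude-code"
```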

Suggested agent prompt

Once the MCP server is wired up, give your AI a standing instruction:

You have access to a persistent memory tool (oramemory).

  • Before starting any non-trivial task, call memory_read with a 1-line summary of the task.
  • After every significant decision, call memory_add with one atomic fact. Tag with decision, bug-fix, convention, etc. Set importance between 0.5 and 0.9.
  • Don't dump long paragraphs — one fact per memory_add.
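
For example, an atomic memory_add call following those rules might look like this. The argument names (content, tags, importance) are illustrative assumptions, not confirmed field names — call am_help_json for the authoritative schema:

```python
# Hypothetical memory_add arguments — field names are illustrative
# assumptions; am_help_json returns the authoritative tool schema.
good = {
    "content": "This repo uses pytest, not unittest, for new tests.",
    "tags": ["convention"],
    "importance": 0.7,
}
bad = {
    "content": "Long rambling paragraph about everything discussed today...",
    "tags": [],
    "importance": 1.0,  # outside the suggested 0.5–0.9 band
}

def follows_prompt_rules(p: dict) -> bool:
    # One short, tagged fact with importance in the suggested 0.5–0.9 band.
    return (
        bool(p["tags"])
        and len(p["content"]) <= 200
        and 0.5 <= p["importance"] <= 0.9
    )

assert follows_prompt_rules(good)
assert not follows_prompt_rules(bad)
```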

Hosted SSE endpoint

For zero-install use, connect your MCP host directly to the hosted SSE endpoint — no pip install, no subprocess. Every request is authenticated by the Authorization header you put in the config.

URL:       https://oramemory.com/mcp/sse
Transport: sse
Auth:      Authorization: Bearer om_live_...
Agent:     X-OraMemory-Agent: <optional agent id, e.g. "claude-code">

Claude Desktop / Claude Code (SSE mode)

{
  "mcpServers": {
    "oramemory": {
      "transport": "sse",
      "url": "https://oramemory.com/mcp/sse",
      "headers": {
        "Authorization":      "Bearer om_live_...",
        "X-OraMemory-Agent":  "claude-code"
      }
    }
  }
}

Cursor / Windsurf

Same shape as above — any MCP host that supports SSE transport with custom headers.

Security notes

  • Each SSE connection is scoped to one API key — tenants can never see each other's memories.
  • https://oramemory.com/mcp/healthz is a public health probe (no auth). Everything else requires a valid bearer key.
  • DNS-rebinding protection is on: the server only accepts requests with Host: oramemory.com.
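
As background, DNS-rebinding defenses work by rejecting requests whose Host header doesn't match the expected origin. A minimal sketch of the idea (an illustration, not the server's actual implementation):

```python
ALLOWED_HOST = "oramemory.com"

def host_ok(headers: dict) -> bool:
    # In a rebinding attack, the attacker's domain resolves to this server's
    # IP, so the request arrives with an unexpected Host header — refuse it.
    host = headers.get("Host", "").split(":")[0].lower()
    return host == ALLOWED_HOST

assert host_ok({"Host": "oramemory.com"})
assert not host_ok({"Host": "attacker.example"})
```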

stdio vs hosted SSE — which to pick?

Situation                                            Pick
Single developer laptop, full control                oramemory-mcp (stdio)
Shared machine where installing Python is painful    Hosted SSE
Corporate host that can't spawn subprocesses         Hosted SSE
Fully offline / air-gapped work                      oramemory-mcp (stdio) + self-hosted API

Both transports expose the same 9 tools with identical behavior.