# OpenClaw Context Integration
OpenClaw bills against whatever reaches the context window. System prompt, history, tool output, attachments, and compaction artifacts all count. Nocturnus trims that payload before the next run: first through a quick MCP registration, then fully through a Context Engine plugin.

This cost claim is an inference from OpenClaw's official docs: OpenClaw counts those inputs toward the context window, exposes token and cost reporting, and compacts when the window fills. A bloated payload therefore makes the next run more expensive and more likely to trigger compaction.
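To make the cost pressure concrete, here is a back-of-the-envelope sizing sketch. The roughly-4-characters-per-token ratio is a common approximation, not OpenClaw's actual tokenizer, and the payload strings are invented for illustration:

```typescript
// Rough payload sizing before a run. The ~4 chars/token ratio is an
// assumption, not an official OpenClaw tokenizer.
function estimateTokens(parts: string[]): number {
  const chars = parts.reduce((sum, p) => sum + p.length, 0);
  return Math.ceil(chars / 4);
}

// Everything below reaches the context window and is billed.
const payload = [
  "You are a support agent...",      // system prompt
  "40 turns of prior conversation",  // history
  "12 KB of raw tool output JSON",   // tool output
];
console.log(estimateTokens(payload));
```

Anything Nocturnus trims from `payload` is tokens the next run never pays for.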
This integration is for you if:

- You already run OpenClaw and want to cut the token-heavy part of the bill without changing models.
- You want OpenClaw's context lifecycle (`ingest`, `assemble`, `compact`) to route through NocturnusAI.
- You need a practical path: fast MCP registration now, full Context Engine control next.
## Path A: Register Nocturnus as an MCP Server (fastest)
OpenClaw can store outbound MCP server definitions via `openclaw mcp set`. This is the quickest way to make NocturnusAI MCP available to OpenClaw-managed runtimes.
```bash
# 1) Save an MCP server definition in OpenClaw config
openclaw mcp set nocturnus '{
  "url": "http://localhost:9300/mcp/sse",
  "headers": {
    "X-Database": "default",
    "X-Tenant-ID": "default"
  }
}'
```
```bash
# 2) Verify
openclaw mcp list
openclaw mcp show nocturnus --json
```

Note that `openclaw mcp set/list/show/unset` only manages config. It does not connect to the target server or validate reachability in real time.
Once attached in your runtime or surface, the Nocturnus tools (`tell`, `ask`, `teach`, `forget`, `context`, etc.) give OpenClaw a deterministic fact layer without changing your model provider setup.
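For orientation, this is what a call to one of those tools looks like on the wire, assuming Nocturnus exposes them as standard MCP tools (JSON-RPC 2.0 `tools/call`). The `teach` arguments here are an illustrative guess, not a published Nocturnus schema:

```typescript
// Sketch of an MCP tools/call request. The argument shape for "teach"
// is an assumption for illustration, not a documented Nocturnus contract.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Teach a durable fact once instead of re-sending it every turn.
const req = buildToolCall(1, "teach", {
  predicate: "priority_support",
  args: ["acme"],
});
console.log(JSON.stringify(req));
```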
## Path B: Build a Context Engine Plugin (deep integration)
OpenClaw's Context Engine API is the right place to make NocturnusAI your canonical context assembler. OpenClaw calls the engine during the `ingest`, `assemble`, and `compact` lifecycle stages.
### Plugin sketch
```typescript
import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";
import { delegateCompactionToRuntime } from "openclaw/plugin-sdk/core";

const NOCTURNUS = process.env.NOCTURNUS_URL ?? "http://localhost:9300";
const HEADERS = {
  "Content-Type": "application/json",
  "X-Database": process.env.NOCTURNUS_DB ?? "default",
  "X-Tenant-ID": process.env.NOCTURNUS_TENANT ?? "default",
};

export default definePluginEntry({
  id: "nocturnus-context",
  name: "Nocturnus Context Engine",
  register(api) {
    api.registerContextEngine("nocturnus-context", () => ({
      info: { id: "nocturnus-context", name: "Nocturnus Context", ownsCompaction: false },

      async ingest({ message }) {
        // Persist each turn into Nocturnus (choose a /tell or /context ingest strategy).
        await fetch(`${NOCTURNUS}/context`, {
          method: "POST",
          headers: HEADERS,
          body: JSON.stringify({ turns: [message.content], maxFacts: 1 }),
        });
        return { ingested: true };
      },

      async assemble({ messages, tokenBudget }) {
        // Ask Nocturnus for goal-driven, deduplicated context.
        const ctx = await fetch(`${NOCTURNUS}/context/optimize`, {
          method: "POST",
          headers: HEADERS,
          body: JSON.stringify({
            maxFacts: 25,
            sessionId: "openclaw-main",
            // Replace with your own goal extraction, e.g. derived from the
            // latest user message (messages[messages.length - 1]).
            goals: [{ predicate: "priority_support", args: ["acme"] }],
          }),
        }).then((r) => r.json());

        const contextText = ctx.entries
          .map((e: any) => `${e.predicate}(${e.args.join(", ")})`)
          .join("\n");

        return {
          messages: [
            ...messages.slice(-8),
            { role: "system", content: `Nocturnus facts:\n${contextText}` },
          ],
          // Rough chars-to-tokens estimate with a 20% safety margin.
          estimatedTokens: Math.ceil((contextText.length * 1.2) / 4),
          systemPromptAddition: "Prefer Nocturnus facts over guessed context.",
        };
      },

      async compact(params) {
        // Keep OpenClaw's native compaction behavior unless you own compaction.
        return await delegateCompactionToRuntime(params);
      },
    }));
  },
});
```

### Enable the engine in OpenClaw
```json
{
  "plugins": {
    "slots": {
      "contextEngine": "nocturnus-context"
    },
    "entries": {
      "nocturnus-context": {
        "enabled": true
      }
    }
  }
}
```

Validate and restart:

```bash
openclaw doctor
cat ~/.openclaw/openclaw.json | jq '.plugins.slots.contextEngine'
# restart the gateway after plugin/config updates
```

If the engine fails to load, OpenClaw falls back to `legacy`. Fix the plugin or switch the slot back to `"legacy"`.
## Suggested Lifecycle Mapping
| OpenClaw lifecycle | Nocturnus endpoint | Why |
|---|---|---|
| `ingest` | `POST /context` | Lightweight turn ingestion and fact extraction for future runs. |
| `assemble` | `POST /context/optimize` | Goal-driven, deduplicated context with contradiction handling. |
| `afterTurn` | `POST /context/diff` | Incremental updates between turns to reduce repeated payload. |
| `compact` | `POST /context/summary` | Observe KB pressure and compaction triggers; optionally delegate to the runtime. |
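The `afterTurn` row is the one not covered by the plugin sketch above, so here is a minimal sketch of it. The `/context/diff` request shape (`sessionId`, `sinceTurn`, `turns`) is an assumption; check the Nocturnus Context Management API for the real contract:

```typescript
// Hedged sketch of afterTurn -> POST /context/diff: ship only the turns
// Nocturnus has not yet seen, so the repeated payload shrinks to a delta.
// The request fields below are assumptions, not a documented schema.
interface DiffRequest {
  sessionId: string;
  sinceTurn: number;
  turns: string[];
}

function buildDiffRequest(
  sessionId: string,
  allTurns: string[],
  lastSyncedTurn: number,
): DiffRequest {
  return {
    sessionId,
    sinceTurn: lastSyncedTurn,
    // Everything before lastSyncedTurn was already ingested in a prior run.
    turns: allTurns.slice(lastSyncedTurn),
  };
}

const req = buildDiffRequest(
  "openclaw-main",
  ["hi", "what is the ticket status?", "escalate acme"],
  2, // two turns already synced
);
console.log(JSON.stringify(req));
```

On the server side this pairs with the incremental-update behavior the table describes: only the unsynced tail is re-sent each turn.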
## Research References
- OpenClaw Context (official)
- OpenClaw Token Use and Costs (official)
- OpenClaw Context Engine (official)
- OpenClaw MCP CLI + registry behavior (official)
- OpenClaw Plugin Architecture (context engine registration)
- Nocturnus Context Optimization guide
- Nocturnus Context Management API