SDKs & Agents
Official client libraries for Python and TypeScript — with context optimization that cuts token costs by 97%. Plus native integrations for LangChain, CrewAI, AutoGen, LangGraph, OpenAI Agents SDK, and Anthropic SDK.
Python SDK
Full async and sync clients with LangChain tool wrappers. Published on PyPI — no source build needed.
Installation
```bash
pip install nocturnusai

# With framework integrations
pip install nocturnusai[langchain]
pip install nocturnusai[crewai]
pip install nocturnusai[autogen]
pip install nocturnusai[langgraph]
pip install nocturnusai[openai-agents]

# Install all integrations
pip install nocturnusai[all]
```

Sync Client
```python
from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Store facts
    client.assert_fact("parent", ["alice", "bob"])
    client.assert_fact("parent", ["bob", "charlie"])

    # Teach a rule
    client.assert_rule(
        head={"predicate": "grandparent", "args": ["?x", "?z"]},
        body=[
            {"predicate": "parent", "args": ["?x", "?y"]},
            {"predicate": "parent", "args": ["?y", "?z"]},
        ],
    )

    # Infer
    results = client.infer("grandparent", ["?who", "charlie"])
    print(results)  # [grandparent(alice, charlie)]
```

Async Client
```python
import asyncio

from nocturnusai import NocturnusAIClient

async def main():
    async with NocturnusAIClient("http://localhost:9300") as client:
        await client.assert_fact("human", ["socrates"])
        results = await client.infer("mortal", ["?who"])
        print(results)

asyncio.run(main())
```

Context Optimization
The Python SDK includes full support for the Context Management Engine — the feature that cuts token costs by 97%:
```python
from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Goal-driven context optimization
    ctx = client.optimize_context(
        goals=[{"predicate": "eligible_for_sla", "args": ["acme_corp"]}],
        max_facts=25,
    )
    print(f"{len(ctx.entries)} facts, {ctx.token_estimate} tokens")
    # 15 facts, 820 tokens — vs. 150K unoptimized

    # Incremental diffs for multi-turn conversations
    diff = client.diff_context(session_id="session-42")
    # Only sends what changed since last call
```

LangChain Integration
Drop NocturnusAI into any LangChain agent in one line. Four pre-built tools map directly to the core API:
| Tool Name | Maps To | Description |
|---|---|---|
| nocturnusai_assert | /tell | Store a fact |
| nocturnusai_query | /query | Pattern match stored facts |
| nocturnusai_infer | /ask | Run logical inference |
| nocturnusai_context | /memory/context | Get salience-ranked context window |
Quick Start
```python
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
```

With a LangChain Agent
```python
from langchain_anthropic import ChatAnthropic
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(model="claude-sonnet-4-6")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an assistant with access to a verified knowledge base."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({
    "input": "Alice is Bob's parent. Bob is Charlie's parent. Who is Charlie's grandparent?"
})
print(result["output"])
# "Alice is Charlie's grandparent."
```

CrewAI Integration
Five BaseTool subclasses and a Storage backend for CrewAI agents. Each tool has a Pydantic input schema for structured argument validation.
| Tool | Purpose |
|---|---|
| NocturnusAITellTool | Assert a fact into the knowledge base |
| NocturnusAIAskTool | Run logical inference queries |
| NocturnusAITeachTool | Define logical rules |
| NocturnusAIForgetTool | Retract facts |
| NocturnusAIContextTool | Get salience-ranked context window |
Quick Start
```python
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.crewai import get_nocturnusai_tools, NocturnusAIStorage

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
storage = NocturnusAIStorage(client=client)
```

With a CrewAI Agent
```python
from crewai import Agent, Task, Crew

reasoner = Agent(
    role="Knowledge Reasoner",
    goal="Store facts and answer questions using logical inference",
    backstory="You are an expert at structured reasoning.",
    tools=tools,
)
task = Task(
    description="Alice is Bob's parent. Bob is Charlie's parent. "
                "Who is Charlie's grandparent?",
    agent=reasoner,
    expected_output="The grandparent relationship",
)
crew = Crew(agents=[reasoner], tasks=[task])
result = crew.kickoff()
```

AutoGen Integration
Five plain Python tool functions and an async Memory protocol implementation for AutoGen agents.
Quick Start
```python
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.autogen import get_nocturnusai_tools, NocturnusAIMemory

client = SyncNocturnusAIClient("http://localhost:9300")

# Get tool functions: tell, ask, teach, forget, context
tools = get_nocturnusai_tools(client)

# Or use as agent memory
memory = NocturnusAIMemory(client=client)
```

Tool Functions
The five tool functions work with or without autogen-agentchat installed:
| Function | Purpose |
|---|---|
| nocturnusai_tell | Assert a fact (predicate + JSON args) |
| nocturnusai_ask | Query via inference (use ?-prefixed variables) |
| nocturnusai_teach | Define a logical rule (JSON head + body) |
| nocturnusai_forget | Retract a fact |
| nocturnusai_context | Get salience-ranked context window |
Memory Protocol
NocturnusAIMemory implements the AutoGen Memory interface (add, query, update_context, clear, close), storing messages as NocturnusAI facts with salience-ranked retrieval.
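To picture that mapping, here is a minimal sketch of how a memory `add` call might serialize a chat message into a fact. The `agent_memory` predicate, the payload layout, and the stub client are assumptions for illustration, not the SDK's documented encoding:

```python
import json

class StubClient:
    """Hypothetical stand-in for SyncNocturnusAIClient; just records asserted facts."""
    def __init__(self):
        self.facts = []

    def assert_fact(self, predicate, args):
        self.facts.append((predicate, args))

def memory_add(client, session_id, message):
    # Encode one chat message as a fact: agent_memory(session_id, payload_json)
    payload = json.dumps({"role": message["role"], "content": message["content"]})
    client.assert_fact("agent_memory", [session_id, payload])

client = StubClient()
memory_add(client, "session-42", {"role": "user", "content": "Socrates is human."})
print(client.facts[0][0])  # agent_memory
```

Storing messages as ordinary facts is what lets the engine apply the same salience ranking to conversation history that it applies to any other knowledge.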
LangGraph Integration
A checkpoint saver that persists LangGraph graph state as NocturnusAI facts, using scopes for thread isolation.
Quick Start
```python
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langgraph import NocturnusAICheckpointSaver

client = SyncNocturnusAIClient("http://localhost:9300")
saver = NocturnusAICheckpointSaver(client=client)

# Use with a LangGraph compiled graph
app = graph.compile(checkpointer=saver)
```

How It Works
Each checkpoint is stored as a fact with predicate lg_checkpoint and args [thread_id, state_json, metadata_json]. LangGraph threads map to NocturnusAI scopes for isolation. The saver implements put, get_tuple, and list for full checkpoint lifecycle management.
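The round trip can be pictured with plain `json`. The helper functions below are an illustrative sketch of the encoding described above, not SDK API:

```python
import json

def pack_checkpoint(thread_id, state, metadata):
    # Lay out a checkpoint as lg_checkpoint(thread_id, state_json, metadata_json)
    return ("lg_checkpoint", [thread_id, json.dumps(state), json.dumps(metadata)])

def unpack_checkpoint(fact):
    # Recover the original structures from the fact's args
    predicate, (thread_id, state_json, metadata_json) = fact
    assert predicate == "lg_checkpoint"
    return thread_id, json.loads(state_json), json.loads(metadata_json)

fact = pack_checkpoint("thread-1", {"step": 3, "messages": ["hi"]}, {"source": "loop"})
thread_id, state, metadata = unpack_checkpoint(fact)
print(thread_id, state["step"])  # thread-1 3
```

Because each thread_id maps to its own scope, `list` and `get_tuple` only ever see checkpoints belonging to the requesting thread.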
OpenAI Agents SDK Integration
Five tool functions that work with or without the openai-agents package. When the package is installed, functions are automatically decorated with @function_tool.
Quick Start
```python
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.openai_agents import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)

# Use with an OpenAI Agent
from agents import Agent

agent = Agent(
    name="reasoner",
    instructions="You are a knowledge reasoning agent.",
    tools=tools,
)
```

Anthropic SDK Integration
JSON schema tool definitions and a dispatcher function for use with the Anthropic Messages API. Zero framework dependencies — works with the raw anthropic SDK.
Quick Start
```python
import anthropic

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.anthropic_tools import get_nocturnusai_tool_definitions, handle_tool_call

client = SyncNocturnusAIClient("http://localhost:9300")
anthropic_client = anthropic.Anthropic()
tools = get_nocturnusai_tool_definitions()

# Pass tool definitions to Claude
response = anthropic_client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Alice likes Bob. Who likes Bob?"}],
)

# Handle tool calls from Claude's response
for block in response.content:
    if block.type == "tool_use":
        result = handle_tool_call(client, block.name, block.input)
```

Tool Definitions
get_nocturnusai_tool_definitions() returns five Anthropic-compatible tool definitions with full JSON schemas: nocturnusai_tell, nocturnusai_ask, nocturnusai_teach, nocturnusai_forget, and nocturnusai_context. The handle_tool_call() dispatcher routes each tool name to the corresponding client method.
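For reference, an Anthropic-compatible tool definition is a plain dict with `name`, `description`, and `input_schema` fields, per the Anthropic Messages API. The description and schema details below are an illustrative sketch, not the SDK's exact output:

```python
# Approximate shape of the nocturnusai_tell definition (illustrative)
tell_tool = {
    "name": "nocturnusai_tell",
    "description": "Assert a fact into the knowledge base.",
    "input_schema": {
        "type": "object",
        "properties": {
            "predicate": {"type": "string"},
            "args": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["predicate", "args"],
    },
}

# The fields Claude needs to decide when and how to call the tool
print(sorted(tell_tool))  # ['description', 'input_schema', 'name']
```

Because the definitions are plain JSON-serializable dicts, they can be passed straight into `messages.create(tools=...)` with no adapter layer.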
TypeScript SDK
Zero-dependency typed client. Works in Node.js 18+ and modern browsers. Published on npm.
Installation
```bash
npm install nocturnusai-sdk
```

Usage
```typescript
import { NocturnusAIClient } from 'nocturnusai-sdk';

const client = new NocturnusAIClient({
  baseUrl: 'http://localhost:9300',
  database: 'mydb',
  tenantId: 'default',
});

// Assert facts
await client.assertFact('parent', ['alice', 'bob']);
await client.assertFact('parent', ['bob', 'charlie']);

// Assert a rule
await client.assertRule(
  { predicate: 'grandparent', args: ['?x', '?z'] },
  [
    { predicate: 'parent', args: ['?x', '?y'] },
    { predicate: 'parent', args: ['?y', '?z'] },
  ]
);

// Infer
const results = await client.infer('grandparent', ['?who', 'charlie']);
console.log(results);
```

MCP Client
The SDK also includes an MCP client for JSON-RPC 2.0 tool calls:
```typescript
import { NocturnusAIMCPClient } from 'nocturnusai-sdk';

const mcp = new NocturnusAIMCPClient({
  baseUrl: 'http://localhost:9300',
});

// Initialize MCP session
await mcp.initialize();

// Discover available tools
const tools = await mcp.listTools();

// Call a tool
const result = await mcp.callTool('ask', {
  predicate: 'grandparent',
  args: ['?who', 'charlie'],
});
```