Framework Integrations

Drop NocturnusAI into your agent stack in one line. Cut token costs by up to 97% with native integrations for the most popular AI frameworks.

Installation

pip install nocturnusai                    # core SDK
pip install nocturnusai[langchain]         # + LangChain tools
pip install nocturnusai[crewai]            # + CrewAI BaseTool subclasses
pip install nocturnusai[autogen]           # + AutoGen tool functions + Memory
pip install nocturnusai[langgraph]         # + LangGraph checkpoint saver
pip install nocturnusai[openai-agents]     # + OpenAI Agents SDK tools
pip install nocturnusai[all]               # everything
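
After installing, a quick import check confirms the SDK is available (it relies only on the SyncNocturnusAIClient import used in every example below):

python -c "from nocturnusai import SyncNocturnusAIClient"   # no output means the install worked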

LangChain

Four pre-built tools that plug directly into any LangChain agent. Assert facts, query, infer, and get salience-ranked context.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
# Pass tools to any LangChain agent
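
For example, the tools can be handed to any tool-calling LangChain agent. A minimal sketch, assuming LangGraph's prebuilt create_react_agent and an OpenAI chat model (any LangChain-compatible model works):

# Sketch: wiring the tools into a prebuilt ReAct agent
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(llm, tools)    # tools from get_nocturnusai_tools(client)
result = agent.invoke({"messages": [("user", "What do we know about acme_corp?")]})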

Full LangChain docs →


CrewAI

Five BaseTool subclasses with Pydantic input schemas, plus a Storage backend for crew-level knowledge persistence.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.crewai import get_nocturnusai_tools, NocturnusAIStorage

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
storage = NocturnusAIStorage(client=client)
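
A minimal sketch of wiring the tools into a crew; the Agent, Task, and Crew arguments are standard CrewAI, while how storage plugs into crew memory is covered in the full CrewAI docs linked below:

# Sketch: a CrewAI agent using the NocturnusAI tools
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Knowledge analyst",
    goal="Answer questions from the shared fact base",
    backstory="Keeps the crew's knowledge consistent.",
    tools=tools,                      # tools from get_nocturnusai_tools(client)
)
task = Task(
    description="Summarize what we know about acme_corp.",
    expected_output="A short factual summary.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
# crew.kickoff()
# storage can back crew-level knowledge; see the CrewAI docs link below for wiring.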

Full CrewAI docs →


AutoGen

Five plain Python tool functions and an async Memory protocol implementation. Works with or without autogen-agentchat.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.autogen import get_nocturnusai_tools, NocturnusAIMemory

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
memory = NocturnusAIMemory(client=client)
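
A minimal sketch, assuming the autogen-agentchat 0.4+ API where AssistantAgent accepts tools and a list of Memory implementations; swap in the model client for your provider:

# Sketch: attaching the tools and memory to an AutoGen AssistantAgent
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
assistant = AssistantAgent(
    name="reasoner",
    model_client=model_client,
    tools=tools,          # tools from get_nocturnusai_tools(client)
    memory=[memory],      # NocturnusAIMemory implements the Memory protocol
)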

Full AutoGen docs →


LangGraph

Checkpoint saver that persists graph state as NocturnusAI facts, mapping each LangGraph thread to its own scope for isolation.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langgraph import NocturnusAICheckpointSaver

client = SyncNocturnusAIClient("http://localhost:9300")
saver = NocturnusAICheckpointSaver(client=client)
app = graph.compile(checkpointer=saver)
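
Because checkpoints are keyed by thread, pass a thread_id in the run config as usual; a minimal sketch (initial_state stands in for whatever input your graph expects):

config = {"configurable": {"thread_id": "user-42"}}   # one scope per thread
result = app.invoke(initial_state, config=config)     # state is checkpointed as facts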

Full LangGraph docs →


OpenAI Agents SDK

Five tool functions, auto-decorated with @function_tool when the openai-agents package is installed; they fall back to plain functions otherwise.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.openai_agents import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
# Agent(name="reasoner", tools=tools)
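
A minimal sketch of running an agent with these tools through the Agents SDK's standard Agent and Runner entry points:

# Sketch: an Agents SDK agent with the NocturnusAI tools
from agents import Agent, Runner

agent = Agent(
    name="reasoner",
    instructions="Use the NocturnusAI tools to check facts before answering.",
    tools=tools,
)
result = Runner.run_sync(agent, "Is acme_corp eligible for an SLA?")
print(result.final_output)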

Full OpenAI Agents docs →


Anthropic SDK

JSON schema tool definitions and a dispatcher for the Anthropic Messages API. Zero framework dependencies.

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.anthropic_tools import get_nocturnusai_tool_definitions, handle_tool_call

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tool_definitions()
# response = anthropic.messages.create(tools=tools, ...)
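
A minimal sketch of the dispatch loop. The Messages API call is standard Anthropic SDK; the exact handle_tool_call signature shown here is an assumption, so check the full integration docs linked below:

# Sketch: let Claude call the tools, then dispatch tool_use blocks
import anthropic

ac = anthropic.Anthropic()
response = ac.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,                                   # from get_nocturnusai_tool_definitions()
    messages=[{"role": "user", "content": "What do we know about acme_corp?"}],
)
if response.stop_reason == "tool_use":
    for block in response.content:
        if block.type == "tool_use":
            # assumed signature; see the integration docs for the exact one
            result = handle_tool_call(client, block.name, block.input)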

Full Anthropic docs →


Any Framework

Don't see your framework? NocturnusAI works with any Python or TypeScript agent via HTTP API, MCP protocol, or the direct SDK.


Context Optimization Across All Frameworks

Every integration above gives your agent access to the Context Management Engine. Instead of stuffing your entire knowledge base into every LLM prompt, call optimize_context() with your current goals to get only the facts that matter: up to 97% fewer tokens billed per request, regardless of framework.

# Works with any framework above
from nocturnusai import SyncNocturnusAIClient

client = SyncNocturnusAIClient("http://localhost:9300")
ctx = client.optimize_context(
    goals=[{"predicate": "eligible_for_sla", "args": ["acme_corp"]}],
    max_facts=25
)
# ctx.entries → 15 facts, 820 tokens, full provenance
# vs. 150K tokens without optimization

What's Next?

SDKs →

Full Python and TypeScript SDK documentation

MCP Integration →

Connect via Model Context Protocol in 30 seconds

Quickstart →

Install and run your first query in 5 minutes