# AI-Assisted System Introspection: AimDB Meets the Model Context Protocol
A distributed system built on AimDB is, by construction, self-describing. Every record declares its schema via `SchemaType`, its buffer semantics at registration time, its producer-consumer topology in the dependency graph and its observability signals via `Observable`. This isn't instrumentation bolted on after the fact — it's intrinsic to the data contracts.
The question was always: who benefits from all that metadata? Dashboards and CLIs, obviously. But the real leverage comes when you give it to an AI assistant that can reason about the system as a whole.
That's what aimdb-mcp does. It's a Model Context Protocol server that exposes a running AimDB instance — records, buffers, graph topology, live values and architecture state — to any MCP-capable AI client. Connect it to your editor and ask questions in natural language. The AI navigates the same typed contracts that your MCU firmware, edge gateway and cloud service use.
## Why MCP, Not a Custom API
The Model Context Protocol is an open standard for AI-tool integration. It defines a JSON-RPC interface over stdio that any compatible client — VS Code with GitHub Copilot, Claude Code or a custom agent — can connect to. Building on MCP means AimDB's introspection layer works with the AI tools your team already uses, without a proprietary integration.
The server advertises its capabilities (tools, resources, prompts) and the client calls them as needed. No SDK. No client library. Just a binary that speaks JSON-RPC:
```json
{
  "servers": {
    "aimdb": {
      "type": "stdio",
      "command": "aimdb-mcp",
      "env": { "AIMDB_WORKSPACE": "${workspaceFolder}" }
    }
  }
}
```
Drop that in .vscode/mcp.json and every AI interaction in your editor can see your running AimDB system.
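Under the hood, every interaction is a plain JSON-RPC 2.0 message on stdin/stdout. As a minimal sketch (the framing follows the MCP stdio transport's newline-delimited messages; `list_records` is one of the server's tools), a client request looks roughly like this:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a newline-delimited
    JSON-RPC 2.0 message, as sent over the stdio transport."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg) + "\n"

# A client asking the aimdb-mcp server for every record:
print(mcp_tool_call(1, "list_records", {}), end="")
```

The point is that there is nothing proprietary in the wire format: any JSON-RPC-speaking client can drive the server.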
## What the AI Can See
The MCP server exposes three layers of system knowledge, each building on the one below.
### Layer 1: Discovery and Records
The starting point is always: what's running?
`discover_instances` scans for running AimDB instances on Unix sockets and returns their versions, protocol capabilities and permission sets. From there, `list_records` returns every record in the instance — name, buffer type, capacity, producer and consumer counts, timestamps and whether the record is writable.
This means an AI assistant can answer questions like "What records exist on the edge gateway?" or "Which records haven't been updated in the last hour?" without any pre-configuration. The system describes itself.
`get_record` retrieves the current value as JSON. `query_schema` infers the JSON Schema from live values — field names, types and example data. Together, they let the AI understand not just that a record exists, but what it contains and what shape it has.
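To make the inference step concrete, here is a toy sketch of what `query_schema`-style inference amounts to: mapping a live JSON value to field names and types. The real inference lives inside `aimdb-mcp`; this standalone version only illustrates the idea.

```python
import json

def infer_schema(value):
    """Toy JSON Schema inference: derive field names and types
    from a single live value, the way query_schema does conceptually."""
    if isinstance(value, bool):          # bool before number: bool is an int subclass
        return {"type": "boolean"}
    if isinstance(value, (int, float)):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if isinstance(value, list):
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    return {"type": "null"}

# A hypothetical live record value:
live = {"station": "alpha", "temperature_c": 21.4, "online": True}
print(json.dumps(infer_schema(live), indent=2))
```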
### Layer 2: Graph Introspection
AimDB's dependency graph — the DAG of sources, transforms and taps — is the system's architecture. The MCP server exposes it through three tools:
| Tool | What it returns |
|---|---|
| `graph_nodes` | Every record with its origin (source, link, transform, passive), buffer config, tap count, connector status |
| `graph_edges` | Directed edges showing data flow between records |
| `graph_topo_order` | Topological ordering — the sequence in which records are initialized |
An AI assistant with access to the graph can answer structural questions: "What feeds into the forecast validation record?", "Which records have no consumers?", "Show me the data flow from the KNX connector to the WebSocket output." These are the questions that, in a traditional system, require someone to trace code paths manually across services.
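Questions like "which records have no consumers?" reduce to simple graph queries over the `graph_nodes` and `graph_edges` results. A sketch with hypothetical record names and a simplified edge shape (the real tools return richer JSON):

```python
# Hypothetical graph data: an edge (src, dst) means dst consumes src.
nodes = ["knx.raw", "climate.sensor", "forecast.validated", "ws.out"]
edges = [
    ("knx.raw", "climate.sensor"),
    ("climate.sensor", "forecast.validated"),
    ("forecast.validated", "ws.out"),
]

def sinks(nodes, edges):
    """Records with no outgoing edge — nothing downstream consumes them."""
    producers = {src for src, _ in edges}
    return [n for n in nodes if n not in producers]

print(sinks(nodes, edges))  # → ['ws.out']
```

Tracing "what feeds into X" is the mirror-image query over incoming edges; both are one-liners once the topology is data rather than code.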
### Layer 3: Architecture Design
Beyond inspecting a running system, the MCP server includes an architecture agent for design-time assistance. The architecture state lives in `.aimdb/state.toml` — a declarative description of records, tasks, binaries and connectors. The AI can read this state, propose changes and validate them against a live instance.
Proposals follow a structured workflow: the AI suggests a change (add a record, modify a buffer, wire a connector), the human confirms or rejects via `resolve_proposal`, and the state file is updated only on confirmation. The AI is a design partner, not an autonomous agent.
`validate_against_instance` closes the loop by comparing the declared architecture against what's actually running, surfacing missing records, buffer mismatches and connector drift.
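Conceptually, that comparison is a set diff over record names and their buffer configurations. A toy sketch with hypothetical record and buffer names (the real tool's output shape may differ):

```python
# Hypothetical record → buffer-config maps:
declared = {"climate.sensor": "ring(64)", "ws.out": "mailbox"}   # from .aimdb/state.toml
running  = {"climate.sensor": "ring(32)", "debug.tap": "mailbox"}  # from the live instance

def drift(declared, running):
    """Structured diff in the spirit of validate_against_instance."""
    return {
        "missing":    sorted(set(declared) - set(running)),   # declared but not running
        "undeclared": sorted(set(running) - set(declared)),   # running but not declared
        "mismatched": sorted(
            name for name in set(declared) & set(running)
            if declared[name] != running[name]                # buffer config differs
        ),
    }

print(drift(declared, running))
```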
## Practical Examples
"What is the current temperature from station alpha?"
The AI calls `get_record` with live data — no dashboard needed.
"How does data flow from the KNX connector to the browser dashboard?"
`graph_nodes` and `graph_edges` trace the full path.
"I need to add a CO₂ sensor to the building automation system."
The AI walks through `propose_add_record` with the right buffer type and connector metadata — then waits for confirmation.
"Is the running system still in sync with the architecture spec?"
`validate_against_instance` returns a structured diff.
## Why Data-First Makes This Work
Most systems require extensive instrumentation before an AI can reason about them — custom metrics exporters, documentation that falls out of date, schema registries that cover serialization but not topology.
AimDB doesn't need any of that because the metadata is already there. A record that exists is discoverable. A record that implements `Observable` has a signal the AI can read. A record wired through `.transform()` has an edge in the dependency graph. The MCP server doesn't add instrumentation — it exposes what the data contracts already declare.
This is a direct consequence of the data-driven philosophy: when the data model is the architecture, introspection is not a feature you add — it's a property you get.
## Decision Memory
One more detail worth noting. The MCP server includes a `save_memory` tool that persists design context — what was considered, what was decided, what alternatives were rejected and why — to `.aimdb/memory.md`. This means the rationale behind architectural decisions survives across sessions. A new team member (or the same AI in a future conversation) can read the decision log and understand not just what the system looks like, but why.
## Getting Started
Install the MCP server:
```shell
cargo install aimdb-mcp
```
Add it to your editor's MCP configuration (`.vscode/mcp.json` for VS Code) and point it at your workspace. The server discovers running instances automatically via Unix socket scanning.
The full tool reference is in the aimdb-mcp README. The architecture agent prompts — including onboarding, breaking change review and troubleshooting — are built into the server and available to any connected AI client.
Have questions? Open an issue on GitHub or join the discussion.