Infrastructure

MCP Servers

Twelve custom Model Context Protocol servers that transform Claude Code from a general-purpose assistant into a domain-aware system with hands on the keyboard, eyes on the vault, and memory across sessions.

Python · MCP Protocol · FastMCP · Ollama · SQLite · stdio · httpx

In Plain English

MCP (Model Context Protocol) is a standard that lets AI assistants connect to other software. These servers act as bridges: one connects the AI to your notes in Obsidian (a note-taking app), another controls a game server, another manages local AI models running through Ollama (a tool for running AI on your own computer). Together they give the AI the ability to actually do things instead of just talking about them.

Problem

Claude Code ships with a powerful set of general-purpose tools: file reading, writing, searching, and shell execution. For most coding tasks, that is plenty. But the moment you ask it to do something domain-specific, like routing a note into the correct folder of an Obsidian vault based on its emotional content, or giving an item to a specific player on a running Don't Starve Together server, or browsing trending AI art models on CivitAI, those generic tools hit a wall. You end up writing fragile shell one-liners, explaining context every single session, and hoping the AI remembers your conventions. It never does.

The Model Context Protocol changes this equation entirely. MCP is an open standard that lets AI assistants call purpose-built tools over a lightweight stdio pipe. Each server registers its tools at startup, Claude Code discovers them automatically, and from that point forward the AI can call those tools by name with structured parameters. No shell hacks, no repeated context, no hoping the AI guesses the right API endpoint. The tools carry their own documentation, their own parameter schemas, and their own error handling.
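Under the hood, each tool call is a small JSON message exchanged over stdin/stdout. The following is a simplified Python sketch of that pattern: the `tools/list` and `tools/call` method names follow the MCP specification, but the `route_note` tool, its schema, and the line-per-message framing are illustrative (real servers use a library such as FastMCP, which handles the full JSON-RPC framing):

```python
import json
import sys

# Each server declares its tools with JSON Schemas, then answers calls.
TOOLS = {
    "route_note": {
        "description": "Classify a note and file it in the right vault folder.",
        "inputSchema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

def handle(request: dict) -> dict:
    """Dispatch one decoded request to the matching handler."""
    if request["method"] == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    if request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        if name == "route_note":
            return {"result": f"routed {args['path']}"}
    return {"error": "unknown method"}

if __name__ == "__main__":
    for line in sys.stdin:  # one JSON request per line over the stdio pipe
        print(json.dumps(handle(json.loads(line))), flush=True)
```

The important property is that the schema travels with the tool: the assistant never has to guess parameter names, because the server declared them at startup.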

This collection of twelve servers grew organically from real workflows. Every time a task required repetitive explanation or brittle workarounds, it became a candidate for a dedicated MCP server. The result is a system where Claude Code understands the local environment deeply: it knows the vault structure, the running models, the game server state, the task lists, and the file system health, all through clean, typed tool interfaces rather than ad-hoc prompting.

Architecture

Architecture (diagram summary): Claude Code sits at the hub, discovering 100+ tools across 12 servers over the stdio protocol, with every call validated against its JSON Schema.

Knowledge & Memory
- local-brain: brain_search, brain_classify, remember_fact · 8 tools · Python/FastMCP
- memory-db: entities, sessions, error_check, cognitive self-improvement · 14 tools · Python/SQLite
- ripgrep: search_text, fast code search · 3 tools · Python

Vault & Files
- vault-manager: route_note, analyze_file, custom Ollama model · 7 tools · Python/FastMCP
- obsidian-vault: scan_vault, find_orphans, link analysis · 6 tools · Node.js
- file-utils: smart_cleanup, duplicate detection · 18 tools · Python

External Services
- civitai: search_models, browse_trending, download LoRAs/embeddings · 9 tools · Python/httpx
- ticktick: add_task, quick_task, multi-account support · 8 tools · Python/API
- others: context7, sqlite, firefox-devtools from the wider MCP ecosystem

AI Infrastructure
- ollama: chat, embeddings, compare_models · 7 tools · Python/Ollama
- local-ai-manager: benchmark, GPU status, create_model, health checks · 14 tools · Python/FastMCP
- training: model fine-tuning, dataset prep · 5 tools · Python

Automation
- desktop-automation: OCR, screenshots, click_at, type_text, visual desktop control · 8 tools · Python/pyautogui

Gaming
- dst-server: give_item, set_season, spawn, console_cmd, live game server control · 14 tools · Python/RCON

Key Features

Semantic Knowledge Access

local-brain + memory-db

The local-brain server provides vector-based semantic search across the entire codebase using Ollama embeddings, classifies incoming requests to route them to the correct agent, and answers architectural questions by combining indexed knowledge with a local LLM. The memory-db server adds structured memory with entities, relationships, sessions, and a cognitive self-improvement loop that tracks which recalled memories actually helped solve problems.
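The retrieval core of a server like local-brain can be sketched independently of the embedding backend: embed the query (in this system, via Ollama), then rank stored chunks by cosine similarity. The function names and the shape of the index are illustrative, not the server's actual API:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def brain_search(query_vec: list[float], index, top_k: int = 3) -> list[str]:
    """index: list of (chunk_text, embedding) pairs, e.g. loaded from SQLite."""
    scored = [(cosine(query_vec, vec), text) for text, vec in index]
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]
```

In the real server the embeddings come from Ollama's embedding endpoint and the index lives in a local database, but the ranking step is exactly this kind of similarity sort.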

Intelligent Vault Management

3 servers, 28 tools

Three specialized servers handle different vault concerns. The vault-manager wraps a custom-trained Ollama model that knows the vault's folder structure, naming conventions, and routing rules. It can classify a note as shadow work and route it to the correct psychology subfolder with proper frontmatter. The obsidian-vault server (Node.js) handles structural operations like orphan detection and link analysis. The file-utils server provides safe cleanup with profiles, duplicate detection, trash with restore, and operation history.

Full AI Infrastructure Control

ollama + local-ai-manager

The ollama server wraps every Ollama API endpoint as a callable tool: chat, embeddings, model info, pulling new models, and even side-by-side model comparison that benchmarks latency and tokens-per-second. The local-ai-manager goes further with GPU status monitoring, MCP server health checks across the entire fleet, automated model testing with quality metrics, and the ability to create new Ollama models from Modelfiles directly through tool calls.
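As a rough sketch of what wrapping one such endpoint looks like, the following builds a non-streaming request for Ollama's documented `/api/chat` endpoint. The real servers use httpx; stdlib `urllib` is shown here only to keep the example dependency-free, and the model name in the usage is illustrative:

```python
import json
from urllib import request

OLLAMA = "http://localhost:11434"  # Ollama's default local port

def build_chat_payload(model: str, prompt: str) -> dict:
    # Non-streaming chat request in Ollama's /api/chat format.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ollama_chat(model: str, prompt: str) -> str:
    req = request.Request(
        f"{OLLAMA}/api/chat",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires a running Ollama instance
        return json.loads(resp.read())["message"]["content"]
```

Because every endpoint is wrapped the same way, benchmarking two models side by side is just two such calls with timing around them.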

Real-World Integrations

civitai + ticktick + dst

The CivitAI server lets Claude browse, search, and download Stable Diffusion models, LoRAs, and embeddings from the CivitAI marketplace. The TickTick server manages tasks across multiple accounts with smart parsing for priorities and due dates. The DST server controls a live Don't Starve Together dedicated server: giving items to specific players, changing seasons, spawning creatures, and reading game state. The desktop-automation server uses OCR and pyautogui to give Claude direct visual control over the Windows desktop.

How It Works

01

Server Registration

When Claude Code launches, it reads the MCP configuration file and spawns each server as a child process. Every server initializes over the stdio protocol, declaring its name and the list of tools it provides. Each tool carries a JSON Schema that describes its parameters, their types, and what the tool does. This happens once at startup and takes under a second for the entire fleet.
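The configuration driving this step is a small JSON file with an `mcpServers` map, each entry naming the command used to spawn one server. The command paths and module names below are illustrative:

```json
{
  "mcpServers": {
    "vault-manager": {
      "command": "python",
      "args": ["-m", "vault_manager.server"]
    },
    "obsidian-vault": {
      "command": "node",
      "args": ["servers/obsidian-vault/index.js"]
    }
  }
}
```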

02

Tool Discovery and Selection

Claude Code aggregates all registered tools into a single catalog. When a user request arrives, the model sees the full tool list and selects the most appropriate one based on the tool descriptions and parameter schemas. If the user asks about vault health, Claude naturally reaches for vault-manager's scan_vault_health. If the user says "give a player 40 gold," it calls dst-server's item gifting tool. No routing logic is needed on the user's side because the tool names and descriptions carry enough semantic information.

03

Local Execution

Each MCP server processes requests entirely on the local machine. The ollama server calls the local Ollama API. The vault-manager queries a local Qwen model. The memory-db reads from a local SQLite database. The ripgrep server shells out to the rg binary. Nothing leaves the machine unless the tool explicitly calls an external API (like CivitAI or TickTick), and even those requests go through the user's own API keys. Results flow back through the same stdio pipe as structured JSON.
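The memory-db side of this is plain local SQLite. A minimal sketch of the pattern, with an illustrative table schema and function names (the real server's schema includes entities, relationships, and sessions):

```python
import sqlite3

# Everything lives in a local file; ":memory:" is used here for the sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS facts (topic TEXT, fact TEXT)")

def remember_fact(topic: str, fact: str) -> None:
    """Persist one fact under a topic."""
    con.execute("INSERT INTO facts (topic, fact) VALUES (?, ?)", (topic, fact))
    con.commit()

def recall_facts(topic: str) -> list[str]:
    """Return all facts stored under a topic."""
    rows = con.execute("SELECT fact FROM facts WHERE topic = ?", (topic,))
    return [fact for (fact,) in rows]
```

Nothing about this requires a network: the tool call arrives over stdio, the query runs against a file on disk, and the rows go back the same way.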

04

Composable Tool Chains

The real power emerges when Claude chains tools across multiple servers in a single conversation turn. It might call local-brain's brain_classify to determine the right agent, then vault-manager's route_note to file a note, then memory-db's save_session to log what happened. Each server is independent and stateless (apart from its own database), so these compositions are reliable and the failure of one tool does not cascade to others.

05

Safety and Confirmation

Destructive operations across the server fleet follow a consistent safety pattern. The file-utils server requires an explicit confirm=True parameter before deleting anything, and previews the files and sizes first. The local-ai-manager requires confirmation before deleting models. Protected paths and patterns are hardcoded so that critical files like .git directories, CLAUDE.md, and .obsidian configs can never be accidentally removed, even if the AI hallucinates a cleanup command.
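The confirm-first pattern can be sketched as follows; the function name, return shape, and protected-path list are illustrative stand-ins for the real file-utils implementation:

```python
# Hardcoded patterns that must never be deleted, per the safety pattern above.
PROTECTED = (".git", "CLAUDE.md", ".obsidian")

def smart_cleanup(paths: list[str], confirm: bool = False) -> dict:
    """Preview deletions by default; delete only when confirm=True."""
    deletable = [p for p in paths if not any(tok in p for tok in PROTECTED)]
    if not confirm:
        # Dry run: show what would happen, touch nothing.
        return {"would_delete": deletable, "note": "pass confirm=True to proceed"}
    # A real implementation would delete here, then log to operation history.
    return {"deleted": deletable}
```

The key property is that the safe path is the default: even if the model issues a cleanup call without the confirmation flag, the worst outcome is a preview.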

Tech Stack

Protocol

MCP over stdio

Languages

Python (FastMCP), Node.js

AI Backend

Ollama (embeddings, chat, classification)

Storage

SQLite (memory.db, brain.db, embeddings)

External APIs

CivitAI, TickTick, Ollama REST

Desktop

pyautogui, Tesseract OCR, pygetwindow