Saturday, February 21, 2026

MySQL + Neo4j for AI Workloads: Why Relational Databases Still Matter

So I figured it was about time I documented how to build persistent memory for AI agents using the databases you already know. Not vector databases - MySQL and Neo4j.

This isn't theoretical. I use this architecture daily, handling AI agent memory across multiple projects. Here's the schema and query patterns that actually work.

The Architecture

AI agents need two types of memory:

  • Structured memory - What happened, when, why (MySQL)
  • Pattern memory - What connects to what (Neo4j)

Vector databases are for similarity search. They're not for tracking workflow state or decision history. For that, you need ACID transactions and proper relationships.

The MySQL Schema

Here's the actual schema for AI agent persistent memory:

-- Architecture decisions the AI made
CREATE TABLE architecture_decisions (
    id INT AUTO_INCREMENT PRIMARY KEY,
    project_id INT NOT NULL,
    title VARCHAR(255) NOT NULL,
    decision TEXT NOT NULL,
    rationale TEXT,
    alternatives_considered TEXT,
    status ENUM('accepted', 'rejected', 'pending') DEFAULT 'accepted',
    decided_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    tags JSON,
    INDEX idx_project_date (project_id, decided_at),
    INDEX idx_status (status)
) ENGINE=InnoDB;

-- Code patterns the AI learned
CREATE TABLE code_patterns (
    id INT AUTO_INCREMENT PRIMARY KEY,
    project_id INT NOT NULL,
    category VARCHAR(50) NOT NULL,
    name VARCHAR(255) NOT NULL,
    description TEXT,
    code_example TEXT,
    language VARCHAR(50),
    confidence_score FLOAT DEFAULT 0.5,
    usage_count INT DEFAULT 0,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_project_category (project_id, category),
    INDEX idx_confidence (confidence_score)
) ENGINE=InnoDB;

-- Work session tracking
CREATE TABLE work_sessions (
    id INT AUTO_INCREMENT PRIMARY KEY,
    session_id VARCHAR(255) UNIQUE NOT NULL,
    project_id INT NOT NULL,
    started_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    ended_at DATETIME,
    summary TEXT,
    context JSON,
    INDEX idx_project_session (project_id, started_at)
) ENGINE=InnoDB;

-- Pitfalls to avoid (learned from mistakes)
CREATE TABLE pitfalls (
    id INT AUTO_INCREMENT PRIMARY KEY,
    project_id INT NOT NULL,
    category VARCHAR(50),
    title VARCHAR(255) NOT NULL,
    description TEXT,
    how_to_avoid TEXT,
    severity ENUM('critical', 'high', 'medium', 'low'),
    encountered_count INT DEFAULT 1,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_project_severity (project_id, severity)
) ENGINE=InnoDB;

Composite indexes. ENUM constraints. JSON columns where flexibility helps. In production you'd also add foreign keys from project_id to a projects table. This is what relational databases are good at.

Query Patterns

Here's how you actually query this for AI agent memory:

-- Get recent decisions for context
SELECT title, decision, rationale, decided_at
FROM architecture_decisions
WHERE project_id = ?
  AND decided_at > DATE_SUB(NOW(), INTERVAL 30 DAY)
ORDER BY decided_at DESC
LIMIT 10;

-- Find high-confidence patterns
SELECT category, name, description, code_example
FROM code_patterns
WHERE project_id = ?
  AND confidence_score >= 0.80
ORDER BY usage_count DESC, confidence_score DESC
LIMIT 20;

-- Check for known pitfalls before implementing
SELECT title, description, how_to_avoid
FROM pitfalls
WHERE project_id = ?
  AND category = ?
  AND severity IN ('critical', 'high')
ORDER BY encountered_count DESC;

-- Track session context across interactions
SELECT context
FROM work_sessions
WHERE session_id = ?
ORDER BY started_at DESC
LIMIT 1;

These are straightforward SQL queries. EXPLAIN shows index usage exactly where expected. No surprises.
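The session-context pattern is the one agents hit on every interaction, so here's a minimal sketch of it in application code. The stdlib sqlite3 module stands in for MySQL so the sketch is self-contained; with mysql-connector-python the parameterized queries look the same (with `%s` placeholders, and `ON DUPLICATE KEY UPDATE` instead of SQLite's `INSERT OR REPLACE`). The table is a trimmed version of work_sessions above.

```python
import json
import sqlite3

# sqlite3 stands in for MySQL so this runs anywhere;
# the parameterized-query pattern is identical with a MySQL driver.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE work_sessions (
        id INTEGER PRIMARY KEY,
        session_id TEXT UNIQUE NOT NULL,
        project_id INTEGER NOT NULL,
        started_at TEXT DEFAULT CURRENT_TIMESTAMP,
        context TEXT  -- JSON stored as text
    )
""")

def save_context(session_id: str, project_id: int, context: dict) -> None:
    """Persist the agent's working context for the next session."""
    conn.execute(
        "INSERT OR REPLACE INTO work_sessions (session_id, project_id, context) "
        "VALUES (?, ?, ?)",
        (session_id, project_id, json.dumps(context)),
    )

def load_context(session_id: str) -> dict:
    """Restore context at the start of a new interaction."""
    row = conn.execute(
        "SELECT context FROM work_sessions WHERE session_id = ?",
        (session_id,),
    ).fetchone()
    return json.loads(row[0]) if row else {}

save_context("sess_42", 1, {"current_task": "refactor auth", "open_files": ["auth.py"]})
print(load_context("sess_42")["current_task"])  # refactor auth
```

The point is the shape, not the driver: parameterized queries, JSON serialized into a single column, and a lookup keyed on the unique session_id.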

The Neo4j Layer

MySQL handles the structured data. Neo4j handles the relationships:

// Create nodes for decisions
CREATE (d:Decision {
  id: 'dec_123',
  title: 'Use FastAPI',
  project_id: 1,
  embedding: [0.23, -0.45, ...]  // Vector for similarity
})

// Create relationships (MERGE matches the existing dec_123 node
// instead of creating a duplicate, as CREATE would)
MERGE (d1:Decision {id: 'dec_123'})
MERGE (d2:Decision {id: 'dec_45'})
  ON CREATE SET d2.title = 'Used Flask before'
MERGE (d1)-[:SIMILAR_TO {score: 0.85}]->(d2)
MERGE (d1)-[:CONTRADICTS]->(d3:Decision {title: 'Avoid frameworks'})

// Query: Find similar past decisions
MATCH (current:Decision {id: $decision_id})
MATCH (current)-[r:SIMILAR_TO]-(similar:Decision)
WHERE r.score > 0.80
RETURN similar.title, r.score
ORDER BY r.score DESC

// Query: What outcomes followed this pattern?
MATCH (d:Decision)-[:LEADS_TO]->(o:Outcome)
WHERE d.title CONTAINS 'Redis'
RETURN d.title, o.type, o.success_rate
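The SIMILAR_TO score has to come from somewhere. One common choice (an assumption on my part, the scoring function is up to you) is cosine similarity between the stored embeddings, computed in application code before the relationship is written:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings; real ones have hundreds of dimensions.
fastapi_vec = [0.23, -0.45, 0.81]
flask_vec = [0.25, -0.40, 0.79]

score = cosine_similarity(fastapi_vec, flask_vec)
# Only scores above the 0.80 threshold get a SIMILAR_TO edge written.
if score > 0.80:
    print("create SIMILAR_TO edge with score", round(score, 2))
```

Precomputing the edge at write time is what makes the later `WHERE r.score > 0.80` query a plain property filter instead of a vector scan.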

How They Work Together

The flow looks like this:

  1. AI agent generates content or makes a decision
  2. Store structured data in MySQL (what, when, why, full context)
  3. Generate embedding, store in Neo4j with relationships to similar items
  4. Next session: Neo4j finds relevant similar decisions
  5. MySQL provides the full details of those decisions

MySQL is the source of truth. Neo4j is the pattern finder.
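Steps 4 and 5 of that flow can be sketched in a few lines. Plain dicts stand in for both databases here (the real code would use a MySQL driver and the neo4j Python driver); the shape of the read path is what matters:

```python
# In-memory stand-ins: mysql_rows plays the MySQL source of truth,
# graph_edges plays Neo4j's SIMILAR_TO relationships.
mysql_rows = {
    "dec_123": {"title": "Use FastAPI",
                "rationale": "Async support, typed request models"},
    "dec_45": {"title": "Used Flask before",
               "rationale": "Team familiarity"},
}
graph_edges = {
    "dec_123": [("dec_45", 0.85)],  # (similar decision id, similarity score)
}

def recall(decision_id: str, threshold: float = 0.80) -> list[dict]:
    """Step 4: the graph finds similar ids. Step 5: the relational store
    hydrates those ids into full decision records."""
    similar_ids = [d for d, score in graph_edges.get(decision_id, [])
                   if score > threshold]
    return [mysql_rows[d] for d in similar_ids]

for row in recall("dec_123"):
    print(row["title"])  # Used Flask before
```

Note the graph only stores ids and scores; every human-readable detail comes from the relational side, which is exactly the source-of-truth split described above.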

Why Not Just Vector Databases?

I've seen teams try to build AI agent memory with just Pinecone or Weaviate. It doesn't work well because:

Vector DBs are good for:

  • Finding documents similar to a query
  • Semantic search (RAG)
  • "Things like this"

Vector DBs are bad for:

  • "What did we decide on March 15th?"
  • "Show me decisions that led to outages"
  • "What's the current status of this workflow?"
  • "Which patterns have confidence > 0.8 AND usage_count > 10?"

Those queries need structured filtering, joins, and transactions. That's relational database territory.

MCP and the Future

The Model Context Protocol (MCP) is standardizing how AI systems handle context. Early MCP implementations are discovering what we already knew: you need both structured storage and graph relationships.

MySQL handles the MCP "resources" and "tools" catalog. Neo4j handles the "relationships" between context items. Vector embeddings are just one piece of the puzzle.

Production Notes

Current system running this architecture:

  • MySQL 8.0, 48 tables, ~2GB data
  • Neo4j Community, ~50k nodes, ~200k relationships
  • Query latency: MySQL <10ms, Neo4j <50ms
  • Backup: Standard mysqldump + neo4j-admin dump
  • Monitoring: Same Percona tools I've used for years

The operational complexity is low because these are mature databases with well-understood operational patterns.

Too Much Work? Let AI Build It For You

Look, I get it. This is a lot of schema to set up, a lot of queries to write, a lot of moving parts.

Here's the thing: you don't have to type it all yourself. Copy the schema above, paste it into Claude Code or Kimi CLI, and tell it what you want to build. The AI will generate the Python code, the connection handling, the query patterns - all of it.

If you want to understand what's happening under the hood, start here:

Building a Simple MCP Server in Python

Then let your AI tool do the heavy lifting. That's literally what I did. The schema is mine, the architecture decisions are mine, but the implementation? Claude wrote most of it while I watched and corrected.

Use the tools. That's what they're for.

When to Use What

  Use Case                                        Database
  Workflow state, decisions, audit trail          MySQL/PostgreSQL
  Pattern detection, similarity, relationships    Neo4j
  Semantic document search (RAG)                  Vector DB (optional)

Start with MySQL for state. Add Neo4j when you need pattern recognition. Only add vector DBs if you're actually doing semantic document retrieval.

Summary

AI agents need persistent memory. Not just embeddings in a vector database - structured, relational, temporal memory with pattern recognition.

MySQL handles the structured state. Neo4j handles the graph relationships. Together they provide what vector databases alone cannot.

Don't abandon relational databases for AI workloads. Use the right tool for each job, which is using both together.

For more on the AI agent perspective on this architecture, see the companion post on 3k1o.


@AnotherMySQLDBA