Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
Founder & CEO, Agentik {OS}
AI agents design schemas that anticipate growth, optimize slow queries automatically, and generate safe migrations. Zero outages in 40+ deployments.

I learned the most expensive lesson of my early career through a database design mistake.
Six months into a project, a single new feature required a query pattern the schema fundamentally could not support efficiently. The fix required migrating 40 million rows, rebuilding three indexes, and coordinating a three-hour maintenance window. The root cause was a schema decision I made in week one that took maybe twenty minutes to design.
That ratio, twenty minutes of design work causing three weeks of migration effort, is what makes database design uniquely consequential. And it is exactly the kind of high-stakes, pattern-heavy problem where AI agents provide their most durable value.
The failure mode of manual schema design is thinking only about present requirements. You model the features you are building today. You normalize data to third normal form. You add indexes for the queries you have already written.
Then the product evolves. Users want search across full-text content. Analytics need aggregations your schema cannot compute without full table scans. A new feature requires a query joining six tables that were designed independently.
AI agents design schemas with growth in mind. They do not just normalize your data. They ask what this data needs to support at scale.
When I describe a feature, the agent structures its response around interrogation before generation. "Will users query their own data or across all users? How many records per user are expected at 12 months, 24 months? Will you need full-text search? Real-time subscriptions? Analytics aggregations?"
Based on those answers, it makes explicit trade-offs. More normalization means smaller write amplification but more expensive reads. Denormalization means faster reads but complex update logic. JSON columns provide schema flexibility but limit queryability. The agent explains each trade-off rather than making silent assumptions.
-- Schema optimized for the query pattern: "list a user's projects with member count"
-- Agent generated this after asking about access patterns
CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workspace_id UUID NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
  owner_id UUID NOT NULL REFERENCES users(id),
  title TEXT NOT NULL,
  description TEXT,
  status TEXT NOT NULL DEFAULT 'draft'
    CHECK (status IN ('draft', 'active', 'archived')),
  member_count INTEGER NOT NULL DEFAULT 0, -- Denormalized for fast listing
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Composite index matching the dashboard query pattern
CREATE INDEX idx_projects_workspace_status_created
  ON projects (workspace_id, status, created_at DESC);

-- Partial index for the common case: active projects only
CREATE INDEX idx_projects_workspace_active
  ON projects (workspace_id, created_at DESC)
  WHERE status = 'active';
-- Trigger to maintain denormalized member_count
-- (assumes a project_members join table with a project_id column)
CREATE OR REPLACE FUNCTION update_project_member_count()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE projects SET member_count = member_count + 1 WHERE id = NEW.project_id;
  ELSIF TG_OP = 'DELETE' THEN
    UPDATE projects SET member_count = member_count - 1 WHERE id = OLD.project_id;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER project_member_count_trigger
  AFTER INSERT OR DELETE ON project_members
  FOR EACH ROW EXECUTE FUNCTION update_project_member_count();

The denormalized member_count column is a deliberate design decision. Without it, listing projects with member counts requires a GROUP BY aggregation joining two tables. With it, the list query is a simple index scan. The agent explains why and implements the trigger that keeps the count consistent.
Schema design is a founding decision. Query optimization is continuous. Every new feature adds access patterns. Traffic growth exposes queries that worked at 10,000 rows but collapse at 10 million.
AI agents provide specific, actionable query optimization rather than generic advice.
Not "consider adding indexes to frequently queried columns." Rather: "The dashboard query executing on workspace_id = $1 AND status = 'active' performs a full table scan on projects (287,000 rows). Adding a partial index on (workspace_id, created_at DESC) WHERE status = 'active' will reduce this from 340ms to approximately 2ms at current data volume."
The specificity matters. Vague recommendations create analysis paralysis. Specific recommendations create action.
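The before-and-after plan change is easy to reproduce on any engine. Here is a minimal sketch using SQLite through Python's sqlite3 module rather than PostgreSQL (table and index names are illustrative, and the millisecond figures above are from the Postgres context); EXPLAIN QUERY PLAN is SQLite's rough analogue of EXPLAIN, and the scan-versus-index distinction is the same idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE projects (
        id INTEGER PRIMARY KEY,
        workspace_id INTEGER NOT NULL,
        status TEXT NOT NULL,
        created_at TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO projects (workspace_id, status, created_at) VALUES (?, ?, ?)",
    [(i % 50, "active" if i % 3 else "archived", f"2024-{i % 12 + 1:02d}-01")
     for i in range(5000)],
)

SQL = ("SELECT id FROM projects "
       "WHERE workspace_id = 7 AND status = 'active' "
       "ORDER BY created_at DESC")

def plan():
    # EXPLAIN QUERY PLAN returns (id, parent, notused, detail) rows
    return " | ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + SQL))

before = plan()  # SCAN: every row examined, plus a temp sort
conn.execute("""
    CREATE INDEX idx_projects_workspace_status_created
    ON projects (workspace_id, status, created_at DESC)
""")
after = plan()   # SEARCH ... USING INDEX: direct lookup, order from the index
print(before)
print(after)
```

The same workflow against Postgres would use EXPLAIN ANALYZE, which also reports actual execution time, so the agent's "340ms to 2ms" style of claim can be verified directly.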
The N+1 query is the most common performance issue in ORM-heavy codebases. A query loads a list. Then another query loads related data for each list item. 20 items, 21 queries.
AI agents identify N+1 patterns by analyzing the relationship between code and query logs:
// N+1 pattern - what the agent finds
const posts = await prisma.post.findMany({
  where: { workspaceId },
  take: 20,
});
const postsWithData = await Promise.all(
  posts.map(async (post) => ({
    ...post,
    author: await prisma.user.findUnique({ where: { id: post.authorId } }),
    tags: await prisma.tag.findMany({ where: { postId: post.id } }),
    commentCount: await prisma.comment.count({ where: { postId: post.id } }),
  }))
);
// 1 + 20 + 20 + 20 = 61 queries
// Fixed version - what the agent generates
const posts = await prisma.post.findMany({
  where: { workspaceId },
  take: 20,
  include: {
    author: { select: { id: true, name: true, avatarUrl: true } },
    tags: { select: { id: true, name: true } },
    _count: { select: { comments: true } },
  },
  orderBy: { createdAt: 'desc' },
});
// 1 query with 3 JOINs

The fix is straightforward once you see it. The challenge is finding it. In a codebase with hundreds of queries, N+1 patterns hide in abstraction layers and only reveal themselves through query logging.
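The query-count arithmetic is easy to verify at the driver level. A minimal sketch, using SQLite's statement trace hook via Python's sqlite3 instead of Prisma's query logging, with a pared-down two-table version of the same shape (one related lookup per row rather than three):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER NOT NULL);
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(i, f"user{i}") for i in range(20)])
conn.executemany("INSERT INTO posts VALUES (?, ?)", [(i, i) for i in range(20)])

statements = []
conn.set_trace_callback(statements.append)  # record every SQL statement issued

# N+1 shape: one list query, then one author lookup per post
posts = conn.execute("SELECT id, author_id FROM posts LIMIT 20").fetchall()
for _, author_id in posts:
    conn.execute("SELECT name FROM users WHERE id = ?", (author_id,)).fetchone()
n_plus_one = len(statements)  # 1 + 20 = 21

statements.clear()
# Fixed shape: a single JOIN brings the related data in one round trip
conn.execute(
    "SELECT p.id, u.name FROM posts p JOIN users u ON u.id = p.author_id LIMIT 20"
).fetchall()
joined = len(statements)  # 1

print(n_plus_one, joined)
```

The same counting technique works with any ORM that exposes query logging; the tell is a statement count that scales with the size of the result list.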
Indexes are the highest-impact, lowest-risk optimization in any database. A missing index on a frequently queried column can mean the difference between a 2ms query and a 2-second query.
AI agents audit your schema against your query patterns and generate a prioritized index inventory:
-- High priority: eliminates full table scan on user lookup by email
CREATE UNIQUE INDEX idx_users_email ON users (email);

-- High priority: dashboard's most common query pattern
CREATE INDEX idx_projects_workspace_created
  ON projects (workspace_id, created_at DESC);

-- Medium priority: used in 3 different search flows
CREATE INDEX idx_documents_workspace_title
  ON documents USING gin(to_tsvector('english', title || ' ' || coalesce(content, '')));

-- Low priority: used rarely, but expensive when it runs
CREATE INDEX idx_audit_logs_user_created
  ON audit_logs (user_id, created_at DESC)
  WHERE action = 'delete'; -- Partial index for the rare case

Note the partial index for the audit logs query. Full indexes have write overhead: every insert or update touching indexed columns requires updating the index. A partial index that only covers 5% of rows has 95% less write overhead while still optimizing the queries it covers.
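SQLite supports partial indexes too, which makes the eligibility rule easy to demonstrate: a partial index is only usable when the query's WHERE clause provably implies the index predicate. A sketch with illustrative names (not the Postgres deployment itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE audit_logs (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        action TEXT NOT NULL,
        created_at TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO audit_logs (user_id, action, created_at) VALUES (?, ?, ?)",
    [(i % 10, "delete" if i % 20 == 0 else "update", f"2024-01-{i % 28 + 1:02d}")
     for i in range(1000)],
)
# Partial index: only rows with action = 'delete' are indexed
conn.execute("""
    CREATE INDEX idx_audit_logs_user_created
    ON audit_logs (user_id, created_at DESC)
    WHERE action = 'delete'
""")

def plan(action):
    # The literal matters: SQLite only picks a partial index when it can
    # prove at plan time that the WHERE clause implies the index predicate.
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM audit_logs "
        f"WHERE user_id = 1 AND action = '{action}' ORDER BY created_at DESC"
    ).fetchall()
    return " | ".join(r[3] for r in rows)

print(plan("delete"))  # SEARCH ... USING INDEX (predicate implied)
print(plan("update"))  # SCAN (index not eligible)
```

The second plan is the gotcha: queries that filter on anything other than the indexed predicate silently fall back to a scan, which is why the agent pairs every partial index recommendation with the exact query it serves.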
Schema migrations deserve their fearsome reputation. A migration that locks a table for 30 seconds causes 30 seconds of downtime. In a production system processing thousands of requests per minute, that is thousands of failed requests.
AI agents generate migrations that account for the production reality:
Non-blocking index creation. Regular CREATE INDEX locks the table against writes for the duration of the build. CREATE INDEX CONCURRENTLY builds the index without blocking writes, but requires more care: it cannot run inside a transaction block, and a failed build leaves behind an invalid index that must be dropped and recreated.
Column additions with defaults. Adding a NOT NULL column with a default in PostgreSQL before version 11 rewrites the entire table. In modern PostgreSQL it is instant, but you need to know the version.
Multi-step migrations for complex changes. Renaming a column live means deploying code that reads both old and new names, running the migration, then deploying code that only reads the new name. Never renaming and deploying simultaneously.
-- Safe migration: adding a NOT NULL column with a default
-- Step 1: Add the column nullable, with a default (instant; only a brief lock)
-- New rows get the default immediately; existing rows are backfilled below
ALTER TABLE projects ADD COLUMN priority INTEGER;
ALTER TABLE projects ALTER COLUMN priority SET DEFAULT 0;

-- Step 2: Backfill in batches (no long-held lock, but takes time)
DO $$
DECLARE
  batch_size CONSTANT INTEGER := 10000;
  rows_updated INTEGER;
BEGIN
  LOOP
    UPDATE projects
    SET priority = 0
    WHERE id IN (
      SELECT id FROM projects
      WHERE priority IS NULL
      ORDER BY id
      LIMIT batch_size
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;
    PERFORM pg_sleep(0.01); -- Brief pause between batches
  END LOOP;
END $$;

-- Step 3: Add the NOT NULL constraint (one validation scan, no table rewrite)
ALTER TABLE projects ALTER COLUMN priority SET NOT NULL;

This migration adds a NOT NULL column with zero downtime. A naive single-statement migration would rewrite the entire table on older PostgreSQL versions.
Every production migration needs a tested rollback. Not a theoretical rollback. A rollback you have executed in a staging environment and verified works.
AI agents generate rollback scripts alongside forward migrations. They also estimate execution time and flag migrations that are difficult or impossible to roll back cleanly, prompting a design review before you commit.
For applications using reactive backends like Convex, schema design requires additional thinking. Reactive queries re-execute automatically when underlying data changes. The schema determines how much data changes with each mutation, which determines how much work the reactive layer does.
Designing for reactivity:
Separate frequently-changing from rarely-changing data. A user's profile (name, avatar, bio) changes rarely. A user's activity feed (posts, comments, reactions) changes constantly. Keep them in separate tables. Components subscribed to profile data will not re-render on every activity event.
Granular tables over monolithic ones. A "user" table with 40 columns means any change to any field triggers re-computation for every query touching that table. Splitting into user_profile, user_settings, user_stats narrows the blast radius of each change.
Computed fields as materialized views. If your UI needs a user's follower count, store it denormalized and maintain it via triggers. The alternative is a COUNT query running on every subscription refresh.
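The trigger-maintained counter pattern is small enough to demonstrate end to end. A sketch in SQLite via Python's sqlite3, with hypothetical users/follows tables, showing that the read becomes a single-row lookup instead of an aggregation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        follower_count INTEGER NOT NULL DEFAULT 0  -- denormalized counter
    );
    CREATE TABLE follows (follower_id INTEGER NOT NULL, followee_id INTEGER NOT NULL);

    -- Maintain the counter on write so reads never aggregate
    CREATE TRIGGER follows_after_insert AFTER INSERT ON follows BEGIN
        UPDATE users SET follower_count = follower_count + 1 WHERE id = NEW.followee_id;
    END;
    CREATE TRIGGER follows_after_delete AFTER DELETE ON follows BEGIN
        UPDATE users SET follower_count = follower_count - 1 WHERE id = OLD.followee_id;
    END;
""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.executemany("INSERT INTO follows VALUES (?, 1)", [(2,), (3,), (4,)])
conn.execute("DELETE FROM follows WHERE follower_id = 2")

# The read is a single-row lookup, not a COUNT(*) over follows
count = conn.execute("SELECT follower_count FROM users WHERE id = 1").fetchone()[0]
print(count)
```

In a reactive backend the payoff is doubled: subscriptions watching the user row re-run a cheap point read, and they do so only when the count actually changes.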
Here is the process I follow for every new project:
List all features. Write out every data operation your application will perform. Create, read, update, delete for each entity. Search queries. Analytics. Reporting.
Describe access patterns. For each query, who runs it? How often? With what filters? In what order? What is the expected data volume?
Let the agent propose a schema. It will generate a complete proposal with explanations for every decision.
Challenge with growth scenarios. "What happens at 1 million users? What if we add multi-tenancy? What if we need full-text search?"
Agent adjusts and generates migrations. Both the schema change and the rollback.
Agent generates sample queries. For every access pattern in step 1. Run them with EXPLAIN ANALYZE to verify the query plan matches expectations.
This process takes about two hours and produces a schema that would have taken days of research and experience to design manually. More importantly, it accounts for failure modes I would not have thought of.
I have run over 40 AI-generated migrations in production databases across multiple projects.
Zero data loss incidents. Zero unexpected downtime. Zero failed rollbacks.
That is not luck. The agent is more cautious than I am under deadline pressure. It generates the rollback script before the forward migration. It estimates execution time and flags long-running operations. It checks for table lock acquisition before suggesting any ALTER TABLE.
I have cut corners on migrations under deadline pressure and paid for it. AI agents do not cut corners. They do not feel deadline pressure.
Database design pairs naturally with API development since the schema defines what your endpoints can efficiently serve, and with performance optimization where database query patterns are often the primary bottleneck.
Q: How do AI agents help with database design?
AI agents assist database design by generating schemas from domain descriptions, implementing migration scripts, creating repository layers with proper query patterns, optimizing indexes based on access patterns, and ensuring referential integrity and data validation throughout.
Q: What database patterns work best with AI agents?
The repository pattern (separating database access from business logic), query builders with TypeScript types, migration-based schema management, and cursor-based pagination work best because they provide clear structure and type safety that agents can follow consistently.
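Cursor-based (keyset) pagination is worth sketching, since it is the one pattern in that list people most often get wrong. This sketch uses SQLite via Python's sqlite3 with a hypothetical posts table rather than a query builder; the cursor is the last-seen (created_at, id) pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [(i, f"2024-01-{i:02d}") for i in range(1, 26)],
)

def page(cursor=None, limit=10):
    """Keyset pagination: resume after the last-seen (created_at, id) pair.

    Unlike OFFSET, the database never re-scans rows it already served,
    and rows inserted between requests cannot shift the page boundary."""
    if cursor is None:
        rows = conn.execute(
            "SELECT id, created_at FROM posts "
            "ORDER BY created_at DESC, id DESC LIMIT ?",
            (limit,),
        ).fetchall()
    else:
        last_id, last_created = cursor
        rows = conn.execute(
            "SELECT id, created_at FROM posts "
            "WHERE (created_at, id) < (?, ?) "  # row-value comparison
            "ORDER BY created_at DESC, id DESC LIMIT ?",
            (last_created, last_id, limit),
        ).fetchall()
    next_cursor = (rows[-1][0], rows[-1][1]) if rows else None
    return rows, next_cursor

first, cursor = page()
second, _ = page(cursor)
print(first[0][0], second[0][0])  # newest id on each page
```

The id tiebreaker in both the ORDER BY and the cursor comparison is what keeps pagination stable when multiple rows share a created_at value.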
Q: Can AI agents optimize database queries?
Yes, AI agents identify N+1 queries, suggest appropriate indexes, implement efficient pagination, optimize JOIN patterns, and refactor queries for better performance. They analyze query plans and access patterns to recommend structural improvements.
