Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
Your users refresh to see new data. That's a 2015 architecture. Real-time with Convex and AI agents makes reactivity the default, not a bolt-on.

Your users refresh the page to see new data. That single behavior tells me everything I need to know about your architecture.
It means your application serves a snapshot of the world. Not the world as it is right now. The world as it was when the page loaded. Users have learned to compensate by hammering F5.
This was acceptable in 2015. It is not acceptable now. Real-time is an expectation, not a premium feature. When someone sends a message, the recipient sees it appear. When inventory changes, every storefront reflects it. When a teammate edits, you see their cursor.
The problem is that most teams still treat real-time as a bolt-on. They build a request-response application, ship it, get user complaints about stale data, and then add WebSockets as an afterthought. The result is a nightmare: two data layers, two consistency models, bugs that only reproduce in multi-tab sessions, and state synchronization code that nobody fully understands.
There is a better path. Build reactive from day one.
I have built real-time features on Firebase, Supabase Realtime, raw WebSockets, Socket.io, and Convex. Each taught me something. Convex is the first where reactivity is not an add-on to the architecture. It is the architecture.
Here is the core insight: Convex queries are not one-shot reads. They are subscriptions. When you write a query, every client that executes it automatically receives updates when the underlying data changes. Not polling. Not manual WebSocket management. Not explicit subscription setup.
You write the query. The reactive layer is free.
// A Convex query that is automatically reactive
// Any component using this query updates when tasks change
import { query } from './_generated/server';
import { v } from 'convex/values';

export const listTasks = query({
  args: { workspaceId: v.id('workspaces') },
  handler: async (ctx, { workspaceId }) => {
    const identity = await ctx.auth.getUserIdentity();
    if (!identity) throw new Error('Unauthenticated');
    return await ctx.db
      .query('tasks')
      .withIndex('by_workspace', (q) => q.eq('workspaceId', workspaceId))
      .filter((q) => q.neq(q.field('deleted'), true))
      .order('desc')
      .collect();
  },
});

// React component - automatically re-renders when tasks change
import { useQuery } from 'convex/react';
import { api } from '../convex/_generated/api';
import type { Id } from '../convex/_generated/dataModel';
function TaskList({ workspaceId }: { workspaceId: Id<'workspaces'> }) {
  const tasks = useQuery(api.tasks.listTasks, { workspaceId });
  if (tasks === undefined) return <LoadingSpinner />;
  if (tasks.length === 0) return <EmptyState />;
  return (
    <ul>
      {tasks.map((task) => <TaskItem key={task._id} task={task} />)}
    </ul>
  );
}

No WebSocket setup. No subscription management. No unsubscribe on unmount. The framework handles all of it.
The amount of code you do not write is the code that cannot have bugs. Every WebSocket handler you never wrote is a race condition that never happened.
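To make that concrete, here is a framework-free sketch of the query-as-subscription model. The class and names are invented for illustration, not the Convex API: registering a query is registering a callback, and every write re-runs it for all subscribers.

```typescript
// A framework-free sketch of query-as-subscription: writes to the
// store automatically re-run every registered query callback.
type Task = { id: number; title: string };

class ReactiveStore {
  private tasks: Task[] = [];
  private subscribers: Array<(tasks: Task[]) => void> = [];

  // "Writing a query" is just registering a callback; the store
  // takes care of re-running it on every change.
  subscribe(onChange: (tasks: Task[]) => void): void {
    this.subscribers.push(onChange);
    onChange(this.tasks); // deliver the initial result
  }

  insert(task: Task): void {
    this.tasks = [...this.tasks, task];
    // This fan-out is the code you never write by hand in Convex:
    this.subscribers.forEach((fn) => fn(this.tasks));
  }
}

const store = new ReactiveStore();
let rendered: Task[] = [];
store.subscribe((tasks) => { rendered = tasks; }); // a stand-in "component"
store.insert({ id: 1, title: 'Ship it' });
console.log(rendered.length); // 1
```

In a real reactive backend the fan-out also handles reconnects, ordering, and consistency, which is exactly the code that tends to harbor race conditions when written by hand.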
AI agents understand this reactive model deeply. They design schemas knowing queries will be subscribed to by multiple clients simultaneously. They structure data so a single mutation triggers minimal re-computation. They separate high-frequency data from low-frequency data to prevent unnecessary client updates.
Designing a schema for a reactive backend is different from designing for a request-response backend. Every mutation to a table triggers re-computation for all subscribed queries on that table.
Put user profile data and user activity data in the same table, and every activity event re-renders every profile component across every connected client. That is a lot of unnecessary work.
AI agents apply reactive schema design principles:
Separation by change frequency. Data that changes rarely (user profile, workspace settings) lives separately from data that changes constantly (activity feeds, presence, notifications). Rarely-changed data keeps subscriptions quiet.
Narrow query surfaces. Instead of subscribing to an entire users table, subscribe to the specific user record you need. The query surface determines the blast radius of any mutation.
Aggregates as first-class data. A workspace's member count, a post's like count, a project's open task count. These should be stored as fields and maintained with mutations rather than computed with COUNT queries on subscription refresh.
// Convex schema designed for reactivity
import { defineSchema, defineTable } from 'convex/server';
import { v } from 'convex/values';

export default defineSchema({
  // Rarely changes - kept separate from activity
  userProfiles: defineTable({
    userId: v.string(),
    name: v.string(),
    avatarUrl: v.optional(v.string()),
    bio: v.optional(v.string()),
  }).index('by_user', ['userId']),

  // Changes constantly - kept separate from profile
  userPresence: defineTable({
    userId: v.string(),
    workspaceId: v.id('workspaces'),
    lastSeen: v.number(),
    status: v.union(v.literal('online'), v.literal('idle'), v.literal('offline')),
  })
    .index('by_workspace', ['workspaceId'])
    .index('by_user_workspace', ['userId', 'workspaceId']),

  // Denormalized aggregates to avoid COUNT queries in subscriptions
  workspaces: defineTable({
    name: v.string(),
    ownerId: v.string(),
    memberCount: v.number(), // Maintained by mutations
    projectCount: v.number(), // Maintained by mutations
  }),
});

Reactive architectures solve the stale data problem but introduce new challenges. AI agents handle these patterns correctly because they are well-documented in the Convex ecosystem.
When a user clicks "like," the UI should update immediately, before the server confirms. Waiting for round-trip confirmation creates a laggy experience that feels worse than no real-time at all.
But optimistic updates require correct rollback when the server rejects the mutation. Maybe the user already liked it. Maybe they lost their session. The UI needs to revert cleanly without visual jank.
// Convex mutation with optimistic update
import { useMutation } from 'convex/react';
import { api } from '../convex/_generated/api';
import type { Id } from '../convex/_generated/dataModel';

function LikeButton({ postId }: { postId: Id<'posts'> }) {
  const likePost = useMutation(api.posts.like).withOptimisticUpdate(
    (localStore, { postId }) => {
      const post = localStore.getQuery(api.posts.getById, { postId });
      if (post) {
        localStore.setQuery(api.posts.getById, { postId }, {
          ...post,
          likeCount: post.likeCount + 1,
          likedByCurrentUser: true,
        });
      }
    }
  );
  return (
    <button onClick={() => likePost({ postId })}>
      Like
    </button>
  );
}

If the mutation fails, Convex automatically rolls back the optimistic update. No manual rollback code.
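The contract is worth internalizing: apply the patch locally at once, keep it if the server accepts, revert to the server's value if it rejects. A framework-free sketch of that apply-then-confirm-or-revert shape (the function and field names are invented for illustration, not the Convex API):

```typescript
// Sketch of optimistic-update semantics: show the patched value
// immediately, then settle on it or roll back to the original.
type Post = { likeCount: number; likedByCurrentUser: boolean };

function applyOptimistic<T>(
  current: T,
  patch: (value: T) => T,
  serverAccepted: boolean
): { shownImmediately: T; settled: T } {
  const shownImmediately = patch(current);           // UI renders this instantly
  const settled = serverAccepted ? shownImmediately  // confirmed: keep it
                                 : current;          // rejected: clean rollback
  return { shownImmediately, settled };
}

const post: Post = { likeCount: 3, likedByCurrentUser: false };
const like = (p: Post): Post => ({ likeCount: p.likeCount + 1, likedByCurrentUser: true });

const accepted = applyOptimistic(post, like, true);
const rejected = applyOptimistic(post, like, false);
console.log(accepted.settled.likeCount, rejected.settled.likeCount); // 4 3
```

The point of the sketch is the asymmetry: the user always sees the optimistic value first, and only the settled value depends on the server's verdict.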
Presence data ("who's online," "who's typing") is fundamentally different from application data: it changes every few seconds, goes stale almost immediately, and carries no value once the session ends.
Storing presence alongside application data contaminates your reactive graph. Every presence update triggers subscription refreshes for queries that include presence data.
The solution: treat presence as a separate, ephemeral layer. Convex has built-in presence infrastructure. The AI agent uses it rather than modeling presence as application data.
// Convex: presence as a separate ephemeral layer
import { mutation } from './_generated/server';
import { v } from 'convex/values';

export const updatePresence = mutation({
  args: {
    workspaceId: v.id('workspaces'),
    status: v.union(v.literal('online'), v.literal('idle')),
  },
  handler: async (ctx, { workspaceId, status }) => {
    const identity = await ctx.auth.getUserIdentity();
    if (!identity) return;
    // Upsert: patch the existing presence row, or insert one.
    // A scheduled cleanup job can sweep rows whose lastSeen is stale.
    const existing = await ctx.db
      .query('userPresence')
      .withIndex('by_user_workspace', (q) =>
        q.eq('userId', identity.subject).eq('workspaceId', workspaceId)
      )
      .unique();
    if (existing) {
      await ctx.db.patch(existing._id, { lastSeen: Date.now(), status });
    } else {
      await ctx.db.insert('userPresence', {
        userId: identity.subject,
        workspaceId,
        lastSeen: Date.now(),
        status,
      });
    }
  },
});

Two users edit the same document simultaneously. Both read version 1. Both make changes. Both try to write version 2. One write succeeds. The other silently overwrites the first.
For collaborative editing, conflict resolution is non-negotiable. AI agents select the appropriate strategy:
Operational Transforms (OT) for connected collaboration where precise ordering matters. Real-time collaborative editors.
CRDTs (Conflict-free Replicated Data Types) for offline editing with eventual consistency. Notes, lists, counters. Data types that can merge without conflicts regardless of operation order.
Last-writer-wins with conflict detection for forms and settings. Detect the conflict, surface it to the user, let them resolve manually.
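The third strategy is simple enough to sketch without any framework: store a version counter on the document and reject writes whose expected version is stale. The types and function names below are invented for illustration; in Convex this check would live inside a mutation.

```typescript
// Last-writer-wins with conflict detection, as a pure function.
type Doc = { value: string; version: number };

function saveWithConflictCheck(
  current: Doc,
  expectedVersion: number,
  newValue: string
): { ok: true; doc: Doc } | { ok: false; conflict: Doc } {
  // The client sends the version it last read. If someone else wrote
  // in the meantime, surface the conflict instead of overwriting.
  if (current.version !== expectedVersion) {
    return { ok: false, conflict: current };
  }
  return { ok: true, doc: { value: newValue, version: current.version + 1 } };
}

// Two editors both read version 1...
const base: Doc = { value: 'draft', version: 1 };
const first = saveWithConflictCheck(base, 1, 'edit A');
// ...the first save wins and bumps the version...
const afterFirst = first.ok ? first.doc : base;
// ...and the second save is detected as a conflict, not silently lost.
const second = saveWithConflictCheck(afterFirst, 1, 'edit B');
console.log(first.ok, second.ok); // true false
```

The `conflict` payload is what you surface to the user so they can merge by hand, which is exactly the manual-resolution step the strategy calls for.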
Every subscription consumes server resources. Every data mutation triggers computation for every subscriber. Naive reactive architectures are expensive reactive architectures.
AI agents are ruthless about subscription optimization:
Subscribe to exactly what you render. A user list component that only shows names and avatars should not subscribe to a query that includes emails, roles, and metadata. The subscription surface should match the render surface.
Paginate subscriptions. Subscribing to an entire collection of 10,000 items means 10,000 records flowing to the client. Paginated subscriptions load only what fits in the viewport, and react to mutations only within that window.
Batch mutations. If three form fields save independently with 100ms debounce, you get three mutations per edit session. A single mutation at save time costs less and creates less reactive turbulence.
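The batching point can also be sketched without any framework (the class and method names here are invented for illustration): buffer field edits locally and flush a single merged patch, which in Convex would become one mutation call instead of three.

```typescript
// Batching field edits into one merged patch: one network round-trip,
// one reactive recomputation downstream, instead of one per keystroke.
type Patch = Record<string, unknown>;

class MutationBatcher {
  private pending: Patch = {};

  edit(field: string, value: unknown): void {
    // Later edits to the same field overwrite earlier ones.
    this.pending[field] = value;
  }

  // Flush at save time; returns the merged patch and resets the buffer.
  flush(): Patch {
    const patch = this.pending;
    this.pending = {};
    return patch;
  }
}

const batcher = new MutationBatcher();
batcher.edit('title', 'Q3 plan');
batcher.edit('status', 'active');
batcher.edit('title', 'Q3 roadmap'); // supersedes the first title edit
console.log(batcher.flush()); // { title: 'Q3 roadmap', status: 'active' }
```

Three edits collapse into one patch, and the intermediate title never reaches the server or the subscription graph.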
// Paginated query - only subscribes to visible items
import { query } from './_generated/server';
import { v } from 'convex/values';
import { paginationOptsValidator } from 'convex/server';

export const listTasksPaginated = query({
  args: {
    workspaceId: v.id('workspaces'),
    paginationOpts: paginationOptsValidator,
  },
  handler: async (ctx, { workspaceId, paginationOpts }) => {
    return await ctx.db
      .query('tasks')
      .withIndex('by_workspace', (q) => q.eq('workspaceId', workspaceId))
      .order('desc')
      .paginate(paginationOpts);
  },
});

Real-time is not just for messaging. Every application benefits:
Live dashboards. Analytics metrics, server health, business KPIs. Update as events happen, not on a 60-second refresh interval. An AI agent watching metrics in real-time can detect anomalies and alert faster than any polling-based system.
Collaborative tools. Shared documents, design tools, code editors. Concurrent editing with presence indicators showing where teammates are working.
E-commerce inventory. Show accurate stock levels to every customer simultaneously. When the last item sells, every product page reflects it within seconds. No overselling.
Notification systems. In-app notifications that appear without page reload. Read receipts. Activity feeds that grow as teammates take actions.
Live customer support. Support agents see customer activity in real-time. Typing indicators. Session sharing. Screenshare without plugins.
What used to require dedicated WebSocket infrastructure and careful state synchronization now comes for free with reactive backends. The AI agent writes the queries. The framework handles the subscriptions. Real-time becomes the default, not a feature request.
Pair this with solid deployment automation and monitoring to keep your reactive system healthy in production.
Q: How do you build real-time applications with AI agents?
AI agents build real-time apps using platforms like Convex that provide reactive queries and automatic data synchronization. The agent generates subscription-based components, handles optimistic updates, manages connection state, and ensures data consistency across concurrent users.
Q: What is the best tech stack for real-time AI applications?
The optimal stack is Convex for the real-time backend (automatic subscriptions, ACID transactions), Next.js for the frontend (Server Components, streaming), and TypeScript throughout. This combination enables real-time features without manual WebSocket management.
Q: How do real-time features work with AI streaming?
AI streaming and real-time data combine through server-sent events for AI responses and WebSocket/reactive queries for data updates. Convex handles the real-time data layer while the AI SDK handles streaming AI responses, providing a seamless user experience.