Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
Server actions, streaming with use(), optimistic updates, and error boundaries. React 19 was built for exactly the problems AI interfaces create.

Building AI interfaces in React 18 was a war against the framework. Streaming responses required hacky workarounds. Server-side AI calls needed custom API routes. Optimistic updates were boilerplate-heavy. Error states for async AI operations were verbose.
React 19 fixes all of this. Not incrementally. It was redesigned around exactly these use cases.
I have rebuilt three AI application UIs with React 19. Here are the patterns that actually work.
In React 18 with Next.js, every server-side AI call required: an API route handler, a fetch call from the client, error handling, loading states, and often a custom hook to manage it all. That was 50-100 lines of code per AI interaction.
React 19 server actions collapse this to 10-15 lines:
// actions/ai.ts - No API route needed
"use server";
import Anthropic from "@anthropic-ai/sdk";
import { revalidatePath } from "next/cache";
export async function generateCopy(
prompt: string,
context: { tone: string; audience: string }
): Promise<{ success: true; copy: string } | { success: false; error: string }> {
try {
const client = new Anthropic();
const message = await client.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [
{
role: "user",
content: `Write marketing copy for: ${prompt}\n\nTone: ${context.tone}\nAudience: ${context.audience}`,
},
],
});
const copy = message.content[0].type === "text" ? message.content[0].text : "";
return { success: true, copy };
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Generation failed",
};
}
}
export async function saveCopy(copy: string, projectId: string) {
// Direct database writes work in server actions ("db" is your database client, imported elsewhere in this file)
await db.copies.create({ copy, projectId, createdAt: new Date() });
revalidatePath(`/projects/${projectId}`);
}

// components/copy-generator.tsx - The client component
"use client";
import { useState, useTransition } from "react";
import { generateCopy, saveCopy } from "@/actions/ai";
export function CopyGenerator({ projectId }: { projectId: string }) {
const [copy, setCopy] = useState("");
const [error, setError] = useState("");
const [isPending, startTransition] = useTransition();
function handleGenerate(formData: FormData) {
startTransition(async () => {
const result = await generateCopy(
formData.get("prompt") as string,
{
tone: formData.get("tone") as string,
audience: formData.get("audience") as string,
}
);
if (result.success) {
setCopy(result.copy);
setError("");
} else {
setError(result.error);
}
});
}
return (
<form action={handleGenerate} className="space-y-4">
<textarea name="prompt" placeholder="Describe what to write..." />
<select name="tone">
<option value="professional">Professional</option>
<option value="casual">Casual</option>
<option value="urgent">Urgent</option>
</select>
<input name="audience" placeholder="Target audience..." />
<button type="submit" disabled={isPending}>
{isPending ? "Generating..." : "Generate"}
</button>
{error && <p className="text-red-500">{error}</p>}
{copy && (
<div>
<pre>{copy}</pre>
<button
type="button"
onClick={() => startTransition(() => saveCopy(copy, projectId))}
>
Save
</button>
</div>
)}
</form>
);
}

No API route. No fetch call. No custom hook. The server action handles the AI call, and useTransition manages the pending state.
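The success/error union that generateCopy returns is worth the few extra lines of typing. A self-contained sketch of the same pattern (the type and helper names here are mine, not from the codebase above):

```typescript
// Sketch of the success/error union the server action returns, in isolation.
// Type and helper names are illustrative, not from the article's codebase.
type ActionResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// Narrowing on the `success` discriminant gives type-safe access to either branch.
function unwrapOr<T>(result: ActionResult<T>, fallback: T): T {
  return result.success ? result.data : fallback;
}

const ok: ActionResult<string> = { success: true, data: "Fresh copy" };
const bad: ActionResult<string> = { success: false, error: "rate_limited" };
```

Because the discriminant is checked, the compiler refuses access to `data` on the error branch, which is exactly what keeps the client component's `if (result.success)` branch honest.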
use() Hook

AI responses stream. Users expect to see text appear progressively, not wait for the full response. React 19's use() hook can suspend on a promise of streamed data; the example below takes the more common route of a small custom hook over a streaming fetch.
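The heart of any streaming UI is the same loop regardless of framework: read the response body chunk by chunk, decode, and accumulate. A minimal sketch, independent of React (readTextStream is an illustrative name, not an API):

```typescript
// Sketch: the chunk-accumulation loop at the heart of a streaming UI.
// readTextStream is an illustrative helper, not a React or browser API.
async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onUpdate: (textSoFar: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    text += decoder.decode(value, { stream: true });
    onUpdate(text);
  }
  return text;
}
```

The route handler and client component below implement both halves of this: the server enqueues chunks into a ReadableStream, and the client runs this loop against it.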
// app/api/stream/route.ts
import Anthropic from "@anthropic-ai/sdk";
import { NextRequest } from "next/server";
export async function POST(request: NextRequest) {
const { prompt } = await request.json();
const client = new Anthropic();
const stream = await client.messages.stream({
model: "claude-sonnet-4-20250514",
max_tokens: 2048,
messages: [{ role: "user", content: prompt }],
});
// Return a ReadableStream to the client
const readableStream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder();
for await (const chunk of stream) {
if (
chunk.type === "content_block_delta" &&
chunk.delta.type === "text_delta"
) {
controller.enqueue(encoder.encode(chunk.delta.text));
}
}
controller.close();
},
});
return new Response(readableStream, {
headers: {
"Content-Type": "text/plain; charset=utf-8",
"Transfer-Encoding": "chunked",
},
});
}

// components/streaming-response.tsx
"use client";
import { useState, useCallback } from "react";
function useStream() {
const [content, setContent] = useState("");
const [isStreaming, setIsStreaming] = useState(false);
const [error, setError] = useState<string | null>(null);
const startStream = useCallback(async (promptText: string) => {
setContent("");
setError(null);
setIsStreaming(true);
try {
const response = await fetch("/api/stream", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ prompt: promptText }),
});
if (!response.ok) throw new Error(`HTTP ${response.status}`);
if (!response.body) throw new Error("No response body");
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
setContent((prev) => prev + decoder.decode(value, { stream: true }));
}
} catch (err) {
setError(err instanceof Error ? err.message : "Stream failed");
} finally {
setIsStreaming(false);
}
}, []);
return { content, isStreaming, error, startStream };
}
export function AIWriter() {
const [prompt, setPrompt] = useState("");
const { content, isStreaming, error, startStream } = useStream();
return (
<div className="space-y-4">
<textarea
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
placeholder="What should I write?"
className="w-full rounded border p-3"
/>
<button
onClick={() => startStream(prompt)}
disabled={isStreaming || !prompt.trim()}
className="rounded bg-primary px-4 py-2 text-primary-foreground disabled:opacity-50"
>
{isStreaming ? "Writing..." : "Write"}
</button>
{error && <p className="text-destructive">{error}</p>}
{content && (
<div className="prose max-w-none">
{content}
{isStreaming && (
<span className="inline-block w-2 animate-pulse bg-primary"> </span>
)}
</div>
)}
</div>
);
}

The worst chat interface experience: the user sends a message, nothing happens for 2 seconds, then the response appears. The second-worst: the user's message does not appear immediately, making the interface feel broken.
React 19's useOptimistic solves both:
"use client";
import { useOptimistic, useRef, useState, useTransition } from "react";
// sendMessage is assumed to be a server action defined elsewhere, e.g. in actions/chat.ts
import { sendMessage } from "@/actions/chat";
type Message = {
id: string;
role: "user" | "assistant";
content: string;
pending?: boolean;
};
export function ChatInterface() {
const [messages, setMessages] = useState<Message[]>([]);
const [isPending, startTransition] = useTransition();
const formRef = useRef<HTMLFormElement>(null);
// Optimistic state: shows pending messages immediately
const [optimisticMessages, addOptimisticMessage] = useOptimistic(
messages,
(state: Message[], newMessage: Message) => [...state, newMessage]
);
async function handleSubmit(formData: FormData) {
const content = formData.get("message") as string;
if (!content.trim()) return;
const userMessage: Message = {
id: crypto.randomUUID(),
role: "user",
content,
};
const pendingResponse: Message = {
id: crypto.randomUUID(),
role: "assistant",
content: "",
pending: true,
};
formRef.current?.reset();
startTransition(async () => {
// These appear instantly in the UI
addOptimisticMessage(userMessage);
addOptimisticMessage(pendingResponse);
// Actual server call
const response = await sendMessage(content, messages);
// Update real state once the server responds
setMessages((prev) => [
...prev,
userMessage,
{ id: pendingResponse.id, role: "assistant", content: response },
]);
});
}
return (
<div className="flex h-full flex-col">
<div className="flex-1 overflow-y-auto space-y-4 p-4">
{optimisticMessages.map((msg) => (
<div
key={msg.id}
className={`flex ${msg.role === "user" ? "justify-end" : "justify-start"}`}
>
<div
className={`max-w-[80%] rounded-lg px-4 py-2 ${
msg.role === "user"
? "bg-primary text-primary-foreground"
: "bg-surface text-foreground"
} ${msg.pending ? "animate-pulse opacity-70" : ""}`}
>
{msg.pending ? (
<span className="flex items-center gap-2">
<span className="h-2 w-2 animate-bounce rounded-full bg-current [animation-delay:0ms]" />
<span className="h-2 w-2 animate-bounce rounded-full bg-current [animation-delay:150ms]" />
<span className="h-2 w-2 animate-bounce rounded-full bg-current [animation-delay:300ms]" />
</span>
) : (
msg.content
)}
</div>
</div>
))}
</div>
<form ref={formRef} action={handleSubmit} className="border-t p-4">
<div className="flex gap-2">
<input
name="message"
placeholder="Message..."
className="flex-1 rounded border px-3 py-2"
disabled={isPending}
/>
<button
type="submit"
disabled={isPending}
className="rounded bg-primary px-4 py-2 text-primary-foreground"
>
Send
</button>
</div>
</form>
</div>
);
}

The message appears instantly. The typing indicator shows while waiting. The real response replaces it smoothly. The user always knows what is happening.
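It helps to see that the reducer handed to useOptimistic is nothing magic, just a pure function over the confirmed state. A sketch (the Message type mirrors the one in the component above):

```typescript
// Sketch: the reducer handed to useOptimistic is a pure function over
// the confirmed state. Message mirrors the type in the component above.
type Message = {
  id: string;
  role: "user" | "assistant";
  content: string;
  pending?: boolean;
};

// Never mutate: return a new array layering the optimistic entry on top.
// React discards this layer once real state commits via setMessages.
function addOptimistic(state: Message[], newMessage: Message): Message[] {
  return [...state, newMessage];
}
```

Because the reducer never mutates the confirmed array, React can safely throw the optimistic layer away and re-derive the view once the server response lands.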
AI operations fail. Rate limits, network issues, content policy violations. React 19's improved error boundary support makes handling these failures graceful.
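Before rendering anything, it pays to classify the failure, since a rate limit and a content-policy refusal deserve different copy. A sketch of that branching in isolation (the helper name and category strings are mine; match them to your provider's actual error shapes):

```typescript
// Sketch: mapping raw AI errors to coarse categories before rendering copy.
// Helper name and category strings are illustrative; real providers expose
// structured error codes you should prefer over message matching.
type AIErrorKind = "rate_limit" | "content_policy" | "network" | "unknown";

function classifyAIError(error: Error): AIErrorKind {
  const msg = error.message.toLowerCase();
  if (msg.includes("rate limit") || msg.includes("429")) return "rate_limit";
  if (msg.includes("content policy") || msg.includes("refused")) return "content_policy";
  if (msg.includes("network") || msg.includes("fetch")) return "network";
  return "unknown";
}
```

The error boundary below does this same branching inline in its fallback; pulling it into a function keeps the fallback readable as the categories grow.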
// components/ai-error-boundary.tsx
"use client";
import { Component, ReactNode } from "react";
interface Props {
children: ReactNode;
fallback?: (error: Error, reset: () => void) => ReactNode;
}
interface State {
error: Error | null;
}
export class AIErrorBoundary extends Component<Props, State> {
constructor(props: Props) {
super(props);
this.state = { error: null };
}
static getDerivedStateFromError(error: Error): State {
return { error };
}
componentDidCatch(error: Error) {
// Log to your error monitoring service
console.error("AI operation failed:", error);
}
reset = () => {
this.setState({ error: null });
};
render() {
if (this.state.error) {
if (this.props.fallback) {
return this.props.fallback(this.state.error, this.reset);
}
return (
<div className="rounded-lg border border-destructive/50 bg-destructive/10 p-4">
<p className="font-medium text-destructive">AI operation failed</p>
<p className="mt-1 text-sm text-muted-foreground">
{this.state.error.message}
</p>
<button
onClick={this.reset}
className="mt-3 text-sm text-primary hover:underline"
>
Try again
</button>
</div>
);
}
return this.props.children;
}
}
// Usage
export function AIFeature() {
return (
<AIErrorBoundary
fallback={(error, reset) => (
<div className="text-center p-8">
<p className="text-muted-foreground">
{error.message.includes("rate limit")
? "You have hit the rate limit. Please wait a moment."
: "Something went wrong with the AI. Please try again."}
</p>
<button onClick={reset} className="mt-4 btn-primary">
Retry
</button>
</div>
)}
>
<AIComponent />
</AIErrorBoundary>
);
}

When AI returns results (search results, recommendations, generated content), you often need to render large lists or complex content. React 19's concurrent rendering features prevent this from blocking the UI.
"use client";
import { startTransition, useState, useDeferredValue } from "react";
// semanticSearch is assumed to be your AI search call (server action or fetch wrapper)
import { semanticSearch } from "@/actions/search";
type SearchResult = {
id: string;
title: string;
excerpt: string;
score: number;
};
export function AISearch() {
const [query, setQuery] = useState("");
const [results, setResults] = useState<SearchResult[]>([]);
const [isSearching, setIsSearching] = useState(false);
// Deferred value: results update without blocking input
const deferredResults = useDeferredValue(results);
const isStale = results !== deferredResults;
async function handleSearch(newQuery: string) {
setQuery(newQuery);
if (!newQuery.trim()) {
setResults([]);
return;
}
setIsSearching(true);
startTransition(async () => {
const data = await semanticSearch(newQuery);
setResults(data);
setIsSearching(false);
});
}
return (
<div className="space-y-4">
<input
value={query}
onChange={(e) => handleSearch(e.target.value)}
placeholder="Search with AI..."
className="w-full rounded border px-4 py-2"
/>
{isSearching && (
<p className="text-sm text-muted-foreground">Searching...</p>
)}
<div
className={`space-y-3 transition-opacity ${
isStale ? "opacity-50" : "opacity-100"
}`}
>
{deferredResults.map((result) => (
<div key={result.id} className="rounded border p-4">
<h3 className="font-medium">{result.title}</h3>
<p className="mt-1 text-sm text-muted-foreground">{result.excerpt}</p>
<span className="mt-2 text-xs text-muted-foreground">
Relevance: {Math.round(result.score * 100)}%
</span>
</div>
))}
</div>
</div>
);
}

useDeferredValue ensures the input remains responsive even when rendering many results. The stale indicator shows when results are updating without blocking interaction.
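One caveat the example glosses over: handleSearch fires a request on every keystroke. In practice you would debounce the search call. A minimal trailing-edge debounce sketch (the helper name and the delay figure are mine):

```typescript
// Sketch: trailing-edge debounce so the search fires once the user pauses
// typing, instead of on every keystroke. Name and delay are illustrative.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // reset the window on every call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

In AISearch you would keep setQuery(newQuery) synchronous so typing stays instant, and wrap only the network portion, e.g. memoize `debounce(runSearch, 300)` once per component.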
Putting it together: the complete React 19 architecture for an AI application:
// app/dashboard/page.tsx - Server component orchestrates data fetching
import { Suspense } from "react";
import { AIChat } from "@/components/ai-chat";
import { DocumentList } from "@/components/document-list";
import { AIErrorBoundary } from "@/components/ai-error-boundary";
// getDocuments and ChatSkeleton are assumed to be defined elsewhere in the app
import { getDocuments } from "@/lib/documents";
import { ChatSkeleton } from "@/components/chat-skeleton";
export default async function DashboardPage() {
// Server-side data fetch - no loading state needed here
const documents = await getDocuments();
return (
<div className="grid grid-cols-2 gap-6 h-full">
{/* Document list - data is already loaded */}
<DocumentList documents={documents} />
{/* AI chat - wrapped in error boundary and suspense */}
<AIErrorBoundary>
<Suspense fallback={<ChatSkeleton />}>
<AIChat />
</Suspense>
</AIErrorBoundary>
</div>
);
}

The pattern: server components fetch data, client components handle interaction, error boundaries catch failures, Suspense manages loading states. Every piece does one thing.
The best React 19 AI applications feel seamless because the architecture accounts for every state: loading, error, pending, stale, and success. Users always know what is happening. Nothing ever looks broken.
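Those five states can be made explicit in the type system, so an unhandled state is a compile error rather than a blank screen. A sketch (these types are mine, not a React API):

```typescript
// Sketch: modeling the five UI states as a discriminated union so an
// unhandled state fails to compile instead of rendering nothing.
type UIState<T> =
  | { kind: "loading" }
  | { kind: "pending" }
  | { kind: "stale"; data: T }
  | { kind: "success"; data: T }
  | { kind: "error"; message: string };

function describe<T>(state: UIState<T>): string {
  switch (state.kind) {
    case "loading": return "Loading...";
    case "pending": return "Working...";
    case "stale": return "Showing previous results";
    case "success": return "Done";
    case "error": return `Failed: ${state.message}`;
    default: {
      // Exhaustiveness check: a new state without a branch breaks the build.
      const _never: never = state;
      return _never;
    }
  }
}
```

Mapping each state to visible copy is the cheap, enforceable version of "users always know what is happening".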
Q: What is new in React 19?
React 19 introduces Server Components as the default rendering model, Server Actions for form handling and mutations, improved streaming and suspense, use() hook for reading promises and context, and better TypeScript support. These features are particularly powerful for AI applications that stream responses.
Q: How do React 19 Server Components help AI applications?
Server Components reduce client-side JavaScript by rendering on the server, enable direct database and API access without client exposure, and work seamlessly with AI streaming. AI responses can stream from Server Actions directly to the client without complex WebSocket setup.
Q: What React 19 patterns work best with AI agents?
The best patterns are Server Components for data fetching (no API boilerplate), Server Actions for mutations (type-safe form handling), Suspense for streaming AI responses, and the use() hook for resource loading. AI agents generate these patterns naturally because they are well-documented and type-safe.