Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
14 endpoints, full validation, auth, pagination, rate limiting, and 67 passing tests. Three hours. AI API development is a different game now.

Last week I built a complete REST API with 14 endpoints, full input validation, JWT authentication, cursor-based pagination, per-route rate limiting, comprehensive error handling, and API documentation that actually stays current.
It took three hours.
Every TypeScript type was correct. All 67 tests passed on the first run. The documentation matched the implementation because they were generated from the same schema.
I have been building APIs for eight years. The fastest I ever shipped something comparable manually was four days. And I cut corners on the documentation.
API development is uniquely suited to AI assistance. The reason is not magic. APIs have well-defined contracts, established standards, and measurable quality criteria. There is a right way to handle pagination. A right way to format error responses. A right way to structure middleware. AI agents have internalized all of these patterns through training on enormous amounts of well-written API code.
The workflow starts before any implementation code is written. You define your data models and the operations you need at a high level. The AI generates the complete schema definition.
I describe my domain in plain language: "Users have a name, email, avatar URL, role (admin, member, viewer), and creation timestamp. Workspaces belong to a user, have a name, slug, and plan tier. Workspace members join via an invitation with a role. Projects belong to a workspace and have a title, description, status (draft, active, archived), owner, and array of assignee IDs."
From this description, the agent generates a complete OpenAPI 3.1 specification with proper data types, required fields, nullable fields, relationship definitions, pagination patterns, and documented error responses. It applies REST best practices automatically: consistent naming conventions, appropriate HTTP methods for each operation, and idempotent endpoints where applicable.
The schema becomes the contract. Implementation, documentation, and tests all flow from it. This is not just a workflow preference. It is the difference between an API that is self-consistent and one that accumulates inconsistencies over time.
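To make this concrete, here is an illustrative fragment of the kind of spec the agent might emit for the User resource described above. The field names come from the plain-language description; the exact generated output will vary.

```yaml
# Hypothetical excerpt of a generated OpenAPI 3.1 spec
components:
  schemas:
    User:
      type: object
      required: [id, email, name, role, createdAt, updatedAt]
      properties:
        id: { type: string }
        email: { type: string, format: email }
        name: { type: string, minLength: 1, maxLength: 100 }
        avatarUrl: { type: [string, "null"], format: uri }
        role: { type: string, enum: [admin, member, viewer] }
        createdAt: { type: string, format: date-time }
        updatedAt: { type: string, format: date-time }
```

Note that OpenAPI 3.1 expresses nullability with JSON Schema type arrays (`[string, "null"]`) rather than the older `nullable: true` keyword.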
```typescript
// Generated from schema description
import { z } from 'zod';

export const UserSchema = z.object({
  id: z.string().cuid(),
  email: z.string().email(),
  name: z.string().min(1).max(100),
  avatarUrl: z.string().url().nullable(),
  role: z.enum(['admin', 'member', 'viewer']),
  createdAt: z.string().datetime(),
  updatedAt: z.string().datetime(),
});

export const CreateUserSchema = UserSchema.omit({
  id: true,
  createdAt: true,
  updatedAt: true,
}).extend({
  password: z.string().min(8).max(100),
});

export const UpdateUserSchema = CreateUserSchema.partial().omit({ password: true });

export type User = z.infer<typeof UserSchema>;
export type CreateUser = z.infer<typeof CreateUserSchema>;
export type UpdateUser = z.infer<typeof UpdateUserSchema>;
```

Zod schemas serve triple duty: runtime validation, TypeScript type inference, and documentation generation. The agent knows this and uses it.
From the schema, the implementation flows in layers. Each layer has a specific responsibility, and the agent keeps them clean.
Each route handler does exactly three things: validate the incoming request, call the service layer, format the response. No database calls directly in handlers. No business logic in handlers.
```typescript
// src/routes/workspaces.ts
import { Router } from 'express';
import { WorkspaceService } from '../services/workspace-service';
import { authenticate, authorize } from '../middleware/auth';
import { rateLimit } from '../middleware/rate-limit';
import { validateBody, validateParams, validateQuery } from '../middleware/validate';
import { CreateWorkspaceSchema, WorkspaceParamsSchema, ListWorkspacesQuerySchema } from '../schemas/workspace';

const router = Router();
const workspaceService = new WorkspaceService();

// GET /workspaces - List user's workspaces
router.get(
  '/',
  authenticate,
  rateLimit({ windowMs: 60_000, max: 100 }),
  validateQuery(ListWorkspacesQuerySchema),
  async (req, res, next) => {
    try {
      const result = await workspaceService.listForUser({
        userId: req.user.id,
        cursor: req.query.cursor,
        limit: req.query.limit ?? 20,
      });
      res.json(result);
    } catch (err) {
      next(err);
    }
  }
);

// POST /workspaces - Create workspace
router.post(
  '/',
  authenticate,
  rateLimit({ windowMs: 3_600_000, max: 10 }),
  validateBody(CreateWorkspaceSchema),
  async (req, res, next) => {
    try {
      const workspace = await workspaceService.create({
        ...req.body,
        ownerId: req.user.id,
      });
      res.status(201).json(workspace);
    } catch (err) {
      next(err);
    }
  }
);

export { router as workspacesRouter };
```

Every route has rate limiting. Every state-changing route has validation. Authentication wraps everything. The pattern is consistent across all 14 endpoints.
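The article treats the `rateLimit` middleware as a given. Its core is small enough to sketch: a fixed-window counter keyed by caller. The version below is a minimal, in-memory, single-process illustration (not the article's actual middleware); production setups usually back this with Redis so limits survive restarts and apply across instances.

```typescript
// Minimal fixed-window rate limiter (illustrative sketch, not the article's
// actual middleware). Keyed by an arbitrary identifier such as user ID or IP.
type WindowState = { count: number; windowStart: number };

export function createRateLimiter(windowMs: number, max: number) {
  const buckets = new Map<string, WindowState>();
  return function isAllowed(key: string, now = Date.now()): boolean {
    const state = buckets.get(key);
    if (!state || now - state.windowStart >= windowMs) {
      // The window has expired (or never existed): start a fresh one.
      buckets.set(key, { count: 1, windowStart: now });
      return true;
    }
    state.count += 1;
    return state.count <= max;
  };
}
```

Wrapping this as Express middleware just means calling `isAllowed(req.ip)` (or the authenticated user ID) and responding with 429 when it returns false.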
Business logic lives in services. Services do not know about HTTP. They take typed inputs and return typed outputs.
```typescript
// src/services/workspace-service.ts
import { db } from '../lib/db';
import { WorkspaceConflictError, WorkspaceNotFoundError } from '../errors';
import type { CreateWorkspace, Workspace, PaginatedResult } from '../types';

export class WorkspaceService {
  async create(data: CreateWorkspace & { ownerId: string }): Promise<Workspace> {
    const existing = await db.workspace.findUnique({
      where: { slug: data.slug },
    });
    if (existing) {
      throw new WorkspaceConflictError(
        `A workspace with slug "${data.slug}" already exists`
      );
    }
    return db.workspace.create({
      data: {
        ...data,
        members: {
          create: {
            userId: data.ownerId,
            role: 'admin',
          },
        },
      },
      include: { members: true },
    });
  }

  async listForUser({
    userId,
    cursor,
    limit,
  }: {
    userId: string;
    cursor?: string;
    limit: number;
  }): Promise<PaginatedResult<Workspace>> {
    // Fetch one extra row to detect whether another page exists.
    const items = await db.workspace.findMany({
      where: {
        members: { some: { userId } },
      },
      take: limit + 1,
      ...(cursor && {
        cursor: { id: cursor },
        skip: 1,
      }),
      orderBy: { createdAt: 'desc' },
    });
    const hasMore = items.length > limit;
    const data = hasMore ? items.slice(0, -1) : items;
    return {
      data,
      nextCursor: hasMore ? data[data.length - 1].id : null,
    };
  }
}
```

Cursor-based pagination implemented correctly. The fence-post error avoided. Consistent return type across all list endpoints.
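Consuming that `{ data, nextCursor }` envelope from the client side is mechanical: pass `nextCursor` back until it comes back null. A sketch against a hypothetical `fetchPage` function with the same shape (the function name is illustrative, not from the article's codebase):

```typescript
// Drain a cursor-paginated endpoint into a single array.
// `fetchPage` is any function returning the { data, nextCursor } envelope
// used by the list endpoints above.
interface Page<T> {
  data: T[];
  nextCursor: string | null;
}

export async function fetchAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    // A null cursor signals the final page.
    cursor = page.nextCursor ?? undefined;
  } while (cursor);
  return all;
}
```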
Good API error handling is invisible to satisfied users and critical to frustrated ones. Every error needs to be caught, classified, and returned in a consistent format.
The agent implements a typed error hierarchy:
```typescript
// src/errors/index.ts
import { z } from 'zod';
import type { Request, Response, NextFunction } from 'express';

export class AppError extends Error {
  constructor(
    message: string,
    public statusCode: number,
    public code: string,
  ) {
    super(message);
    this.name = 'AppError';
  }
}

export class NotFoundError extends AppError {
  constructor(resource: string, id: string) {
    super(`${resource} with id "${id}" not found`, 404, 'NOT_FOUND');
  }
}

export class ConflictError extends AppError {
  constructor(message: string) {
    super(message, 409, 'CONFLICT');
  }
}

export class AuthorizationError extends AppError {
  constructor(action: string) {
    super(`You do not have permission to ${action}`, 403, 'FORBIDDEN');
  }
}

// Global error handler middleware (the 4-arg signature is what marks
// this as an error handler for Express)
export function errorHandler(
  err: unknown,
  req: Request,
  res: Response,
  _next: NextFunction
) {
  if (err instanceof z.ZodError) {
    return res.status(400).json({
      error: {
        code: 'VALIDATION_ERROR',
        message: 'Invalid request data',
        details: err.errors.map(e => ({
          field: e.path.join('.'),
          message: e.message,
        })),
      },
    });
  }
  if (err instanceof AppError) {
    return res.status(err.statusCode).json({
      error: {
        code: err.code,
        message: err.message,
      },
    });
  }
  // Unexpected errors: log but don't leak details
  console.error('Unexpected error:', err);
  return res.status(500).json({
    error: {
      code: 'INTERNAL_ERROR',
      message: 'An unexpected error occurred',
    },
  });
}
```

Consistent error format across all endpoints. Validation errors include field-level details. Application errors use typed codes that clients can handle programmatically. Unexpected errors are logged without leaking implementation details to clients.
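Because the error codes are stable strings rather than prose, client code can branch on them without parsing messages. A hypothetical client-side handler (the function name and wording are illustrative, not part of the article's codebase):

```typescript
// Map the API's typed error codes to client-facing behavior.
// The envelope shape matches the error handler's: { error: { code, message } }.
interface ApiErrorBody {
  error: { code: string; message: string };
}

export function userFacingMessage(body: ApiErrorBody): string {
  switch (body.error.code) {
    case 'VALIDATION_ERROR':
      return 'Please check the highlighted fields and try again.';
    case 'CONFLICT':
      // Conflict messages are safe to surface, e.g. "slug already exists".
      return body.error.message;
    case 'FORBIDDEN':
      return 'You do not have permission to do that.';
    case 'NOT_FOUND':
      return 'That resource no longer exists.';
    default:
      // Covers INTERNAL_ERROR and any codes added later.
      return 'Something went wrong. Please try again.';
  }
}
```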
This is the part that surprises people most. Tests are not an afterthought. They are generated from the same schema as the implementation, simultaneously.
Every endpoint gets three test types:
Unit tests for service layer logic. These test business rules in isolation, without HTTP or database dependencies.
Integration tests verify each endpoint's behavior with real HTTP requests against a test database. Authentication behavior. Error responses. Pagination. Rate limiting.
Contract tests verify the implementation matches the OpenAPI schema. Any response that violates the defined schema fails immediately.
```typescript
// Integration test for workspace creation
import request from 'supertest';
// `app` and the helpers createTestUser, generateTestToken, and
// createTestWorkspace come from the test setup (omitted here).

describe('POST /workspaces', () => {
  it('creates a workspace and returns 201', async () => {
    const user = await createTestUser();
    const token = generateTestToken(user.id);
    const res = await request(app)
      .post('/workspaces')
      .set('Authorization', `Bearer ${token}`)
      .send({ name: 'My Workspace', slug: 'my-workspace', plan: 'free' });
    expect(res.status).toBe(201);
    expect(res.body).toMatchObject({
      id: expect.any(String),
      name: 'My Workspace',
      slug: 'my-workspace',
    });
  });

  it('returns 409 when slug is already taken', async () => {
    const user = await createTestUser();
    const token = generateTestToken(user.id);
    await createTestWorkspace({ slug: 'taken-slug' });
    const res = await request(app)
      .post('/workspaces')
      .set('Authorization', `Bearer ${token}`)
      .send({ name: 'My Workspace', slug: 'taken-slug', plan: 'free' });
    expect(res.status).toBe(409);
    expect(res.body.error.code).toBe('CONFLICT');
  });

  it('returns 400 for invalid slug format', async () => {
    const user = await createTestUser();
    const token = generateTestToken(user.id);
    const res = await request(app)
      .post('/workspaces')
      .set('Authorization', `Bearer ${token}`)
      .send({ name: 'My Workspace', slug: 'Invalid Slug!', plan: 'free' });
    expect(res.status).toBe(400);
    expect(res.body.error.code).toBe('VALIDATION_ERROR');
    expect(res.body.error.details).toContainEqual(
      expect.objectContaining({ field: 'slug' })
    );
  });

  it('returns 401 without authentication', async () => {
    const res = await request(app)
      .post('/workspaces')
      .send({ name: 'My Workspace', slug: 'my-workspace', plan: 'free' });
    expect(res.status).toBe(401);
  });
});
```

Happy path. Conflict case. Validation failure. Authentication missing. Four tests per endpoint, generated automatically. Multiply by 14 endpoints. That is 56 tests covering your most critical paths, written before you push to production.
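The contract tests mentioned earlier can be as simple as running every response body through the same Zod schemas the implementation uses (e.g. `UserSchema.safeParse(res.body)`). To show the idea dependency-free, here is a deliberately naive shape checker; real contract tests should reuse the generated schemas rather than a hand-rolled check like this:

```typescript
// Naive contract check: verify a response body has exactly the expected
// top-level fields with the expected primitive types.
type FieldSpec = Record<string, 'string' | 'number' | 'boolean'>;

export function matchesContract(body: unknown, spec: FieldSpec): boolean {
  if (typeof body !== 'object' || body === null) return false;
  const obj = body as Record<string, unknown>;
  // Every documented field must exist with the right type...
  for (const [field, type] of Object.entries(spec)) {
    if (typeof obj[field] !== type) return false;
  }
  // ...and no undocumented fields may leak into the response.
  return Object.keys(obj).every(k => k in spec);
}
```

The second check is the one hand-written tests usually skip: it catches the day someone accidentally serializes a password hash or internal flag into a response.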
The API documentation problem is universal. You write the docs. The implementation changes. The docs do not. Six months later the docs are actively misleading.
AI-generated APIs solve this structurally. The documentation is generated from the schema, the same source as the implementation. When you change the schema, regenerate both. They stay synchronized.
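The "regenerate both" step is just a function from schema to text. A sketch of the idea, with an illustrative `EndpointDoc` shape standing in for whatever the real generator reads from the OpenAPI spec:

```typescript
// Sketch: render endpoint documentation from a schema-like description,
// so the docs regenerate whenever the schema changes. The EndpointDoc
// shape is illustrative, not the article's actual data model.
interface EndpointDoc {
  method: string;
  path: string;
  summary: string;
  errorCodes: string[];
}

export function renderDocs(endpoints: EndpointDoc[]): string {
  return endpoints
    .map(e =>
      [
        `### ${e.method} ${e.path}`,
        e.summary,
        `Errors: ${e.errorCodes.join(', ')}`,
      ].join('\n'),
    )
    .join('\n\n');
}
```

Because this runs in the build, stale docs become a build failure instead of a six-month discovery.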
But good documentation is more than a schema dump. Each endpoint needs example requests and responses, an explanation of every error it can return, and its authentication and rate-limit requirements. The agent generates all of this from the schema and service layer code, inferring documentation from the code itself rather than requiring you to maintain it separately.
Every API needs authentication. Most API authentication bugs come from inconsistent implementation: some endpoints protected, some not, with different validation logic applied in different places.
AI agents implement authentication as middleware that applies to every route by default. Opting out requires an explicit public decorator. This inversion makes missing authentication a conscious choice rather than an oversight.
For the authentication patterns themselves, Clerk handles the heavy lifting. JWT validation, session management, token rotation. The API middleware calls Clerk's SDK to validate tokens, extracts the user ID, and attaches it to the request context. Every subsequent layer trusts req.user.id as the authenticated user.
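The provider SDK details are out of scope here, but the "secure by default" pattern itself is generic. In the sketch below, `verify` is a placeholder for whatever your provider exposes (e.g. Clerk's token verification); it is injected so the gate stays testable. The function names are illustrative assumptions, not the article's actual middleware:

```typescript
// "Secure by default" auth gate: every route requires a verified token
// unless its path appears on an explicit public allowlist. `verify` stands
// in for a real provider SDK call and returns a user ID or null.
type Verify = (token: string) => string | null;

export function createAuthGate(verify: Verify, publicPaths: Set<string>) {
  return function gate(
    path: string,
    authHeader?: string,
  ): { ok: boolean; userId?: string } {
    // Opting out of auth is an explicit, reviewable allowlist entry.
    if (publicPaths.has(path)) return { ok: true };
    const token = authHeader?.replace(/^Bearer /, '');
    if (!token) return { ok: false };
    const userId = verify(token);
    return userId ? { ok: true, userId } : { ok: false };
  };
}
```

The inversion is the point: forgetting to register a route leaves it protected, not exposed.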
| Task | Manual (Experienced Developer) | AI-Assisted |
|---|---|---|
| Schema design | 4 hours | 30 minutes |
| Route handlers (14 endpoints) | 8 hours | Generated |
| Service layer | 6 hours | Generated |
| Error handling | 3 hours | Generated |
| Input validation | 2 hours | Generated |
| Tests (67 tests) | 8 hours | Generated |
| Documentation | 4 hours | Generated |
| Total | 35 hours | 3 hours |
The three hours is real time: designing the schema, reviewing and adjusting the generated code, writing the few business logic nuances the agent could not infer, running the tests, and deploying to staging.
The quality of the AI-generated code is often higher than manual output. Not because AI is smarter. Because it does not take shortcuts under deadline pressure. Every endpoint has rate limiting. Every input is validated. Every error case is handled. Consistently.
Q: How do AI agents build APIs?
AI agents build APIs through a schema-first approach: you describe your data models and operations, the agent generates OpenAPI specifications, implements route handlers with validation, creates service layers with business logic, writes comprehensive tests, and generates documentation — all from the schema definition.
Q: How fast can AI agents build a production API?
AI agents can build a complete REST API with 14+ endpoints, input validation, authentication, pagination, rate limiting, error handling, tests, and documentation in approximately 3 hours. The same scope traditionally takes 35+ hours of experienced developer time.
Q: What makes AI agents particularly good at API development?
APIs have well-defined contracts, established standards, and measurable quality criteria — exactly what AI agents excel at. There are right ways to handle pagination, error responses, and middleware that agents have internalized from training on extensive API code.
