Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
Testing setups that make developers slow are abandoned. Here's the fast, modern testing stack for Next.js apps that actually gets used and maintained.

Testing setups fail in one of two ways. Either you don't have one (and you're shipping bugs), or you have one that's so slow and painful that everyone works around it (and you're still shipping bugs).
The goal is a testing setup that developers actually use because it's fast, runs in the right places, and catches real problems without requiring heroic effort to maintain.
Here's the stack I've standardized on for Next.js applications.
| Layer | Tool | Why |
|---|---|---|
| Unit and integration tests | Vitest | Much faster than Jest, same API, native ESM |
| UI component tests | Testing Library + Vitest | Test behavior, not implementation |
| E2E tests | Playwright | The industry standard, great debugging tools |
| AI-generated tests | Claude + custom scripts | For coverage gaps and regression tests |
| CI integration | GitHub Actions | Runs on every PR, blocks merge on failure |
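The "AI-generated tests" row is the least standardized layer, so here is a minimal sketch of what the glue script can look like: pull under-covered files out of Vitest's JSON coverage output and draft a Claude prompt for regression tests. The coverage shape, threshold, and prompt wording are illustrative assumptions, and the actual `@anthropic-ai/sdk` call is left as a comment rather than presented as a finished tool.

```typescript
// scripts/gen-tests.ts — sketch: coverage gaps in, Claude prompt out.
// Assumes Vitest ran with the "json" coverage reporter (see vitest.config.ts).

interface FileCoverage {
  path: string;
  lines: { pct: number };
}

// Pick the files that fall below the line-coverage threshold.
export function findCoverageGaps(files: FileCoverage[], minLinesPct = 70): string[] {
  return files.filter((f) => f.lines.pct < minLinesPct).map((f) => f.path);
}

// Build a prompt asking for regression tests for the uncovered files.
export function buildPrompt(gapFiles: string[]): string {
  return [
    "Write Vitest tests for the following under-covered files.",
    "Test behavior, not implementation details.",
    ...gapFiles.map((p) => `- ${p}`),
  ].join("\n");
}

// The real call would go through @anthropic-ai/sdk, roughly:
// const msg = await client.messages.create({ model, max_tokens: 4096,
//   messages: [{ role: "user", content: buildPrompt(gaps) }] });

const gaps = findCoverageGaps([
  { path: "src/lib/billing.ts", lines: { pct: 42 } },
  { path: "src/lib/auth.ts", lines: { pct: 91 } },
]);
console.log(gaps); // → ["src/lib/billing.ts"]
```

The point of keeping this as a script rather than a magic step: generated tests go through the same PR review as any other code.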
# Install
npm install -D vitest @vitejs/plugin-react @testing-library/react @testing-library/jest-dom @testing-library/user-event
# For Next.js specifically
npm install -D jsdom

// vitest.config.ts
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";
export default defineConfig({
plugins: [react()],
test: {
environment: "jsdom",
globals: true,
setupFiles: ["./tests/setup.ts"],
include: ["**/*.{test,spec}.{ts,tsx}"],
exclude: ["node_modules", "e2e/**", ".next/**"],
coverage: {
provider: "v8",
reporter: ["text", "html", "json"],
include: ["src/**"],
exclude: ["src/**/*.stories.*", "src/**/*.d.ts"],
thresholds: {
lines: 70,
functions: 70,
branches: 60,
},
},
},
resolve: {
alias: {
"@": path.resolve(__dirname, "./src"),
},
},
});

// tests/setup.ts
import "@testing-library/jest-dom";
import { cleanup } from "@testing-library/react";
import { afterEach, vi } from "vitest";
afterEach(() => {
cleanup();
});
// Mock Next.js router
vi.mock("next/navigation", () => ({
useRouter: () => ({
push: vi.fn(),
replace: vi.fn(),
back: vi.fn(),
}),
usePathname: () => "/",
useSearchParams: () => new URLSearchParams(),
}));
// Mock Next.js Image with a plain <img> element
vi.mock("next/image", async () => {
  const React = await import("react");
  return {
    default: (props: Record<string, unknown>) =>
      // eslint-disable-next-line @next/next/no-img-element
      React.createElement("img", { ...props, alt: props.alt as string }),
  };
});

The Testing Library philosophy: test what users see and do, not implementation details.
// components/ui/__tests__/stat-card.test.tsx
import { render, screen } from "@testing-library/react";
import { describe, it, expect } from "vitest";
import { StatCard } from "../stat-card";
import { Users } from "lucide-react";
describe("StatCard", () => {
it("renders title and value", () => {
render(<StatCard title="Total Users" value={1234} />);
expect(screen.getByText("Total Users")).toBeInTheDocument();
expect(screen.getByText("1234")).toBeInTheDocument();
});
it("shows positive trend in green", () => {
render(
<StatCard
title="Revenue"
value="$1,000"
trend={{ value: 12.5, label: "vs last month", direction: "up" }}
/>
);
const trendElement = screen.getByText("+12.5%");
expect(trendElement).toHaveClass("text-green-600");
});
it("shows negative trend in red", () => {
render(
<StatCard
title="Churn"
value="5%"
trend={{ value: 2.1, label: "vs last month", direction: "down" }}
/>
);
const trendElement = screen.getByText("-2.1%");
expect(trendElement).toHaveClass("text-red-600");
});
it("renders icon when provided", () => {
render(<StatCard title="Users" value={100} icon={Users} />);
// Lucide icons render as SVG
expect(document.querySelector("svg")).toBeInTheDocument();
});
it("renders without optional props", () => {
// This catches missing default prop handling
expect(() => render(<StatCard title="Test" value={0} />)).not.toThrow();
});
});

// app/api/users/__tests__/route.test.ts
import { describe, it, expect, vi, beforeEach } from "vitest";
import { GET, POST } from "../route";
import { NextRequest } from "next/server";
// Mock database calls
vi.mock("@/lib/db", () => ({
db: {
user: {
findMany: vi.fn(),
create: vi.fn(),
},
},
}));
import { db } from "@/lib/db";
describe("GET /api/users", () => {
beforeEach(() => {
vi.clearAllMocks();
});
it("returns users array", async () => {
const mockUsers = [
{ id: "1", name: "Alice", email: "alice@example.com" },
{ id: "2", name: "Bob", email: "bob@example.com" },
];
vi.mocked(db.user.findMany).mockResolvedValue(mockUsers as any);
const request = new NextRequest("http://localhost:3000/api/users");
const response = await GET(request);
const data = await response.json();
expect(response.status).toBe(200);
expect(data).toEqual(mockUsers);
});
it("returns 500 on database error", async () => {
vi.mocked(db.user.findMany).mockRejectedValue(new Error("DB connection failed"));
const request = new NextRequest("http://localhost:3000/api/users");
const response = await GET(request);
expect(response.status).toBe(500);
});
});

npm install -D @playwright/test
npx playwright install

// playwright.config.ts
import { defineConfig, devices } from "@playwright/test";
export default defineConfig({
testDir: "./e2e",
fullyParallel: true,
forbidOnly: !!process.env.CI,
retries: process.env.CI ? 2 : 0,
workers: process.env.CI ? 1 : undefined,
reporter: process.env.CI ? "github" : "html",
use: {
baseURL: "http://localhost:3000",
trace: "on-first-retry",
screenshot: "only-on-failure",
},
projects: [
{ name: "chromium", use: { ...devices["Desktop Chrome"] } },
{ name: "Mobile Safari", use: { ...devices["iPhone 12"] } },
],
webServer: {
command: "npm run dev",
url: "http://localhost:3000",
reuseExistingServer: !process.env.CI,
},
});

// e2e/auth.spec.ts
import { test, expect } from "@playwright/test";
test.describe("Authentication", () => {
test("user can sign up with valid credentials", async ({ page }) => {
await page.goto("/sign-up");
await page.fill('[name="firstName"]', "Test");
await page.fill('[name="lastName"]', "User");
await page.fill('[name="email"]', `test-${Date.now()}@example.com`);
await page.fill('[name="password"]', "SecurePassword123!");
await page.click('[type="submit"]');
// Should redirect to onboarding or dashboard
await expect(page).toHaveURL(/\/(onboarding|dashboard)/);
});
test("shows error for invalid email", async ({ page }) => {
await page.goto("/sign-up");
await page.fill('[name="email"]', "not-an-email");
await page.fill('[name="password"]', "password123");
await page.click('[type="submit"]');
await expect(page.getByText(/valid email/i)).toBeVisible();
});
test("user can sign in and out", async ({ page }) => {
await page.goto("/sign-in");
await page.fill('[name="email"]', process.env.TEST_USER_EMAIL!);
await page.fill('[name="password"]', process.env.TEST_USER_PASSWORD!);
await page.click('[type="submit"]');
await expect(page).toHaveURL("/dashboard");
// Sign out
await page.click('[data-testid="user-menu"]');
await page.click('text=Sign out');
await expect(page).toHaveURL("/");
});
});

# .github/workflows/test.yml
name: Tests
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- run: npm ci
- run: npm run test:unit
env:
NODE_ENV: test
e2e-tests:
runs-on: ubuntu-latest
needs: unit-tests
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- run: npm ci
- run: npx playwright install --with-deps chromium
- run: npm run test:e2e
env:
NEXT_PUBLIC_CONVEX_URL: ${{ secrets.TEST_CONVEX_URL }}
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: ${{ secrets.TEST_CLERK_KEY }}
TEST_USER_EMAIL: ${{ secrets.TEST_USER_EMAIL }}
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
- uses: actions/upload-artifact@v4
if: failure()
with:
name: playwright-report
path: playwright-report/
retention-days: 7

Not everything needs equal test coverage. Prioritize:
- Unit tests: high priority
- Integration tests: high priority
- E2E tests: selective
- Everything else: skip for now
A focused test suite that runs in 3 minutes beats a comprehensive one that runs in 30 minutes and gets disabled in CI.
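The CI workflow above assumes `test:unit` and `test:e2e` scripts exist. One possible wiring in package.json — the script names are whatever your CI calls, so treat these as a starting point:

```json
{
  "scripts": {
    "test:unit": "vitest run --coverage",
    "test:watch": "vitest",
    "test:e2e": "playwright test",
    "test:e2e:ui": "playwright test --ui"
  }
}
```

Keeping `vitest` (watch mode) separate from `vitest run` matters in CI: the bare command never exits.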
Q: How do you set up automated testing for AI applications?
Set up automated testing with Vitest for unit tests, Playwright for end-to-end tests, and custom evaluation suites for AI output quality. Configure CI/CD to run all tests on every pull request. For AI-specific tests, define evaluation criteria (correctness, relevance, safety) rather than exact output matching.
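"Evaluation criteria rather than exact output matching" can be as simple as a rubric function: score an answer against a few boolean checks and assert a minimum score instead of comparing strings. The criteria below (required terms, a length cap, banned phrases) are illustrative assumptions, not a fixed schema.

```typescript
// A tiny rubric-style evaluator for AI output: criteria, not string equality.
interface Rubric {
  mustMention: string[]; // terms the answer should contain
  maxLength: number;     // guard against rambling output
  banned: string[];      // phrases that indicate unsafe or off-topic output
}

export function scoreAnswer(answer: string, rubric: Rubric): number {
  const text = answer.toLowerCase();
  const checks = [
    rubric.mustMention.every((t) => text.includes(t.toLowerCase())),
    answer.length <= rubric.maxLength,
    rubric.banned.every((b) => !text.includes(b.toLowerCase())),
  ];
  return checks.filter(Boolean).length / checks.length; // 0..1
}

// In a Vitest spec you would assert a threshold rather than an exact reply:
// expect(scoreAnswer(aiReply, rubric)).toBeGreaterThanOrEqual(0.66);

const rubric: Rubric = { mustMention: ["refund"], maxLength: 500, banned: ["guarantee"] };
console.log(scoreAnswer("We can process your refund within 5 days.", rubric)); // → 1
```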
Q: What testing framework should I use in 2026?
Use Vitest for unit and integration tests (fast, TypeScript-native, Vite-compatible), Playwright for end-to-end browser tests (reliable, cross-browser, auto-waiting), and axe-core for accessibility tests. This combination covers all testing layers and integrates well with AI-assisted development workflows.
Q: How much test coverage should AI applications have?
Aim for 80-95% test coverage for business logic, 100% coverage for security-critical paths, and comprehensive E2E tests for critical user flows. AI agents achieve these levels naturally because they generate tests alongside features. The marginal cost of high coverage is low when AI handles test generation.
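If you want these targets enforced outside Vitest's built-in `thresholds` — say, in a custom CI step that only gates business-logic paths — the check is a few lines against the JSON coverage totals. The shape below mirrors what the v8/istanbul summary reporters emit, but verify the exact keys against your own coverage-summary.json rather than taking this sketch's word for it.

```typescript
// Gate a CI step on coverage totals (e.g. from coverage/coverage-summary.json).
interface Totals {
  lines: { pct: number };
  functions: { pct: number };
  branches: { pct: number };
}

export function meetsThresholds(
  totals: Totals,
  min = { lines: 80, functions: 80, branches: 60 }
): boolean {
  return (
    totals.lines.pct >= min.lines &&
    totals.functions.pct >= min.functions &&
    totals.branches.pct >= min.branches
  );
}

// Example: fail the step when coverage dips below the targets.
const totals: Totals = { lines: { pct: 86 }, functions: { pct: 82 }, branches: { pct: 61 } };
if (!meetsThresholds(totals)) {
  process.exitCode = 1; // CI treats a non-zero exit as a failed check
}
```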