An AI agent identified, prioritized, and fixed seven performance bottlenecks in one hour. Manual optimization would have taken a week. Here's the process.

Our Largest Contentful Paint was 4.2 seconds. Google flagged it. Users bounced. I knew it needed fixing, but I kept pushing it to the next sprint.
Then I handed it to an AI agent.
One hour later, LCP was 1.1 seconds. Seven bottlenecks identified, prioritized, and fixed. Zero regressions. I would have spent a week reaching the same outcome manually. Not because the fixes were complicated, but because finding which fixes mattered required correlating dozens of metrics simultaneously, something humans do slowly and AI does in seconds.
That session changed how I think about performance work entirely. It is not a specialized skill you study for years. It is an analysis problem with well-established patterns. Exactly what AI agents do best.
Open Chrome DevTools. Stare at the network waterfall. Try to figure out why your page loads slowly.
The problem space is enormous. Bundle size. Render-blocking resources. Unoptimized images. Slow API responses. Inefficient database queries. Missing caching. Client-side JavaScript doing too much too early. CDN misconfiguration. Third-party scripts blocking the main thread.
Every one of these could be the primary bottleneck. Or it could be a combination. And the priorities shift based on which device your users actually use, which network conditions they're on, and which pages they visit most.
Traditional approach: open Lighthouse, get a score, squint at the recommendations, make educated guesses about which ones matter most.
AI performance agents take a fundamentally different approach. They analyze the entire picture simultaneously. Bundle sizes, render timing, network waterfalls, memory usage, Core Web Vitals across device profiles. They identify the specific bottlenecks with the highest impact. Then they prioritize by expected improvement, not by how annoying the problem is to fix.
The critical insight: you do not need to fix everything. You need to fix the three things that account for 80% of the problem. AI finds those three things immediately.
This is the principle of asymmetric optimization. Most performance gains come from a small number of high-impact changes. The long tail of micro-optimizations produces diminishing returns. Human developers get seduced by the long tail because it feels productive. AI agents skip straight to the 80-20 wins.
I always start with bundle analysis. It produces the fastest improvement for the least risk. And it is where AI agents are absurdly effective.
Here is what a typical bundle analysis session surfaces:
Dead code detection. You imported a utility library for a single function. The entire library, all 40KB of it, ships to every user. The AI identifies it, suggests a 2KB alternative or a direct implementation, and estimates the impact. That single change often drops load time by 200-400ms.
Code splitting boundaries. Your settings page does not need the charting library. Your landing page does not need the form validation package. The agent maps which dependencies are imported on which routes and identifies splitting opportunities that dramatically reduce initial bundle weight.
Dynamic imports for heavy dependencies. That markdown renderer on your blog page? It loads on every page, including your landing page that contains no markdown. The AI flags these and implements import() calls that defer loading until the dependency is actually needed.
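The deferral pattern is simple to sketch. The helper below memoizes a loader so the heavy dependency is fetched at most once, on first use; the stand-in renderer is hypothetical, since in real code the loader would be a dynamic `import()` of the actual package:

```typescript
// A minimal lazy-loader sketch: the loader runs once, on first use,
// and the resulting promise is memoized so repeat calls are free.
function lazy<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

// Hypothetical heavy dependency: in real code this would be
// something like `() => import('marked')`.
let loadCount = 0;
const loadRenderer = lazy(async () => {
  loadCount++;
  return { render: (md: string) => md.toUpperCase() }; // stand-in renderer
});

async function renderPost(markdown: string): Promise<string> {
  const renderer = await loadRenderer(); // loads only on first call
  return renderer.render(markdown);
}
```

Because the promise itself is cached, concurrent first calls share one load rather than racing.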
Tree-shaking failures. Some libraries are not properly tree-shakable despite claims to the contrary. The agent identifies these by analyzing bundle output, not import statements, and suggests alternatives.
Duplicate dependencies. Two packages import different versions of the same underlying library. Both ship. The AI catches version conflicts and deduplication opportunities.
I have seen a 680KB bundle drop to 280KB through bundle analysis alone. That is a 59% reduction in initial download size. On a 4G connection, that is roughly the difference between a page that feels fast and a page that feels broken.
```typescript
// Before: entire library imported for one function
import { format } from 'date-fns';

// After: direct import, tree-shakable
import { format } from 'date-fns/format';

// Even better: native when sufficient
const formatDate = (date: Date) =>
  new Intl.DateTimeFormat('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
  }).format(date);
```

The agent does not just find this pattern. It evaluates whether the native alternative covers the needed functionality, estimates the bundle savings, and implements the change with proper testing.
Images account for 60-70% of total page weight on most websites. They are the single largest optimization opportunity, and yet most applications ship images with zero optimization.
AI agents audit your entire image pipeline and implement comprehensive optimization:
Format modernization. WebP delivers 25-35% smaller file sizes than JPEG at equivalent visual quality. AVIF delivers 50% smaller. The agent converts your image pipeline and implements format negotiation so modern browsers receive AVIF while older browsers receive WebP or JPEG.
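Server-side, format negotiation comes down to inspecting the Accept request header. This helper is an illustrative sketch, not any particular framework's API:

```typescript
// Serve the smallest format the browser advertises support for.
// Modern browsers send Accept headers like "image/avif,image/webp,image/*".
type ImageFormat = 'avif' | 'webp' | 'jpeg';

function pickImageFormat(acceptHeader: string): ImageFormat {
  if (acceptHeader.includes('image/avif')) return 'avif';
  if (acceptHeader.includes('image/webp')) return 'webp';
  return 'jpeg'; // universal fallback
}
```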
Responsive images. Serving a 2400px image to a mobile device with a 400px viewport wastes every pixel beyond 400px. The agent implements srcset and sizes attributes so each device receives the appropriate resolution.
Lazy loading. Images below the fold do not need to load with the initial page. The agent adds loading="lazy" to all non-critical images and implements Intersection Observer for custom lazy loading where the browser default is insufficient.
Priority hints. The browser does not know which images matter most. Your hero image should load before the product thumbnails. The agent adds fetchpriority="high" to above-the-fold images and fetchpriority="low" to decorative images.
Dimension specification. Images without explicit dimensions cause layout shifts, destroying your Cumulative Layout Shift score. The agent adds width and height attributes to every image.
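The most mechanical piece of the responsive-image work, generating the srcset value, can be sketched in a few lines. The `w` query parameter is a hypothetical image-CDN convention; substitute whatever your pipeline uses:

```typescript
// Build a srcset string from a list of rendered widths, assuming an
// image CDN that resizes via a `w` query parameter (illustrative scheme).
function buildSrcset(baseUrl: string, widths: number[]): string {
  return widths.map((w) => `${baseUrl}?w=${w} ${w}w`).join(', ');
}

// Example: buildSrcset('/hero.jpg', [400, 800, 1600]) pairs each URL
// with its intrinsic width so the browser can choose the best match.
```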
| Image Optimization | Before | After | Savings |
|---|---|---|---|
| Format (JPEG to WebP) | 480KB | 320KB | 33% |
| Format (JPEG to AVIF) | 480KB | 240KB | 50% |
| Responsive sizing | 2400px hero | 800px mobile | 70% on mobile |
| Lazy loading | All at once | 60% deferred | 60% initial weight |
| Total impact on typical page | 2.8MB images | 680KB images | 76% reduction |
Frontend performance gets all the press, but the backend often contains the worst bottlenecks. And they are the hardest for humans to find manually.
N+1 query problems. A page loads a list of 20 items, then makes a separate database query for each item's author, category, and tag data. That is 61 queries for what should be 3. Response time scales linearly with list length.
AI agents find N+1 problems by analyzing the relationship between code patterns and query logs. They do not just identify the problem. They generate the corrected query using joins or batch loading with specific before/after performance estimates.
```typescript
// Before: N+1 pattern
const posts = await db.posts.findMany({ limit: 20 });
const postsWithAuthors = await Promise.all(
  posts.map(async (post) => ({
    ...post,
    author: await db.users.findUnique({ where: { id: post.authorId } }),
  }))
);
// 21 database queries

// After: single query with join
const postsWithAuthors = await db.posts.findMany({
  limit: 20,
  include: { author: true },
});
// 1 database query
```

Missing indexes. The most high-impact, lowest-risk optimization in any database. A query that scans 100,000 rows for a WHERE clause on an unindexed column takes 340ms. The same query with the right index takes 2ms.
AI agents analyze query patterns against your schema definition and generate specific index recommendations with estimated performance impact. Not vague suggestions like "add indexes where needed." Specific: "Add a composite index on (workspace_id, created_at DESC). This eliminates the full table scan on the dashboard query."
Caching opportunities. User profile data changes once a month. Why fetch it from the database on every request? Product catalog updates nightly. Why invalidate the cache every minute?
The agent maps your data access patterns and identifies caching opportunities by change frequency and access frequency. It recommends appropriate TTLs and cache invalidation strategies for each data type.
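A per-entry TTL is the core mechanism behind that recommendation. Here is a minimal sketch of a TTL cache with an injectable clock (the clock parameter exists purely to make expiry testable; names are illustrative):

```typescript
// A minimal TTL cache: each entry carries its own expiry, chosen per
// data type (long TTL for profile data, short TTL for volatile data).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  // Injectable clock so expiry behavior can be tested deterministically.
  constructor(private now: () => number = Date.now) {}

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

Production systems usually add explicit invalidation on write as well; lazy expiry alone is only safe for data that tolerates staleness up to the TTL.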
API response over-fetching. Your user list endpoint returns 50 fields per user when the UI renders 5. The agent identifies over-fetching, suggests GraphQL or field selection, and estimates bandwidth savings.
I had one API endpoint that returned 8KB per user for a list that showed avatars and names. The AI switched to returning 200 bytes per user for the list. Network time dropped from 1.2 seconds to 80ms on mobile.
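The fix for over-fetching does not require GraphQL; a field projection at the endpoint boundary is often enough. A sketch, with an illustrative record shape:

```typescript
// Trim a record to only the fields the list UI renders.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const key of keys) out[key] = obj[key];
  return out;
}

// Hypothetical user record: the list view needs 3 of these fields.
interface User {
  id: string;
  name: string;
  avatarUrl: string;
  bio: string;
  settings: object;
}

function toListItem(user: User) {
  return pick(user, ['id', 'name', 'avatarUrl']);
}
```

Doing the projection in the database query itself (a `select` clause) saves the transfer from the database too, not just the transfer to the client.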
Google's Core Web Vitals are not abstract measures. They correlate directly with user satisfaction and directly impact search ranking.
Largest Contentful Paint (LCP) measures when the main content becomes visible. Target: under 2.5 seconds. The biggest culprits are unoptimized hero images, render-blocking resources, and slow server response times.
First Input Delay (FID), now replaced by Interaction to Next Paint (INP), measures responsiveness to user interaction. Target: under 200ms for INP. The culprit is almost always JavaScript executing on the main thread when the user tries to interact.
Cumulative Layout Shift (CLS) measures visual stability. Target: under 0.1. Caused by images without dimensions, late-loading fonts, and dynamically injected content above existing content.
AI agents instrument your application to measure actual CWV scores, not just Lighthouse estimates. They collect field data from real users via the web-vitals library, identify the specific elements causing failures, and generate targeted fixes.
```typescript
import { onLCP, onINP } from 'web-vitals';

// Collect real user data
onLCP((metric) => {
  analytics.track('web_vital', {
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    element: metric.entries[0]?.element?.tagName,
  });
});

onINP((metric) => {
  analytics.track('web_vital', {
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    // INP includes the element that caused the delay
    interactionTarget: metric.entries[0]?.target,
  });
});
```

This field data is more valuable than any synthetic test. It shows the 95th percentile experience of your actual users on their actual devices and connections.
You have optimized the bundle. You have optimized the server. You have optimized the images. The page still feels sluggish during interaction.
Runtime performance is the final frontier. And it is where JavaScript patterns make an enormous difference.
Expensive re-renders. React components re-rendering on every parent state change, even when their props have not changed. AI agents audit component trees and add React.memo, useMemo, and useCallback where the profiler shows unnecessary re-computation.
Long tasks on the main thread. Any JavaScript execution over 50ms blocks user interaction. Data processing, sorting, filtering operations that should run in a Web Worker. The agent identifies these, wraps them in workers, and adds proper loading states.
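When a full Web Worker is overkill, the same goal can be reached by chunking the work and yielding between chunks. A sketch, using `setTimeout(0)` as the portable yield (in newer browsers `scheduler.yield()` is the purpose-built alternative); the chunk size is illustrative and should be tuned so each chunk stays under 50ms:

```typescript
// Break a long task into chunks and yield the main thread between them,
// so user input can be handled while processing continues.
async function processInChunks<T, R>(
  items: T[],
  fn: (item: T) => R,
  chunkSize = 500,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) results.push(fn(item));
    // Yield before the next chunk; pending input events run here.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```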
Memory leaks. Event listeners not cleaned up in useEffect. Interval timers that keep running after component unmount. Closure references that prevent garbage collection. These cause performance to degrade over time in long-running sessions.
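The discipline that prevents all three leak types is the same: every registration returns a matching cleanup, and all cleanups run on teardown. A framework-agnostic sketch (in React, `dispose()` would be called from the `useEffect` cleanup return):

```typescript
// Collect cleanups alongside registrations so teardown cannot forget one.
function createDisposer() {
  const cleanups: Array<() => void> = [];
  return {
    listen(target: EventTarget, type: string, handler: (event: Event) => void) {
      target.addEventListener(type, handler);
      cleanups.push(() => target.removeEventListener(type, handler));
    },
    interval(fn: () => void, ms: number) {
      const id = setInterval(fn, ms);
      cleanups.push(() => clearInterval(id));
    },
    dispose() {
      // splice(0) empties the list so dispose() is safe to call twice.
      for (const cleanup of cleanups.splice(0)) cleanup();
    },
  };
}
```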
Animation performance. CSS animations that trigger layout recalculation. The transform and opacity properties animate on the compositor thread without touching layout. Everything else causes jank. The agent audits animations and migrates offenders to GPU-accelerated properties.
One-time optimization is a temporary victory. Performance degrades constantly as new features, new dependencies, and new code paths accumulate. Without continuous monitoring, you are right back where you started in six months.
AI agents integrated into CI/CD prevent regressions by running performance checks on every deployment. They track bundle size changes, API response times, and Core Web Vitals scores across deployments.
When a pull request increases bundle size by more than 5KB, the CI pipeline flags it with a comment explaining what caused the increase and whether the impact is justified. When a commit introduces an N+1 query, the pipeline catches it before merge.
```jsonc
// Bundle size budget in your CI configuration
// Fails the build if any budget is exceeded
{
  "budgets": [
    {
      "type": "initial",
      "maximumWarning": "500kb",
      "maximumError": "1mb"
    },
    {
      "type": "anyComponentStyle",
      "maximumWarning": "6kb"
    }
  ]
}
```

You never have to do a big "performance sprint" again. Regressions get caught at their source, when the diff is small and the fix is obvious.
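The 5KB gate itself reduces to a pure comparison that the pipeline can run against the previous deployment's measurement. A sketch, with illustrative names and threshold:

```typescript
// Compare the new bundle size against the previous deployment and
// flag any increase over the allowed threshold for reviewer attention.
interface BudgetResult {
  ok: boolean;
  deltaKb: number;
  message: string;
}

function checkBundleDelta(prevKb: number, nextKb: number, maxIncreaseKb = 5): BudgetResult {
  const deltaKb = nextKb - prevKb;
  const ok = deltaKb <= maxIncreaseKb;
  return {
    ok,
    deltaKb,
    message: ok
      ? `Bundle delta ${deltaKb}KB is within budget`
      : `Bundle grew by ${deltaKb}KB (budget: ${maxIncreaseKb}KB); flag for review`,
  };
}
```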
Not all optimizations are equal. Here is the framework I use to prioritize:
| Priority | Optimization | Impact | Risk | Time to Implement |
|---|---|---|---|---|
| 1 | Bundle analysis + code splitting | Highest | Lowest | 1-2 hours |
| 2 | Image optimization pipeline | Large | Zero | 2-3 hours |
| 3 | N+1 query elimination | Large | Requires testing | 2-4 hours |
| 4 | Database indexing | Medium-Large | Low | 1-2 hours |
| 5 | Caching strategy | Medium | Requires design | 4-8 hours |
| 6 | CDN and edge optimization | Medium | Deployment change | 2-4 hours |
| 7 | Runtime render optimization | Low-Medium | Careful testing needed | 4-8 hours |
Work down this list in order. The first four items account for 80% of performance gains on most applications. Items five through seven are meaningful but rarely justify the effort until the others are done.
Across eight projects where I have applied AI-assisted performance optimization, one result stands out: zero regressions.
The zero regressions number is meaningful. Human-driven performance optimization under deadline pressure takes shortcuts. "We'll add tests later." The AI does not take shortcuts. It generates tests, validates behavior, and refuses to commit optimizations it cannot verify are correct.
Performance optimization is not a specialized skill that requires years of study. It is a pattern-matching problem with well-documented solutions. AI agents are extraordinary at pattern-matching against well-documented solutions.
The bottleneck is no longer ability. It is attention. Point the agent at the problem and get out of its way.
Q: How do AI agents optimize application performance?
AI agents optimize performance by analyzing Core Web Vitals, identifying bottlenecks through profiling, implementing fixes like code splitting and lazy loading, and validating improvements with before/after measurements. They systematically address LCP, FID, and CLS issues across entire applications.
Q: What performance improvements can AI achieve?
AI-driven optimization typically improves LCP by 50-70%, reduces bundle sizes by 30-50%, and achieves Core Web Vitals passing scores. Agents implement image optimization, code splitting, caching strategies, and database query optimization systematically across all pages.
Q: What is the best approach to AI-assisted performance optimization?
Start with automated performance audits (Lighthouse, WebPageTest), let AI agents prioritize issues by impact, implement fixes incrementally with before/after measurements, and verify no regressions. The systematic approach catches optimizations humans typically miss.
Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise. Gareth built Agentik {OS} to prove that one person with the right AI system can outperform an entire traditional development team. He has personally architected and shipped 7+ production applications using AI-first workflows.