Written by Gareth Simono, Founder and CEO of Agentik {OS}. Full-stack developer and AI architect with years of experience shipping production applications across SaaS, mobile, and enterprise platforms. Gareth orchestrates 267 specialized AI agents to deliver production software 10x faster than traditional development teams.
MCP is to AI agents what HTTP was to browsers. One standard interface that means build once, works everywhere. Here's the real technical breakdown.

Before MCP, every AI tool integration was its own special snowflake. You wanted your agent to query a database? Write a custom integration for Claude. Then a different one for GPT-4. Then a third for Gemini. Three implementations of the same functionality, each with its own edge cases and bugs. Scale that across twenty tools and three AI providers, and you have sixty integrations to maintain.
MCP ends this. The Model Context Protocol is to AI agents what HTTP was to web browsers. One standard. Build once. Works everywhere.
If that sounds like marketing, let me get concrete. I have built MCP servers, consumed them from multiple platforms, and shipped them in production. The protocol works. And once you understand it properly, it rewires how you think about building AI-powered systems entirely.
MCP is a client-server architecture. The AI agent is the client. Your tools are the server. The protocol specifies how clients discover what a server can do, how they call capabilities, and how they handle responses.
An MCP server exposes three types of capabilities.
Tools are functions the agent can invoke. "Query the database." "Send an email." "Deploy to production." Each tool has a name, a description written in plain language, typed input parameters, and a return type. The agent reads the description, decides when invoking that tool makes sense, constructs the right arguments, and calls it. The key insight: the agent understands the description, not just the function signature.
Resources are data the agent can read without side effects. File contents, database schemas, user lists, configuration data. Resources are read-only by design. They provide context without modification ability. This separation matters for safety: you can give an agent read access to your entire database schema without giving it write access to any table.
Prompts are predefined instruction templates the server makes available. "Summarize this document following our style guide." "Review this code for security vulnerabilities." Prompts give the server influence over how the agent approaches specific tasks. When you want consistent behavior across different AI providers, prompts are how you enforce it at the protocol level.
The separation of these three capability types is deliberate and important. Tools have side effects. Resources do not. Prompts shape behavior. Understanding which category a capability belongs to tells you its risk profile and how to grant access appropriately.
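As TypeScript shapes, simplified from the MCP schema (optional fields trimmed for brevity):

```typescript
// Simplified shapes of the three capability types (trimmed from the MCP schema).
interface Tool {
  name: string;
  description: string;           // plain-language guidance the agent actually reads
  inputSchema: object;           // JSON Schema describing the arguments
}

interface Resource {
  uri: string;                   // e.g. "file:///app/config.json" (illustrative)
  name: string;
  mimeType?: string;             // read-only data: no side effects by design
}

interface Prompt {
  name: string;
  description?: string;
  arguments?: { name: string; required?: boolean }[];
}

// The risk-profile rule from above, as code: only tools can mutate state.
function hasSideEffects(kind: "tool" | "resource" | "prompt"): boolean {
  return kind === "tool";
}
```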
MCP runs over JSON-RPC 2.0. If you have built any RPC system before, the basic structure is familiar.
The transport layer is flexible. Local servers use stdio. Remote servers use HTTP with Server-Sent Events for streaming. The protocol defines what messages look like, not how they travel.
The lifecycle starts with initialization. Client sends initialize with its capabilities. Server responds with its capabilities. Both sides know what the other supports before any work happens.
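The handshake looks like this as JSON-RPC 2.0 messages (the protocolVersion string and capability contents here are illustrative; check the current spec revision):

```typescript
// Client opens the session by declaring what it supports.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    clientInfo: { name: "my-agent", version: "1.0.0" },
    capabilities: {},               // what the client supports
  },
};

// Server answers with its own identity and capabilities.
const initializeResponse = {
  jsonrpc: "2.0",
  id: 1,                            // matches the request id, per JSON-RPC 2.0
  result: {
    protocolVersion: "2024-11-05",
    serverInfo: { name: "my-mcp-server", version: "1.0.0" },
    capabilities: { tools: {} },    // this server exposes tools only
  },
};

console.log(initializeRequest.method, "->", Object.keys(initializeResponse.result.capabilities));
```

After this exchange, both sides know what the other supports, and the client can move on to tools/list.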
Then capability discovery. Client calls tools/list and gets back every tool the server exposes, with full schemas.
// Server response to tools/list
{
  "tools": [
    {
      "name": "query_database",
      "description": "Execute a read-only SQL query against the production database. Use this when you need current data about users, orders, or inventory. Never use for writes.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "sql": {
            "type": "string",
            "description": "The SQL query to execute. Must be SELECT only."
          },
          "limit": {
            "type": "number",
            "description": "Maximum rows to return. Defaults to 100, max 1000."
          }
        },
        "required": ["sql"]
      }
    }
  ]
}

The agent reads that description. It now knows: this tool queries production data, read-only, returns rows, has a limit. The agent makes informed decisions about when and how to use it.
Then tool invocation.
// Agent calls tools/call
{
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": {
      "sql": "SELECT id, email, created_at FROM users WHERE plan = 'enterprise' ORDER BY created_at DESC",
      "limit": 50
    }
  }
}

// Server response
{
  "content": [
    {
      "type": "text",
      "text": "[{\"id\": 1234, \"email\": \"cto@enterprise.com\", \"created_at\": \"2026-01-15\"}...]"
    }
  ],
  "isError": false
}

Clean. Typed. Consistent across every AI provider that implements the spec.
The fastest path is the TypeScript SDK. Install it, define your tools, expose the server.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-mcp-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_customer_data",
      description:
        "Retrieve customer information by ID or email. Returns account status, plan, usage stats, and recent activity. Use this before making any decisions about a customer account.",
      inputSchema: {
        type: "object",
        properties: {
          identifier: {
            type: "string",
            description: "Customer ID (UUID) or email address",
          },
        },
        required: ["identifier"],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_customer_data") {
    const { identifier } = request.params.arguments as { identifier: string };
    const customer = await fetchCustomerFromDB(identifier);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(customer, null, 2),
        },
      ],
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

That is a functioning MCP server. Any MCP-compatible client can now discover and call get_customer_data. Claude, GPT-4, Gemini, your own orchestrator. Same server, zero changes.
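Before wiring a transport, you can sanity-check the dispatch logic in-process. This sketch mirrors what the CallToolRequestSchema handler does, with a stubbed fetchCustomerFromDB (hypothetical shape; this is not the SDK's API, just the same branching as a plain function):

```typescript
// Stub of the database lookup the server handler calls (hypothetical shape).
async function fetchCustomerFromDB(identifier: string) {
  return { id: "uuid-1234", email: identifier, plan: "enterprise", status: "active" };
}

// The same dispatch the CallToolRequestSchema handler performs, as a plain function.
async function handleToolCall(name: string, args: Record<string, unknown>) {
  if (name === "get_customer_data") {
    const { identifier } = args as { identifier: string };
    const customer = await fetchCustomerFromDB(identifier);
    return { content: [{ type: "text" as const, text: JSON.stringify(customer, null, 2) }] };
  }
  throw new Error(`Unknown tool: ${name}`);
}

handleToolCall("get_customer_data", { identifier: "cto@enterprise.com" })
  .then((result) => console.log(result.content[0].text));
```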
Here is the most important thing about MCP that most builders miss: the tool description is not documentation. It is the interface.
The agent reads the description to decide whether to use a tool and how to use it correctly. A bad description produces bad agent behavior. A good description produces good agent behavior.
Compare these two descriptions for the same tool:
Bad: "Send email to user."
Good: "Send a transactional email to a user. Use this for account notifications, security alerts, and system confirmations. Do NOT use for marketing content. The email will appear as coming from noreply@company.com. Rate limited to 10 emails per user per hour."
With the bad description, the agent might send marketing emails through your transactional pipeline, hit rate limits without warning, or misuse the tool entirely.
With the good description, the agent understands the appropriate use cases, the constraints, and the limitations. It makes better decisions without needing additional prompting.
The single biggest investment in MCP tool quality is writing descriptions that tell the agent not just what a tool does, but when to use it and what its limits are.
I spend as much time writing descriptions as I spend writing the implementation code. That time pays back immediately in reduced errors and better agent decisions.
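To make "the description is the interface" concrete in code, here is the good email description from above packaged as a full tool definition (the parameter names are illustrative; the field layout follows the tools/list schema shown earlier):

```typescript
// A tool definition whose description encodes use cases, constraints, and limits,
// so the agent can make good decisions without extra prompting.
const sendEmailTool = {
  name: "send_email",
  description:
    "Send a transactional email to a user. Use this for account notifications, " +
    "security alerts, and system confirmations. Do NOT use for marketing content. " +
    "The email will appear as coming from noreply@company.com. " +
    "Rate limited to 10 emails per user per hour.",
  inputSchema: {
    type: "object",
    properties: {
      to: { type: "string", description: "Recipient user ID or email address" },
      subject: { type: "string", description: "Subject line, plain text" },
      body: { type: "string", description: "Email body, plain text" },
    },
    required: ["to", "subject", "body"],
  },
};
```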
MCP servers often need access to sensitive systems. Database credentials. API keys. Internal services. How you handle authentication matters.
For local servers (stdio transport), the server runs with the same permissions as the process that invoked it. Keep it simple: pass credentials through environment variables, read them at startup.
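That startup pattern is a few lines: read each credential once, and fail fast if it is missing (the variable name here is illustrative):

```typescript
// Read a required credential from the environment at startup; fail fast if absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. const databaseUrl = requireEnv("DATABASE_URL");
```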
For remote servers (HTTP transport), you need proper authentication. OAuth 2.0 is the recommended approach for user-facing servers. For server-to-server, API keys with rotation are standard.
// Remote MCP server with API key auth
app.post("/mcp", authenticate, async (req, res) => {
  // req.user is populated by authenticate middleware
  // Only expose tools the authenticated user can access
  const allowedTools = getToolsForUser(req.user);
  // ...
});

The security model for MCP tools should be the same as for any API: least privilege by default, explicit grants for elevated access, audit logging for sensitive operations.
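One way to get the audit-logging piece: wrap each handler so every invocation is recorded before it runs. A sketch, where the in-memory auditLog array stands in for a real sink:

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Stand-in for a real audit sink (database table, log pipeline, etc.).
const auditLog: { user: string; tool: string; at: string }[] = [];

// Wrap a handler so every invocation is recorded before it executes.
function withAudit(user: string, tool: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    auditLog.push({ user, tool, at: new Date().toISOString() });
    return handler(args);
  };
}

// Hypothetical sensitive tool, now audited on every call.
const deleteUser = withAudit("agent:support", "delete_user", async (args) => {
  return { deleted: args.id };
});
```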
Injection attacks are a real threat. Users can craft inputs that attempt to hijack tool calls or exfiltrate data through tool parameters. Validate every input with the same rigor you would apply to a public API endpoint. The agent is not a trusted caller.
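Here is what that validation can look like for the read-only query tool from earlier. A keyword blocklist like this is only a crude first line; a real SQL parser or a database role restricted to SELECT is the actual defense:

```typescript
// Treat the agent as an untrusted caller: validate tool inputs server-side.
// Crude guard for a read-only SQL tool; pair it with a read-only DB role.
function validateReadOnlySql(sql: string): void {
  const normalized = sql.trim().toLowerCase();
  if (!normalized.startsWith("select")) {
    throw new Error("Only SELECT queries are allowed");
  }
  // Reject statement chaining and write keywords. A blocklist has false
  // positives (e.g. a column named "updated_at"); a parser does this properly.
  const forbidden = [";", "insert ", "update ", "delete ", "drop ", "alter "];
  for (const token of forbidden) {
    if (normalized.includes(token)) {
      throw new Error(`Query contains forbidden token: ${token.trim()}`);
    }
  }
}
```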
The real power of MCP emerges when you compose multiple servers. An agent that connects to a database MCP server, a file system MCP server, a CRM MCP server, and a communication MCP server can execute complex multi-system workflows.
The agent handles the orchestration. It decides which tool to use at each step, passes data between systems, and handles errors. You handle the individual tool implementations. The protocol handles the connection.
A real example from a customer support agent I built: seven tool calls across four MCP servers. The agent orchestrated the entire workflow. I wrote four small, focused MCP servers. No glue code between them.
Multi-server composition is where MCP goes from "interesting protocol" to "fundamental infrastructure." Each server does one thing well. The agent connects the pieces.
Honesty matters here. MCP is not magic.
It does not solve latency. Remote tool calls take time. An agent that makes twenty sequential tool calls will be slow. Design your tools to minimize round trips. Batch where possible.
It does not solve cost. Every tool call adds tokens to the context. An agent that discovers fifty tools and processes all their schemas on every request burns money. Use capability filtering to only expose relevant tools per use case.
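Capability filtering can be as simple as tagging tools by use case and exposing only the matching subset from your tools/list handler. A sketch, with illustrative names and tags:

```typescript
// Tag each tool with the use cases it serves, so a request only pays
// context tokens for the schemas it actually needs.
const allTools = [
  { name: "query_database", tags: ["support", "analytics"] },
  { name: "send_email", tags: ["support"] },
  { name: "deploy_service", tags: ["ops"] },
];

// Return only the tools relevant to the current use case.
function toolsForUseCase(useCase: string) {
  return allTools.filter((tool) => tool.tags.includes(useCase));
}
```

A support-agent session would then see two tools instead of the full catalog, and the ops catalog never touches its context window.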
It does not solve reliability. If your MCP server has bugs or your underlying systems have outages, the agent gets bad data or errors. The protocol transmits information faithfully. It cannot improve the quality of the information itself.
And it does not eliminate the need for good agent error recovery patterns. Tool calls fail. Networks hiccup. Rate limits hit. Your agent needs to handle these gracefully regardless of whether it is using MCP or a proprietary integration.
MCP launched publicly in late 2024. The adoption curve since then has been steep. Major AI providers have committed to the spec. Hundreds of community-built servers exist for popular services: GitHub, Slack, Notion, Linear, Postgres, MongoDB, Stripe.
The pattern we are watching is similar to what happened with package managers. Before npm, reusing code across projects meant copying files. After npm, the ecosystem exploded because sharing became trivially easy. MCP is doing the same thing for AI tool integrations.
In a year, the question will not be "should we use MCP?" It will be "which MCP servers do we plug in?" The builders writing good MCP servers today are building infrastructure that will matter at ecosystem scale.
If you are building an agent system and not using MCP, you are accumulating integration debt that will need refactoring later. Start with MCP from day one. The protocol is stable, the ecosystem is growing, and the alternative is a proprietary integration maze.
Q: What is the Model Context Protocol (MCP)?
The Model Context Protocol (MCP) is an open standard created by Anthropic that enables AI models to securely connect to external data sources and tools. MCP provides a standardized way for AI agents to access databases, APIs, file systems, and other services through a consistent interface.
Q: How does MCP work?
MCP uses a client-server architecture where the AI agent (client) connects to MCP servers that expose tools and resources. Each server defines its capabilities using a standardized schema. The agent discovers available tools at runtime, selects the appropriate one, and calls it through the protocol. Communication happens over stdio or HTTP.
Q: Why is MCP important for AI development?
MCP standardizes how AI agents interact with external systems, replacing fragmented custom integrations with a universal protocol. One AI agent can connect to any MCP-compatible service without custom code, tools become reusable across projects, and permission boundaries provide built-in security.
Q: What is the difference between MCP and function calling?
Function calling requires defining tools inline with each API call. MCP externalizes tool definitions into reusable servers that can be discovered, composed, and shared across different AI models and projects. MCP is a protocol layer above function calling.