Demystifying MCP

by Yash Mehta, Consultant

[Figure: MCP architecture]

AI systems rely on external tools to perform actions, but these tools are often written inside the same application as the agent and, over time, everything morphs into a monolith. So it's a good idea to separate concerns: let the agents do the thinking and a server do the actions.

This leads to a new challenge: each tool and API might have its own unique interaction format, forcing developers to write custom, boilerplate glue code for each integration. Enter the Model Context Protocol (MCP). MCP is an open protocol that lets AI agents (LLMs) interact with other applications in a standardised way. Like APIs, MCP still requires you to write custom logic to interact with applications, databases, and so on, but instead of thinking in low-level API calls, you expose higher-level tools that perform the action.

MCP Core components

  1. Tools
  2. Resources
  3. Prompts

1. Tools

Tools are executable functions that perform actions on behalf of the user to accomplish a request or task. MCP allows clients to discover tools and trigger their execution. Naming tools thoughtfully is critical: LLMs need clear names to understand intent, and name conflicts can arise when multiple MCP servers are involved.

Some examples of Tools:

i. Retrieval Tool

LLMs don't have access to fresh or private data, so a tool is needed that can fetch relevant data from vector stores, file systems, or databases. With this, AI agents can ground their responses in relevant data and cite verifiable sources.
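
Such a retrieval tool could be sketched as follows. This is a minimal illustration: the in-memory corpus and naive keyword scoring stand in for a real vector store, and the function and document names are hypothetical. On a FastMCP server you would register it with the @mcp.tool decorator.

```python
# Sketch of a retrieval tool. The in-memory corpus and keyword scoring
# stand in for a real vector store; on a FastMCP server you would
# register search_documents with @mcp.tool.

DOCUMENTS = [
    {"id": "doc-1", "text": "MCP standardises how agents call external tools."},
    {"id": "doc-2", "text": "Vector stores index embeddings for semantic search."},
    {"id": "doc-3", "text": "FastMCP is a Python library for building MCP servers."},
]

def search_documents(query: str, top_k: int = 2) -> list[dict]:
    """Return the top_k documents that share the most words with the query."""
    query_words = set(query.lower().split())

    def score(doc: dict) -> int:
        return len(query_words & set(doc["text"].lower().split()))

    ranked = sorted(DOCUMENTS, key=score, reverse=True)
    # Keep only documents with at least one matching word.
    return [d for d in ranked if score(d) > 0][:top_k]
```

Because each result carries an id, the agent can cite which document grounded its answer.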

ii. Calendar Actions

LLMs can't perform actions themselves, so a tool needs to be created that performs tasks to achieve an end result. If a user wants the agent to set up meetings by looking at their calendar, an action tool create_calendar_invite can be created and triggered once the agent has enough information about the user's calendar, whether via context, an MCP resource, or retrieval tools.
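
A create_calendar_invite tool might look like the sketch below, assuming an in-memory event list in place of a real calendar API; the event data and conflict rule are illustrative only. On a FastMCP server this function would be registered with @mcp.tool.

```python
from datetime import datetime, timedelta

# Sketch of a create_calendar_invite action tool. The in-memory event
# list stands in for a real calendar API.

EVENTS: list[dict] = [
    {"title": "Standup", "start": datetime(2024, 5, 1, 9, 0), "duration_min": 30},
]

def create_calendar_invite(title: str, start_iso: str, duration_min: int = 30) -> dict:
    """Create an invite unless it overlaps an existing event."""
    start = datetime.fromisoformat(start_iso)
    end = start + timedelta(minutes=duration_min)
    for event in EVENTS:
        event_end = event["start"] + timedelta(minutes=event["duration_min"])
        if start < event_end and event["start"] < end:  # intervals overlap
            return {"created": False, "conflict": event["title"]}
    EVENTS.append({"title": title, "start": start, "duration_min": duration_min})
    return {"created": True, "title": title}
```

Returning a structured conflict instead of raising lets the agent explain the clash to the user and propose another slot.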

iii. Data Analysis Tool

Custom tools can let the agent run complex analysis. Modern LLMs can reason; if you hand them cleaned data (or a tool to query it), they can extract insights faster.
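A minimal sketch of such an analysis tool, using only the standard library; a real version might run SQL or pandas, and the function name is hypothetical.

```python
import statistics

# Sketch of a data-analysis tool: the agent passes a column of cleaned
# numeric data and gets summary statistics back.

def describe_column(values: list[float]) -> dict:
    """Return basic summary statistics for a numeric column."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }
```

The agent can then reason over the returned summary rather than the raw rows, which also keeps large datasets out of the context window.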

Like any other piece of software, tools should be tested: unit, integration, security, and performance tests all apply here.
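
Because MCP tools are ultimately plain Python functions, their logic can be unit-tested directly before any protocol plumbing is involved; integration tests can then exercise the tool through an MCP client against a running server. A minimal unit test for a simple add tool:

```python
# Unit-testing a tool's underlying function directly, with no MCP
# plumbing involved.

def add(a: float, b: float) -> float:
    """Adds two numbers together."""
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1.5, 1.5) == 0.0

if __name__ == "__main__":
    test_add()
    print("all tests passed")
```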

2. Resources

Resources represent any data that can be exposed to the agent via context. Resources are typically read-only; if you need to mutate state, expose a tool instead. As with tools, the protocol also lets clients list the available resources.

Some examples of Resources:

i. User Profile

For the agent to personalise responses, an MCP resource can fetch user-profile data from SaaS apps, a data warehouse, or a CDP. Pair this with a tool to update preferences when they change, so other systems (and agents) always see fresh data.
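
A user-profile resource could be sketched as below; the in-memory dict stands in for a SaaS or warehouse lookup, and the names and URI scheme are illustrative. On a FastMCP server this function could be registered as a resource template, e.g. @mcp.resource("users://{user_id}/profile").

```python
# Sketch of a read-only user-profile resource. The in-memory dict
# stands in for a SaaS app, warehouse, or CDP lookup.

PROFILES = {
    "u-42": {"name": "Ada", "timezone": "Australia/Sydney", "plan": "pro"},
}

def get_user_profile(user_id: str) -> dict:
    """Return read-only profile data for the given user."""
    profile = PROFILES.get(user_id)
    if profile is None:
        return {"error": f"unknown user: {user_id}"}
    return profile
```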

ii. Policies

Retrieval can be performed as part of the resource, bringing relevant policies into context based on the requirements and their relevance to the interaction.

Protect resources as you would protect any other sensitive data: apply access auditing, encryption, and authentication.

3. Prompts

Prompts are reusable templates that standardise how applications interact with the model. For instance, a prompt template for summarising an email might include placeholders for the sender, subject, and body. A "guided workflow" could mean the prompt instructs the agent to first extract key entities, then summarise, and finally suggest a reply, all within a single, reusable prompt.
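
The email-summarisation prompt above could be sketched as a plain function; the function name and wording are illustrative. On a FastMCP server it could be registered with @mcp.prompt so clients can discover and fill it.

```python
# Sketch of a reusable prompt template for summarising an email,
# with the guided workflow baked in.

def summarize_email(sender: str, subject: str, body: str) -> str:
    """Build a guided summarisation prompt with placeholders filled in."""
    return (
        f"You received an email from {sender} with subject '{subject}'.\n"
        "1. Extract the key entities (people, dates, actions).\n"
        "2. Summarise the email in two sentences.\n"
        "3. Suggest a short reply.\n\n"
        f"Email body:\n{body}"
    )
```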

MCP Server Transport Types

MCP uses JSON-RPC 2.0 as its wire format; the transport layer carries MCP protocol messages between client and server. MCP has 2 primary transport types:

  1. Standard Input/Output (stdio)
  2. Streamable HTTP
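
Regardless of transport, the messages themselves are JSON-RPC 2.0. A hypothetical tools/call exchange for an add tool might look like this (the tool name and arguments are illustrative):

```json
// Request
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}

// Response
{"jsonrpc": "2.0", "id": 1,
 "result": {"content": [{"type": "text", "text": "5"}]}}
```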

1. Standard Input/Output

Useful for local integrations and command‑line tools such as IDE extensions or local desktop apps. The client spawns a child process to execute the tool. Concurrency is limited (with exceptions), but when your tool needs to run in the same network and authentication context as the developer it’s often the simplest route.
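
For stdio, the client typically spawns the server from a configuration entry. As a sketch, a Claude Desktop-style client config might look like the fragment below; the exact schema varies by client, and the command and filename are assumptions.

```json
{
  "mcpServers": {
    "math": {
      "command": "python",
      "args": ["server.py"]
    }
  }
}
```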

2. Streamable HTTP

Useful when you need sessions and horizontal scalability. You can tune the infra independently and support concurrent executions. Perfect for SaaS integrations and performing actions outside the Agent Environment.

Creating a simple MCP Server using FastMCP

FastMCP is an open-source Python library that accelerates setting up MCP servers by abstracting away the boilerplate of managing MCP. Think of it as FastAPI for MCP.

from fastmcp import FastMCP

# 1. Initialize the FastMCP server with a name.
SERVER_NAME = "Math MCP Server"
mcp = FastMCP(SERVER_NAME)

# 2. Define tools for the Agent to use
@mcp.tool
def add(a: float, b: float) -> float:
    """Adds two numbers together."""
    return a + b

if __name__ == "__main__":
    PORT = 4200
    MOUNT = "/math"
    print(f"Starting FastMCP server on port {PORT}, mounted at {MOUNT}...")
    mcp.run(
        transport="http",
        host="0.0.0.0",
        port=PORT,
        path=MOUNT,
        log_level="debug"
    )

Using the deployed MCP Server with AI Agents

1. LangGraph

LangGraph is a lightweight orchestration framework for agents that defines execution graphs. It provides both low-level primitives and high-level prebuilt components for building agent-based applications.

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient(
    {
        "math": {
            "url": "http://localhost:4200/math/mcp",
            "transport": "streamable_http",
        }
    }
)
tools = await client.get_tools()
agent = create_react_agent(
    model=model,
    tools=tools
)

2. Google ADK

Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. While optimized for Gemini and the Google ecosystem, ADK is model-agnostic, deployment-agnostic, and is built for compatibility with other frameworks.

# Import paths may vary slightly across ADK versions.
from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams

root_agent = Agent(
    model=model,
    name='mcp_agent_example',
    description='A helpful assistant for user questions.',
    instruction='Answer user questions to the best of your knowledge.',
    tools=[
        MCPToolset(
            connection_params=StreamableHTTPConnectionParams(
                url="http://localhost:4200/math/mcp"
            ),
        )
    ],
)

MCP vs Traditional API

Tools are task-level wrappers over APIs. Rather than exposing endpoints directly, you package the sequence needed to achieve an outcome and let the agent call that single tool. MCP offers a way to standardise tools and the ability to host the servers remotely.
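
As a sketch of that packaging, the tool below wraps a hypothetical two-step refund flow (fetch the order, then issue the refund) behind a single call; the endpoints, field names, and stub data are all assumptions for illustration.

```python
# Sketch of a task-level tool wrapping a multi-step API sequence.
# The two stubbed helpers stand in for real HTTP endpoints; the agent
# only ever sees the single process_refund tool.

def _get_order(order_id: str) -> dict:  # stub for GET /orders/{id}
    return {"id": order_id, "status": "delivered", "amount": 49.0}

def _create_refund(order_id: str, amount: float) -> dict:  # stub for POST /refunds
    return {"order_id": order_id, "amount": amount, "status": "refunded"}

def process_refund(order_id: str) -> dict:
    """Check the order, then refund it: one tool, two API calls."""
    order = _get_order(order_id)
    if order["status"] != "delivered":
        return {"ok": False, "reason": f"order is {order['status']}"}
    refund = _create_refund(order_id, order["amount"])
    return {"ok": True, "refund": refund}
```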

| Dimension | Traditional API | MCP |
| --- | --- | --- |
| Abstraction level | CRUD-style endpoints, request/response focused | Action-oriented tools (e.g., send_email, query_orders) |
| Caller | Humans writing code | LLM agents reasoning over names and descriptions |
| Discovery | You read Swagger/OpenAPI docs | Client can list tools dynamically from the server |
| Context passing | Headers/body/query params you manage | Protocol-managed context; resources surfaced to the agent |
| Error semantics | HTTP status codes + custom payloads | Structured tool errors the model can reinterpret and retry |
| Execution locale | Usually remote service over HTTP | Local (stdio) or remote (Streamable HTTP) MCP servers |
| Goal | Integrate services into applications | Let agents do things safely without bespoke glue code |

Summary

By cleanly separating reasoning (the agent) from execution (MCP), you keep your architecture flexible and your codebase sane. Because MCP clients can discover and call new or updated tools at runtime, you rarely need to touch the agent when the server evolves (unless there are breaking changes). While APIs and MCP share a lot in common, they are designed for different usages: APIs define low-level operations, while MCP tools define actions. Think of APIs as being for scripts and traditional programs, while MCP is for human-like agents designed to reason and interact with the world.
