
MCP Security Top 10 - Part 1: MCP Overview
This is the first part of a blog series about the Model Context Protocol (MCP) and the top 10 security risks associated with it. This article provides an overview of MCP, how it works, and how it improves AI code generation with agents, while introducing the security considerations that we'll explore throughout this series.
What is MCP?
The Model Context Protocol (MCP) is an open protocol that allows AI systems, typically powered by large language models (LLMs), to integrate with external tools and data sources in a standardized way. At its core, MCP follows a client-server architecture where an MCP server exposes a set of actions, called tools, each with defined input/output schemas and descriptive metadata (Introducing the Model Context Protocol \ Anthropic).
Official Site: Model Context Protocol – Introduction
When an AI assistant (the MCP client) connects to one or more MCP servers, it can discover available tools, query data, or modify external resources through well-defined request and response messages (Introducing the Model Context Protocol \ Anthropic). This standardized approach simplifies building AI-powered systems that can perform tasks beyond text generation, such as searching logs, modifying codebases, or managing databases.
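Under the hood, these messages are JSON-RPC 2.0. As a rough sketch (field names follow the MCP specification's tools/call exchange, though exact payloads vary by protocol version), a tool invocation might look like this:

```python
import json

# A client asks the server to invoke a tool (JSON-RPC 2.0 request).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Paris"},
    },
}

# The server replies with a result keyed to the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "18°C, light rain"}],
    },
}

# Both sides serialize messages as JSON over the chosen transport.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["params"]["name"])  # get_weather
```

The same request/response pairing is used for discovery (tools/list) before any tool is called.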
Core Architecture
MCP defines clear roles and trust boundaries for each component:
- MCP Host: An AI application like Claude Desktop or an IDE (Cursor, Cline) that embeds an MCP client.
- MCP Client: Component that connects to servers, discovers tools, and relays requests from the AI to servers.
- MCP Server: Runs as a separate process or service, exposing "tools" with specific capabilities.
- Transport Layer: Communication happens via either:
  - Standard I/O (stdin/stdout) for local servers
  - HTTP with Server-Sent Events (SSE) for remote servers (Cursor – Model Context Protocol)
This architecture creates a security boundary between the AI model's core logic and external effects: the AI cannot directly touch files or systems but must invoke MCP server tools to act.

Tool Definition
Each tool has:
- A name and description (in natural language)
- An input schema (JSON Schema)
- An output schema (JSON Schema)
- A handler that performs the actual action (e.g., database query, file read)
The AI model discovers these tools and uses them to fulfill user requests (MCP Servers Explained: What They Are, How They Work, and Why Cline is Revolutionizing AI Tools - Cline Blog). For example, if the user says, "Fetch the latest weather for Paris," an MCP server with a get_weather tool could be invoked automatically by the model.
Transport & Security Considerations
The choice of transport significantly affects the security posture of an MCP implementation:
- STDIO (Standard I/O): Used for local integrations where the host app spawns the MCP server as a subprocess on the same machine. This avoids network exposure but requires proper OS-level sandboxing, since the server runs with the user's permissions.
- SSE/HTTP Transport: Used for remote deployments, requiring HTTPS/TLS to prevent man-in-the-middle attacks. Remote servers must implement strong authentication, typically using OAuth 2.1 flows as recommended by the MCP specification.
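To make the STDIO transport concrete, here is a minimal sketch of a line-delimited JSON-RPC read loop (illustrative only; real MCP SDKs handle framing, schema validation, and error reporting for you):

```python
import json
import sys

def handle_message(line: str) -> str:
    """Parse one JSON-RPC request line and return a JSON response string."""
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        # JSON-RPC parse errors use code -32700.
        return json.dumps({"jsonrpc": "2.0", "id": None,
                           "error": {"code": -32700, "message": "Parse error"}})
    if msg.get("method") == "ping":
        return json.dumps({"jsonrpc": "2.0", "id": msg.get("id"), "result": {}})
    return json.dumps({"jsonrpc": "2.0", "id": msg.get("id"),
                       "error": {"code": -32601, "message": "Method not found"}})

def serve_stdio():
    # stdio transport: requests arrive on stdin, responses leave on stdout.
    for line in sys.stdin:
        print(handle_message(line), flush=True)

# A real server would call serve_stdio() here to start handling requests.
```

Because the subprocess inherits the user's environment and permissions, anything it can read, the AI can potentially read through it, which is why OS-level sandboxing matters even without network exposure.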
Code Examples
Here are simplified examples in TypeScript and Python illustrating MCP servers with realistic functionality.
TypeScript Example – Simple Weather MCP Server
```typescript
// weather-server.ts
import fetch from 'node-fetch';
import { MCPServer, createTool } from 'mcp-sdk-ts'; // Example imports; actual library names may differ.

const getWeatherTool = createTool({
  name: "get_weather",
  description: "Fetch current weather for a given city.",
  inputSchema: {
    type: "object",
    properties: {
      city: { type: "string" }
    },
    required: ["city"]
  },
  outputSchema: {
    type: "object",
    properties: {
      temperature: { type: "number" },
      description: { type: "string" }
    }
  },
  handler: async ({ city }) => {
    const apiKey = process.env.WEATHER_API_KEY;
    // OpenWeatherMap free API example: https://openweathermap.org/current
    // Encode the city name so user-supplied input cannot alter the query string.
    const response = await fetch(
      `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${apiKey}&units=metric`
    );
    const data = await response.json();
    return {
      temperature: data.main?.temp,
      description: data.weather?.[0]?.description || "No description"
    };
  }
});

async function main() {
  const server = new MCPServer();
  server.addTool(getWeatherTool);
  server.start(); // Starts listening (stdio or port, depending on config).
}

main().catch(console.error);
```
How to Run:
- Install dependencies: npm install node-fetch (plus any MCP SDK dependencies).
- Compile weather-server.ts (e.g., with tsc), then start the server with your API key: WEATHER_API_KEY=YOUR_KEY_HERE node weather-server.js
- Connect an MCP client/agent to this server (e.g., via local stdio or a remote SSE endpoint).
Reference: OpenWeatherMap Current Weather API
Python Example – Basic Filesystem MCP Server with Sandboxing
```python
# filesystem_server.py
import os
from mcp_sdk import MCPServer, MCPTool, Schema  # Example placeholders.

def read_file(params):
    # Sandbox approach: restrict reads to a specific directory.
    path = params["path"]
    base_folder = os.path.realpath("/home/user/mcp_sandbox")
    full_path = os.path.realpath(os.path.join(base_folder, path))
    # Compare whole path components; a bare startswith() check would also
    # accept sibling directories such as /home/user/mcp_sandbox_evil.
    if os.path.commonpath([base_folder, full_path]) != base_folder:
        raise ValueError("Access outside the sandbox is not allowed.")
    with open(full_path, 'r', encoding='utf-8') as f:
        content = f.read()
    return {"content": content}

read_file_tool = MCPTool(
    name="read_file",
    description="Read the content of a file in the sandbox.",
    input_schema=Schema(
        type="object",
        properties={"path": {"type": "string"}},
        required=["path"]
    ),
    output_schema=Schema(
        type="object",
        properties={"content": {"type": "string"}}
    ),
    handler=read_file
)

if __name__ == "__main__":
    server = MCPServer()
    server.add_tool(read_file_tool)
    server.start()  # Could default to stdio or a local port.
```
Key Point: This example demonstrates sandboxing through path validation, a crucial security practice that restricts file operations to a specific directory and prevents unauthorized access.
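As a quick standalone illustration of why resolving the real path matters, the helper below (a simplified version of the same idea, with a hypothetical sandbox directory) rejects traversal attempts while allowing paths that stay inside the sandbox:

```python
import os

def is_within_sandbox(base_folder: str, requested_path: str) -> bool:
    """Return True only if the resolved path stays inside base_folder."""
    base = os.path.realpath(base_folder)
    full = os.path.realpath(os.path.join(base, requested_path))
    # commonpath compares whole components, avoiding the prefix pitfall
    # where /sandbox_evil would pass a naive startswith("/sandbox") check.
    return os.path.commonpath([base, full]) == base

print(is_within_sandbox("/tmp/mcp_sandbox", "notes.txt"))         # True
print(is_within_sandbox("/tmp/mcp_sandbox", "../../etc/passwd"))  # False
```

Without realpath resolution, a relative path like "../../etc/passwd" would be joined under the sandbox and read files anywhere the server process can.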
Authentication & Permission Models
The MCP specification supports various authentication approaches depending on the transport mechanism:
- For local STDIO transports, authentication is generally simpler, since the server runs under the user's control. Credentials might be passed via environment variables or configuration files.
- For remote HTTP+SSE transports, the MCP spec recommends using OAuth 2.1 flows. This allows the AI client to prove it is acting on behalf of an authorized user through access tokens.
In enterprise settings, MCP servers should ideally implement a federated identity model where the AI agent assumes the identity of the end-user, ensuring it only accesses data the user is allowed to see.
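To illustrate the shape of server-side token checking for remote transports, here is a deliberately simplified sketch; a production server should validate full OAuth 2.1 access tokens (signature, expiry, audience) rather than compare against a static secret:

```python
import hmac

def authorize(headers: dict, expected_token: str) -> bool:
    """Check a bearer token on an incoming HTTP request.

    Illustrative only: real deployments should verify OAuth 2.1 access
    tokens, not a shared static string.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(token, expected_token)

print(authorize({"Authorization": "Bearer s3cret"}, "s3cret"))  # True
print(authorize({"Authorization": "Bearer wrong"}, "s3cret"))   # False
```

Every tool call arriving over the network should pass a check like this before the request is dispatched to a handler.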
IDE/Agent Integration
In modern AI-driven editors (like Cursor, Cline, or others), you can configure MCP by declaring a local command or a remote HTTP endpoint (Cursor – Model Context Protocol). The AI then automatically enumerates the available tools and calls them when necessary.
A minimal "client config" snippet might look like:
```json
{
  "mcpServers": {
    "weatherServer": {
      "command": "node",
      "args": ["weather-server.js"],
      "env": { "WEATHER_API_KEY": "abc123" }
    }
  }
}
```
When the editor loads, it spawns weather-server.js as a subprocess and logs the available tools (e.g., get_weather).
MCP and AI Agents
MCP enables AI-driven "agents" to move beyond text generation and perform real-world tasks, such as file manipulation, database queries, and more (Claude 3.7 Sonnet and Claude Code \ Anthropic). This transforms the development workflow by allowing the AI to:
- Discover available tools through a standardized protocol
- Execute actions through a well-defined interface
- Process results and incorporate them into the AI's reasoning
However, each new integration point also opens potential security and privacy pitfalls. The key security concern is that MCP servers act as gateways between AI models and real-world effects, creating a critical trust boundary that must be secured.
Architectural Security Constraints
MCP's design assumes that MCP servers are constrained, well-behaved components. Some inherent security constraints include:
- MCP servers should be run with only the privileges necessary for their function
- Communication should be secured (TLS for network transport)
- Messages must follow the JSON-RPC schema and size limits
- Access controls must be implemented by developers; the protocol itself doesn't enforce restrictions
As the MCP documentation notes: "MCP servers currently run on the host, granting access to all host files and resources" - highlighting the need for proper sandboxing and containerization in production environments.
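One practical way to apply the least-privilege and developer-enforced access-control constraints above is to gate tool registration behind an explicit allowlist, so the server only ever exposes operator-approved tools. The registry shape below is invented for illustration:

```python
ALLOWED_TOOLS = {"get_weather", "read_file"}  # operator-approved tools

def register_tools(available: dict) -> dict:
    """Return only the tools an operator has explicitly allowed.

    `available` maps tool names to handler callables; anything not on the
    allowlist is dropped rather than exposed to the AI client.
    """
    return {name: fn for name, fn in available.items() if name in ALLOWED_TOOLS}

available = {
    "get_weather": lambda city: {"temp": 18},
    "read_file": lambda path: {"content": "..."},
    "delete_database": lambda: None,  # dangerous: never registered
}
exposed = register_tools(available)
print(sorted(exposed))  # ['get_weather', 'read_file']
```

Deny-by-default registration like this means a newly added (or compromised) tool cannot reach the AI client until someone deliberately approves it.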
MCP Security Top 10 Series
This article is part of a comprehensive series examining the top 10 security risks when using MCP with AI agents:
- MCP Security Top 10 Series: Introduction & Index
- MCP Overview (this article)
- Over-Privileged Access
- Prompt Injection Attacks
- Malicious MCP Servers
- Unvalidated Tool Responses
- Command Injection
- Resource Exhaustion
- Cross-Context Data Leakage
- MITM Attacks
- Social Engineering
- Overreliance on AI
Teaser for Upcoming Security Topics
Later sections of this blog series will delve into the top 10 security concerns when using MCP with AI agents. These include:
- Over-Privileged Access: AI systems gaining excessive permissions that allow them to perform unauthorized or dangerous actions.
- Prompt Injection: Attackers manipulating AI behavior by inserting malicious instructions into seemingly innocent inputs.
- Malicious MCP Servers: Trojaned or compromised servers that execute harmful actions when connected to AI systems.
- Unvalidated Tool Responses: AI systems trusting potentially manipulated data returned by external tools without verification.
- Command Injection: Exploitation of unsanitized inputs that trick AI systems into executing unintended system commands.
- Resource Exhaustion: AI systems triggering excessive computational load through recursive or inefficient tool usage.
- Cross-Context Data Leakage: Sensitive information from one user or session inadvertently exposed to another through shared AI contexts.
- MITM Attacks: Interception of communications between AI clients and MCP servers allowing attackers to modify requests or responses.
- Social Engineering: AI-generated content crafted to manipulate human users into taking security-compromising actions.
- Overreliance on AI: Excessive trust in AI decision-making leading to reduced human oversight and unverified automated actions.
Conclusion
MCP standardizes how AI interacts with external tools, streamlining advanced use cases in software development (Introducing the Model Context Protocol \ Anthropic). The protocol's client-server architecture, standardized message format, and tool definition capabilities enable powerful AI agent workflows.
However, security must be a first-class concern, especially because an AI with tool access can cause real harm. Developers and organizations should carefully control which tools they expose to the AI, apply sandboxing, authenticate servers, verify third-party MCP code, and implement proper permission models for all tool access.
As we explore each security risk in depth throughout this series, we'll provide specific mitigation strategies to help you build secure MCP implementations.
Secure Your MCP Implementations with Garnet
As we've explored in this article, the Model Context Protocol offers powerful capabilities for AI integration, but it also introduces security considerations that traditional solutions may not address.
Garnet provides specialized runtime security monitoring designed to detect and block threats introduced by MCP servers during software development and deployment. Unlike conventional security tools, Garnet's approach focuses on runtime behavior monitoring, allowing it to identify suspicious activities that signature-based systems might miss.
With Garnet's Linux-based Jibril sensor, you can protect your environments at every stage:
- Build Pipeline Protection: Detect malicious activities during CI/CD processes (like GitHub Actions) where MCP servers might be leveraged
- Test Environment Security: Monitor runtime behavior during testing to catch potential security issues before they reach production
- Production Safeguards: Maintain continuous protection in live environments where MCP-enabled tools operate
The Garnet Platform provides centralized visibility into potential security risks, with integrations that deliver alerts directly within your existing workflows.
Learn more about securing your AI-powered development environments at Garnet.ai.