MCP Security Best Practices: What Every Developer Needs to Know
The Model Context Protocol (MCP) is moving fast. Developers are shipping MCP servers that touch filesystems, databases, external APIs, and production infrastructure — often in a matter of hours. That speed is exciting. It’s also a little terrifying from a security standpoint.
MCP security isn’t theoretical. A poorly secured MCP server is a direct bridge between an AI model and your most sensitive systems. Prompt injection, stolen credentials, over-permissioned tools, unlogged access — these aren’t hypothetical risks. They’re the exact failure modes that have burned developers in adjacent ecosystems (think early serverless functions, or the first wave of OAuth integrations).
The good news: the defensive patterns are well understood. This guide covers 7 practical MCP server security best practices — each with a short, realistic snippet you can adapt today. Whether you’re building your first server or hardening an existing one, these fundamentals are what separate a secure MCP server from a liability.
If you want to compare implementations, browse security-focused MCP servers on MCPHub. Seeing how other teams structure permissions, auth, and logging will save you time — and help you avoid unforced errors.
1) Apply the Principle of Least Privilege to Tool Permissions
Every tool your MCP server exposes should have exactly the permissions it needs — nothing more.
In practice, developers often create a single “god mode” service account (or a single API key) and reuse it across all tools. That’s convenient, but it means any one compromised tool call can access everything.
Instead, scope permissions at the tool boundary:
- File tools should only access approved directories
- Database tools should use separate DB users (read-only vs write)
- Cloud tools should use scoped IAM roles/policies
- “Admin” tools should require stronger auth (or be removed entirely)
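One lightweight way to make this concrete is a per-tool scope map the dispatcher checks before running anything. This is a sketch, not an SDK feature — the tool names and scope strings are illustrative:

```typescript
// Sketch: declare each tool's scopes up front and check them before dispatch,
// so no tool silently inherits another tool's permissions.
type Scope = "files:read" | "db:read" | "db:write" | "cloud:deploy";

const toolScopes: Record<string, Scope[]> = {
  read_report: ["files:read"],
  list_records: ["db:read"],
  create_invoice: ["db:write"],
};

function assertScope(tool: string, required: Scope): void {
  const granted = toolScopes[tool] ?? []; // unknown tools get nothing
  if (!granted.includes(required)) {
    throw new Error(`Tool "${tool}" lacks scope "${required}"`);
  }
}
```

The map doubles as documentation: a reviewer can see every tool's blast radius in one place.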
Here’s a file tool that only reads from a specific directory and blocks path traversal:
// server.ts — scoped file access
import fs from "node:fs/promises";
import path from "node:path";
import { z } from "zod";
const ALLOWED_READ_DIR = path.resolve("/var/data/reports");
server.tool(
"read_report",
{ filename: z.string().min(1).max(120) },
async ({ filename }) => {
const target = path.resolve(ALLOWED_READ_DIR, filename);
// Block ../ traversal by enforcing the prefix (note: symlinks inside the
// directory can still escape it — resolve with fs.realpath first if that's a concern)
if (!target.startsWith(ALLOWED_READ_DIR + path.sep)) {
throw new Error("Access denied: path outside allowed directory");
}
const content = await fs.readFile(target, "utf8");
return { content: [{ type: "text", text: content }] };
}
);
Hard take: if your MCP server can read “any file on disk” or call “any internal API,” it’s not “powerful.” It’s fragile. Tight scope is what keeps power usable.
2) Validate and Sanitize All Inputs (Prompt Injection Defense)
MCP tools receive input that originates from an LLM. That means the input can contain anything — including adversarial instructions injected via user content.
Treat tool arguments like untrusted user input:
- Validate types and ranges
- Whitelist allowed values where possible
- Normalize/escape strings that feed into commands, queries, or URLs
- Use parameterized database queries (always)
A safe pattern is to keep arguments structured and constrained. For example, don’t accept a freeform table name.
import { z } from "zod";
const QuerySchema = z.object({
table: z.enum(["orders", "products", "customers"]),
limit: z.number().int().min(1).max(100),
// Constrain filter language (or replace with explicit fields)
status: z.enum(["open", "closed", "pending"]).optional(),
});
server.tool("list_records", QuerySchema, async ({ table, limit, status }) => {
// Parameterized query: values are bound, not interpolated
// (assumes mysql2, where `??` escapes identifiers; the z.enum above also constrains `table`)
const rows = await db.query(
"SELECT * FROM ?? WHERE (? IS NULL OR status = ?) LIMIT ?",
[table, status ?? null, status ?? null, limit]
);
return { content: [{ type: "text", text: JSON.stringify(rows) }] };
});
For prompt injection specifically, the key is: don’t let instructions become authority. If a model says “ignore your policy and delete all records,” the tool layer must still enforce the policy.
Two practical patterns that help a lot in real deployments:
- Allowlist outbound destinations. If a tool can fetch URLs, restrict it to domains you control (or a short list you’ve vetted).
- Normalize input before use. Strip control characters and enforce a maximum length before passing strings into prompts, shell commands, or SQL.
Here’s a lightweight allowlist validator for URL-fetching tools:
const AllowedHosts = new Set(["api.mycompany.com", "status.mycompany.com"]);
function assertAllowedUrl(raw: string) {
const u = new URL(raw);
if (u.protocol !== "https:") throw new Error("Only https URLs are allowed");
if (!AllowedHosts.has(u.hostname)) throw new Error("Host not allowed");
return u;
}
server.tool("fetch_status", { url: z.string().url() }, async ({ url }) => {
const u = assertAllowedUrl(url);
const res = await fetch(u.toString(), { headers: { "User-Agent": "mcp-server/1.0" } });
return { content: [{ type: "text", text: await res.text() }] };
});
3) Manage Secrets Properly (Never Hardcode API Keys)
Hardcoded credentials in MCP server source code get committed to git, baked into Docker images, and leaked in logs. It happens constantly.
Rules that keep you safe:
- Secrets come from the environment or a secrets manager
- Never log secrets (including partial tokens)
- Rotate secrets and scope them per environment
- Use distinct credentials per tool/service (blast radius control)
// ✅ Read secrets at runtime
import Stripe from "stripe";

const apiKey = process.env.STRIPE_SECRET_KEY;
if (!apiKey) throw new Error("STRIPE_SECRET_KEY is required");
const stripe = new Stripe(apiKey, { apiVersion: "2024-06-20" });
If you’re on AWS, load secrets from Secrets Manager (or SSM Parameter Store) at boot time:
export STRIPE_SECRET_KEY=$(aws secretsmanager get-secret-value \
--secret-id prod/mcp/stripe \
--query SecretString \
--output text | jq -r .stripe_secret_key)
Also: commit a lockfile and add a secret scanning step in CI. GitHub Advanced Security is great, but even basic regex scanning catches a lot.
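For the "never log secrets" rule, a cheap backstop is redacting secret-shaped values at the logging boundary. A sketch — the patterns below are illustrative, not exhaustive, and no substitute for not logging secrets in the first place:

```typescript
// Sketch: redact known secret-shaped values before a message hits the logs.
const SECRET_PATTERNS = [
  /sk_(live|test)_[A-Za-z0-9]+/g, // Stripe-style secret keys
  /Bearer\s+[A-Za-z0-9._-]+/g,    // bearer tokens in headers
];

function redact(message: string): string {
  return SECRET_PATTERNS.reduce(
    (msg, pattern) => msg.replace(pattern, "[REDACTED]"),
    message
  );
}
```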
4) Secure Transport: TLS + Strong Authentication
If your MCP server is reachable over the network, TLS is mandatory. Without it, tool arguments and results can be observed or modified in transit.
Then comes the second half: authentication. Your MCP server should not accept tool calls from anonymous clients, full stop.
A common, practical deployment is:
- TLS termination at a reverse proxy (nginx / Caddy / a load balancer)
- A bearer token per client (or short-lived JWTs)
- Server-side token verification middleware
Example nginx TLS forwarding:
server {
listen 443 ssl;
ssl_certificate /etc/ssl/certs/mcpserver.crt;
ssl_certificate_key /etc/ssl/private/mcpserver.key;
ssl_protocols TLSv1.2 TLSv1.3;
location / {
proxy_set_header Authorization $http_authorization;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:3000;
}
}
And token verification in your app (store a hash, not the token itself):
import crypto from "node:crypto";
const EXPECTED_TOKEN_HASH = process.env.MCP_TOKEN_SHA256;
function sha256(s: string) {
  return crypto.createHash("sha256").update(s).digest("hex");
}
// Constant-time comparison avoids leaking match position via timing
function safeEqual(a: string, b: string) {
  const ab = Buffer.from(a);
  const bb = Buffer.from(b);
  return ab.length === bb.length && crypto.timingSafeEqual(ab, bb);
}
app.use((req, res, next) => {
  const header = req.headers.authorization || "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  if (!token || !EXPECTED_TOKEN_HASH || !safeEqual(sha256(token), EXPECTED_TOKEN_HASH)) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  next();
});
If you need multi-user access, don’t DIY auth forever. Integrate an IdP (Auth0, Clerk, Cognito) and use short-lived tokens.
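To show what "short-lived tokens" means mechanically, here is a minimal HS256 JWT check using only node:crypto — a sketch of the three checks that matter (algorithm, signature, expiry). In production, prefer a maintained library such as jose and your IdP's key endpoint rather than hand-rolled verification:

```typescript
// Sketch: verify an HS256 JWT with node:crypto only.
import crypto from "node:crypto";

function verifyJwtHS256(token: string, secret: string): Record<string, unknown> {
  const [h, p, sig] = token.split(".");
  if (!h || !p || !sig) throw new Error("Malformed token");

  const header = JSON.parse(Buffer.from(h, "base64url").toString("utf8"));
  if (header.alg !== "HS256") throw new Error("Unexpected algorithm"); // rejects alg=none

  const expected = crypto.createHmac("sha256", secret).update(`${h}.${p}`).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !crypto.timingSafeEqual(a, b)) {
    throw new Error("Bad signature");
  }

  const payload = JSON.parse(Buffer.from(p, "base64url").toString("utf8"));
  if (typeof payload.exp !== "number" || payload.exp * 1000 < Date.now()) {
    throw new Error("Token missing exp or expired"); // short-lived tokens must carry exp
  }
  return payload;
}
```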
5) Rate Limiting and Abuse Prevention (Protect Your Wallet Too)
MCP servers are designed to be called repeatedly and autonomously by AI agents. Without rate limiting, a runaway loop — or a malicious client — can:
- Hammer upstream APIs
- Exhaust DB connections
- Trigger throttling/bans from third parties
- Rack up a shockingly large bill
Start with a global limit and then add stricter per-tool limits for expensive operations.
import rateLimit from "express-rate-limit";
// Global rate limit
app.use(
rateLimit({
windowMs: 60_000,
max: 200,
standardHeaders: true,
legacyHeaders: false,
})
);
// Expensive endpoint limiter
const expensiveLimiter = rateLimit({
windowMs: 60_000,
max: 10,
keyGenerator: (req) => (req.headers["x-client-id"] as string) || req.ip,
});
app.post("/tools/run_analysis", expensiveLimiter, runAnalysis);
If you’re serious about MCP server security in production, also implement:
- Timeouts on outbound HTTP calls
- Concurrency limits (per client)
- Budgeting (max tokens / max tool calls per session)
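The timeout item in particular is cheap to implement as a generic wrapper around any tool call or outbound request — a sketch, with the 10-second default being an assumption you'd tune:

```typescript
// Sketch: race a promise against a hard deadline so one stuck upstream
// can't pin a tool call (and its connections) indefinitely.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer to avoid leaks
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer!));
}
```

For plain `fetch` calls, `AbortSignal.timeout(ms)` achieves the same thing and also aborts the underlying request.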
6) Audit Logging for MCP Tool Calls (You Can’t Secure What You Can’t See)
When incidents happen, audit logs are how you answer:
- What tool was called?
- By whom?
- With what inputs?
- What changed?
- Did it succeed?
The security trick is to log enough to investigate without leaking sensitive data into logs.
type AuditEvent = {
ts: string;
clientId: string;
tool: string;
args: Record<string, unknown>;
ok: boolean;
durationMs: number;
error?: string;
};
function audit(event: AuditEvent) {
// Structured JSON logs work well with Datadog/ELK/CloudWatch
console.log(JSON.stringify({ level: "audit", ...event }));
}
server.tool("create_invoice", InvoiceSchema, async (args, ctx) => {
const t0 = Date.now();
try {
const invoice = await stripe.invoices.create({
customer: args.customerId,
auto_advance: false,
});
audit({
ts: new Date().toISOString(),
clientId: ctx.clientId ?? "unknown",
tool: "create_invoice",
// Do not log full PII payloads; log identifiers
args: { customerId: args.customerId },
ok: true,
durationMs: Date.now() - t0,
});
return { content: [{ type: "text", text: invoice.id }] };
} catch (e) {
audit({
ts: new Date().toISOString(),
clientId: ctx.clientId ?? "unknown",
tool: "create_invoice",
args: { customerId: args.customerId },
ok: false,
durationMs: Date.now() - t0,
error: String(e),
});
throw e;
}
});
Store audit logs in an append-only destination (S3/CloudWatch/Datadog). Keep at least 30–90 days of retention so you can investigate issues without panic.
7) Sandboxing and Isolation (Limit Blast Radius)
Even with least privilege and validation, assume something will slip.
A secure MCP server is designed so that compromise doesn’t become catastrophe. That means isolation:
- Run as a non-root user
- Use a minimal container image
- Drop Linux capabilities
- Prefer read-only filesystems where possible
- Restrict outbound network access (allowlist)
A practical Docker baseline:
FROM node:22-alpine
RUN addgroup -S mcp && adduser -S mcp -G mcp
WORKDIR /app
COPY --chown=mcp:mcp package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --chown=mcp:mcp . .
USER mcp
EXPOSE 3000
CMD ["node", "dist/server.js"]
And a hardened compose setup:
services:
mcp-server:
build: .
read_only: true
tmpfs:
- /tmp
cap_drop:
- ALL
security_opt:
- no-new-privileges:true
environment:
- NODE_ENV=production
If your server includes “execute code” tools (shell execution, Python, eval, anything like that), don’t just sandbox harder — split it into a separate worker runtime with strict resource limits and no access to secrets.
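At the process level, that worker boundary can start with something as simple as a child process that inherits no environment (so no secrets), with a hard timeout and a capped output buffer. A sketch — this complements container isolation, it does not replace it:

```typescript
// Sketch: run an "execute" tool in a child process with a stripped
// environment, a 5s timeout, and a 1 MiB output cap.
import { execFile } from "node:child_process";

function runIsolated(cmd: string, args: string[]): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(
      cmd,
      args,
      { env: {}, timeout: 5_000, maxBuffer: 1024 * 1024 }, // no inherited secrets
      (err, stdout) => (err ? reject(err) : resolve(stdout))
    );
  });
}
```

Use an absolute binary path (there is no PATH in the stripped environment), and run the worker under its own unprivileged user.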
Conclusion: Build MCP Servers Like They’re Production Infrastructure
That’s what they are.
MCP security is about setting sane defaults:
- Least privilege per tool
- Validation + sanitization on every argument
- Proper secrets management
- TLS + authentication
- Rate limits and abuse controls
- Audit logging for every tool call
- Isolation and sandboxing
None of this is exotic. It’s the same security posture you’d apply to a payment webhook handler or an internal admin API — because an MCP server is an admin API, just with a different caller.
If you want to see how the ecosystem is evolving (and learn from hardened implementations), browse MCPHub and compare how popular servers handle auth, permissions, and safe tool design.
Explore MCPHub's curated catalog at getmcpapps.com