MCP Won: How One Protocol Conquered the AI Coding Ecosystem
In November 2025, Anthropic released the Model Context Protocol (MCP) — a simple, open standard for connecting AI assistants to data sources and tools. By February 2026, just 90 days later, every major AI coding tool had adopted it.
That's not gradual adoption. That's a sweep.
This is the story of how MCP won, why it happened so fast, and what it means for developers building in the AI-native era.
The Problem Before MCP
Before MCP, every AI coding tool had its own integration system:
- Cursor had a custom plugin API
- GitHub Copilot worked through VS Code extensions
- Cody/Sourcegraph built "OpenCTX" — their own context protocol
- Replit relied on built-in tool connections
If you wanted your tool to work with all of them, you built four different integrations. Most developers didn't bother. The result: a fragmented ecosystem where great tools were locked into individual platforms.
Worse, there was no way for an AI agent to discover what tools were available or learn how to use them without custom prompting or hardcoded instructions.
What MCP Got Right
MCP's design is deceptively simple. At its core: any server can expose "tools" (callable functions), "resources" (readable data), and "prompts" (reusable templates) via a standard protocol. AI clients connect once and get everything.
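Concretely, MCP rides on JSON-RPC 2.0: a client discovers a server's tools with a `tools/list` request and invokes one with `tools/call` (those method names follow the MCP spec; the `query_warehouse` tool below is a hypothetical example, not a real server). A sketch of that exchange as plain message payloads:

```python
import json

# Client -> server: ask what tools the server exposes (MCP method: tools/list)
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: a catalog of callable tools, each described by a JSON
# Schema for its inputs. "query_warehouse" is a hypothetical example tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_warehouse",
                "description": "Run a read-only SQL query against the warehouse",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# Client -> server: invoke a tool by name with arguments (MCP method: tools/call)
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_warehouse",
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}
```

Because the wire format is this plain, "connect once and get everything" really is just one discovery call followed by ordinary function invocations.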
Three design decisions made it win:
1. Open spec, not an open-source product. Anthropic published MCP as a protocol specification, not a product. Any tool could implement it without depending on Anthropic. Companies didn't have to "adopt Anthropic's product"; they just implemented a spec, which is a much easier political sell internally.
2. Transport agnostic. MCP works over stdio, HTTP, and SSE. Whether you're running a local process or a hosted API, it's the same protocol. That made local-first tools (Cursor, Claude Desktop) and cloud-native tools (Replit) equally easy to support.
3. Zero runtime dependencies. MCP servers are just programs. You can build one in Python, TypeScript, Rust, Go, or anything else. There's no SDK you have to use and no framework to adopt. That removed the "not invented here" barrier that kills most ecosystem projects.
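To make "just programs" concrete, here is a bare-bones server loop in plain Python with no SDK: read newline-delimited JSON-RPC from stdin, answer `tools/list`, write the response to stdout. This is a sketch, not a complete server (a real one also handles the `initialize` handshake and `tools/call`), and it assumes MCP's stdio transport frames each message as one line of JSON.

```python
import json
import sys


def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request. Only tools/list is sketched here."""
    if request.get("method") == "tools/list":
        result = {
            "tools": [
                {
                    "name": "echo",
                    "description": "Echo the input text back",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"text": {"type": "string"}},
                    },
                }
            ]
        }
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
    # Anything else: a standard JSON-RPC "method not found" error.
    return {
        "jsonrpc": "2.0",
        "id": request.get("id"),
        "error": {"code": -32601, "message": "method not found"},
    }


def main() -> None:
    # Stdio transport: one JSON-RPC message per line, responses flushed
    # immediately so the client isn't left waiting on a buffer.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```

Forty lines, standard library only: that is the whole barrier to entry for exposing a tool to every MCP client.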
The 90-Day Sweep
Here's the adoption timeline:
| Tool | MCP Support Added | Notes |
|------|-------------------|-------|
| Claude Desktop | November 2025 | Day one — Anthropic's own client |
| Cursor | December 2025 | "Best IDE for AI coding" — massive user base |
| Windsurf (Codeium) | December 2025 | Through "Cascade" AI brain |
| Cody / Sourcegraph | January 2026 | OpenCTX → MCP migration |
| GitHub Copilot | January 2026 | Via VS Code settings; official GitHub MCP server |
| Replit Agent | February 2026 | Cloud-native, no local config needed |
Six of the most widely used AI coding assistants, five of them within ninety days of launch.
For context: it took years for REST to become the de facto web API standard. GraphQL has been around since 2015 and still hasn't fully displaced REST. MCP moved at a pace that most protocol advocates can only dream of.
What "Build Once, Run Everywhere" Actually Means
Here's the practical implication that most developers haven't absorbed yet:
Any MCP server you build today works with all of these tools.
You build a database MCP server for your company's internal data warehouse. It works in Claude Desktop for product managers, in Cursor for your engineers, in GitHub Copilot for the team that won't switch IDEs, and in Replit for your interns.
One server. Six clients. All of them.
This is a genuine "write once, run anywhere" moment for AI tool integration: the promise Java made in 1995, and this time largely kept.
For open-source developers: if you publish an MCP server, you're not building for "Claude Desktop users." You're building for everyone using AI-assisted development.
The Shared Ecosystem Nobody Talks About
Here's something that surprises most developers: Cursor's MCP configuration file looks like this:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
```
Claude Desktop's claude_desktop_config.json looks identical.
That's not coincidence — that's deliberate. The spec was designed so the same configuration works across clients. The @modelcontextprotocol/server-github package from npm is maintained by GitHub and works in all six tools listed above without modification.
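Mechanically, a client reads that file and, for each entry under `mcpServers`, spawns `command` with `args` and the given `env`, then speaks MCP over the child process's stdin/stdout. A sketch of that launch step (the config shape matches the file above; `build_launch` and `launch_all` are hypothetical helpers, and real clients differ in details like env merging and error handling):

```python
import json
import os
import subprocess

# Same shape as the cursor / claude_desktop_config.json file shown above.
CONFIG = """
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"}
    }
  }
}
"""


def build_launch(entry: dict) -> tuple[list, dict]:
    """Turn one mcpServers entry into an argv list and an environment."""
    argv = [entry["command"], *entry.get("args", [])]
    env = {**os.environ, **entry.get("env", {})}
    return argv, env


def launch_all(config_text: str) -> dict:
    """Spawn every configured server; its stdin/stdout become the transport."""
    servers = {}
    for name, entry in json.loads(config_text)["mcpServers"].items():
        argv, env = build_launch(entry)
        servers[name] = subprocess.Popen(
            argv, env=env, stdin=subprocess.PIPE, stdout=subprocess.PIPE
        )
    return servers
```

Because every client performs essentially this same dance, the same config block (and the same server binary) works everywhere.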
This is the network effect that makes MCP defensible. Every new MCP server that gets published makes every MCP client more valuable. Every new client that adopts MCP makes every existing server more valuable.
Where MCPHub Fits
With MCP adoption now universal, the problem has shifted.
It's not "how do I connect my AI tool to external data?" That's solved. The new problem is: which MCP servers are actually worth using?
There are now hundreds of MCP servers on npm, GitHub, and various registries. Most are experiments. Some are abandoned. A handful are production-ready tools that will meaningfully improve how you work.
That's what MCPHub is for.
We review and approve MCP servers before listing them. Every app in our catalog has been checked for:
- Active maintenance (commits in the last 6 months)
- Working installation instructions
- Clear documentation
- Real use cases (not just "I made this as a demo")
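As an illustration, the "active maintenance" criterion boils down to a date comparison against the repo's most recent commit. This is a hypothetical sketch, not MCPHub's actual tooling; the input is an ISO-8601 timestamp such as a commit date from a Git hosting API, and ~183 days stands in for the 6-month threshold:

```python
from datetime import datetime, timedelta, timezone

# Roughly six months, matching the review criterion above.
SIX_MONTHS = timedelta(days=183)


def is_actively_maintained(last_commit_iso: str, now: datetime = None) -> bool:
    """True if the most recent commit falls within the last ~6 months.

    last_commit_iso: an ISO-8601 timestamp (e.g. "2025-12-15T00:00:00Z"),
    such as a commit date returned by a Git hosting API.
    """
    now = now or datetime.now(timezone.utc)
    last = datetime.fromisoformat(last_commit_iso.replace("Z", "+00:00"))
    return now - last <= SIX_MONTHS
```

The other criteria (working install steps, documentation, real use cases) need a human reviewer, which is exactly why a curated catalog beats raw npm search.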
We organize them by category — databases, development, productivity, communication — so you can find what you need without wading through npm search results.
As of this writing, we have 47 curated apps. All of them work across Claude Desktop, Cursor, Windsurf, GitHub Copilot, Cody, and Replit.
What's Next for MCP
A few predictions based on what I'm watching:
MCP will become the default "tool calling" standard beyond coding tools. Email clients, project management tools, CRMs — any software that wants to expose its data to AI agents will implement MCP servers. Microsoft has already shipped an official Microsoft 365 MCP server.
Server marketplaces will matter more than clients. Once MCP is table stakes, differentiation moves up the stack. The companies that curate, validate, and make MCP servers discoverable will capture significant value.
Hosted MCP servers will become a SaaS category. Right now, most MCP servers run locally. As teams need shared access, providers will offer hosted, authenticated MCP endpoints. Think "MCP as a service."
MCP and agent frameworks will converge. Tools like LangChain, AutoGen, and CrewAI are already adding MCP support. The line between "agent framework" and "MCP client" will blur.
The Bottom Line
MCP succeeded where previous attempts at AI integration standards failed because it prioritized simplicity, openness, and developer ergonomics over completeness and control.
It's 90 days old and already universal.
If you're building anything that an AI should be able to use — a database, an API, an internal tool — MCP is the obvious choice. Build it once, and every AI coding tool your users prefer will be able to use it.
And if you want to find what's already built, browse the MCPHub catalog — 47 production-ready servers across 8 categories, all free to use.
Found this useful? The MCPHub catalog is open source and community-driven. Submit your MCP server for review.
Meta
- Word count: ~1,100
- Reading time: ~5 min
- Cover image brief: Split-screen showing 5-6 AI tool logos (Cursor, Claude, Windsurf, GitHub Copilot, Replit) all connected to a central hub/node. Clean, dark background. MCPHub branding subtle.
- SEO title: MCP Won: How One Protocol Unified the AI Coding Ecosystem in 90 Days
- Meta description: In November 2025, Anthropic released MCP. By February 2026, Cursor, Windsurf, GitHub Copilot, Cody, and Replit had all adopted it. Here's how one protocol conquered AI coding.