MCP Hit 97M Installs. Is Your Stack Ready?

Model Context Protocol crossed 97M installs in 12 months. Learn how MCP powers agentic AI workflows and what to do before your tools enforce it.

Scott Armbruster

A protocol you’ve probably never configured is now running inside 97 million installations. Model Context Protocol (MCP) crossed that number this month, and it’s the reason your AI agents can (or can’t) talk to your tools.

Twelve months ago, MCP was an experimental spec Anthropic published in late 2024. Today it’s the connective tissue of every major agentic AI platform. OpenAI, Google, and Microsoft all adopted it. Claude is using it to click buttons on your screen, scroll through documents, and navigate applications autonomously, all through your actual computer interface rather than traditional APIs.

If you’re building AI workflows and you haven’t thought about MCP compatibility, you’re building on sand. Here’s what’s happening and what you need to do about it.

The 60-Second Summary

Protocol: Model Context Protocol (MCP), created by Anthropic in late 2024
Installs: 97 million as of March 2026
Adoption: OpenAI, Google, Microsoft, and Anthropic tooling all support MCP natively
Purpose: A standardized way for AI agents to connect with external tools, data sources, and actions
Why it matters now: Without MCP-compatible tooling, AI agents can’t natively handshake with your data or execute cross-tool actions
Cost barrier dropping: Gemini 3.1 Flash-Lite launched March 31 at $0.25 per million input tokens, making MCP-powered agents viable at SMB budgets

What MCP Actually Is (Without the Jargon)

MCP is a standard interface between AI agents and the outside world. Think of it like USB for AI. Before USB, every device had its own proprietary connector. Printers, cameras, keyboards — each needed a specific cable and driver. USB standardized the connection. One port, any device.

MCP does the same thing for AI agents. Before MCP, connecting an AI to your CRM required custom API integration code. Connecting it to your file system required different code. Your calendar, database, support tickets? Each one was its own bespoke integration project.

MCP gives AI agents one protocol to connect to anything. An MCP server exposes tools and data sources. An MCP client (the AI agent) discovers and uses them. The AI doesn’t need custom code for each integration. It just needs to speak MCP.
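
To make the discovery idea concrete, here is a sketch of what that handshake looks like at the wire level, modeled on the protocol's JSON-RPC framing. The field names follow the spec's general shape, but the server and tool shown are hypothetical and this is an illustration, not a compliant client.

```python
import json

# A simplified MCP-style "tools/list" exchange, modeled on the protocol's
# JSON-RPC 2.0 framing. Illustrative sketch only, not a compliant client.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# What a hypothetical CRM server might answer with. Every tool is
# self-describing, so the agent needs no custom integration code.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "lookup_contact",
                "description": "Find a CRM contact by email address",
                "inputSchema": {
                    "type": "object",
                    "properties": {"email": {"type": "string"}},
                    "required": ["email"],
                },
            }
        ]
    },
}

# The agent discovers capabilities by reading the response, not by being
# hard-coded against a vendor API.
tool_names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(tool_names))
```

The input schema is the key detail: it tells the agent not just that a tool exists, but exactly what arguments it takes.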

I set up my first MCP server about four months ago to let Claude access a client’s internal knowledge base. The setup took 45 minutes. The equivalent custom API integration I’d built six months earlier for a similar use case took three days. That difference in implementation effort is why 97 million installs happened in under a year.

Why 97 Million Happened This Fast

Every major AI lab backed it. When Anthropic released MCP, skeptics (myself included) assumed it would be one vendor’s standard competing against others. That’s not what happened. OpenAI integrated MCP support, Google followed, and Microsoft’s developer ecosystem picked it up within months. When all four major AI platforms agree on a standard, adoption isn’t optional. It’s gravity.

AI agents needed it. I’ve been writing about why AI agents fail in practice for months. The number one reason: they can’t reliably connect to the tools they need. An agent that can reason but can’t read your database or update your CRM is an expensive autocomplete. MCP solved the plumbing problem that was keeping agents theoretical.

Computer use made it urgent. Claude now uses MCP to execute autonomous computer control. Mouse clicks. Screen scrolling. Full application navigation. Not through an API, but through your literal desktop interface. When I first tested Claude’s computer use capabilities, it opened a spreadsheet, found the data I described in natural language, and pasted a summary into a Slack message. All through MCP-mediated actions. The demo that used to require a team of developers to build is now a protocol configuration.

This is the same trajectory I described in GPT-5.4’s computer use capabilities. The difference: MCP means these capabilities work across providers, not just within one vendor’s walled garden.

What Happens When You Don’t Have MCP Compatibility

I’ll give you a real scenario from two weeks ago. A client runs a 12-person marketing agency. They’d built a solid automation stack: n8n for workflows, Claude for content generation, Airtable for project management, and a custom reporting pipeline that pulled analytics from three different platforms.

The content generation piece worked great. Claude produced drafts, the team reviewed them, everything flowed. Then they tried to add an AI agent that could autonomously research a client’s competitors, pull recent social data, draft a competitive analysis, and post it to the right Airtable record.

The agent couldn’t do it. Not because the AI wasn’t capable. Because the connections between tools didn’t exist in a format the agent could discover and use autonomously. The n8n workflows were human-triggered. The Airtable API required custom authentication handling for each table. The analytics platforms had different auth flows and data formats.

Each tool was an island. The AI agent could see exactly what needed to happen but had no way to reach across and do it.

After we added MCP servers for Airtable, the analytics platforms, and the file system, the same agent completed the full workflow in 8 minutes. No custom integration code. The agent discovered the available tools through MCP, understood their inputs and outputs, and orchestrated the entire process.

The gap between “AI that suggests” and “AI that does” is MCP.

How MCP Changes Your Workflow Architecture

If you’ve been building AI automations with traditional integration tools, MCP changes the architecture in three ways that matter.

1. Agents Discover Tools at Runtime

In a traditional n8n or Make workflow, you wire up every connection at build time. The workflow knows exactly which tools it calls and in what order. Rigid. Predictable. Breaks when requirements change.

With MCP, an AI agent discovers available tools when it runs. You expose a set of MCP servers, and the agent figures out which ones to use based on the task. If you add a new tool next week, the agent discovers it automatically. No rewiring.

I rebuilt a client’s n8n automation using this pattern last month. The original workflow had 23 nodes for a lead qualification process. The MCP-based version has 4 nodes: trigger, agent call, validation, and output. The agent handles the tool selection internally.
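
The runtime-discovery pattern can be sketched in a few lines. The tool names and the keyword-matching "planner" below are hypothetical stand-ins for a real model's tool selection; the point is the shape of the loop, not the selection logic.

```python
from typing import Callable

# Registry of tools exposed to the agent. In a real setup these would be
# discovered from connected MCP servers, not registered locally.
TOOLS: dict[str, Callable[[str], str]] = {}

def register(name):
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register("score_lead")
def score_lead(lead): return f"scored:{lead}"

@register("notify_sales")
def notify_sales(lead): return f"notified:{lead}"

def agent_run(task, lead):
    # The agent enumerates available tools at runtime; adding a newly
    # registered tool next week requires no rewiring of this loop.
    plan = [name for name in TOOLS if name.split("_")[0] in task]
    return [TOOLS[name](lead) for name in plan]

print(agent_run("score and notify", "acme-inc"))
```

Contrast this with a build-time workflow, where adding a tool means editing the graph itself.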

2. Cross-Tool Actions Become Native

Before MCP, getting an AI agent to read from your CRM, check inventory in your ERP, and send a Slack notification required three separate API integrations with three different authentication flows. Each one was a potential failure point.

MCP standardizes the connection layer. The agent authenticates once through the MCP client, and every MCP server it connects to follows the same protocol. One authentication pattern. One data format. One error handling approach.

This is what I mean when I talk about the AI integration gap. MCP is the first credible answer to the “how do I connect all this” problem at the SMB level.
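
The "one error handling approach" point can be shown with a single call wrapper that works against any server. The servers below are hypothetical in-memory stubs standing in for real MCP transports; the takeaway is that one request shape and one error shape cover the CRM, the ERP, and Slack alike.

```python
# One wrapper, every server: because the protocol is uniform, error
# handling is written once instead of per integration.
class ToolError(Exception):
    pass

def call_tool(server, name, args):
    # Same request shape and same error shape regardless of which
    # system sits behind the server.
    handler = server.get(name)
    if handler is None:
        raise ToolError(f"unknown tool: {name}")
    return {"content": handler(args)}

# Hypothetical stub servers standing in for real MCP transports.
crm = {"lookup_contact": lambda a: f"contact:{a['email']}"}
slack = {"post_message": lambda a: f"posted:{a['text']}"}

print(call_tool(crm, "lookup_contact", {"email": "x@y.com"}))
print(call_tool(slack, "post_message", {"text": "inventory low"}))
```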

3. Computer Use Becomes a First-Class Capability

This is the part that still surprises me. Claude’s computer use through MCP isn’t a gimmick. It means AI agents can interact with applications that don’t have APIs.

That legacy accounting software your team uses? The one with no API and an interface from 2009? An MCP-enabled agent can navigate it. The internal portal that only works in a specific browser? Same. The government compliance form that requires manual data entry into 47 fields? The agent fills it out.

I tested this with a client’s property management software that predates the concept of API access. Claude navigated the interface, pulled tenant payment data, and compiled it into a report. Took about 3 minutes for what previously took their office manager 90 minutes every Monday morning.

The Economics Just Tipped

Here’s why this week specifically matters. Gemini 3.1 Flash-Lite launched today (March 31) at $0.25 per million input tokens. That’s the compute layer that makes MCP-powered agents economically viable for small businesses.

Running a basic MCP-powered workflow with 20 tool calls costs pennies at that pricing. Six months ago, the same workflow on a frontier model would have run $2-5 per execution. For a workflow that runs 50 times a day, that’s the difference between $100/month and $2,500/month. The first number makes sense for a 10-person company. The second doesn’t.

Cheap compute plus standardized connectivity equals AI agents that small businesses can actually afford to run continuously. The infrastructure economics crossed a threshold this month that I don’t think the market has fully priced in yet.

What Is an MCP Server?

An MCP server is a lightweight program that exposes specific tools, data sources, or actions to AI agents through the Model Context Protocol. It acts as a bridge between an AI agent and an external system. For example, an MCP server for Slack lets an AI agent read channels, post messages, and manage threads without custom API code. MCP servers can be built in Python, TypeScript, or any language that implements the protocol spec, and typically take 30-60 minutes to deploy for common tools.
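
The server side is small enough to sketch. This is a simplified JSON-RPC dispatch loop modeled on the protocol's method names; a production server would use the official MCP SDK and a real transport (stdio or HTTP) rather than this hand-rolled handler, and the Slack tool shown is a hypothetical stub.

```python
import json

# Hypothetical tool table for a toy Slack-like server.
TOOLS = {
    "post_message": {
        "description": "Post a message to a channel",
        "handler": lambda args: f"posted to #{args['channel']}: {args['text']}",
    }
}

def handle(raw: str) -> str:
    # Dispatch one JSON-RPC request: list tools, call a tool, or error.
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": tool["handler"](req["params"]["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "post_message",
               "arguments": {"channel": "ops", "text": "rent report ready"}},
}))
print(reply)
```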

How to Get Your Stack MCP-Ready

Step 1: Map Your Current Tool Connections

List every tool your AI workflows touch. CRM, project management, file storage, communications, analytics. For each one, note whether it has an MCP server available. The MCP server registry has grown to hundreds of pre-built servers. Your tools are probably covered already.
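
The mapping exercise can be as simple as a table of tools and a yes/no column; the entries below are hypothetical examples, not registry lookups.

```python
# Step 1 audit sketch: map each tool your workflows touch to whether a
# pre-built MCP server exists for it. Entries are illustrative.
STACK = {
    "Airtable": True,            # pre-built server available
    "Slack": True,
    "LegacyAccounting": False,   # no server yet: a computer-use candidate
}

gaps = sorted(name for name, has_mcp in STACK.items() if not has_mcp)
print("MCP gaps:", gaps)
```

The gaps list is your priority order: each entry is either a server to build or a workflow to route through computer use.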

Step 2: Check Your AI Platform’s MCP Support

If you’re using Claude (Claude Code, Claude Desktop, or the API), MCP support is native. If you’re using OpenAI’s tools, check their MCP integration documentation. Same for Google’s AI Studio and Microsoft’s Copilot Studio.

If your AI platform doesn’t support MCP yet, that’s your biggest risk factor. When it eventually does (and it will — the standard is too dominant to ignore), every workflow you built without MCP in mind will need rework. The same model-agnostic principle I wrote about with AI stack expiration dates applies here. Build for the standard, not the vendor.

Step 3: Start With One High-Value Workflow

Don’t try to MCP-enable everything at once. Pick one workflow where the AI agent currently gets stuck because it can’t access a tool. Add the MCP server for that tool. Test the agent with the new capability.

For most of my clients, the highest-value first target is connecting their AI to internal documents and data. An MCP server for their file system or knowledge base turns a general-purpose AI into one that actually knows their business.

Step 4: Test Agent Autonomy Incrementally

MCP-enabled agents can do more than you expect. Start with supervised mode where the agent proposes actions and you approve them. Watch what it tries to do. Once you trust the patterns, expand its autonomy. This is the same incremental approach I recommend across the AI implementation spectrum — start narrow, expand with evidence.

Step 5: Monitor What Breaks

When you add MCP to existing workflows, some things will break. Rate limits on tools that weren’t designed for agent-speed access. Authentication tokens that expire faster than expected. Data format mismatches between what the MCP server provides and what your downstream systems expect.

Budget a week of monitoring after your first MCP deployment. Fix the friction points before scaling.
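
The rate-limit friction in particular has a standard remedy: wrap tool calls in retry-with-backoff so agent-speed access degrades gracefully instead of failing. The `RateLimited` exception and `flaky_tool` below are hypothetical stand-ins for whatever your MCP client surfaces on a 429.

```python
import time

class RateLimited(Exception):
    pass

def with_backoff(call, retries=3, base_delay=0.01):
    # Retry with exponential backoff: 0.01s, 0.02s, 0.04s, ...
    for attempt in range(retries):
        try:
            return call()
        except RateLimited:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

attempts = {"n": 0}
def flaky_tool():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited()  # simulate two rate-limit rejections
    return "ok"

print(with_backoff(flaky_tool))
```

In production you would also log each retry, since a tool that needs backoff constantly is a signal to renegotiate its rate limits before scaling the agent.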

What Catches People Off Guard

MCP is becoming a requirement, not an option. When major platforms adopt a standard this aggressively, tool vendors follow. I’m already seeing SaaS tools advertise “MCP-compatible” in their feature lists. Within a year, “MCP support” will be a checkbox on enterprise procurement forms, right next to SOC 2 and GDPR compliance. If your internal tools don’t speak MCP, your AI agents operate with one hand tied behind their back.

The protocol is open. Unlike most vendor-created standards, MCP is genuinely open-source. Anthropic created it, but they don’t control it in a way that locks out competitors. That’s why OpenAI and Google adopted it willingly. Nobody wants to build against a proprietary standard they don’t own. Open standards get adopted. Proprietary ones get resisted. MCP chose the right path.

Agent capabilities compound. Every MCP server you add to your environment gives your AI agents a new capability. And capabilities compound. An agent that can read your CRM and check your calendar is useful. Add Slack access and it can schedule meetings with prospects automatically. Add your invoicing tool and it can follow up on overdue payments without human intervention.

I’ve watched a client go from “AI helps with email drafts” to “AI runs our entire client onboarding process” in the span of six MCP server additions over two months. Each addition was small. The cumulative effect was a self-funding AI portfolio that pays for itself three times over.

The Three Things to Do This Week

  1. Check your MCP exposure. Open your AI tool of choice. Ask it what MCP servers are available. If the answer is “none,” you’re behind. Start with the MCP server registry and identify the 3-5 servers that map to your most-used tools.

  2. Test one agentic workflow. Pick a task that currently requires you to copy-paste between two tools. Set up MCP servers for both tools. Ask your AI agent to do the task end-to-end. Time it. Compare.

  3. Audit your vendor stack for MCP readiness. Every tool in your workflow either supports MCP, will support it soon, or will become a bottleneck. Know which category each of your tools falls into before the compatibility requirement catches you off guard.

97 million installs in 12 months. MCP isn’t a spec to watch anymore. It’s infrastructure to build on. The businesses that wire their tools into this protocol now will have AI agents that actually work. The ones that wait will find out what happens when the ecosystem moves without them.

I’ve seen that movie before. It ends with an emergency migration that costs 10x what the planned integration would have.

