Full Article
MCP crossed 97 million installs in 16 months - the agent connectivity standard is settled
Anthropic's Model Context Protocol reached 97 million installs on March 25, 2026, with every major AI provider now shipping MCP-compatible tooling. The fragmentation era for agent-to-tool integration is over.
Technology standards rarely emerge this fast, and when they do, the speed itself carries analytical signal. The React npm package took approximately three years to reach 100 million monthly downloads; Docker took roughly four years to reach comparable developer penetration. Anthropic's Model Context Protocol achieved comparable scale in 16 months - from launch in November 2024 to 97 million installs by March 25, 2026. The velocity matters not as a vanity metric but as evidence of something structurally unusual: MCP did not win on feature completeness or technical elegance. It won because it arrived at the precise moment when enterprises were trying to move AI agents from single-tool demonstrations into multi-tool production workflows, and no competing standard existed. Timing and ecosystem alignment, not differentiated capability, explain the adoption curve.
The adoption timeline is a cascade of institutional endorsements that compressed the typical standards cycle from years into months. Anthropic launched MCP with roughly 2 million monthly downloads in November 2024 - a respectable but modest base. OpenAI's adoption in April 2025 was the first significant signal of cross-vendor legitimacy; it pushed installs to 22 million and established that MCP would not be a single-vendor protocol. Microsoft's integration into Copilot Studio in July 2025 added enterprise distribution at scale, bringing the total to 45 million. AWS Bedrock support in November 2025 brought the total to 68 million. By March 2026, every major AI vendor - OpenAI, Google, Microsoft, AWS, and Cloudflare - was shipping MCP-compatible tooling, and the Linux Foundation's Agentic AI Foundation had formalised the governance structure, signalling that the protocol was transitioning from a vendor initiative to a managed open standard. Each endorsement reinforced the next, in the classic pattern of platform tipping.
The operational significance is substantial and easy to understate. One of the most persistent hidden costs in AI agent deployment has been bespoke integration. An agent that reasons well but accesses tools through custom-built glue code, fragile JSON parsers, undocumented API call patterns, and scattered permission logic creates a maintenance and debugging overhead that compounds with every new tool added to the system. The result has been that enterprise AI deployments tend to be narrower than their technical capabilities would suggest - not because the models cannot handle complexity, but because the integration surface becomes unmanageable. MCP provides a standardised, auditable surface for tool discovery, access, and execution that reduces that integration tax at every step. Tool vendors publish MCP-compatible interfaces once; agent developers consume them through a common protocol. That standardisation is not glamorous, but it is the thing that allows agent capability to scale across an enterprise rather than being hand-crafted per use case.
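What "a standardised, auditable surface" means in practice is easiest to see at the wire level. MCP frames tool discovery and execution as JSON-RPC 2.0 methods (`tools/list` and `tools/call`); the sketch below, using only the standard library, illustrates that framing. The message field names follow the published specification, but the server and the `query_prices` tool are invented for illustration.

```python
import json

# A hypothetical MCP-style server advertising one tool. The JSON-RPC 2.0
# framing and field names (tools/list, tools/call, inputSchema, content)
# follow the MCP specification; the tool itself is illustrative.
TOOLS = {
    "query_prices": {
        "description": "Fetch day-ahead prices for a market zone.",
        "inputSchema": {
            "type": "object",
            "properties": {"zone": {"type": "string"}},
            "required": ["zone"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request to the matching MCP method."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        # A real server would execute the tool here; we echo the call.
        result = {"content": [{"type": "text",
                               "text": f"prices for {args['zone']}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# The agent side never sees the tool's implementation, only the protocol.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "query_prices",
                          "arguments": {"zone": "DE"}}})
print(json.dumps(listing))
print(json.dumps(call))
```

The point of the example is the asymmetry: the tool vendor writes the right-hand side once, and every MCP-capable agent can discover and invoke it through the same two methods, with no bespoke glue code per integration.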
For the competitive landscape, the protocol question is now settled - which means the competition has visibly shifted to managed services on top of the standard. The analogy is TCP/IP: once the transport and addressing layers were agreed in the early 1980s, the competitive action moved to what ran over them. Email, the web, and eventually streaming video were not protocol innovations; they were application-layer innovations that the standardised transport infrastructure made possible at scale. For MCP, the analogous next competition is over managed MCP gateways with enterprise identity and access management integration, audit tooling that satisfies compliance requirements, tool marketplaces with density and quality guarantees, and orchestration infrastructure that handles multi-step agent workflows reliably. These are the service layers where platform advantage will accrue over the next three to five years.
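The managed-gateway layer described above is also easy to make concrete. A gateway sits between agents and tool servers, enforcing identity-based policy and writing an audit trail before forwarding each call. The sketch below is a minimal illustration of that pattern under stated assumptions: the role names, policy table, and tool are all hypothetical, and the upstream forwarding step is stubbed out.

```python
import json
import time

# A hypothetical MCP gateway: every tool call passes through a policy
# check and an append-only audit log before it is forwarded upstream.
# Roles, policy entries, and the tool name are illustrative.
POLICY = {"analyst": {"query_prices"}, "viewer": set()}
AUDIT_LOG: list[dict] = []

def gateway_call(role: str, tool: str, arguments: dict) -> dict:
    """Authorise, log, and (notionally) forward an MCP-style tool call."""
    allowed = tool in POLICY.get(role, set())
    AUDIT_LOG.append({"ts": time.time(), "role": role, "tool": tool,
                      "arguments": arguments, "allowed": allowed})
    if not allowed:
        return {"error": {"code": -32000, "message": "permission denied"}}
    # Forwarding to the upstream MCP server is stubbed out here.
    return {"result": {"content": [
        {"type": "text", "text": f"{tool}({json.dumps(arguments)})"}]}}

print(gateway_call("analyst", "query_prices", {"zone": "DE"}))
print(gateway_call("viewer", "query_prices", {"zone": "DE"}))
print(f"{len(AUDIT_LOG)} calls audited")
```

Because the protocol beneath the gateway is standard, a vendor competing at this layer differentiates on exactly the things shown: the richness of the policy model, the quality of the audit trail, and the reliability of the forwarding, not on the integration format itself.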
The enterprise procurement implication is that MCP has changed the unit of evaluation for AI agent infrastructure. Before a widely adopted standard existed, organisations evaluating AI agent platforms had to assess each vendor's proprietary integration approach - a comparison that was difficult to make objectively and that created high switching costs once a vendor was embedded. With MCP as the common layer, enterprises can now evaluate agent platforms on the quality of their gateway services, their IAM integration, and their marketplace without being locked into a proprietary integration architecture. That increases competitive pressure on existing platform vendors and lowers the barrier to entry for specialist infrastructure providers who can build excellent managed services on top of an open protocol.
For energy modellers and quantitative practitioners specifically, MCP's maturation opens a concrete and valuable workflow path. The typical quantitative research or market modelling environment involves a combination of internal data warehouses, third-party market data APIs, optimisation solvers, statistical computing environments, and bespoke analytical tools that have historically required custom integration work for each agent or automation workflow. MCP creates the possibility of accessing all of those resources through a single governed protocol - with tool discovery, permission management, and audit logging standardised at the infrastructure level rather than built per integration. That is the kind of infrastructure shift that has historically preceded step-changes in analytical productivity: when the plumbing becomes standard, the engineering effort can concentrate on the problem rather than on the connection.
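To ground that workflow claim, consider how heterogeneous analytical tools - a data pull, an optimiser, a statistics routine - look once they sit behind one discovery interface. The sketch below is an assumption-laden toy, not an MCP implementation: the tools are stubbed with fixed data and the registry is a plain dictionary, but it shows the shape of "register once, discover and chain uniformly".

```python
import inspect

# A toy registry standing in for MCP-style tool discovery in an analytics
# stack. Tool names, signatures, and data are invented for illustration.
REGISTRY: dict = {}

def tool(fn):
    """Register a function as a discoverable tool with its parameter names."""
    REGISTRY[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "params": list(inspect.signature(fn).parameters),
        "fn": fn,
    }
    return fn

@tool
def fetch_load(zone: str) -> list:
    """Hourly load for a zone, in GWh (stubbed with fixed data)."""
    return [41.0, 39.5, 44.2]

@tool
def dispatch_cost(load: list, price: float) -> float:
    """Cost of serving a load profile at a flat price."""
    return sum(load) * price

# An agent can now discover both tools and chain them without bespoke glue.
names = sorted(REGISTRY)
cost = REGISTRY["dispatch_cost"]["fn"](REGISTRY["fetch_load"]["fn"]("DE"), 50.0)
print(names, cost)
```

The productivity argument in the paragraph above is that the registry, permissions, and logging become infrastructure someone else maintains; the modeller only writes the functions.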
The governance dimension deserves particular attention because it determines whether MCP will remain an open standard or fragment into incompatible vendor-specific extensions. The Linux Foundation's Agentic AI Foundation provides a formal governance structure - a technical steering committee, a process for specification changes, and intellectual property protections - that makes the standard more durable than a vendor-controlled protocol. The historical precedents for open-protocol governance in technology infrastructure - HTTP, SMTP, OAuth - suggest that foundation governance tends to preserve interoperability while allowing vendor differentiation at the service layer. If that pattern holds for MCP, the foundation governance model will prove to be as important to the protocol's long-term value as the technical specification itself.
Model View
Agent system performance = model quality × tool access quality × interface reliability. If the interface layer (MCP) is standardised and stable, overall system performance scales with the model and tooling rather than being bottlenecked by integration fragility.
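The interface-reliability term deserves a worked example, because it compounds: if each tool call succeeds independently with probability r, an n-step agent workflow completes with probability r^n. The figures below are illustrative, but they show why even small gains in interface stability dominate for long multi-tool workflows.

```python
# Compounding of per-call interface reliability over a multi-step
# agent workflow. Reliability figures are illustrative, not measured.
def workflow_success(r: float, n_calls: int) -> float:
    """Probability an n-call workflow completes if each call succeeds with probability r."""
    return r ** n_calls

for r in (0.999, 0.99, 0.95):
    print(f"r={r}: 50-call workflow succeeds {workflow_success(r, 50):.1%}")
```

At 50 tool calls, the gap between a 99.9% and a 95% reliable interface layer is the gap between a workflow that usually completes and one that almost never does, which is why the model treats interface reliability as a multiplicative factor rather than an additive cost.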
Bottom Line
The agent connectivity standard is decided - the next race is over managed services, marketplace density, and who extracts value from the protocol layer above.