AI Strategy

What If the Protocols End Up Mattering More Than the Models?


The entire conversation right now is about models. Who has the biggest one. Who has the fastest one. But a different question deserves attention: what about the connective tissue? The protocols that determine how AI systems talk to tools, to data, to each other? In December 2025, Anthropic donated the Model Context Protocol to the Linux Foundation, with OpenAI, Google, Microsoft, and AWS joining as founding members of the Agentic AI Foundation (Linux Foundation, 2025). Competing model providers are converging on shared protocol infrastructure. That is not a technical footnote. It is a signal about where durable value will form.

The hypothesis: the long-term competitive landscape in AI may be shaped more by protocol and interoperability standards than by model performance, because history consistently shows that protocols define the conditions under which products create value in real contexts.

Three Takeaways

First, standards outlast products. The TCP/IP protocol suite, developed in the 1970s (Cerf & Kahn, 1974), remains the foundation of the internet while countless applications have risen and fallen on top of it. In every major technology cycle, the protocol layer becomes the persistent infrastructure while the products that depend on it are replaced. Model performance will continue to improve and differentiate in the short term. But the connective layer, meaning the standards that determine how models access tools, data, and organizational context, is where durable architectural advantage forms.

Second, protocols solve the N-by-M integration problem that currently constrains enterprise AI deployment. Every AI model that needs to connect to a data source, tool, or workflow currently requires a custom integration. If you have N models and M tools, you need N times M connectors. A well-designed protocol collapses this to N plus M, because each model implements the protocol once as a client and each tool exposes it once as a server. This is the structural logic behind the Model Context Protocol and Google's Agent-to-Agent Protocol. MCP reached 97 million monthly SDK downloads within its first year, with adoption across every major AI platform (Linux Foundation, 2025). The speed of convergence reflects the severity of the integration problem the protocol solves.
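The arithmetic above is easy to verify. A quick sketch (the counts are illustrative, not drawn from any vendor's actual catalog):

```python
def custom_integrations(n_models: int, m_tools: int) -> int:
    """Point-to-point: every model needs a bespoke connector to every tool."""
    return n_models * m_tools


def protocol_integrations(n_models: int, m_tools: int) -> int:
    """Shared protocol: each model implements one client, each tool one server."""
    return n_models + m_tools


# The gap widens quickly as either side of the ecosystem grows:
# 5 models and 40 tools means 200 bespoke connectors, but only
# 45 protocol implementations.
for n, m in [(2, 10), (5, 40), (10, 100)]:
    print(f"{n} models x {m} tools: "
          f"{custom_integrations(n, m)} bespoke vs "
          f"{protocol_integrations(n, m)} protocol-based")
```

Note that the bespoke count grows multiplicatively while the protocol count grows additively, which is why the integration burden, not model quality, is often the binding constraint on deployment.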

Third, protocol adoption creates network effects that encode boundaries between human and machine decision-making into infrastructure. Katz and Shapiro (1985) demonstrated that the value of a network good increases with the number of users, creating path dependencies that make established standards self-reinforcing. Once a critical mass of developers builds on a protocol, switching costs escalate and the protocol becomes infrastructure. The design choices embedded in MCP, specifically what context a model can access, what tools it can invoke, and what approval flows are required, are governance decisions being made at the protocol level.

The Longer View

The history of infrastructure standardization provides the structural parallel. In the early decades of American railroads, competing companies used different track gauge widths, making it impossible to run trains across networks. The eventual standardization to 4 feet 8.5 inches enabled an interconnected national system. The gauge itself was unremarkable technology. Its value was interoperability. The AI ecosystem is currently in its multi-gauge era: every vendor's tools connect differently, every integration is bespoke, and the friction of incompatibility constrains what organizations can build. Whoever sets the standard gauge captures the ecosystem, not because the gauge is technically superior, but because it is the gauge everyone else builds on.

Network economics explains why this dynamic compounds. Katz and Shapiro (1985) formalized the observation that network goods exhibit increasing returns to adoption: each additional user makes the network more valuable for all existing users. Applied to AI protocols, this means early adoption advantages are self-reinforcing. The organizations and developers who build on a protocol early shape its evolution, and latecomers face integration architectures that were designed without their constraints in mind.
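Katz and Shapiro's formal model is richer than any one-liner, but a stylized Metcalfe-style proxy, where total network value tracks the number of possible pairwise connections, captures the increasing-returns intuition (the formula here is an illustration, not their actual specification):

```python
def network_value(n: int) -> int:
    """Stylized proxy: value ~ number of possible pairwise connections."""
    return n * (n - 1) // 2


def marginal_value(n: int) -> int:
    """Value added by the n-th adopter: they can connect to n - 1 incumbents."""
    return network_value(n) - network_value(n - 1)


# Each newcomer contributes more than the last, so adoption feeds on itself:
# the 10th adopter of a protocol adds 9 new connections, the 100th adds 99.
print(marginal_value(10), marginal_value(100))
```

This is the mechanism behind the path dependency in the paragraph above: once marginal value keeps rising, the established standard becomes the rational default for every new entrant.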

Lessig (1999) argued that technical architectures function as regulatory systems: code constrains behavior in ways that are functionally equivalent to law. Applied to AI protocols, the design choices embedded in MCP and A2A, specifically what data flows are permitted, what oversight mechanisms are required, and what logging and audit trails are built in, are regulatory decisions being made by engineers rather than legislators. Organizations that participate in shaping those protocols are shaping the governance environment for AI. Organizations that do not participate are accepting governance terms written by others.

My Two Cents

I started paying attention to the protocol layer early because I noticed that the conversations about AI strategy almost never included it. Model selection gets executive attention. Protocol architecture does not. But when I look at where organizational lock-in actually forms, it is not at the model layer. Models are increasingly interchangeable. The integration layer, meaning how your systems connect to AI capabilities and how those connections are governed, is where architectural decisions become difficult to reverse.

The formation of the Agentic AI Foundation tells us something important: the companies building the most capable models have concluded that competing on protocols is less valuable than converging on shared infrastructure. When Anthropic, OpenAI, and Google all agree to build on the same connective layer, the strategic implication for every other organization is clear. The protocol layer is not where you differentiate. It is where you participate or get designed around.

Try This

Start by understanding what protocols your current AI tools depend on. Ask your technical teams about MCP, A2A, and the emerging standards landscape. Build your AI architecture with interoperability as a design constraint, not an afterthought.
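To make "interoperability as a design constraint" concrete, here is a minimal sketch of the pattern a protocol like MCP institutionalizes: callers depend on one shared contract rather than on concrete tools. The `ToolServer` interface and `SearchTool` below are hypothetical stand-ins, not MCP's actual API:

```python
from typing import Any, Protocol


class ToolServer(Protocol):
    """One shared contract, standing in for a real standard such as MCP.

    Any tool that satisfies it is reachable by any compliant client."""
    name: str

    def describe(self) -> dict[str, Any]: ...
    def invoke(self, arguments: dict[str, Any]) -> Any: ...


class SearchTool:
    """A hypothetical tool implementing the shared contract."""
    name = "search"

    def describe(self) -> dict[str, Any]:
        return {"name": self.name, "arguments": {"query": "string"}}

    def invoke(self, arguments: dict[str, Any]) -> Any:
        return f"results for {arguments['query']}"


def call_tool(tool: ToolServer, arguments: dict[str, Any]) -> Any:
    # The caller depends only on the contract, never on the concrete tool,
    # so adding a model or a tool touches one side of the interface, not both.
    return tool.invoke(arguments)


print(call_tool(SearchTool(), {"query": "protocol standards"}))
```

The design choice to encode is the same one the post argues for: keep your systems coupled to the standard, not to any single vendor's connector, so swapping models later is an O(1) change rather than a rewrite.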

Read to Learn More

Academic: Katz, M. L., & Shapiro, C. (1985). Network externalities, competition, and compatibility. American Economic Review, 75(3), 424–440.

Industry: Linux Foundation. (2025, December 9). Linux Foundation announces the formation of the Agentic AI Foundation (AAIF). Linux Foundation.

References

Cerf, V. G., & Kahn, R. E. (1974). A protocol for packet network intercommunication. IEEE Transactions on Communications, 22(5), 637–648.

Katz, M. L., & Shapiro, C. (1985). Network externalities, competition, and compatibility. American Economic Review, 75(3), 424–440.

Lessig, L. (1999). Code and other laws of cyberspace. Basic Books.

Linux Foundation. (2025, December 9). Linux Foundation announces the formation of the Agentic AI Foundation (AAIF). Linux Foundation.