Model Context Protocol (MCP) has emerged rapidly in the AI community, enabling AI agents to dynamically discover and execute API methods. It is a major advance for Agentic AI across all industries; for telecom operators, it represents an inflection point in their API strategies, and perhaps a disruptive one. This page unpacks the key opportunities, challenges, and risks MCP presents for telcos.
Opportunity: Moving from a connectivity provider to an intelligent solution enabler
Communication Service Providers are keen to transform their business model from that of a simple network connectivity provider into one of new solution offerings that further monetize network intelligence capabilities and move them up the value chain. Initiatives such as the GSMA Open Gateway CAMARA APIs and the 3GPP Network Exposure Function (NEF) aim to monetize network services through carefully constructed, committee-defined open APIs, often based on a Service Oriented Architecture (SOA). These APIs enable application developers and hyperscalers to build intelligent solutions on top of the operators’ core networks.
With the wave of AI investment in the telecom market, the emergence of Agentic AI solutions brings a new opportunity for operators to enable intelligent solutions. Model Context Protocol is a recent development, introduced by Anthropic in November 2024 and since adopted by key AI providers. MCP can be considered the USB-C of AI: it standardizes the interface between Large Language Models (LLMs) and software applications. MCP allows LLMs to access application APIs as tools to retrieve and comprehend data, diagnose problems, propose solutions, and execute actions. Its session-based context gives LLMs the ability to iterate a task to completion with minimal guidance in the prompt.
MCP is fast becoming a game-changer across industries. In telecom, at a minimum MCP will complement the existing Open API strategies and SOA framework to provide an intelligent layer on top of the network. Operators can expose network services that Agentic AI agents can consume in a myriad of ways at a low cost of development. For example, an agent could prioritize network bandwidth from a home gateway according to household requirements, all configured using simple natural language: “For the next two hours, give priority to my game system to minimize latency.”
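To make this concrete, here is a minimal sketch of how such a capability could be exposed, assuming the official MCP TypeScript SDK (@modelcontextprotocol/sdk); the tool name, parameters, and gateway behavior are hypothetical illustrations rather than any real CAMARA or operator API.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical MCP server exposing a home-gateway QoS capability as a tool.
const server = new McpServer({ name: "home-gateway-qos", version: "0.1.0" });

server.tool(
  "set_traffic_priority",
  "Give a named household device priority on the home network for a limited time window.",
  {
    device: z.string().describe("Device as known to the gateway, e.g. 'game system'"),
    minutes: z.number().int().positive().describe("Duration of the priority rule in minutes"),
  },
  async ({ device, minutes }) => {
    // A real implementation would call the operator's QoS or NEF API here;
    // this placeholder simply acknowledges the request.
    return {
      content: [{ type: "text", text: `Prioritizing ${device} for ${minutes} minutes.` }],
    };
  }
);

// Expose the server over stdio so a local agent host can connect to it.
await server.connect(new StdioServerTransport());
```

Given the prompt above, an agent connected to this server could work out on its own that calling set_traffic_priority with device "game system" and minutes 120 satisfies the request, with no bespoke integration code on the application side.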
Opportunity: Streamlining internal operations
Service Providers have long worked to standardize their internal infrastructure to reduce the costs associated with proprietary implementations and vendor lock-in. The TM Forum Open API initiative is chartered with defining open API specifications so that vendor products can adhere to common standards, enabling dual sourcing and seamless deployment across multi-country global operators.
The downside of API standardization is that it requires a weighty standards committee and a large commitment from vendors. It can also stifle vendor innovation.
With the emergence of MCP and Agentic AI, the need for API standardization is diminished. An MCP server provides a natural-language description of the application or product capabilities (as “tools”). An LLM-based Agentic AI agent can then figure out how to use a product API without the burden of traditional API integration. This capability promises to streamline operations and lower operators’ IT costs, even when the vendor product exposes proprietary APIs.
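As an illustration, a vendor (or the operator’s IT team) could wrap a proprietary fault-management API in an MCP tool whose description carries all the knowledge the agent needs. The sketch below assumes the same TypeScript SDK; the EMS endpoint, field names, and environment variable are made up for the example.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical wrapper around a vendor's proprietary EMS alarm API.
const server = new McpServer({ name: "vendor-ems", version: "0.1.0" });

server.tool(
  "list_open_alarms",
  "Return the currently open alarms from the vendor EMS, optionally filtered by severity.",
  { severity: z.enum(["critical", "major", "minor"]).optional() },
  async ({ severity }) => {
    // The proprietary endpoint is hidden behind the tool; the agent only sees the
    // description above and the JSON result, so no standardized API is needed.
    const url = new URL("https://ems.example.internal/api/v2/alarms");
    if (severity) url.searchParams.set("perceivedSeverity", severity);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${process.env.EMS_TOKEN}` },
    });
    return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
  }
);

await server.connect(new StdioServerTransport());
```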
Opportunity: Faster innovation, improved composability
MCP enables Service Providers and application developers to leverage AI models to quickly build, trial, and iterate new services. The cost of entry is lowered by leveraging LLM capabilities. Thanks to MCP’s session context, the AI models themselves can discover which of the available tools can perform the task at hand.
Soon, the telecom network, BSS, and OSS systems will expose many MCP servers. A single product will also provide multiple MCP servers for its different functions. AI models will make use of several MCP servers at once, either directly or through composite MCP services. All of this makes it very easy to deploy an army of Agentic AI agents to autonomously perform a multitude of tasks. Tools already exist to create MCP servers from OpenAPI specifications and standard databases with little code, and it won’t be long before vendors start packaging MCP servers with their products.
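On the consuming side, an agent host can enumerate the tools offered by each of those servers and hand the catalogue to the LLM, which decides what to call. A sketch, again assuming the TypeScript SDK; the server commands listed are placeholders.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical agent host discovering tools from two locally launched MCP servers.
const servers = [
  { label: "home-gateway-qos", command: "node", args: ["gateway-server.js"] },
  { label: "vendor-ems", command: "node", args: ["ems-server.js"] },
];

for (const s of servers) {
  const client = new Client({ name: "telco-agent", version: "0.1.0" });
  await client.connect(new StdioClientTransport({ command: s.command, args: s.args }));

  // The returned tool list (names, descriptions, input schemas) is what gets placed
  // in the LLM's context so it can pick the right tool for the task at hand.
  const { tools } = await client.listTools();
  console.log(s.label, tools.map((t) => t.name));
}
```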
Risks: Competitive
Given the emerging prevalence of MCP in IT and telecom products, a service provider that embraces this new technology as an early adopter will gain an advantage, both in efficiency and in service offerings. Conversely, the risk of inaction is severe. Hyperscalers will become a larger competitive threat, given their faster pace of innovation and massive investments in AI technology. Ignoring the inflection point brought about by MCP represents a large risk to an operator’s commercial strategy. Religious faith in standards-based API specifications won’t hold back the AI technology wave, and operators who cling to it will find themselves stranded on a technology island.
Risks: Security, privacy and governance
MCP unlocks significant capabilities, but it also brings critical security, privacy, and governance challenges that demand careful oversight. MCP functions as a powerful discovery layer, yet in doing so it may inadvertently expose API endpoints, core business logic, and underlying data models to automated rogue agents.
MCP represents a large attack surface, with multiple risks:
- Metadata leakage - MCP stores detailed model metadata, including data sources, parameters, and decision logic. If compromised, attackers could infer sensitive dataset characteristics, model vulnerabilities, business logic or trade secrets.
- Access control - Poorly configured MCP systems might allow over-permissioned users to access sensitive records (e.g., model explanations based on private data). There is a risk of insider threats abusing access to audit logs or training history.
- Data exposure - MCP often retains data lineage and history for compliance. Retained historical data might violate data minimization principles under GDPR or similar laws.
- Model inversion - Attackers could use MCP logs to reconstruct how a model was trained—enabling model inversion or membership inference attacks.
- Misconfigured logging - Logging sensitive information (e.g., PII in input examples, labels) into the MCP without proper redaction can lead to privacy violations.
Consequently, security must be approached not as a late-stage implementation detail, but as a foundational element of the solution’s architecture, deployment practices, and operational governance. This requires a comprehensive, multi-layered security approach. To safeguard MCP, the following controls are required; a brief sketch of how some of them can be applied to an MCP tool follows the list.
- Strong authentication & access control
- Input validation and output sanitization guardrails
- Encryption in transit (and at rest)
- Comprehensive logging and monitoring
- Sandboxing and resource isolation
- Version control and patching discipline
- Namespace and command management
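By way of illustration, several of these controls (input validation, keeping credentials out of the agent’s reach, and redacted logging) can be applied directly inside a tool handler. This is a simplified sketch using the same TypeScript SDK; the account-ID format, token variable, and redaction rule are hypothetical, not a complete security implementation.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "subscriber-lookup", version: "0.1.0" });

// Crude output/log sanitization: mask anything that looks like an MSISDN.
const redact = (text: string) => text.replace(/\+?\d{9,15}/g, "[REDACTED-MSISDN]");

server.tool(
  "get_subscriber_status",
  "Look up the service status for a subscriber account.",
  // Input validation: constrain the account ID format instead of accepting free text.
  { accountId: z.string().regex(/^[A-Z0-9]{8}$/) },
  async ({ accountId }) => {
    // Access control: the server holds the back-end credential; the agent never sees it.
    if (!process.env.CRM_SERVICE_TOKEN) {
      return {
        content: [{ type: "text", text: "Service credential not configured." }],
        isError: true,
      };
    }
    const status = `Account ${accountId}: active`; // placeholder for a real CRM call
    console.info(redact(`lookup -> ${status}`)); // logging with redaction applied
    return { content: [{ type: "text", text: status }] };
  }
);

await server.connect(new StdioServerTransport());
```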
In addition to these security practices, a formal governance program should be established for MCP and other AI technologies, covering:
- Executive sponsorship and governance board
- AI policies
- Risk and compliance management
- Data and model governance
- User education
- Ethical and responsible AI practices
- Auditing
MCP is a critical technology development that will soon bring Agentic AI into many facets of our lives. Telecom operators must embrace this technology or risk being left behind. The opportunity is there now for service providers with the initiative to be pioneers. While success requires a strong risk-management approach, the results will be well worth it.