How context-aware agents and open protocols are driving real-world success in enterprise AI

Artificial intelligence is moving from experimentation to operational applications. The excitement around large language models (LLMs) introduced many organizations to what AI could do, sparking a wave of pilots and prototype agents.
But as businesses push these systems into production, they encounter a key obstacle: general-purpose models lack the real-time operational context that business decisions require.
Principal Solutions Analyst for Cisco ThousandEyes.
LLMs are amazing, but they are built for breadth, not depth. They excel at conversation and summarization, but lack the real-time, domain-specific context on which business decisions depend.
A chatbot can discuss financial rules, but cannot determine whether a particular trade violates internal policy. It can explain network concepts, but it can’t diagnose why your app is running slow right now without live telemetry. Simply put: AI is only as smart as the data and tools it has access to.
This gap is driving architectural changes across enterprise AI deployments. In business, intelligence isn’t about broad answers, it’s about planning precise, reliable action.
The rise of specialized business models
To address this gap, organizations are increasingly turning to small language models (SLMs), which are trained on domain-specific data for specific tasks. SLMs offer lower computational costs than larger models, faster response times, and the ability to run on-premises to meet data sovereignty requirements.
Analysis of current workload patterns suggests that many agentic AI tasks can be handled by specialized SLMs, with larger models reserved for more complex reasoning.
In fact, research from NVIDIA and others shows that most business deployments use a mix of SLMs and LLMs. But choosing the right model is only part of the enterprise AI challenge. For agents to work reliably, they also need a consistent way to access business systems.
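The SLM/LLM mix described above implies a routing decision for every task. The sketch below illustrates one way such a policy might look; the model tiers and the keyword-based complexity heuristic are illustrative assumptions, not any vendor's actual routing logic.

```python
# Hypothetical routing policy: send routine, domain-specific requests to a
# small language model (SLM) and escalate complex reasoning to a large
# model (LLM). The heuristic and tier names are assumptions for illustration.

def estimate_complexity(task: str) -> int:
    """Crude heuristic: multi-step or cross-domain requests score higher."""
    signals = ["why", "compare", "plan", "root cause", "trade-off"]
    return sum(1 for s in signals if s in task.lower())

def route(task: str) -> str:
    """Return which model tier should handle the task."""
    if estimate_complexity(task) >= 2:
        return "llm"   # complex reasoning: escalate to the large model
    return "slm"       # routine, domain-specific work stays on the SLM

print(route("Summarize last night's backup logs"))
print(route("Compare failover plans and explain why one is safer"))
```

In practice the heuristic would be a classifier or a cost model, but the contract is the same: cheap, fast SLMs by default, with the expensive model invoked only when the task warrants it.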
That raises the importance of the infrastructure layer that connects reasoning with operational reality.
MCP protocol: The backbone of enterprise-class agent systems
A key part of that infrastructure layer is the Model Context Protocol (MCP), an emerging open standard that enables AI models to communicate with business data sources and tools through a common and secure interface.
Released by Anthropic in late 2024 and subsequently donated to the Linux Foundation’s Agentic AI Foundation (AAIF), MCP acts as a universal translator: it exposes data, telemetry, workflows, and actions in a consistent, structured way.
This is important for three reasons:
- Standardization makes large agent ecosystems possible. APIs vary across platforms and clouds; MCP abstracts away that complexity so agents can access systems without bespoke engineering.
- Contextualization provides agents with real-time visibility into organizational topology, conditions, and system status, allowing them to query current conditions instead of relying on stale training data or measurements.
- Governance ensures security. MCP architectures allow guardrails that define which systems agents can access and which actions they can perform. With every action evaluated, the question is no longer “Did the agent respond?” but “Did it complete the job safely and correctly?”
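The interaction pattern behind these three properties can be sketched in miniature. MCP is JSON-RPC based, with methods such as tools/list and tools/call; the stub below mimics that envelope in-process to show the shape of the exchange. The get_latency tool, its stubbed return value, and the allow-list policy are illustrative assumptions, not a real MCP server or SDK.

```python
# Minimal sketch of the MCP interaction pattern: tools are described once,
# then invoked through a uniform JSON-RPC-style envelope, with a simple
# governance check before any call is executed.
import json

TOOLS = {
    "get_latency": {
        "description": "Return current latency (ms) for a monitored service",
        "inputSchema": {"type": "object",
                        "properties": {"service": {"type": "string"}}},
    },
}

ALLOWED = {"get_latency"}  # governance: only pre-approved tools are callable

def handle(request: str) -> dict:
    """Dispatch an MCP-style request ('tools/list' or 'tools/call')."""
    req = json.loads(request)
    if req["method"] == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    if req["method"] == "tools/call":
        name = req["params"]["name"]
        if name not in ALLOWED:              # every action is policy-checked
            return {"error": f"tool '{name}' not permitted"}
        # Stubbed telemetry lookup; a real server would query live systems.
        return {"content": {"service": req["params"]["arguments"]["service"],
                            "latency_ms": 42}}
    return {"error": "unknown method"}

print(handle('{"method": "tools/list"}'))
print(handle('{"method": "tools/call", "params": '
             '{"name": "get_latency", "arguments": {"service": "checkout"}}}'))
```

Because the envelope is uniform, adding a new data source means registering one more tool description rather than writing a new integration per agent.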
Enterprise AI comes of age
This evolution marks a turning point. The experimentation phase paved the way for the maturity phase we are now entering: systems that are purposeful, secure, governed, and aligned to business outcomes.
Businesses need agents that understand their environment, access the right data, choose the right tools, and work within the right controls.
The combination of specialized models and standardized infrastructure protocols represents a maturity in enterprise AI architecture.
Rather than using general-purpose models for all functions, organizations are building layered systems: SLMs handle domain-specific workloads, larger models handle complex reasoning, and MCP provides standardized, contextual access. Together, they make AI both capable and reliable.
Eliminating AI waste through context and control
Consider IT service automation: an agent handling a network operations ticket can use MCP to access real-time telemetry from network monitoring systems, query historical incident data, and execute pre-approved maintenance workflows, all through standardized interfaces rather than custom integrations for each tool.
MCP’s streamlined access to business tools and data makes it easy to go from finding information to completing reliable work. If an agent encounters a problem, say, a DNS failure, it can use the protocol to understand the context, query for additional data, and determine next steps rather than simply failing.
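The DNS scenario above can be sketched as a small triage loop: on a failure, the agent queries additional telemetry through one uniform tool interface and picks a pre-approved next step instead of simply failing. The tool name, the telemetry values, and the remediation table here are all hypothetical.

```python
# Hedged sketch of failure triage through a uniform tool interface.
# All names and values are illustrative assumptions, not a real product.

TELEMETRY = {"dns_resolution": "failing", "packet_loss": "normal"}

PREAPPROVED = {  # governance: only these remediations may run automatically
    "dns_resolution": "flush_dns_cache_and_failover_resolver",
}

def call_tool(name: str, metric: str) -> str:
    """Stand-in for an MCP tools/call round trip to a monitoring system."""
    assert name == "query_telemetry", "only one tool in this sketch"
    return TELEMETRY.get(metric, "unknown")

def triage(metric: str) -> str:
    """Decide the next step for a ticket about the given metric."""
    status = call_tool("query_telemetry", metric)
    if status == "unknown":
        return "escalate: insufficient telemetry"
    if status == "failing":
        if metric in PREAPPROVED:
            return f"executing: {PREAPPROVED[metric]}"
        return "escalate: no pre-approved fix for this failure"
    return "no action: metric healthy"

print(triage("dns_resolution"))
print(triage("packet_loss"))
```

The key design choice is that the agent never improvises a remediation: it either executes a pre-approved workflow or escalates, keeping autonomy inside governed bounds.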
When a large e-commerce platform experiences a service disruption, a well-connected agent can correlate live performance with historical patterns and apply pre-approved fixes. What once took hours now happens in minutes, often fully automatically.
Without context-aware infrastructure, agents can also fall into expensive loops, with multiple models consulting each other and consuming resources without making progress. MCP helps prevent this by framing tasks with clear boundaries and completion criteria, so agents know when a job is done.
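One way to enforce such boundaries is an explicit step budget plus a completion check, so cooperating models cannot ping-pong indefinitely. The step function and done-test below are toy stand-ins; the point is the termination contract, not any real agent framework.

```python
# Sketch of bounding agent work: run until an explicit completion criterion
# is met or a step budget is exhausted, rather than looping without progress.

def bounded_agent(step_fn, is_done, max_steps=5):
    """Run step_fn until is_done(state) is true or the budget runs out."""
    state = []
    for _ in range(max_steps):
        state.append(step_fn(state))          # one unit of agent work
        if is_done(state):                    # explicit completion criterion
            return ("done", len(state))
    return ("budget_exhausted", len(state))   # stop rather than loop forever

# Toy task: "done" once three observations have been gathered.
print(bounded_agent(lambda s: f"obs{len(s)}", lambda s: len(s) >= 3))
```

Exhausting the budget is itself a useful signal: it tells the operator the task was under-specified or under-tooled, instead of silently burning compute.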
What is in store for 2026?
As businesses push toward operational AI in 2026, the challenge is no longer model experimentation; it is connecting intelligence to action.
The technical requirements are clear: models must have access to operational context, actions must stay within defined governance parameters, and systems must scale economically under heavy operational loads.
Organizations building reliable AI systems invest in both specialized models and the infrastructure layer that connects them to business reality. MCP provides one way to set up this connection.
The future of business AI will not be won by model size. It will be won by context, communication, and control.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the tech industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you would like to contribute, find out more here.



