The most consequential disciplines in shared services were each named into existence at a moment of crisis.
- Six Sigma was coined at Motorola in the mid-1980s and became the language we shared in the 1990s, after a decade of total quality management struggled to scale, with Lean Six Sigma forking from it in the early 2000s.
- Process mining arrived in the early 2000s when our enterprise systems began producing more event data than any team could read.
- Robotic process automation was named by Blue Prism in 2012 and crystallised as a market category over the next three years, after a stretch of inconsistent vendor messaging and stalled pilots.
- Hyperautomation, often grouped with intelligent automation, was the name Gartner gave in 2019 to the realisation that RPA alone could not carry the weight early enthusiasm had placed on it, and that real value required orchestrating RPA with AI, process mining, and human workflow.
In each case, the name followed the failures. Practitioners systematised what they had learned the hard way, and an industry rallied around the result.
We are at exactly that point now with agentic AI in GBS. The tools are advancing faster than the operating models around them. The pilots are multiplying. The failures are starting to repeat with eerie precision. And the capability that needs to exist to carry the next decade does not yet have a name.
The failure mode is hiding in plain sight
The headline numbers are by now familiar.
- IDC's 2025 Workera-sponsored Analyst Brief, Closing the Gap: Verifying AI Skills in the Enterprise, reports that 92% of organisations are increasing their AI investment while only 1% rate themselves as AI-mature.
- Gartner's 2026 CIO and Technology Executive Survey, summarised in its 2026 Hype Cycle for Agentic AI, finds that 17% of organisations have deployed AI agents and more than 60% plan to within two years.
- HFS Research's 2026 Horizons: Agentic Services study, conducted with Genpact and surveying GBS and outsourcing leaders, identifies a single dominant obstacle to agentic AI scaling: 33% of respondents cite unprepared business processes, ahead of integration, talent, and governance.
Add the literacy layer, and the picture sharpens. The major AI literacy frameworks now in market mostly teach AI consumption. They train staff to use AI well. None train the inverse skill, which is the skill of articulating your own work clearly enough that an agent can act on it.
Workera's 2025 enterprise study 'The $5.5 Trillion Skills Gap' finds that 69% of L&D leaders cannot meaningfully measure their organisation's AI skill levels at all. And anecdotally, the pattern is sharper still: in one recent qualitative study of Copilot users, most acknowledged formal training was useful in principle, yet ignored the onboarding videos in practice, learning instead through trial and error and their immediate team.
That last finding is the one to dwell on. The bottleneck most leaders are spending against is the model. The bottleneck the people running the work actually report is the work itself. Tribal knowledge, undocumented exceptions, judgment calls held as institutional intuition rather than documented logic, entire decision trees that exist only in the heads of senior practitioners. None of this is legible to an agent.
Unmapped decision logic is far more costly than unmapped procedural logic ever was.
We have lived this story before. RPA programmes across shared services, deployed at scale between 2018 and 2021, taught the industry an expensive lesson. A disproportionate share of total cost of ownership ended up in development and maintenance, because every bot rebuilt the missing process logic from scratch. The pattern, by now, is unambiguous.
Process clarity, not bot capability, was the constraint. Agentic AI presents the same problem at a higher altitude, because agents target decision work rather than transactional work.
A name for the missing capability: Agent-Ready Operations
If the failure mode is one of process, the consequence is also one of people. The capability I want to propose, and the one I see in embryonic form inside the most progressive GBS organisations, is best called Agent-Ready Operations. It is not an AI capability. It is the human and organisational capability that determines whether AI capability can land, and whether the function that lands it remains a place where expertise is still made.
It rests on four core components, all of which need to be built intentionally rather than assumed.
1. Knowledge architecture. Agents do not learn the way people do, by absorbing context from a desk neighbour or a hallway conversation. They retrieve. That makes the question of what is documented, where it lives, how it is structured, and how reliably it can be found, an operating question rather than an IT one. Most GBS functions still treat institutional knowledge as a by-product of doing the work. Agent-Ready Operations requires treating it as a first-class asset.
2. Process intelligence, meaning the ability to observe and represent how work actually happens, not how the SOP says it does. Process mining is the entry point, but the bar is higher. You need an operating picture of exceptions, escalations, and informal handoffs, because those are precisely the parts an agent cannot infer.
3. Decision articulation, the literacy that current frameworks miss: the capacity, distributed across the workforce, to make judgment legible to an agent and to keep building it in people once the repetition that traditionally produced it has been automated away.
Part of the same articulation is accountability: who can let an agent act on which kind of work, who reviews its outputs, and who is on the hook when it gets something wrong. That is less an audit concern than an operating one. Without explicit answers, every team defaults to either over-caution or quiet over-reach, and neither scales.
4. Adaptive change governance, the connective tissue. Everest Group's 2026 study GBS Change Management Strategies: Lessons Learned from 60 Leading GBS Organizations found that 75% rate change management as critical, yet 33% have no operating model for it, and 50% identify internal team buy-in as the single largest determinant of success. Agentic AI multiplies, rather than eases, that requirement.
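Decision articulation, in particular, can be made concrete. The sketch below shows what it looks like to turn a judgment call, the kind usually resolved by asking a senior practitioner, into explicit logic an agent can retrieve, apply, and be audited against. The process, thresholds, and routing labels here are hypothetical placeholders; in a real programme they would be elicited from the people who currently hold this logic as intuition.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    vendor_is_new: bool
    po_match: bool  # does the invoice match a purchase order?

def route_invoice(inv: Invoice) -> str:
    """Explicit decision logic for a hypothetical invoice exception.

    Each branch records both the rule and the accountability boundary:
    where the agent may act alone, where it drafts for a named owner,
    and where a human always decides.
    """
    if inv.po_match and inv.amount < 10_000:
        return "auto_approve"                 # agent may act alone
    if inv.po_match and inv.amount < 50_000:
        return "agent_propose_human_review"   # agent drafts, owner signs off
    if inv.vendor_is_new:
        return "escalate_vendor_team"         # judgment call, never automated
    return "escalate_process_owner"           # default: a human decides

# The same logic previously lived as "ask a senior AP analyst".
# Written down, it is testable, auditable, and retrievable by an agent.
```

The point is not the code itself but the act of elicitation: once thresholds and escalation paths are stated explicitly, they can be reviewed, versioned, and handed to an agent, and the over-caution or quiet over-reach that unstated rules produce disappears.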
What to do this budget cycle
Three moves are within reach of any GBS leader before their next planning window closes.
First, pick a small set of your highest-value live processes. The mistake most organisations are making in 2026 is treating agentic AI as an experimentation budget. The firms quietly pulling ahead are treating it as a scaling discipline applied to live operations.
Second, appoint named owners for those processes who are accountable for both the operating outcome and the agent-readiness of the work. Without a named owner the capability never compounds.
Third, build the human capability before procuring more agent capacity. Teach a small cohort, drawn from the practitioners who actually run the work, the four components above through applied work rather than classroom instruction.
The window is short. If the Gartner 2026 CIO and Technology Executive Survey's forecast holds and more than 60% of organisations deploy agents within two years, the firms that name and build Agent-Ready Operations in this budget cycle will be positioned as their enterprise's AI deployment hubs by 2028.
