Category: Organizational Change & Culture

AI implementation challenges: Why AI pilots fail to scale 

AI investment is accelerating across every industry. Pilots are everywhere. Early wins are easy to find. Yet measurable enterprise impact remains inconsistent. 

According to PwC’s 2026 Global CEO Survey, 56% of CEOs report no revenue or cost benefits from AI despite increased investment. 

This gap defines the current moment. AI is working in pockets, but it is not translating into enterprise performance. 

The core challenge is turning isolated AI success into repeatable value across the enterprise. 

The pilot paradox: proof of concept is not proof of value 

Most organizations treat pilot success as evidence that scaling is simply a matter of replication. That assumption breaks down quickly. 

Only 12% of CEOs report both revenue growth and cost reduction from AI. 

Pilots operate in controlled conditions. They bypass the constraints that define real execution. Governance is simplified. Dependencies are minimized. Decision latency is reduced. Success criteria are narrow and often tied to speed or output rather than outcomes. 

Enterprise value operates under different conditions. 

Enterprise value is the measurable, repeatable improvement in how an organization performs across its operating system. It shows up in financial outcomes, execution speed, decision quality, and sustained adoption across teams. 

A pilot proves that AI can work. It does not prove that the organization can produce these outcomes consistently. 

Local wins vs enterprise constraints 

Teams can achieve meaningful gains within their own scope. They reduce manual work. They accelerate tasks. They improve individual productivity. 

These are local wins. 

Enterprise outcomes depend on how work flows across teams, how decisions move through the organization, and how systems interact. When those structures remain unchanged, local improvements do not scale. 

Research shows that up to 95% of AI projects fail to deliver measurable ROI at scale. 

This reflects a systems-level issue rather than a capability gap. 

AI amplifies the environment it enters. When workflows are fragmented and decision paths are unclear, AI increases the speed of fragmentation rather than resolving it. 

Portfolio sprawl and lack of prioritization discipline 

As pilots multiply, a new constraint emerges. Organizations accumulate use cases faster than they can evaluate or scale them. 

Leaders report difficulty moving beyond pilots into enterprise-wide deployment. This creates portfolio sprawl. 

Multiple teams pursue similar initiatives without coordination. Funding spreads across too many efforts. Success metrics vary by team. Low-value pilots persist because there is no clear mechanism to stop them. 

Without prioritization discipline, AI remains a collection of experiments rather than a coordinated system of value creation. 

Enterprise value requires clear sequencing, shared criteria for success, and active governance of the portfolio. 

Missing runbooks and operational governance 

Even when organizations identify promising use cases, scaling exposes another gap. There is no defined way of working for human and AI execution. 

Governance is often external to execution. Controls, monitoring, and accountability sit outside the workflow instead of being embedded within it. 

Organizations that embed AI into workflows, products, and services are two to three times more likely to see returns. 

This difference is operational. 

Scaling requires clear decision rights, defined escalation paths, validation mechanisms, and runbooks that guide how AI is used in daily work. Without these, trust erodes, adoption slows, and outcomes remain inconsistent. 

Failure patterns: why pilots stall at scale 

Across industries, the same patterns appear. 

  • Pilots remain isolated and never reach production workflows. 
  • Initial adoption fades as teams revert to familiar ways of working. 
  • Governance slows progress rather than enabling it. 
  • Trust declines when outputs are inconsistent or difficult to validate. 
  • Portfolios expand without focus, diluting impact. 

These issues follow predictable patterns within operating systems that have not evolved to support AI-enabled execution. 

What scaling actually requires 

Organizations that scale AI successfully shift their focus from experimentation to execution systems. 

  • They move from pilots to coordinated programs. 
  • They redesign workflows so AI is embedded in how work gets done. 
  • They clarify decision flow so insights translate into action. 
  • They embed governance into execution rather than layering it on afterward. 
  • They establish prioritization discipline so resources concentrate on the highest-value opportunities. 

Companies that build these foundations are significantly more likely to generate returns from AI, and value begins to compound. 

The real constraint 

The limiting factor in AI value is the way the organization operates. 

AI exposes the gaps in decision flow, governance, workflow design, and adoption systems. When those gaps remain, pilots succeed but value does not scale. 

The organizations that move ahead are not those with the most pilots. They are the ones that redesign how work, decisions, and adoption operate together. 

They turn isolated success into repeatable performance. 

That is what separates experimentation from enterprise value. 


See where AI breaks down in your operating model

Most AI implementation challenges do not start with the technology. They emerge from how work flows, how decisions are made, and how governance is applied in daily execution. 

The AI-first operating model design assessment identifies where your current operating model limits scale, surfaces gaps in workflow, governance, and decision flow, and shows how to move from isolated pilots to coordinated execution. 


Frequently asked questions about AI implementation

What are the most common AI implementation challenges? 

The most common AI implementation challenges include unclear ownership of outcomes, weak governance, fragmented workflows, and lack of prioritization. Organizations often deploy AI without redesigning how work flows, which limits impact and prevents consistent value from scaling across teams. 

Why do AI projects fail to scale in enterprises? 

AI projects fail to scale in enterprises because pilots operate in isolation from real operating conditions. When expanded, they encounter governance gaps, cross-team dependencies, and unclear decision structures, which prevent repeatable execution and reduce overall business impact. 

What is the difference between an AI pilot and enterprise AI value? 

An AI pilot demonstrates that a use case can work under controlled conditions. Enterprise AI value requires repeatable performance across workflows, with measurable outcomes in cost, speed, quality, and adoption sustained over time across multiple teams and functions. 

What are AI scaling challenges in large organizations? 

AI scaling challenges in large organizations include portfolio sprawl, inconsistent workflows, lack of governance embedded in execution, and low adoption. These challenges prevent organizations from moving beyond isolated successes to coordinated, enterprise-wide impact. 

How do you scale AI in enterprise environments? 

Scaling AI in enterprise environments requires redesigning workflows, clarifying decision rights, embedding governance into execution, and prioritizing high-value use cases. Organizations must align operating models to support consistent, repeatable execution rather than relying on isolated experimentation. 

What is an AI governance framework and why does it matter? 

An AI governance framework defines how AI is controlled, monitored, and used within workflows. It matters because governance ensures trust, accountability, and consistency, enabling organizations to scale AI safely while maintaining performance, compliance, and decision integrity. 

How can organizations overcome AI implementation challenges? 

Organizations overcome AI implementation challenges by aligning their operating model to AI-enabled execution. This includes embedding governance, redesigning workflows, establishing clear ownership, and building adoption systems that reinforce new ways of working across teams. 

Why is AI adoption important for scaling value? 

AI adoption is critical because value only materializes when people consistently use AI within real workflows. Without sustained adoption, even well-designed solutions fail to deliver impact, and organizations remain stuck in pilot stages without achieving enterprise outcomes. 

AI transformation strategy: why programs outperform projects 

Why AI transformation strategy needs programs, not projects 

Enterprise AI investment continues to climb. The returns remain uneven. Even when experimentation succeeds, enterprise scale often remains elusive. 

The primary constraint is structural. Model quality continues to improve, but most organizations still run AI as a series of discrete projects. Discrete projects can deliver useful outputs, but they rarely create the continuity required for compounding enterprise value. The unit of execution is misaligned with how AI value is created. 

An effective AI transformation strategy needs a program model built for continuity, adoption, and sustained performance. The distinction matters because AI value depends less on whether a capability launches and more on whether the organization keeps improving how people use it, govern it, and measure it. 

Projects optimize scope. Programs optimize sustained outcomes 

A project is bounded by scope, timeline, and deliverables. That model can work for a warehouse build or a payroll rollout. It breaks down when leaders use it as the default structure for AI transformation. 

AI value rarely lives inside a single deliverable. 

Analysts need to trust the output. Governance needs to keep pace with model updates. Adoption needs to hold after the launch team moves on. None of those conditions closes on a delivery date. 

Programs are built to persist. They ask a better question: “Did performance improve, and is it still improving?” That question changes how leaders fund, govern, and measure AI work. A project-based AI rollout often tracks deployment milestones and usage counts. 

A program tracks performance metrics: cycle time reduction, cost-to-serve improvement, quality variation, risk exposure, and depth of role-based adoption. The inputs may look similar, but the operating discipline is different. 

That distinction is central to program management vs. project management in AI work. 

Why AI value realization stalls between funding cycles 

When AI is funded as a series of projects, momentum often resets every cycle. Each new funding cycle requires a new justification. Learning often stays with the team that ran the last initiative. 

Adoption gets treated as a post-delivery activity rather than a design requirement. Governance often trails capability deployment, creating a widening gap between what AI can do and what the organization is prepared to govern. 

The issue is not simply that individual projects end. The issue is that their learning, governance, adoption patterns, and value measures often end with them. 

MIT’s Project NANDA research shows a similar pattern. The research points to a deeper operating constraint: many enterprise AI systems do not learn, retain context, or adapt over time. 

For enterprise leaders, that is a continuity problem expressed through technical symptoms. AI initiatives run long enough to consume budget but end too soon to build sustained confidence, which weakens support for the next AI initiative. 

For finance and portfolio leaders, this is a familiar governance problem showing up in a new context. Board conversations return to the same issue: funded initiatives that cannot be traced to measurable outcomes. 

Without continuity, leaders lack a reliable way to see which investments are compounding and which have stalled. The CFO lacks defensible value visibility. The CIO lacks a credible basis for prioritizing the next round of AI investment. 

Continuity as a structural design principle 

Continuity is the missing design element in many AI execution models. Leaders create continuity when strategy, execution, adoption, and measurement connect across initiatives instead of resetting with each one. 

In practice, continuity means the right elements persist between cycles: 

  • Outcome definitions tied to business performance 
  • Measurement frameworks that track performance over time 
  • Adoption models that reinforce how work actually gets done 
  • Governance cadence that supports decisions to scale, pause, or retire a capability 

Other elements evolve as the program matures: 

  • The model version or AI capability in use 
  • The workflows where AI is applied 
  • The specific teams and roles involved 

When those persistent elements are absent, each cycle starts cold. Workflow changes get reopened. Metric definitions shift. Teams relearn what the last group already knew. 

McKinsey’s State of AI research helps illustrate the gap. Adoption is broad, while enterprise-scale continuity remains much less common. 

How continuous improvement in AI compounds performance 

Programs improve outcomes because they give insight a place to accumulate. 

Every cycle generates signals about what works, where users push back, which workflows absorb AI cleanly, and which workflows need redesign first. 

A project often leaves that learning in a closeout report after the team has moved on. A program carries it forward. 

That is continuous improvement in AI as an operating discipline. 

The compounding should show up in operational measures: 

  • Cycle time for AI-assisted decisions can drop as workflows are refined. 
  • Cost-to-serve can decrease as manual effort is removed. 
  • Quality can improve as variation is identified and reduced. 

Adoption can stabilize at higher levels when role-based enablement is built into execution from the start. 

McKinsey data suggests that organizations with higher AI maturity are nearly three times more likely to redesign workflows around AI instead of placing AI on top of existing processes. 

That redesign creates durable value only when it is sustained. One-time workflow changes tend to decay. Continuous improvement allows the gains to compound. That compounding effect requires an execution model designed to preserve what the organization learns. 

What program-based AI execution looks like in practice 

Program-based AI execution has observable properties that distinguish it from project-based work: 

Outcomes define the work 
The program is built around a measurable business outcome. AI capabilities are selected because they support that outcome. 

Measurement is continuous 
Investment, work, and results connect through one measurement spine. 

Execution is integrated 
Execution connects workflows, teams, and platforms. AI is embedded into real work instead of added as a separate layer. Product, operations, and governance stay coordinated throughout the program. 

Adoption is designed from the start 
Role-based enablement, behavior change, and reinforcement are part of the program plan from the beginning. McKinsey’s March 2026 analysis reinforces this point. The highest-performing organizations focus less on isolated AI deployment and more on embedding AI into how work actually runs. 

Governance operates within the cadence of the work 
Decision rights, escalation paths, and review cadences are defined early and adjusted as the work evolves. 

Learning loops are embedded 
Signals about what works, where users push back, and where quality drifts are captured as part of normal execution and fed back into the workflow. 

What enterprise leaders need to change 

The leadership implication is specific. 

Leaders need to organize the portfolio around sustained outcomes instead of isolated initiatives: 

  • Funding should follow sustained outcomes rather than discrete initiatives 
  • Stage gates should carry learning into the next cycle 
  • Governance should sustain continuity across cycles 
  • Metrics should track sustained performance rather than delivery milestones 
  • Adoption and enablement should be embedded into execution 

AI should be treated as part of operating model evolution rather than a series of capability deployments. That shift creates the foundation for a durable AI transformation strategy. 

Closing the gap 

AI often stalls because the execution model was built for delivery completion rather than sustained adoption, governance, and performance improvement. 

The organizations pulling ahead are organizing AI around programs that sustain learning, adoption, and value realization across cycles. They design for continuity so results can compound. 


AI adoption is where value either compounds or stalls

AI value breaks down when teams do not change how work gets done. Adoption and change coaching embeds new behaviors into real workflows so results can scale and hold. 

Start with clarity before you scale.


Frequently asked questions about AI transformation strategy 

What is the difference between program management and project management in AI? 

Project management focuses on delivering defined outputs within a fixed scope and timeline. Program management focuses on sustained outcomes over time, connecting multiple initiatives, governance, and adoption into a continuous system that improves performance rather than resetting after each delivery cycle. 

Why do AI projects fail to deliver long-term value? 

AI projects often fail because they treat deployment as the finish line. Without sustained adoption, governance, and performance tracking, value does not persist. Learning is lost between cycles, and organizations struggle to connect AI capabilities to measurable business outcomes over time. 

What is an AI program and how does it work? 

An AI program is a structured, ongoing approach to embedding AI into workflows, governance, and decision-making. It connects strategy, execution, and measurement across cycles so improvements compound, enabling organizations to continuously refine performance and sustain value rather than restarting with each initiative. 

How do you measure AI value at scale? 

AI value at scale is measured through operational outcomes such as cycle time, cost-to-serve, quality, risk, and adoption depth. These metrics are tracked continuously across workflows, allowing leaders to see whether performance is improving over time rather than relying on one-time delivery milestones. 

Why is AI adoption critical to ROI? 

AI adoption determines whether capabilities translate into real performance improvements. If teams do not change how they work, AI remains underutilized. Embedding adoption into workflows ensures that tools are used consistently, enabling organizations to realize and sustain measurable business value. 

What does continuous improvement in AI mean? 

Continuous improvement in AI refers to using each execution cycle to refine workflows, models, and behaviors. Instead of treating AI as a one-time deployment, organizations build feedback loops into daily work so insights accumulate and performance improves steadily over time. 

How should leaders fund AI initiatives? 

Leaders should fund AI initiatives based on sustained outcomes rather than isolated projects. This means aligning funding with measurable performance improvements, maintaining continuity across cycles, and ensuring that learning, governance, and adoption persist instead of resetting with each new investment. 

What role does governance play in AI programs? 

Governance ensures AI operates safely and effectively within real workflows. In program-based execution, governance is embedded into daily operations, with clear decision rights, escalation paths, and review cadences that evolve alongside the work to support continuous performance improvement. 

How do you move from AI pilots to enterprise scale? 

Moving from pilots to scale requires shifting from isolated experiments to program-based execution. Organizations must connect workflows, embed adoption, track outcomes continuously, and carry learning forward so each cycle builds on the last rather than starting from scratch. 

AI adoption strategy: what leaders must do after AI go-live 

AI go-live creates visibility. It does not create value. 

After launch, teams experiment, attend training, and generate early activity. Yet despite rising investment, 56% of CEOs report no profit gains from AI over the past year (PwC Global CEO Survey, 2026). 

Why? 

Momentum fragments. Usage becomes uneven, managers revert to familiar rhythms, and governance drifts back to periodic review. Employees either use AI casually, avoid it, or work around it. In fact, 54% of executives cite culture and behavior as the primary barrier to scaling AI (Mercer, 2024). 

This is a structural issue, not a problem with motivation. When the operating system around AI does not change, adoption decays. 

A strong AI adoption strategy starts after go-live. Leaders must align incentives, embed governance in execution, redesign workflows, and make outcomes visible so AI becomes part of how work moves. 

Launch is not adoption 

Adoption is often misread. 

  • Logins show access. 
  • Training shows exposure. 
  • Prompt libraries show enablement. 

None confirm that work has changed. This gap between access and value is widespread: only 14% of CFOs report clear, measurable ROI from AI investments (RGP + CFO Research, 2026). 

Adoption exists when AI is used inside real workflows to improve outcomes. It shows up in how teams prepare decisions, analyze information, manage handoffs, resolve exceptions, and review results. 

Shift the question from “Are people using AI?” to “Where has AI changed how work moves?” 

For enterprise contexts, four expectations should be explicit: 

  • Roles: where human judgment remains essential and where AI supports analysis, synthesis, or routine execution 
  • Decisions: how AI-supported inputs are reviewed, trusted, challenged, and acted on 
  • Governance: controls that operate inside workflows, not outside them 
  • Reinforcement: how teams improve usage over time 

This is where AI change management moves beyond communication into behavior change in the work itself. 

Why post-launch decay happens 

Decay is predictable when AI is introduced into operating models designed for earlier ways of working. 

Four conditions drive it: 

1) Incentives reward the old workflow 

If goals still reward manual effort, activity volume, or legacy reporting, AI-enabled behavior remains optional. Teams experience AI as added work. 

What to change: connect AI-supported behaviors to the outcomes teams already own (cycle time, quality, cost, risk, experience) and remove or redesign outdated tasks. 

2) Leaders do not model the change 

If executive forums run the same way, the signal is clear: AI is optional. 

What to change: require AI-supported analysis in decision forums and demonstrate how human judgment validates and improves AI outputs. 

3) Governance sits outside execution 

Policy and committees cannot carry day-to-day decisions. 

What to change: define decision rights, validation standards, and escalation paths inside workflows so teams can move with clarity and control. 

4) Workflows are unchanged 

Layering AI onto inefficient processes limits value. 

What to change: redesign where AI supports preparation, analysis, communication, and exception handling; clarify where human ownership remains. 

What leaders must do differently 

After go-live, leadership behavior determines whether AI becomes embedded or ignored. 

At this stage, employees are not looking for messaging. They are looking for signals. What leaders ask for, inspect, and reward becomes the operating reality. 

Reinforce adoption by: 

  • Using AI-supported analysis in decision forums so teams see it as expected input 
  • Asking where AI changed outcomes, not where it was used 
  • Aligning performance objectives with AI-enabled work so behavior has consequences 
  • Removing redundant tasks made unnecessary by AI so capacity is not artificially constrained 
  • Making validation and oversight part of the work so trust increases over time 

Don’t undermine adoption by: 

  • Treating AI as optional productivity 
  • Adding expectations without adjusting capacity 
  • Demanding ROI while preserving legacy execution 
  • Leaving policy unclear, driving shadow AI 
  • Measuring activity instead of outcomes 

The difference is practical accountability at the level of work. Leaders do not need to control every use case, but they must define what good looks like and reinforce it consistently. 

Make value visible: incentives, metrics, modeling 

Adoption does not scale without reinforcement. Reinforcement requires visibility into what matters and why it matters. 

Three levers carry most of the weight. 

Incentives 

Incentives translate intent into behavior. If AI-enabled work does not influence how performance is evaluated, it will remain secondary. 

Avoid narrow usage targets. Those drive superficial adoption. Instead, connect AI-supported behavior to outcome movement such as reduced cycle time, improved quality, faster response, or clearer risk visibility. 

The practical test is simple: can a team explain how using AI changed their results, not just their activity? 

Metrics (AI ROI measurement) 

Measurement closes the loop between adoption and value. 

Many organizations track tool activity but cannot show operational impact, which aligns with broader market signals that only a small minority of organizations can clearly tie AI usage to financial outcomes (RGP + CFO Research, 2026). A stronger approach is to build a KPI spine that links AI use to performance indicators already owned by the business. 

This allows leaders to answer two questions at the same time: where AI is being used and whether it is improving how work performs. 
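As an illustration only, a KPI spine can be as simple as a mapping from each AI-supported workflow to the performance indicators the business already owns, with baseline and current values so leaders see outcome movement rather than tool activity. The sketch below is a hypothetical minimal example; the workflow names, KPI names, and figures are assumptions, not data from this article.

```python
from dataclasses import dataclass

@dataclass
class Kpi:
    name: str                      # business-owned indicator, e.g. cycle time in days
    baseline: float                # value before the AI-supported workflow change
    current: float                 # latest measured value
    lower_is_better: bool = True   # e.g. cost and cycle time improve downward

    def movement_pct(self) -> float:
        """Relative movement vs. baseline; positive means improvement."""
        delta = self.baseline - self.current
        if not self.lower_is_better:
            delta = -delta
        return 100.0 * delta / self.baseline

# Hypothetical spine: each AI-supported workflow links to KPIs the business already tracks.
kpi_spine = {
    "claims_triage": [
        Kpi("cycle_time_days", baseline=6.0, current=4.5),
        Kpi("rework_rate_pct", baseline=12.0, current=9.0),
    ],
    "customer_support": [
        Kpi("cost_to_serve_usd", baseline=8.0, current=7.0),
        Kpi("first_contact_resolution_pct", baseline=70.0, current=78.0,
            lower_is_better=False),
    ],
}

for workflow, kpis in kpi_spine.items():
    for kpi in kpis:
        print(f"{workflow}: {kpi.name} moved {kpi.movement_pct():+.1f}%")
```

The point is not the code but the shape: AI usage is tied to indicators the business already owns, so "where is AI used" and "is work performing better" are answered from the same structure.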

Executive modeling 

Modeling turns expectations into visible practice. 

When leaders require AI-supported preparation in reviews or use AI-generated scenarios to evaluate decisions, they show how AI fits into judgment and accountability. This removes ambiguity for teams and accelerates consistent adoption. 

Embed governance at the speed of work 

Governance is often treated as a separate layer. That approach slows adoption and creates confusion, while also increasing the risk of unmonitored “shadow AI” usage across teams—one of the fastest-growing enterprise AI risks. 

AI operates inside daily workflows. Governance must do the same. 

Embedding governance means defining how decisions are made, validated, and escalated within the work itself. Teams should not need to leave their workflow to determine what is allowed or how to proceed. 

Embed: 

  • Decision rights for AI-supported workflows so ownership is clear 
  • Validation standards for outputs so trust is earned, not assumed 
  • Monitoring for drift, misuse, and quality issues so risks are visible early 
  • Runbooks for escalation, rollback, and improvement so teams know how to act 
  • Feedback loops to update workflows as risks evolve so governance improves over time 

This approach increases both speed and control. Teams move faster because expectations are clear, and leaders maintain oversight because governance is built into execution. 

Build reinforcement loops 

Adoption is sustained through repetition, not initial enthusiasm. 

Reinforcement loops ensure that AI use improves over time rather than degrading after launch. These loops must be grounded in real work, not abstract training programs. 

Focus on: 

  • Role-specific expectations so each function understands how AI applies to its decisions 
  • Continuous enablement tied to real workflows so learning is immediately usable 
  • AI embedded in ceremonies and operating rhythms so usage becomes routine 
  • Manager coaching to help teams replace old behaviors with more effective ones 
  • Feedback channels to capture friction, trust issues, and improvement ideas 
  • Regular value reviews linking adoption to outcomes so progress is visible 

Programs outperform projects because they maintain these loops. A project introduces capability. A program ensures that capability evolves and compounds. 

Early warning signs of decay 

Leaders can detect adoption issues early by observing how work is actually happening. 

Watch for: 

  1. Usage concentrated in a few champions, indicating lack of role-based adoption 
  2. Meetings and decision forums unchanged, showing AI has not entered execution 
  3. Inability to link AI use to performance movement, revealing weak measurement 
  4. Governance questions slowing or stopping usage, indicating unclear boundaries 
  5. ROI requested after the fact rather than managed in-flight, showing a missing measurement system 

These signals are not failures. They are diagnostics that show where reinforcement and design need to improve. 

What changes when leaders take ownership 

When leaders actively own post-launch adoption, the organization moves from experimentation to discipline. 

Workflows become clearer. Decision-making accelerates because inputs are better prepared. Governance becomes more practical because it is embedded. Performance improves because outcomes are measured and managed consistently. 

This shift does not require perfect technology. It requires consistent alignment between how work is designed, how decisions are made, and how performance is evaluated. 

A practical AI adoption strategy after go-live 

A post-launch strategy should translate intent into operating design. 

Answer six questions: 

  1. Which workflows will change because of AI? 
  2. Which roles need new decision rights or validation responsibilities? 
  3. Which legacy tasks can be reduced or removed? 
  4. Which KPIs will show performance movement? 
  5. Which controls must operate inside the workflow? 
  6. Which reinforcement loops will sustain improvement? 

These questions provide a direct path from concept to execution. They also ensure that adoption and measurement are designed together, rather than addressed separately. 

Turn go-live into sustained value 

After launch, responsibility increases. 

Employees look for cues. Managers decide what matters. Governance moves from theory to practice. Leaders need evidence of impact. 

Start with diagnosis. Identify where adoption is weakening, which workflows need redesign, and how leadership can reinforce change. 

AI Adoption and Change Coaching helps leaders diagnose friction, rethink workflows, build role-based competency, and embed reinforcement systems. Where broader constraints exist, AI-First Operating Model Design aligns decision flow, KPI systems, governance cadence, and portfolio discipline. 

If AI has created activity without behavior change, act now to redesign how work runs so decisions, incentives, and governance drive measurable outcomes every day. 

See where your AI adoption strategy is breaking down

Technology is rarely the problem. Most organizations have an adoption gap hidden inside their workflows, incentives, and governance. In one week, you’ll get a clear view of where AI is failing to change how work gets done, and exactly what to fix first to start driving measurable outcomes.


Frequently asked questions 

What is an AI adoption strategy? 

An AI adoption strategy is the system of incentives, workflows, governance, and reinforcement that determines whether AI changes how work is performed after launch. It focuses on embedding AI into decision-making and execution so usage translates into measurable improvements in cycle time, quality, cost, and risk. 

Why does AI adoption fail after go-live? 

AI adoption often fails after go-live because the surrounding operating model does not change. Incentives, workflows, governance, and leadership behaviors remain aligned to pre-AI ways of working. As a result, teams revert to familiar patterns and AI becomes optional rather than embedded in daily execution. 

How do you measure AI ROI in the enterprise? 

Measure AI ROI by linking AI usage to operational KPIs such as cycle time, throughput, quality, cost-to-serve, and risk. Build a KPI spine that connects AI-supported workflows to business outcomes, allowing leaders to see both where AI is used and whether it improves performance. 
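As an illustration only, a KPI spine can start as something very simple: join workflow outcomes to a flag marking whether AI supported the run, then compare the operational KPI directly. The record fields and numbers below are hypothetical.

```python
from statistics import mean

# Hypothetical workflow records: each run notes whether AI supported it
# and how long the end-to-end cycle took (hours).
workflow_runs = [
    {"id": 1, "ai_supported": True,  "cycle_time_h": 6.0},
    {"id": 2, "ai_supported": True,  "cycle_time_h": 7.5},
    {"id": 3, "ai_supported": False, "cycle_time_h": 11.0},
    {"id": 4, "ai_supported": False, "cycle_time_h": 9.5},
]

def cycle_time_delta(runs):
    """Mean cycle time of baseline runs minus AI-supported runs.

    A positive result means AI-supported runs are faster on average.
    """
    ai = [r["cycle_time_h"] for r in runs if r["ai_supported"]]
    base = [r["cycle_time_h"] for r in runs if not r["ai_supported"]]
    return mean(base) - mean(ai)

print(cycle_time_delta(workflow_runs))  # 3.5 hours saved per run on average
```

The same join works for throughput, quality, or cost-to-serve; the point is that usage data alone proves nothing until it is linked to a workflow-level outcome.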

What is the difference between AI usage and AI adoption? 

AI usage reflects access and activity, such as logins or prompts. AI adoption occurs when AI changes how work is performed inside workflows. Adoption shows up in improved decisions, reduced handoffs, faster execution, and better outcomes rather than increased tool activity alone. 

What role do leaders play in AI adoption? 

Leaders shape adoption by defining expectations, modeling behavior, and aligning incentives. When leaders require AI-supported inputs in decisions and measure outcomes instead of activity, teams adopt AI more consistently. Without leadership reinforcement, adoption remains fragmented and declines over time. 

How should AI governance be structured? 

AI governance should be embedded within workflows, not managed as a separate layer. It must define decision rights, validation standards, autonomy boundaries, monitoring, and escalation paths so teams can use AI confidently while maintaining control and compliance at the speed of work. 

What are the early signs of AI adoption failure? 

Common signs include usage concentrated among a few individuals, unchanged meetings and decision processes, inability to link AI to performance improvements, governance confusion, and delayed ROI measurement. These signals indicate that adoption has not been embedded into workflows or reinforced effectively. 

How do incentives impact AI adoption? 

Incentives determine behavior. If performance systems reward legacy activities, AI-enabled work remains secondary. Align incentives with outcomes such as speed, quality, and efficiency improvements so teams see clear value in adopting AI-supported ways of working. 

What is post-launch AI change management? 

Post-launch AI change management focuses on reinforcing behavior after deployment. It includes role-based enablement, workflow redesign, governance integration, and continuous feedback loops to ensure AI becomes part of daily execution rather than a one-time implementation effort. 

How long does it take to see value from AI adoption? 

Initial value can appear quickly in targeted workflows, but sustained impact requires continuous reinforcement. Organizations that align incentives, governance, and workflows early can see measurable improvements within weeks, while broader enterprise value compounds over months as adoption scales. 

Enterprise AI agents: How organizations operationalize AI at scale

FAQ: What are AI agents?

AI agents are software systems that can perform tasks by interpreting input, making decisions within defined rules, and taking action. In enterprise environments, AI agents operate inside workflows to move work forward using governed data, permissions, and process logic.
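The definition above (interpret input, decide within defined rules, take action) can be sketched as a minimal loop. Everything here is invented for illustration: the intent detection is a toy, and the rule table stands in for real governance.

```python
# Hypothetical rule table mapping an interpreted intent to an allowed action.
RULES = {
    "password_reset": "trigger_reset_workflow",
    "access_request": "route_to_approval",
}

def interpret(raw_request: str) -> str:
    """Toy intent detection; a real agent would use an NLU or LLM step."""
    return "password_reset" if "password" in raw_request.lower() else "access_request"

def decide(intent: str) -> str:
    # Decisions stay inside the rule table: unknown intents escalate to a human.
    return RULES.get(intent, "escalate_to_human")

def act(action: str) -> str:
    # Stand-in for a real system call (ticketing API, workflow trigger, etc.).
    return f"executed:{action}"

print(act(decide(interpret("I forgot my password"))))
# executed:trigger_reset_workflow
```

The shape matters more than the content: the agent never acts outside what the rule table permits, and anything it cannot classify falls back to a person.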

FAQ: What are enterprise AI agents?

Enterprise AI agents are AI systems designed to operate within business workflows. They execute defined tasks, interact with enterprise systems, and follow governance rules, which allows organizations to move from AI-generated outputs to real work being completed inside operational environments.

For the past few years, most enterprise AI initiatives have centered on assistance. Copilots drafted emails, summarized documents, and generated code. They improved productivity at the edge of work, but they rarely completed work inside the systems where execution happens.

That boundary is starting to shift.

Enterprise AI agents are extending AI beyond generation and into execution. Instead of stopping at recommendations, these systems can trigger actions, move work forward within approved boundaries, and complete defined tasks inside workflows.

This shift changes how work moves from recommendation to execution.

Organizations are moving from isolated AI experiments to embedded operational capabilities. Prompt-based interactions are giving way to workflow-driven execution. Output generation is giving way to task completion.

The focus is shifting from what AI can produce to what AI can complete.

This shift matters because leaders are now evaluating how AI participates in real execution, not just how it improves individual productivity. The conversation is moving from access to models toward integration into the systems where work actually happens.

That raises a more practical question.

If AI can now participate in execution, where can that execution happen reliably and under control?

Why workflows are the natural environment for AI agents

FAQ: Why are workflows critical for enterprise AI agents?

Workflows provide the structure AI agents need to operate reliably inside real business processes. They connect data, approvals, and execution steps, which allows AI to move work forward instead of stopping at recommendations. Without workflows, organizations must manually coordinate actions across systems.

FAQ: Can AI agents work without workflow automation?

AI agents can generate outputs without workflows, but consistent execution depends on workflow automation. Workflows define process steps, permissions, and governance, which allow agents to complete tasks inside enterprise systems instead of relying on manual follow-through.

AI struggles to deliver consistent results when it sits outside the workflows where work is governed. Without structure, AI outputs still require people to coordinate systems, approvals, and next steps by hand.

Many early AI initiatives stall at this point.

When AI sits outside workflows, four gaps appear quickly. The AI lacks:

  • Reliable access to governed enterprise data
  • Defined process steps, dependencies, and escalation paths
  • Clear ownership, approvals, and accountability
  • Connected execution paths across systems

The result is fragmentation. AI may generate useful output, but people still have to carry work across systems and teams.

Workflows address this problem by giving AI a governed place to operate.

They provide the structure AI agents need to operate reliably:

  • Structured processes with defined steps and owners
  • Embedded business logic, decision rules, and approvals
  • Secure, permissioned access to enterprise systems
  • Built-in governance, traceability, and auditability

Most importantly, workflows connect intent to action inside systems that can govern the result. They turn recommendations into executable steps and decisions into tracked outcomes.

This is why AI workflow automation is emerging as a practical foundation for enterprise AI execution.

Within these environments, AI agents can participate directly in real work. Workflow platforms become the coordination layer because they connect process logic, enterprise data, permissions, and approvals in one execution system. That is why platforms such as ServiceNow can support AI agents at scale: execution stays connected to real workflows, data, and controls.

With that structure in place, the next question is practical:

What do enterprise AI agents actually do inside those workflows?

What enterprise AI agents actually do

FAQ: What do enterprise AI agents actually do in business workflows?

Enterprise AI agents execute defined tasks inside workflows by triggering actions, moving work through process steps, and coordinating across systems. They reduce manual effort by handling routine activities such as data updates, service requests, and operational coordination within governed environments.

FAQ: How are AI agents different from AI copilots?

AI copilots generate suggestions or content to support individual users, while AI agents participate in execution inside workflows. Agents can trigger actions and progress tasks within defined processes, whereas copilots rely on users to carry work forward into enterprise systems.

The value of enterprise AI agents comes from how they reduce coordination overhead and move work through real processes. Their impact becomes visible when you look at how work moves across systems, approvals, and teams.

Workflow automation

AI agents can execute defined multi-step processes that previously required people to coordinate them manually.

In those workflows, agents can:

  • Trigger approved workflows
  • Move tasks through defined stages
  • Handle routine dependencies automatically

This expands AI workflow automation from isolated task handling into managed flow across the work itself.
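A toy sketch of that idea: a task moves through a fixed sequence of stages, and the agent may only advance it one defined stage at a time. Stage names and the task record are hypothetical.

```python
# Hypothetical ordered stages a task moves through inside a governed workflow.
STAGES = ["submitted", "validated", "approved", "completed"]

def advance(task: dict) -> dict:
    """Move a task to the next defined stage; never skip stages."""
    i = STAGES.index(task["stage"])
    if i < len(STAGES) - 1:
        task = {**task, "stage": STAGES[i + 1]}
    return task  # already completed tasks are returned unchanged

task = {"id": "REQ-17", "stage": "submitted"}
task = advance(task)   # -> validated
task = advance(task)   # -> approved
print(task["stage"])   # approved
```

Because the stage order is defined outside the agent, the agent can accelerate flow without being able to redefine the process it runs inside.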

Data enrichment

Enterprise decisions depend on context, and that context is often scattered across systems.

In structured workflows, AI agents can help by:

  • Pulling data from multiple connected systems
  • Validating records and reconciling inconsistencies
  • Updating records as workflows progress

This reduces manual lookups and gives downstream decisions better context.

Service request fulfillment

Internal and customer-facing requests often span multiple teams and systems.

In those scenarios, AI agents can:

  • Interpret the request
  • Route the request into the appropriate workflow
  • Complete defined parts of the process across the workflow

This can reduce resolution time and lower manual effort in routine scenarios.
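The interpret-then-route step can be sketched with a simple keyword router. The routing table and queue names are hypothetical; a production system would classify requests with a model rather than keywords, but the fallback to manual triage is the important part.

```python
# Hypothetical routing table: request category keyword -> owning workflow queue.
ROUTES = {
    "laptop": "it_hardware_workflow",
    "payroll": "hr_payroll_workflow",
    "invoice": "finance_ap_workflow",
}

def route_request(text: str) -> str:
    """Pick a workflow queue by keyword match; unrecognized requests go to triage."""
    lowered = text.lower()
    for keyword, queue in ROUTES.items():
        if keyword in lowered:
            return queue
    return "manual_triage"

print(route_request("My laptop will not boot"))   # it_hardware_workflow
print(route_request("Question about my bonus"))   # manual_triage
```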

Operational coordination

Many enterprise processes begin with an event, trigger, or exception.

In those environments, AI agents can respond by:

  • Starting the right workflow
  • Coordinating across teams
  • Pushing actions forward within defined timelines and escalation rules

This supports faster, more consistent execution across complex environments.

The human-in-the-loop reality

AI agents operate inside boundaries set by people, approvals, and policy.

Those boundaries typically include:

  • Escalation points
  • Approval thresholds
  • Exception handling

This creates a hybrid execution model in which AI accelerates routine action while people retain decision authority. This keeps execution governed, auditable, and aligned with business intent.
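Those three boundaries (escalation points, approval thresholds, exception handling) can be sketched as a single guard function. The limit and outcome labels below are invented for illustration; real thresholds would come from policy.

```python
# Hypothetical autonomy boundary: the agent may auto-approve small amounts only.
AUTO_APPROVE_LIMIT = 500.00

def dispose(amount: float, policy_ok: bool) -> str:
    """Decide how a request leaves the agent's hands."""
    if not policy_ok:
        return "exception_review"       # exception handling path
    if amount <= AUTO_APPROVE_LIMIT:
        return "auto_approved"          # inside the agent's autonomy boundary
    return "escalated_to_manager"       # approval threshold exceeded

print(dispose(120.0, True))    # auto_approved
print(dispose(2400.0, True))   # escalated_to_manager
print(dispose(80.0, False))    # exception_review
```

Every request takes exactly one of the three exits, so the split between agent action and human authority is explicit and auditable.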

From capability to execution: Where AI agents are already operating

FAQ: Where are enterprise AI agents used today?

Enterprise AI agents are used in workflow-heavy environments such as IT service management, HR onboarding, customer support, and security operations. These use cases rely on structured workflows where agents can access data, follow process rules, and execute tasks within defined permissions.

FAQ: What does AI agents in production mean?

AI agents in production refers to agents that operate inside live enterprise systems and workflows. These agents execute real tasks, interact with governed data, and follow defined processes, which allows organizations to move from experimentation into consistent execution.

AI agents are already moving into production in workflow-heavy enterprise environments.

Current deployments tend to concentrate in workflows such as:

  • IT service management processes
  • HR request and onboarding workflows
  • Customer support operations
  • Security and incident response

In these environments, AI agents do not operate in isolation. They participate in execution inside systems that already manage requests, approvals, and data.

These deployments sit inside operational systems where AI can participate in execution under defined controls. Their effectiveness depends on how tightly they are integrated into workflows rather than how advanced the underlying models are.

In environments with mature workflow orchestration, ServiceNow AI agents help show how AI can operate within real enterprise constraints, including:

  • Access to governed enterprise data
  • Execution within structured processes
  • Operation within defined permissions and approval paths

These implementations represent early execution patterns that can scale across functions. They show how AI begins to add value when it is embedded in governed workflows rather than left at the edge of work.

As these patterns expand, the question shifts from where AI can operate to how organizations adapt their execution systems to support it.

What organizations can expect next

FAQ: What is an agentic AI enterprise?

An agentic AI enterprise embeds AI agents into workflows to support execution, coordinate operations, and assist decision-making inside governed systems. This approach focuses on integrating AI into how work happens rather than treating it as a standalone tool.

FAQ: How should organizations prepare for enterprise AI agents?

Organizations should focus on redesigning workflows, defining decision boundaries, integrating systems, and embedding governance into execution. Preparation requires aligning operating models with how AI participates in work rather than only deploying new tools.

As adoption expands, enterprise AI agents will begin to influence more of the execution system around them.

Expansion into complex decision flows

AI agents will increasingly participate in:

  • Multi-step decision processes
  • Cross-functional workflows
  • Dynamic, event-driven execution

This expands automation into more adaptive execution systems that can respond to changing conditions within defined boundaries.

Emergence of hybrid execution models

Future workflows will increasingly combine:

  • Human judgment
  • System logic
  • AI-driven action

This layered model will shape how work moves across the enterprise.

Operating model transformation

To scale this shift, organizations will need to redesign how work, decisions, and governance are structured.

Key changes include:

  • Defining decision boundaries between humans and AI
  • Embedding governance directly into workflows
  • Designing workflows and escalation paths that accommodate agent participation

This is where operating model design becomes critical. The focus broadens beyond deploying AI tools and toward designing execution systems that support sustained, governed use.

A broader definition of automation

This expands the meaning of automation. It changes how decisions are made, how actions are triggered, and how work is completed.

Execution becomes more continuous, more coordinated, and more responsive within defined limits.

The next phase of enterprise execution

The evolution of AI in the enterprise is increasingly defined by execution.

Enterprise AI agents expand AI’s role from assisting work toward completing defined work inside governed workflows. Their value emerges when they are embedded within execution systems that:

  • Provide structure
  • Coordinate execution across systems
  • Maintain governance and auditability

Organizations that integrate AI into these execution systems can move faster, reduce operational friction, and deliver more consistent outcomes.

Organizations that remain focused on experimentation will struggle to translate AI potential into business impact.

The next phase of enterprise AI will be shaped by which organizations can operationalize AI effectively inside real execution systems.

Continue the conversation

This shift toward execution-driven AI is becoming central to how enterprise leaders think about workflow design, governance, and the future of execution.

The most useful insights come from seeing how AI agents operate inside real workflows under real constraints.

At ServiceNow Knowledge 2026, these execution patterns are moving from concept to practice, with real examples of how AI agents are operating inside enterprise workflows.

That is where the next phase of enterprise execution is starting to take shape.

AI operating model: from experimentation to execution in 2026 

Why execution systems, not AI capability, determine enterprise results in an AI operating model 

Most organizations have already experimented with AI. Teams tested copilots, automated small tasks, and explored where models could improve productivity. Those efforts expanded capability, yet execution often remained unchanged. Work still moved through the same bottlenecks. Decisions still slowed in the same places. Outcomes improved in pockets, then plateaued. 

A new phase is taking shape. AI is moving into the flow of work itself. Instead of supporting isolated tasks, it participates in how work is executed across systems, teams, and decisions. 

Agentic AI sits at the center of this shift and is a defining element of the emerging AI operating model. These systems can take action within defined boundaries, execute tasks inside workflows, and coordinate next steps across systems. They extend execution capacity, yet their impact depends entirely on the environment they enter. 

The question facing leaders is clear. If AI is now part of execution, what determines whether it improves outcomes or accelerates existing constraints? 

AI value depends on how work actually moves 

Execution leaders recognize the pattern quickly. Teams deploy capable tools. Early results show promise. Then progress slows. Work becomes uneven. Outcomes vary across teams. 

The issue sits in how work moves through the organization. 

AI operates inside an existing system that includes workflows, decision flow, governance, and human interaction. That system determines how quickly work advances, where it stalls, and how consistently decisions translate into action. 

AI amplifies that system. 

When workflows are fragmented, AI increases the speed of fragmentation. When decision ownership is unclear, AI accelerates inconsistency. When governance is disconnected from execution, risk expands as activity scales. 

When work is structured clearly, the effect changes. AI reduces manual effort, shortens cycle time, and improves consistency across teams. Execution becomes more predictable because decision paths and workflows are already defined. 

This is why many organizations struggle to convert AI investment into measurable value. Capability expands, yet the operating system for execution remains unchanged. 

The operating model becomes the constraint 

An operating model defines how work gets done. It shapes how teams are organized, how decisions move, how governance supports speed, and how people and systems interact during execution. 

Execution leaders feel the impact of operating model constraints every day. Work slows at handoffs. Decisions wait for approval. Teams optimize locally while enterprise outcomes remain inconsistent. AI does not remove these constraints. It exposes them faster. 

Scaling AI requires evolving to an AI operating model that supports faster decision cycles, clearer ownership, and coordinated execution across systems. 

This includes: 

  • Defining decision flow so actions move without unnecessary escalation 
  • Embedding governance into workflows so control does not slow execution 
  • Aligning roles and accountability to human and AI collaboration 
  • Designing workflows that connect systems instead of fragmenting them 

Organizations that address these elements create an environment where AI can contribute to execution. Those that do not continue to absorb delays, inconsistency, and rework at greater speed. 

ServiceNow as a coordination layer for execution 

Enterprise work rarely lives in one system. It spans service platforms, collaboration tools, data environments, and line-of-business applications. Execution breaks down when work moves between these systems without coordination. 

A coordination layer becomes critical. It connects workflows, enforces decision logic, and ensures work progresses across systems with clarity and accountability. 

ServiceNow increasingly serves this role. 

It enables organizations to design workflows that span systems and teams, while embedding intelligence directly into execution. AI can participate in triaging requests, routing work, resolving routine tasks, and supporting decisions within defined workflows. Human judgment remains central, with AI extending execution capacity inside structured processes. 

This changes how work moves. Tasks no longer depend on manual coordination across systems. Decision paths are embedded into workflows. Governance operates within execution instead of sitting outside it. 

The result is coordinated execution at scale. Work advances with fewer interruptions. Decisions translate into action more consistently. Leaders gain greater control without introducing additional friction. 

Where leaders are focusing in 2026 

As organizations prepare for the next phase of enterprise AI, priorities are shifting toward areas where execution, experience, and workflows intersect. 

Accelerating employee productivity with AI agents 

AI agents are taking on repetitive operational work inside enterprise workflows. Service requests, case triage, and routine coordination tasks move faster when AI handles initial steps and escalates where judgment is required. 

Execution leaders focus on reducing manual effort while maintaining control over outcomes. Productivity improves when work flows through defined paths instead of relying on manual intervention. 

Reimagining employee service and onboarding journeys 

Employee experience reflects how work is executed behind the scenes. Onboarding, service delivery, and support processes improve when workflows are coordinated across HR, IT, and service teams. 

AI enables more responsive and adaptive journeys, yet the impact depends on how these workflows are designed. Leaders are redesigning service models so experiences feel consistent and predictable across the organization. 

Embedding AI into everyday workflows 

AI is moving into the systems where work already happens. Employees interact with AI in context, within workflows, rather than through separate interfaces. 

This reduces friction. Decisions happen faster because information, recommendations, and actions are available at the point of execution. Adoption improves because AI becomes part of daily work rather than an additional step. 

Creating clear roadmaps for enterprise AI adoption 

Leaders are moving away from isolated pilots toward structured programs. These roadmaps connect use cases, governance, workflow design, and adoption into a coordinated effort. 

Execution improves when AI initiatives are sequenced, governed, and aligned to outcomes rather than explored independently across teams. 

From experimentation to adoption at scale 

Scaling AI requires more than deploying new capabilities. It requires redesigning how work is executed and how people engage with that work. 

Organizations that succeed treat AI as part of an ongoing evolution toward an AI operating model aligned to enterprise AI strategy and adoption. They design workflows that support human and AI collaboration. They clarify decision ownership. They embed governance into execution. They invest in enablement so teams understand how to work within these new systems. 

Adoption becomes the central factor. 

When teams trust the system, understand their roles, and see how decisions translate into outcomes, new ways of working take hold. Performance improves because behavior changes, not because tools are available. 

Organizations that treat AI as a series of deployments continue to experience uneven results. Use cases succeed in isolation. Scaling remains difficult because the surrounding system has not evolved. 

What to watch at ServiceNow Knowledge 2026 

ServiceNow Knowledge 2026 will highlight how organizations are operationalizing AI within real workflows. 

Key themes include: 

  • AI-powered employee experiences that connect service delivery across functions 
  • Real examples of AI participating in execution within structured workflows 
  • Industry-specific transformations, including complex onboarding environments such as healthcare 
  • Structured approaches to AI strategy that connect experimentation to enterprise programs 

These examples reflect a broader shift. Organizations are moving from capability exploration to execution design. The focus is on how work, decisions, and systems operate together. 

AI success depends on how work is designed 

The next phase of enterprise AI will be defined by execution. 

Organizations that align workflows, decision flow, and governance with AI-enabled execution will move faster and more consistently. Those that do not will continue to experience friction, even as capability expands. 

Agentic AI changes how work can be performed. The AI operating model determines whether that potential translates into outcomes. 

As leaders prepare for ServiceNow Knowledge 2026, the priority becomes clear. Redesign how work moves, how decisions are made, and how teams operate together. When those elements align, AI contributes to execution in a way that scales. 


What is an AI operating model? 

An AI operating model defines how AI agents, workflows, decision flow, and governance work together to execute tasks across the enterprise. It focuses on how work actually moves, ensuring AI supports human judgment within structured processes rather than operating in isolation. 

How is an AI operating model different from traditional AI adoption? 

Traditional AI adoption focuses on deploying tools and capabilities. An AI operating model focuses on how those capabilities are embedded into workflows, decision systems, and governance as part of a broader AI adoption strategy. The difference shows up in execution, where coordinated systems enable consistent outcomes instead of isolated improvements. 

Why do enterprise AI initiatives fail to scale? 

AI initiatives often stall because they are introduced into fragmented workflows and unclear decision systems. Without defined ownership, governance, and workflow alignment, AI amplifies existing inefficiencies. Scaling requires redesigning how work moves, not just expanding AI capability. 

How does an operating model impact AI outcomes? 

The operating model determines how decisions are made, how work flows, and how teams coordinate execution. When these elements are aligned, AI improves speed and consistency. When they are not, delays and inconsistencies increase, limiting the value AI can deliver. 

What role does ServiceNow play in an AI operating model? 

ServiceNow acts as a coordination layer that connects workflows, systems, and decision logic across the enterprise. It enables AI to participate in execution within structured processes, ensuring tasks move consistently while maintaining governance and human oversight. 

What should leaders prioritize in an enterprise AI strategy? 

Leaders should focus on redesigning workflows, clarifying decision ownership, embedding governance into execution, and enabling teams to work effectively with AI. These priorities form the foundation of an effective enterprise AI strategy and adoption approach. Structured programs that connect these elements create the conditions for adoption at scale and sustained performance improvement. 

Crafting the modern organization: it’s all about fit, not a fixed formula 

Some organizations navigate change with speed and control, while others stall. The difference often comes down to operating model design, the blueprint for how work flows across people, process, technology, and governance. In an AI-saturated world, operating models perform best when they fit the organization’s context, strategic intent, and real business outcomes. 

This article outlines how modern organizations approach operating model design. It focuses on teaming structures and AI-enabled ways of working, drawing on frameworks such as Elabor8’s Teaming Primes of Organizational Design. The central point stays constant: operating models succeed when they match your context and trade-offs are made deliberately. 

Why deliberate operating model design matters in the age of AI 

An operating model is the engine that turns strategy into execution. It defines how people, processes, technology, and culture work together to deliver value. In a fast-changing environment, deliberate operating model design drives outcomes such as: 

  • AI-first competitive advantage: applying AI where it improves speed, quality, and decision-making. 
  • Staying on track: aligning teams and decisions to enterprise priorities, supported by AI-enabled performance signals and real-time progress visibility. 
  • Working smarter: optimizing how you deploy people and resources, streamlining workflows, and improving productivity by shifting routine tasks to AI-assisted automation and agents. 
  • Adapting with speed: responding to disruption and capturing opportunity through scenario planning, forecasting, and AI-enabled market sensing. 
  • Designing around the customer: building operating choices that improve experience, consistency, and trust. 
  • Embedding AI capabilities: placing intelligence into core workflows and defining how humans and AI collaborate in decisions and execution. 
  • Managing risk: designing governance that monitors compliance, bias, security, and model drift across AI-enabled decisioning. 
  • Engaging your teams: clarifying roles, strengthening collaboration, and reinforcing autonomy with accountability. 

The Teaming Primes: a practical lens for organizing the enterprise 

The Teaming Primes provide a structured way to design how an organization delivers value. They describe fundamental patterns for organizing work, including shifts towards customer and product alignment and the operating implications of AI-enabled execution. These shifts span a spectrum. 

On one end are traditional structures: departments organized around projects, technical components (such as a specific IT system), or business functions. These designs prioritize efficiency within established boundaries. In today’s environment, AI often shows up here as automation and optimization inside the function (for example, using AIOps to stabilize IT operations). The result typically improves internal efficiency and reliability. 

On the other end are customer- or product-aligned approaches: structures designed around how value flows to the customer. Organizations may align around customer journeys, products and services, or end-to-end value streams. In these models, AI is designed into the flow of work to improve speed, quality, and decision-making across the system. 

A key takeaway from the Teaming Primes is that many organizations recognize misalignment and struggle to correct it. The framework positions the organization as an adaptive system that can continually refocus on value delivery as the business, competitors, and customers change. 

Teaming structures: how work gets done 

Within any operating model, teaming structures determine how people collaborate and how decisions move. Many organizations are shifting towards flexible, empowered, cross-functional teams that accelerate delivery and improve customer alignment. Common teaming patterns include: 

Functional teams: grouped by specialized skills (for example, marketing or engineering). 

  • Good for: deep expertise, clear roles, operational efficiency. 
  • Watch out for: siloed thinking, slow cross-functional communication, and limited visibility into the end-to-end customer experience. 

Divisional teams: grouped by product line, geography, or customer segment. 

  • Good for: focus on specific markets or products, faster decision-making within the division. 
  • Watch out for: duplicated effort, reduced cross-division collaboration, and fragmentation across “mini silos”. 

Matrix teams: where people report to more than one leader, such as a functional manager and a project manager. 

  • Good for: shared expertise across projects, flexibility in resource allocation. 
  • Watch out for: role ambiguity, competing priorities, and increased coordination overhead. 

Cross-functional product teams: small teams with diverse skills that own a product or customer journey end-to-end. 

  • Good for: rapid iteration, strong customer alignment, higher autonomy, and improved engagement. 
  • Watch out for: significant cultural change requirements, challenges to traditional management practices, and scaling complexity. 

Process- or value stream-aligned teams: organized around an end-to-end value stream (for example, order to cash). 

  • Good for: optimizing value delivery across multiple functions, reducing hand-offs. 
  • Watch out for: complex coordination across functions, difficult governance. 

Networked/distributed teams: rely on flexible connections and collaboration across geographies and, in some cases, external partners. 

  • Good for: access to global talent, flexible resourcing, collaboration with external experts. 
  • Watch out for: reliance on strong communication practices and tooling, plus cultural and time zone coordination challenges. 

Taken together, these patterns raise an important question: how is work organized in your own business today, and how well is that serving you? Are you seeing the benefits these structures promise, and are the trade-offs showing up in familiar ways? Understanding where your current model helps or hinders execution sets the foundation for choosing what comes next. 

Why context drives operating model choices 

The effectiveness of an operating model depends on organizational context. Selecting the right design requires clarity across: 

Goals and vision: what outcomes matter most across the short, medium, and long term? Examples include growth, market expansion, innovation, cost leadership, and experience leadership. Innovation-led strategies often benefit from empowered product teams. Efficiency-led strategies often benefit from more standardized, process-driven designs. 

Starting point and capabilities: assess strengths and constraints across people, process, technology, and culture. Identify legacy systems and entrenched behaviors that slow change. Clarify current skills and the capability build required to reach your target state. 

Industry and market dynamics: how quickly is the market changing, and what do customers and competitors signal? Fast-moving environments typically demand adaptable structures and shorter decision cycles. 

Target outcomes: define the measurable results the new operating model must produce, such as faster product launches, improved customer experience, lower cost-to-serve, higher engagement, and stronger innovation throughput. 

Culture and leadership: assess readiness for empowerment, experimentation, and distributed decision-making. Strong operating models depend on leaders who reinforce new behaviors and teams who feel safe to learn, iterate, and improve. 

Making change stick through people 

Operating model design often focuses on structure, process, and technology. Implementation succeeds through people. The model delivers value when teams understand the intent, adopt the behaviors, and change how work gets done. 

People resist change when the purpose feels unclear or the shift feels unmanageable. The COM-B model for behavior change is a useful lens. For someone to adopt a new behavior, they need: 

  1. Capability (C): the skills and knowledge to do the behavior. 
  2. Opportunity (O): the right environment, resources, and support. 
  3. Motivation (M): the desire and reason to change. 

Using COM-B, focus areas for successful rollout include: 

Explain the purpose and benefits (motivation): clearly communicate why the change matters and how it improves outcomes for teams and the enterprise. Connect the operating model to strategy, measurable results, and better day-to-day execution. When teams see the value and understand the direction, motivation rises. 

Equip teams with skills (capability): new operating models demand new behaviors and, increasingly, AI-enabled ways of working. Invest in training that covers collaboration rituals, agile delivery practices, data fluency, AI literacy (ethical use of generative AI), and AI oversight (how leaders validate and govern agent outputs). Reinforce the human skills that make cross-functional delivery work, such as feedback and active listening. 

Set up the environment for success (opportunity): skills scale when the environment reinforces them. That includes: 

  • New processes: redesign workflows to fit the new structure, including hand-offs, decision rights, and where AI agents support decisions. 
  • Supportive technology: provide the tools people need to collaborate, work transparently, and access the right data. 
  • Clear roles and responsibilities: define who owns what so teams can act with confidence. 
  • Remove friction: address physical and social barriers that block adoption by updating policies, aligning incentives, and replacing outdated habits. 
  • Sustain motivation: after launch, reinforce commitment through empowerment, leadership attention, and visible support mechanisms. 
  • Lead by example: leaders model the behaviors the operating model requires. 
  • Safe space to try: create a culture that supports experimentation, learning, and constructive feedback without fear. 
  • Recognize and reward: celebrate progress and reward teams for adopting new ways of working. 
  • Listen and adapt: gather feedback on what works, identify friction, and use what you learn to refine the model. 

Designing with purpose and strategic intent 

Designing and implementing a modern operating model is an iterative process: 

  1. Assess the current state: understand where you are today. 
  2. Set guiding principles: define the design rules anchored to strategy and outcomes. Use them to steer every operating model decision. 
  3. Test and learn: run smaller-scale pilots for new structures and ways of working, then iterate based on evidence. 
  4. Improve continuously: review and refine the operating model as conditions change across the enterprise and the market. 

With a deliberate, iterative approach and frameworks such as Elabor8’s Teaming Primes, organizations can design operating models that fit their context and accelerate progress towards strategic goals. 

The goal is clarity on who you are, where you are headed, and how you organize to deliver outcomes on that path. People make the model real through daily decisions and execution. 

Navigating the next wave of organizational change: insights from the front line 

Every organization is feeling the pressure to adapt faster than ever. Successful transformation demands clarity, commitment, and the right tools, far beyond a simple acknowledgement that change is necessary.  

We brought together a panel of industry leaders for a frank discussion on the challenges and successes of modernizing organizational practices. The conversation spanned topics from shifting mindsets to integrating new technology and revealed key insights for any company preparing for its next stage of evolution. 

The potholes on the road to agility 

A common pain point emerged immediately: the pace of change itself. Many established organizations move too slowly, weighted down by traditional practices and rigid annual cycles. This adherence to old ways often leads to weak prioritization and delayed value, especially when it comes to deciding what work truly matters.  

The real risk lies in prioritizing opinion over economic value. A traditional project-based mentality encourages big, front-loaded expectations with a fixed scope, leaving little room for learning or incremental delivery. This pattern erodes return on investment because much of the potential value surfaces only at the very end of a long project. 

Key challenges highlighted 

The funding shift: Moving from project cost accounting to a product-based operating model is a major financial and cultural hurdle. It requires senior leadership to fund autonomous value streams with an eye toward continuous delivery instead of a single, fixed outcome. 

Mindset and cultural entropy: Getting long-tenured employees to abandon deeply ingrained workflows is challenging. Leaders need to actively support and reinforce the new way of working to prevent teams from reverting to comfortable but ineffective old habits. 

Initial expectations vs. reality: While the promise of Agile is often speed, the immediate gain is usually increased transparency and earlier feedback. Adopting a new framework enables you to deliver valuable increments sooner and surface issues earlier, even when the underlying work remains complex. 

The strategic path to product-centricity 

For organizations committed to making the leap, one team’s transition from an older tool (Planview) to Targetprocess provided a powerful case study.  

A key to their success was acknowledging early that they could not do it alone. Bringing in external partners and coaches added a “new voice in the conversation” and helped accelerate adoption of a structured scaling framework such as SAFe (Scaled Agile Framework).  

A pivotal decision was their shift in focus: they used the migration as an opportunity to re-evaluate their core processes. Instead of configuring a new tool around broken, outdated processes, they first defined how they wanted to work and then configured the platform to support that modern, product-focused methodology.  

The result was rapid deployment of the new system, immediate visibility into data quality issues that had been hidden for years, and, most importantly, the ability to challenge existing norms based on rich, objective data. This new transparency provided a clear line of sight from strategy to execution, helping to eliminate noise and focus capacity on the most valuable work. 

Looking ahead: the AI-powered portfolio 

Looking ahead, AI dominated the conversation. For many organizations, AI investment is a given; the real question is how to manage the corresponding surge in new, complex initiatives. Technology leaders must treat AI as a critical area for portfolio investment and move beyond viewing it as just another cost line. That shift requires leaders to: 

Quantify business value: Accurately measure the financial impact of AI initiatives, whether through cost savings, new revenue streams, or risk reduction. 

Manage the portfolio for innovation: Use the enterprise portfolio tool to track AI investments alongside core product development and ensure alignment with top-level organizational goals. 

Harness AI for the portfolio itself: Use new AI capabilities to analyze portfolio data, predict outcomes, and flag potential bottlenecks so prioritization becomes an informed, data-driven activity rather than a political one. 
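The bottleneck-flagging idea can be illustrated with a deliberately simple heuristic. This is a minimal sketch, not a real portfolio-tool integration: the initiative names, fields, and thresholds below are all hypothetical, and a production system would pull this data from the portfolio platform's own APIs.

```python
# Sketch: flag potential bottlenecks in portfolio data.
# All initiative records and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    wip_items: int              # work currently in progress
    avg_cycle_time_days: float  # average time to finish one item
    throughput_per_month: int   # items finished per month

def flag_bottlenecks(portfolio, cycle_time_limit=30.0):
    """Return initiatives whose cycle time or WIP-to-throughput ratio
    suggests work is arriving faster than it is finishing."""
    flagged = []
    for item in portfolio:
        wip_ratio = item.wip_items / max(item.throughput_per_month, 1)
        if item.avg_cycle_time_days > cycle_time_limit or wip_ratio > 2.0:
            flagged.append(item.name)
    return flagged

portfolio = [
    Initiative("AI chatbot rollout", wip_items=12,
               avg_cycle_time_days=45.0, throughput_per_month=4),
    Initiative("Fraud model upgrade", wip_items=3,
               avg_cycle_time_days=14.0, throughput_per_month=5),
]
print(flag_bottlenecks(portfolio))  # ['AI chatbot rollout']
```

The point is not the specific rule but that prioritization conversations can start from objective signals in the data rather than from opinion.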

Final takeaway: start simple, be tenacious 

For any organization on this journey, the advice from the panel was clear: anchor every decision in a specific business outcome. Be clear on the result you are trying to achieve, invest in the right expertise and tooling with confidence, and approach the work as a marathon that rewards sustained commitment.  

The incremental gains of true agility, transparency, and data-driven decision-making become the foundation for sustainable success. 


Change Management in AI Adoption: Effective Strategies for Managing Organizational Change While Implementing AI

Artificial intelligence (AI) is a living, learning capability that only achieves full impact when paired with human-centered change management. Think of AI and change management as a symbiotic pair: AI supplies the insight and automation that can reinvent how work gets done, while change management provides the human alignment, culture-building, and governance that let those insights take root and scale. Each amplifies the other.

Introducing AI reshapes how people make decisions, collaborate, and create value.

This blog explores how embedding proven change management practices into every stage of AI adoption—discovery, implementation, optimization, and value realization—turns isolated pilots into enduring, enterprise-wide advantage.

Successfully integrating AI into an organization requires personal investment from all affected parties, from leadership to frontline employees. Failure to secure this buy-in leads to wasted resources and resistance, as individuals grapple with fears of job displacement, loss of control, and uncertainty about AI’s purpose and impact.

To navigate this, organizations must adopt a strategic, human-centric approach, leveraging established change management practices. Success depends on:

  • Transparent, ongoing communication that addresses specific stakeholder concerns
  • Executive leadership that champions AI and cultivates adaptability
  • Early-stage engagement that co-designs the AI journey and validates value through pilot programs

Empowering people at every level is central to AI success. Organizations unlock strategic advantage by building a culture that values human-AI collaboration. Focusing exclusively on the mechanics of AI often sidelines its most important dimension: empowering your people.

1. Discovery & Strategy: Laying a Strong Strategic Foundation

Every successful AI adoption starts with a strong strategic foundation. First, surface the highest-impact opportunities across the business, from automating back-office workflows to embedding intelligence into customer-facing products. Use a proven readiness model to benchmark data, talent, and infrastructure against industry standards, revealing both strengths to leverage and gaps to close.

Translate those insights into a pragmatic roadmap that balances quick-win pilots with bold, long-horizon initiatives, each backed by a clear business case and defensible ROI model.

Throughout, bring the right voices to the table—executives, domain experts, compliance, and frontline teams—to secure sponsorship and reduce risk. Pair the technical plan with a targeted change management playbook: structured communications, hands-on enablement, and a culture-building program that turns wary employees into empowered AI champions.

The result is an AI strategy that is not just technically sound but financially disciplined and fully integrated into your organization’s DNA.

2. Implement & Integrate: Turning Vision into Action

With a strategy in place, delivery begins, translating ambition into capability that augments human decision-making and accelerates team performance. We weave AI into the tools teams already trust, whether Atlassian, ServiceNow, or bespoke platforms, so intelligence feels like a natural enhancement, not a disruptive shift.

Start with targeted pilots where the upside is clear and human expertise is indispensable, proving that algorithms combined with people outperform either alone. From day one, instrument workflows with performance and safety dashboards to detect and resolve drift, bias, or bottlenecks before they escalate.

In parallel, roll out role-specific enablement—from bite-size tutorials for frontline staff to deep-dive labs for data scientists—helping every employee master new capabilities and reinvest saved time into higher-value, creative work. By the end of this phase, AI is a trusted co-pilot that amplifies human judgment and frees talent to focus on what only people can do.

3. Tune & Optimize: Refining Performance and Experience

Post-implementation, sustained value depends on rigorous tuning. Establish a governance layer that blends security controls with clear accountability for model performance, ethics, and data privacy. A Center of Excellence—staffed by AI specialists and front-line power users—creates a real-time feedback loop for continuous improvement.

Ongoing scenario-based testing keeps bias, drift, and edge cases in check, ensuring AI systems remain trustworthy across conditions. Just as important, continue human enablement through onboarding sessions, refresher courses, and role-specific playbooks.

Targeted communications celebrate quick wins and share lessons learned, building confidence and curiosity across the organization.

4. Value Realization: Scaling Impact

When AI becomes an enterprise-wide capability, success is measured by how far and how sustainably it multiplies human potential. Wire each use case into a live scorecard of KPIs and value metrics, paired with ongoing pulse checks on adoption, readiness, and employee sentiment.
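A live scorecard can start as something very simple: comparing actuals against targets per use case. The sketch below assumes hypothetical use cases, KPI names, and values; a real scorecard would read these from the analytics or portfolio platform.

```python
# Sketch of a KPI scorecard for AI use cases.
# Use cases, KPI names, and numbers are hypothetical.

targets = {
    "support-triage": {"deflection_rate": 0.30, "adoption": 0.60},
    "doc-review":     {"hours_saved_per_week": 40, "adoption": 0.50},
}

actuals = {
    "support-triage": {"deflection_rate": 0.34, "adoption": 0.48},
    "doc-review":     {"hours_saved_per_week": 52, "adoption": 0.71},
}

def scorecard(targets, actuals):
    """For each use case, report which KPIs meet target (higher is better)."""
    return {
        use_case: {kpi: actuals[use_case][kpi] >= goal
                   for kpi, goal in kpis.items()}
        for use_case, kpis in targets.items()
    }

for use_case, status in scorecard(targets, actuals).items():
    misses = [kpi for kpi, ok in status.items() if not ok]
    print(use_case, "->", "on track" if not misses else f"behind on {misses}")
```

Pairing a mechanical check like this with the pulse checks on sentiment keeps both the hard and soft signals visible in one place.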

Advanced analytics surface underutilized areas or friction points, allowing teams to adjust both technology and supporting processes. Early wins are shared, scaled, and celebrated to accelerate momentum. Internal Centers of Excellence turn grassroots expertise into repeatable playbooks and reusable assets.

To ensure inclusive and ethical growth, maintain open forums and clear accountability across operations. This creates a scalable AI ecosystem that compounds value and supports the people driving your enterprise forward.

5. Future-Proofing: Sustaining Long-Term Advantage

AI is always evolving, and future-ready organizations evolve with it. Build for adaptability by championing continuous learning and expanding the AI frontier, from dashboards to prediction, prescription, and eventually autonomous support.

At every stage, AI should amplify human ingenuity. Algorithms handle the analysis so people can focus on strategy, creativity, and relationships. Promote this mindset through cultural touchpoints like guilds, lunch-and-learns, and communities of practice. Grow in-house talent that can lead future waves of innovation.

When technical roadmaps are interwoven with cultural evolution, AI becomes part of your organizational DNA: resilient, adaptable, and ready for what’s next.

Change Management Strategies for AI Success

  • Living Documentation: Keep artifacts current to reflect real-time changes in implementation.
  • Tailored Solutions: Adapt change approaches to your business context and tools.
  • Expert Guidance: Leverage experienced change professionals familiar with AI projects.
  • Proven Practices: Ground your approach in established principles from Lean Change Management or CMI.
  • People First: Involve employees early through workshops, feedback loops, and consistent communication.
  • Visual Clarity: Use change kanbans and impact maps to show how AI impacts different functions.

Earning Advocacy and Engagement

  • Communicate Clearly: Articulate the benefits of AI in plain language and address concerns transparently.
  • Empower Champions: Support influential employees who can advocate for AI change.
  • Invest in Training: Provide role-specific learning to build confidence and fluency.
  • Celebrate Wins: Highlight and amplify early successes to build enthusiasm and momentum.

The Bottom Line
Integrating AI into your organization requires more than just technical implementation. With a clear change strategy and a focus on people, you can orchestrate adoption, accelerate impact, and unlock the full potential of AI across your enterprise.

Technology Alone Won’t Cut It: Building an AI-Ready Culture to Support AI Transformation

Organizations invest heavily in AI tools and infrastructure—to the tune of well over $1 trillion globally since 2022—but often fail to generate meaningful results. The tech they’re implementing isn’t the issue. It’s the lack of cultural and operational readiness. AI only becomes valuable when it is embedded into the business, informing decision-making, improving workflows, and delivering measurable outcomes.

Many businesses treat AI adoption as an IT upgrade, assuming that implementing new tools will automatically improve efficiency. This approach frequently leads to underwhelming results. 

Companies that achieve real success take a different approach: they integrate AI into everyday operations, ensuring teams understand its capabilities and trust its recommendations. AI adoption requires organizations to rethink how work gets done, how decisions are made, and how data is used.

Change Management Determines AI’s Impact

AI disrupts workflows, decision-making, and job roles, making structured change management essential. Without clear leadership, employees may view AI as a threat rather than a tool. Resistance, confusion, and lack of trust can stall adoption.

Successful AI-driven organizations make change management a priority. Leaders must communicate AI’s role transparently and ensure employees see its value. 

When AI adoption is positioned as a tool for augmenting strategic decision-making, teams are more likely to engage. Deloitte, for example, has successfully integrated AI-powered document review into its legal and compliance teams by providing clear training and demonstrating tangible efficiency gains.

Companies also need to establish feedback loops. Employees who interact with AI daily should have input on refining models and improving usability. AI adoption should be an evolving process, not a one-time rollout.

Building a Data-Driven Culture to Make AI Work

AI adoption depends on a company’s ability to make informed, data-driven decisions. Moving from instinct-based decision-making to AI-backed strategies requires significant shifts in processes, incentives, and leadership priorities. But this isn’t going to happen if the organization’s culture doesn’t support that goal.

Trust is one of the biggest barriers to AI adoption. Employees often hesitate to rely on AI-generated recommendations because they don’t understand how AI reaches its conclusions. To bridge this gap, organizations must foster data literacy at all levels. Leadership should actively model data-driven decision-making, ensuring that teams see AI as a valuable input rather than an opaque black box.

Fostering trust also means maintaining human oversight, allowing users to validate AI-generated outputs, and continuously refining models based on user feedback. When employees understand and trust AI, they are more likely to integrate it into their decision-making processes.

For example, financial institutions use AI-powered fraud detection to flag suspicious transactions. AI models analyze transaction patterns in real time, identifying anomalies that human analysts might miss. Instead of replacing fraud investigators, AI enables them to focus on the most urgent cases. 
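The anomaly-flagging pattern can be sketched with a deliberately simple statistical rule. Real fraud models use far richer features and learned behavior profiles; the z-score rule, threshold, and transaction amounts here are purely illustrative.

```python
# Illustrative sketch only: flag unusual transaction amounts with a
# simple z-score rule so a human analyst can review them.
# Transaction data and threshold are hypothetical.
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Return indices of transactions more than `threshold` standard
    deviations from the mean of the batch."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]

amounts = [42.0, 57.5, 38.0, 61.0, 45.0, 52.0, 4900.0, 49.5]
print(flag_anomalies(amounts))  # [6] - the 4900.0 outlier
```

Note the division of labor: the code only surfaces candidates, and the human investigator makes the call, which mirrors the oversight model described above.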

AI Must Be Embedded Into Business Systems

AI’s impact is diminished when it operates in isolation. Siloed data, disconnected workflows, and fragmented systems prevent AI from delivering its full value. The most successful organizations integrate AI into the platforms employees already use, such as CRM systems, finance software, and customer support tools. Intelligently orchestrating these systems across the organization ensures that AI insights are easily accessible and immediately actionable.

For instance, companies such as Amazon and Salesforce use AI-powered customer support tools like ServiceNow and Jira Service Management to analyze customer inquiries in real time and recommend responses based on previous interactions. This streamlines service delivery while maintaining human oversight, improving both speed and accuracy. 

The key to success is phased integration. Instead of deploying AI across the entire organization at once, companies should focus on high-impact use cases first—areas where AI can deliver quick wins. Once teams see tangible benefits, broader adoption follows more naturally.

AI Can Work Even When Data Isn’t Perfect

Data quality is often cited as a barrier to AI adoption, but waiting for a flawless dataset can delay progress indefinitely. Many leading AI initiatives thrive despite incomplete or inconsistent data. The best approach is to deploy AI where it can add value while simultaneously improving data practices.

A prime example is Subtle Medical, which enhances medical imaging even with imperfect datasets. Their AI models improve image resolution and reduce scan times, demonstrating that AI can deliver measurable benefits despite data limitations.

Final Thoughts

AI adoption requires more than acquiring the right technology. It requires building a culture that enables AI to generate business value. Companies that embed AI into existing systems, integrate it with decision-making processes, and actively manage change see the greatest impact. By ensuring AI works alongside human expertise rather than attempting to replace it, organizations can achieve sustained improvements and unlock AI’s full potential.

Organizational Change That Works: A Smarter, Smoother Approach

We all know businesses must continuously evolve to stay competitive. Yet, traditional approaches to organizational change often fail due to widespread disruption, internal resistance, and competing priorities. 

Research shows that as much as 88% of large-scale transformation initiatives do not achieve their intended results, often because they attempt to drive change too quickly and without the necessary alignment across teams. Organizations need a method that minimizes risk, delivers value quickly, and builds toward long-term success.

Guided Evolution offers a more effective path. Rather than pursuing sweeping overhauls that can destabilize an organization, this approach prioritizes incremental, adaptive improvements that align with the business’s strategic goals. By evolving in a controlled, intentional manner, companies can avoid the pitfalls of transformation fatigue and achieve sustainable success.

What is Guided Evolution?

Guided Evolution is a structured, step-by-step approach to change that reduces friction while accelerating value realization. Unlike traditional transformation efforts that attempt to overhaul entire systems at once, Guided Evolution enables organizations to implement meaningful, scalable improvements with minimal disruption.

This approach works because:

  • Changes are integrated into daily operations rather than introduced as abrupt shifts.
  • Incremental improvements build confidence and momentum across teams.
  • The organization continuously adapts to emerging needs rather than struggling through a single, large-scale transformation.

Achieving true enterprise-wide transformation is not just about modernizing individual workflows or integrating systems—it requires an orchestrated approach that optimizes how people, processes, and technology interact. Organizations that take a fragmented approach often experience inefficiencies, while those that evolve their Systems of Work, Systems of Insights, and Systems of Engagement in harmony are best positioned for long-term success.

Intelligent Orchestration: The Three Systems That Must Evolve Together

Change cannot happen in isolation. A truly effective transformation requires all three foundational systems within an organization to evolve in sync. Without coordination, isolated improvements in one area may create new inefficiencies elsewhere.

Guided Evolution ensures that transformation across these systems is deliberate and cohesive, reducing friction and maximizing impact.

System 1: Systems of Work (How the Organization Operates)

The way an organization operates—its workflows, tools, and processes—determines its efficiency and scalability. Many companies struggle with outdated systems and disjointed workflows that hinder productivity. Fragmented processes create inefficiencies, forcing employees to navigate multiple platforms or rely on manual workarounds that slow operations. 

For example, one study found that “70% of employees spend upwards of 20 hours a week chasing information across different technologies instead of doing their job.” Additionally, as businesses grow, scaling operations without a structured approach to workflow optimization becomes increasingly challenging, potentially costing organizations millions. 

Guided Evolution addresses these issues by introducing targeted automation initiatives that streamline workflows without overwhelming employees. Rather than attempting full-scale automation from the outset, businesses can begin by identifying the most inefficient processes and gradually implementing AI-driven enhancements. 

This phased integration allows teams to adjust at a manageable pace, increasing adoption rates and minimizing disruption. Cross-functional collaboration also improves as silos are gradually eliminated, making the transition toward optimized operations smoother and more sustainable.

System 2: Systems of Insights (How the Organization Makes Decisions)

Organizations thrive when they can make informed, data-driven decisions, yet many struggle with limited visibility, data inconsistencies, and decision-making bottlenecks. A lack of real-time insights prevents leaders from responding proactively to challenges, while siloed data makes it difficult to draw meaningful conclusions. When data remains fragmented across departments, translating insights into measurable actions becomes a cumbersome and often delayed process.

Guided Evolution helps overcome these challenges by first establishing a strong foundation for real-time insights. Implementing connected dashboards creates a unified source of truth, ensuring that decision-makers have access to accurate and timely data. 

From there, organizations can gradually apply predictive analytics to shift from reactive to proactive strategies, using historical patterns to anticipate future trends. 

Over time, AI-driven recommendations refine resource allocation and operational efficiencies, ensuring that insights lead directly to strategic improvements rather than remaining isolated reports with no clear action path.
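The shift from reactive reporting to proactive forecasting can be illustrated with the simplest possible predictive model: a least-squares trend line extrapolated from historical data. The monthly figures below are hypothetical, and real systems would use dedicated forecasting tooling, but the principle of using historical patterns to anticipate future demand is the same.

```python
# Sketch: project next month's demand from a historical trend.
# Monthly ticket volumes are hypothetical.

def linear_forecast(history, periods_ahead=1):
    """Fit a least-squares line to the series and extrapolate forward."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

tickets = [120, 135, 150, 168, 181, 197]  # monthly support tickets
print(round(linear_forecast(tickets)))    # projected volume next month
```

Even a crude projection like this moves the conversation from "what happened last month" to "what should we staff for next month", which is the essence of the reactive-to-proactive shift.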

System 3: Systems of Engagement (How the Organization Connects with People)

An organization’s ability to engage with employees and customers directly influences satisfaction, loyalty, and long-term success. However, many businesses struggle with disjointed engagement strategies that result in inconsistent experiences. 

Customers and employees alike expect seamless, personalized interactions—with one survey reporting that 82% of customers prefer chatbots over waiting for a representative—yet disconnected systems often create frustration. Manual processes further exacerbate the issue, slowing response times and preventing organizations from adapting to changing expectations.

Guided Evolution fosters stronger engagement by first focusing on high-impact, low-risk optimizations in customer service and employee workflows. By identifying areas where quick improvements can deliver immediate benefits, organizations build momentum for deeper transformation. 

AI-driven personalization can then be introduced in phases, allowing engagement strategies to evolve based on data rather than guesswork. Finally, real-time feedback loops ensure that interactions remain relevant and continuously improve, reinforcing a dynamic engagement model that adapts to both customer and employee needs. 

Why a Guided Approach to Change Matters

Large-scale transformation efforts often fail because they demand too much, too fast, leading to resistance and operational disruption. Guided Evolution provides an alternative that ensures sustainable change by making transitions manageable, measurable, and scalable.

Why This Works Better:

  • Reduces resistance by introducing more gradual shifts rather than radical disruptions.
  • Builds momentum through incremental wins that demonstrate value and ROI early in the process.
  • Creates a flexible framework that allows organizations to course-correct and refine their strategies as they evolve.

Example: A Realistic Path to AI-Driven Optimization

Rather than deploying AI-driven automation across the entire business in one sweeping initiative, organizations should start with the areas where automation can eliminate bottlenecks most effectively, such as IT workflows. Once success is demonstrated, AI-driven enhancements can expand into other areas, building trust and adoption across teams.

The Path Forward: Continuous Evolution

The ultimate goal of transformation is to create an enterprise where technology, processes, and people work in seamless coordination, all at the speed of change. However, this cannot be achieved overnight. The only way to get there is through intelligently orchestrated, step-by-step evolution across Systems of Work, Insights, and Engagement.

Organizations that embrace this guided approach to change will be better positioned to adapt, grow, and lead in the market of the future. The time to start is now.