Category: Organizational Change & Culture

Enterprise AI agents: How organizations operationalize AI at scale

FAQ: What are AI agents?

AI agents are software systems that can perform tasks by interpreting input, making decisions within defined rules, and taking action. In enterprise environments, AI agents operate inside workflows to move work forward using governed data, permissions, and process logic.

FAQ: What are enterprise AI agents?

Enterprise AI agents are AI systems designed to operate within business workflows. They execute defined tasks, interact with enterprise systems, and follow governance rules, which allows organizations to move from AI-generated outputs to real work being completed inside operational environments.

For the past few years, most enterprise AI initiatives have centered on assistance. Copilots drafted emails, summarized documents, and generated code. They improved productivity at the edge of work, but they rarely completed work inside the systems where execution happens.

That boundary is starting to shift.

Enterprise AI agents are extending AI beyond generation and into execution. Instead of stopping at recommendations, these systems can trigger actions, move work forward within approved boundaries, and complete defined tasks inside workflows.

This shift changes how work moves from recommendation to execution.

Organizations are moving from isolated AI experiments to embedded operational capabilities. Prompt-based interactions are giving way to workflow-driven execution. Output generation is giving way to task completion.

The focus is shifting from what AI can produce to what AI can complete.

This shift matters because leaders are now evaluating how AI participates in real execution, not just how it improves individual productivity. The conversation is moving from access to models toward integration into the systems where work actually happens.

That raises a more practical question.

If AI can now participate in execution, where can that execution happen reliably and under control?

Why workflows are the natural environment for AI agents

FAQ: Why are workflows critical for enterprise AI agents?

Workflows provide the structure AI agents need to operate reliably inside real business processes. They connect data, approvals, and execution steps, which allows AI to move work forward instead of stopping at recommendations. Without workflows, organizations must manually coordinate actions across systems.

FAQ: Can AI agents work without workflow automation?

AI agents can generate outputs without workflows, but consistent execution depends on workflow automation. Workflows define process steps, permissions, and governance, which allow agents to complete tasks inside enterprise systems instead of relying on manual follow-through.

AI struggles to deliver consistent results when it sits outside the workflows where work is governed. Without structure, AI outputs still require people to coordinate systems, approvals, and next steps by hand.

Many early AI initiatives stall at this point.

When AI sits outside workflows, four gaps appear quickly:

  • Reliable access to governed enterprise data
  • Defined process steps, dependencies, and escalation paths
  • Clear ownership, approvals, and accountability
  • Connected execution paths across systems

The result is fragmentation. AI may generate useful output, but people still have to carry work across systems and teams.

Workflows address this problem by giving AI a governed place to operate.

They provide the structure AI agents need to operate reliably:

  • Structured processes with defined steps and owners
  • Embedded business logic, decision rules, and approvals
  • Secure, permissioned access to enterprise systems
  • Built-in governance, traceability, and auditability

Most importantly, workflows connect intent to action inside systems that can govern the result. They turn recommendations into executable steps and decisions into tracked outcomes.

This is why AI workflow automation is emerging as a practical foundation for enterprise AI execution.

Within these environments, AI agents can participate directly in real work. Workflow platforms become the coordination layer because they connect process logic, enterprise data, permissions, and approvals in one execution system. This is where platforms such as ServiceNow can support AI agents at scale because execution remains connected to real workflows, data, and controls.

With that structure in place, the next question is practical:

What do enterprise AI agents actually do inside those workflows?

What enterprise AI agents actually do

FAQ: What do enterprise AI agents actually do in business workflows?

Enterprise AI agents execute defined tasks inside workflows by triggering actions, moving work through process steps, and coordinating across systems. They reduce manual effort by handling routine activities such as data updates, service requests, and operational coordination within governed environments.

FAQ: How are AI agents different from AI copilots?

AI copilots generate suggestions or content to support individual users, while AI agents participate in execution inside workflows. Agents can trigger actions and progress tasks within defined processes, whereas copilots rely on users to carry work forward into enterprise systems.

The value of enterprise AI agents comes from how they reduce coordination overhead and move work through real processes. Their impact becomes visible when you look at how work moves across systems, approvals, and teams.

Workflow automation

AI agents can execute defined multi-step processes that previously required people to coordinate them manually.

In those workflows, agents can:

  • Trigger approved workflows
  • Move tasks through defined stages
  • Handle routine dependencies automatically

This expands AI workflow automation from isolated task handling into managed flow across entire processes.
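The idea of moving tasks through defined stages can be sketched as a minimal state machine. This is an illustrative assumption, not any workflow platform's real API; the stage names and Task fields are invented for the example:

```python
from dataclasses import dataclass, field

# Hypothetical stage sequence for a routine request workflow.
STAGES = ["intake", "validation", "fulfillment", "closure"]

@dataclass
class Task:
    id: str
    stage: str = STAGES[0]
    history: list = field(default_factory=list)

def advance(task: Task) -> Task:
    """Move a task to the next defined stage, recording the transition."""
    i = STAGES.index(task.stage)
    if i + 1 < len(STAGES):
        task.history.append(task.stage)
        task.stage = STAGES[i + 1]
    return task

task = Task(id="REQ-1001")
advance(advance(task))
print(task.stage)  # fulfillment
```

The point of the sketch is that the stages, not the agent, define what "forward" means: the agent only requests the next transition, and the workflow records every step it takes.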

Data enrichment

Enterprise decisions depend on context, and that context is often scattered across systems.

In structured workflows, AI agents can help by:

  • Pulling data from multiple connected systems
  • Validating records and reconciling inconsistencies
  • Updating records as workflows progress

This reduces manual lookups and gives downstream decisions better context.

Service request fulfillment

Internal and customer-facing requests often span multiple teams and systems.

In those scenarios, AI agents can:

  • Interpret the request
  • Route the request into the appropriate workflow
  • Complete defined parts of the process across the workflow

This can reduce resolution time and lower manual effort in routine scenarios.
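Interpreting a request and routing it into the right workflow can be illustrated with a deliberately simple sketch. Real platforms use richer intent models than keyword matching; the route names below are assumptions made up for the example:

```python
# Illustrative keyword-based router; hypothetical workflow names.
ROUTES = {
    "password": "it_access_workflow",
    "laptop": "hardware_request_workflow",
    "payroll": "hr_payroll_workflow",
}

def route_request(text: str) -> str:
    """Interpret a request and pick the workflow it should enter."""
    lowered = text.lower()
    for keyword, workflow in ROUTES.items():
        if keyword in lowered:
            return workflow
    return "manual_triage"  # fall back to a person when the intent is unclear

print(route_request("I forgot my password"))  # it_access_workflow
```

Note the fallback: when the agent cannot classify a request confidently, the work still lands in a governed path (human triage) rather than being dropped.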

Operational coordination

Many enterprise processes begin with an event, trigger, or exception.

In those environments, AI agents can respond by:

  • Starting the right workflow
  • Coordinating across teams
  • Pushing actions forward within defined timelines and escalation rules

This supports faster, more consistent execution across complex environments.

The human-in-the-loop reality

AI agents operate inside boundaries set by people, approvals, and policy.

Those boundaries typically include:

  • Escalation points
  • Approval thresholds
  • Exception handling

This creates a hybrid execution model in which AI accelerates routine action while people retain decision authority, keeping execution governed, auditable, and aligned with business intent.
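An approval threshold, one of the boundaries listed above, can be sketched in a few lines. The threshold value and action names here are assumptions for illustration, not a real policy:

```python
APPROVAL_THRESHOLD = 500.0  # assumed policy: spend above this needs a person

def execute_or_escalate(action: str, amount: float, approved: bool = False) -> str:
    """Agents act autonomously below the threshold; above it, a human decides."""
    if amount <= APPROVAL_THRESHOLD or approved:
        return f"executed: {action}"
    return f"escalated for approval: {action}"

print(execute_or_escalate("refund order #88", 120.0))   # executed
print(execute_or_escalate("refund order #91", 2400.0))  # escalated
```

The design choice worth noticing is that the boundary lives in the workflow, not in the agent: the same gate applies no matter which agent (or person) initiates the action.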

From capability to execution: Where AI agents are already operating

FAQ: Where are enterprise AI agents used today?

Enterprise AI agents are used in workflow-heavy environments such as IT service management, HR onboarding, customer support, and security operations. These use cases rely on structured workflows where agents can access data, follow process rules, and execute tasks within defined permissions.

FAQ: What does AI agents in production mean?

“AI agents in production” refers to agents that operate inside live enterprise systems and workflows. These agents execute real tasks, interact with governed data, and follow defined processes, which allows organizations to move from experimentation into consistent execution.

AI agents are already moving into production in workflow-heavy enterprise environments.

Current deployments tend to concentrate in workflows such as:

  • IT service management processes
  • HR request and onboarding workflows
  • Customer support operations
  • Security and incident response

In these environments, AI agents do not operate in isolation. They participate in execution inside systems that already manage requests, approvals, and data.

These deployments sit inside operational systems where AI can participate in execution under defined controls. Their effectiveness depends on how tightly they are integrated into workflows rather than how advanced the underlying models are.

In environments with mature workflow orchestration, ServiceNow AI agents help show how AI can operate within real enterprise constraints, including:

  • Access to governed enterprise data
  • Execution within structured processes
  • Operation within defined permissions and approval paths

These implementations represent early execution patterns that can scale across functions. They show how AI begins to add value when it is embedded in governed workflows rather than left at the edge of work.

As these patterns expand, the question shifts from where AI can operate to how organizations adapt their execution systems to support it.

What organizations can expect next

FAQ: What is an agentic AI enterprise?

An agentic AI enterprise embeds AI agents into workflows to support execution, coordinate operations, and assist decision-making inside governed systems. This approach focuses on integrating AI into how work happens rather than treating it as a standalone tool.

FAQ: How should organizations prepare for enterprise AI agents?

Organizations should focus on redesigning workflows, defining decision boundaries, integrating systems, and embedding governance into execution. Preparation requires aligning operating models with how AI participates in work rather than only deploying new tools.

As adoption expands, enterprise AI agents will begin to influence more of the execution system around them.

Expansion into complex decision flows

AI agents will increasingly participate in:

  • Multi-step decision processes
  • Cross-functional workflows
  • Dynamic, event-driven execution

This expands automation into more adaptive execution systems that can respond to changing conditions within defined boundaries.

Emergence of hybrid execution models

Future workflows will increasingly combine:

  • Human judgment
  • System logic
  • AI-driven action

This layered model will shape how work moves across the enterprise.

Operating model transformation

To scale this shift, organizations will need to redesign how work, decisions, and governance are structured.

Key changes include:

  • Defining decision boundaries between humans and AI
  • Embedding governance directly into workflows
  • Designing workflows and escalation paths that accommodate agent participation

This is where operating model design becomes critical. The focus broadens from deploying AI tools to designing execution systems that support sustained, governed use.

A broader definition of automation

This expands the meaning of automation. It changes how decisions are made, how actions are triggered, and how work is completed.

Execution becomes more continuous, more coordinated, and more responsive within defined limits.

The next phase of enterprise execution

The evolution of AI in the enterprise is increasingly defined by execution.

Enterprise AI agents expand AI’s role from assisting work to completing defined work inside governed workflows. Their value emerges when they are embedded within execution systems that:

  • Provide structure
  • Coordinate execution across systems
  • Maintain governance and auditability

Organizations that integrate AI into these execution systems can move faster, reduce operational friction, and deliver more consistent outcomes.

Organizations that remain focused on experimentation will struggle to translate AI potential into business impact.

The next phase of enterprise AI will be shaped by which organizations can operationalize AI effectively inside real execution systems.

Continue the conversation

This shift toward execution-driven AI is becoming central to how enterprise leaders think about workflow design, governance, and the future of execution.

The most useful insights come from seeing how AI agents operate inside real workflows under real constraints.

At ServiceNow Knowledge 2026, these execution patterns are moving from concept to practice, with real examples of how AI agents are operating inside enterprise workflows.

That is where the next phase of enterprise execution is starting to take shape.

AI operating model: from experimentation to execution in 2026 

Why execution systems, not AI capability, determine enterprise results in an AI operating model 

Most organizations have already experimented with AI. Teams tested copilots, automated small tasks, and explored where models could improve productivity. Those efforts expanded capability, yet execution often remained unchanged. Work still moved through the same bottlenecks. Decisions still slowed in the same places. Outcomes improved in pockets, then plateaued. 

A new phase is taking shape. AI is moving into the flow of work itself. Instead of supporting isolated tasks, it participates in how work is executed across systems, teams, and decisions. 

Agentic AI sits at the center of this shift and is a defining element of the emerging AI operating model. These systems can take action within defined boundaries, execute tasks inside workflows, and coordinate next steps across systems. They extend execution capacity, yet their impact depends entirely on the environment they enter. 

The question facing leaders is clear. If AI is now part of execution, what determines whether it improves outcomes or accelerates existing constraints? 

AI value depends on how work actually moves 

Execution leaders recognize the pattern quickly. Teams deploy capable tools. Early results show promise. Then progress slows. Work becomes uneven. Outcomes vary across teams. 

The issue sits in how work moves through the organization. 

AI operates inside an existing system that includes workflows, decision flow, governance, and human interaction. That system determines how quickly work advances, where it stalls, and how consistently decisions translate into action. 

AI amplifies that system. 

When workflows are fragmented, AI increases the speed of fragmentation. When decision ownership is unclear, AI accelerates inconsistency. When governance is disconnected from execution, risk expands as activity scales. 

When work is structured clearly, the effect changes. AI reduces manual effort, shortens cycle time, and improves consistency across teams. Execution becomes more predictable because decision paths and workflows are already defined. 

This is why many organizations struggle to convert AI investment into measurable value. Capability expands, yet the operating system for execution remains unchanged. 

The operating model becomes the constraint 

An operating model defines how work gets done. It shapes how teams are organized, how decisions move, how governance supports speed, and how people and systems interact during execution. 

Execution leaders feel the impact of operating model constraints every day. Work slows at handoffs. Decisions wait for approval. Teams optimize locally while enterprise outcomes remain inconsistent. AI does not remove these constraints. It exposes them faster. 

Scaling AI requires evolving to an AI operating model that supports faster decision cycles, clearer ownership, and coordinated execution across systems. 

This includes: 

  • Defining decision flow so actions move without unnecessary escalation 
  • Embedding governance into workflows so control does not slow execution 
  • Aligning roles and accountability to human and AI collaboration 
  • Designing workflows that connect systems instead of fragmenting them 

Organizations that address these elements create an environment where AI can contribute to execution. Those that do not continue to absorb delays, inconsistency, and rework at greater speed. 

ServiceNow as a coordination layer for execution 

Enterprise work rarely lives in one system. It spans service platforms, collaboration tools, data environments, and line-of-business applications. Execution breaks down when work moves between these systems without coordination. 

A coordination layer becomes critical. It connects workflows, enforces decision logic, and ensures work progresses across systems with clarity and accountability. 

ServiceNow increasingly serves this role. 

It enables organizations to design workflows that span systems and teams, while embedding intelligence directly into execution. AI can participate in triaging requests, routing work, resolving routine tasks, and supporting decisions within defined workflows. Human judgment remains central, with AI extending execution capacity inside structured processes. 

This changes how work moves. Tasks no longer depend on manual coordination across systems. Decision paths are embedded into workflows. Governance operates within execution instead of sitting outside it. 

The result is coordinated execution at scale. Work advances with fewer interruptions. Decisions translate into action more consistently. Leaders gain greater control without introducing additional friction. 

Where leaders are focusing in 2026 

As organizations prepare for the next phase of enterprise AI, priorities are shifting toward areas where execution, experience, and workflows intersect. 

Accelerating employee productivity with AI agents 

AI agents are taking on repetitive operational work inside enterprise workflows. Service requests, case triage, and routine coordination tasks move faster when AI handles initial steps and escalates where judgment is required. 

Execution leaders focus on reducing manual effort while maintaining control over outcomes. Productivity improves when work flows through defined paths instead of relying on manual intervention. 

Reimagining employee service and onboarding journeys 

Employee experience reflects how work is executed behind the scenes. Onboarding, service delivery, and support processes improve when workflows are coordinated across HR, IT, and service teams. 

AI enables more responsive and adaptive journeys, yet the impact depends on how these workflows are designed. Leaders are redesigning service models so experiences feel consistent and predictable across the organization. 

Embedding AI into everyday workflows 

AI is moving into the systems where work already happens. Employees interact with AI in context, within workflows, rather than through separate interfaces. 

This reduces friction. Decisions happen faster because information, recommendations, and actions are available at the point of execution. Adoption improves because AI becomes part of daily work rather than an additional step. 

Creating clear roadmaps for enterprise AI adoption 

Leaders are moving away from isolated pilots toward structured programs. These roadmaps connect use cases, governance, workflow design, and adoption into a coordinated effort. 

Execution improves when AI initiatives are sequenced, governed, and aligned to outcomes rather than explored independently across teams. 

From experimentation to adoption at scale 

Scaling AI requires more than deploying new capabilities. It requires redesigning how work is executed and how people engage with that work. 

Organizations that succeed treat AI as part of an ongoing evolution toward an AI operating model aligned with their enterprise AI strategy and adoption goals. They design workflows that support human and AI collaboration. They clarify decision ownership. They embed governance into execution. They invest in enablement so teams understand how to work within these new systems.

Adoption becomes the central factor. 

When teams trust the system, understand their roles, and see how decisions translate into outcomes, new ways of working take hold. Performance improves because behavior changes, not because tools are available. 

Organizations that treat AI as a series of deployments continue to experience uneven results. Use cases succeed in isolation. Scaling remains difficult because the surrounding system has not evolved. 

What to watch at ServiceNow Knowledge 2026 

ServiceNow Knowledge 2026 will highlight how organizations are operationalizing AI within real workflows. 

Key themes include: 

  • AI-powered employee experiences that connect service delivery across functions 
  • Real examples of AI participating in execution within structured workflows 
  • Industry-specific transformations, including complex onboarding environments such as healthcare 
  • Structured approaches to AI strategy that connect experimentation to enterprise programs 

These examples reflect a broader shift. Organizations are moving from capability exploration to execution design. The focus is on how work, decisions, and systems operate together. 

AI success depends on how work is designed 

The next phase of enterprise AI will be defined by execution. 

Organizations that align workflows, decision flow, and governance with AI-enabled execution will move faster and more consistently. Those that do not will continue to experience friction, even as capability expands. 

Agentic AI changes how work can be performed. The AI operating model determines whether that potential translates into outcomes. 

As leaders prepare for ServiceNow Knowledge 2026, the priority becomes clear. Redesign how work moves, how decisions are made, and how teams operate together. When those elements align, AI contributes to execution in a way that scales. 


What is an AI operating model? 

An AI operating model defines how AI agents, workflows, decision flow, and governance work together to execute tasks across the enterprise. It focuses on how work actually moves, ensuring AI supports human judgment within structured processes rather than operating in isolation. 

How is an AI operating model different from traditional AI adoption? 

Traditional AI adoption focuses on deploying tools and capabilities. An AI operating model focuses on how those capabilities are embedded into workflows, decision systems, and governance as part of a broader AI adoption strategy. The difference shows up in execution, where coordinated systems enable consistent outcomes instead of isolated improvements. 

Why do enterprise AI initiatives fail to scale? 

AI initiatives often stall because they are introduced into fragmented workflows and unclear decision systems. Without defined ownership, governance, and workflow alignment, AI amplifies existing inefficiencies. Scaling requires redesigning how work moves, not just expanding AI capability. 

How does an operating model impact AI outcomes? 

The operating model determines how decisions are made, how work flows, and how teams coordinate execution. When these elements are aligned, AI improves speed and consistency. When they are not, delays and inconsistencies increase, limiting the value AI can deliver. 

What role does ServiceNow play in an AI operating model? 

ServiceNow acts as a coordination layer that connects workflows, systems, and decision logic across the enterprise. It enables AI to participate in execution within structured processes, ensuring tasks move consistently while maintaining governance and human oversight. 

What should leaders prioritize in an enterprise AI strategy? 

Leaders should focus on redesigning workflows, clarifying decision ownership, embedding governance into execution, and enabling teams to work effectively with AI. These priorities form the foundation of an effective enterprise AI strategy and adoption approach. Structured programs that connect these elements create the conditions for adoption at scale and sustained performance improvement. 

Crafting the modern organization: it’s all about fit, not a fixed formula 

Some organizations navigate change with speed and control, while others stall. The difference often comes down to operating model design, the blueprint for how work flows across people, process, technology, and governance. In an AI-saturated world, operating models perform best when they fit the organization’s context, strategic intent, and real business outcomes. 

This article outlines how modern organizations approach operating model design. It focuses on teaming structures and AI-enabled ways of working, drawing on frameworks such as Elabor8’s Teaming Primes of Organizational Design. The central point stays constant: operating models succeed when they match your context and trade-offs are made deliberately. 

Why deliberate operating model design matters in the age of AI 

An operating model is the engine that turns strategy into execution. It defines how people, processes, technology, and culture work together to deliver value. In a fast-changing environment, deliberate operating model design drives outcomes such as: 

  • AI-first competitive advantage: applying AI where it improves speed, quality, and decision-making.
  • Staying on track: aligning teams and decisions to enterprise priorities, supported by AI-enabled performance signals and real-time progress visibility. 
  • Working smarter: optimizing how you deploy people and resources, streamlining workflows, and improving productivity by shifting routine tasks to AI-assisted automation and agents. 
  • Adapting with speed: responding to disruption and capturing opportunity through scenario planning, forecasting, and AI-enabled market sensing. 
  • Designing around the customer: building operating choices that improve experience, consistency, and trust. 
  • Embedding AI capabilities: placing intelligence into core workflows and defining how humans and AI collaborate in decisions and execution. 
  • Managing risk: designing governance that monitors compliance, bias, security, and model drift across AI-enabled decisioning. 
  • Engaging your teams: clarifying roles, strengthening collaboration, and reinforcing autonomy with accountability. 

The Teaming Primes: a practical lens for organizing the enterprise 

The Teaming Primes provide a structured way to design how an organization delivers value. They describe fundamental patterns for organizing work, including shifts towards customer and product alignment and the operating implications of AI-enabled execution. These shifts span a spectrum. 

On one end are traditional structures: departments organized around projects, technical components (such as a specific IT system), or business functions. These designs prioritize efficiency within established boundaries. In today’s environment, AI often shows up here as automation and optimization inside the function (for example, using AIOps to stabilize IT operations). The result typically improves internal efficiency and reliability. 

On the other end are customer- or product-aligned approaches: structures designed around how value flows to the customer. Organizations may align around customer journeys, products and services, or end-to-end value streams. In these models, AI is designed into the flow of work to improve speed, quality, and decision-making across the system. 

A key takeaway from the Teaming Primes is that many organizations recognize misalignment and struggle to correct it. The framework positions the organization as an adaptive system that can continually refocus on value delivery as the business, competitors, and customers change. 

Teaming structures: how work gets done 

Within any operating model, teaming structures determine how people collaborate and how decisions move. Many organizations are shifting towards flexible, empowered, cross-functional teams that accelerate delivery and improve customer alignment. Common teaming patterns include: 

Functional teams: grouped by specialized skills (for example, marketing or engineering). 

  • Good for: deep expertise, clear roles, operational efficiency. 
  • Watch out for: siloed thinking, slow cross-functional communication, and limited visibility into the end-to-end customer experience. 

Divisional teams: grouped by product line, geography, or customer segment. 

  • Good for: focus on specific markets or products, faster decision-making within the division. 
  • Watch out for: duplicated effort, reduced cross-division collaboration, and fragmentation across “mini silos”. 

Matrix teams: where people report to more than one leader, such as a functional manager and a project manager. 

  • Good for: shared expertise across projects, flexibility in resource allocation. 
  • Watch out for: role ambiguity, competing priorities, and increased coordination overhead. 

Cross-functional product teams: small teams with diverse skills that own a product or customer journey end-to-end. 

  • Good for: rapid iteration, strong customer alignment, higher autonomy, and improved engagement. 
  • Watch out for: significant cultural change requirements, challenges to traditional management practices, and scaling complexity. 

Process- or value stream-aligned teams: organized around an end-to-end value stream (for example, order to cash). 

  • Good for: optimizing value delivery across multiple functions, reducing hand-offs. 
  • Watch out for: complex coordination across functions, difficult governance. 

Networked/distributed teams: rely on flexible connections and collaboration across geographies and, in some cases, external partners. 

  • Good for: access to global talent, flexible resourcing, collaboration with external experts. 
  • Watch out for: requires strong communication practices and tooling, and introduces cultural and time zone coordination challenges. 

Taken together, these patterns raise an important question: how is work organized in your own business today, and how well is that serving you? Are you seeing the benefits these structures promise, and are the trade-offs showing up in familiar ways? Understanding where your current model helps or hinders execution sets the foundation for choosing what comes next. 

Why context drives operating model choices 

The effectiveness of an operating model depends on organizational context. Selecting the right design requires clarity across: 

Goals and vision: what outcomes matter most across the short, medium, and long term? Examples include growth, market expansion, innovation, cost leadership, and experience leadership. Innovation-led strategies often benefit from empowered product teams. Efficiency-led strategies often benefit from more standardized, process-driven designs. 

Starting point and capabilities: assess strengths and constraints across people, process, technology, and culture. Identify legacy systems and entrenched behaviors that slow change. Clarify current skills and the capability build required to reach your target state. 

Industry and market dynamics: how quickly is the market changing, and what do customers and competitors signal? Fast-moving environments typically demand adaptable structures and shorter decision cycles. 

Target outcomes: define the measurable results the new operating model must produce, such as faster product launches, improved customer experience, lower cost-to-serve, higher engagement, and stronger innovation throughput. 

Culture and leadership: assess readiness for empowerment, experimentation, and distributed decision-making. Strong operating models depend on leaders who reinforce new behaviors and teams who feel safe to learn, iterate, and improve. 

Making change stick through people 

Operating model design often focuses on structure, process, and technology. Implementation succeeds through people. The model delivers value when teams understand the intent, adopt the behaviors, and change how work gets done. 

People resist change when the purpose feels unclear or the shift feels unmanageable. The COM-B model for behavior change is a useful lens. For someone to adopt a new behavior, they need: 

  1. Capability (C): the skills and knowledge to do the behavior. 
  2. Opportunity (O): the right environment, resources, and support. 
  3. Motivation (M): the desire and reason to change. 

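As a lightweight illustration (not part of the COM-B literature itself), the three conditions can be treated as a per-team readiness check. The class name, 0-5 scale, and threshold below are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class ChangeReadiness:
    """COM-B conditions scored 0-5 for one team (illustrative scale)."""
    capability: int   # skills and knowledge to perform the new behavior
    opportunity: int  # environment, resources, and support available
    motivation: int   # desire and reason to change

    def adoption_likely(self, threshold: int = 3) -> bool:
        # COM-B treats all three as necessary conditions, so the
        # weakest dimension gates adoption, not the average.
        return min(self.capability, self.opportunity, self.motivation) >= threshold

team = ChangeReadiness(capability=4, opportunity=2, motivation=5)
print(team.adoption_likely())  # low opportunity blocks adoption despite high motivation
```

The point of using `min` rather than an average is that a highly motivated, well-trained team still stalls if the environment (opportunity) is missing, which is exactly the rollout risk the next sections address.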
Using COM-B, focus areas for successful rollout include: 

Explain the purpose and benefits (motivation): clearly communicate why the change matters and how it improves outcomes for teams and the enterprise. Connect the operating model to strategy, measurable results, and better day-to-day execution. When teams see the value and understand the direction, motivation rises. 

Equip teams with skills (capability): new operating models demand new behaviors and, increasingly, AI-enabled ways of working. Invest in training that covers collaboration rituals, agile delivery practices, data fluency, AI literacy (ethical use of generative AI), and AI oversight (how leaders validate and govern agent outputs). Reinforce the human skills that make cross-functional delivery work, such as feedback and active listening. 

Set up the environment for success (opportunity): skills scale when the environment reinforces them. That includes: 

  • New processes: redesign workflows to fit the new structure, including hand-offs, decision rights, and where AI agents support decisions. 
  • Supportive technology: provide the tools people need to collaborate, work transparently, and access the right data. 
  • Clear roles and responsibilities: define who owns what so teams can act with confidence. 
  • Remove friction: address physical and social barriers that block adoption by updating policies, aligning incentives, and replacing outdated habits. 
  • Sustain motivation: after launch, reinforce commitment through empowerment, leadership attention, and visible support mechanisms. 
  • Lead by example: leaders model the behaviors the operating model requires. 
  • Safe space to try: create a culture that supports experimentation, learning, and constructive feedback without fear. 
  • Recognize and reward: celebrate progress and reward teams for adopting new ways of working. 
  • Listen and adapt: gather feedback on what works, identify friction, and use what you learn to refine the model. 

Designing with purpose and strategic intent 

Designing and implementing a modern operating model is an iterative process: 

  1. Assess the current state: understand where you are today. 
  2. Set guiding principles: define the design rules anchored to strategy and outcomes. Use them to steer every operating model decision. 
  3. Test and learn: run smaller-scale pilots for new structures and ways of working, then iterate based on evidence. 
  4. Improve continuously: review and refine the operating model as conditions change across the enterprise and the market. 

With a deliberate, iterative approach and frameworks such as Elabor8’s Teaming Primes, organizations can design operating models that fit their context and accelerate progress toward strategic goals. 

The goal is clarity on who you are, where you are headed, and how you organize to deliver outcomes on that path. People make the model real through daily decisions and execution. 

Navigating the next wave of organizational change: insights from the front line 

Every organization is feeling the pressure to adapt faster than ever. Successful transformation demands clarity, commitment, and the right tools, far beyond a simple acknowledgement that change is necessary.  

We brought together a panel of industry leaders for a frank discussion on the challenges and successes of modernizing organizational practices. The conversation spanned topics from shifting mindsets to integrating new technology and revealed key insights for any company preparing for its next stage of evolution. 

The potholes on the road to agility 

A common pain point emerged immediately: the pace of change itself. Many established organizations move too slowly, weighted down by traditional practices and rigid annual cycles. This adherence to old ways often leads to weak prioritization and delayed value, especially when it comes to deciding what work truly matters.  

The real risk lies in prioritizing opinion over economic value. A traditional project-based mentality encourages big, front-loaded expectations with a fixed scope, leaving little room for learning or incremental delivery. This pattern erodes return on investment because much of the potential value surfaces only at the very end of a long project. 

Key challenges highlighted 

The funding shift: Moving from project cost accounting to a product-based operating model is a major financial and cultural hurdle. It requires senior leadership to fund autonomous value streams with an eye toward continuous delivery instead of a single, fixed outcome. 

Mindset and cultural entropy: Getting long-tenured employees to abandon deeply ingrained workflows is challenging. Leaders need to actively support and reinforce the new way of working to prevent teams from reverting to comfortable but ineffective old habits. 

Initial expectations vs. reality: While the promise of Agile is often speed, the immediate gain is usually increased transparency and earlier feedback. Adopting a new framework enables you to deliver valuable increments sooner and surface issues earlier, even when the underlying work remains complex. 

The strategic path to product-centricity 

For organizations committed to making the leap, one team’s transition from an older tool (Planview) to Targetprocess provided a powerful case study.  

A key to their success was acknowledging early that they could not do it alone. Bringing in external partners and coaches added a “new voice in the conversation” and helped accelerate adoption of a structured scaling framework such as SAFe (Scaled Agile Framework).  

A pivotal decision was their shift in focus: they used the migration as an opportunity to re-evaluate their core processes. Instead of configuring a new tool around broken, outdated processes, they first defined how they wanted to work and then configured the platform to support that modern, product-focused methodology.  

The result was rapid deployment of the new system, immediate visibility into data quality issues that had been hidden for years, and, most importantly, the ability to challenge existing norms based on rich, objective data. This new transparency provided a clear line of sight from strategy to execution, helping to eliminate noise and focus capacity on the most valuable work. 

Looking ahead: the AI-powered portfolio 

Looking ahead, AI dominated the conversation. For many organizations, AI investment is a given; the real question is how to manage the corresponding surge in new, complex initiatives. Technology leaders must treat AI as a critical area for portfolio investment and move beyond viewing it as just another cost line. That shift requires leaders to: 

Quantify business value: Accurately measure the financial impact of AI initiatives, whether through cost savings, new revenue streams, or risk reduction. 

Manage the portfolio for innovation: Use the enterprise portfolio tool to track AI investments alongside core product development and ensure alignment with top-level organizational goals. 

Harness AI for the portfolio itself: Use new AI capabilities to analyze portfolio data, predict outcomes, and flag potential bottlenecks so prioritization becomes an informed, data-driven activity rather than a political one. 
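As a toy illustration of what data-driven prioritization can look like, a WSJF-style score (cost of delay divided by job size, as popularized by SAFe) ranks initiatives numerically instead of politically. The initiative names and figures below are invented:

```python
# WSJF-style ranking: cost of delay / job size (higher score = do first).
initiatives = [
    # (name, cost_of_delay, job_size) -- all values are illustrative
    ("AI fraud triage", 21, 8),
    ("Chatbot rollout", 13, 3),
    ("Data platform upgrade", 8, 13),
]

ranked = sorted(initiatives, key=lambda i: i[1] / i[2], reverse=True)
for name, cod, size in ranked:
    print(f"{name}: WSJF = {cod / size:.2f}")
```

Even this crude arithmetic surfaces a useful insight: the smallest initiative wins when it carries a high cost of delay, which is the kind of evidence-based trade-off the panel argued should replace opinion-driven prioritization.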

Final takeaway: start simple, be tenacious 

For any organization on this journey, the advice from the panel was clear: anchor every decision in a specific business outcome. Be clear on the result you are trying to achieve, invest in the right expertise and tooling with confidence, and approach the work as a marathon that rewards sustained commitment.  

The incremental gains of true agility, transparency, and data-driven decision-making become the foundation for sustainable success. 


Change Management in AI Adoption: Effective Strategies for Managing Organizational Change While Implementing AI

Artificial intelligence (AI) is a living, learning capability that only achieves full impact when paired with human-centered change management. Think of AI and change management as a symbiotic pair: AI supplies the insight and automation that can reinvent how work gets done, while change management provides the human alignment, culture-building, and governance that let those insights take root and scale. Each amplifies the other.

Introducing AI reshapes how people make decisions, collaborate, and create value.

This blog explores how embedding proven change management practices into every stage of AI adoption—discovery, implementation, optimization, and value realization—turns isolated pilots into enduring, enterprise-wide advantage.

Successfully integrating AI into an organization requires personal investment from all affected parties, from leadership to frontline employees. Failure to secure this buy-in leads to wasted resources and resistance, as individuals grapple with fears of job displacement, loss of control, and uncertainty about AI’s purpose and impact.

To navigate this, organizations must adopt a strategic, human-centric approach, leveraging established change management practices. Success depends on:

  • Transparent, ongoing communication that addresses specific stakeholder concerns
  • Executive leadership that champions AI and cultivates adaptability
  • Early-stage engagement that co-designs the AI journey and validates value through pilot programs

Empowering people at every level is central to AI success. Organizations unlock strategic advantage by building a culture that values human-AI collaboration. Focusing exclusively on the mechanics of AI often sidelines its most important dimension: empowering your people.

1. Discovery & Strategy: Laying a Strong Strategic Foundation

Every successful AI adoption starts with a strong strategic foundation. First, surface the highest-impact opportunities across the business, from automating back-office workflows to embedding intelligence into customer-facing products. Use a proven readiness model to benchmark data, talent, and infrastructure against industry standards, revealing both strengths to leverage and gaps to close.

Translate those insights into a pragmatic roadmap that balances quick-win pilots with bold, long-horizon initiatives, each backed by a clear business case and defensible ROI model.

Throughout, bring the right voices to the table—executives, domain experts, compliance, and frontline teams—to secure sponsorship and reduce risk. Pair the technical plan with a targeted change management playbook: structured communications, hands-on enablement, and a culture-building program that turns wary employees into empowered AI champions.

The result is an AI strategy that is not just technically sound but financially disciplined and fully integrated into your organization’s DNA.

2. Implement & Integrate: Turning Vision into Action

With a strategy in place, delivery begins, translating ambition into capability that augments human decision-making and accelerates team performance. We weave AI into the tools teams already trust, whether Atlassian, ServiceNow, or bespoke platforms, so intelligence feels like a natural enhancement, not a disruptive shift.

Start with targeted pilots where the upside is clear and human expertise is indispensable, proving that algorithms combined with people outperform either alone. From day one, instrument workflows with performance and safety dashboards to detect and resolve drift, bias, or bottlenecks before they escalate.

In parallel, roll out role-specific enablement—from bite-size tutorials for frontline staff to deep-dive labs for data scientists—helping every employee master new capabilities and reinvest saved time into higher-value, creative work. By the end of this phase, AI is a trusted co-pilot that amplifies human judgment and frees talent to focus on what only people can do.

3. Tune & Optimize: Refining Performance and Experience

Post-implementation, sustained value depends on rigorous tuning. Establish a governance layer that blends security controls with clear accountability for model performance, ethics, and data privacy. A Center of Excellence—staffed by AI specialists and frontline power users—creates a real-time feedback loop for continuous improvement.

Ongoing scenario-based testing keeps bias, drift, and edge cases in check, ensuring AI systems remain trustworthy across conditions. Just as important, continue human enablement through onboarding sessions, refresher courses, and role-specific playbooks.

Targeted communications celebrate quick wins and share lessons learned, building confidence and curiosity across the organization.

4. Value Realization: Scaling Impact

When AI becomes an enterprise-wide capability, success is measured by how far and how sustainably it multiplies human potential. Wire each use case into a live scorecard of KPIs and value metrics, paired with ongoing pulse checks on adoption, readiness, and employee sentiment.

Advanced analytics surface underutilized areas or friction points, allowing teams to adjust both technology and supporting processes. Early wins are shared, scaled, and celebrated to accelerate momentum. Internal Centers of Excellence turn grassroots expertise into repeatable playbooks and reusable assets.

To ensure inclusive and ethical growth, maintain open forums and clear accountability across operations. This creates a scalable AI ecosystem that compounds value and supports the people driving your enterprise forward.

5. Future-Proofing: Sustaining Long-Term Advantage

AI is always evolving, and future-ready organizations evolve with it. Build for adaptability by championing continuous learning and expanding the AI frontier, from dashboards to prediction, prescription, and eventually autonomous support.

At every stage, AI should amplify human ingenuity. Algorithms handle the analysis so people can focus on strategy, creativity, and relationships. Promote this mindset through cultural touchpoints like guilds, lunch-and-learns, and communities of practice. Grow in-house talent that can lead future waves of innovation.

When technical roadmaps are interwoven with cultural evolution, AI becomes part of your organizational DNA: resilient, adaptable, and ready for what’s next.

Change Management Strategies for AI Success

  • Living Documentation: Keep artifacts current to reflect real-time changes in implementation.
  • Tailored Solutions: Adapt change approaches to your business context and tools.
  • Expert Guidance: Leverage experienced change professionals familiar with AI projects.
  • Proven Practices: Ground your approach in established principles from Lean Change Management or CMI.
  • People First: Involve employees early through workshops, feedback loops, and consistent communication.
  • Visual Clarity: Use change kanbans and impact maps to show how AI impacts different functions.

Earning Advocacy and Engagement

  • Communicate Clearly: Articulate the benefits of AI in plain language and address concerns transparently.
  • Empower Champions: Support influential employees who can advocate for AI change.
  • Invest in Training: Provide role-specific learning to build confidence and fluency.
  • Celebrate Wins: Highlight and amplify early successes to build enthusiasm and momentum.

The Bottom Line
Integrating AI into your organization requires more than just technical implementation. With a clear change strategy and a focus on people, you can orchestrate adoption, accelerate impact, and unlock the full potential of AI across your enterprise.

Technology Alone Won’t Cut It: Building an AI-Ready Culture to Support AI Transformation

Organizations invest heavily in AI tools and infrastructure—to the tune of well over $1 trillion globally since 2022—but often fail to generate meaningful results. The tech they’re implementing isn’t the issue. It’s the lack of cultural and operational readiness. AI only becomes valuable when it is embedded into the business, informing decision-making, improving workflows, and delivering measurable outcomes.

Many businesses treat AI adoption as an IT upgrade, assuming that implementing new tools will automatically improve efficiency. This approach frequently leads to underwhelming results. 

Companies that achieve real success take a different approach: they integrate AI into everyday operations, ensuring teams understand its capabilities and trust its recommendations. AI adoption requires organizations to rethink how work gets done, how decisions are made, and how data is used.

Change Management Determines AI’s Impact

AI disrupts workflows, decision-making, and job roles, making structured change management essential. Without clear leadership, employees may view AI as a threat rather than a tool. Resistance, confusion, and lack of trust can stall adoption.

Successful AI-driven organizations make change management a priority. Leaders must communicate AI’s role transparently and ensure employees see its value. 

When AI adoption is positioned as a tool for augmenting strategic decision-making, teams are more likely to engage. Deloitte, for example, has successfully integrated AI-powered document review into its legal and compliance teams by providing clear training and demonstrating tangible efficiency gains.

Companies also need to establish feedback loops. Employees who interact with AI daily should have input on refining models and improving usability. AI adoption should be an evolving process, not a one-time rollout.

Building a Data-Driven Culture to Make AI Work

AI adoption depends on a company’s ability to make informed, data-driven decisions. Moving from instinct-based decision-making to AI-backed strategies requires significant shifts in processes, incentives, and leadership priorities. But this isn’t going to happen if the organization’s culture doesn’t support that goal.

Trust is one of the biggest barriers to AI adoption. Employees often hesitate to rely on AI-generated recommendations because they don’t understand how AI reaches its conclusions. To bridge this gap, organizations must foster data literacy at all levels. Leadership should actively model data-driven decision-making, ensuring that teams see AI as a valuable input rather than an opaque black box.

Fostering trust also means maintaining human oversight, allowing users to validate AI-generated outputs, and continuously refining models based on user feedback. When employees understand and trust AI, they are more likely to integrate it into their decision-making processes.

For example, financial institutions use AI-powered fraud detection to flag suspicious transactions. AI models analyze transaction patterns in real time, identifying anomalies that human analysts might miss. Instead of replacing fraud investigators, AI enables them to focus on the most urgent cases.
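The pattern described above (flag outliers for a human to review rather than auto-decide) can be sketched with a simple z-score check. Real fraud models are far more sophisticated, and the function name, threshold, and amounts below are made up for illustration:

```python
from statistics import mean, stdev

def flag_for_review(amounts, new_amount, z_threshold=3.0):
    """Flag a transaction for human review if it deviates strongly
    from the account's historical spending pattern (z-score test)."""
    mu, sigma = mean(amounts), stdev(amounts)
    z = (new_amount - mu) / sigma if sigma else 0.0
    return abs(z) > z_threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 50.0]  # typical purchases
print(flag_for_review(history, 49.0))   # ordinary amount, no flag
print(flag_for_review(history, 900.0))  # anomalous; route to an analyst
```

Note that the function only flags; the decision stays with the investigator, which mirrors the human-oversight principle this section emphasizes.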

AI Must Be Embedded Into Business Systems

AI’s impact is diminished when it operates in isolation. Siloed data, disconnected workflows, and fragmented systems prevent AI from delivering its full value. The most successful organizations integrate AI into the platforms employees already use, such as CRM systems, finance software, and customer support tools. Intelligently orchestrating these systems across the organization ensures that AI insights are easily accessible and immediately actionable.

For instance, AI-powered customer support tools, like ServiceNow and Jira Service Management, are used by Amazon and Salesforce to analyze customer inquiries in real time and recommend responses based on previous interactions. This streamlines service delivery while maintaining human oversight, improving both speed and accuracy.

The key to success is phased integration. Instead of deploying AI across the entire organization at once, companies should focus on high-impact use cases first—areas where AI can deliver quick wins. Once teams see tangible benefits, broader adoption follows more naturally.

AI Can Work Even When Data Isn’t Perfect

Data quality is often cited as a barrier to AI adoption, but waiting for a flawless dataset can delay progress indefinitely. Many leading AI initiatives thrive despite incomplete or inconsistent data. The best approach is to deploy AI where it can add value while simultaneously improving data practices.

A prime example is Subtle Medical, which enhances medical imaging even with imperfect datasets. Their AI models improve image resolution and reduce scan times, demonstrating that AI can deliver measurable benefits despite data limitations.

Final Thoughts

AI adoption requires more than acquiring the right technology. It requires building a culture that enables AI to generate business value. Companies that embed AI into existing systems, integrate it with decision-making processes, and actively manage change see the greatest impact. By ensuring AI works alongside human expertise rather than attempting to replace it, organizations can achieve sustained improvements and unlock AI’s full potential.

Organizational Change That Works: A Smarter, Smoother Approach

We all know businesses must continuously evolve to stay competitive. Yet, traditional approaches to organizational change often fail due to widespread disruption, internal resistance, and competing priorities. 

Research shows that as much as 88% of large-scale transformation initiatives do not achieve their intended results, often because they attempt to drive change too quickly and without the necessary alignment across teams. Organizations need a method that minimizes risk, delivers value quickly, and builds toward long-term success.

Guided Evolution offers a more effective path. Rather than pursuing sweeping overhauls that can destabilize an organization, this approach prioritizes incremental, adaptive improvements that align with the business’s strategic goals. By evolving in a controlled, intentional manner, companies can avoid the pitfalls of transformation fatigue and achieve sustainable success.

What is Guided Evolution?

Guided Evolution is a structured, step-by-step approach to change that reduces friction while accelerating value realization. Unlike traditional transformation efforts that attempt to overhaul entire systems at once, Guided Evolution enables organizations to implement meaningful, scalable improvements with minimal disruption.

This approach works because:

  • Changes are integrated into daily operations rather than introduced as abrupt shifts.
  • Incremental improvements build confidence and momentum across teams.
  • The organization continuously adapts to emerging needs rather than struggling through a single, large-scale transformation.

Achieving true enterprise-wide transformation is not just about modernizing individual workflows or integrating systems—it requires an orchestrated approach that optimizes how people, processes, and technology interact. Organizations that take a fragmented approach often experience inefficiencies, while those that evolve their Systems of Work, Systems of Insights, and Systems of Engagement in harmony are best positioned for long-term success.

Intelligent Orchestration: The Three Systems That Must Evolve Together

Change cannot happen in isolation. A truly effective transformation requires all three foundational systems within an organization to evolve in sync. Without coordination, isolated improvements in one area may create new inefficiencies elsewhere.

Guided Evolution ensures that transformation across these systems is deliberate and cohesive, reducing friction and maximizing impact.

System 1: Systems of Work (How the Organization Operates)

The way an organization operates—its workflows, tools, and processes—determines its efficiency and scalability. Many companies struggle with outdated systems and disjointed workflows that hinder productivity. Fragmented processes create inefficiencies, forcing employees to navigate multiple platforms or rely on manual workarounds that slow operations. 

For example, one study found that “70% of employees spend upwards of 20 hours a week chasing information across different technologies instead of doing their job.” Additionally, as businesses grow, scaling operations without a structured approach to workflow optimization becomes increasingly challenging, potentially costing organizations millions. 

Guided Evolution addresses these issues by introducing targeted automation initiatives that streamline workflows without overwhelming employees. Rather than attempting full-scale automation from the outset, businesses can begin by identifying the most inefficient processes and gradually implementing AI-driven enhancements. 

This phased integration allows teams to adjust at a manageable pace, increasing adoption rates and minimizing disruption. Cross-functional collaboration also improves as silos are gradually eliminated, making the transition toward optimized operations smoother and more sustainable.

System 2: Systems of Insights (How the Organization Makes Decisions)

Organizations thrive when they can make informed, data-driven decisions, yet many struggle with limited visibility, data inconsistencies, and decision-making bottlenecks. A lack of real-time insights prevents leaders from responding proactively to challenges, while siloed data makes it difficult to draw meaningful conclusions. When data remains fragmented across departments, translating insights into measurable actions becomes a cumbersome and often delayed process.

Guided Evolution helps overcome these challenges by first establishing a strong foundation for real-time insights. Implementing connected dashboards creates a unified source of truth, ensuring that decision-makers have access to accurate and timely data. 

From there, organizations can gradually apply predictive analytics to shift from reactive to proactive strategies, using historical patterns to anticipate future trends. 

Over time, AI-driven recommendations refine resource allocation and operational efficiencies, ensuring that insights lead directly to strategic improvements rather than remaining isolated reports with no clear action path.
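The shift from reactive reporting to anticipating trends can be illustrated with a deliberately naive projection: extend the average step change over recent observations. Real predictive analytics would use proper forecasting models; the function, window, and metric below are invented:

```python
def project_next(values, window=3):
    """Naive trend projection: extend the average step change over the
    last `window` intervals (illustrative, not a production forecast)."""
    recent = values[-(window + 1):]
    avg_step = (recent[-1] - recent[0]) / (len(recent) - 1)
    return values[-1] + avg_step

weekly_ticket_backlog = [120, 128, 135, 145, 158]  # invented metric
print(project_next(weekly_ticket_backlog))  # rising trend: act before it escalates
```

Even a sketch like this changes the conversation a dashboard enables: instead of reporting that the backlog grew, it prompts the question of what next week looks like if nothing changes.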

System 3: Systems of Engagement (How the Organization Connects with People)

An organization’s ability to engage with employees and customers directly influences satisfaction, loyalty, and long-term success. However, many businesses struggle with disjointed engagement strategies that result in inconsistent experiences. 

Customers and employees alike expect seamless, personalized interactions—with one survey reporting that 82% of customers prefer chatbots over waiting for a representative—yet disconnected systems often create frustration. Manual processes further exacerbate the issue, slowing response times and preventing organizations from adapting to changing expectations.

Guided Evolution fosters stronger engagement by first focusing on high-impact, low-risk optimizations in customer service and employee workflows. By identifying areas where quick improvements can deliver immediate benefits, organizations build momentum for deeper transformation. 

AI-driven personalization can then be introduced in phases, allowing engagement strategies to evolve based on data rather than guesswork. Finally, real-time feedback loops ensure that interactions remain relevant and continuously improve, reinforcing a dynamic engagement model that adapts to both customer and employee needs. 

Why a Guided Approach to Change Matters

Large-scale transformation efforts often fail because they demand too much, too fast, leading to resistance and operational disruption. Guided Evolution provides an alternative that ensures sustainable change by making transitions manageable, measurable, and scalable.

Why This Works Better:

  • Reduces resistance by introducing more gradual shifts rather than radical disruptions.
  • Builds momentum through incremental wins that demonstrate value and ROI early in the process.
  • Creates a flexible framework that allows organizations to course-correct and refine their strategies as they evolve.

Example: A Realistic Path to AI-Driven Optimization

Rather than deploying AI-driven automation across the entire business in one sweeping initiative, organizations should start with the areas where automation can eliminate bottlenecks most effectively, such as IT workflows. Once success is demonstrated, AI-driven enhancements can expand into other areas, building trust and adoption across teams.

The Path Forward: Continuous Evolution

The ultimate goal of transformation is to create an enterprise where technology, processes, and people work in seamless coordination, all at the speed of change. However, this cannot be achieved overnight. The only way to get there is through intelligently orchestrated, step-by-step evolution across Systems of Work, Insights, and Engagement.

Organizations that embrace this guided approach to change will be better positioned to adapt, grow, and lead in the market of the future. The time to start is now.

Biological Metaphors for Organizational Design: Learning from Natural Intelligence Frameworks

Organizations, much like living organisms, exist in constantly changing environments. To survive and thrive, they must adapt, responding to new pressures, challenges, and opportunities. While traditional management models often emphasize rigid hierarchies and control mechanisms, nature provides a different blueprint—one built on adaptability, emergence, and distributed intelligence.

By studying biological systems, we can gain valuable insights into organizational design. The principles of evolution, self-organization, emergence, and distributed intelligence reveal pathways for creating adaptive, resilient enterprises. Just as ecosystems do not resist complexity but harness it for survival, organizations can rethink structure and strategy to embrace change as a competitive advantage.

1. The Parallel Between Biological Evolution and Organizational Adaptation

Evolution is not about the survival of the strongest but the survival of the most adaptable. In ecosystems, species find evolutionary niches—unique roles that ensure their survival. Likewise, organizations must continually refine their value propositions to carve out sustainable competitive advantages.

  • Biological Example: Darwin’s finches evolved distinct beak shapes based on available food sources, demonstrating that adaptability, rather than brute force, determines success.
  • Organizational Analogy: In the business world, companies that iterate, experiment, and pivot in response to market shifts are the ones that endure. Just as ecosystems foster diversity to sustain balance, organizations must cultivate innovation and learning to remain relevant.

This aligns with the idea of turning complexity into a competitive advantage rather than seeking to simplify it. Complexity can be an asset when managed correctly, enabling organizations to respond dynamically rather than reactively.

2. Principles of Emergence in Nature and Organizations

In nature, emergence occurs when simple interactions among individual components lead to complex, adaptive behavior. Ant colonies and schools of fish display remarkable coordination without central command.

  • Biological Example: In ant colonies, no single ant dictates the actions of the group. Instead, ants follow simple rules and respond to environmental cues, creating a sophisticated system that efficiently finds food, builds structures, and defends territory.
  • Organizational Application: When companies encourage decentralized decision-making, they enable emergent solutions that would be impossible under rigid, top-down control. Agile and Lean methodologies leverage this principle, allowing teams to self-organize and innovate in response to challenges.

Organizations that design for emergence rather than enforcing control can unlock new levels of agility and responsiveness.

3. Self-Organization: A Blueprint for Scalability and Resilience

Self-organization is a core feature of natural systems, where order arises through local interactions rather than central direction. This principle applies to everything from cellular structures to bird flocks in flight.

  • Biological Example: Flocks of birds exhibit coordinated movement patterns without a leader dictating direction. Each bird adjusts based on its neighbors, ensuring cohesion while maintaining flexibility.
  • Implication for Organizations: Enterprises can encourage autonomy while maintaining shared goals, much like how biological systems self-organize. Adaptive workflows, empowered teams, and flexible governance structures allow organizations to scale efficiently without losing coherence.

Rather than enforcing rigid operational models, organizations should create conditions where structure emerges naturally, balancing autonomy with alignment.

4. Distributed Intelligence: A Model for Collective Learning

Nature provides countless examples of distributed intelligence, where no single entity possesses all knowledge, yet the system as a whole functions adaptively.

  • Biological Example: Neural networks process vast amounts of information through distributed connections rather than a single command center. Similarly, fungal mycelial networks transfer nutrients and signals across vast forest ecosystems, enabling collective survival.
  • Organizational Application: Companies can foster distributed intelligence by democratizing data and empowering decision-making at all levels. Systems of Insight—where knowledge flows across teams rather than bottlenecking at the top—enable organizations to respond faster and more effectively to change.

By leveraging AI-driven analytics as an "enterprise nervous system" and intelligently orchestrating the technology and processes required to support the strategy, organizations can process and react to internal and external stimuli dynamically.

5. Conceptual Models for Organizational Learning and Transformation

Just as genetic material encodes an organism’s traits, organizations carry an inherent DNA—a set of values, principles, and structures that shape their behavior.

  • Organizational DNA: Organizations that intentionally shape their culture, knowledge-sharing practices, and decision-making frameworks create a foundation for long-term adaptability.
  • Ecosystem Thinking: Organizations should be viewed as interconnected ecosystems where various functions interact symbiotically, not as isolated entities. Encouraging mutual support across departments strengthens resilience and innovation.
  • Guided Evolution: Change does not have to be disruptive. Evolution in nature occurs through gradual, iterative refinements. Organizations that experiment in small, controlled ways can drive meaningful transformation over time without destabilizing operations.

Many experts in organizational theory believe the “organization as organism” metaphor falls apart under conditions of continuous change. We believe this concept of guided evolution makes the difference. With expert guidance leading steady, iterative improvements, organizations can rise to the challenge of continuous change and even turn it into an advantage.

6. Actionable Insights for Leaders

Leaders seeking to build adaptive organizations can take key lessons from biology:

  • Adopt Adaptive Structures: Move from rigid hierarchies to flexible, intelligently orchestrated models that enable resilience.
  • Embed Systems Thinking: Recognize how different functions interact, ensuring alignment across people, processes, and technology.
  • Experiment and Iterate: Treat initiatives like evolutionary experiments by constantly learning, refining, and adapting based on results.

By embracing these principles, organizations can move beyond static models of operation and design structures that evolve naturally in response to the world around them.

Conclusion

Success in today’s world is about navigating change effectively. Stability is stagnation. Just as ecosystems thrive through adaptability, organizations that embrace biological principles—emergence, self-organization, and distributed intelligence—will be best positioned for long-term resilience and growth.

Enabling ITSM Change Management Using Jira Service Management

In the fast-paced world of IT and software development, changes are inevitable. From software updates to infrastructure modifications, transitions can often lead to challenges and frustrations within an organization. But what if there was a way to manage these changes effectively, reducing the impact and scope of disruptions? Enter Jira Service Management (JSM), a powerful tool for enabling ITSM change management.

This is the first in a three-part series covering ITSM principles and applying them using JSM:

  • Enabling ITSM Change Management With JSM
  • Streamline Your ITSM—Service Catalog and CMDB Powered by JSM 
  • Perfecting Customer Management Using JSM

Change management is crucial in any organization. Without it, companies run the risk of server downtime, leading to confusion, stress, and frustration among employees and users alike. Downtime not only affects productivity but can also tarnish a company’s reputation.

This article is based on the webinar, How to Enable Change Management With Jira Service Management. Watch the recording now to learn more about what’s discussed here and to see a thorough demo of JSM reflecting the key learning points. 

Unpacking the basic change management concepts 

The webinar linked above covered some important concepts every IT professional should know:

Change Management and Change Enablement

At the core of any IT operation lies the ability to manage and enable change effectively. But what do these terms mean in the context of IT services and software development?

Change management, as defined by ITIL, is an Information Technology Service Management (ITSM) practice designed to minimize risks and disruptions. It ensures that critical systems and services remain functional amidst changes. This could mean anything from updating API documentation to deploying code to different environments. Any addition, modification, or removal that directly impacts services, processes, configurations, or documentation falls under this umbrella.

On the other hand, change enablement is a term used in Atlassian documentation. It refers to team standards that permit users to handle change requests effectively. Unlike change management, which is often associated with processing changes from outside, change enablement facilitates changes originating from within the organization.

Implementing change using ITIL 

It’s important not to rush the implementation of change. As counterintuitive as it might sound, taking extra time to set up and stick to a change management program can actually improve the process. It might seem to slow down work initially, but embracing ITIL patterns and automation will improve efficiency and reduce the heavy costs associated with botched tasks. The mantra here is to slow down to go fast.

Automation is a valuable tool for minimizing the burden of heavier tasks like documentation. Traditional tools may have complex, manual components that slow down processes and increase the chance of error. In contrast, tool automation can alleviate this heaviness. For example, automating ticket creation and linking various components can significantly reduce the time and effort required for these tasks.
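As a concrete (hypothetical) illustration of automated ticket creation, the sketch below builds the JSON payload for Jira's REST API create-issue endpoint (`POST /rest/api/2/issue`). The project key, issue type name, risk label, and site URL are assumptions for illustration, not values JSM prescribes:

```python
import json

JIRA_URL = "https://your-domain.atlassian.net"  # hypothetical site

def build_change_request(summary: str, description: str, risk: str) -> dict:
    """Build the JSON payload for POST /rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": "OPS"},        # assumed project key
            "issuetype": {"name": "Change"},  # assumed change issue type name
            "summary": summary,
            "description": description,
            "labels": [f"risk-{risk}"],       # simple risk tag for triage
        }
    }

payload = build_change_request(
    "Deploy API gateway v2.3", "Rolling deployment to production", "low")
print(json.dumps(payload, indent=2))
# An actual request would look like (not executed here):
# requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload,
#               auth=("user@example.com", api_token))
```

In practice, a rule like this would be wired into JSM's built-in automation or a CI/CD webhook, so each deployment generates and links its change ticket without manual effort.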

Explore how AI-powered service management can take automation to a whole new level!

Roles and responsibilities in change management

Two key roles in change management are the Change Advisory Board (CAB) and the Release Manager.

Change Advisory Board (CAB)

The CAB plays a pivotal role in overseeing changes within an organization. Composed of senior individuals knowledgeable about the area undergoing change, the CAB provides a holistic perspective on the implications and potential impacts of proposed changes.

Release Manager

Working closely with the CAB is the Release Manager. This role involves reviewing content submitted by the development team, ensuring all aspects of a change request are in place, from documentation to testing assurances. The Release Manager serves as an agent to the CAB, mitigating risk through standardization and completion of requests.

In addition to their review responsibilities, the Release Manager coordinates the personnel involved in implementing changes, checks schedules for conflicts, tracks the process with the CAB, and ensures communication among all stakeholders.

The importance of timing in change management

However, effective change management isn’t just about having the right roles in place. It’s also about timing and planning. 

Respecting the process means submitting changes well before the release date. Common issues like time crunches for development and deployment can pose challenges to the change management process. To alleviate this, sufficient time should be allocated for change management processes during project planning. For example, incorporating an extra sprint for deployments could help manage changes more effectively.

Categorizing changes in a technology organization

Changes can be categorized based on size, risk, and urgency. Understanding these categories is crucial for efficient change management, particularly in a Continuous Integration/Continuous Deployment (CI/CD) setting.

There are three main types of changes:

  1. Standard Change: A low-risk, pre-authorized change that is well understood, fully documented, and proven. Due to CI/CD pipeline practices, standard changes are becoming more frequent.
  2. Normal Change: This refers to non-emergency deployments that must be scheduled and planned. These changes typically require a review from the Change Advisory Board (CAB).
  3. Emergency Change: These are changes that require immediate fixes due to an urgent issue. They often involve a separate procedure with a shorter timescale for approval and implementation.
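
The three categories above can be sketched as a small triage function; the attribute names and decision order are illustrative assumptions, not ITIL definitions:

```python
def classify_change(pre_authorized: bool, urgent: bool) -> str:
    """Map a change request's attributes to one of the three change types."""
    if urgent:
        return "emergency"   # separate procedure, shortened approval path
    if pre_authorized:
        return "standard"    # low-risk, documented, proven; no CAB review
    return "normal"          # scheduled and planned; CAB review required

print(classify_change(pre_authorized=True, urgent=False))   # → standard
print(classify_change(pre_authorized=False, urgent=False))  # → normal
```

Checking urgency first mirrors the ITIL intent: an emergency change follows its own expedited procedure even if a similar change would normally be pre-authorized.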

Regardless of type, no change, however small, should bypass the established change management process. Each change must be properly documented, reviewed, and authorized to ensure minimal disruption to services and operations.

Moreover, understanding the nature of these categories and the associated efforts helps organizations manage changes efficiently. It provides clarity on the level of risk involved, the amount of effort required, and the urgency of the change.

Organizations may need to adjust internal policies based on the perceived risk level of each change. For instance, well-performing teams that have demonstrated their ability to manage risks effectively might be allowed to make production deployments multiple times per day.

Embracing ITSM change management in Jira Service Management

Effective change management strategies create a stable environment and help avoid panic-driven experiences. And at the heart of this strategy lies Jira Service Management.

JSM is a comprehensive tool that assists organizations in planning, controlling, and understanding the impact of changes on their business. It simplifies the change management process, from the initial change request to implementation.

With the ability to provide richer contextual information around changes, JSM empowers IT operations teams to better manage and mitigate potential disruptions. Furthermore, its customizable workflow—designed based on ITIL recommendations—helps service agents learn and adapt to change management processes. By implementing a change management process in JSM, companies can keep track of all changes, ensuring nothing slips through the cracks.

Jira Service Management’s alignment with ITIL 4 is one of its key strengths. This association allows it to offer a comprehensive solution that aligns with software development tools and agile practices, making it a favorite amongst software professionals.

This alignment with ITIL 4 makes ITSM change management in Jira Service Management less bulky than its predecessors and more adaptive to an agile mindset. This adaptivity is further enhanced by the free ITSM template within JSM. It includes change, incident, new feature, problem, and service request issue types along with the corresponding request types, giving users a head start in their change management journey.

Additional customizable templates are available as well. 

The ease of use and familiarity of Jira Service Management reduces barriers to entry, making it approachable for professionals from the software side. It’s a tool designed to facilitate and not complicate, making it a go-to for many organizations seeking to streamline their change management processes.

Conclusion

In conclusion, the adoption of change management and change enablement practices, underpinned by ITIL patterns and automation, can bring about significant improvements in the efficiency and effectiveness of tasks within an organization. With tools like Jira Service Management, which aligns with ITIL 4 and supports agile practices, organizations can navigate changes smoothly, reducing the risk of disruptions and costly errors.

The journey towards effective change management may seem slow initially, but remember, slowing down to go fast can lead to long-term benefits. With the right tools and guidance, you can minimize risks, improve efficiency, and foster a culture that embraces change.

To dive deeper into how JSM can revolutionize your change management process, consider watching the recorded webinar, How to Enable Change Management With Jira Service Management. It offers practical insights and a demo that can help you understand the capabilities of Jira Service Management better.

Finally, SAFe 6.0 Puts Continuous Learning Culture Where It Belongs

There is a real buzz in the SAFe community with the new changes in SAFe 6.0…

  • Strengthening the foundation for business agility
  • Empowering teams
  • Accelerating flow
  • Enhancing business agility across the business
  • Building the future with AI, Big Data, and Cloud, and
  • Delivering better outcomes with measure and grow and OKRs

Something that may have gotten lost in the buzz created by these themes is how the continuous learning culture competency has finally become part of the foundation for strengthening business agility.

For me, this is a reminder that the continuous learning culture competency is not an afterthought or a nice-to-have, but rather a foundational competency like Lean-Agile leadership. Why is this important?

SAFe 6.0 is about business agility

SAFe 6.0 is a framework for business agility. Business agility is the organizational capability for achieving economic advantage by sensing and responding faster than our competitors.

To do this, we need to learn faster than our competitors. And that means developing a continuous learning culture. Moving the continuous learning culture competency icon from its former side position in the framework to the foundation bar finally recognizes the critical importance of continuous learning for business agility.

But sensing and responding to change is not just about learning and exploiting new business opportunities, as suggested by the Business Agility value stream. It’s also about sensing and responding to new opportunities to improve our way of working. Our ways of working must also embrace change through learning. Through iteration retrospectives, inspect-and-adapt problem-solving workshops, measure-and-grow workshops, and communities of practice, we have opportunities to improve our way of working iteratively. I often tell my clients: if your way of working is the same two years from now as it is today, then you have missed the point.

Avoiding “cargo cult Agile”…

There is a misperception in much of our industry that if we ritualistically execute a set of Agile practices, we will be agile. The term “cargo cult Agile” refers to this mindset.

No framework is immune to the cargo cult mindset. For example, SAI provides the Big Picture and a wealth of training assets and supporting resources that enable large organizations to get started on their business agility journey. Unfortunately, these same enabling assets that help start the journey can devolve into a cargo cult of prescriptive practices if the organization does not develop a continuous learning competency. Worse, when leadership lacks a growth mindset, their go-to strategy is often to strictly enforce compliance with the practices that made it easy to begin.

…by using SAFe as directed

SAFe roles, practices, and artifacts come from proven patterns for realizing Agile and Lean principles. There is nothing sacred about the practices themselves. They are practices to help you begin realizing business agility throughout the enterprise. They are not rules. If you want to adjust, you simply need data to inform your decisions.

Improvement backlog items need to be written just like a feature, with a benefit hypothesis. We need to know what useful performance and outcome data we can collect to determine whether the improvement brought real benefit. Otherwise, when adapting and evolving practices, we risk changing a practice simply to avoid the harder changes needed to realize business agility. How many times have we heard teams complain that they can’t get anything done in a short timebox and ask if we can lengthen it?

It comes back to continuous learning

This is why I am excited to see the continuous learning culture competency added to the foundation for business agility. Without developing both the Lean-Agile leadership and continuous learning culture, it is unlikely an organization can derive the benefits of business agility, regardless of how many practices they can precisely execute.

Dive deeper into all the changes in SAFe 6.0.