
Davos Insights: Turning VUCA into Value with Neuro-Symbolic AI

Ashwin Rao

EVP of AI at o9 & Adjunct Professor at Stanford

February 11, 2026

14 min read

The Davos signal: VUCA is rising, and leaders want agentic AI that delivers outcomes

I recently attended the World Economic Forum in Davos, and two themes dominated conversations between leaders across industries.

The first was unmistakable and irrefutable: volatility, uncertainty, complexity, and ambiguity are rising faster than most organizations can absorb. Supply disruptions, cost shocks, channel shifts, and competitive moves are no longer episodic; they are continuous.

The second theme was more frustrating. While agentic AI has captured executive imagination, most pilots are struggling to move beyond impressive demos. Leaders spoke candidly about the same barriers: lack of trust, weak auditability, limited actionability, and difficulty governing AI-driven decisions at enterprise scale. The problem, to be clear, is not ambition. It is operating models.

In a VUCA environment, value is rarely lost because organizations lack data. It leaks because decisions move too slowly across silos, learning is not institutionalized, and execution drifts from intent. Demand, supply, and commercial plans are made in different systems, at different granularities, with different assumptions. Cross-functional decisions require multiple reconciliations and handoffs, so actions arrive too late to matter. Teams review performance, but root causes are not systematically tied back to the decisions and assumptions that produced outcomes, so the same mistakes repeat.

In this environment, latency itself becomes a direct cost. By the time a decision is made, the situation has already changed. What enterprises really need is not AI in isolation, but AI embedded into how decisions are made, executed, and improved. This is where the Agile, Adaptive, Autonomous Planning & Execution (APEX) model comes in.

APEX: the operating model that integrates horizontally and vertically

APEX is the ideal operating model for Integrated Business Planning & Execution (IBPE). It is designed to connect decisions across planning and execution horizons, from long-range strategic decisions (3–5 years) to tactical decisions (quarters and annual plans) to execution decisions that impact the next days and weeks, so that strategy and execution remain continuously aligned.

APEX brings demand planning, supply planning, and commercial planning into an IBPE process, constrained by financial guardrails. This horizontal integration is critical: in reality, every meaningful decision is a trade-off across functions and time.

APEX integrates horizontally across functions, vertically across levels of granularity, and across time horizons, from multi-year strategy to weekly execution.

This is the foundation for o9’s product suite: Demand Planning, Supply Planning, Commercial Planning, and Integrated Business Planning, built on the same platform and decision logic. APEX is not just faster planning; it is a systematic approach to becoming:

  • Agile: detect changes early, diagnose quickly, and respond with coordinated actions.
  • Adaptive: learn from every cycle (what worked, what failed, under which assumptions) and improve policies and playbooks over time.
  • Autonomous (the North Star): progressively automate well-governed decisions and workflows, while keeping humans in control of strategy and high-stakes judgment.

To make this concrete, think of a standard situation card: a variance between an inventory plan and the outcome, used throughout this article to illustrate agile response, adaptive learning, and increasingly autonomous decision-making with AI agents.

  • Plan (what we intended): End-of-month inventory for a product category in Region West = 4.2 weeks of supply; service level target = 98%
  • Outcome (what happened): 6.8 weeks in DC1 (excess) + stockouts in DC3 (lost sales); service = 93%
  • Business impact: working capital +$X million, expedite +$Y, markdown risk +$Z, revenue leakage
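Conceptually, the situation card is just a small structured record of plan versus outcome. A minimal Python sketch follows; the field names and the DC3 inventory figure are illustrative assumptions, not o9's schema (the card above only notes stockouts at DC3, without a number):

```python
from dataclasses import dataclass

@dataclass
class SituationCard:
    """A plan-vs-outcome variance record (illustrative fields, not o9's schema)."""
    scope: str               # e.g., "Category X / Region West"
    planned_wos: float       # planned weeks of supply
    actual_wos: dict         # realized weeks of supply per distribution center
    service_target: float    # target service level
    service_actual: float    # realized service level

    def variance(self) -> dict:
        """Weeks-of-supply gap per distribution center (positive = excess)."""
        return {dc: round(wos - self.planned_wos, 1)
                for dc, wos in self.actual_wos.items()}

card = SituationCard(
    scope="Category X / Region West",
    planned_wos=4.2,
    actual_wos={"DC1": 6.8, "DC3": 1.0},   # DC3 value is a hypothetical placeholder
    service_target=0.98,
    service_actual=0.93,
)
print(card.variance())  # DC1 shows +2.6 weeks of excess, DC3 a shortfall
```

Representing the miss as data, rather than prose, is what later lets agents attribute it, recompute on it, and learn from it.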

The executive question: “What happened and why, what should we do now, and what should change so this doesn’t repeat?” Answering that question, repeatedly and reliably, is the core purpose of the APEX model.

What an enterprise AI agent really is (and why most agentic AI pilots stall)

o9 has built a platform for the APEX model with a path-breaking approach to agentic AI that differs sharply from the typical agentic AI being tried in enterprise settings today. Let’s first define what an enterprise AI agent is and explain why most agentic AI pilots don’t make it to enterprise practice. In business terms, an AI agent is a digital teammate that can understand a goal, gather the right context, run analyses, take actions through systems, and stop when it has delivered an answer, all while staying within governance and safety constraints.

Formally, an AI agent has the following five components:

  • LLM: generates language-based thoughts, plans, tool calls, and decisions, but has no memory, tools, or goals by itself, and is stateless across calls.
  • Context construction: deciding what enterprise data, knowledge, and history the LLM should see at each step.
  • Tools and actions: the ability to query systems, run planning and optimization, and trigger workflows.
  • Control and cognition loop: defining goals, planning steps, re-planning when assumptions change, reasoning, and knowing when to stop.
  • Safety and evaluation: guardrails, approvals, budgets, traceability, and performance monitoring.
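The five components can be sketched as a minimal, runnable agent skeleton. Everything here is a stub invented for exposition (none of these classes or tool names are o9 APIs); the point is how the components fit together:

```python
class StubLLM:                       # 1) LLM: stateless; proposes steps as plain data
    def plan(self, goal, context):
        return {"tool": "variance_report", "args": context}
    def is_done(self, trace):
        return len(trace) >= 1       # stop once one analysis has run

class StubGraph:                     # 2) context construction: what the LLM should see
    def gather(self, goal):
        return {"plan_wos": 4.2, "actual_wos": {"DC1": 6.8, "DC3": 1.0}}

def variance_report(args):           # 3) a tool: runs an actual enterprise analysis
    return {dc: round(a - args["plan_wos"], 1)
            for dc, a in args["actual_wos"].items()}

class StubGuardrails:                # 5) safety and evaluation: decision rights
    def allows(self, step):
        return step["tool"] in {"variance_report"}   # read-only tools auto-approved

def run_agent(goal, llm, graph, tools, guardrails, max_steps=5):
    """4) Control and cognition loop: gather context, act, know when to stop."""
    trace = []
    for _ in range(max_steps):
        context = graph.gather(goal)
        step = llm.plan(goal, context)
        if not guardrails.allows(step):
            trace.append(("blocked", step))
            break
        trace.append((step["tool"], tools[step["tool"]](step["args"])))
        if llm.is_done(trace):
            break
    return trace                     # the trace doubles as an auditable evidence log

trace = run_agent("Why did inventory miss?", StubLLM(), StubGraph(),
                  {"variance_report": variance_report}, StubGuardrails())
print(trace)
```

Note that the LLM only proposes steps; the guardrails decide what may run, and the trace, not the LLM's memory, is the system of record.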

Our view is that most AI agents today rely far too heavily on component 1 (LLM). LLMs are very powerful for language-based planning and reasoning, but they do not have an intrinsic understanding of the ground realities of an enterprise (e.g., decision rights, policies, constraints, auditable evidence, etc.). In the inventory situation card example, LLM-heavy agents often produce a plausible narrative (forecast error, supplier issue, promo effect) but cannot reliably tie it to the exact plan version, assumptions, constraints, and decisions that were in force, so leaders cannot audit it, act on it, or govern it at scale. That’s why pilots look impressive in demos but stall in enterprise practice. 

We believe that in enterprise settings, component 1 (LLM) should be used in limited, conservative ways and instead rely much more on the other four components (context construction, tools/actions, control/cognition loop, and safety/evaluation). With this approach, an AI agent would tackle the inventory situation card example in the following manner:

  • LLM: turns “Why did inventory miss?” into a precise, structured enterprise question (SKU/location/time window/plan version/metrics).
  • Context construction: pulls the right plan versions, assumptions, constraints, and decision history to avoid hand-wavy answers.
  • Tools and actions: runs the actual enterprise analyses—variance attribution, root-cause decomposition, scenario runs, and triggers workflows.
  • Control and cognition loop: iterates: “Is this variance demand-driven, supply-driven, or policy-driven?” then re-plans as new evidence arrives.
  • Safety and evaluation: enforces decision rights (who can release inventory, approve expedites, change service targets), and logs evidence.
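The cognition loop's "demand-driven, supply-driven, or policy-driven?" triage can be caricatured in a few lines. The driver names, contributions, and the winner-take-all rule are all invented for illustration; real attribution would come from the analyses described above:

```python
def classify_variance(drivers: dict) -> str:
    """Toy triage: attribute an inventory miss to its dominant driver class.

    `drivers` holds signed contributions (in weeks of supply) from hypothetical
    attribution runs; categories mirror the demand/supply/policy split.
    """
    buckets = {
        "demand-driven": abs(drivers.get("forecast_error", 0.0)),
        "supply-driven": abs(drivers.get("supplier_slippage", 0.0)),
        "policy-driven": abs(drivers.get("safety_stock_policy", 0.0)),
    }
    return max(buckets, key=buckets.get)

# DC1's +2.6 weeks of excess, decomposed by (hypothetical) attribution analyses
drivers = {"forecast_error": 1.8, "supplier_slippage": 0.3, "safety_stock_policy": 0.5}
print(classify_variance(drivers))
```

In practice the loop would re-run this triage as new evidence arrives, which is exactly the re-planning behavior described in the bullet above.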

I will now briefly explain how we achieve this type of enterprise-healthy agentic approach with our neuro-symbolic agentic AI architecture.

o9’s Neuro-Symbolic Agentic AI: the 10–80–10 model that flips the typical agent architecture

Neuro-symbolic AI combines two complementary strengths: Neural AI and Symbolic AI. Neural AI (e.g., LLMs) is excellent at understanding and synthesizing messy, unstructured information. Symbolic AI is excellent at precision, constraints, traceability, and grounding in the enterprise’s “rules of the game.” The combination (neuro-symbolic) delivers both adaptability and trust. o9 has meticulously designed and implemented an enterprise-grade neuro-symbolic agentic AI platform that works exceptionally well for enterprise planning and execution.

o9’s key design choice is to flip the typical agentic architecture. Many agentic systems delegate the majority of work (≈80%) to an LLM and then attempt to patch reliability with prompting and retrieval workarounds. o9 uses LLMs (neural) for the first and last mile (≈10% each), and relies mainly on our enterprise knowledge graph (symbolic) for the middle mile (≈80%):

  • First mile (≈10%), Neural LLMs. What the agent does: understands the user’s situation and intent; translates it into a structured problem statement that Symbolic AI will understand. Why it matters: makes the experience conversational while preserving meaning and scope.
  • Middle mile (≈80%), mainly Symbolic AI. What the agent does: runs enterprise reasoning on a grounded model: plans, constraints, decisions, scenarios, and trade-offs. Why it matters: delivers accuracy, auditability, and actionability across functions and horizons.
  • Last mile (≈10%), Neural LLMs. What the agent does: explains the results of Symbolic AI in business terms; produces narratives, reports, and recommended actions with rationale. Why it matters: builds trust and enables fast executive decision-making.

Hence, o9’s neuro-symbolic agents delegate only about 20% of the work to neural AI (first + last mile) and about 80% to symbolic AI (the middle mile). So the 80-20 of typical agent architectures is flipped by o9 to 20-80 (or more precisely, 10-80-10) to deliver robust enterprise-grade agentic AI. 
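In control-flow terms, the 10-80-10 split is a three-stage pipeline. The sketch below is purely illustrative (the function bodies hard-code what an LLM and the knowledge graph would actually produce, and none of these names are o9 APIs):

```python
def first_mile(question: str) -> dict:
    """~10%, neural: translate a free-form question into a structured problem."""
    # A real system would use an LLM here; the parse is hard-coded for illustration.
    return {"region": "West", "metric": "weeks_of_supply", "baseline": "plan_v3"}

def middle_mile(problem: dict) -> dict:
    """~80%, symbolic: grounded reasoning over plans, constraints, and decisions."""
    # Stand-in for diagnose -> project -> prescribe on the knowledge graph.
    return {"drivers": [("forecast_error", 1.8), ("policy", 0.5), ("supplier", 0.3)],
            "recommendation": "rebalance DC1 -> DC3"}

def last_mile(result: dict) -> str:
    """~10%, neural: explain the symbolic result in business language."""
    top = result["drivers"][0][0]
    return f"Top driver: {top}; recommended action: {result['recommendation']}."

brief = last_mile(middle_mile(first_mile("Why did Region West miss inventory?")))
print(brief)
```

The design point is that the middle stage is deterministic and auditable: given the same structured problem, it returns the same drivers and recommendation, which is what makes the final narrative trustworthy.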

How a typical 80-20 agentic architecture would operate on the inventory situation card example:

  • User asks: “Why did Region West end up with excess here and stockouts there?”
  • The pilot agent passes along the user query along with a few exports/dashboards to the LLM, and the LLM responds with a narrative on what happened, speculating on why, and attempts a vague remediation proposal.
  • It struggles to: (i) reconcile multiple plan versions, (ii) prove causality, (iii) respect decision rights, (iv) produce decision-ready actions with constraints.
  • Result: interesting explanation, low confidence, low actionability → the pilot stalls.

How our 10-80-10 neuro-symbolic agentic architecture would operate on the inventory situation card example:

  • First mile (≈10%, Neural LLM): interprets the executive question and constructs a structured problem: which SKUs, which nodes, which time window, which plan baseline, which KPIs?
  • Middle mile (≈80%, Symbolic): runs grounded reasoning across plans, constraints, decision context, and trade-offs to sequence: diagnose → project → prescribe in a highly structured and precise manner.
  • Last mile (≈10%, Neural LLM): returns an executive-ready brief: drivers ranked by impact + recommended actions + rationale in business language.

The hard work of the symbolic middle mile corresponds to the final four components of an AI agent (context construction, tools/actions, control/cognition loop, and safety/evaluation), while neural AI (first and last mile) corresponds to the first component (the LLM).

We want to emphasize that the Symbolic AI technology of the middle mile is an investment we have made over the last 15 years. However, its power was not always easily accessible to those without deep technical expertise. With the advent of LLMs, we’ve now been able to create this neuro-symbolic agentic AI capability by sandwiching the rigor and precision of Symbolic AI (middle mile) between the intuitive and flexible interfaces of Neural AI (first and last mile). This means front lines and executives can easily access our powerful Symbolic AI.

The Enterprise Knowledge Graph: how o9 grounds agents in your business reality

The Enterprise Knowledge Graph (EKG) is our symbolic AI middle mile; it is the backbone that makes agents enterprise-grade. It represents not only data, but how value flows, how plans are created, how decisions are made, how policies and rules are learned, and how computation is connected and coordinated.

o9’s EKG is a four-layer system:

  • 1) Value-Chain Graph (digital twin of value flow and plans): a single, consistent view of products, customers, supply network, and financial guardrails, plus time-phased plans and variances at the right granularity.
  • 2) Decision-Context Graph (enterprise memory of decisions and their context): a record of what was decided, when, by whom, why, under which assumptions, and what outcomes resulted. Enables accountability and learning.
  • 3) Learned-Rules Graph (institutionalized policies and learned rules): codifies what works under which conditions into policies, playbooks, and constraints that improve future decisions.
  • 4) Connected-Compute Graph (the computation engine): runs the right analytics, forecasting, optimization, and scenarios when triggered by events or questions; re-computes only what changed to ensure speed.

This four-layer architecture is what enables o9 to integrate horizontally (across functions) and vertically (across levels of granularity), and to connect planning to execution with governance gates. Let us now develop a better intuition for the 4 layers of the EKG by illustrating how each layer contributes something different in the context of the inventory situation card example:

  • 1) Value-Chain Graph (“What is true right now?”):  The consistent picture of product/location/time plans vs outcomes, plus the financial guardrails and constraints that matter.
  • 2) Decision-Context Graph (“What did we decide that led here?”): Which allocation/replenishment/promo decisions were taken, when, by whom, under what assumptions—and what outcomes followed.
  • 3) Learned-Rules Graph (“What did we learn so it doesn’t repeat?”): converts repeated variance patterns into policies and playbooks (e.g., when forecast volatility crosses X, adjust safety stock, reorder policies, or promo guardrails).
  • 4) Connected-Compute Graph (“How do we recompute fast, only where needed?”): When a variance is detected, it triggers the right analytics/optimization/scenarios and updates only impacted computations, keeping response time low. This maps directly to event-driven triggers on “plan vs actual” variances and the canonical reasoning pathways for diagnosis, projection, remediation, and learning.
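The Connected-Compute Graph's "recompute only what changed" behavior resembles incremental computation over a dependency graph. A minimal sketch, with an invented toy graph (node names are illustrative, not o9's computation model):

```python
# Toy dependency graph: each node lists the computations that consume it.
DEPS = {
    "dc1_inventory_actuals": ["variance_report"],
    "variance_report": ["root_cause", "scenario_runs"],
    "root_cause": ["exec_brief"],
    "scenario_runs": ["exec_brief"],
    "demand_forecast": ["scenario_runs"],
}

def impacted(changed: str) -> set:
    """All computations downstream of a changed input, and nothing else."""
    out, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for child in DEPS.get(node, []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

# A variance event on DC1 actuals re-triggers only the affected pipeline;
# computations that do not depend on DC1 actuals are left untouched.
print(sorted(impacted("dc1_inventory_actuals")))
```

This is the mechanism behind event-driven triggers on "plan vs actual" variances: the event marks one input dirty, and only its downstream analyses, scenarios, and briefs are recomputed, which keeps response time low.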

We have invested in the Enterprise Knowledge Graph for over 15 years, but its power was not always easily accessible beyond technical experts. With the advent of LLMs, we have been able to enrich the EKG with unstructured information such as emails, meeting notes, competitor intelligence, and frontline tribal knowledge.

LLMs interpret this information and attach it as governed evidence to enterprise objects such as products, customers, decisions, and policies, turning loose context into durable enterprise memory. At the same time, advances in Neural AI have strengthened the Connected-Compute Graph through code generation and software design, creating a complementary, symbiotic relationship in which Neural AI enhances the EKG while Symbolic AI provides the grounding, precision, and governance required for enterprise-grade decisions.

The 4Ws: reclaiming value leakage with diagnosis, remediation, and learning

The key capability in our neuro-symbolic agentic AI framework is the ability to answer four questions that every leader always has. We refer to these four questions as the 4Ws:

  • What happened, and why? (variance attribution and root cause)
  • What is the current state? (a trustworthy “nowcast” of operational and financial posture)
  • What is likely to happen next? (forward projection under alternative assumptions and constraints)
  • What should we do? (decision-ready options, with trade-offs and recommended actions)

The answers to these 4Ws across planning and execution applications are provided by our neuro-symbolic agents, yielding a consistent executive experience from diagnosis to action in a rapid, reliable, and increasingly autonomous manner. The diagnosis/projection/remediation of the inventory situation card we’ve been discussing is a canonical example of answering the 4Ws. The most crucial of these is the answer to the “What happened, and why?” question. We call this Post-Game Analysis (PGA), and we employ Causal AI Agents (agents that attribute outcomes to the decisions and drivers that caused them) to do PGA precisely and with high confidence.

PGA creates enormous value in two different ways:

  • Agility: it accelerates diagnosis and remediation. Teams can quickly isolate the drivers of a miss and respond while there is still time to change outcomes. For example, when the inventory situation card shows “excess in DC1 + stockouts in DC3”, PGA can pinpoint whether the miss came from demand shifts, supplier slippage, allocation policy, or an execution breakdown, and trigger the right fix (rebalance inventory, reroute in-transit supply, adjust replenishment, or pause/reshape a promotion) before the month is over. This means moving from a culture of “we missed” to a culture of “we know why” fast enough to recover value: reduce working capital trapped in excess inventory, prevent avoidable lost sales from stockouts, and cut expediting and firefighting.
  • Adaptivity: it institutionalizes learning. Over time, repeated PGA patterns reveal systemic biases and recurring failure modes, which can be codified into rules, guardrails, and playbooks—so the organization improves every cycle. Thus, repeated root-cause signatures become institutional memory: what used to require heroics becomes standard operating practice—so misses shrink, volatility becomes manageable, and performance compounds over time.
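At its core, PGA amounts to ranking candidate drivers by their attributed impact and surfacing what remains unexplained. A toy sketch follows; the driver names and contribution values are made up for illustration, not the output of a real attribution model:

```python
def post_game_analysis(variance_wos: float, attributions: dict) -> list:
    """Rank candidate drivers of an inventory variance by attributed impact.

    `attributions` maps drivers to their estimated contribution in weeks of
    supply; a residual line captures what the attribution did not explain.
    """
    explained = sum(attributions.values())
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    ranked.append(("unexplained_residual", round(variance_wos - explained, 2)))
    return ranked

# DC1's +2.6 weeks of excess, with hypothetical driver estimates
report = post_game_analysis(2.6, {
    "demand_shift": 1.5,
    "supplier_early_delivery": 0.6,
    "allocation_policy": 0.4,
})
for driver, impact in report:
    print(f"{driver}: {impact:+.2f} weeks")
```

Ranked-driver output like this is what makes a miss actionable: the top driver points to the fix to trigger now (agility), and recurring top drivers across cycles become the patterns that get codified into rules and playbooks (adaptivity).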

What this looks like in practice

Example 1: A supplier disruption meets a price and margin decision

A procurement team learns that a critical input cost is likely to rise by 5–15% in two months due to regional constraints. The commercial team asks: Should we pass costs through, redesign promotions, or accept margin pressure? Supply chain asks: What inventory posture do we take now to protect service? Finance asks: What is the working capital and EBITDA impact under different paths?

With o9’s 10–80–10 approach, agents translate the questions into structured analyses, run scenarios on the enterprise graph, and return decision-ready options with clear trade-offs.
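The scenario analysis in Example 1 can be caricatured as comparing margin under different pass-through fractions of the cost shock. All numbers, including the price elasticity, are hypothetical; a real run would draw these relationships and constraints from the enterprise knowledge graph:

```python
def scenario_margin(price: float, unit_cost: float, volume: float,
                    cost_increase: float, pass_through: float,
                    elasticity: float = -1.5) -> dict:
    """Gross margin under a cost shock when a fraction of it is passed through.

    `elasticity` is a hypothetical price elasticity of demand: a 1% price
    increase reduces volume by 1.5% under this toy assumption.
    """
    new_cost = unit_cost * (1 + cost_increase)
    new_price = price + pass_through * (new_cost - unit_cost)
    pct_price_change = (new_price - price) / price
    new_volume = volume * (1 + elasticity * pct_price_change)
    return {"price": round(new_price, 2),
            "volume": round(new_volume),
            "margin": round((new_price - new_cost) * new_volume)}

# Compare absorb, split, and full pass-through paths for a 10% cost shock
for pt in (0.0, 0.5, 1.0):
    result = scenario_margin(price=10.0, unit_cost=6.0, volume=100_000,
                             cost_increase=0.10, pass_through=pt)
    print(f"pass-through {pt:.0%}: {result}")
```

Even this toy model shows why the question is a cross-functional trade-off: the margin-maximizing path depends on elasticity, which in turn shifts the volumes that supply chain must position inventory for and the working capital that finance must fund.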

Example 2: Promotion under-performance and inventory risk

A promotion is live, but early signals show that lift is underperforming while discount depth is eroding margins. Inventory is building in certain regions, while other stores are at risk of stockouts.

o9’s PGA attributes variances to the specific plan versions and assumptions that were in force, and links outcomes back to decisions. Over time, the enterprise learns which promotion patterns fail under which conditions and codifies improved guardrails, reducing repeated margin dilution and write-offs. Similar Integrated Business Planning & Execution (IBPE) trade-offs arise when growth opportunities collide with capacity constraints and financial guardrails, requiring leaders to balance revenue, margin, service, and cash in a way the organization can actually execute.

What leaders can expect: outcomes, not AI theater

The goal is not to deploy a chatbot. The goal is to build an operating capability that reduces value leakage and increases decision velocity. In practice, leaders focus on outcomes such as:

  • Faster cycle times from signal to decision to execution, so the organization responds while it still matters.
  • Improved plan-to-outcome performance across service, revenue, margin, and inventory, through better scenario quality and tighter execution alignment.
  • A measurable learning loop: decisions and assumptions are tracked, outcomes are attributed, and policies improve over time.
  • Higher productivity and higher agency: more roles can access decision intelligence asynchronously, without waiting for specialized analysts.

A typical starting point is a focused pilot around one high-value decision loop (e.g., promotion planning and performance, constrained allocation, cost-to-serve and pricing, or inventory posture). If you have a recurring plan-to-outcome miss you want to eliminate this quarter, start there. Success is defined in business metrics, and the pilot is designed to scale across domains through the shared enterprise knowledge graph.

At Davos, leaders weren’t asking whether AI could do clever things; they were asking whether it could be trusted to deliver outcomes. The APEX model answers that question by embedding neuro-symbolic AI into proven operating practices: grounding decisions in the enterprise, closing learning loops, and turning VUCA from a liability into competitive advantage.


About the authors

Ashwin Rao

EVP of AI at o9 & Adjunct Professor at Stanford

Ashwin Rao is EVP-AI at o9 Solutions with the responsibility for o9's AI Strategy & Architecture as well as leading o9's R&D team. Ashwin is also an Adjunct Professor in Applied Mathematics at Stanford University, focusing his research and teaching in the field of Reinforcement Learning (RL), and has written a book on RL with applications in Finance, Supply-Chain and Dynamic Pricing. Previously, Ashwin was the Chief AI Officer at QXO, VP of AI at Target Corporation, Managing Director of Market Modeling at Morgan Stanley, and VP of Quant Trading Strategies at Goldman Sachs. Ashwin has a Ph.D. in Theoretical Computer Science from University of Southern California and a B.Tech in Computer Science from IIT-Bombay. Ashwin resides in Palo Alto, CA.
