The Neurosymbolic Imperative for Enterprise AI

Ashwin Rao
EVP of AI at o9 & Adjunct Professor at Stanford
8 min read
Register here to see my full keynote on this topic on June 4th in Amsterdam at aim10x Europe, o9’s Regional AI Summit for 400+ forward-thinking enterprise planning leaders and professionals.
Since the advent of Large Language Models, I have watched a wave of fascination sweep through the enterprise world.
Leaders across industries, from supply chain to finance to commercial planning, have been captivated by what LLMs can do. And rightly so. But after two decades of building AI systems, first on Wall Street, then in retail and distribution, and now in enterprise planning, I have come to believe that something critical is missing from the conversation.
The thesis I want to advance is straightforward but, I believe, consequential: Large Language Models alone cannot deliver reliable agentic AI for enterprise planning and execution. Only by deliberately fusing neural AI with symbolic AI, what the research community calls a neurosymbolic architecture, can we build agents that enterprises actually trust to run their operations.
This is not a theoretical position. It is a conclusion drawn from deploying these systems at scale across industries, and it carries implications for every technology leader deciding where to invest next.
The Seduction of the Demo
Let me start with what goes wrong. LLMs are extraordinary artifacts. Their ability to parse natural language, generate plausible responses, and handle ambiguity has rightly captivated the technology world. When you wire an LLM into an enterprise data source and ask it to summarise demand variances or suggest reorder points, the initial result can feel almost magical. Stakeholders lean forward. Budget conversations accelerate.
But then reality intervenes. The agent hallucinates a product code that does not exist. It recommends a replenishment quantity that violates a contractual minimum order. It confidently explains a root cause that, upon inspection, confuses correlation with causation because it has no structured understanding of the causal model underlying the supply chain. These are not edge cases. They are the predictable consequences of relying on a system whose intelligence is statistical pattern completion, powerful as that is, without grounding it in the deterministic logic and domain knowledge that enterprises run on.
I have watched too many promising AI initiatives stall at proof-of-concept precisely because of this dynamic. The LLM-heavy agent dazzles in a controlled setting and then fails to earn the trust of the planner, the procurement manager, or the CFO who must sign off on its recommendations. The gap between an attractive demo and an operational system is not a matter of fine-tuning. It is an architectural problem.
Two Kinds of Intelligence, One Architecture
The solution we have pursued at o9 is rooted in a simple observation: neural AI and symbolic AI possess complementary strengths, and the enterprise needs both.
Neural AI, the world of LLMs, deep learning, and reinforcement learning, excels at scalability, adaptability, and learnability. It can ingest unstructured data at massive scale: customer emails, weather feeds, social media signals, analyst reports. It can learn patterns that no human would codify into a rule. And it adapts, improving with each new data point in ways that rigid, hand-coded systems cannot.
Symbolic AI, the world of structured knowledge graphs, formal decision models, and constraint-based reasoning, excels at precision, explainability, and logical consistency. It knows that a warehouse has a finite capacity, that lead times follow contractual terms, that a forecast must reconcile across product hierarchies. It can trace every step in a chain of reasoning and present that trace to a human auditor. It does not hallucinate, because it operates on defined relationships rather than probabilistic generation.
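To make that contrast concrete, here is a minimal sketch of one such symbolic rule: the requirement that forecasts reconcile across a product hierarchy. The hierarchy, numbers, and function names are hypothetical illustrations, not o9's actual implementation; the point is that the check is deterministic and exact, with nothing probabilistic to hallucinate.

```python
# Illustrative sketch: the symbolic layer enforces rules that must hold
# exactly, e.g. that a parent forecast equals the sum of its children's
# forecasts. The hierarchy and numbers are hypothetical.

PRODUCT_HIERARCHY = {"beverages": ["coffee", "tea"]}  # parent -> children

def is_reconciled(forecast: dict[str, float]) -> bool:
    """A parent's forecast must equal the sum of its children's forecasts."""
    return all(
        abs(forecast[parent] - sum(forecast[c] for c in children)) < 1e-9
        for parent, children in PRODUCT_HIERARCHY.items()
    )

print(is_reconciled({"beverages": 100.0, "coffee": 60.0, "tea": 40.0}))  # True
print(is_reconciled({"beverages": 100.0, "coffee": 70.0, "tea": 40.0}))  # False
```

A statistical model can be encouraged toward consistency; a symbolic check like this either passes or fails, and can show an auditor exactly why.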
The neurosymbolic approach fuses these two paradigms into a single architecture. In practice, this means our agentic AI systems use LLMs for the tasks they do best (interpreting natural-language queries, synthesising unstructured signals, generating human-readable explanations) while relying on the Enterprise Knowledge Graph and structured decision models for the tasks that demand rigour: causal inference, constraint satisfaction, hierarchical reconciliation, and deterministic computation.
This is not simply a matter of placing guardrails around an LLM. It is a fundamentally different design philosophy. The symbolic layer is not a safety net; it is a co-equal partner in every reasoning step the agent performs. When a planner asks our system why actual shipments deviated from the plan last week, the LLM interprets the question and presents the answer in natural language, but the root-cause analysis itself is executed by traversing the causal structure encoded in the knowledge graph. The result is an explanation that is not only fluent but verifiably correct.
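The division of labour described above can be sketched in a few lines. The LLM calls are stubbed out, and the graph contents and function names are hypothetical rather than o9's actual API; what matters is the control flow: interpretation and presentation are neural, while the root-cause analysis itself is a deterministic walk over an explicit causal graph.

```python
# Hypothetical sketch of the neurosymbolic control flow: the LLM handles
# only the language tasks; the causal reasoning is a deterministic graph
# traversal. LLM calls are stubbed for illustration.

# Explicit causal structure: effect -> list of direct causes (hypothetical).
CAUSAL_GRAPH = {
    "shipment_shortfall": ["production_delay", "carrier_capacity"],
    "production_delay": ["raw_material_stockout"],
    "carrier_capacity": [],
    "raw_material_stockout": [],
}

def interpret_query(question: str) -> str:
    """Neural step (stubbed): map free text to a node in the graph."""
    return "shipment_shortfall"  # an LLM would extract this from the question

def trace_root_causes(effect: str) -> list[str]:
    """Symbolic step: deterministic depth-first walk over the causal graph."""
    causes = CAUSAL_GRAPH.get(effect, [])
    if not causes:
        return [effect]  # a leaf node is a root cause
    roots = []
    for cause in causes:
        roots.extend(trace_root_causes(cause))
    return roots

def explain(effect: str, roots: list[str]) -> str:
    """Neural step (stubbed): an LLM would render this fluently."""
    return f"{effect} traces back to: {', '.join(roots)}"

effect = interpret_query("Why did shipments deviate from plan last week?")
print(explain(effect, trace_root_causes(effect)))
```

Because the traversal is symbolic, the same question asked twice yields the same causal chain, and every link in that chain is inspectable.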
From Architecture to Impact
Architecture matters because it determines what is possible in practice. And in practice, we are seeing the neurosymbolic approach unlock impact that pure LLM-based agents cannot deliver.
Consider root-cause analysis of supply chain variances, one of the most time-consuming activities in any planning organisation. When demand does not match forecast, or inventory levels diverge from plan, planners historically spend hours, sometimes days, tracing the chain of contributing factors across systems and spreadsheets. With neurosymbolic agents that combine the LLM's natural-language fluency with the knowledge graph's causal structure, we have seen planners reduce their investigative time by as much as 80 percent. That is not an incremental improvement. It is a transformation in the planner's role, freeing them to focus on judgment and strategy rather than data archaeology.
Or consider touchless execution in inventory management and logistics. When an agent can autonomously generate and execute replenishment orders, adjust safety stocks, and coordinate logistics within the precise constraints of enterprise policy (because the symbolic layer guarantees compliance with those constraints) the labour savings are enormous. Even small enterprises are realising tens of millions of dollars in cost reduction. These are not speculative projections; they are measured outcomes from live deployments.
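The gating logic behind touchless execution can be sketched as follows. All names, thresholds, and the toy proposal heuristic are hypothetical, not a description of any production system; the pattern is simply that a learned proposal is executed autonomously only when a deterministic policy check passes, and escalated to a human otherwise.

```python
# Hypothetical sketch of touchless execution: an agent's proposal is
# executed autonomously only if the symbolic policy check passes;
# otherwise it is escalated to a planner. Rules are illustrative.

def propose_order(sku: str, demand_signal: float) -> int:
    """Neural step (stubbed): a learned model would propose a quantity."""
    return round(demand_signal * 1.2)  # simple illustrative heuristic

def policy_check(sku: str, qty: int) -> bool:
    """Symbolic step: deterministic enterprise policy (hypothetical rules)."""
    MIN_ORDER = {"SKU-42": 500}    # e.g. contractual minimum order quantity
    MAX_ORDER = {"SKU-42": 5_000}  # e.g. warehouse capacity ceiling
    return MIN_ORDER[sku] <= qty <= MAX_ORDER[sku]

def dispatch(sku: str, demand_signal: float) -> str:
    qty = propose_order(sku, demand_signal)
    if policy_check(sku, qty):
        return f"auto-executed: {qty} units of {sku}"
    return f"escalated to planner: {qty} units of {sku} violates policy"

print(dispatch("SKU-42", 1_000.0))  # within policy -> touchless
print(dispatch("SKU-42", 100.0))    # below minimum -> human review
```

The symbolic check is what makes "touchless" defensible: the agent cannot execute an order that violates policy, no matter what the neural layer proposes.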
Perhaps most significant is the impact on enterprise agility. The persistent challenge of cross-functional decision-making, what the industry calls Integrated Business Planning, is fundamentally a coordination problem. Demand, supply, finance, and commercial teams each operate with different models, different data, and different incentive structures. When a neurosymbolic agent can bridge these silos, translating between domains, reconciling plans across functions, and presenting a unified picture to leadership, the result is a dramatic compression of decision cycle times. We are seeing organisations complete cross-silo decision processes in a quarter of the time they previously required.
Why This Matters Now
The enterprise AI landscape is at an inflection point. The initial excitement over generative AI has given way to a more sober question: where can this technology actually deliver reliable, measurable value at scale? I believe the neurosymbolic approach provides the clearest answer to that question for the domain of planning and execution.
This is not because LLMs lack value; they are indispensable. It is because enterprises operate in a world of hard constraints, regulatory obligations, contractual commitments, and financial accountability. In that world, an AI agent that is fluent but unreliable is worse than no agent at all, because it erodes the trust that is the prerequisite for adoption. The symbolic layer, the structured data models, the knowledge graphs, the decision frameworks informed by deep domain expertise, is what converts an impressive language model into a trustworthy enterprise system.
My experience in both academia and industry reinforces this conviction. At Stanford, I research and teach reinforcement learning, which is fundamentally about sequential decision-making under uncertainty. The lesson of that field is clear: optimal decisions require both the ability to learn from data and the ability to reason about structure. Enterprises face exactly this challenge. Their planning environments are dynamic and uncertain, demanding the adaptability of neural methods. But their decisions must also respect structure (physical, financial, organisational) and that demands symbolic reasoning.
The Road Ahead
I want to be candid about what comes next. Neurosymbolic AI is not a silver bullet. Building the structured knowledge layer, the Enterprise Knowledge Graph, the causal models, the domain-specific decision frameworks, requires deep investment. It requires partnering closely with domain experts who understand the business at a granular level. It requires the patience to build a foundation that will compound in value as AI capabilities continue to advance.
But this investment is precisely what separates durable enterprise AI from the hype cycle. The organisations that invest now in their symbolic data infrastructure, not just their LLM capabilities, will be the ones that achieve autonomous, trustworthy planning and execution. The rest will continue to cycle through impressive demos that never quite make it to production.
Agentic AI for the enterprise has moved beyond experimentation. The question is no longer whether AI agents can help planners and decision-makers. The question is whether your architecture is designed to deliver agents that your organisation can actually trust. In my experience, the answer to that question starts with neurosymbolic AI.
Learn more at aim10x Summits 2026
If this topic resonates, I’d encourage you to come see my presentation at the upcoming aim10x Summits, where I’ll be exploring neurosymbolic AI in an enterprise context in more depth.
We’re bringing together 400+ leaders and practitioners across supply chain, commercial, procurement, and IT for a day focused on what’s actually working in practice. You can join us at aim10x Europe in Amsterdam on June 4th or aim10x Americas in Chicago on September 23rd.
Across the sessions, we’ll look at how teams are embedding AI into their operating models in ways that genuinely shape decisions, how organizations are connecting decisions more effectively across functions, and where companies are seeing real impact across service, cost, and working capital. We’ll also dive into how AI is being applied in day-to-day planning workflows, and what it takes to make faster, more confident decisions in real environments.
The day includes a mix of customer perspectives, live demos of agentic planning, and time set aside to connect with peers working through similar challenges. Attendance is free, but space is limited, so it’s worth registering early and bringing your team along. Hope to see you there.

aim10x Europe 2026:
o9’s Regional AI Summit
Explore how Europe’s most forward-thinking organizations are redesigning the operating model to enable agile, adaptive, and autonomous planning and execution.

aim10x Americas 2026:
o9’s Regional AI Summit
See how leading organizations across the Americas are transforming their operating models to turn VUCA into value with agile, adaptive, and autonomous planning and execution.
About the author

Ashwin Rao
EVP of AI at o9 & Adjunct Professor at Stanford
Ashwin Rao is EVP of AI at o9 Solutions, responsible for o9's AI Strategy & Architecture and for leading o9's R&D team. He is also an Adjunct Professor in Applied Mathematics at Stanford University, where his research and teaching focus on Reinforcement Learning (RL); he has written a book on RL with applications in finance, supply chain, and dynamic pricing. Previously, Ashwin was Chief AI Officer at QXO, VP of AI at Target Corporation, Managing Director of Market Modeling at Morgan Stanley, and VP of Quant Trading Strategies at Goldman Sachs. He holds a Ph.D. in Theoretical Computer Science from the University of Southern California and a B.Tech in Computer Science from IIT-Bombay. Ashwin resides in Palo Alto, CA.