
o9’s Approach to Responsible AI 

Ashwin Rao

EVP of AI at o9 & Adjunct Professor at Stanford

10 min read

Enterprise-ready neuro-symbolic agents: autonomous within boundaries, auditable by default, and accountable to human owners.

Enterprises tend to welcome automation when it makes work faster, cheaper, and more reliable. What slows adoption is uncertainty. Uncertainty about who owns an outcome, what an agent might do next, how sensitive data is handled, and whether decisions can be defended under scrutiny. In planning and execution, a bad decision does not stay contained in a chat window. It shows up as excess inventory, missed service levels, margin leakage, broken commitments, and a loss of trust that takes quarters to rebuild.

At o9, we define Responsible AI as enterprise readiness. It is the set of controls that makes AI safe to deploy in production, repeatable to operate at scale, and accountable to business owners, executives, security teams, and legal teams. The objective is straightforward: deliver real autonomy where it is safe, and explicit governance where it is required.

This article explains how Responsible AI is designed into o9’s ‘neuro-symbolic’ agentic capabilities embedded across Demand Planning, Supply Planning, Commercial Planning, and Integrated Business Planning (IBP). It also reflects the concerns enterprises raise most often, and how our architecture addresses them.

The Enterprise Fears We Design For

Most concerns cluster into a small set of patterns.

The first is loss of control. When an agent proposes a decision or takes an action, leaders want clarity on who owns that outcome and how quickly automation can be stopped or constrained when conditions change.

Close behind is data exposure. Security teams want to understand what data an agent can access, where it is processed, how it is stored, and what prevents leakage of sensitive information.

Then comes explainability, because unexplainable decisions create operational and governance friction. People need to know why a plan changed, what evidence was used, which rules were applied, and whether the reasoning can be reconstructed later.

A quieter risk is silent degradation. Systems can drift as data changes, as products and markets evolve, or as underlying models are updated. Enterprises want to detect these shifts early, before they become business incidents.

Finally, there is regulatory and contractual risk, which ties everything together. For many organizations, the practical question is whether they can prove policy compliance, access control, and proper handling of PII, PHI, and confidential IP.

These are the realities Responsible AI must address. They also explain why “smart” is not sufficient.

o9’s Agentic AI is distinctive because it is built on a decade-long symbolic foundation that captures how enterprises plan, decide, govern, and learn, then pairs that rigor with modern LLM capability to deliver neuro-symbolic agents that executives can trust in the real world.

Dr. Ashwin Rao

Executive Vice President, AI Strategy and R&D

Autonomy With Boundaries, Designed into the System

Many AI agent discussions focus on intelligence at the model level. Enterprises care more about intelligence at the system level. They need to know what the agent is allowed to do, how it is constrained, how it is monitored, and what happens when things go wrong.

o9’s approach treats Responsible AI as an architectural property. It comes from a deliberate separation of capabilities.

The neural layer supports language understanding, pattern recognition, summarization, and natural interaction. It helps an agent interpret intent, form hypotheses, and propose actions.

The symbolic layer governs what is permitted. It holds explicit domain knowledge, rules, constraints, approval requirements, and traceability. It determines whether a proposal can proceed, how it should be executed, and what explanation must accompany it.

This neuro-symbolic design matters because planning and execution require rigor. Probabilistic systems are valuable for discovery and prediction, but they require guardrails when decisions carry financial, operational, and contractual consequences. Symbolic governance provides deterministic constraints, explicit policies, and decision traces that can withstand review.

The Enterprise Knowledge Graph as the Control Plane

The symbolic layer is powered by the o9 Enterprise Knowledge Graph (EKG), built through more than 15 years of investment. The EKG captures how an enterprise is structured (products, locations, customers, suppliers), along with decision context (what matters, to whom, and when).

It also represents rules and policies such as constraints, approvals, and guardrails, plus connected compute like models, solvers, and simulations used to execute decisions.

This is decision-grade knowledge. It makes auditability practical because entities, policies, and compute are represented as structured objects with lineage and meaning. When an agent recommends or executes an action, the system can tie that decision back to the enterprise context that governed it.


Governance and Control that Matches Enterprise Operating Models

In production environments, autonomy has to come with accountability. Governance is designed around four questions: who owns outcomes, what authority has been delegated, how access is controlled and monitored, and how automation can be stopped or constrained.

Ownership starts with a clear operating model. Each production agent has a named business owner who defines objectives, acceptable risk, and escalation thresholds. Each agent also has an accountable technical owner who maintains configuration, policies, and monitoring, and who owns incident response.

Approvers are defined for action classes that require sign-off, such as plan publication or master data changes. This structure prevents automation without a signature and gives enterprises a dependable chain of responsibility.

Access control is enforced through role-based access control (RBAC) and least privilege. Agents execute under explicit identities, and permissions are scoped to what is required for the task. Recommendation, action, and approval can be separated as duties. Tool permissions are scoped and checked before execution, so an agent only reads what it needs and only acts where it has been explicitly authorized.
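As an illustration of this pattern, a pre-execution authorization check under least privilege might look like the sketch below. The names and permission strings are hypothetical, not o9's API; the point is that an agent carries an explicit identity and every tool call is checked against an allowlist, with deny as the default.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An agent executes under an explicit identity with scoped permissions."""
    name: str
    permissions: set = field(default_factory=set)  # e.g. {"forecast:read", "plan:recommend"}

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Least privilege: the action must be explicitly granted; default is deny."""
    return action in agent.permissions

# A demand-planning agent that may read forecasts and recommend, but not publish.
agent = AgentIdentity("demand_agent", {"forecast:read", "plan:recommend"})

assert authorize(agent, "forecast:read")     # explicitly granted
assert not authorize(agent, "plan:publish")  # not granted: blocked before execution
```

Separating "recommend" from "publish" permissions is also how recommendation, action, and approval can be split across duties, as described above.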

Auditability is built for after-the-fact scrutiny. Enterprises need to reconstruct who did what and why, even months later. That requires audit logs for reads, writes, tool calls, and approvals. It also requires decision traces that connect inputs to policies checked and compute invoked, and then to outputs produced. For critical actions and overrides, immutable event history supports non-repudiation, preserving the story as it happened.
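One common way to make an event history tamper-evident, supporting the non-repudiation described above, is hash chaining: each entry commits to the hash of its predecessor, so altering any past event breaks the chain. This is a minimal sketch of the general pattern, not o9's implementation.

```python
import hashlib
import json

class DecisionTrace:
    """Append-only event log; each entry carries the hash of its predecessor,
    so any later tampering is detectable by re-walking the chain."""

    def __init__(self):
        self.events = []

    def append(self, actor: str, action: str, detail: dict):
        prev = self.events[-1]["hash"] if self.events else "genesis"
        body = {"actor": actor, "action": action, "detail": detail, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.events.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; returns False if any event was altered."""
        prev = "genesis"
        for e in self.events:
            body = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In production this chain would live in governed storage with timestamps and identities attached; the sketch shows only the integrity mechanism.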

Enterprises also need stop mechanisms that match different blast radii. A global halt can stop all agent actions and revert workflows to human-only control. A scoped stop can disable a specific agent, domain, or action category, such as blocking plan publication.

Degrade modes can switch the system to recommend-only while keeping explanations and monitoring intact. Circuit breakers can halt automation automatically when policy violations, anomalous behavior, or suspicious access patterns are detected.
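The stop mechanisms above can be thought of as a small state machine over operating modes. The sketch below is an illustrative assumption of how a circuit breaker might trip an agent from full autonomy into recommend-only mode; the mode names and threshold are not o9 specifics.

```python
from enum import Enum

class Mode(Enum):
    ACT = "act"              # full autonomy within the operating envelope
    RECOMMEND = "recommend"  # degrade mode: propose only, no writes
    HALTED = "halted"        # global or scoped stop: human-only control

class CircuitBreaker:
    """Trips to a safer mode automatically after repeated policy violations."""

    def __init__(self, threshold: int = 3):
        self.mode = Mode.ACT
        self.violations = 0
        self.threshold = threshold

    def record_violation(self):
        self.violations += 1
        if self.violations >= self.threshold:
            self.mode = Mode.RECOMMEND  # degrade rather than fail open

    def halt(self):
        """Scoped or global stop invoked by an operator."""
        self.mode = Mode.HALTED

    def may_write(self) -> bool:
        return self.mode is Mode.ACT
```

The key design choice is that the breaker degrades toward less authority (recommend-only, then halted) rather than failing open, which matches the blast-radius thinking above.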

Data Responsibility with Clear Boundaries and Customer Control

Trust depends on clarity about data. Responsible AI must define what data an agent can access, what it cannot retain, where processing happens, and how customers remain in control.

Agents operate on a need-to-know basis. Access is scoped by customer configuration, RBAC, and decision context in the EKG. In planning environments, this typically includes forecasts, supply plans, inventory, service levels, scenarios, and KPIs within the tenant boundary. It also includes enterprise context such as hierarchies and approved reference data required to interpret decisions, plus operational signals like exceptions, alerts, and execution feedback that help close the loop.

Data retention is treated as a governance choice, not an accidental side effect. Agents do not require long-lived memory of sensitive raw data to be effective. When persistence is needed, the system stores structured artifacts in governed systems: tasks, decisions, approvals, and traces, each with access controls and retention policies.

Sensitive fields can be masked or redacted in logs when policy requires it. This approach keeps operational memory bounded and auditable.
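A minimal sketch of policy-driven masking before a record reaches the logs might look as follows. The field names and mask token are hypothetical; the essential property is that redaction happens at write time and leaves the source record untouched.

```python
# Fields tagged sensitive by classification policy (illustrative names).
SENSITIVE_FIELDS = {"customer_email", "national_id"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked,
    suitable for writing to logs or traces."""
    return {k: ("***REDACTED***" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

log_entry = redact({"sku": "A1", "qty": 100, "customer_email": "jane@example.com"})
# log_entry["customer_email"] is now masked; sku and qty pass through unchanged.
```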

Processing boundaries must also be explicit. o9 supports enterprise deployment patterns aligned with customer security requirements, with options depending on the deployment model and configuration. Controls typically include tenant-level segregation for data, indices, and operational metadata, along with encryption in transit and at rest. Data residency and regional processing options are supported where applicable within a customer’s deployment model.

Sensitive categories such as PII, PHI, and confidential IP are handled as first-class policy citizens. Classification and tagging allow policies to apply based on sensitivity. Access boundaries ensure only roles with existing permissions can expose sensitive fields in agent workflows. Redaction and masking can remove sensitive attributes from prompts, traces, and views when required. Export controls reduce the risk of data exfiltration to unapproved destinations or channels.

Behavioral Safety Through an Operating Envelope

An enterprise agent is safe when its behavior stays predictable under pressure: novel situations, incomplete data, conflicting objectives, and adversarial inputs. o9 enforces this through an operating envelope, which defines allowed actions, required approvals, and escalation triggers.

Hard constraints are rules enforced by the symbolic layer. When a hard constraint is hit, the correct behavior is to stop and escalate. These constraints can include prohibiting binding commitments on behalf of the enterprise without explicit approval, blocking master data changes unless workflows are configured with governance gates, preventing RBAC or export control bypass, and requiring every tool invocation to pass policy checks before execution.

Soft constraints cover ambiguous or high-stakes situations that still fall within the permitted action space. High-impact actions beyond configured thresholds, cross-functional tradeoffs such as service versus margin versus capacity, low confidence due to limited evidence, and requests that appear unusual or inconsistent with governance policies should trigger human review. In these cases, escalation is a feature of maturity because it routes uncertainty to a human owner with context.
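The interplay of hard and soft constraints can be summarized as a three-way verdict: block, escalate, or allow. The action names and thresholds in this sketch are illustrative assumptions, not o9 defaults; the structure shows hard constraints dominating, with soft constraints routing uncertainty to a human.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"  # soft constraint: route to a human owner with context
    BLOCK = "block"        # hard constraint: stop and escalate

def evaluate(action: str, impact_usd: float, confidence: float,
             approved: bool = False,
             hard_blocked: frozenset = frozenset({"binding_commitment", "master_data_change"}),
             impact_threshold: float = 100_000.0,
             min_confidence: float = 0.7) -> Verdict:
    """Illustrative operating-envelope check: hard constraints first, then soft."""
    if action in hard_blocked and not approved:
        return Verdict.BLOCK  # requires explicit sign-off before proceeding
    if impact_usd > impact_threshold or confidence < min_confidence:
        return Verdict.ESCALATE  # high-impact or low-confidence: human review
    return Verdict.ALLOW
```

Note that an approved hard-blocked action and a low-impact, high-confidence routine action both proceed; everything ambiguous lands with a human, which is the "escalation as maturity" behavior described above.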

Policy-based reasoning is integrated into the agent loop. Policy evaluation is explicit and logged, including which rules were checked and the outcomes. Policies encode customer-specific governance such as thresholds, approvals, exceptions, and blackout periods. They are versioned and change-controlled so behavior evolves deliberately.

Safe systems also fail gracefully. When required inputs are missing, an agent can ask clarifying questions. When action is permitted but uncertainty is significant, it can return a bounded recommendation that makes assumptions explicit. When policy forbids action, it can refuse and escalate with a structured summary for a human owner.


Explainability and Auditability for Executives, Operators, and Auditors

Explainability in enterprise planning is an operational requirement. Executives need a narrative tied to business outcomes. Operators need actionable levers. Auditors need a trace they can follow.

o9 explanations are grounded in decision context: objectives, constraints, evidence, and tradeoffs. The system separates executive explanation from audit trace. Executive explanation provides a clear narrative in business terms, linked to KPIs and tradeoffs. Audit trace provides the precise chain of inputs, rules, compute, and outputs, with timestamps and identities.

Decision traces become practical because the EKG represents enterprise entities, policies, and compute as structured objects. A trace can include data lineage for inputs, the exact policies evaluated, the models or solvers invoked and their parameters, and the recommendations or actions produced, along with confidence and uncertainty signals.

Uncertainty is expressed in operational terms: where outcomes are stable versus fragile, what assumptions were made, what would change the decision, and which escalation triggers require review before execution.

Continuous Monitoring and Incident Readiness as an Operating Discipline

Responsible AI is sustained through operations. Agents are monitored like critical services, across reliability, security, and behavior.

Drift detection focuses on inputs, decisions, and outcomes. Input drift can show up as distribution shifts, missingness, or unusual patterns. Decision drift can appear as shifts in recommendation mix, override rates, or policy violation attempts. Outcome drift can emerge when realized business outcomes diverge from expectations after actions are taken.
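Input drift of the kind described is often quantified with a standard metric such as the Population Stability Index (PSI), which compares a baseline distribution against a recent sample. This sketch assumes a numeric signal and equal-width bins; the thresholds cited are a common rule of thumb, not an o9 setting.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a recent sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = max(min(int((x - lo) / width), bins - 1), 0)  # clamp outliers
            counts[i] += 1
        # small epsilon avoids log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Decision drift (override rates, recommendation mix) can be monitored the same way by treating those rates as the tracked signal.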

Misuse detection addresses prompt injection, social engineering, and attempts to extract data. Controls include anomaly detection on access patterns, rate limits on sensitive operations, and policy enforcement that treats instructions as untrusted inputs unless authorized.

Improvement comes from structured feedback captured through accept and reject reasons, override rationales, and outcome attribution tied to downstream results such as service, inventory, and margin. Updates are governed, converting learnings into revised rules, thresholds, or models through change control.

Incident response playbooks assume incidents will occur and define roles and steps to handle them. Detection and containment can disable affected agents or action categories and preserve evidence.

Triage classifies severity across data risk, operational risk, and availability. Forensics reconstructs traces to identify gaps or adversarial inputs. Remediation updates policies, tools, and tests, and communicates actions taken.

Responsible AI as an Enabler of Scaled Autonomy

Responsible AI makes AI deployable at scale in environments where decisions have real consequences. When control, auditability, and accountability are designed into the system, AI becomes a credible lever for faster planning cycles, better scenario management, and more resilient execution.

o9’s goal is to deliver meaningful autonomy in enterprise planning and execution while preserving the governance structures enterprises rely on. The system is designed to remain safe under stress, with clear ownership, bounded authority, disciplined data handling, policy-driven behavior, explainability grounded in the Enterprise Knowledge Graph, and continuous monitoring backed by incident readiness.

That is what enterprise-ready neuro-symbolic agents are built to deliver: autonomy within boundaries, auditable by default, accountable to human owners.


About the author

Ashwin Rao

EVP of AI at o9 & Adjunct Professor at Stanford

Ashwin Rao is EVP of AI at o9 Solutions, responsible for o9's AI strategy and architecture as well as leading o9's R&D team. He is also an Adjunct Professor in Applied Mathematics at Stanford University, where his research and teaching focus on Reinforcement Learning (RL), and he has written a book on RL with applications in finance, supply chain, and dynamic pricing. Previously, Ashwin was Chief AI Officer at QXO, VP of AI at Target Corporation, Managing Director of Market Modeling at Morgan Stanley, and VP of Quant Trading Strategies at Goldman Sachs. He holds a Ph.D. in Theoretical Computer Science from the University of Southern California and a B.Tech in Computer Science from IIT-Bombay. Ashwin resides in Palo Alto, CA.
