
What If Every Decision Taught You Something? The Rise of AI-Augmented Post-Game Analysis

Brad Palm

Senior Vice President, EMEA

11 min read

Register here to learn more about this topic on June 4th in Amsterdam at aim10x Europe, o9’s Regional AI Summit for 400+ forward-thinking enterprise planning leaders and professionals.

There is a ritual that nearly every enterprise performs after a planning cycle ends, a quarter closes, or a product launch concludes. The leadership team assembles. Slides are prepared. Someone walks the room through what happened, what worked, what didn't, and what should be done differently next time. It goes by many names: post-mortem, after-action review, retrospective, lessons learned. And in most organizations, it is a process that is fundamentally broken.

I don't say this to be provocative for its own sake. I say it because the evidence is overwhelming. As Harvard Business Review has documented, our experience-based learning is filtered through cognitive distortions: business environments that celebrate outcomes over processes, advisory circles that censor uncomfortable truths, and our own confirmation biases that lead us to over-index on small, unrepresentative samples of what actually happened (Soyer & Hogarth, "Fooled by Experience," HBR, 2015).

Simply put, we remember what confirms our existing beliefs, we listen to the people who tell us what we want to hear, and we draw big conclusions from small amounts of evidence. The post-mortem, in its traditional form, is less a learning engine and more an exercise in narrative construction.

But what if the review didn't happen after the game was over? What if the system that planned the play also watched the execution, traced every outcome back to the decision that produced it, and fed that intelligence back into the next cycle automatically, continuously, and without political filtering?

That is the promise of AI-augmented post-game analysis. And it is no longer theoretical.

From Blame to Growth: The Cultural Shift That Changes Everything

Let me start with the finding that resonates deeply with the enterprise leaders I speak with across EMEA every day. The most corrosive dynamic in any planning organization is not inaccurate forecasts or misaligned supply. It is the fact that traditional reviews optimize for accountability rather than improvement. The meeting that is ostensibly about learning is, in practice, often about figuring out whose fault something was.

Google's landmark Project Aristotle study, which analyzed over 180 internal teams across two years, found that psychological safety (the shared belief that team members can take interpersonal risks without fear of negative consequences) was the single most important factor distinguishing high-performing teams from the rest. Teams that felt safe to speak up, admit mistakes, and challenge ideas without fear of retribution consistently outperformed their peers on every metric that mattered (Google re:Work; Edmondson, The Fearless Organization, 2018).

Now consider what AI-augmented post-game analysis does to this equation. When a system continuously and dispassionately connects decisions to outcomes, tracing forecast overrides, safety stock policies, procurement timing, and promotional commitments back to their downstream financial impact, the emotional charge around the review disappears. The question is no longer "who made the bad call?" It becomes "what does the data tell us about how to improve the next cycle?" Mistakes become data points, not liabilities. And that shift in framing is, according to the research, the single most powerful accelerator of organizational learning.

The Attribution Problem: Why We Reward Luck and Punish Sound Thinking

There is a well-documented cognitive error that pervades enterprise decision-making. Researchers call it outcome bias: the tendency to evaluate the quality of a decision based on the result it produced rather than the quality of the reasoning at the time it was made.

As Francesca Gino of Harvard Business School has shown, people consistently judge hiring decisions by whether the new employee performs well, and product decisions by whether the product succeeds in market, rather than assessing whether the process that led to those decisions was sound (Gino, "What We Miss When We Judge a Decision by the Outcome," HBR, 2016). In controlled experiments, researchers have demonstrated that subjects will rate the exact same decision as significantly worse when presented with a bad outcome, even when the outcome was determined entirely by chance (Baron & Hershey, 1988).

This has profound implications for enterprise planning. When a demand planner overrides a statistical forecast and the result happens to land, the organization celebrates the override as "expert judgment." When the same planner makes an equally reasonable override that misses due to unforeseeable demand volatility, the organization treats it as an error. Over time, this dynamic doesn't just demoralize individuals. It systematically distorts the organization's understanding of what good planning actually looks like.

AI-augmented post-game analysis solves this by separating decision quality from outcome variance. A system that records every forecast, every override, every parameter setting, and then traces those decisions through to their downstream consequences can distinguish between a sound decision that encountered bad variance and a flawed decision that got lucky. That distinction changes the entire culture around risk-taking and innovation within the planning organization.
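The core idea, judging an override policy across many cycles rather than by any single cycle's outcome, can be sketched in a few lines. Everything below (the field names, the sample history) is illustrative only, not o9's implementation:

```python
from statistics import mean

def override_value_added(records):
    """Forecast value added: does a planner's override policy beat the
    statistical baseline on average, across many cycles? Evaluating the
    policy over the full record, rather than any one outcome, is what
    separates decision quality from outcome variance."""
    sys_err = mean(abs(r["system"] - r["actual"]) for r in records)
    ovr_err = mean(abs(r["override"] - r["actual"]) for r in records)
    return sys_err - ovr_err  # positive => overrides add value on average

# Hypothetical history: one override misses badly, yet the policy as a
# whole still reduces error versus the statistical forecast.
history = [
    {"system": 100, "override": 110, "actual": 112},
    {"system": 100, "override": 108, "actual": 95},   # unlucky cycle
    {"system": 120, "override": 130, "actual": 133},
    {"system": 90,  "override": 95,  "actual": 97},
]
print(override_value_added(history))
```

Judged in isolation, the second cycle looks like a bad call; judged across the record, the override policy is demonstrably sound.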

The Speed Imperative: Why Delay Is Now a Competitive Risk

McKinsey's AI Transformation Manifesto, published in April 2026, makes a striking claim: speed is the defining organizational advantage of this era. Companies win not by having access to better technology (the tools are broadly available) but by operating models that redeploy resources rapidly, empower teams to act without excessive dependencies, and reduce what McKinsey calls the "latency" from insight to decision and decision to action (McKinsey, "The AI Transformation Manifesto," 2026).

Consider this in the context of traditional planning reviews. A monthly S&OP cycle that concludes with a retrospective meeting is, at best, a 30-day feedback loop. By the time the insight is surfaced, debated, and acted upon, the market has already moved. A quarterly business review is worse: a 90-day loop in a world where tariff policies shift overnight, consumer sentiment reverses in weeks, and supply disruptions propagate in hours.

AI-augmented analysis compresses this loop to near real-time. When plan-versus-execution variances are detected, attributed, and fed back into the planning logic continuously, the organization doesn't wait for the next review meeting to learn. It learns with every transaction, every shipment, every deviation. The compounding effect of that acceleration is not incremental. It is categorical. And the organizations that build this capability first don't just improve faster. They make the traditional review cadence obsolete.
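As a toy illustration of loop compression, the sketch below updates a running bias estimate the moment each plan-versus-actual variance arrives, and corrects the next plan with it, rather than waiting for a review meeting. The class name, the exponential-smoothing rule, and the figures are assumptions for illustration, not the platform's logic:

```python
class ContinuousLearner:
    """Minimal closed feedback loop: every execution signal immediately
    updates a smoothed estimate of plan-vs-actual bias, so the next plan
    is corrected continuously instead of at a monthly retrospective."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha  # how fast the loop absorbs new evidence
        self.bias = 0.0     # smoothed plan-vs-actual error

    def observe(self, planned, actual):
        # Learn from each variance as it arrives, transaction by transaction.
        self.bias = (1 - self.alpha) * self.bias + self.alpha * (actual - planned)

    def adjust(self, plan):
        # The next cycle starts from the learned correction, by design.
        return plan + self.bias

loop = ContinuousLearner()
for planned, actual in [(100, 112), (100, 109), (100, 111)]:
    loop.observe(planned, actual)
print(round(loop.adjust(100), 1))
```

A 30-day review loop would have seen this persistent under-forecast once; the continuous loop has already priced it into the next plan after three observations.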

From Episodic Reviews to Continuous Learning: What the Industry Is Telling Us

The analyst community is converging on this thesis. Gartner's 2025 supply chain technology trends identified Decision Intelligence (the combination of decision modelling, AI, and analytics to support and augment decision-making) as a top trend, noting that this technology allows supply chain leaders to understand how tools arrive at decisions and then improve them based on feedback (Gartner, "Top Supply Chain Technology Trends for 2025," March 2025). Separately, Gartner has predicted that by 2031, 60% of supply chain disruptions will be resolved without human intervention, driven by AI systems that can sense and act in real time (Gartner, March 2026).

McKinsey's 2025 State of AI survey found that 88% of organizations now use AI in at least one business function, yet only about a third have managed to scale it beyond experiments. The companies seeing the greatest impact, what McKinsey calls "AI high performers," treat AI not as an incremental efficiency tool but as a catalyst to transform their organizations, redesigning workflows and accelerating innovation (McKinsey, "The State of AI in 2025," November 2025).

What both of these research streams point toward is a common conclusion: the value of AI in enterprise planning is not in automating what humans already do. It is in creating a closed-loop system where every planning decision generates institutional learning, learning that compounds over time and creates competitive separation.

Post-Game Analysis in Practice: How the Loop Actually Closes

At o9 Solutions, we have been building toward this vision for years, and it has now become a central capability of our platform. Our APEX operating model (Agile, Adaptive, and Autonomous Planning and Execution) is built on the premise that plans, decisions, execution, and learning should not live in separate systems or separate meetings. They should form a single, continuous loop.

The "Adaptive" dimension of APEX is powered by what we call Post-Game Analytics (PGA). PGA analyses performance versus plan at a granular level, attributes gaps back to specific decisions and assumptions, and surfaces root causes across demand, supply, inventory, cost, and service. These insights are then used to refine policies, guardrails, and planning logic so that mistakes are not repeated and performance compounds over time. Crucially, PGA doesn't just surface the obvious last-mile failures. It traces outcomes back through entire decision chains, giving visibility to the upstream choices and supporting calls that created the conditions for success or failure but would never make it onto a slide in a traditional review.
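To make the attribution idea concrete, a minimal sketch (with hypothetical decision tags and impact figures, not the PGA implementation) might group plan-versus-execution variances by the decision that produced them and rank the root causes by magnitude:

```python
from collections import defaultdict

def attribute_gap(variances):
    """Roll individual plan-vs-execution variances up to the decisions
    that produced them, so a review starts from attributed root causes
    rather than one aggregate miss. Ranked by absolute impact."""
    by_decision = defaultdict(float)
    for v in variances:
        by_decision[v["decision"]] += v["impact"]
    return sorted(by_decision.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical excess-inventory gap traced through the decision chain:
variances = [
    {"decision": "forecast_override",   "impact": -40.0},
    {"decision": "bulk_purchase",       "impact": -25.0},
    {"decision": "safety_stock_policy", "impact": -15.0},
    {"decision": "forecast_override",   "impact": -10.0},
]
for decision, impact in attribute_gap(variances):
    print(decision, impact)
```

Even this toy version shows why the framing changes: the conversation starts from ranked, decision-level contributions rather than a single number someone has to explain.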

To give a concrete example: our CEO, Chakri Gottemukkala, recently demonstrated a live Decision Replay capability at our aim10x Europe event. In the scenario, a CFO observes a sudden spike in excess inventory alongside declining service levels. Rather than assembling a cross-functional war room, the system autonomously accessed the Enterprise Knowledge Graph, isolated high-risk SKUs using an 80/20 lens, and then used multi-cycle decision records and self-learning models to identify the contributing factors. These included forecast overrides that diverged from system recommendations, strategic bulk purchases made under outdated pricing assumptions, safety stock policies misaligned with current demand patterns, and plant-level overproduction driven by misaligned performance incentives (o9 Solutions, "o9 CEO Charts the Next Agentic AI Frontier," 2025).

As Chakri put it: "Once you shine a light on why things are going wrong, improvement becomes mandatory. That is the real secret to change management."

This is what separates AI-augmented post-game analysis from a better dashboard. The system doesn't just show you what happened. It tells you why. It connects the outcome to the decision. And it feeds that intelligence back into the planning logic so the next cycle is better by design, not by hope.

Institutional Memory That Actually Compounds

Every enterprise I work with tells me they value institutional knowledge. And every enterprise I work with loses a staggering amount of it every year. Senior planners retire. Key analysts change roles. Regional experts move to different markets. And the knowledge they accumulated (the patterns they recognised, the context they carried, the judgment calls they had learned to make) walks out the door with them.

What organisations call "experience" is, in practice, a lossy, politically filtered compression of what actually happened. It exists in people's heads, in tribal knowledge, in the way someone "just knows" that a particular customer's order pattern shifts in Q3.

AI-augmented post-game analysis captures every decision-outcome pair and makes it queryable, durable, and available to the entire organisation. New team members don't start from zero. They inherit the full weight of organisational learning from day one. And that learning isn't someone's recollection of what worked. It is a structured, evidence-based record of what actually drove results across hundreds of planning cycles.
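As a back-of-the-envelope illustration of what "queryable, durable" decision-outcome pairs mean in practice, here is a minimal sketch using an in-memory SQLite table. The schema, field names, and sample records are assumptions for illustration, not o9's data model:

```python
import sqlite3

# A decision-outcome log: every decision is stored with its rationale
# and its eventual result, so the record outlives any individual planner.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE decisions (
    cycle TEXT, planner TEXT, decision TEXT,
    rationale TEXT, planned REAL, actual REAL)""")

def record(cycle, planner, decision, rationale, planned, actual):
    db.execute("INSERT INTO decisions VALUES (?,?,?,?,?,?)",
               (cycle, planner, decision, rationale, planned, actual))

record("2025-Q3", "a.lee", "override", "customer order pattern shifts in Q3", 100, 118)
record("2025-Q4", "a.lee", "accept_system", "stable demand", 80, 82)
record("2026-Q3", "new.hire", "override", "inherited Q3 pattern", 110, 125)

# A new team member queries the institutional record directly instead of
# relying on a departed expert's recollection of what worked.
rows = db.execute(
    "SELECT cycle, rationale, actual - planned FROM decisions "
    "WHERE decision = 'override' AND cycle LIKE '%Q3'").fetchall()
for cycle, why, gap in rows:
    print(cycle, why, gap)
```

The point of the sketch is the query at the end: the "Q3 pattern" knowledge that once lived in one planner's head is now evidence anyone in the organisation can retrieve.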

As o9's Co-Founder Chakri Gottemukkala has described it, the Enterprise Knowledge Graph approach allows all of the critical knowledge and experience within a company's operations to be digitised and incorporated into the Digital Brain, creating, for the first time, institutional memory that genuinely compounds rather than degrades.

The Uncomfortable Truth: Why Some Organisations Will Resist

I want to be candid about something. Not every organisation will embrace this shift willingly. Continuous, AI-augmented feedback will reveal things that some stakeholders would rather leave unexamined. It will show that certain "strategic decisions" are actually habitual reflexes: the same response applied to structurally different situations. It will expose the gap between what an organisation claims its decision-making process looks like and what it actually looks like in practice.

The organisations that resist will rationalise their reluctance as "preserving the human element." But the human element is not eliminated by AI-augmented analysis. It is redirected from tedious forensic reconstruction toward higher-order creativity, adaptation, and judgment. McKinsey's research supports this directly: as AI agents take on coordination, execution, and routine decision-making, human roles shift up the value stack, with engineers and planners spending less time on repetitive tasks and more on designing architecture, setting objectives, and making strategic trade-offs (McKinsey, "The AI Transformation Manifesto," 2026).

The question for every enterprise leader is not whether to adopt this capability. It is whether you adopt it while it is still a differentiator, or after it has become table stakes and your competitors have years of compounding learning advantage.

The Future Is a Closed Loop

The trajectory is clear. Gartner predicts that by 2030, half of all cross-functional supply chain management solutions will use intelligent agents to autonomously execute decisions (Gartner, May 2025). The organisations that will thrive are not those with the best planners or the most sophisticated models. They are the ones that build the infrastructure to learn continuously from every decision they make.

This is not about replacing human judgment. It is about giving human judgment the environment it deserves. One where every decision teaches, every outcome informs, and every cycle builds on the last. Where psychological safety isn't a workshop exercise but a structural feature of how the system works. Where the brilliant planner with 20 years of experience isn't competing against the AI but is amplified by it, with their expertise captured, validated, and made available to the entire organisation.

The post-mortem is dead. Long live the learning loop.

See AI-Augmented Post-Game Analysis Live in Action

If this topic resonates, I'd encourage you to join us at the upcoming aim10x Summits to go deeper. APEX (Agile, Adaptive, and Autonomous Planning and Execution) is the operating model we've built to help enterprises close the gap between strategy and results in volatile environments, and Post-Game Analytics is one of its foundational pillars. At the event, you'll see why.

We'll be running live demos of PGA on the o9 Digital Brain, showing in real time how the system traces execution back to decisions, surfaces root causes, and feeds learning directly into the next planning cycle.

Beyond PGA, the day covers the full APEX framework, including how enterprises are building agility into their planning processes, where autonomy and AI agents are starting to take on well-governed decisions at scale, and what it takes to connect functions and horizons on a single platform. We're bringing together 400+ leaders and practitioners across supply chain, commercial, procurement, and IT for a day focused on what's actually working in practice.

You can join us at aim10x Europe in Amsterdam on June 4th or aim10x Americas in Chicago on September 23rd.

Attendance is free, but space is limited, so it's worth registering early and bringing your team along. Hope to see you there.

aim10x Europe 2026:
o9’s Regional AI Summit

Explore how Europe’s most forward-thinking organizations are redesigning the operating model to enable agile, adaptive, and autonomous planning and execution.

aim10x Americas 2026:
o9’s Regional AI Summit

See how leading organizations across the Americas are transforming their operating models to turn VUCA into value with agile, adaptive, and autonomous planning and execution.

About the author

Brad Palm

Senior Vice President, EMEA

Brad Palm is Senior Vice President, EMEA at o9 Solutions, where he leads all sales and go-to-market strategy across the region. Previously, as Country Manager and Representative Director in Japan, he oversaw all functions—including sales, business development, presales, and marketing—driving expansion in one of o9’s most strategic growth markets. With leadership experience across the U.S., Japan, and now EMEA, Brad is known for scaling high-performing teams and delivering sustained commercial growth in complex enterprise environments.
