Six months after deployment, an AI initiative stalls. The technology still works. But upstream, a data source it relied on was reformatted when another team upgraded their system. Downstream, the team consuming the AI's output had built new workflows around it, workflows that broke when the output varied outside expected bounds. A finance team adjusted their reconciliation process around the output format; when the model updated, the reconciliation stopped matching. An adjacent process that used to rely on human judgment at a handoff point now depends on an automated decision that behaves differently under edge conditions. None of these teams were in the room when the initiative was scoped.
In week five, I wrote about sensitivity to initial conditions: small differences in starting conditions producing wildly different outcomes over time. The scenario above is what that sensitivity looks like in practice when AI enters your value chain. I'm calling it the cascade effect. Introduce a change at one point in your operating model and the consequences propagate, sometimes in expected directions, often not. Last week I wrote about why value chain understanding and organisational context come before technology selection. Without seeing the chain, you can't anticipate where the cascade reaches.
Second and final post in the value chain engineering track. Last week covered why process understanding precedes technology selection. The decision making track (weeks five and six) established where AI decisions actually get made; this week looks at what happens when those decisions cascade through connected processes, and why the people working inside those processes determine whether it works.
What isolated analysis misses
Most AI initiatives get evaluated in isolation. Can the technology handle the task? Is the data good enough? Will the team adopt it? All necessary questions, but they stop at the initiative boundary. Automating invoice processing doesn't just affect accounts payable; it changes what suppliers provide, how exceptions get escalated, what financial reporting receives, and how audit trails are maintained. Each shift has its own downstream dependencies, owned by teams who weren't consulted when the initiative was scoped.
I wrote in week three about why pilots don't surface this kind of complexity; they're isolated by design. The cascade only becomes visible when the initiative meets the full operating model. By then, the budget is committed and the timelines are set.
Why cascades can't be blueprinted
The traditional response is to analyse harder upfront. Map every dependency, model every scenario, then execute. But value chain dependencies don't work that way. Stephen Wolfram coined the term computational irreducibility for systems where simple rules, given iteration, produce complexity you can't shortcut; you have to run through each step to see where it leads. Peter Robin Hiesinger found the same thing in biology: the genome doesn't contain a blueprint for the brain, it contains rules that build one step by step. Business value chains behave the same way. Each process change reshapes the context for the next decision, and no amount of upfront modelling predicts how five connected teams will adapt once the change reaches their work.
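Wolfram's standard example is Rule 110, an elementary cellular automaton cited in the sources below. The rule table fits in a single byte, yet the only way to know what the pattern looks like after n steps is to compute all n steps. A minimal Python sketch of the iteration:

```python
# Rule 110: each cell's next state depends only on the cell and its two
# neighbours. The whole rule is one byte (0b01101110 = 110), with one bit
# per 3-cell neighbourhood pattern, yet the long-run behaviour has no
# known closed-form shortcut: you have to run every step.

RULE = 110  # 8-bit lookup table, indexed by the neighbourhood pattern

def step(cells: list[int]) -> list[int]:
    """Apply Rule 110 once (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure accumulate.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join("█" if c else "·" for c in row))
    row = step(row)
```

The point of the analogy: the rule is trivial to state, but predicting step fifty from step zero requires simulating steps one through forty-nine. Value chain changes behave the same way, which is why upfront dependency mapping hits a ceiling.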
This connects to last week's argument about continuous discovery. If the knowledge you need is distributed across people and the cascade only becomes visible as you work through it, the understanding has to be built iteratively. Each phase reshapes what you look at next; a snapshot taken at the start won't hold.
The cascade reaches people
When AI changes how a workflow operates, it changes what the people conducting that work need to know, how they make decisions, and when they need to intervene. Two in three companies struggle to reimagine workflows and upskill their workforce for AI, mostly because deployments were scoped as technology projects, not as changes to how people work.
The cascade reaches the person doing the work, not just the process boundary. If that person can't recognise when the AI is wrong, or lacks the context to understand what changed upstream, the initiative fails regardless of how well the technology performs.
Training someone to use an AI tool is different from preparing them for how their work changes because an AI tool was deployed three teams away. The second is a change management problem most organisations don't recognise until it surfaces as resistance, workarounds, or quiet non-adoption. The person in finance whose reconciliation inputs suddenly look different needs to understand what changed, why, and what to do when the new format doesn't match, and that goes well beyond AI training.
Conventional training programmes don't cover this. When five connected processes are affected by an AI initiative, five sets of people need to understand what shifted and what their role looks like now. Most organisations budget for training the team directly touched by the deployment. They rarely budget for the teams downstream. And the people furthest from the initiative but still in the cascade, the ones who experience the change as an unexplained shift in their inputs, are usually where adoption breaks down.
Readiness belongs in process design, not at the end of an implementation plan as a training line item: who in the cascade needs to be ready, for what specific change, and with what level of autonomy to intervene when things go sideways?
What separates the organisations that succeed
65% of high-performing organisations (those generating measurable returns from AI) have defined explicit processes for when AI outputs need human review, validation, or intervention. Among everyone else, it's 23%.
I referenced this figure in the governance track in the context of algorithmic transparency. In this context it says something about process design: high performers govern the handoff points where AI outputs flow into human processes, identifying where in the cascade a human needs to step in and building that in before deployment.
That gap tells you what "AI readiness" actually means in practice: whether the people and processes have been designed for how the technology will interact with them.
It also tells you which initiatives are worth pursuing. The cascade works as a selection tool: when you can see how far a proposed change propagates (through how many teams, systems, and handoff points) you can make a more honest assessment of whether the value justifies the disruption. An initiative that looks compelling in isolation looks different when you trace its cascade through four downstream teams, two system integrations, and a regulatory handoff that nobody flagged during evaluation. Some of those cascades are manageable. Others suggest you're buying a $200,000 AI deployment and a $2 million change management programme alongside it.
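Tracing a cascade through a dependency map is, mechanically, a graph traversal. A minimal sketch of the idea, with an entirely hypothetical dependency map (the process names are illustrative, not from any real deployment):

```python
from collections import deque

# Hypothetical dependency map: each process lists the downstream
# processes that consume its output. In practice this map is itself
# built iteratively, through the kind of discovery described above.
DOWNSTREAM = {
    "invoice_ai": ["accounts_payable"],
    "accounts_payable": ["exception_handling", "financial_reporting"],
    "exception_handling": ["audit_trail"],
    "financial_reporting": ["reconciliation", "audit_trail"],
    "reconciliation": [],
    "audit_trail": [],
}

def trace_cascade(start: str) -> set[str]:
    """Breadth-first walk of everything downstream of a proposed change."""
    reached, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in DOWNSTREAM.get(node, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

affected = trace_cascade("invoice_ai")
print(f"{len(affected)} downstream processes affected: {sorted(affected)}")
```

The traversal is the easy part; the honest assessment comes from what each reached node costs in readiness work. But even this crude count makes the comparison between two candidate initiatives concrete.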
The organisations scaling AI aren't running the most experiments. They're picking initiatives where they can trace the cascade and have the people ready at each connection point.
Next week starts a new track: the AI strategy paradox, and why three well-chosen use cases consistently outperform ten scattered pilots.
Sources
- "2 in 3 companies struggle to reimagine workflows and upskill AI talent" — Boston Consulting Group, AI Radar 2025: Value-Strategy Gap (2025), survey of 1,803 C-suite executives
- "65% of high-performing organisations define human-in-the-loop validation vs 23% of others" — McKinsey Global Institute, "The State of AI" (2025), 1,993 participants across 105 nations
- Computational irreducibility and Rule 110 — Stephen Wolfram, A New Kind of Science, Wolfram Media (2002)
- Iterative unfolding in neural development — Peter Robin Hiesinger, The Self-Assembling Brain, Princeton University Press (2021)
- Sensitivity to initial conditions — Edward Lorenz, MIT (1961); referenced in detail in the decision making track (week five)
- Network effects and cascade dependencies in complex systems — Albert-László Barabási, Linked: How Everything Is Connected to Everything Else, Basic Books (2002)