Blog

Insights on AI adoption, decision intelligence, and digital transformation

The order you do things matters more than what you do
AI Strategy · April 21, 2026

Most organisations pick their AI initiatives, then run them. The sequencing follows executive priority or budget cycles, not dependencies. When dependent initiatives launch before their foundations exist, they stall or build workarounds that become technical debt.

Angel Horvat
Why 3-4 use cases beat 10 pilots (and how to choose them)
AI Strategy · April 14, 2026

Organisations focused on fewer AI initiatives see 2.1× better ROI. But the standard advice (focus on three to four use cases) applies the principle at the wrong altitude. The mechanism works at the domain level, where each function chooses initiatives based on friction only they can see.

Angel Horvat
The cascade effect: how one AI initiative impacts everything upstream and downstream
Value Chain Engineering · April 7, 2026

AI initiatives get scoped in isolation, but their consequences propagate through every connected process, team, and system. Two in three companies struggle to reimagine workflows for AI, because the cascade reaches people, not just processes.

Angel Horvat
Value chain engineering: why process matters more than technology
Value Chain Engineering · March 31, 2026

78% of organisations use AI in at least one function. Only 1% describe their implementations as mature. The gap is context: AI works at the edges because endpoints don't need organisational understanding. Moving closer to the core requires value chain engineering first.

Angel Horvat
Who owns what, when, and what triggers their involvement
AI Governance · March 25, 2026

One in four failed AI initiatives traces back to weak governance. In most cases, ownership was vague enough that everyone assumed someone else was handling it. Accountability means someone specific owns every handover point, with defined scope, clear triggers, and an escalation path.

Angel Horvat
Bottom-up discovery: where organisational knowledge actually lives
Decision Making · March 24, 2026

The knowledge that determines whether an AI initiative will work in production is distributed across the organisation. Most of it can't be extracted by asking.

Angel Horvat
When the business expects one thing and the technology delivers another
AI Governance · March 19, 2026

65% of high-performing organisations define when AI outputs need human validation or intervention, compared to 23% of everyone else. What separates them is alignment: business, operations, and technology sharing an understanding of what the system does, when it'll be wrong, and what happens in those situations.

Angel Horvat
The top-down trap: why executive AI strategies miss organisational reality
Decision Making · March 17, 2026

Sponsoring a transformation and setting its direction require different kinds of knowledge. When leadership imposes direction without discovery, even fully backed initiatives fail.

Angel Horvat
When data means different things to different teams, AI outputs can't be trusted
AI Governance · March 12, 2026

Only 12% of organisations report sufficient data quality for AI. But data quality is the surface issue. The harder problem is that the same data means different things depending on who's using it and why.

Angel Horvat
Why consultants, vendors, and system integrators can't solve the AI adoption problem
AI Adoption · March 10, 2026

42% of companies abandoned most AI initiatives in 2025, up from 17% in 2024. The helpers organisations bring in — consultants, vendors, integrators — all hit the same wall. The knowledge needed for good AI decisions is distributed across the workforce, and none of them can access it.

Angel Horvat
Why AI adoption stalls without organisational trust
AI Governance · March 6, 2026

Only 21% of organisations have governance mature enough for AI agents, while agentic AI usage surges from 23% to 74% within two years. The knowledge you need is usually there — it's just scattered across parts of the organisation that don't talk to each other.

Angel Horvat
Pilot purgatory: why 62% of organisations can't scale AI beyond experiments
AI Adoption · March 3, 2026

62% of organisations are stuck running AI pilots that never reach production. Only 7% have fully scaled. The gap between pilot and scale isn't technology; it's that each transition demands organisational elements that pilots never tested.

Angel Horvat
The information gap: the hidden root cause behind every AI failure
AI Adoption · February 27, 2026

53% of CEOs say their teams can't align on AI priorities. The root cause is structural: strategy, capability, and operationalisation live in different parts of the organisation, and in most companies they never meet.

Angel Horvat
Why 80% of enterprise AI projects fail
AI Adoption · February 20, 2026

Harvard Business Review reports that 80% of enterprise AI projects fail. Traditional IT projects already failed around 42% of the time; add AI and that rate roughly doubles. The real obstacles are organisational, not technical.

Angel Horvat