I first became interested in AI back in the early 2000s and even won a junior competition with a project on handwritten OCR (optical character recognition). Interestingly, even back in 2000 we already had almost all of the core algorithms that are still used today. What we didn’t yet have was enough compute…
Over time, my interests shifted toward performance measurement, and later toward the broader field of strategy execution. Today, when LLMs dominate the headlines, we rarely go back to the fundamentals.
What actually makes learning in neural networks possible?
A few simple mathematical principles sit at the core of modern AI. And I see clear parallels in how these same principles can and should be used by organizations when executing their strategies.

What Makes It Possible for AI to Learn
One of the mathematical foundations of neural networks is the chain rule of calculus applied to compositions of functions. Neural networks are layered systems built from many simple, differentiable operations. The chain rule allows gradients to be computed through the entire composition. That is what makes learning possible at scale.
- During a forward pass, a neural network produces an output that is initially wrong.
- Using labeled data and a loss function, we can measure how far the result deviates from the desired outcome.
- Because all internal operations are differentiable, we can compute local derivatives for each transformation.
- By applying the chain rule, these local derivatives are combined into gradients of the loss with respect to every parameter in the model.
The system doesn’t only detect that an error occurred. It determines how each individual parameter contributed to that error, and in which direction changing it would move the outcome.
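The steps above can be sketched in a few lines of Python. This is a purely illustrative toy model with two parameters (`w1`, `w2`), not a real network: the point is only to show how local derivatives are combined by the chain rule into a gradient for every parameter.

```python
# Toy model: y_hat = w2 * tanh(w1 * x), loss = (y_hat - y)^2.
# Every operation is differentiable, so the chain rule tells each
# parameter how it contributed to the error and which way to move.
import math

def forward(x, w1, w2):
    h = math.tanh(w1 * x)   # hidden activation (differentiable)
    y_hat = w2 * h          # output of the composition
    return h, y_hat

def backward(x, y, w1, w2):
    h, y_hat = forward(x, w1, w2)
    # Local derivatives, combined via the chain rule:
    dL_dyhat = 2 * (y_hat - y)       # dL/dy_hat from the loss
    dL_dw2 = dL_dyhat * h            # dL/dw2 = dL/dy_hat * dy_hat/dw2
    dL_dh = dL_dyhat * w2            # propagate back through the output
    dh_dw1 = (1 - h**2) * x          # tanh'(u) = 1 - tanh(u)^2
    dL_dw1 = dL_dh * dh_dw1          # dL/dw1 through the full chain
    return dL_dw1, dL_dw2

# Gradient descent: each parameter learns not just THAT the output
# was wrong, but in which direction to change.
w1, w2, lr = 0.5, -0.3, 0.1
x, y = 1.0, 1.0
for _ in range(200):
    g1, g2 = backward(x, y, w1, w2)
    w1 -= lr * g1
    w2 -= lr * g2
```

After a few hundred updates, the output converges toward the target: the error signal, propagated backward through the composition, has reshaped both parameters.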
Strategy Execution as an Organizational Learning System
The same problem exists in organizations…
A good strategy implementation is a structured system of objectives, sub-objectives, initiatives, and performance indicators.
During execution, organizations need to detect deviations from their intended direction as early as possible. That is why performance measures exist: they provide the first signal that reality is drifting away from assumptions.
But:
Knowing that “something went wrong” is almost useless on its own…
Execution only improves when the organization can see which elements of the system require adjustment, and how those adjustments are likely to influence results.
In neural networks, this is enabled mathematically by the chain rule. In strategy implementation, it becomes possible only when the strategy is properly decomposed, aligned with stakeholder expectations, and translated from vague aspirations into multiple levels of concrete, causally connected objectives and indicators.
Executing a Properly Implemented Strategy
In this sense, effective strategy execution starts to look like a well-designed neural network.
When deviations occur, the organization can learn quickly – not just that performance is below target, but which initiatives, processes, capabilities, or assumptions should be adjusted to move closer to stakeholder expectations.
When strategy is poorly articulated (abstract goals, no cause-and-effect, no meaningful indicators) the organization ends up in the same position as a model without usable gradients. It can see that results are bad, but it has no reliable way to decide what to change.
In both neural networks and organizations, learning becomes possible only when a system is built from interconnected components through which feedback can propagate.
When there are:
- Structure,
- Local accountability, and
- Measurable drivers
… continuous improvement becomes possible. Without them, organizations are left with little more than success/failure signals, and no mechanism for understanding how to improve.
So We Finally Know Where the Marketing Spending Goes?
The short answer is: “no” (and neither do neural networks).
In AI, we cannot point to a single neuron and say, “this caused the result.” Learning is still possible because the system is built so that feedback flows through many connected parts and gradually reshapes them.
Saying “marketing is working” is similar to saying “the model improved.” It shows direction, but it does not tell you what to change next.
What becomes useful is seeing patterns inside the system.
For example: revenue may stay flat while traffic increases, content engagement improves, and more leads are created, but win rates for deals fall and deals take longer to close.
Analytically, it is no longer a simple success/failure. It is a pattern of signals showing that some parts of the system are improving while others are misaligned.
The system doesn’t identify a single guilty campaign. It indicates where adjustment is needed.
This is what learning looks like when it works.
It may not tell you exactly where every dollar went, but it tells you something far more valuable: where the organization should move next.
Known Flaws: Finding a Local Minimum
Just as neural networks can settle into a local minimum instead of the global one, organizations should not place complete trust in their measurement frameworks.
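The local-minimum limitation is easy to see on a one-dimensional example. The function below (chosen for illustration, not taken from any real model) has two valleys; plain gradient descent settles into whichever one the starting point leads to, even when a deeper valley exists elsewhere.

```python
# f(w) = (w^2 - 1)^2 + 0.3*w has two minima: a shallow local one
# near w = +0.96 and a deeper global one near w = -1.04.
# Gradient descent follows the slope, so the starting point alone
# decides which minimum it finds.

def grad(w):
    return 4 * w * (w**2 - 1) + 0.3   # derivative f'(w)

def descend(w, lr=0.05, steps=200):
    for _ in range(steps):
        w -= lr * grad(w)             # follow the local slope
    return w

w_right = descend(2.0)    # gets stuck in the shallow local minimum
w_left = descend(-2.0)    # happens to reach the deeper global minimum
```

Both runs end with a zero gradient, i.e. the system "believes" it has finished learning – which is exactly why measurement frameworks, like trained models, deserve periodic questioning rather than blind trust.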
Strategy execution is an ongoing process of validating hypotheses in practice. Sometimes we validate low-level hypotheses and everything makes sense. Sometimes, we move up along the decomposition tree and eventually question our understanding of stakeholders and their needs.
Alexis Savkin is a Senior Strategy Consultant and the CEO of BSC Designer, a Balanced Scorecard platform. He has more than 20 years of experience in the field, with a background in applied mathematics and information technology. Alexis is the author of the “Strategy Implementation System”. He has published over 100 articles on strategy and performance measurement, regularly speaks at industry events, and his work is frequently cited in academic research.