In our previous article, we explored how reconciliation often dominates finance processes when architecture is fragmented and validation is not embedded. That same structural issue now appears in a new context: artificial intelligence.
AI is rapidly becoming part of the finance conversation. Forecasting, anomaly detection, scenario simulation and automated commentary are no longer theoretical use cases.
But a more fundamental question often goes unaddressed.
What happens when AI is applied to fragmented data, inconsistent KPI definitions and disconnected systems?
If the foundation is unstable, intelligence does not solve the problem. It scales it.
Reconciliation is a symptom, not the root cause
In many organizations, heavy reconciliation effort is treated as an operational challenge. More checks. More controls. More manual validation.
But reconciliation is rarely the root problem. It is a symptom of architectural fragmentation.
When data flows between multiple systems without shared logic, when adjustments are made locally and when integrations are partial rather than unified, inconsistency becomes inevitable. Manual validation then compensates for structural gaps.
The result is predictable. The more fragmented the architecture, the more reconciliation is required to restore alignment.
Adding AI on top of this structure does not remove the fragmentation. It accelerates whatever logic already exists, including inconsistencies.
The solution is not additional checking. It is structural coherence by design.
When KPI definitions drift
One of the most underestimated governance challenges is definition alignment.
Within a shared decision-making context, a KPI should have one agreed-upon definition. Yet the same underlying concept can be measured in multiple valid ways to answer different questions, so stakeholders in different contexts may calculate the same KPI using different logic. None of the definitions is necessarily wrong. This creates the classic “apples and oranges” problem. Comparisons become unreliable. Discussions become defensive. Reporting loses clarity.
During our discussion, Annika Habermann summarized the issue clearly:
“You can have different definitions for the same KPI, and they can all be correct. The problem is not the difference. The problem is that it is not transparent. If you know you are comparing apples and oranges, you can handle it. But if you do not know, then you have two numbers and assume one must be wrong.”
The solution is not to enforce uniformity at any cost. It is to document definitions properly.
Definitions must be written down.
They must be transparent.
They must be easy to access.
And there must be a clear way to connect them when comparisons are made.
Without documentation and shared ownership, numbers cannot be interpreted consistently.
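To make this concrete, here is a minimal sketch in Python of what documented, owned KPI definitions could look like. The KPI names, contexts and field layout are hypothetical illustrations, not a prescribed implementation; the point is that each definition carries its logic, its context and its owner, so two valid definitions can coexist transparently.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KpiDefinition:
    """A documented KPI definition: the logic, its context, and its owner."""
    name: str                        # KPI label as it appears in reports
    context: str                     # decision-making context where this applies
    formula: str                     # human-readable calculation logic
    owner: str                       # person or team accountable for the definition
    source_systems: tuple            # where the inputs come from


# Two valid definitions of "revenue growth" can coexist, as long as each
# is explicit about its context and ownership (all values hypothetical).
registry = {
    ("revenue_growth", "group_reporting"): KpiDefinition(
        name="revenue_growth",
        context="group_reporting",
        formula="(revenue_t - revenue_t-1) / revenue_t-1, constant currency",
        owner="Group Controlling",
        source_systems=("consolidation",),
    ),
    ("revenue_growth", "sales_steering"): KpiDefinition(
        name="revenue_growth",
        context="sales_steering",
        formula="(bookings_t - bookings_t-1) / bookings_t-1, local currency",
        owner="Sales Operations",
        source_systems=("crm",),
    ),
}


def same_logic(name, ctx_a, ctx_b):
    """True only when the two contexts share the same calculation logic."""
    a, b = registry[(name, ctx_a)], registry[(name, ctx_b)]
    return a.formula == b.formula
```

With such a registry, a comparison across contexts can be flagged automatically: if `same_logic` returns false, readers know they are looking at apples and oranges before any debate starts.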
Governance requires ownership
Documentation alone is not enough. Definitions require ownership.
Someone must be accountable for why a KPI is defined the way it is, how it is calculated and when it should be revised.
As Annika Habermann noted:
“A KPI definition should not just exist in a document. It needs a responsible owner who understands the logic behind it and can explain it.”
AI does not solve this. Applied to inconsistent logic, it simply scales inconsistency.
Governance must precede intelligence.
Standardization enables flexibility
Standardization is often perceived as restrictive. In reality, it enables flexibility.
When definitions are clear and systems share common structures, change becomes easier. Adjustments can be implemented with confidence because their impact is predictable.
As Annika Habermann noted during our discussion:
“When you have clear standards and shared definitions, you actually gain flexibility. You can change things faster because you know how everything connects.”
Structure does not reduce agility. It makes agility possible.
Data lineage and auditability
Modern finance functions operate in increasingly regulated environments. Stakeholders expect transparency. Regulators expect traceability.
It must always be possible to answer a simple question:
Where does this number come from?
If figures cannot be traced back to their origin system and transformation logic, confidence deteriorates and risk increases. Introducing AI into such an environment adds another layer of opacity.
Data lineage is not a technical luxury. It is a control mechanism.
A unified architecture makes traceability systematic rather than manual. Instead of reconstructing the journey of a number after the fact, the process itself documents it. Architecture then becomes more than an IT concern. It becomes a governance framework.
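The idea that the process itself documents a number's journey can be sketched in a few lines. This is an illustrative toy, assuming hypothetical system names and adjustment steps: each derived figure keeps a reference to its inputs and the logic applied, so "where does this number come from?" is answered by the data structure rather than by after-the-fact reconstruction.

```python
from dataclasses import dataclass, field


@dataclass
class TracedValue:
    """A figure that carries its own lineage: origin, logic, and inputs."""
    value: float
    source: str                      # origin system or transformation label
    inputs: list = field(default_factory=list)

    def derive(self, value, logic):
        """Produce a new figure whose lineage points back to this one."""
        return TracedValue(value, logic, [self])

    def lineage(self, indent=0):
        """Render the full journey of the number, most recent step first."""
        lines = [" " * indent + f"{self.value} <- {self.source}"]
        for src in self.inputs:
            lines.append(src.lineage(indent + 2))
        return "\n".join(lines)


# Hypothetical flow: an ERP figure is adjusted, then converted to group currency.
raw = TracedValue(1000.0, "erp:gl_account_4000")
adjusted = raw.derive(980.0, "manual_adjustment: -20 accrual")
reported = adjusted.derive(1078.0, "fx_conversion: rate=1.10")
print(reported.lineage())
```

In a real landscape this role is played by the platform's metadata layer, not application code; the sketch only shows why lineage recorded at transformation time makes traceability systematic.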
Foundation first, intelligence second
There is significant potential in automation, machine learning and generative AI within finance. But these capabilities depend on structured, consistent and reliable data flows.
Before advanced intelligence can create value, basic consistency must exist.
Systems must be integrated.
Definitions must be aligned.
Validations must be embedded.
Data lineage must be transparent.
As Annika Habermann emphasized during our panel discussion:
“We have to be very clear about what an LLM can do and what it cannot do. Our KPI calculations must remain deterministic. If you put the same numbers in, you must always get the same result.”
Large language models are probabilistic by design. They predict likely outcomes. Finance calculations cannot rely on probability. They require consistency and repeatability.
This distinction matters.
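The determinism requirement above can be expressed as a pure function: no randomness, no hidden state, identical inputs always yielding the identical result. The KPI chosen here, gross margin, is an illustrative example, not a formula from the discussion.

```python
from decimal import Decimal


def gross_margin(revenue: Decimal, cogs: Decimal) -> Decimal:
    """Deterministic KPI logic: same inputs must always yield the same result."""
    if revenue == 0:
        raise ValueError("revenue must be non-zero")
    return (revenue - cogs) / revenue


# Run after run, the same numbers in produce the same number out.
first = gross_margin(Decimal("1000"), Decimal("600"))
second = gross_margin(Decimal("1000"), Decimal("600"))
assert first == second == Decimal("0.4")
```

An LLM can draft the commentary around such a figure, but the figure itself must come from logic like this, where repeatability is guaranteed by construction.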
AI can support users. It can summarize structured data, generate draft commentary and help navigate complex information. It can suggest mappings between systems that are later reviewed by experts.
But it cannot replace deterministic financial logic. And it cannot compensate for weak governance.
If underlying definitions are inconsistent, AI will amplify inconsistency.
If data lineage is unclear, AI will accelerate uncertainty.
If architecture is fragmented, intelligence will not restore coherence.
There is also a cultural dimension to AI adoption.
Automation can create anxiety if employees fear that efficiency improvements will make their roles redundant. Without psychological safety, innovation slows.
Catarina Asplund raised this concern during our panel discussion:
“If people believe that automation will cost them their jobs, they will hesitate to improve processes. Leadership must create an environment where efficiency is encouraged, not feared.”
Technology readiness is not enough. Organizational readiness matters just as much.
From fragmentation to structural confidence
The real transformation in finance is not about adding more tools. It is about designing architecture that reduces friction and embeds control.
When systems speak the same language, when definitions are documented and shared, and when validation is built into processes, reconciliation effort decreases naturally. Trust increases. Analytical capacity expands. Advanced technologies can then be introduced without destabilizing the core.
AI can accelerate insight.
But only architecture can guarantee its reliability.
The question is therefore not whether finance should use AI.
It is whether the underlying structure is ready for it.