
AI can only be trusted when it rests on sound data foundations. But unlike previous technologies, AI is actively reshaping what those foundations are and how they must be governed. Leaders who treat data foundations as a static prerequisite risk building an operating model for the past.
What Remains True
AI does not eliminate the need for clarity, ownership, and accountability.
In product-centric and regulated environments, business confidence still rests on knowing which data is authoritative, how change is controlled, and who owns the decision. Without this, AI only accelerates uncertainty. Faster answers with unclear provenance do not create value; they increase risk.
This is why PLM discipline, governance, and lifecycle thinking remain essential. Context and relationships still matter more than raw data volume.
What Has Changed
What has changed is how foundations are constructed.
Traditional data strategies assume structure must be fully engineered before intelligence can be applied. That assumption reflected the limits of earlier analytics and automation. Modern AI does not operate that way.
Today’s models can infer structure, meaning, and relationships across fragmented information. They assemble context dynamically, often at the point of decision, rather than relying solely on predefined schemas. As a result, some foundational work is shifting from upfront design to continuous interpretation.
This is not incremental. It is a structural change in how enterprises create usable context.
Why This Matters to Business Leaders
AI is no longer just consuming enterprise data or cross-platform analytics; it is becoming part of the foundation itself.
Foundations are becoming more semantic, more adaptive, and less rigid. That enables speed and earlier value creation, but it also introduces ambiguity. When structure is inferred rather than enforced, confidence must be actively managed.
Governance therefore becomes more important, not less. The executive question shifts from “Is the data modeled correctly?” to “When should we trust this insight, and who owns the decision?”
That is an operating-model issue, not a technical one.
The Implication for PLM and Digital Transformation
Many organizations fail by choosing extremes: either waiting for perfect data before applying AI, or deploying AI broadly and hoping it compensates for weak foundations.
The effective path sits between those positions. Authoritative data remains non-negotiable where risk, compliance, and safety apply. AI is used to accelerate interpretation, impact analysis, and cross-domain reasoning. AI-generated insight augments decisions; it does not replace accountability.
In this model, PLM technology must evolve into a decision enablement layer. AI is not removing the need for data foundations. It is exposing which parts must remain firm—and which must become adaptive. The leadership challenge is not how fast AI is deployed, but whether the organization stays in control as AI reshapes the ground it stands on.
The Missing Layer: Decision Enablement
The decision enablement layer sits between data availability and execution, translating connected data into decision-ready context. It answers not “What data do we have?” but “What decision can we take now, with what confidence, and with what consequences?”
In practice, this means assembling the minimum sufficient lifecycle context for a decision, exposing trade-offs and downstream impacts, and making decision rights and accountability explicit. It is also where explainability becomes non-negotiable: recommendations—whether human or AI-assisted—must be traceable to assumptions, data sources, and governance rules.
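To make the traceability requirement concrete, here is a minimal sketch of what a decision record could look like in code. All names and fields are hypothetical illustrations, not the API of any specific PLM product: the point is simply that every recommendation carries its assumptions, data sources, governance rules, and an accountable owner with it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """A recommendation made traceable: each field answers part of
    'why should we trust this, and who is accountable?'"""
    decision_id: str
    recommendation: str                 # what is being proposed
    confidence: float                   # model or analyst confidence, 0.0-1.0
    assumptions: tuple[str, ...]        # explicit assumptions behind the recommendation
    data_sources: tuple[str, ...]       # authoritative systems the context was drawn from
    governance_rules: tuple[str, ...]   # policies that constrain or approve the decision
    decision_owner: str                 # the accountable human, always named

# Illustrative example: an engineering change recommendation with its provenance attached.
record = DecisionRecord(
    decision_id="ECO-0117",
    recommendation="Approve design modification with revised supplier",
    confidence=0.82,
    assumptions=("Lead times hold through Q3", "No pending regulatory change"),
    data_sources=("PLM item master", "Supplier quality system"),
    governance_rules=("Change control policy", "Export compliance check"),
    decision_owner="Chief Engineer, Platform A",
)
```

Because the record is immutable (`frozen=True`) and every field is explicit, an AI-assisted recommendation can be audited after the fact rather than trusted on faith.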
This becomes tangible when decisions are framed explicitly, for example:
- Engineering change approval: “Given this design modification, what is the quantified impact on cost, regulatory compliance, supplier risk, and manufacturing readiness if we approve it this week rather than next quarter?”
- Supplier substitution under disruption: “If we switch to this alternate supplier now, what is the exposure across quality, lead time, certification status, and sustainability commitments?”
- Decision escalation: “Based on confidence thresholds and risk classification, can this decision be executed autonomously, or does it require human review and sign-off?”
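The escalation question in the last example can be expressed as an explicit routing rule. The following sketch is a hypothetical policy with invented thresholds and risk classes, assuming a convention where high-risk decisions always require human sign-off regardless of model confidence:

```python
def route_decision(confidence: float, risk_class: str,
                   autonomy_threshold: float = 0.9) -> str:
    """Route a decision based on confidence and risk classification.

    Hypothetical policy: high-risk decisions always require human sign-off;
    lower-risk decisions may execute autonomously above the confidence threshold,
    and otherwise go to human review.
    """
    if risk_class == "high":
        return "human_signoff"      # accountability cannot be automated away
    if confidence >= autonomy_threshold:
        return "autonomous"         # routine, well-understood, low exposure
    return "human_review"           # ambiguous: a person decides, with AI context

# Illustrative routing outcomes under this policy:
print(route_decision(0.95, "low"))     # autonomous
print(route_decision(0.95, "high"))    # human_signoff
print(route_decision(0.60, "medium"))  # human_review
```

The value is not in the three-line rule itself but in making it explicit: once the policy is code (or configuration), it can be governed, versioned, and audited like any other control.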
Without this layer, AI either remains analytical theater or is pushed into premature automation. With it, organizations can increase decision velocity without surrendering control. This is where AI stops displacing foundations and starts reshaping how enterprises actually decide.
What are your thoughts?
Disclaimer: articles and thoughts published on v+d do not necessarily represent the views of the company, but solely the views or interpretations of the author(s); reviews, insights, and mentions of publications, products, or services constitute neither endorsement nor recommendation for purchase or adoption.

