As discussed in a previous post, PLM success can be defined around five critical success factors (CSFs): 1) solution sign-off, 2) user adoption once deployed, 3) data quality, 4) platform and data accessibility, and 5) sustainability and scalability. While most of these drivers are not specific to any single digital platform or discipline, PLM brings particular challenges when it comes to ensuring scope clarity, measurable business value, and effective data continuity across the Digital Thread.
Measuring PLM success relates to two perspectives: 1) implementing a specific PLM solution, or 2) using product development processes and PLM tools as part of ongoing operations, once a solution has been successfully deployed to the business. In this post, we discuss 20 generic key performance indicators (KPIs), 10 for each perspective.
Typically, critical success factors (CSFs) describe how success is defined; they are not to be confused with key performance indicators (KPIs), which reflect how success is measured.
10 KPIs to measure PLM implementations
The following KPIs are usually tracked during PLM implementations:
- Number of requirements and use cases, met vs not met, initial vs new vs obsolete
- Number of approved deliverables against project roadmap timeline
- Expected business benefits and realization against timeline
- Implementation budget vs actual spend, by delivery milestone and against the respective deliverables
- Scope deviation and change requests
- Number of issues and risks
- Number and type of customizations and integrations
- Number of gaps and enhancement requests to vendor
- Number of test cases validated
- Data migrated: volumes and quality metrics (per reconciliation reports)
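To make a couple of these implementation KPIs concrete, the sketch below computes requirement coverage (met vs not met, excluding obsolete items) and budget variance. The data structures and field names are illustrative assumptions, not prescribed by any particular PLM tool:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    rid: str
    status: str  # assumed states: "met", "not_met", or "obsolete"

def requirement_coverage(requirements):
    """Share of active (non-obsolete) requirements that are met."""
    active = [r for r in requirements if r.status != "obsolete"]
    if not active:
        return 0.0
    met = sum(1 for r in active if r.status == "met")
    return met / len(active)

def budget_variance(budget, actual):
    """Fractional variance; positive means overspend against budget."""
    return (actual - budget) / budget

# Hypothetical snapshot at a delivery milestone
reqs = [Requirement("R1", "met"), Requirement("R2", "met"),
        Requirement("R3", "not_met"), Requirement("R4", "obsolete")]
print(f"Requirement coverage: {requirement_coverage(reqs):.0%}")      # 67%
print(f"Budget variance: {budget_variance(100_000, 112_000):+.1%}")   # +12.0%
```

Tracking these per milestone (rather than once at go-live) is what makes the trend, and the glide path toward sign-off, visible.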
10 KPIs to measure product development processes, leveraging PLM in operations
The following KPIs are usually tracked once PLM solutions are deployed and in business-as-usual (BAU) operations:
- Number of issues / support calls / enhancement requests
- Number of continuous improvements, planned vs implemented (glide-path through backlog)
- Data volumes and compliance alignment covering key business aspects: across product lines, maturity states, standards, departments, and growth projections (such metrics are also relevant to tracking product development maturity)
- Number and type of tool licenses, bought vs used, per function and capability; license budget vs actual, including purchased vs rental / subscription
- Projected vs actual system performance
- Number of end users on-boarded and trained
- Number of new user methods developed and embedded in training; user adoption and refresher rates across existing vs new users
- Number of roles redesigned to align to new operating model
- Number of connected suppliers and delivery performance (data accuracy, on-time delivery, adherence to process, etc.)
- Engineering-manufacturing data alignment, synchronization frequency and accuracy
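As an illustration of one operational KPI from the list above, the sketch below computes license utilization, bought vs used per capability. The capability names and counts are hypothetical, purely for demonstration:

```python
def license_utilization(bought, used):
    """Build a per-capability utilization report.

    `bought` and `used` are dicts mapping capability name -> license count.
    Utilization is None when no licenses are owned for a capability.
    """
    report = {}
    for capability, owned in bought.items():
        active = used.get(capability, 0)
        report[capability] = {
            "owned": owned,
            "active": active,
            "utilization": active / owned if owned else None,
        }
    return report

# Hypothetical license inventory vs actual usage
licenses_bought = {"CAD": 50, "PDM": 120, "Visualization": 200}
licenses_used = {"CAD": 48, "PDM": 90, "Visualization": 35}
for cap, row in license_utilization(licenses_bought, licenses_used).items():
    print(f"{cap}: {row['utilization']:.0%} of {row['owned']} licenses in use")
```

A report like this makes over-provisioning visible (here, the hypothetical Visualization seats) and feeds directly into the license budget vs actual comparison.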
Many other metrics can be considered depending on scope, solution maturity, levels of integration, target processes and usage, infrastructure, and so on. Such choices are typically driven by continuous improvement priorities, business maturity, and organizational priorities.
It is also important to consider the frequency of such metrics: whether they are best measured monthly, weekly, or even daily; bearing in mind that gathering, analyzing, and publishing the underlying data can be time consuming. It is critical to understand how metrics feed into the underlying decision-making process and review governance.
What are your thoughts?
Disclaimer: articles and thoughts published on v+d do not necessarily represent the views of the company, but solely the views or interpretations of the author(s); reviews, insights, and mentions of publications, products, or services constitute neither endorsement nor a recommendation for purchase or adoption.