Measuring Progress in Knowledge Management Programs
Advice to the knowledge manager
Three questions are central to the business case for any performance improvement program, including knowledge
management:
- Value Proposition: What are the expected business or
organizational benefits?
- Cost: What resources will be required to realize the benefits?
- Time: How long will it take to realize
the benefits?
At the end of the day, your KM program must produce easy-to-understand
results that answer these questions. Measurement is a means for demonstrating
progress and sustaining stakeholder support. It is essential for understanding what is working, and for
determining what should be changed to ensure that the expected benefits are realized.
Measures fall into three classes:
- Input Measures (or Cost Measures): These
include expenses to create and support the program (e.g., consultants, direct staff, IT)
plus the time of participants.
- Process Measures (or Activity Measures): These are measures of the level of participation in the KM program. Examples include: number of lessons learned,
number of project teams using KM approaches, Web page views, number of community of practice (CoP) members, and perceived utility
of KM processes and technology in improving the daily jobs of participants.
- Output Measures (or Outcome Measures or Results Measures):
These are the measures of progress in delivering on the value
proposition. They may be traditional "hard-dollar" business metrics like:
revenue growth, improved productivity (e.g., revenue/employee for a service business), improved
efficiency (e.g., return
on capital employed for a capital-intensive business), reduced capital or operating expense,
improved quality, reduced cycle time, or enhanced innovation (e.g., percentage of revenue
from new products). They may also be "soft-dollar"
measures like: success stories, cost avoidance, increased customer satisfaction, improved skills and
competencies, decreased time-to-competence, and improved ability to attract talent or capital. Organizations
often expect the KM program to influence their overall Key Performance Indicators and integrate its
results into a Balanced
Scorecard.
Input and process measures are called "leading indicators," whereas output
measures are called "lagging indicators." Because different stakeholders need
different data to make different decisions, sustainable KM programs employ a blend of leading and lagging
indicators.
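To make the distinction concrete, here is a minimal sketch in Python of how a blended set of leading and lagging indicators might be tracked side by side. All names and figures are hypothetical illustrations, not values from any real program or from the text above.

```python
# Minimal sketch: a blended scorecard of leading (input/process) and
# lagging (output) indicators for a hypothetical KM pilot.
# All names and figures are illustrative, not drawn from any real program.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    kind: str      # "input", "process", or "output"
    value: float
    unit: str

    @property
    def indicator_type(self) -> str:
        # Input and process measures lead; output measures lag.
        return "lagging" if self.kind == "output" else "leading"

# Hypothetical figures for one reporting period.
measures = [
    Measure("Program cost (staff, IT, consultants)", "input", 250_000, "USD"),
    Measure("Lessons learned captured", "process", 120, "count"),
    Measure("CoP members", "process", 340, "count"),
    Measure("Revenue per employee", "output", 410_000, "USD/employee"),
    Measure("Revenue from new products", "output", 18.0, "% of total revenue"),
]

for m in measures:
    print(f"{m.indicator_type:>7} | {m.kind:>7} | {m.name}: {m.value:,} {m.unit}")
```

The point of the mix is simply that input and process measures move first, while the output measures follow, so both belong in the reporting.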
The measures that make sense for your KM program depend on the culture of your organization. Begin by
asking around to understand what long-term success means to the different stakeholders (e.g., senior
managers, operational managers, colleagues, individual contributors, customers, suppliers) and what data
they will need to be convinced. Prime these early discussions by going over successes reported by other
organizations in benchmarking studies. However, be prepared for a vigorous discussion. Reported successes
are usually greeted with healthy skepticism from all stakeholders.
Following are some familiar objections, together with some ideas on how to respond to them.
- "We're different." The results don't apply to our organization.
- Of course, this
may be true. It is best to find results from organizations that are close to your own (perhaps competitors,
customers or companies that use similar business processes in a different sector) or where the differences
can be shown to be immaterial to the case for KM.
- "The knowledge manager can't claim all the credit."
- Organizational performance results are indeed based on a number of factors. Focus
on results reported by line management and play down those reported only by a knowledge
manager.
- Be aware that
organizations typically do not attribute results to a particular functional group (KM, IT, HR, …); they attribute them to the program as a whole.
For example, Schlumberger reports these results for a specific KM program (InTouch): queries resolved 20 times faster
and $200 million per year in revenue created or saved.
- This kind of reporting
is consistent with the way companies attribute revenue to new products. They don't carve up the
revenue by function (R&D
gets this percentage, Marketing gets that percentage, and so on). Of course, the analogy to KM isn't perfect
because company accounting systems track revenue for individual products, whereas they typically don't track cost
savings by program with the same rigor.
- This leads to a caveat when measuring the results of your own
KM program: Avoid percentage credit negotiation. In this approach, a functional group
(like KM or R&D) engages
business managers in a kind of negotiation about what percentage of the revenue or cost savings for
a particular program should be attributed to their efforts. Experience has shown that this is a waste of time and
reduces the credibility of the group doing the negotiating.
- "It is easy to measure progress in a business, but much harder for government
or non-profit organizations."
- It is true that "hard-dollar" output measures are
less obvious for a government or non-profit organization. Concentrate on "soft-dollar" measures
such as customer satisfaction, cost avoidance, not making the same mistake twice, and providing quick access
to correct information.
Use what you learn to select a small set of measures for each pilot project:
- Measures that are meaningful
to management. For these decision-makers, the most useful are likely to be a combination of input
and "hard-dollar" output measures, but be sure to include some anecdotal measures (e.g., success
stories) and process measures in the mix. "Hard-dollar" measures are the ones senior managers rely on
to decide whether the program is producing the expected benefits, but they are often the most
difficult measures to obtain.
- Measures that are useful to you as knowledge manager. These are primarily process measures that help
you assess overall program health, measure the utility of individual KM processes and technologies,
and guide next steps.
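As a rough sketch of what such a small, audience-matched measure set for a single pilot might look like, the following Python snippet lays one out. The pilot name, measures, and groupings are entirely hypothetical.

```python
# Minimal sketch of a per-pilot measure set matched to its audiences.
# The pilot name, measures, and groupings are hypothetical examples only.
pilot_measures = {
    "proposal-reuse-pilot": {
        "management": [
            ("input", "Pilot cost to date"),
            ("output", "Proposal cycle time"),
            ("anecdotal", "Success stories collected"),
        ],
        "knowledge manager": [
            ("process", "Proposals reusing library content"),
            ("process", "Active contributors per month"),
            ("process", "Perceived utility (participant survey score)"),
        ],
    },
}

for pilot, audiences in pilot_measures.items():
    print(pilot)
    for audience, measures in audiences.items():
        print(f"  {audience}:")
        for kind, name in measures:
            print(f"    [{kind}] {name}")
```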
Measurement needs change over time
Organizations predictably move through five stages of KM maturity. (See APQC's
Stages of Knowledge Management Maturity for an in-depth discussion.) While measurement
needs change over time as experience is gained, it is important to measure progress at every stage.
At the outset, making an explicit ROI case for KM is difficult. Results from the
early stages of a KM program are likely to be more intangible, but still measurable. These include
reusing materials and expertise, eliminating redundant effort, avoiding making the same mistake twice,
and finding information quickly and easily.
The first critical measurement challenge arises as you prepare to launch pilot projects. This is where
organizations most often make the mistake of not being specific about which processes they expect to change
and how, and of failing to build the necessary measurement systems. You will find process measures the
easiest to obtain, but be sure to capture some output measures. Quantitative measures are best if you can
get them, but anecdotal measures, like success stories, are also valuable.
As your program matures and enough success has been achieved to combat early skepticism,
the measurement challenge changes. As knowledge manager, you will
continue to monitor KM-specific measures to drive and reinforce behavior, assess
progress, and build a business case for additional KM efforts. However, the importance of KM-specific
measures diminishes for line management. As KM becomes institutionalized, it must be possible to compare
KM to other organizational uses of money. In general, this requires standard business metrics for "apples-to-apples" comparisons.
New knowledge managers should be aware that few organizations have reached this end state where KM is "the
way we work."
Measurement Tips
Gleaned from experience with best-practice organizations.
- Start measuring early in the life of the program and continue to measure often.
- Measures are best "designed-in" to KM projects, not added later.
- Match the measure to the audience. For every measure, understand which stakeholders are interested
and what actions they could take as a result of having the data.
- Keep it simple. Focus on a few critical measures. Don't create measurement schemes that are more trouble
than they are worth—too
time-consuming, too expensive, too hard to understand.
- The best output measures are those
already used by the organization and widely understood by managers and individual contributors.
Avoid developing new ones.
- Aim for accuracy and balance among input, process and output measures.
- Don't raise unrealistic expectations
about ROI. Err on the side of caution; it is better to underestimate
than to overestimate. ROI is still primarily captured indirectly and by extrapolation.
- Stories are
powerful indicators of success and promotional tools. While not a replacement for "hard-dollar" measures,
they are useful to demonstrate progress to managers and to drive knowledge-sharing behavior
throughout the organization.
See also: Measuring KM Activity and Progress (Vincent I. Polley and Reid G. Smith). Presented at the Knowledge Leadership Forum 2007, New York, NY, 27 April 2007.