
5.6 Learn Fast

5.6.1 Measure What Matters

Guide to Product Ownership Analysis

The value delivered from products must be identified and measured at three levels:
  • Strategic,
  • Product, and
  • Delivery.
All three levels provide critical insight for the product throughout its lifecycle. Although many metrics can be grouped into these categories, most Product Owners use only a subset of them. Several aspects inform the choice of the right metrics, including:
  • Purpose of the product,
  • Definition of value,
  • Type of business model,
  • Go-to-market strategy,
  • Product lifecycle stage,
  • Timing of measurement, and
  • Monetization aspects.
There is a timing element involved in choosing the right set of metrics to be used for understanding value delivered through the product.

For example,
  • A customer acquisition rate can be used to measure product launch success, whereas...
  • Measuring performance against specification, or product satisfaction per price point, provides a clearer view of product delivery performance and the product's post-launch stability and growth.
Depending on the stage of the product lifecycle, different metrics provide different insights for course correction or sunset activities.

The choice of metric may depend on what role within the organization is using them. For example,
  • Financials and market-oriented metrics are more likely to be used by product management roles, rather than by Product Owner roles.
At a product level, the measures must indicate how the product serves the target customers and the type of experience the product provides. The product team must think about the implication of strategic and product measures so that the right alignment is achieved between business and strategic goals.

The delivery level measures the performance of the internal delivery process. The delivery measures provide an idea of how product features and requirements are being added, given the delivery objectives.
Strategic measures determine the effectiveness of a product vision and strategy and are directly tied to different business objectives of an enterprise. Strategic measures must indicate how a product:
  • Fits into an enterprise's product portfolio,
  • Compares to a competitor's offering, or
  • Generates financial success in terms of cost, revenue, market share, etc.
How POA Helps Strategic Measures

Although the product management function is primarily responsible for understanding and using these measures to continuously align the strategic performance of the product, Product Owners need a good understanding of them, including:
  • Product lifetime value,
  • Revenue share,
  • Average order value, and
  • Conversion rates.
The metrics highlight the current state of the business outcomes generated by the product. POs need to analyze these metrics to determine whether and how the product strategy should change.

For example:
  • If the business model for a product is to self-fund future development effort, and the revenue shows a downward trend, the Product Owner may need to determine how long the product has before running out of funds. The iterations may have to include only higher priority features.
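The runway reasoning in this example can be expressed as a small calculation. Below is a minimal sketch, assuming a linearly declining monthly revenue and a fixed monthly development cost; all figures are hypothetical, not from the guide:

```python
# Sketch of the self-funding runway example above (hypothetical figures):
# given projected monthly revenues and a fixed monthly burn rate, estimate
# how many full months remain before the product runs out of funds.

def months_of_runway(cash_on_hand, monthly_revenues, monthly_cost):
    """Count full months until cash is exhausted, given projected revenues."""
    months = 0
    for revenue in monthly_revenues:
        cash_on_hand += revenue - monthly_cost
        if cash_on_hand < 0:
            break
        months += 1
    return months

# Revenue trending downward by 10k per month against an 80k monthly burn.
projected = [100_000 - 10_000 * m for m in range(12)]
print(months_of_runway(50_000, projected, monthly_cost=80_000))  # 6
```

A result like this tells the Product Owner how many iterations remain for higher-priority features before the funding question becomes urgent.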
POA Techniques for Strategic Measures

BABOK® Guide Techniques
  • Benchmarking and Market Analysis: Compare solutions and products in the same context to identify missing aspects.
  • Financial Analysis: Understand the financial aspects of the product.
  • Metrics and Key Performance Indicators (KPIs): Metrics and key performance indicators measure the performance of a product.
Several metrics can be considered depending on the product context within the enterprise. The Strategy-Related Metrics Guide below simplifies the process of determining the right metrics by providing foundational metrics and their considerations. The Product Owner can also use the practices outlined in the Metrics Guide to determine the right subset of metrics that can serve as KPIs to understand the strategic impact of the product.

Strategy-Related Metrics Guide

Monthly or Annually Recurring Revenue per User (MRR/ARR)
  Description: The ratio of product revenue generated per month/annum to the number of total users.
  Consideration for Product Owner: Used when the product follows a subscriber model or has a contractual setting with customers. It indicates product profitability. The PO can use it as a benchmark for designing MVP/MMP scope and for competitor analysis.

Return on Investment (ROI)
  Description: The ratio of total revenue to the cost of investment.
  Consideration for Product Owner: Used as a measure for strategic communication, business case decisions, and budget approvals for the product.

Customer Acquisition Cost (CAC)
  Description: The total cost of onboarding a new customer to the product.
  Consideration for Product Owner: A decreasing trend can be used as an indicator of product success at a strategic level.

Net Present Value (NPV)
  Description: The discounted cash flow generated from the product minus the cost incurred over the product lifetime.
  Consideration for Product Owner: An estimate of the product's value to the enterprise. Used during business case creation and product planning efforts.

Total Cost of Ownership (TCO)
  Description: The total cost incurred throughout the product lifetime, including direct and indirect costs.
  Consideration for Product Owner: Validates product concepts and gives an idea of the total cost incurred, including any indirect or hidden costs.

Internal Rate of Return (IRR)
  Description: The discount rate at which the net present value equals zero.
  Consideration for Product Owner: Evaluates the possibility of an alternative investment of the capital over developing the product. Used for validating the product concept with stakeholders.

Time-to-Market (TTM)
  Description: The time taken for the product, a set of features, or a single feature to become available and usable by customers.
  Consideration for Product Owner: Validates value or competitive advantage in the product, feature sets, or a single feature. Also a measure of the efficiency of the delivery process.
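Two of the strategic measures above lend themselves to a quick calculation. The sketch below, with hypothetical cash-flow figures, computes NPV by discounting each period's cash flow, and ROI as the revenue-to-investment ratio defined in the guide:

```python
# Hedged sketch of two strategic measures described above.
# All cash-flow figures are hypothetical.

def npv(rate, cash_flows):
    """Net Present Value: sum of discounted cash flows, period 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(total_revenue, investment):
    """ROI as defined in the table: total revenue over cost of investment."""
    return total_revenue / investment

# Initial build cost of 200k, then four years of net inflows, 10% discount rate.
flows = [-200_000, 60_000, 70_000, 80_000, 90_000]
print(round(npv(0.10, flows), 2))  # positive NPV supports the business case
print(roi(300_000, 200_000))       # 1.5
```

IRR, by the table's definition, is simply the `rate` at which `npv` returns zero, which is typically found numerically.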


Case Study: Strategic Measures - Retailer
Background
Retailer Super C implemented an in-store pickup process for online orders and adjusted the physical location where orders could be picked up. They were working on enhancements to the related applications and processes to continue to grow the service offering.

The company was also implementing a grocery pickup service separate from the online order pickup. There were two separate, distributed development teams for each service.

Challenge
Super C executives wanted to measure how the use of in-store pickup was growing, against:
  • Initial revenue expectations,
  • Online grocery pickup, and
  • In-store sales.
They had set an initial goal of moving to 45% of online purchases being picked up in stores over the next three years.

Gary, the original Product Owner, was interested in how the product was performing so that any additional development or process changes could be addressed.

Action
Gary collected the Key Performance Indicators that upper management used to assess the success of in-store pickup for online orders. They included weekly and monthly views of:
  • Number of online orders picked up in-store vs. the total number of online orders.
  • Number of online orders for pickup vs. number of online grocery orders.
  • Revenue for online orders picked up in-store vs. total revenue of online orders.
  • Revenue for online orders for pickup vs. revenue of online grocery orders.
  • Number and revenue of online orders picked up in-store vs. number and revenue of individual order transactions in-store.
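The ratios Gary collected are all of the same shape: one count or revenue figure expressed as a share of another. A minimal sketch, using hypothetical weekly figures (the 45% target comes from the case study, the order counts do not):

```python
# Sketch of the KPI ratios above, with hypothetical weekly figures.

def share(part, whole):
    """Express one count or revenue figure as a percentage of another."""
    return round(100 * part / whole, 1)

online_orders_total = 12_400
online_orders_pickup = 4_588
pickup_revenue = 310_000
online_revenue_total = 820_000

# Online orders picked up in-store vs. total online orders.
print(share(online_orders_pickup, online_orders_total))  # 37.0 (below the 45% goal)

# Revenue for pickup orders vs. total online revenue.
print(share(pickup_revenue, online_revenue_total))  # 37.8
```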
Outcome
Gary reviewed the trend lines and noticed overall favourability. He wondered if there was more to the story. He knew from previously observing the process and interviewing customers that there seemed to be certain hours of the day, and days of the week, that were more popular.

Gary suggested to the management staff that reviewing KPIs around those metrics could be insightful for enhancing the services offered.

Lessons Learned
Since Product Ownership can span the entire lifecycle of a product, it is useful for Product Owners to pay attention to KPIs, which are important metrics managers use for making strategic decisions. In this case, Gary could see that there may be more metrics important to the ongoing improvement of the product, which could also translate to operational process change (for example, having more staff available for hand-delivering orders to customers during peak times).
Product Measures

The metrics grouped under "product measures" outline customer acceptance and the popularity of the product with its customer base. These metrics are among the most critical criteria used to adjust the scope and product features through a continuous feedback loop. The Product Owner must have clarity around the product measures that indicate the:
  • Desirability,
  • Experience,
  • Marketability, and
  • Value derived by the customers.
Examples within this category include:
  • Net Promoter Score (NPS),
  • Customer Acquisition Cost (CAC),
  • Churn,
  • Customer Satisfaction Score (CSAT), and
  • Customer Effort Score (CES).
How POA Helps Product Measures

The team can use these metrics to:
  • Fine-tune features and transactional experiences, and
  • Design to maximize the value delivered to the customers.
For example:
  • The feedback received for an MVP indicated an NPS above 60. However, the Customer Effort Score (CES) indicated high effort. A Product Owner may infer from these measures that the holistic view of the product is good in terms of:
    • Brand perception, and
    • Product capabilities to serve customer needs.
However, the customer journey to complete transactions is taking more effort than it should. In this scenario, the customer journey needs to be simplified and the acceptance criteria need to be tightened for the stories in question.
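The inference in this example can be written as a simple decision rule. The sketch below is illustrative only: the NPS threshold of 60 is taken from the example, while the CES threshold of 3.5 (on a 5-point scale, higher meaning more effort) is an assumption:

```python
# Hedged sketch of the NPS/CES inference above: a good NPS combined with a
# high Customer Effort Score points at friction in the journey, not the
# product itself. Thresholds are illustrative assumptions.

def interpret(nps, avg_ces):
    """Combine NPS (-100..100) with an average CES (1..5, higher = more effort)."""
    if nps > 60 and avg_ces > 3.5:
        return "simplify customer journey; tighten acceptance criteria"
    if nps > 60:
        return "product and journey both performing well"
    return "investigate product value proposition"

print(interpret(nps=62, avg_ces=4.1))
```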

POA Techniques for Product Measures

BABOK® Guide Techniques
  • Metrics and Key Performance Indicators (KPIs): Metrics and key performance indicators measure the performance in the context of a product.
The discovery of product metrics requires a clear alignment of business goals and customer goals. To identify the right set of metrics, the product team and the Product Owner must balance a diverse set of measures, which are influenced by:
  • Business strategy,
  • Product vision,
  • Market forces, and
  • Customer experience.
This requires critical decision-making and a structured thought process. The Product-Related Metrics Guide helps the Product Owner to identify and prioritize the right metrics from a product perspective by:
  • Providing some foundational metrics and their use considerations, and
  • Defining the best practices involved.
Product-Related Metrics Guide
Net Promoter Score (NPS)
  Description: A 0-10 scale used to gauge customers' likelihood to recommend the product.
  Consideration for Product Owner: A single holistic measure of the product's perceived value to the customer. Used to introduce new features, quantify customer experience, or measure stakeholder satisfaction after an iteration or sprint.

Customer Effort Score (CES)
  Description: Evaluates the level of customer effort required to achieve their objectives, usually on a 5-point scale.
  Consideration for Product Owner: Can be combined with NPS to determine areas of friction in a customer journey with the product, which can eventually lead to backlog refinement.

Adoption Rates
  Description: Measures customer adoption of the product over a period.
  Consideration for Product Owner: The PO can use this metric to understand whether high-value features are getting released first.

Feature Usage Rate
  Description: A measure of which feature within the product is used most often.
  Consideration for Product Owner: The PO can use it to validate high-value features and streamline the customer journey around features with a high use rate.

Retention/Churn
  Description: The number of product users retained, or churned, during a specific period. Either metric can be used, but not necessarily both.
  Consideration for Product Owner: Churn rate can be used to analyze when a product may need to be sunset, verify product goals, or assess market conditions. A similar metric, such as bounce rate, can determine at a feature level how many transactions are being dropped (to identify high-risk features).
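Two of the product measures above reduce to short calculations. A sketch with made-up survey and customer counts, using the standard NPS convention (promoters score 9-10, detractors 0-6):

```python
# Sketch of two product measures from the guide above, with hypothetical data.

def net_promoter_score(scores):
    """NPS: % promoters (9-10) minus % detractors (0-6) on the 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def churn_rate(customers_start, customers_lost):
    """Share of customers lost during the period."""
    return customers_lost / customers_start

survey = [10, 9, 9, 8, 7, 6, 5, 10, 9, 3]
print(net_promoter_score(survey))  # 5 promoters, 3 detractors -> 20
print(churn_rate(2_000, 150))      # 0.075
```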


Case Study: Product Measures - Retailer
Background
Super C adjusted their processes which improved the adoption of in-store pickup of online orders. The product development team turned their attention toward developing enhancements to the online and mobile applications to further increase customer adoption of the product.

Challenge
Super C had options for customers to pick up online orders of general merchandise and a grocery pick-up service. The Product Owner, Gary, noticed that pickup orders had levelled off at between 35% and 38% of all online orders.

In speaking with a co-worker, Sheila, Gary found out that grocery pickup sales were soaring, outperforming strategic goals by almost 50% over the past three months. Gary's business counterparts had also noticed and tasked him to come up with a way to determine why the performance of online order in-store pickup was stagnant.

Action
One of the features the team built into the online shopping application was a Net Promoter Score (NPS), where customers could, with one click, let Super C know how likely they were to recommend the application to others. Gary pulled up the NPS scores for the previous year and noticed that the scores were evenly distributed around the mid-point, with a few very low scores. There were not any very high scores.

Outcome
Gary consulted with the business, and they decided to add an optional free-text field in the NPS survey where a customer could enter comments explaining why they chose the score they did. One month after collection, over half of the NPS surveys returned included comments, in addition to the score, which provided the team with valuable insights to improve the product.

Lessons Learned
Metrics tell an important story, but sometimes there is additional context needed to make the necessary adjustments. When it comes to surveys, customers often will not take the time to provide more detail, so further analysis can be required if the metrics are not answering pressing questions.

In this case, Super C decided to take the extra step to ask for a sentence or two about the NPS they awarded, and they received needed feedback.
Delivery Measures

The metrics considered indicators of delivery performance for product development outline the product team's understanding and ability to develop a cohesive product. The delivery measures consider the:
  • Effectiveness of the development processes,
  • Solution architecture, and
  • Quality attributes of the product.
A drop in the measures usually indicates a gap or a challenge in the
  • Execution process,
  • Product scope, or
  • Team productivity.
Typical metrics include:
  • Burndown charts,
  • Team velocity,
  • Cycle time,
  • Lead time,
  • Throughput,
  • Escaped defects, and
  • Defect density.
How POA Helps Delivery Measures

The Product Owner uses the delivery measures to conduct a retrospective on the product backlog management process and evaluate the product team's performance.

For example:
  • A lead time significantly higher than the cycle time may indicate an issue with:
    • Team capacity,
    • The prioritization process, or
    • An inability to elaborate backlog items.
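The comparison in this example can be made concrete. A minimal sketch, assuming lead time runs from request to delivery and cycle time from work start to delivery (dates are hypothetical):

```python
# Sketch of the lead-time vs. cycle-time comparison above. Lead time runs from
# request to delivery; cycle time from work start to delivery. Dates are
# hypothetical.
from datetime import date

def days_between(start, end):
    return (end - start).days

requested = date(2023, 1, 2)
work_started = date(2023, 2, 20)
delivered = date(2023, 3, 1)

lead_time = days_between(requested, delivered)      # 58 days
cycle_time = days_between(work_started, delivered)  # 9 days

# A large gap suggests items wait a long time in the backlog before work
# starts, pointing at capacity, prioritization, or elaboration issues.
print(lead_time, cycle_time, lead_time - cycle_time)
```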
POA Techniques for Delivery Measures

BABOK® Guide Techniques
  • Metrics and Key Performance Indicators (KPIs): Metrics and key performance indicators measure the performance in the context of a product.
The delivery measures are within the Product Owner's control to direct the delivery effort. The metrics used for the delivery objectives are chosen so that the right features and product functions are delivered at the right time throughout the product lifecycle. The Delivery-Related Metrics Guide helps Product Owners identify:
  • Typical delivery measures,
  • Use considerations,
  • Techniques, and
  • How-to guides for determining the right delivery metrics.

Delivery-Related Metrics Guide

Sprint Goal Success
  Description: A set of objectives for a sprint, agreed upon by the delivery team and Product Owner. Usually measured as pass or fail after the sprint cycle is completed.
  Consideration for Product Owner: Helps validate delivery risks, assumptions, and constraints. It can indicate the effectiveness of processes such as prioritization and release planning.

Escaped Defects
  Description: A count or ratio of defects that are uncovered by the customer per release.
  Consideration for Product Owner: Indicative of the team's understanding of the user stories, the effectiveness of the acceptance criteria, and the ability of the team to validate defects.

Defect Density
  Description: The number of defects per lines of code (or per story point, or per sprint).
  Consideration for Product Owner: The PO can use this metric to gauge release quality and identify features that need rework.

Scope Change Rate
  Description: The ratio of additional effort included in the scope of sprints vs. the original effort estimated.
  Consideration for Product Owner: Measures the level of scope creep, which helps in sprint planning and effort estimation. Since changes are welcomed in agile initiatives, baseline assumptions for calculations must be agreed upon by stakeholders.

Burndown Chart
  Description: Indicates the difference between actual and planned effort over sprints or releases.
  Consideration for Product Owner: The PO can use it to adjust sprint and release plans, and to investigate the underlying causes of any significant difference.

Team Velocity
  Description: The total number of story points delivered per sprint.
  Consideration for Product Owner: The PO can use this measure as an indicator of predictability across sprints and adjust release plans.
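Several of the delivery measures above are simple ratios over sprint data. A hedged sketch with hypothetical sprint figures:

```python
# Sketch of three delivery measures from the guide above, using hypothetical
# sprint data.

completed_points = [21, 24, 19, 22, 24]  # story points delivered per sprint
defects_found = 6
story_points_delivered = sum(completed_points)  # 110

# Team Velocity: average points delivered per sprint.
velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 22.0

# Defect Density, here expressed per story point delivered.
print(round(defects_found / story_points_delivered, 3))  # 0.055

# Scope Change Rate: extra effort added mid-sprint vs. the original estimate.
original_estimate, added_effort = 100, 12
print(added_effort / original_estimate)  # 0.12
```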


Case Study: Delivery Measures - Food Manufacturer
Background
Poultry Plus had a successful first iteration implementation of a "speed-to-market" dashboard of decision-making data used by executive and operations management. There was a great deal of excitement around this new way of looking at data, and requests for additional dashboards were coming into the product development team. The Product Owner, Carla, assessed the requests as they arrived, and the team did their best to prioritize them within the product roadmap.

Challenge
Although Carla did a great job of communicating when a new request had been received and when development on the request had begun, the backlog of requested dashboards grew. Managers contacted Carla frequently asking when their request would be addressed. The further down the list the request was, the less likely Carla was to have a reasonable idea of when delivery could be expected. She turned to the Scrum Master, Joanne, to assist with a way to better set expectations.

Action
Carla and Joanne discussed how they could best measure the team’s capacity to deliver on the many requests from Poultry Plus executives. They quickly realized that they needed a better way to assess and determine how much work the team could complete during each fixed sprint. Joanne identified several typical metrics that teams use to measure their effectiveness. After some discussion, Carla settled on using story points to size each request when it was received. She felt this would help the team better understand the size and complexity of each request. Carla felt the second piece of the puzzle was to better understand the team’s capacity to deliver on these requests. Here again, she felt story points would be a good way to identify team velocity and she could use that to more effectively predict when requests could enter a sprint and be completed for delivery.

Outcome
At the sprint planning meeting, the development team agreed to adopt the new way of sizing user stories using points. They:
  • Focused on sizing the stories for each dashboard, giving more points to complex dashboards,
  • Noticed some variation in the total number of story points allocated to each sprint,
  • Redistributed the stories across the future sprints according to their predicted velocity, and
  • Agreed to adjust if they found their average velocity was different than predicted.
Over the next few backlog refinement meetings, Carla worked with the team to:
  • Estimate any new-dashboard requests with story points, and
  • Add them to the backlog according to priority.
The requests included points, and since they had determined the team's velocity, Carla could reliably give estimated date ranges for a dashboard to be ready.
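Carla's date-range estimates follow from a straightforward calculation: accumulate story points down the prioritized backlog and divide by velocity. The sketch below illustrates the idea; the dashboard names and point values are hypothetical:

```python
# Sketch of the velocity-based forecast above: given a points-sized backlog in
# priority order and the team's observed velocity, predict the sprint in which
# each request should complete. Names and numbers are hypothetical.

def forecast_sprints(backlog, velocity):
    """Map each backlog item to the 1-based sprint it should finish in."""
    schedule, cumulative = {}, 0
    for name, points in backlog:
        cumulative += points
        # The item finishes in the sprint where its cumulative points fit;
        # -(-a // b) is ceiling division for positive integers.
        schedule[name] = -(-cumulative // velocity)
    return schedule

backlog = [("sales dashboard", 8), ("ops dashboard", 13), ("hr dashboard", 5)]
print(forecast_sprints(backlog, velocity=10))
# {'sales dashboard': 1, 'ops dashboard': 3, 'hr dashboard': 3}
```

Converting a sprint number into a calendar date range is then a matter of multiplying by the fixed sprint length, which is how an estimate can be quoted as a range rather than a single date.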

Lessons Learned
When the team started working with velocity for a delivery measure, there was a temptation to continuously increase their velocity and get work done even faster. The result was team burnout and missed deliveries.

Also, defects were going into production, based on perceived pressure to move faster and faster.

Carla wanted to guard against setting unrealistic expectations both for the team and the business stakeholders. The team focused on limiting their velocity to a pace that was manageable for the team, while delivering quality functionality, frequently. The team settled into a routine that was reliable and satisfactory to the business stakeholders and end-users.