Project Metrics: A Practical Approach to Measuring Software Outsourcing Performance

By William Murphy, CEO, Dextrys

No one would argue that software service teams should regularly review their performance over time in order to compare their effectiveness to other teams and to their own past performance. Our experience has shown that few teams actually collect and review meaningful information, and fewer still use it to make effective changes. Most teams fall short by not collecting the right data, not performing appropriate analysis, or not taking effective action on the findings.

Effective performance measurement is not difficult.  Most teams have the raw data available through the tools they use every day.  Providing your leads with the right training and applying the steps laid out below can help you quickly establish a lightweight routine for evaluating your teams and helping them to regularly improve.

Project Metrics – Basic Training

To start incorporating effective data-driven performance management into a software services team, first make sure the leads and project managers understand three of the basic concepts behind measuring performance:

  • Data are the raw measurements that provide the basis for evaluating performance. These are the raw defect counts, ticket counts, effort hours, and so forth that your systems should automatically capture as your teams do their day-to-day work.
  • Information is the result of analyzing data to help you understand what the data mean. Data by themselves are just numbers and units. Only once someone has applied rules and analysis do they yield meaningful information that you can use to make decisions.
  • Metrics define measures of performance. In the US, we measure a car’s fuel efficiency as miles per gallon. The EPA carefully defines the conditions and methods used to calculate that value. In an analogous way, metrics define the performance of a testing, development, or support team. Because of the wide variation in types of work, there are few standards, and you will often need to define the metrics for the teams in your organization.

In performance measurement, the three concepts work together: Metrics provide the yardstick for measuring performance. You analyze the raw data to yield information about how your team measures up and ideas for how to improve.

Team Performance

Once your team understands the basic concepts, they are ready to set up a performance measurement routine.

  1. Define the metrics. Metrics definition should be the first step in measuring the team’s performance. Every team is different, but in general you want to look at three types of metrics:
    a. Volume. How will you measure the amount of work that the team does? The key is defining the units of work that the team accomplishes and an objective measure for quantifying them. (Examples: tickets completed, story points successfully demonstrated, test cases executed.)
    b. Efficiency. How will you measure the amount of work that the team does in a given period of time? Use the units selected for volume, then decide the time period (per week? per sprint? per month?) and the granularity (per person? per team? per hour?).
    c. Quality. How will you measure the quality of their work? This category can vary greatly depending on the type of service a team performs. Software defects per 40 developer hours is one useful measure for coding. For software testing, it might be some measure of defects reported in production or invalid defects reported.
    For each metric, establish a target range (guardrails) that you expect the results to fall within. (Plan to adjust those guardrails at least once after a two- or three-month ‘calibration’ period, once you have real data.)
  2. Capture the data. Once you know what data you need for your metrics, review the data captured by your incident reporting, time tracking, and project management tools to make sure you can complete the measurements. If there are any gaps, adjust your processes or modify your tools to make data capture part of the team’s regular work routine.
    When preparing the first few reports, you will find nuances in the data that you had not considered when you created the metrics (e.g., does a test case count as created when it is checked in, or when it is executed for the first time?), and you will need to extend your definitions so that your results are consistent.
  3. Analyze and report. Set up a regular reporting period. Monthly is generally best: not so often that the review gets overwhelmed by operational details, but frequent enough to drive performance change within a quarter.
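The three metric types and their guardrails can be sketched as a small report script. Everything here is illustrative: the ticket records, the reporting period, and the guardrail ranges are assumptions you would replace with your team's own definitions.

```python
# Illustrative monthly metrics report: volume, efficiency, and quality
# computed from hypothetical ticket records, then checked against guardrails.
from dataclasses import dataclass

@dataclass
class Ticket:
    assignee: str
    hours: float                # effort spent on the ticket
    defects_found_later: int    # defects traced back to this work

# Hypothetical data for one reporting period.
tickets = [
    Ticket("alice", 6.0, 0),
    Ticket("alice", 4.5, 1),
    Ticket("bob", 8.0, 0),
    Ticket("bob", 3.5, 0),
]

WEEKS_IN_PERIOD = 4
team_size = len({t.assignee for t in tickets})

# Volume: units of work completed in the period.
volume = len(tickets)

# Efficiency: tickets per person per week.
efficiency = volume / (team_size * WEEKS_IN_PERIOD)

# Quality: defects per 40 developer hours (the example measure in the text).
total_hours = sum(t.hours for t in tickets)
quality = sum(t.defects_found_later for t in tickets) / total_hours * 40

# Guardrails: target ranges; flag anything outside them for discussion.
guardrails = {"efficiency": (0.4, 2.0), "quality": (0.0, 2.5)}

for name, value in [("efficiency", efficiency), ("quality", quality)]:
    lo, hi = guardrails[name]
    status = "OK" if lo <= value <= hi else "OUT OF RANGE"
    print(f"{name}: {value:.2f} ({status})")
```

The point is not the specific numbers but the shape of the routine: each metric reduces to a single value per period that can be compared against a target range and trended over time.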

Your first few reports will focus on understanding results in terms of the defined metrics and how they compare to the guardrails. In addition to the metrics (“how many tickets per person per week”), use ad hoc reports to understand the results. (E.g., a breakout of average time for each ticket type may show how the mix of work affects productivity.) After three months, you will be able to trend your metrics.

Make sure that your performance report provides information of value to the team’s clients. The results should answer basic questions about how much work the team is doing, how efficiently they are doing it, and to what level of quality. If the results do not match expectations, then use ad hoc reports to figure out why there is a difference.
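An ad hoc breakout like the ticket-type example above is typically a simple group-and-average over the same raw data. This sketch uses invented ticket types and resolution times purely to show the shape of such a report.

```python
# Hypothetical ad hoc report: average resolution time per ticket type,
# to see how the mix of work affects overall productivity.
from collections import defaultdict
from statistics import mean

# (ticket type, hours to resolve) -- illustrative data only.
resolved = [
    ("defect", 3.0), ("defect", 5.0),
    ("feature", 12.0), ("feature", 16.0),
    ("question", 0.5),
]

by_type = defaultdict(list)
for ttype, hours in resolved:
    by_type[ttype].append(hours)

for ttype, hours in sorted(by_type.items()):
    print(f"{ttype}: avg {mean(hours):.1f} h across {len(hours)} tickets")
```

A report like this makes it obvious when a shift in the mix of work, rather than a change in the team, explains a swing in the headline tickets-per-week number.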

Using Performance Reviews to Drive Improvement

Measuring and reporting performance only has value if your team uses what it learns to improve its results. A few basic guidelines will make sure your teams are on the road to getting the value they should:

  1. Engage the team.  During each reporting cycle, review the performance results with the entire team.
  2. Focus the review.  Before the meeting, select one or two performance areas where the team will concentrate its discussions.
  3. Generate improvement actions. Each session should produce actions to be completed over the next month or quarter.
To identify targets for improvement, select a metric that is outside its guardrails or one that is not trending in a consistently positive direction. You will need to analyze data beyond the regular metrics reports to generate ideas for improvement. Involve the whole team. Take the example from above about some ticket types taking much longer than others to resolve: select one of those outlier types and have the team brainstorm ways, such as defect root cause analysis, to reduce the time required for those types of tickets.
Sometimes you will need data that is outside your normal data collection routine. A typical exercise is to have the team record time usage data at high resolution for a couple of weeks: record every activity that takes more than 15 minutes. Analysis may find that a high percentage of the team’s time is spent on nonproductive activities like code check-in, which can then become a target for automation or process improvement.
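Summarizing such a time log is a few lines of code. The activity names, minute counts, and the choice of which categories count as nonproductive are all illustrative assumptions here.

```python
# Sketch: summarize a two-week high-resolution time log to estimate
# what share of team time goes to nonproductive activities.
# (activity, minutes); only activities over 15 minutes were logged.
entries = [
    ("coding", 420), ("code check-in", 90), ("meetings", 180),
    ("coding", 360), ("build waiting", 120), ("code check-in", 60),
]

# Which categories count as nonproductive is a team judgment call.
NONPRODUCTIVE = {"code check-in", "build waiting"}

total = sum(minutes for _, minutes in entries)
nonprod = sum(minutes for activity, minutes in entries
              if activity in NONPRODUCTIVE)

print(f"nonproductive share: {nonprod / total:.0%}")
```

If the nonproductive share turns out large, the specific categories driving it become natural candidates for the improvement actions discussed below.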

From your analysis, pick one or two concrete actions and track their execution and results through the regular performance review process.  Each quarter, look back on your record of improvements and see whether they have yielded the results you expected.

Metrics at Dextrys

Dextrys provides software development and testing services to technology driven companies in the US and worldwide. We hold regular team performance reviews with our clients to make sure our teams continue to improve month over month.  Let Dextrys help you reduce your software engineering costs and extend your engineering capacity. Visit our home page at www.dextrys.com.
