Meaningful metrics

What are the code quality metrics that we can use for

  • setting up standards within the team (if any)?
  • marketing our progress to senior management?

At Prezi, they set up a system to monitor the development process. The main question: who broke the tests?

  • Measure process quality. How are we working?
  • Some people don't care about quality or don't know they should care about quality.
  • The goal is to deliver as frequently as possible.

Measure (see the sketch after this list):

  • the number of prod deployments / day
  • the number of broken builds
  • who broke the build, how frequently
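
A minimal sketch of pulling these numbers out of CI build records, assuming the CI server can export builds as a JSON list with timestamp, status, committer and deployed fields (the file name and field names are assumptions, not any particular CI tool's format):

```python
import json
from collections import Counter

# Assumed export format: one record per CI build, e.g.
# {"timestamp": "2012-10-22", "status": "failed", "committer": "alice", "deployed": true}
with open("builds.json") as f:
    builds = json.load(f)

deployments_per_day = Counter(b["timestamp"] for b in builds if b.get("deployed"))
broken = [b for b in builds if b["status"] == "failed"]
breaks_per_committer = Counter(b["committer"] for b in broken)

print("Deployments per day:", dict(deployments_per_day))
print("Broken builds:", len(broken))
print("Who broke the build, how often:", breaks_per_committer.most_common())
```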


You can use information radiators that show the state of the build without naming the committer. The goal is not to put blame on the dev who broke the build. The person who commits the most is usually also the one who breaks the build the most, so you've got to be careful not to break their spirit.

To market code coverage to management, use only a part of the codebase. Show them the coverage of the most important features.
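
One way to do this, assuming a Cobertura-style coverage XML report and a hand-picked list of key packages (both the report path and the package names are assumptions):

```python
import xml.etree.ElementTree as ET

IMPORTANT_PACKAGES = ("billing", "editor")  # hypothetical key features

tree = ET.parse("coverage.xml")  # Cobertura-style report, path assumed
covered = total = 0
for pkg in tree.iter("package"):
    if pkg.get("name", "").startswith(IMPORTANT_PACKAGES):
        for line in pkg.iter("line"):
            total += 1
            covered += int(line.get("hits", "0")) > 0

if total:
    print(f"Line coverage of the key features: {100.0 * covered / total:.1f}%")
```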

Monitor coverage results closely. If the coverage goes down, investigate if there was a reason for removing any tests.
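
A simple ratchet for this, assuming the current coverage percentage is computed elsewhere and the last accepted value is kept in a plain text file (the file name and the numbers are assumptions):

```python
import sys
from pathlib import Path

BASELINE_FILE = Path("coverage_baseline.txt")  # hypothetical location

def check_coverage(current: float) -> None:
    baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0
    if current < baseline:
        sys.exit(f"Coverage dropped from {baseline:.1f}% to {current:.1f}% - "
                 "check whether tests were removed for a good reason.")
    BASELINE_FILE.write_text(f"{current:.1f}")  # ratchet the baseline upwards

check_coverage(current=83.4)  # value would come from the coverage report
```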

Monitor which file changes cause the build to break the most. This will give you an idea about where you need to refactor.
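
A rough way to find these hotspots, assuming a CSV that lists, for each build, the files touched and whether the build broke (the file and its format are assumptions):

```python
import csv
from collections import Counter

# Assumed input: one row per (build, changed file) pair, e.g. "1234,src/render.py,failed"
break_counts = Counter()
with open("build_changes.csv") as f:
    for build_id, path, status in csv.reader(f):
        if status == "failed":
            break_counts[path] += 1

print("Files most often changed in broken builds (refactoring candidates):")
for path, breaks in break_counts.most_common(10):
    print(f"{breaks:4d}  {path}")
```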

When you start using FindBugs, start with the priority 1 bugs, then work your way through the rest. You can set up a test that ensures that the number of bugs doesn't increase.
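
A sketch of such a ratchet test, assuming the FindBugs XML report is generated by the build and the allowed maximum is committed with the code (the report path and the threshold are assumptions):

```python
import unittest
import xml.etree.ElementTree as ET

MAX_BUGS = 120  # current count; lower it as bugs get fixed, never raise it

class FindBugsRatchetTest(unittest.TestCase):
    def test_bug_count_does_not_increase(self):
        # FindBugs XML reports contain one BugInstance element per finding
        report = ET.parse("findbugs.xml")  # report path assumed
        bugs = report.findall(".//BugInstance")
        priority_1 = [b for b in bugs if b.get("priority") == "1"]
        self.assertEqual(len(priority_1), 0, "fix priority 1 bugs first")
        self.assertLessEqual(len(bugs), MAX_BUGS, "new FindBugs findings were introduced")

if __name__ == "__main__":
    unittest.main()
```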

Set aside dedicated time to deal with technical debt. At Prezi, it's 1 day a week. At Skype (mobile department), it's 40% of developers' time.

For management, the number of bugs per version is the most relevant metric, but it arrives half a year late.

Another metric tracked is the number of TODO comments. If it's above a certain limit, the build fails.
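
A minimal version of such a check; the source layout, file pattern and limit below are assumptions:

```python
import sys
from pathlib import Path

TODO_LIMIT = 50            # project-specific threshold, assumed
SOURCE_ROOT = Path("src")  # assumed source layout

todos = sum(
    line.count("TODO")
    for path in SOURCE_ROOT.rglob("*.py")   # adjust the pattern to your language
    for line in path.read_text(errors="ignore").splitlines()
)

if todos > TODO_LIMIT:
    sys.exit(f"{todos} TODO comments found, limit is {TODO_LIMIT} - failing the build.")
print(f"{todos} TODO comments (limit {TODO_LIMIT}).")
```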

Measuring the number of cyclic dependencies can help in reducing system complexity.
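
A small sketch of finding such cycles, assuming the module dependency graph has already been extracted by some other tool (the modules and edges below are made up):

```python
# Assumed: module -> modules it imports, extracted by some other tool
deps = {
    "ui": ["core", "billing"],
    "billing": ["core"],
    "core": ["ui"],  # ui -> core -> ui is a cycle
}

def find_cycles(graph):
    cycles, path, visited = [], [], set()

    def visit(node):
        if node in path:  # back edge: a cycle closes here
            cycles.append(path[path.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        path.append(node)
        for neighbour in graph.get(node, []):
            visit(neighbour)
        path.pop()

    for node in graph:
        visit(node)
    return cycles

print("Cyclic dependencies:", find_cycles(deps))
```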

When trying out new metrics, it's important to start gathering data. Often your initial assumption is not confirmed, but during the process you'll figure out what exactly should be measured.

You should gather data, and then you need to find the information within.

Do metrics lead to bias? There are three aspects to the metrics (found here: http://www.slideshare.net/gilnahmias/agile-code-quality-metrics):

  • What we're trying to understand
  • What we actually measure
  • How it makes people behave

The effect on people can be totally different than intended.

The metrics have to be created by the team; if they are imposed by management, they will create aversion. An example of a team-created metric is the loser table, where you could write up teammates for any mistake they made (be it in work or personal life) and rate the severity of the mistake. Soon, team members started to write themselves up.

Re. behaviour created by metrics: you can't cheat 50 metrics - you might be able to cheat 1 or 2.

It's not a good idea to link metrics to compensation, but metrics can point out problems - then you need to work on them with whoever is affected. Metrics can help you define SMART goals. Company culture is key: do metrics assign the blame or simply provide information?

When should you act on problems indicated by quality metrics? Depends on the project: on a 1 year project, it doesn't make sense to spend 2-3 months refactoring.

When talking to customers / senior management, don't just say that refactoring is good and useful; spell out what specific benefits refactoring a particular component will bring them.

Can you predict how much implementation time you can save with refactoring? Not always, but sometimes it's possible. E.g. in a system with a distinct component per customer, the components contained duplicated code; since new customers were in the pipeline, it made business sense to refactor them.