CI Feedback & Metrics

How do you measure?

On the product side, we can log when people are using features, e.g. via a simple usage event log (sketched below)

  • On a small scale, we can interact with (call) the customer directly
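
A minimal sketch of the usage-event logging mentioned above; the function name, event fields, and log path here are illustrative assumptions, not something discussed in the session:

    import json
    import time

    def log_feature_use(feature, user_id, log_path="feature_usage.log"):
        # Append one usage event per line; counting events per feature later
        # tells us which features people are actually using.
        event = {"feature": feature, "user": user_id, "ts": time.time()}
        with open(log_path, "a") as f:
            f.write(json.dumps(event) + "\n")

    # Example: record that user u123 opened the export dialog
    log_feature_use("export_dialog", user_id="u123")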


What percentage of builds fail? There is a tradeoff between the build failure rate and the frequency of builds.
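
A minimal sketch of computing that percentage from build records; the record shape is an assumption, since each CI server exposes build status through its own API:

    def build_failure_rate(builds):
        # builds: list of build records, each with a 'status' field
        failed = sum(1 for b in builds if b["status"] == "failed")
        return 100.0 * failed / len(builds) if builds else 0.0

    # Example: 2 failures out of 5 builds -> 40.0
    statuses = ["passed", "failed", "passed", "failed", "passed"]
    print(build_failure_rate([{"status": s} for s in statuses]))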


With continuous deployment, can we measure cost per unit of work ($/unit), i.e. trace customer-revenue outcomes back to how we are committing our code?

Other candidate metrics: defect rate, commit/build rate, and time to detect a failure (a sketch of time-to-detect follows the bullets below).

  • Granular feedback may or may not carry as much value as hardware costs and time-to-detection feedback
    • Any build longer than 10 seconds is not okay
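
A sketch of measuring time-to-detect; pairing each offending commit with the first failing build that exposed it is an assumption about how the data would be collected:

    from datetime import datetime, timedelta

    def mean_time_to_detect(pairs):
        # pairs: (commit_time, first_failing_build_time) per detected defect
        deltas = [fail - commit for commit, fail in pairs]
        return sum(deltas, timedelta()) / len(deltas)

    pairs = [(datetime(2013, 8, 24, 10, 0), datetime(2013, 8, 24, 10, 12)),
             (datetime(2013, 8, 24, 11, 0), datetime(2013, 8, 24, 11, 4))]
    print(mean_time_to_detect(pairs))  # 0:08:00 -> 8 minutes on average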

Feedback on code

  • Crap4J
    • Cyclomatic complexity vs. code coverage (formula sketched below)
  • Sonar
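
For reference, Crap4J's published CRAP score combines exactly those two inputs: CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m), where comp is a method's cyclomatic complexity and cov its test-coverage percentage. A minimal sketch:

    def crap_score(complexity, coverage_pct):
        # High complexity with low coverage makes the score explode;
        # full coverage collapses it to the complexity itself.
        return complexity ** 2 * (1 - coverage_pct / 100.0) ** 3 + complexity

    print(crap_score(10, 0))    # 110.0 -> complex and untested: risky
    print(crap_score(10, 100))  # 10.0  -> complex but fully covered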

Taking on debt in coding

  • Is it okay to take on technical debt?
    • Even if it is to meet a deadline?
  • Instituting code review creates a repeatable process
  • @JTF: positive correlation between speed and quality
    • Certain teams that put out features faster also put them out at higher quality
    • Based on data spanning several decades
  • Different people work differently; members of a team don't always approach finishing tasks in a way that produces quality