CI Feedback & Metrics

From CitconWiki
Revision as of 10:21, 24 August 2013


==How do you measure?==

On the product side, we can log when people are actually using features.

* On a small scale, you can interact with (call) the customer directly.
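One lightweight way to capture feature usage (a sketch; the event fields and file name here are hypothetical, not from the session) is to append one event per feature interaction and aggregate the counts later:

```python
import json
import time

def log_feature_use(feature_name, user_id, log_file="feature_usage.jsonl"):
    """Append one usage event per line so counts can be aggregated later."""
    event = {"feature": feature_name, "user": user_id, "ts": time.time()}
    with open(log_file, "a") as f:
        f.write(json.dumps(event) + "\n")

def usage_counts(log_file="feature_usage.jsonl"):
    """Count how often each feature appears in the event log."""
    counts = {}
    with open(log_file) as f:
        for line in f:
            event = json.loads(line)
            counts[event["feature"]] = counts.get(event["feature"], 0) + 1
    return counts
```

In practice this is what an analytics pipeline does at scale; the point is that the raw signal is just "feature X was used at time T by user U".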


What percentage of builds fail? There is a tradeoff between the build failure rate and the frequency of builds.
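The failure percentage itself is trivial to compute once build outcomes are recorded; a minimal sketch over a list of pass/fail results:

```python
def failure_rate(builds):
    """Fraction of builds that failed.

    builds is a list of booleans: True = passed, False = failed.
    """
    if not builds:
        return 0.0
    failures = sum(1 for passed in builds if not passed)
    return failures / len(builds)
```

The tradeoff mentioned above is that committing (and therefore building) more often tends to raise the raw count of failures while shrinking the cost of each one.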


Continuous deployment, measuring $/unit of work, can we measure customer-revenue outcomes from how we are committing our code?

Defect rate, commit/build rate, and time to detect: how long does it take to detect a failure?

* Granular feedback may not add much value compared to hardware costs and time-to-detection feedback
** Any build longer than 10 seconds is not okay
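Time to detect can be measured as the lag between a commit and the build that first flags it. A sketch, assuming you can pair each defective commit's timestamp with the timestamp of the failing build that caught it (these inputs are assumptions, not something the session specified):

```python
from datetime import datetime, timedelta

def time_to_detect(commit_time, failing_build_time):
    """Lag between the commit that introduced a defect and the build that caught it."""
    return failing_build_time - commit_time

def mean_time_to_detect(pairs):
    """Average detection lag over (commit_time, failing_build_time) pairs."""
    lags = [time_to_detect(c, b) for c, b in pairs]
    return sum(lags, timedelta()) / len(lags)
```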

==Feedback on code==

* Crap4j ("CRAP": Change Risk Anti-Patterns, for Java)
** Combines cyclomatic complexity with code coverage
* Sonar
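The CRAP metric (as reported by Crap4j) combines exactly those two numbers; a sketch of the commonly cited formula, where high complexity is forgiven only by high test coverage:

```python
def crap_score(complexity, coverage_pct):
    """CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

    complexity:   cyclomatic complexity of the method
    coverage_pct: percent of the method's basis paths covered by tests (0-100)
    """
    uncovered = 1.0 - coverage_pct / 100.0
    return complexity ** 2 * uncovered ** 3 + complexity
```

Fully covered code scores just its complexity; completely untested code scores complexity squared plus complexity, which is why complex untested methods dominate the report.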