CI Feedback & Metrics
From CitconWiki
Revision as of 10:45, 24 August 2013
How do you measure?
On the product side, we can log when people are using features
- On a small scale, we can interact with (call) the customer
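Logging when people use features could be sketched roughly like this (all names here are illustrative, not from any real product; a real system would ship events to a log pipeline, not a list):

```python
# Sketch: product-side feature-usage logging (hypothetical names).
from collections import Counter
from datetime import datetime, timezone

usage_log = []  # stand-in for a real log pipeline

def log_feature_use(feature, user):
    """Record that `user` touched `feature`, with a timestamp."""
    usage_log.append({
        "feature": feature,
        "user": user,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def feature_usage_counts():
    """How often is each feature actually used?"""
    return Counter(entry["feature"] for entry in usage_log)

log_feature_use("export_pdf", "alice")
log_feature_use("export_pdf", "bob")
log_feature_use("dark_mode", "alice")
print(feature_usage_counts())  # Counter({'export_pdf': 2, 'dark_mode': 1})
```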
What percentage of builds fail?
Tradeoff of build failures vs. frequency of builds?
Continuous deployment: measuring $/unit of work. Can we measure customer-revenue outcomes from how we commit our code?
Defect rate, commit/build rate; what is the time-to-detect rate?
- Granular feedback may or may not have as much value, compared to hardware costs and time-to-detection feedback
- Any builds longer than 10 seconds are not okay
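The build metrics above (failure percentage, time to detect) could be computed from build records roughly like this; the record layout is an assumption, not any CI server's actual API:

```python
# Sketch: build-failure rate and time-to-detect from hypothetical records.
from datetime import datetime

builds = [
    # (commit time, build-finished time, passed?)
    ("2013-08-24T09:00", "2013-08-24T09:04", True),
    ("2013-08-24T09:30", "2013-08-24T09:33", False),
    ("2013-08-24T10:00", "2013-08-24T10:05", True),
    ("2013-08-24T10:20", "2013-08-24T10:26", False),
]

def failure_rate(builds):
    """Percentage of builds that fail."""
    failed = sum(1 for _, _, ok in builds if not ok)
    return 100.0 * failed / len(builds)

def mean_time_to_detect(builds):
    """Average minutes from commit to a red/green answer."""
    fmt = "%Y-%m-%dT%H:%M"
    deltas = [
        (datetime.strptime(done, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, done, _ in builds
    ]
    return sum(deltas) / len(deltas)

print(failure_rate(builds))         # 50.0
print(mean_time_to_detect(builds))  # 4.5
```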
Feedback of code
- Crap4j (the CRAP metric for Java)
- Cyclomatic complexity vs. code coverage
- Sonar
- Taking on technical debt in coding
  - Is it okay to take on debt?
  - Even if it is for meeting deadlines?
- A code review process makes taking on debt a deliberate process
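The cyclomatic complexity vs. code coverage feedback is exactly what Crap4j's CRAP score combines. A minimal sketch of the published formula, with coverage expressed as a fraction (0..1):

```python
# Sketch: CRAP score per method.
#   CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
# High complexity with low coverage makes the score explode.

def crap_score(complexity, coverage):
    """CRAP score for one method: complexity penalized by missing coverage."""
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

print(crap_score(5, 1.0))   # fully covered: 5.0 (just the complexity)
print(crap_score(5, 0.0))   # uncovered: 30.0
print(crap_score(20, 0.0))  # complex and untested: 420.0
```

A fully covered method is "only as crappy" as its raw complexity; an untested complex method dominates the report.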
- @JTF: positive correlation between speed and quality
  - Certain teams that ship features faster also ship at higher quality
  - Backed by data spanning several decades
- Different people work differently; team members don't always approach finishing tasks in a way that produces quality
  - The mentality needs to be such that there is team ownership of lines of code and potential bugs
  - The perception of what is faster may not be the reality of what is faster
    - We might write bad code without refactoring and improving, and think we're going faster, but are we?
    - Comparison: using hotkeys vs. how much time is actually spent moving the mouse?
  - (discussion about measuring the time spent writing tests compared to the time saved by tests)
- Do we need more time to write quality code?
  - Perhaps we need to invest more time with our colleagues, to teach Test-Driven Development
  - Do we always write tests first? Well, we can be happy that people are testing at all
    - Metric: # of assertions should always go up over time
      - Lines of code? Sometimes lines of code in fact go down (which is very good)
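The assertions-over-time metric could be tracked by counting `assert` statements in the test suite at each commit; a sketch over hypothetical snippets, parsing the AST rather than grepping text:

```python
# Sketch: count assert statements in a snapshot of a (Python) test suite.
import ast

def count_assertions(source):
    """Number of `assert` statements in a Python source string."""
    tree = ast.parse(source)
    return sum(isinstance(node, ast.Assert) for node in ast.walk(tree))

snapshot_then = "def test_add():\n    assert add(1, 2) == 3\n"
snapshot_now = (
    "def test_add():\n    assert add(1, 2) == 3\n"
    "def test_add_negative():\n    assert add(-1, 1) == 0\n"
    "    assert add(-2, -2) == -4\n"
)
print(count_assertions(snapshot_then))  # 1
print(count_assertions(snapshot_now))   # 3
```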
- Measure # of commits per day
  - Every commit should also contain an assertion
  - Maybe we could do that per 15 minutes
    - Every 15 minutes, a timer goes off. After that time, we have a discussion: should we commit? If not, should we revert? If not, make sure it's ready after another 15 minutes.
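The 15-minute cadence could be sketched as a timer loop; the `is_ready`/`commit`/`revert` hooks are placeholders to be wired to real VCS commands, not part of any tool discussed above:

```python
# Sketch: commit-or-revert on a fixed timebox.
import time

def commit_cadence(is_ready, commit, revert, rounds=4, interval_s=15 * 60):
    """Every interval, either commit ready work or revert it."""
    for _ in range(rounds):
        time.sleep(interval_s)  # wait out the timebox
        if is_ready():
            commit()
        else:
            revert()  # or keep going and insist it's ready next round

# Demo with no waiting and alternating readiness:
actions = []
commit_cadence(
    is_ready=lambda: len(actions) % 2 == 0,
    commit=lambda: actions.append("commit"),
    revert=lambda: actions.append("revert"),
    rounds=4,
    interval_s=0,
)
print(actions)  # ['commit', 'revert', 'commit', 'revert']
```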