CI Feedback & Metrics



How do you measure?

On the product side, we can log when people are using features (see the sketch below)

  • on a small scale, we can interact with (call) the customer
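
A minimal sketch of such feature-usage logging; the event schema, function name, and file sink are illustrative assumptions, not any particular product's API:

  import json
  import time

  def log_feature_use(feature: str, user_id: str,
                      path: str = "feature-usage.jsonl") -> None:
      """Append one usage event per line (schema is an assumption);
      aggregate the file later to see which features are exercised."""
      event = {"ts": time.time(), "feature": feature, "user": user_id}
      with open(path, "a") as f:
          f.write(json.dumps(event) + "\n")

  # e.g. log_feature_use("export-pdf", "user-42")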


What percentage of builds fail? There is a tradeoff between tolerating build failures and the frequency of builds.


Continuous deployment: measuring $/unit of work. Can we measure customer-revenue outcomes from how we commit our code?

Defect rate, commit/build rate: what is our time-to-detect rate? (See the sketch after the list below.)

  • Granular feedback may or may not be worth as much, compared to its hardware costs and to time-to-detection feedback
    • Any build longer than 10 seconds is not okay
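
A minimal sketch of the build metrics above (failure percentage and time to detect), assuming a simple build-record structure rather than a specific CI tool's API:

  from dataclasses import dataclass
  from datetime import datetime, timedelta

  @dataclass
  class Build:
      """One CI build record; the fields are illustrative assumptions."""
      started: datetime    # commit picked up / build began
      finished: datetime   # result known
      passed: bool

  def failure_rate(builds: list[Build]) -> float:
      """Percentage of builds that failed."""
      if not builds:
          return 0.0
      return 100.0 * sum(1 for b in builds if not b.passed) / len(builds)

  def mean_time_to_detect(builds: list[Build]) -> timedelta:
      """Average time from build start to a result -- the feedback loop
      the 10-second remark above is worried about."""
      total = sum((b.finished - b.started for b in builds), timedelta(0))
      return total / len(builds) if builds else timedelta(0)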

Feedback on code

  • Crap4J
    • Combines cyclomatic complexity and code coverage (see the sketch after this list)
  • Sonar
  • Taking on technical debt in coding
    • Is it okay to take on debt?
    • Even if it is for meeting deadlines?
  • A code review process makes feedback part of the process
  • @JTF: there is a positive correlation between speed and quality
    • Certain teams that put out features faster also put them out at higher quality.
    • Backed by data spanning several decades
  • Different people work differently; members of a team don't always approach finishing tasks in a way that produces quality.
    • The mentality needs to be such that there is team ownership of lines of code and of potential bugs.
    • The perception of what is faster may not be the reality of what is faster
      • We might write bad code without refactoring and improving it, and think we're going faster, but are we?
      • Comparison: using hotkeys vs. how much time is actually spent moving the mouse
    • (discussion about measuring time of writing tests compared to time saved with tests)
  • Do we need more time to write quality code?
    • Perhaps we need to invest more time with our colleagues to teach Test-Driven Development.
    • Do we always write tests first? Well, we can be happy that people are testing at all.
      • Metric: the number of assertions should always go up over time.
        • Lines of code? Sometimes lines of code actually go down (which is very good).
  • Measure # of commits per day
    • Every commit should also contain an assertion
    • Maybe we could do that every 15 minutes
      • Every 15 minutes, a timer goes off and we discuss: should we commit? If not, should we revert? If neither, make sure it's ready after another 15 minutes.
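
On the Crap4J item above: its published CRAP score combines exactly these two numbers, cyclomatic complexity and coverage. A minimal sketch of the formula:

  def crap_score(complexity: int, coverage_pct: float) -> float:
      """CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m).
      High complexity with low coverage explodes the score;
      full coverage reduces it to the bare complexity."""
      return complexity ** 2 * (1 - coverage_pct / 100) ** 3 + complexity

  print(crap_score(15, 0))    # 240.0 -- complex and untested
  print(crap_score(15, 100))  # 15.0  -- complex but fully covered
  print(crap_score(3, 50))    # 4.125 -- simple code barely registers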

See also: Lean Software Development

What metrics?

  • Static analysis warnings
  • Compiler warnings
    • e.g. CodeNarc
    • EDIT by macetw: The tool I was trying to think of is Coverity (www.coverity.com), which monitors NEW warnings as distinct from existing warnings. Coverity is a non-free product.
    • Fail the build if there are new warnings, or any warnings at all (see the sketch after this list)
    • e.g. copy/paste detection
  • Organizational dysfunction: when team members are not pulling their weight on quality
    • How do we give visibility to management or to the team?
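
A minimal sketch of the "fail only on new warnings" gate described above. The baseline file and the warning-identifier format are assumptions; Coverity and similar tools do this with far more sophistication:

  import json
  import sys

  def load_warnings(path: str) -> set[str]:
      """Warnings stored as a JSON list of stable identifier strings,
      e.g. "file.c:checker:hash" -- the format is an assumption."""
      with open(path) as f:
          return set(json.load(f))

  def main() -> int:
      baseline = load_warnings("warnings-baseline.json")
      current = load_warnings("warnings-current.json")
      new = current - baseline
      for w in sorted(new):
          print(f"NEW WARNING: {w}")
      # Non-zero exit fails the build only when warnings were introduced.
      return 1 if new else 0

  if __name__ == "__main__":
      sys.exit(main())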

Tool recommendation

  • To monitor static analysis

What are the metrics for risk?

  • Metrics for risk are consistent within a project, but not across projects
    • e.g. an acceptable cyclomatic complexity may be high for one project and unacceptable for another

See also:

  • @JTF: ??

Associate defects across releases

  • fingerprint defects to releases (see the sketch below)
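
One lightweight way to fingerprint a defect to releases: ask git which release tags contain the commit that introduced (or fixed) it. A minimal sketch, assuming tags named release-*:

  import subprocess

  def releases_containing(commit: str) -> list[str]:
      """Release tags whose history includes `commit`, i.e. the
      releases in which that defect is present."""
      out = subprocess.run(
          ["git", "tag", "--contains", commit, "--list", "release-*"],
          capture_output=True, text=True, check=True,
      )
      return out.stdout.split()

  # e.g. releases_containing("abc1234") -> ["release-1.2", "release-1.3"]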

The Principles of Product Development Flow (Reinertsen)

"every time you run an assertion, you have a chance to learn something"

  • Metrics should ask questions, not give answers
  • Individuals should want it - it's not really for managers
  • Developers should discuss the results and make plans accordingly

Tool idea:

  • Developer Karma plugin for Jenkins
  • Tool to identify "most failing" tests (see the sketch below)

50% of flickering tests identify real code defects.
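
A minimal sketch of both tool ideas above: rank tests by failure count and flag flickering (alternating pass/fail) ones. The pass/fail history format is an assumption:

  from collections import Counter

  def rank_failing(history: dict[str, list[bool]]) -> list[tuple[str, int]]:
      """history maps test name -> chronological results (True = passed).
      Returns tests sorted by how often they failed."""
      failures = Counter({name: runs.count(False)
                          for name, runs in history.items()})
      return failures.most_common()

  def is_flickering(runs: list[bool]) -> bool:
      """Both passes and fails in recent runs -- per the note above,
      roughly half of these point at real code defects."""
      return len(set(runs)) > 1

  history = {
      "test_login":    [True, False, True, False],  # flickering
      "test_checkout": [False, False, False],       # consistently failing
      "test_search":   [True, True, True],          # healthy
  }
  print(rank_failing(history))
  # [('test_checkout', 3), ('test_login', 2), ('test_search', 0)]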

Participants

  • Scribe: @macetw
  • @Jtf
  • Emil
  • @EricMinick
  • Others (volunteers)