Crap4J and other metric tools

In this session I was specifically interested in discussing metric tools that have a bias towards identifying bad code. Having a metric that "measures code health" is not the same as one that "finds code sickness", and I'm more interested (for the moment) in the latter.

To start the discussion I introduced CRAP4J, a free tool that tries to identify code that, if you had to inherit it, you'd probably declare crappy. The purpose of the CRAP score is to act like a cholesterol test: if your cholesterol is above 200 mg/dL you need to lower it; if a method's CRAP score is above 30 you need to either write more tests for it or refactor it. This strong bias toward action is what sets CRAP4J apart from other tools that can identify the same problems but require more effort on the part of the user to read the tea leaves.
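For context, the CRAP score as published by the metric's authors combines cyclomatic complexity and test coverage: CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m), where comp is the cyclomatic complexity of method m and cov is the percentage of its paths covered by tests. A quick sketch of that calculation (class and method names here are mine for illustration, not part of Crap4J):

 /**
  * Illustrative sketch of the published CRAP formula for a single method,
  * given its cyclomatic complexity and its test coverage percentage.
  */
 public class CrapScore {
 
     // CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
     public static double crap(int cyclomaticComplexity, double coveragePercent) {
         double uncovered = 1.0 - (coveragePercent / 100.0);
         return Math.pow(cyclomaticComplexity, 2) * Math.pow(uncovered, 3)
                 + cyclomaticComplexity;
     }
 
     public static void main(String[] args) {
         // Complexity 15 with no tests blows past the suggested threshold of 30...
         System.out.println(crap(15, 0));   // 240.0
         // ...while the same complexity with 80% coverage stays well under it.
         System.out.println(crap(15, 80));  // about 16.8
     }
 }

Either lowering the complexity or raising the coverage brings the score down, which is exactly the "write more tests or refactor" choice the threshold is meant to force.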

After the discussion of CRAP we moved on to Dependometer, which is being developed by ValTech. Their point was that circular dependencies are a serious code smell, a problem both for testing and for maintenance more generally. They use this tool to perform architecture reviews very quickly. Apparently an Eclipse plug-in version is being developed.
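As a contrived example of the smell they were targeting, here are two classes that each depend on the other, so neither can be compiled, tested, or reused in isolation (the class names are made up, not taken from Dependometer):

 // A minimal circular dependency: Order references Invoice, and
 // Invoice references back to Order.
 class Order {
     private Invoice invoice;          // Order depends on Invoice...
 
     Invoice createInvoice() {
         invoice = new Invoice(this);
         return invoice;
     }
 }
 
 class Invoice {
     private final Order order;        // ...and Invoice depends back on Order.
 
     Invoice(Order order) {
         this.order = order;
     }
 }

The usual fix is to break the cycle by introducing an interface or by moving the shared concept into its own class or package, which is the kind of refactoring an architecture review like this would point you toward.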

The next tool mentioned was Panopticode, which provides interesting visualizations for a whole set of metrics.

java2cdiff creates files that can be used by CodeCrawler, a language-independent metrics/reverse-engineering tool. In the post-talk discussion and web browsing we found that there is now an Eclipse plug-in, X-Ray, that provides similar views. This looks very cool!

JUCA is an interesting attempt to estimate coverage without actually running the tests. I don't quite understand why you wouldn't just run the tests with a coverage tool, but I'm sure there must be a reason. (?)

One good idea that came up was to use check-in velocity: enforce code rules and focus refactoring on the areas that have the highest velocity. StatSVN came up as a useful tool for measuring check-in velocity.
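A back-of-the-envelope version of that measurement, without StatSVN, is simply to count how often each path appears in the repository log. The rough sketch below reads plain "svn log -v" output from stdin; the class name and the 20-file cutoff are arbitrary choices of mine:

 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.util.HashMap;
 import java.util.Map;
 
 /**
  * Rough check-in velocity sketch (not StatSVN): count how often each file
  * appears in "svn log -v" output piped to stdin.
  * Usage: svn log -v | java CheckinVelocity
  */
 public class CheckinVelocity {
     public static void main(String[] args) throws Exception {
         Map<String, Integer> changes = new HashMap<>();
         BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
         String line;
         while ((line = in.readLine()) != null) {
             // Changed-path lines look like "   M /trunk/src/Foo.java"
             if (line.matches("^   [MADR] /.*")) {
                 String path = line.substring(5).trim();
                 changes.merge(path, 1, Integer::sum);
             }
         }
         // Most frequently touched files first: candidates for stricter
         // code rules and refactoring effort.
         changes.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(20)
                .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
     }
 }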

It didn't come up during the session, but the new Clover has a cloud of classes that identifies much the same thing as CRAP4J, the most complex and least tested code, as project risk. Size is complexity, color is coverage. I discuss it and other ways to visualize complexity in a project in this blog.

Available tools

  • Crap4J (Change Risk Analysis and Predictions software metric)
  • Podcast on CRAP4J with Alberto Savoia and Andy Glover