Pain of testing legacy batch software
Problem description
- You have legacy software that takes a long time to test:
- 2.5 months manually
- hours automatically
- Testing is done through the UI
- Preparing data for tests takes hours
- Tests are grouped so that preparing the environment takes less time. Downside: the tests within a group depend on each other, so a failure at the beginning of a group compromises the rest of that group's results
Brainstorming
- When writing the tests, focus on the core functionality first; then work from the inside out
- the need for tests: "do agile right"
- take small steps: alternate refactoring and writing tests
- manual smoke tests are not good enough
- Solution to a specific problem: test SQL stored procedures in SQL, a.k.a. keep your tests simple (see the first sketch after this list)
- Find independent test batches
- Test application changes too: first test the code without the migration script; then run the migration script and run the after-migration asserts (see the second sketch after this list). This way you can verify the correctness of:
- the original systems (you might need to roll back to it)
- the migration script (you'll need to run it on the production database)
- the state of the system after the migration
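A minimal sketch of the stored-procedure idea above, written in Python since the session named no harness; pyodbc, the calc_discount procedure, the orders table, and the DSN are all assumptions. The point is that the arrange/act/assert steps stay in plain SQL, close to the code under test, instead of going through the UI:

 import pyodbc  # assumes an ODBC driver and a DSN for the legacy database
 
 def test_calc_discount():
     conn = pyodbc.connect("DSN=legacy_test")
     cur = conn.cursor()
     # Arrange: seed the minimal data the procedure needs, in SQL.
     cur.execute("INSERT INTO orders (id, amount) VALUES (1, 100.0)")
     # Act: call the stored procedure directly, bypassing the UI.
     cur.execute("{CALL calc_discount(?)}", 1)
     # Assert: check the outcome with plain SQL.
     cur.execute("SELECT discount FROM orders WHERE id = 1")
     assert cur.fetchone()[0] == 10.0
     conn.rollback()  # leave the test database unchanged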
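And a hedged sketch of the migration-testing flow from the last bullet, using sqlite3 from the standard library as a stand-in for the real database; the script file names, the customers table, and the added email column are hypothetical:

 import sqlite3
 
 def run_script(conn, path):
     with open(path) as f:
         conn.executescript(f.read())
 
 def assert_pre_migration(conn):
     # Characterizes the original system (needed if you must roll back to it).
     assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] > 0
 
 def assert_post_migration(conn):
     # Characterizes the migrated state, e.g. a newly added column.
     cols = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
     assert "email" in cols
 
 def test_migration():
     conn = sqlite3.connect(":memory:")
     run_script(conn, "production_snapshot.sql")  # restore a production-like state
     assert_pre_migration(conn)         # 1. the original system is correct
     run_script(conn, "migration.sql")  # 2. the script destined for production
     assert_post_migration(conn)        # 3. the state after migration is correct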
Maintaining automated tests
- Maintaining automated tests is important - as maintaining code is important too. Do it by:
- keep them maintainable :)
- make them abstract: use acceptance test frameworks such as FitNesse, so you can define the tests in a human-readable format in one place and add the glue code in another
- Keyword-driven acceptance test frameworks:
- FitNesse http://fitnesse.org/
- Robot Framework http://code.google.com/p/robotframework/
- They define high-level keywords
- You can attach deeper descriptions (the implementation) to them
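As an illustration of the keyword/glue split, here is a minimal sketch of a Robot Framework keyword library written in Python; the OrderKeywords class and its keywords are hypothetical. Each public method becomes a keyword, so a human-readable .robot file can call "Create Order" and "Order Total Should Be" without touching the glue:

 class OrderKeywords:
     """Glue code: each public method becomes a keyword in the test files."""
 
     def __init__(self):
         self.orders = {}
 
     def create_order(self, order_id, amount):
         # In a .robot file: Create Order    42    100.0
         # (arguments arrive as strings, hence the conversions)
         self.orders[int(order_id)] = float(amount)
 
     def order_total_should_be(self, order_id, expected):
         # In a .robot file: Order Total Should Be    42    100.0
         actual = self.orders[int(order_id)]
         if actual != float(expected):
             raise AssertionError(f"expected {expected}, got {actual}")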
Too many broken legacy tests
- You cannot have too many tests
- When a test breaks, it is hard to tell apart:
- broken functionality
- changed functionality
- Canary tests: an old, 100% good branch together with its tests. If it fails ==> the environment is failing (see the first sketch at the end of this section)
- QA tends to write shallow tests: they cover every page but not every case
- Dev tends to write whitebox tests
- Delete a few tests which are ignored anyway
- Alternatively, you can mark them "skipped", since they might reveal important information to you later (see the second sketch at the end of this section)
- Alternatively, you can move them to a directory called "deleted"
- All these alternatives come back to the definition of legacy: something that you inherited and believe has value
- Maintaining tests is worthwhile, just as maintaining the codebase (i.e. refactoring) is
- The book "Working Effectively with Legacy Code" by Michael Feathers was referenced: http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
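A hedged sketch of the canary idea mentioned above, shrunk from a whole known-good branch down to a single file: these checks touch only the environment (name resolution and a scratch database here), so when they go red the environment is the suspect, not the code. The host name is hypothetical:

 import socket
 import sqlite3
 import unittest
 
 class CanaryTests(unittest.TestCase):
     """Known-good checks: a failure here blames the environment, not the code."""
 
     def test_database_host_resolves(self):
         socket.gethostbyname("db.test.example")  # hypothetical test-DB host
 
     def test_scratch_database_works(self):
         conn = sqlite3.connect(":memory:")
         conn.execute("CREATE TABLE ping (x INTEGER)")
         conn.execute("INSERT INTO ping VALUES (1)")
         self.assertEqual(conn.execute("SELECT x FROM ping").fetchone()[0], 1)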
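And a minimal sketch of the "mark them skipped" alternative, using unittest from the standard library; the test and its reason string are hypothetical. The reason string preserves the knowledge the test might still reveal later:

 import unittest
 
 class LegacyReportTests(unittest.TestCase):
     @unittest.skip("broken since the report rewrite; kept: documents the old CSV format")
     def test_csv_export_uses_semicolons(self):
         ...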