Do you use your tests in prod

  • Ward Cunningham combines Eclipse Foundation testing and documentation in a way that's almost like reusing tests for monitoring.
  • Colin is using Cucumber tests for load testing and cucumber-nagios. A Cucumber annotation (tag) lets you push a test into cucumber-nagios so it runs as a monitoring check (see the first sketch after this list).
    • One of his clients tests through production against WorldPay using a test credit card.
    • Hibri's company uses a real credit card and refunds the payment.
  • Patrick wrote special Cucumber tests to validate things in production.
  • JTF floats the idea that good testing in prod is important. Several people agreed that canary releases are the way to find out that your code REALLY works in prod, not just in the abstract.
  • Claude says that functional tests suck at this; performance tests are much better.
  • 3 out of the 30 or 40 people in the room are using functional tests in prod.
  • 2 of those people are in ops; the third is working on getting involved with the ops team.
  • 6-8 out of the same group actually run perf tests.
  • JTF and Colin suggest that issues with NFRs (non-functional requirements) are the cause of the gap.
  • JTF suggests that people will work with the tools available to solve a problem before they reach out to others.
  • Colin explains that they are having trouble making a third-party testing system play nicely with their own system, and wants APIs to reduce duplication.
  • I think we need a Sauce Labs for perf testing. We should ask Jason Huggins while he's there.
  • Adrian says that developers must write the performance tests, and run them continuously.
  • Claude says that perf tests can cause issues while being run when they are blended with live traffic.
  • Colin's issue is duplication in separate pipeline phases.
  • Adrian works in finance, where they have performance tests; Colin works for website owners who want a certain amount of capacity, across many different clients.
  • Peter uses in-memory infrastructure to improve developer feedback before using the real thing - rerunning the same tests with different configs (see the second sketch after this list).
  • Colin applies more realistic tests after the 5-minute rule.
  • The pattern of swappable drivers is used, à la WebDriver.
  • Gradual releases (first enable a feature for 1% of users, then 2%, etc.) are reactive performance testing, and in some situations you can't afford it (too few clients, or too big a business impact if you upset them); see the rollout sketch after this list.
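
A minimal sketch of the idea behind cucumber-nagios mentioned in the bullets above (in Python, rather than the Ruby/Gherkin that cucumber-nagios actually uses): the same assertion a functional test makes against production is wrapped as a Nagios-style plugin that reports its result through the standard exit codes. The URL, page content and thresholds are placeholders, not details from the session.

 #!/usr/bin/env python3
 """Nagios-style check that reuses a functional assertion against production.
 Exit codes follow the Nagios plugin convention: 0 = OK, 1 = WARNING, 2 = CRITICAL."""
 import sys
 import time
 import urllib.request

 URL = "https://example.com/login"   # placeholder, not from the notes
 WARN_SECONDS = 2.0
 CRIT_SECONDS = 5.0

 def check_login_page(url):
     start = time.monotonic()
     try:
         with urllib.request.urlopen(url, timeout=CRIT_SECONDS) as resp:
             status = resp.status
             body = resp.read()
     except Exception as exc:
         return 2, "CRITICAL - %s" % exc
     elapsed = time.monotonic() - start
     # The same assertions a functional test would make.
     if status != 200 or b"Log in" not in body:
         return 2, "CRITICAL - unexpected response (%s)" % status
     if elapsed > CRIT_SECONDS:
         return 2, "CRITICAL - login page took %.2fs" % elapsed
     if elapsed > WARN_SECONDS:
         return 1, "WARNING - login page took %.2fs" % elapsed
     return 0, "OK - login page in %.2fs" % elapsed

 if __name__ == "__main__":
     code, message = check_login_page(URL)
     print(message)
     sys.exit(code)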
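
A hedged sketch of Peter's point about rerunning the same tests with different configs, and of the swappable-driver pattern: the test is written against an interface, and configuration decides whether the fast in-memory driver or the real one is plugged in. All class and variable names here are invented for illustration; they are not from the session.

 import os
 from abc import ABC, abstractmethod

 class UserStore(ABC):
     """Interface the tests are written against."""
     @abstractmethod
     def save(self, user_id, name): ...
     @abstractmethod
     def load(self, user_id): ...

 class InMemoryUserStore(UserStore):
     """Fast in-memory driver for quick developer feedback."""
     def __init__(self):
         self._data = {}
     def save(self, user_id, name):
         self._data[user_id] = name
     def load(self, user_id):
         return self._data[user_id]

 class DatabaseUserStore(UserStore):
     """Real driver; the same test reruns against it in a later pipeline phase."""
     def __init__(self, dsn):
         self.dsn = dsn  # hypothetical connection string
     def save(self, user_id, name):
         raise NotImplementedError("talk to the real database here")
     def load(self, user_id):
         raise NotImplementedError("talk to the real database here")

 def make_store():
     """Config decides which driver the (unchanged) test runs against."""
     if os.environ.get("TEST_ENV") == "real":
         return DatabaseUserStore(os.environ["DB_DSN"])
     return InMemoryUserStore()

 def test_roundtrip():
     store = make_store()
     store.save("42", "Ada")
     assert store.load("42") == "Ada"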
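
A small sketch of the gradual-release idea from the last bullet: hash the user id into a bucket and only serve the new code path to the configured percentage, widening it release by release while watching monitoring. The hashing scheme and names are assumptions for illustration, not something described in the session.

 import hashlib

 def in_rollout(user_id, feature, percent):
     """Deterministically place a user in the first `percent` of 10,000 buckets.
     Hashing (rather than random choice) keeps a given user consistently in or
     out of the rollout across requests."""
     digest = hashlib.sha256(("%s:%s" % (feature, user_id)).encode()).hexdigest()
     bucket = int(digest[:8], 16) % 10000   # 0..9999
     return bucket < percent * 100          # e.g. percent=1 -> buckets 0..99

 # Start at 1% of users, then 2%, 5%, ... if the monitoring stays healthy.
 if in_rollout("user-123", "new-checkout", percent=1):
     pass  # serve the new implementation
 else:
     pass  # serve the existing implementation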


Big discussion on the page object pattern: http://code.google.com/p/selenium/wiki/PageObjects
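
A short Python/Selenium sketch of the page object pattern being discussed (the URL, field names and locators are invented for illustration): tests call intent-level methods on page objects instead of scattering locators through every test.

 from selenium import webdriver
 from selenium.webdriver.common.by import By

 class LoginPage:
     """Page object: the test talks to intent-level methods, not raw locators."""
     def __init__(self, driver):
         self.driver = driver

     def open(self, base_url):
         self.driver.get(base_url + "/login")
         return self

     def log_in(self, username, password):
         self.driver.find_element(By.NAME, "username").send_keys(username)
         self.driver.find_element(By.NAME, "password").send_keys(password)
         self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
         return HomePage(self.driver)

 class HomePage:
     def __init__(self, driver):
         self.driver = driver

     def greeting(self):
         return self.driver.find_element(By.ID, "greeting").text

 # Usage in a test:
 #   home = LoginPage(webdriver.Firefox()).open("https://example.com").log_in("alice", "secret")
 #   assert "alice" in home.greeting()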

Firebug HAR (HTTP Archive) performance info can be parsed.
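
A minimal sketch of parsing that exported performance data, assuming HAR (the JSON HTTP Archive format that Firebug's NetExport add-on writes). The file name is a placeholder.

 import json

 # Placeholder path; Firebug's NetExport add-on can save a page load as a .har file.
 with open("pageload.har", encoding="utf-8") as f:
     har = json.load(f)

 # Each entry in a HAR log describes one request, with per-phase timings.
 for entry in har["log"]["entries"]:
     url = entry["request"]["url"]
     total_ms = entry["time"]
     wait_ms = entry["timings"].get("wait", -1)   # server think time
     print("%8.1f ms total  %6d ms wait  %s" % (total_ms, wait_ms, url))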

Squirrel points out that render time is different for different browsers.

Tools discussed:
- PageSpeed
- Tsung - uses Erlang, but has record and replay
- blitz.io - gives 1000 concurrent connections
- AppDynamics
- Dynatrace
- New Relic


Expect monitoring and testing to become more integrated in the future

Not mentioned in the discussion, but the copy of the Software Testing Club's (news)paper, The Testing Planet, given away at the conference has a nice introductory article on the topic: http://www.thetestingplanet.com/2011/11/the-future-of-software-testing-part-one-testing-in-production/