Performance Testing in-the-small

  • Start measuring from day 1; the measurements do not have to be specific at first
  • Try to start with simple requirements; drifting toward something irrelevant is easy
  • Run performance tests, keep an eye on it
  • Each story gets a perf test with simple goals, e.g. every single response time must be under 1s (see the sketch below)
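A minimal sketch of what such a story-level check could look like; the endpoint URL, request count, and the 1s budget are placeholders, not anything prescribed in the session:

```python
import time
import urllib.request

# Hypothetical story-level perf check: every single response must come
# back in under 1 second. URL and request count are placeholders.
URL = "http://localhost:8080/search?q=example"
MAX_SECONDS = 1.0

def test_story_response_time():
    for _ in range(20):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        assert elapsed < MAX_SECONDS, f"{elapsed:.3f}s exceeds the {MAX_SECONDS}s budget"

if __name__ == "__main__":
    test_story_response_time()
    print("all responses under budget")
```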
  • Daily perf test, automated:
    • EC2 infrastructure
    • Run CI (Cruise), build pipelines, deployment pipelines
    • Comparing measurements between runs? Use a baseline
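One way the baseline comparison might look, sketched under the assumption that each run dumps its timings to a JSON file of test name -> seconds; the file names and the 15% tolerance are made up for illustration:

```python
import json

TOLERANCE = 0.15  # flag regressions more than 15% over baseline (arbitrary)

def compare_to_baseline(baseline_path, current_path):
    """Return the measurements that regressed beyond the tolerance."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(current_path) as f:
        current = json.load(f)
    regressions = []
    for name, base_time in baseline.items():
        now = current.get(name)
        if now is not None and now > base_time * (1 + TOLERANCE):
            regressions.append((name, base_time, now))
    return regressions

if __name__ == "__main__":
    for name, base, now in compare_to_baseline("baseline.json", "nightly.json"):
        print(f"REGRESSION {name}: {base:.3f}s -> {now:.3f}s")
```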
  • Maintaining test data? Use generated data that mirrors production
  • Find the baseline?
  • Testing the whole stack? Run the profiler and take a snapshot ASAP, because analyzing profiles takes ages; profile early, collect the charts, then go back into the analysis
  • Get metrics back from production, looking for the present bottleneck
  • Get metrics from BI tools, monitoring
  • Different levels:
    • Component
    • System-level: establish an SLA
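A system-level SLA gate can be as small as a percentile check over measured latencies; the 95th percentile and the 2s threshold below are assumed values, not from the session:

```python
def percentile(samples, pct):
    """Nearest-rank percentile; good enough for a coarse SLA gate."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(pct / 100.0 * len(ordered)) - 1))
    return ordered[k]

def meets_sla(latencies, sla_seconds=2.0, pct=95):
    """True if the pct-th percentile latency is within the SLA."""
    return percentile(latencies, pct) <= sla_seconds

if __name__ == "__main__":
    measured = [0.4, 0.5, 0.6, 0.7, 1.9, 0.5, 0.8, 2.4, 0.6, 0.5]
    print("SLA met" if meets_sla(measured) else "SLA violated")
```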
  • Performance measurements on a daily basis; easier with processing-intensive applications
  • Web => caching issues
  • Define what you want to look at? Robustness, reactivity, brute force
  • Issue: Perf test & CI
  • Getting top-down from system to component
  • Unit tests do not make sense at the perf level??
  • Acceptable response time => measure at the component level (see the sketch below)
  • System level (does not) aggregate from component level
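Component-level measurement can be as light as a timing decorator that records samples per component, so each component's budget can be checked without running the whole stack; the `parser` component below is a stand-in:

```python
import time
from collections import defaultdict
from functools import wraps

timings = defaultdict(list)  # component name -> elapsed seconds per call

def timed(component):
    """Record how long each call to a component takes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[component].append(time.perf_counter() - start)
        return wrapper
    return decorator

@timed("parser")
def parse(doc):
    time.sleep(0.01)  # stand-in for real component work
    return doc.split()

if __name__ == "__main__":
    parse("a b c")
    for name, samples in timings.items():
        print(f"{name}: worst={max(samples):.3f}s over {len(samples)} calls")
```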
  • Create a CI job to verify that a perf requirement is still met
  • Run perf tests on a system as close to production as possible
  • Tests take long time to run, long time to analyze, short time to fix
  • The more of the system you test, the more time it takes: compromise between time to run and accuracy
    • Create subsections to test specific aspects
    • Test other portions
    • Problems with interactions
  • Be careful when testing in isolation; it can hide things
  • Instrument the application in its environment
    • Ping message that goes through the system to collect metrics (see the sketch below)
    • Pinging pattern
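A rough sketch of the ping idea: a marker message flows through each stage of a (here simulated) pipeline and collects a per-stage timing trail; the stage names and sleeps are invented for illustration:

```python
import time

class PingMessage:
    """Marker message that collects a timing stamp at every stage it passes."""
    def __init__(self):
        self.trail = []  # list of (stage_name, seconds_spent)

def stage(name, work):
    """Wrap one processing stage so it stamps the ping flowing through it."""
    def run(msg):
        start = time.perf_counter()
        work()
        msg.trail.append((name, time.perf_counter() - start))
        return msg
    return run

if __name__ == "__main__":
    pipeline = [
        stage("ingest", lambda: time.sleep(0.01)),
        stage("transform", lambda: time.sleep(0.02)),
        stage("store", lambda: time.sleep(0.005)),
    ]
    ping = PingMessage()
    for step in pipeline:
        ping = step(ping)
    for name, secs in ping.trail:
        print(f"{name}: {secs * 1000:.1f} ms")
```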
  • Try to predict?
  • Use CI: detect problems, write a test, fix the problem & put the test in CI
  • Cannot use tests as a performance indicator, because they might not be relevant
  • Relying on structural information for measuring "response time"
  • Developers are not careful about logging
  • Missing user stories! Stakeholders are performance monitoring guys
  • Perf tester is a stakeholder as important as a customer
  • Tests can take a long time to run
    • tests are independent, so they can run in parallel (see the sketch below)
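Since the tests are independent, one way to cut the wall-clock time is to run them concurrently; the test names and durations below are fake stand-ins for real long-running perf tests:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def perf_test(name, duration):
    """Stand-in for a long-running, independent perf test."""
    time.sleep(duration)
    return name, duration

if __name__ == "__main__":
    tests = [("search", 0.3), ("checkout", 0.5), ("report", 0.4)]
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: perf_test(*t), tests))
    wall = time.perf_counter() - start
    for name, duration in results:
        print(f"{name}: took {duration:.1f}s")
    print(f"wall clock: {wall:.1f}s vs {sum(d for _, d in tests):.1f}s serial")
```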