Performance testing, automating

TPC-H

  • Improving an analytics platform
  • An unnamed database vendor: data, scale, and queries against the data (timing sketch below)
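
A minimal sketch of timing such queries, assuming a TPC-H-style lineitem table loaded into SQLite; the database file, scale, and query set are stand-ins, since the vendor and setup were not named in the session:

 import sqlite3
 import time
 
 # Hypothetical harness: a TPC-H-style schema loaded into a local SQLite
 # file. The path and query below are placeholders for illustration.
 conn = sqlite3.connect("tpch.db")
 
 QUERIES = {
     # A TPC-H Q1-style aggregation over the lineitem table.
     "q1": "SELECT l_returnflag, l_linestatus, COUNT(*), SUM(l_quantity) "
           "FROM lineitem GROUP BY l_returnflag, l_linestatus",
 }
 
 for name, sql in QUERIES.items():
     start = time.perf_counter()
     conn.execute(sql).fetchall()
     print(f"{name}: {time.perf_counter() - start:.3f}s")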

Write performance stories

  • A speculation about performance becomes a story
  • Some teams are unable to estimate a performance story
  • The PO should give performance acceptance criteria (see the sketch below)
  • Should each story allow x time to find performance issues?
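
One way to make such acceptance criteria testable is to state them as an automated check. A minimal sketch, with an invented 2-second p95 threshold and made-up sample data:

 import statistics
 
 # Hypothetical acceptance criterion a PO might attach to a story:
 # "95% of responses complete in under 2 seconds." Threshold and sample
 # values are invented for illustration; real times would come from a run.
 response_times_s = [0.8, 1.1, 0.9, 1.7, 2.4, 1.0, 1.2, 0.7]
 
 p95 = statistics.quantiles(response_times_s, n=100)[94]  # 95th percentile
 assert p95 < 2.0, f"p95 was {p95:.2f}s; acceptance criterion is < 2s"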

Make it work

  • Make it work well
  • Make it work fast

JMeter

  • How does a process scale with the number of users and with throughput? (see the run sketch below)
  • Finding the limits of the system: is a 20-second response, for example, too slow, or is that a subjective metric?
  • Definition of
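
A sketch of automating a headless JMeter run and summarizing the results; the -n/-t/-l flags are standard JMeter options, but the test plan and file names here are placeholders:

 import csv
 import statistics
 import subprocess
 
 # Run JMeter headless (assumes jmeter is on PATH):
 # -n = non-GUI, -t = test plan, -l = results file.
 # "plan.jmx" is a placeholder; the plan itself defines users and ramp-up.
 subprocess.run(
     ["jmeter", "-n", "-t", "plan.jmx", "-l", "results.jtl"], check=True
 )
 
 # JMeter's default CSV results include an "elapsed" column (milliseconds).
 with open("results.jtl") as f:
     elapsed_ms = [int(row["elapsed"]) for row in csv.DictReader(f)]
 
 print(f"samples: {len(elapsed_ms)}")
 print(f"median:  {statistics.median(elapsed_ms):.0f} ms")
 print(f"p95:     {statistics.quantiles(elapsed_ms, n=100)[94]:.0f} ms")
 print(f"max:     {max(elapsed_ms)} ms")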

Solution:

  • Set baselines for metrics
  • At each release, we want to find out whether we slowed down or sped up (comparison sketch below).
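
A minimal sketch of that baseline comparison, assuming each release's metrics are saved as JSON; the file names and the 10% tolerance are assumptions, not from the session:

 import json
 import sys
 
 # Compare this release's numbers against the stored baseline and fail the
 # build on a regression. Metrics here are latencies, so lower is better.
 TOLERANCE = 1.10  # tolerate up to a 10% slowdown
 
 with open("baseline.json") as f:  # e.g. {"p95_ms": 420, "median_ms": 180}
     baseline = json.load(f)
 with open("current.json") as f:
     current = json.load(f)
 
 regressed = False
 for metric, base in baseline.items():
     cur = current[metric]
     slower = cur > base * TOLERANCE
     print(f"{metric}: baseline={base} current={cur} "
           f"{'REGRESSED' if slower else 'ok'}")
     regressed = regressed or slower
 
 sys.exit(1 if regressed else 0)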

When do we evaluate performance?