'''Anti-Automated Test Patterns'''
* '''Ice cream cone'''
The inverse of the test pyramid: lots of manual testing at the top, and not a lot of unit tests at the bottom. The fix is to break testing down by risk.
* '''Happy Path'''
Tests cover the basic function of the system but don't really test anything serious, giving a false sense of being complete. You don't have the code coverage you need; do more comprehensive testing, as in the sketch below.
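A minimal JUnit sketch of the pattern, using a hypothetical Calculator.divide() method: the happy-path test passes but proves very little, and the cases below it are the comprehensive testing the notes say is missing.
<pre>
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // Happy path only: passes, but exercises almost nothing.
    @Test
    void dividesTwoPositiveNumbers() {
        assertEquals(2, Calculator.divide(10, 5));
    }

    // The kind of comprehensive cases the happy path skips:
    @Test
    void divisionByZeroIsRejected() {
        assertThrows(ArithmeticException.class, () -> Calculator.divide(10, 0));
    }

    @Test
    void handlesNegativeOperands() {
        assertEquals(-2, Calculator.divide(10, -5));
    }
}
</pre>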
* '''Local Hero'''
A test written so that it will always pass in your own environment, which explodes in the real world. Learn from the regressions it lets through, get input from customer service, and check whether you're reading the business requirements properly.
Perhaps deploy to a staging environment? The test may have lost its customer focus; bring customer service into the testing.
It's a beautiful thing locally, but you're not testing the system the same way users are using it; they may be doing more complicated tasks than you're testing for. A typical shape of the problem is sketched below.
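A sketch of how a local hero usually sneaks in, with a hypothetical service URL and environment variable: the hardcoded endpoint only resolves on the author's machine, while the environment-aware version fails loudly anywhere it isn't configured.
<pre>
class TestConfig {
    // Local hero: a hardcoded URL that only works on the author's machine.
    // static final String ENDPOINT = "http://localhost:8080/api/orders";

    // Environment-aware alternative: read the target from the environment
    // and fail loudly rather than silently testing against a guessed host.
    static String endpoint() {
        String url = System.getenv("ORDERS_API_URL"); // hypothetical variable
        if (url == null) {
            throw new IllegalStateException("ORDERS_API_URL is not set");
        }
        return url;
    }
}
</pre>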
* '''2nd Class Citizen'''
Test code gets treated as second-class: lots of duplicate code, so the suite bloats and becomes a maintenance headache, with the same thing written over and over. You may need to refactor your automated tests and reuse code you've already written.
If it's high-risk, refactor it; if it's used often, extract and reuse it (see the helper sketch below). Evaluate before you start refactoring, and don't try to pay off all of your tech debt at once.
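A sketch of the reuse the notes suggest, with hypothetical names (Session, shop URL): login boilerplate that was copied into every test moves into one helper, so a change to the login flow is a one-line fix instead of fifty.
<pre>
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class CheckoutTest {
    // Shared helper: the duplicated login boilerplate lives in one place.
    private Session loginAsCustomer() {                  // Session is hypothetical
        Session s = Session.open("https://shop.example.com");
        s.login("test-user", "test-pass");
        return s;
    }

    @Test
    void customerCanCheckOut() {
        Session s = loginAsCustomer();
        s.addToCart("sku-123");
        assertTrue(s.checkout().succeeded());
    }
}
</pre>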
* '''Chain Gang'''
Setup, task, teardown: since setup and teardown can be a pain, two tests get merged into one setup/teardown, creating dependencies within a chain of tests. It can be acceptable at times, if the tests are passing and you're comfortable they're valid. The advice is to evaluate the risk in the gang, and split it up if there are problems. Avoid it, but use it when you genuinely need it; the sketch below shows how the dependency creeps in.
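A minimal sketch of the chain, assuming JUnit 5 and a hypothetical Account class: ordered tests sharing mutable state save one expensive setup, but the second test silently depends on the first, which is exactly the dependency the notes warn about.
<pre>
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.*;

// Chain gang: one shared setup, but test order becomes load-bearing.
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class AccountChainTest {
    static Account account;          // shared mutable state across the chain

    @BeforeAll
    static void expensiveSetup() {   // done once for the whole gang
        account = Account.create("alice");   // hypothetical domain class
    }

    @Test @Order(1)
    void deposit() {
        account.deposit(100);
        assertEquals(100, account.balance());
    }

    @Test @Order(2)                  // only valid if deposit() ran first
    void withdraw() {
        account.withdraw(40);
        assertEquals(60, account.balance());
    }
}
</pre>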
* '''The Mockery'''
Over-mocking everything: need something? Mock it. You're not using the real world or the real servers you'll be working with, so mocked REST and SOAP calls pass and then fail against live data. Use the real stuff where you can.
The opposite of the mockery is the local hero.
For example, with a Maven repo: grab the top layer and it works, but a customer pulls from inside, the code breaks, and you weren't testing it. The legitimate purpose of mock objects is to avoid the inspector. The sketch below shows a test that ends up verifying only its own mocks.
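A Mockito-flavored sketch of over-mocking, with hypothetical PriceClient and OrderService types: when every collaborator's answer is scripted, the assertion can only echo the stub back, and the real REST call is never exercised, so it can still fail on live data.
<pre>
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceTest {
    @Test
    void theMockery() {
        // Everything the code touches is mocked...
        PriceClient client = mock(PriceClient.class);      // hypothetical REST client
        when(client.fetchPrice("sku-123")).thenReturn(9.99);

        OrderService service = new OrderService(client);   // hypothetical

        // ...so this assertion only echoes the stub back.
        // The real endpoint, and its live data, is never touched.
        assertEquals(9.99, service.priceFor("sku-123"));
    }
}
</pre>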
* '''The Inspector'''
The test knows everything about the system: it is tightly coupled to the object under test, so if anything changes, it breaks. Models get joined together, and removing one breaks the test.
This kind of white-box testing breaks the moment you touch the code; perhaps reserve it for edge cases.
Break it up, and make it less dependent and more decoupled.
* '''The Golddigger'''
Greedy tests that want everything in terms of resources: they need 50 things set up, and it takes a lot of time (say, two hours of setup). Can you really break it up? One option is to pre-set things up while doing a deployment, deploying exactly what the gold digger needs.
At what point in the process do you write that test? At the end, or at the beginning?
* '''Anal probe / Contract Violator'''
Tests that reach into the internals, overriding OO fundamentals like private and public. Heavily white-box, getting into the innards of everything. Are you testing in a realistic way? You're playing with the innards of the code, and if things change, you're screwed: you're violating the object.
Would exploratory or ad hoc testing be enough for this? You might be blinded to the real point of the test.
If something inside needs testing, that's a design problem: whatever needs to be checked should be exposed properly. The code needs rework. The sketch below shows the violation and the design fix.
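A sketch of the violation using plain Java reflection, with a hypothetical Cache class and field name: the test pries open a private field, so a simple rename breaks it even though behavior is unchanged.
<pre>
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.lang.reflect.Field;
import org.junit.jupiter.api.Test;

class CacheProbeTest {
    @Test
    void analProbe() throws Exception {
        Cache cache = new Cache();               // hypothetical class under test
        cache.put("k", "v");

        // Violating the object: reach past 'private' into the innards.
        Field f = Cache.class.getDeclaredField("entryCount");
        f.setAccessible(true);
        assertEquals(1, f.getInt(cache));
        // Renaming 'entryCount' now breaks this test with no behavior change.
        // Better design: expose a cache.size() and assert on that instead.
    }
}
</pre>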
* '''Test with no name'''
There's a bug, and the test gets a nonsensical name like "Test CR2386." The name tells you nothing. The solution is to use better names and do it right; a before/after is sketched below. This is more of a bad practice than an anti-pattern.
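The fix is cheap, as a sketch with a hypothetical Coupon class: the same check, but the second name documents the behavior, so a red test report reads as a sentence instead of a ticket number.
<pre>
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.api.Test;

class CouponTest {
    // Bad: just a ticket number; a failure report says nothing.
    @Test
    void testCR2386() {
        assertFalse(Coupon.parse("EXPIRED-2012").isValid());  // hypothetical API
    }

    // Good: the name itself documents the behavior under test.
    @Test
    void expiredCouponIsRejected() {
        assertFalse(Coupon.parse("EXPIRED-2012").isValid());
    }
}
</pre>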
* '''The Slow Poke'''
Takes a long time to run. It could potentially run in parallel, or be broken up. Set up your own environment and make the suite aware of the slow test: maybe don't put it in CI/CD at all, and only run it on a release candidate instead of the daily build (one tagging approach is sketched below). It's most likely an integration test, but it can appear at any level: database dependencies, network latencies. You can't always run the integration test; could you mock something?
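One way to keep the slow poke out of the daily build, assuming JUnit 5 tags (the test name and tag value are hypothetical): mark it, exclude the tag in the CI configuration, and include it only for release-candidate runs.
<pre>
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class NightlyMigrationTest {
    // Tagged so the daily CI build can exclude it
    // (e.g. Maven Surefire: <excludedGroups>slow</excludedGroups>)
    // while the release-candidate pipeline includes it.
    @Tag("slow")
    @Test
    void fullDatabaseMigrationRoundTrip() {
        // ...stands in for the real database/network-bound work...
    }
}
</pre>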
* '''The Giant / God Complex Test / Boss Hog'''
A big, all-consuming test with way too much code; it may be part of a chain gang. It's very complex.
* '''Wait & See'''
Using sleep: a love-hate relationship. Press a button, then sleep. You're not checking the validity of the system, you're setting up a race condition, and it will cause flickering. The solution is to not use sleep but to wait on an explicit condition, as sketched below. If you have sleeps everywhere, make sure you don't have interrupts.
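A Selenium sketch of the fix, assuming Selenium 4's WebDriverWait and hypothetical element ids: poll for the condition you actually care about, bounded by a timeout, instead of sleeping and hoping the race goes your way.
<pre>
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class WaitAndSee {
    static void checkResult(WebDriver driver) {
        driver.findElement(By.id("submit")).click();

        // Wait & See: Thread.sleep(5000) -- a race you will eventually lose.

        // Instead, wait for the real condition, bounded so that a
        // genuine failure still fails fast.
        new WebDriverWait(driver, Duration.ofSeconds(10))
            .until(ExpectedConditions.visibilityOfElementLocated(By.id("result")));
    }
}
</pre>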
* '''China Vase'''
The test code is fragile; Selenium in particular gets called too fragile. It's the biggest complaint in the industry: everyone keeps saying the tests are too long or too fragile. How do we deal with it? Break the tests down into more stable pieces (a page-object sketch follows); other anti-patterns may be contributing too. The danger is caring more about keeping the china vase passing than about what a failure would tell you.
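"Break it down into more stable pieces" is often done with a page object, sketched here with hypothetical locators: tests talk to one class per page, so a markup change breaks one locator in one place instead of shattering every test.
<pre>
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object: the only place that knows this page's fragile locators.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) { this.driver = driver; }

    void loginAs(String user, String pass) {
        driver.findElement(By.id("username")).sendKeys(user);   // hypothetical locators
        driver.findElement(By.id("password")).sendKeys(pass);
        driver.findElement(By.cssSelector("button[type=submit]")).click();
    }
}
// Tests call new LoginPage(driver).loginAs(...); when the markup changes,
// you repair one locator here instead of every test that logs in.
</pre>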
* '''Flickering Lights'''
The test flickers between passing and failing. Maybe it wasn't written correctly: too much mockery or too much gold-digging. One example: two load balancers, one directing to a working server and one to a broken one. It can also be (and usually is) an environmental issue. It's demoralizing, and testers get used to living with red lights, psychologically pushing the button until it passes, which is a bad habit: "If I hit restart three times and it still fails, then I'll investigate it." [laugh] If it doesn't pass the first time, investigate it.
* '''The Pig'''
Tests that don't clean up after themselves, which can lead to flickering lights and to tests that depend on each other's leftovers. A teardown sketch follows.
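A minimal cleanup sketch, assuming JUnit 5 and a hypothetical OrderApi: teardown runs after every test, pass or fail, so the next test never inherits the pig's leftovers.
<pre>
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

class OrderCleanupTest {
    private String createdOrderId;

    @Test
    void createsAnOrder() {
        createdOrderId = OrderApi.create("sku-123");   // hypothetical API
        // ...assertions on the new order...
    }

    // Runs after every test, even a failing one, so no records leak
    // into the next test and cause flickering lights.
    @AfterEach
    void cleanUp() {
        if (createdOrderId != null) {
            OrderApi.delete(createdOrderId);
        }
    }
}
</pre>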
* '''Edge Play'''
Playing on the edges too much wastes test cycles on things the user doesn't do. If an edge case is high-risk, you might run it only once. Reduce it from the main test suites or take it out.
* '''Customer Don't Do that'''
Testing things that the customer doesn't actually do.
* '''Fear the automator'''
The fear among manual testers that automation will eliminate their jobs. It can lead to deliberate sabotage, and to celebrating when an automated test fails. It's a management issue, and it will lose you morale and testing cycles.
* '''The Metrics Lie'''
Management will want to know how many test cycles automation is saving. Wanting ROI metrics leads to sacrifices: opening tickets on trivial tasks just to push the metrics higher, "find lots of bugs now!" to justify the effort, or even "if you find a bug, cover it up."
* '''Test doesn't test anything'''
Happens a lot in unit testing (see the sketch below). Who's responsible for that, developer or tester? Whoever wrote it or is maintaining it. Someone is responsible for each piece; if a test doesn't do what it's supposed to do, someone needs to take responsibility.
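The classic shape of the pattern, as a sketch with a hypothetical TaxCalculator: the first test runs code but asserts nothing, so it stays green no matter what the code does; the second one actually tests.
<pre>
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class TaxTest {
    // Green forever: exercises the code but asserts nothing.
    @Test
    void testTax() {
        TaxCalculator.taxOn(100.0);        // hypothetical; result is discarded
    }

    // Actually a test: fails if the behavior changes.
    @Test
    void tenPercentTaxOnOneHundred() {
        assertEquals(10.0, TaxCalculator.taxOn(100.0));
    }
}
</pre>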
* '''Who owns this?'''
No one owns it, and it gets ignored. Transparency and communication are the solution: project team leads report to each other and have to fight it out. Otherwise you get denial of responsibility, where everyone has to prove it's not theirs. A manager should know, but sometimes there's no management structure.
* '''How are these related?'''
Boss Hog and Slow Poke are connected.
Inspector and Gold Digger are connected; arguably they're the same thing.
China Vase and Flickering Lights are connected.
If you're seeing Flickering Lights, the root cause could be the Pig.
2nd Class Citizen and Flickering Lights are related.
Chain Gang can lead to Boss Hog.
Mockery is connected to Flickering Lights.
Local Hero goes with Flickering Lights: works fine in staging, but not in production.
Ice Cream Cone is independent. "Break it down by risk" is the answer to almost anything.
* '''Bad practices'''
Test with no name, Wait & See, and 2nd Class Citizen are more bad practices than true anti-patterns.
"Notes by Kent Bye"