Using Machine Learning To Solve Test Passed Site Down Problem


Attendees:
PJ
Markus
Brett
Jeeva
Magnus Lassi
Erik
Dan
Girish
Magnus Stahre
Joe Bishop
Paul Duvall

Problem statement: Something that worked at pre-prod time fails in production, even though pre-prod is meant to be the same as production. Cause: Data in production starts changing.

Question: Does the problem statement assume you're using Continuous Delivery (CD)? PJ: No, just that it was a certified artifact deployed to production.

Possible solutions:
- Use AI to dynamically evolve pre-prod verification

Real-world example (PJ): We had an interface contract with a 3rd party saying the user IDs we received would be numeric. In production we unexpectedly started receiving user IDs that were alphanumeric.
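A minimal sketch of that mismatch (field names and payload shapes are illustrative assumptions, not from the session):

 # Consumer code written against the agreed contract: user IDs are numeric.
 def handle_user(record: dict) -> int:
     return int(record["user_id"])  # raises ValueError on an alphanumeric ID
 
 # Pre-prod test data, all numeric -- passes:
 handle_user({"user_id": "125799"})
 
 # What production eventually sent -- fails at runtime:
 try:
     handle_user({"user_id": "A7B42"})
 except ValueError as e:
     print(f"contract violated: {e}")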

Another real-world example: We had agreed on a JSON structure where the IDs sent would be 6 digits, but in production they sent 8 digits. [ {PJ}, {DAWN} ] [ {id: "125799846"} ]

We had also agreed that the user names would be US alphanumerics, but we received some names with umlauts, which broke the system: [ {PJ}, {DAWN}, {Hänk} ]
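This is the kind of assumption property-based testing can probe before production does. A minimal sketch in the QuickCheck style, using Python's Hypothesis library as a stand-in; normalize_name is a hypothetical function representing the system code:

 from hypothesis import given, strategies as st
 
 def normalize_name(name: str) -> str:
     # Buggy system code: assumes ASCII-only names, blows up on "Hänk".
     return name.encode("ascii").decode("ascii").upper()
 
 @given(st.text(min_size=1))
 def test_normalize_never_crashes(name):
     # Property: any name is handled without an unhandled exception.
     # The generator produces non-ASCII names too, so this test fails
     # on the umlaut case long before production sends one.
     normalize_name(name)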

In a world where the data is constantly evolving, machine learning would be beneficial. It could drive a dynamic test suite that keeps the test data updated.

If you rely on existing data, you restrict yourself. You don’t allow the system to evolve.

If each new message lets the system learn, no one on the team needs to know about the change.

- Totally doable with existing data
- To evolve, you need a feedback loop

Could use a batch process to look at data in production and feed it to test systems. We know a request failed because we logged it somewhere.

Why haven't we evolved from automated tests to an evolving feedback loop? Why don't we automatically update the test suite when a failure occurs due to unexpected data in production? The gap is that the automated test suite isn't updated automatically. It seems reasonable to use AI / machine learning to close this gap.
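A minimal sketch of that feedback loop, assuming failed production requests are logged one JSON payload per line; the file paths and log format are hypothetical:

 import json
 from pathlib import Path
 
 FAILED_LOG = Path("/var/log/app/failed_requests.jsonl")  # hypothetical
 CORPUS = Path("tests/regression_corpus.jsonl")           # hypothetical
 
 def update_corpus() -> int:
     """Append new failing production inputs to a corpus the test suite replays."""
     if not FAILED_LOG.exists():
         return 0
     CORPUS.parent.mkdir(parents=True, exist_ok=True)
     seen = set(CORPUS.read_text().splitlines()) if CORPUS.exists() else set()
     added = 0
     with CORPUS.open("a") as out:
         for line in FAILED_LOG.read_text().splitlines():
             # Canonicalize so the same payload is only recorded once.
             payload = json.dumps(json.loads(line), sort_keys=True)
             if payload not in seen:
                 out.write(payload + "\n")
                 seen.add(payload)
                 added += 1
     return added
 
 print(f"added {update_corpus()} new failing inputs to the regression corpus")

Run as a batch job, this closes the loop: every unexpected production payload becomes a permanent test case.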

Reasons we don't do this today:
- Hard to know what to change
- Lack of knowledge about this possibility
- Lack of trust that it will do what we want it to do
- If we can understand the change, then we can test better
- Risk of lack of trust in the test suite: false negatives / false positives; who's testing the tester?

No matter what requests the system receives, it should handle them gracefully. Returning an error is acceptable; crashing the system is not.
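A minimal sketch of that property, with a hypothetical handle_request: every input must come back as a response, even a 400, never as an unhandled exception:

 import json
 
 def handle_request(raw: bytes) -> dict:
     try:
         body = json.loads(raw)
         return {"status": 200, "id": int(body["id"])}
     except (ValueError, KeyError, TypeError) as e:
         return {"status": 400, "error": str(e)}  # an error response is acceptable
 
 # Crashing is not acceptable -- all of these must come back as responses:
 for raw in [b'{"id": "125799"}', b'{"id": "A7B42"}', b'not json', b'']:
     assert isinstance(handle_request(raw), dict)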

Take every input for the last hour -> expected output -> non-system-crash; JSON -> dynamic -> non-system-crash.

If we can understand the change, we can do forward looking testing before the future data gets there.

In some systems the answer isn’t known until runtime. A good way to unit test these cases is to assert a given range.
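A minimal sketch, with a hypothetical nondeterministic function: the test pins down the bounds the answer must fall in rather than an exact value:

 import random
 
 def estimate_latency_ms() -> float:
     return 20.0 + random.random() * 5.0  # value only known at runtime
 
 def test_latency_within_expected_range():
     value = estimate_latency_ms()
     assert 20.0 <= value < 25.0  # assert a range, not an exact answer
 
 test_latency_within_expected_range()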

Mutation testing: modify the production code; your unit tests should fail. We're testing our tests.
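A hand-rolled sketch of the idea (tools like Jester, or mutmut for Python, automate the mutation step): flip an operator in the production code and check that some test notices. The function names are illustrative:

 def price_with_discount(price: float) -> float:
     return price * 0.9          # production code
 
 def price_with_discount_mutant(price: float) -> float:
     return price / 0.9          # mutant: '*' flipped to '/'
 
 def test_discount():
     assert price_with_discount(100.0) == 90.0
 
 test_discount()  # passes against the real code
 # The suite "kills" the mutant because the same assertion fails against it:
 assert price_with_discount_mutant(100.0) != 90.0

If no test fails when the mutant runs, the suite was never really testing that line.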

A big potential challenge is getting the organization to accept production data flowing down automatically to test systems.

Does anyone feel using AI is a waste of time for this? Consensus is no, though it is challenging to implement, and there is a potential political challenge in convincing peers and management.

It could be as easy as: take a feed of data + known post-conditions + feed it to an AI model.
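A minimal sketch of that pipeline, assuming payloads that already satisfied the post-conditions are available as training data; the feature choice and the use of scikit-learn's IsolationForest are illustrative assumptions:

 from sklearn.ensemble import IsolationForest
 
 def featurize(payload: dict) -> list:
     # Simple hand-picked features of the ID field (hypothetical schema).
     uid = str(payload.get("id", ""))
     return [len(uid), int(uid.isdigit()), int(uid.isascii())]
 
 # Feed of data known to have passed the post-conditions:
 good = [{"id": "125799"}, {"id": "348210"}, {"id": "990021"}]
 model = IsolationForest(random_state=0).fit([featurize(p) for p in good])
 
 # predict() returns -1 for outliers; an 8-char alphanumeric ID looks
 # unlike anything the verified production data contained:
 print(model.predict([featurize({"id": "A7B42X9Q"})]))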

The hypothesis is that it shouldn't be a large task compared with doing this work manually with people.

No one gave a reason NOT to try applying machine learning to this problem.

Tools:
- Datomic: http://cognitect.com/datomic
- QuickCheck (property testing tool): https://en.wikipedia.org/wiki/QuickCheck
- Amazon Machine Learning (they have industry algorithms, you supply data)
- Spock
- Jester (mutation testing tool): http://programmers.stackexchange.com/questions/189939/is-there-a-modern-replacement-for-a-mutation-testing-tool-like-jester-for-java
- F#