== Service Virtualisation ==

Current tools on the market:

IBM Green Hat, CA LISA

Definition of Service Virtualisation: a system needs to be developed or tested against many other (external/internal) interfaces.

Allows for integration testing by providing artificially intelligent stubs which need to be trained.

We use simulations/virtualisations to create stubs for those services.

These do not run on the actual system and are very fast to set up.
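
To make the idea concrete, here is a minimal sketch of such a stub (illustrative only; the endpoint and payloads are hypothetical, not from the session):

<syntaxhighlight lang="python">
# Minimal sketch of a service stub (hypothetical endpoint and payloads):
# it impersonates a dependent service so integration tests can run
# without the real system being available.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses "trained" from observed traffic (here: hardcoded).
CANNED = {
    "/accounts/42": {"accountId": 42, "balance": 100.0, "currency": "GBP"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown"}).encode())

if __name__ == "__main__":
    # Tests point their service URL at localhost:8080 instead of the real host.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
</syntaxhighlight>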


'''''CASE STUDY'''''

Royal Bank of Scotland have successfully implemented SV in CI.

'''BUSINESS NEEDS'''

* Consolidate 4 messaging hubs, deliver 35 major business change projects within 3 years.
* Enables payment/confirmation messages, clearing services, supporting regulatory services.
* Needs to cater for a new currency if Greece is kicked out of the Euro.

'''TECHNICAL ENV'''

* Waterfall
* Contractors/vendors doing different things
* Manual testing (3-week regression)
* Complex environments in and out (a lot of external 3rd parties)
* Regression tests expensive against actual 3rd-party services
* Manual deployment


'''PROBLEMS'''

* Not meeting business needs
* External systems not available (functionality not developed)
* Expensive testing
* Expensive deployment

'''STRATEGIES'''

* Delay integration until services are available: takes too long.
* Write your own stubs: the maintenance involved in keeping them up to date is slow and expensive.
* Evaluated SV tools.

Selected IBM Green Hat because of complex message support.

The Green Hat tool is essentially a way of setting up stubs for complex messages.

SOAP/XML/JMS/SWIFT/MQ/etc

Green Hat is a simulator/intelligent service.
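
As a rough illustration of what a message-level stub does (plain Python, not Green Hat's actual API; the message shapes are invented): match fields in an incoming message and return a canned reply.

<syntaxhighlight lang="python">
# Illustrative sketch of a message-matching stub (not Green Hat's API):
# parse an incoming XML payment message, match on its fields, and
# return a canned confirmation. Message shapes are hypothetical.
import xml.etree.ElementTree as ET

# "Training" data: field values to match and the reply to send back.
RULES = [
    ({"currency": "GBP"}, "<confirmation><status>ACCEPTED</status></confirmation>"),
    ({"currency": "XXX"}, "<confirmation><status>REJECTED</status></confirmation>"),
]

def respond(message_xml: str) -> str:
    root = ET.fromstring(message_xml)
    fields = {child.tag: child.text for child in root}
    for match, reply in RULES:
        if all(fields.get(k) == v for k, v in match.items()):
            return reply
    return "<confirmation><status>UNKNOWN</status></confirmation>"

print(respond("<payment><currency>GBP</currency><amount>10</amount></payment>"))
</syntaxhighlight>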

This allows them to use a cyclical build/test environment pipeline to production, using Service Virtualisation (SV) for the first two environments, DEV and SIT. Actual services are used for UAT and PROD.
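
One hedged sketch of how that per-environment switch might be wired (hostnames are made up; the session did not describe RBS's actual configuration):

<syntaxhighlight lang="python">
# Sketch of per-environment endpoint wiring (hypothetical hosts):
# DEV and SIT point at virtualised services, UAT and PROD at real ones.
ENDPOINTS = {
    "DEV":  {"payments": "http://sv-host:8080/payments"},       # virtual
    "SIT":  {"payments": "http://sv-host:8080/payments"},       # virtual
    "UAT":  {"payments": "https://uat.payments.example.com"},   # real
    "PROD": {"payments": "https://payments.example.com"},       # real
}

def payments_url(env: str) -> str:
    return ENDPOINTS[env]["payments"]

assert payments_url("DEV").startswith("http://sv-host")
</syntaxhighlight>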

How do you confirm that your virtualisation covers all the tests it needs to? And does it behave the same as the actual service?

The suggestion is that this is done in collaboration with the 3rd party. You have to complete explicit audit tests and baselining, i.e. run the same tests against the virtual service and the actual service and compare the results.
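
A rough sketch of that baselining step, assuming both services expose the same HTTP interface (the URLs and audit cases below are invented):

<syntaxhighlight lang="python">
# Sketch of audit/baselining (assumed URLs, hypothetical test cases):
# send the same requests to the real service and the virtual one,
# then compare responses to prove the stub still matches reality.
import urllib.request

REAL = "https://payments.example.com"      # assumed real-service base URL
VIRTUAL = "http://localhost:8080"          # assumed stub base URL
CASES = ["/accounts/42", "/accounts/99"]   # hypothetical audit cases

def fetch(base: str, path: str) -> str:
    with urllib.request.urlopen(base + path) as resp:
        return resp.read().decode()

for path in CASES:
    real, virt = fetch(REAL, path), fetch(VIRTUAL, path)
    status = "OK" if real == virt else "DIVERGED"
    print(f"{status} {path}")
</syntaxhighlight>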

It's all just speculation and collaboration if the actual service doesn't exist.

The level of risk against each service in your pipeline lets you determine how much effort to put into virtualisation.

The suggestion is that if you have faith in the virtual service then you can skip UAT.

For RBS this has significantly sped up their testing, because they don't necessarily have to wait for the UAT environment.

Automated testing reduced regression testing from 3 weeks to 4 hours.

[Some discussion over the differences between automatic deploy and potential automatic deploy.]

[Then some discussion over 4 hour regression test not being viable in dev and how to handle it, i.e. use targeted testing for the part of the app that you're working on]

[Now some discussion about how to version the virtualisations]

The suggestion is that you create different virtualisations for different versions, unless the version is inherent in the message that you are sending, i.e. the version is part of the message.
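
A small sketch of the second case, assuming the message itself carries a version field (the message shape is hypothetical): a single stub can dispatch to version-specific canned behaviour.

<syntaxhighlight lang="python">
# Sketch: one stub handling multiple interface versions, assuming the
# message itself carries a version field (hypothetical message shape).
import json

# Version-specific canned replies.
REPLIES = {
    "1.0": {"status": "OK"},
    "2.0": {"status": "OK", "traceId": "stub-0001"},  # v2 added a field
}

def respond(message: str) -> str:
    msg = json.loads(message)
    version = msg.get("version", "1.0")   # dispatch on the embedded version
    return json.dumps(REPLIES.get(version, {"status": "UNSUPPORTED_VERSION"}))

print(respond('{"version": "2.0", "amount": 10}'))
</syntaxhighlight>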

[Now some discussion over version control and rolling back versions; this is a problem for the team to manage and is down to their workflow.]

Royal Bank of Scotland (RBS) are using automatic configuration as well as automatic deployment.

'''RESULTS'''

* Deliver change more quickly and frequently
* Incidents and defect costs reduce over time
* UAT/Pre-prod minimised
* Increase in test efficiency (coverage / time taken)

== Testing Coverage ==

Can be determined through impact analysis on changes (comparing the schemas of the services) and through requirements/functional-spec traceability with tests.
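
An illustrative sketch of that schema comparison (a hand-rolled field diff, not any specific tool; the schemas are made up): diff two versions of a service schema to see which fields changed and therefore which tests to re-run.

<syntaxhighlight lang="python">
# Sketch of impact analysis via schema comparison (hypothetical schemas):
# diff the fields of two service schema versions to flag what changed
# and therefore which tests need re-running.
OLD = {"accountId": "int", "balance": "decimal", "currency": "string"}
NEW = {"accountId": "int", "balance": "decimal", "currency": "string",
       "iban": "string"}  # a new field appeared

added = NEW.keys() - OLD.keys()
removed = OLD.keys() - NEW.keys()
changed = {k for k in OLD.keys() & NEW.keys() if OLD[k] != NEW[k]}

for label, fields in (("added", added), ("removed", removed), ("changed", changed)):
    if fields:
        print(f"{label}: {sorted(fields)}")
</syntaxhighlight>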

Also, over time, confidence will be gained as defects are not found in production.