August 31, 2022
4 Key Reasons Service Virtualization is a Must for Agile Teams
Written by: Beverly Mindle
Service virtualization is not new. In fact, the concept and technology were established 20 years ago. At its core, service virtualization offers the ability to simulate behavior, data, and performance characteristics of applications and services. Through service virtualization, teams can ensure they have an on-demand environment to support their testing needs. Service virtualization requires minimal set-up and overhead, and has been successfully deployed by agile teams, performance testers, QA teams, and others in a wide range of organizations.
Despite the proven advantages of service virtualization, there are still those who adamantly defend testing in a live system. The question is: why does that conviction persist? Why trust that a live “Dev/QA/Pre-Prod” system is better for testing? Is the answer the presumed quality of the results?
I would argue that service virtualization is crucial to your testing practices if you’re looking to guarantee quality releases. In the following sections, I’ll outline four key reasons why service virtualization is a must.
Negative Testing
If you are conducting all your testing using a live system, how can you ensure you are testing for negative scenarios and error conditions? For example, let’s say I am working on a digital bank application that is dependent on four downstream services. I want to make sure the system I am building is robust. To do so, I can’t just test based on “happy path” scenarios, that is, assuming all dependencies are always performing optimally. If I do so, I am likely to miss a number of potential issues, including error conditions and defects.
Some things I might like to test include: What is the user experience if one of my dependencies is slow to respond? Does the UI simply hang, or does the user see a helpful message? What happens if I get an error code, or if data comes back out of order? What if I want to test specific data conditions?
In an environment that you do not control, you have no way to make production-like systems misbehave, for example by slowing down or returning incorrect data. That does not mean those systems won’t exhibit these behaviors at times, however. To ensure application quality, teams need to be prepared for these error scenarios.
With virtual services, you can control and test these scenarios by defining the request and the expected negative responses the application should return. By testing error conditions and verifying that they are handled properly, you can help ensure stability and validate that a product is ready for release.
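To make the idea concrete, here is a minimal sketch of a deliberately misbehaving stub written with Python’s standard library. It is a generic illustration, not Broadcom Service Virtualization itself, and the endpoints, payloads, and port are hypothetical stand-ins for the digital bank dependencies described above.

```python
# Minimal illustrative stub (not the Broadcom product): a tiny HTTP "virtual
# service" that deliberately misbehaves so the application under test can be
# exercised against negative scenarios.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical endpoints for the digital bank example above.
RESPONSES = {
    "/accounts/balance": (200, {"accountId": "12345", "balance": "NOT_A_NUMBER"}),  # bad data
    "/payments/submit": (503, {"error": "downstream service unavailable"}),          # error code
}
SLOW_PATH = "/statements/recent"  # responds, but only after a long delay


class MisbehavingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == SLOW_PATH:
            time.sleep(10)  # simulate a dependency that is slow to respond
            status, body = 200, {"statements": []}
        else:
            status, body = RESPONSES.get(self.path, (404, {"error": "unknown request"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the live dependency.
    HTTPServer(("localhost", 8080), MisbehavingService).serve_forever()
```

With a stand-in like this, the UI’s behavior when a dependency hangs, errors out, or returns malformed data can be verified on demand, without waiting for the live system to fail on its own.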
Service virtualization should not be limited to one type of testing function. (For more on this topic, please see Broadcom Software DevOps CTO, Shamim Ahmed’s post, Continuous Service Virtualization: Introduction and Best Practices.) There are many types of negative testing scenarios that can be used before the code is shipped to production. For example, you can use service virtualization to help with chaos testing by seeing what can happen to an environment if a dependent service is suddenly stopped or dropped.
Performance Constraints
Many organizations have clear SLAs and SLOs for the applications, microservices, and systems they design. However, when do they test that they are able to meet these SLAs and SLOs? Do application teams wait until the post-integration performance testing period to see if their services can meet high-load requirements? That delay not only lengthens the time it takes to deliver new releases and value, it also makes it harder for teams to track down which service is the culprit when slow performance or errors occur under high load.
The reason load tests are not typically done earlier in the development lifecycle is that it is impractically expensive to have a production-like environment available for agile teams to test with. Because virtual services are lightweight and can be deployed in performance mode, they are extremely valuable for simulating a production-like system in the performance tests teams need to run.
Additionally, you can use think time as a means to ensure the systems you simulate respond in a manner similar to production systems. Think time is the defined delay, typically in milliseconds, before the system will send the response. Think time can be a specific number or a range. This enables agile teams to do performance tests as part of their development lifecycle and gain confidence that their system will not be the bottleneck.
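As a rough illustration of the concept (not the product’s configuration syntax), a think-time helper might look like the sketch below; the fixed value and latency range are hypothetical examples.

```python
# Illustrative think-time helper (hypothetical, not the product's API):
# delay each stubbed response by a fixed value or by a value drawn from a
# range, so the simulated dependency responds like its production counterpart.
import random
import time


def apply_think_time(fixed_ms: int | None = None,
                     range_ms: tuple[int, int] | None = None) -> None:
    """Sleep for the configured think time before a response is sent."""
    if fixed_ms is not None:
        delay_ms = float(fixed_ms)
    elif range_ms is not None:
        delay_ms = random.uniform(*range_ms)  # e.g. observed production latency band
    else:
        return  # no think time configured; respond immediately
    time.sleep(delay_ms / 1000.0)


# Example: a virtual payment service measured at roughly 80-120 ms in production.
apply_think_time(range_ms=(80, 120))
```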
Furthermore, when it is time for big performance runs, the cost of the needed environment can be reduced. Expensive systems, such as mainframes, SAP, and third-party platforms, can also be simulated. If slow performance is observed, virtual services can be used to stand in for individual dependencies and help isolate which area of the integrated system is causing the delay.
Unavailable Environments
Agile teams are tasked with bringing value to customers by delivering quality offerings and moving both quickly and sustainably. But do they always have everything they need? How frequently are they forced to slow down because test environments are unavailable?
Environments can be unavailable for a variety of reasons: underlying systems may be too expensive to replicate, third-party systems may charge per transaction, constraints may limit concurrent testing, environments may be unstable, and so on.
Teams that are solely dependent on the live system are forced to wait. Not only is this costly but it will likely put pressure on teams to rush testing at the end, which will result in poorer quality. Teams that also leverage service virtualization can bring up the systems and services that they need. This enables them to remove environmental constraints and proceed with testing.
Parallel Development
Even if a service has not been created, if you have a contract for how it should behave, you can use the specification files or request/response pairs to stand up a virtual service. Rather than waiting for the live service, you can shift left by using virtual services to allow parallel development. UI and API teams can work at the same time, since the UI team can use the virtual service to stand in for the live system. For larger projects with multiple interdependent teams, virtual services likewise allow all of those teams to begin work at the same time.
Teams are able to build faster, and can even use the virtual services to help find defects earlier as they work on validating the contracts.
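As a simple illustration of the idea, the sketch below serves canned request/response pairs loaded from a hypothetical pairs.json file derived from the agreed contract. It is a generic Python stub offered under those assumptions, not the product’s workflow.

```python
# Illustrative sketch: stand up a stub of a not-yet-built service from agreed
# request/response pairs (hypothetical file name and format, derived from the
# API contract), so the UI team can start before the real API exists.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# pairs.json might look like:
# [{"method": "GET", "path": "/accounts/12345", "status": 200,
#   "body": {"accountId": "12345", "balance": 1042.17}}]
with open("pairs.json") as f:
    PAIRS = {(p["method"], p["path"]): (p["status"], p["body"]) for p in json.load(f)}


class ContractStub(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = PAIRS.get(("GET", self.path), (404, {"error": "not in contract"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    HTTPServer(("localhost", 9090), ContractStub).serve_forever()
```

If the real service later diverges from the contract, the mismatch surfaces as soon as the two teams integrate, which is exactly the kind of defect that is cheaper to find early.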
Summary
This is not meant to be a complete list of how Service Virtualization can help organizations provide value. For example, some organizations have saved 10 or more days per testing period by using Service Virtualization, often in combination with Test Data Manager. Service Virtualization helps you reduce the time needed to set up data across systems and lets you reuse that same data for future tests. Other organizations have used Service Virtualization to represent internal systems when training new representatives, so they can practice processes end to end without touching live systems or potentially impacting production data. And the list goes on.
If you already have Service Virtualization, think about how else it can help you balance a speedy release with quality. Am I leveraging all the areas where Service Virtualization is valuable? Have I integrated Service Virtualization into my continuous deployments and pipeline? Service Virtualization provides a number of integrations, including Jenkins plugins and APIs, to let you do just that.
While the benefits of using Service Virtualization are clear, I would argue that it is not only valuable but crucial to any organization’s testing strategy. Organizations that test only with live systems are taking on unnecessary risk: a production-like environment is expensive to stand up and maintain, there is no way to ensure you are covering all of your edge and negative scenarios, and there is no way to validate SLAs/SLOs from the earliest development work, all of which hurts time to market. If you have not already considered how Service Virtualization could impact your organization’s testing practices, it is time to start.
Beverly Mindle
Beverly is a Senior Product Manager working with Broadcom's Service Virtualization product. She is passionate about helping clients remove barriers, constraints and other obstacles that slow down the software development lifecycle or prevent robust application testing.