Introduction
To transform their application delivery, many teams are employing model-based testing (MBT). Through this increasingly popular approach, teams are realizing a number of significant benefits, including improved test coverage, reduced testing effort, and higher quality.
In this blog, I’ll examine the use of MBT, with a particular focus on the objectives we often hear from DevOps teams. For example, many teams we meet with are struggling because testing continues to lag behind development, which results in delayed feedback loops that slow releases. These teams are eager to find out how MBT can help with their in-sprint testing.
Other teams ask me how they can best integrate MBT into their agile development and DevOps approaches, so it doesn’t just benefit testers, but all stakeholders, including product owners, developers, architects, software development engineers in test (SDETs), and release engineers.
Finally, other teams we work with need help optimizing their MBT implementations so they can derive maximum benefit from these approaches. Often, we see improper implementations undermining these gains. For example, one customer I spoke to was developing test models—after application development was completed.
Common Challenges in Doing In-Sprint Testing
As teams look to complete testing in-sprint, they confront a number of obstacles. Following are some of the most common issues we hear about:
- Developers do not perform adequate unit and component tests, or do not practice test-driven development (TDD).
- Test assets are not ready when coding is completed.
- Test design and specification takes too much time.
- Most post-development tests are manual and take too long to execute.
- Automated tests cannot be updated fast enough to keep pace with development.
- Test environments take significant time to set up and configure.
- Test data takes significant time to define and provision.
- Continuous integration/continuous delivery (CI/CD) processes aren’t adequately automated to allow touchless provisioning of test assets.
The reality is that MBT can be instrumental in addressing many of these challenges. In the following sections, I will describe our approach to progressive MBT (building models incrementally, in sync with lifecycle activities) in the context of agile development, and show how it enables in-sprint testing.
Note, this blog is not intended to be an introduction to MBT, since details about this approach are covered elsewhere. It is more about how different aspects of MBT are exploited in different parts of the agile lifecycle from the perspective of different personas, and about how progressive, incremental modeling in tandem with code development enables in-sprint testing.
I will reference Broadcom’s Agile Requirements Designer (ARD) product capabilities as the MBT tool of choice, but other MBT tools offer similar capabilities as well.
How MBT Fits into the Agile/DevOps Lifecycle
The following diagram represents a high-level schematic of how and where MBT fits into the typical agile/DevOps lifecycle, specifically focused on the development and continuous integration (CI) process:
Figure 1
The following diagram represents testing in the CD process:
Figure 2
Let’s dive into each of the steps in the first diagram above.
Step 1a: Agile Backlog Grooming and Preparation (All Personas)
Good testing starts with good requirements. In the MBT paradigm, this starts with requirements modeling. In agile, the “3 Amigos Meeting” is a popular way of doing backlog grooming. Typically, in this meeting, the product owner, developers, testers, and sometimes the SDET meet to flesh out the details behind features and stories. Individuals will look to define acceptance criteria, test scenarios, and other design elements.
Figure 3
As shown in the whiteboard above, participants often use flow diagrams, rather than extensive sets of text, to clarify the intended behavior of the application or system. We call this behavioral modeling, and it ties in nicely with MBT.
With MBT, we would capture this as a model in the MBT tool. At this point, we primarily need to capture the “happy path” flow associated with the story, as spelled out by the product owner. (In the example below, we offer a model of the simplest login process in ARD.)
Figure 4
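To make this concrete, here is a minimal Python sketch of the same idea: the login story captured as a flow graph, with a depth-first walk that enumerates its paths (the happy path plus a retry-after-error variant). The node names and structure are illustrative assumptions, not the ARD model itself.

```python
# A minimal, illustrative sketch: representing the "login" story as a flow graph
# and enumerating its paths. Node names are hypothetical; an MBT tool such as ARD
# manages this model (and much more) for you.

LOGIN_MODEL = {
    "Start": ["Enter credentials"],
    "Enter credentials": ["Validate credentials"],
    "Validate credentials": ["Show dashboard", "Show error"],  # happy path vs. negative path
    "Show dashboard": ["End"],
    "Show error": ["Enter credentials"],  # retry loop
    "End": [],
}

def all_paths(model, node="Start", path=None, max_revisits=1):
    """Depth-first enumeration of paths from Start to End.

    max_revisits bounds how often a node may be revisited, so loops
    (such as the retry-after-error edge) terminate.
    """
    path = (path or []) + [node]
    if node == "End":
        yield path
        return
    for nxt in model[node]:
        if path.count(nxt) <= max_revisits:
            yield from all_paths(model, nxt, path)

for p in all_paths(LOGIN_MODEL):
    print(" -> ".join(p))
```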
We should note that modeling isn’t restricted to story-level requirements as described above. Modeling can be done for other types of requirements, such as features, system integration, and end-to-end user scenarios (for example, customer journeys), which are typically executed in the CD process (see step 4 below). See the following figure.
Figure 5
While the tests for upper-level requirements typically get built later in the process (see step 2b), MBT allows us to incrementally connect the flows for lower-level requirements (such as stories) to higher-level scenarios. For example, as depicted below, the login step can be described as part of a bigger model that represents the customer journey for online shopping.
Figure 6
Similarly, the sub-models for other parts of the journey may be built out incrementally as part of other stories/features and connected to the bigger flow. This approach also allows us to generate tests at different levels of optimization or coverage. Tests can vary depending on the type of the requirement and the stage of the lifecycle in which the tests will be run, according to the testing pyramid.
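As a rough illustration of this composition, the sketch below (again hypothetical, not ARD syntax) plugs the story-level login flow into a larger customer journey as a sub-model, so the same lower-level flow can be reused by higher-level scenarios.

```python
# Illustrative only: a higher-level "customer journey" model that reuses the
# story-level login flow as a sub-model. Names are hypothetical; in ARD this
# composition is done by linking sub-flows, not by hand-written dictionaries.

LOGIN_SUBMODEL = ["Enter credentials", "Validate credentials", "Show dashboard"]

CUSTOMER_JOURNEY = [
    "Start",
    ("subflow", "Login", LOGIN_SUBMODEL),  # plug the story-level model in here
    "Browse catalog",
    "Add to cart",
    "Check out",
    "End",
]

def expand(journey):
    """Flatten sub-models into a single end-to-end path."""
    for step in journey:
        if isinstance(step, tuple) and step[0] == "subflow":
            yield from step[2]
        else:
            yield step

print(" -> ".join(expand(CUSTOMER_JOURNEY)))
```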
This approach to requirements specification is clearly beneficial for all personas, especially the product owner. It allows requirements to be specified with far less ambiguity, promotes better brainstorming and clarifying discussions, and enables richer digital collaboration. This all beats capturing whiteboard discussions as smartphone pictures and manipulating images afterwards, especially if teams are working remotely. By using MBT to do requirements modeling, teams can improve specificity, minimize ambiguity, and in the process reduce downstream defects by more than 80%.
Step 1b: Agile Design (SDET and Developer Personas)
This step is in preparation for the coding phase (step 2a). In this step, developers typically create the design for software updates. For example, teams define design classes and methods or develop API specifications. Teams also build the unit tests for the new code, following TDD principles.
The tester (or SDET) enhances the model as follows:
- Adds test scenarios, including negative cases, to satisfy the acceptance criteria.
- Creates additional scenarios based on the acceptance criteria in the story.
- Generates behavior-driven development (BDD) feature files and associated automated tests using Cucumber. Since these tests will be used for unit testing, it is important to generate the maximum number of tests for the most thorough coverage possible. (This can be done using “All Possible Paths” optimization in ARD; a sketch of what such generated assets might look like follows the figures below.)
- Defines rules for synthetic test data generation for the associated tests. Note that the test data needs for Step 2a are relatively straightforward, so this is done using simple formulas in ARD to generate synthetic data.
- Defines lightweight, synthetic virtual services (see a demo of how to do this with ARD) in case there is a dependency on another software component.
Figure 7
Figure 8
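To give a feel for what is being generated here, the following sketch shows how enumerated model paths might be turned into a Gherkin feature file with formula-style synthetic data. The scenario wording, data rule, and file name are illustrative assumptions; in practice ARD generates these assets from the model.

```python
# A rough sketch of "generate BDD feature files plus synthetic data from the
# model." The scenario wording, the data rule, and the file name are
# illustrative assumptions; ARD generates these assets from the model directly.
import random
import string

def synthetic_username():
    """Simple formula-style rule: eight lowercase letters."""
    return "".join(random.choices(string.ascii_lowercase, k=8))

PATHS = [
    ["Enter credentials", "Validate credentials", "Show dashboard"],  # happy path
    ["Enter credentials", "Validate credentials", "Show error"],      # negative path
]

def to_gherkin(paths):
    """Emit one scenario per model path, with a synthetic user per scenario."""
    lines = ["Feature: Login"]
    for i, path in enumerate(paths, start=1):
        lines.append(f"  Scenario: Generated path {i}")
        lines.append(f'    Given a registered user "{synthetic_username()}"')
        for step in path:
            lines.append(f"    When the user performs '{step}'")
        outcome = "sees the dashboard" if path[-1] == "Show dashboard" else "sees an error"
        lines.append(f"    Then the user {outcome}")
    return "\n".join(lines)

feature_text = to_gherkin(PATHS)
with open("login.feature", "w") as f:
    f.write(feature_text)
print(feature_text)
```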
Note that there are other ways of generating lightweight mocks that are more developer friendly (such as using SV-as-code) or tester friendly. However, we recommend that this be integrated with the model so that all test assets can be generated and managed from a single source. See more on progressive virtual services in a later section of this blog.
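For completeness, here is what a lightweight, synthetic virtual service for a dependency might look like as plain code. The endpoint paths and payloads are assumptions for illustration only; in an ARD-based flow, such stubs are defined in, and generated from, the model.

```python
# A lightweight, synthetic virtual service sketch for a dependency the login
# component calls (here, a hypothetical /accounts/<id> lookup). Endpoints and
# payloads are assumed for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/accounts/1001": {"id": 1001, "status": "active"},
    "/accounts/1002": {"id": 1002, "status": "locked"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a canned synthetic response, or 404 for unknown paths.
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

if __name__ == "__main__":
    # Developers point the component under test at http://localhost:8099
    HTTPServer(("localhost", 8099), StubHandler).serve_forever()
```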
All of the test assets generated from this step may be pushed into the source code management system tied to the story, so that they are available for developers as part of their coding effort.
Naturally, this is of great benefit to developers, since it eliminates the manual creation and maintenance of BDD feature files, test data, and service mocks, and makes these assets available at the time of development. This allows developers to focus more on design and development activities and typically results in a developer productivity improvement of at least 25%. In addition, this approach provides higher test coverage against the acceptance scenarios, which helps improve code quality.
Step 2a: Development (Developer Persona, Supported by SDET)
During this step, the developer builds the code, does a local build, and runs unit and acceptance tests using the BDD tests generated in the step above. If required, the developer and the SDET may collaborate to refine the model, such as tweaking the acceptance scenarios or test data.
Step 2b: Optimized Model-Driven Testing for CI/CD (SDET/Tester Personas)
This is an important testing step that happens in parallel with coding (step 2a). The goal of this step is to support testing needs after code commit (CI process, step 3) and throughout the rest of the deployment pipeline (CD process, step 4).
Step 3: Testing During the Build Process (Automated)
To support step 3, the tester performs the following key functions:
- Enhance the story-level models with more robust test data—both in terms of volume and variety.
- Enhance the story-level models with more robust virtual services. For example, testers may look to use more robust test data or recordings of the behavior of a real service, if available.
- Generate automated tests with a lower level of coverage than that used for unit testing (such as ARD “All Pairs” optimization). Generating automated tests from models is a much more scalable approach than manual scripting. From the same model, users can generate different types of automated tests for different test execution engines or target environments. In addition, this approach helps to automatically refresh automated tests when the model changes. See the figure below and the pairwise sketch that follows it.
- Develop or enhance additional models for integration testing across components, along with establishing support for test data and virtual services as described above.
- Contribute selected tests from the model to be part of the regression test suite. With MBT, we can continuously optimize the regression test suite to focus on tests that have been affected by specific changes made as part of the build.
Figure 9
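The sketch below contrasts exhaustive combinations (appropriate for unit-level testing in step 2a) with a simple greedy pairwise reduction of the kind an “All Pairs” optimization aims for. The parameters and values are hypothetical, and ARD’s algorithms are more sophisticated than this toy version.

```python
# Illustrative contrast between exhaustive combinations and a greedy pairwise
# ("all pairs") reduction. Parameter names and values are hypothetical.
from itertools import combinations, product

PARAMS = {
    "browser": ["chrome", "firefox", "edge"],
    "role": ["admin", "customer"],
    "locale": ["en", "de", "fr"],
}

def uncovered_pairs(cases, params):
    """Return all value pairs (across different parameters) not yet covered."""
    names = list(params)
    needed = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            needed.add(((a, va), (b, vb)))
    for case in cases:
        for a, b in combinations(names, 2):
            needed.discard(((a, case[a]), (b, case[b])))
    return needed

def greedy_pairwise(params):
    """Greedily pick full combinations until every pair of values is covered."""
    cases, names = [], list(params)
    while True:
        remaining = uncovered_pairs(cases, params)
        if not remaining:
            return cases
        best = max(
            (dict(zip(names, values)) for values in product(*params.values())),
            key=lambda c: sum(
                1 for a, b in combinations(names, 2)
                if ((a, c[a]), (b, c[b])) in remaining
            ),
        )
        cases.append(best)

all_cases = list(product(*PARAMS.values()))
pairwise_cases = greedy_pairwise(PARAMS)
print(f"exhaustive: {len(all_cases)} tests, pairwise: {len(pairwise_cases)} tests")
```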
All of these test assets may be pushed into the source code management system or another repository that the CI engine (such as Jenkins) can locate. This allows automated tests to be executed rapidly (and to provide fast feedback to developers) after the build is completed. All of the actions in this step can help significantly reduce elapsed testing time. These steps can also reduce feedback time to developers (generally considered the biggest bottleneck in DevOps) by more than 60% compared to the traditional way of working.
See my previous blog post to learn more about better synchronizing code with models as part of steps 2a and 2b.
Step 4: Testing During the CD Process (Tester/SDET Persona)
To support step 4, the tester performs the following key functions:
- Develop or enhance models for system and pre-production (UAT) tests based on system testing and UAT scenarios. For example, as different application components are built out progressively, so are the corresponding models. See figure below. This allows us to build out test cases for corresponding end-to-end scenarios.
- Enhance the data for these tests. This includes progressively making data more robust and “hybrid,” that is, featuring a combination of synthetic and real-world or production-like characteristics, as we go from the left to the right of the pipeline. ARD works with test data management (TDM) tools to enable this hybrid test data approach. See the figure below; a rough sketch of the hybrid idea also follows the figures. Please refer to a quick demo of how this works with ARD and TDM.
- Similarly, virtual services are enhanced progressively as we go from the left to the right of the pipeline. Tests continue to become more robust and “hybrid,” that is, featuring a combination of synthetic test data and recordings from real services.
- It is recommended that teams use progressively less stringent test coverage as they go from left to right, such as ARD “All Edges” or “All Nodes.”
Figure 10
Figure 11
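The following sketch illustrates the hybrid test data idea: a few masked, production-like records blended with rule-based synthetic rows. The field names, masking rule, and mix are illustrative assumptions; ARD together with a TDM tool handles this declaratively.

```python
# A sketch of "hybrid" test data: masked production-like records blended with
# rule-based synthetic rows. Field names, masking rule, and mix are assumptions.
import hashlib
import random

PRODUCTION_SAMPLE = [
    {"customer_id": 48210, "email": "jane.doe@example.com", "tier": "gold"},
    {"customer_id": 51077, "email": "sam.lee@example.com", "tier": "silver"},
]

def mask(record):
    """Pseudonymize direct identifiers while keeping a realistic shape."""
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    return {**record, "email": f"user_{digest}@test.invalid"}

def synthetic(n):
    """Rule-based synthetic rows, analogous to formula-driven data in ARD."""
    return [
        {"customer_id": random.randint(90000, 99999),
         "email": f"synthetic{i}@test.invalid",
         "tier": random.choice(["gold", "silver", "bronze"])}
        for i in range(n)
    ]

hybrid = [mask(r) for r in PRODUCTION_SAMPLE] + synthetic(3)
for row in hybrid:
    print(row)
```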
It is key that most of the testing in the CD process be automated and “touchless,” requiring no human intervention to start and monitor. Therefore, testers and SDETs need to ensure that all test assets generated in Step 2b are managed in a way that allows them to be provisioned and deployed automatically in sync with the deployment of the application assets. This can be done using CD tools like Continuous Delivery Director.
SDETs may need to collaborate with deployment or release engineers (or in some places, DevOps engineers) to ensure that the right test assets are deployed to appropriate environments with the correct builds. Since most test assets are generated from the model, this approach makes it easier for SDETs and deployment engineers to configure test environments correctly.
Progressive Service Virtualization and Test Data
As we noted in every step above, service virtualization plays a key role in enabling continuous in-sprint testing. (See figure below.) In fact, there is a whole lifecycle around progressive virtualization in continuous testing that probably merits being the topic of a separate blog.
Figure 12
The key thing to note here is that just as the model evolves progressively along the lifecycle, so do the virtual services. They start out lightweight, synthetic, and easy to build during development/CI, and progressively become more robust in the CD lifecycle as they are enhanced with more scenarios and test data. Eventually, these early iterations are supplanted by real service recordings, or by the real service when it is available. The use of an integrated service repository or catalog ensures that virtual services can be leveraged by multiple teams (both developers and testers) and progressively refined, and that their usage can be tracked and governed. In this way, teams can ensure that appropriate virtual services are used in the right context.
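As a small illustration of this progression, the sketch below shows a stub that serves synthetic defaults early on and replays recorded responses once a capture of the real service exists. The recording file name and response shapes are hypothetical.

```python
# A sketch of the "progressive" idea for virtual services: serve synthetic
# defaults early in the lifecycle, and replay recordings of the real service
# once they exist. The file name and payload shapes are assumptions.
import json
import os

RECORDINGS_FILE = "account_service_recordings.json"  # hypothetical capture file
SYNTHETIC_DEFAULT = {"id": 0, "status": "active", "source": "synthetic"}

def load_recordings(path):
    """Recordings (request path -> response) captured from the real service, if any."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def respond(request_path, recordings):
    """Prefer a recorded response; fall back to the synthetic default."""
    return recordings.get(request_path, {**SYNTHETIC_DEFAULT, "path": request_path})

recordings = load_recordings(RECORDINGS_FILE)
print(respond("/accounts/1001", recordings))
```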
The same principle also applies to progressive test data management, which we’ll also plan to cover in a separate blog.
What About Existing Regression Tests?
The above approach works well for progressive testing of new functionality. But what if we have existing regression test cases that are not optimized, or worse, mostly manual? Executing such regression suites will slow down deployment or release because they may take a long time to run or require significant testing effort. There is no magical way to resolve this problem. However, we can take an incremental approach to address this situation as described below:
Figure 13
- For new functionality, follow the MBT-based approach described above.
- For problematic application components, or those that change (and hence need to be retested) frequently, consider modeling the tests for those components. This initial investment helps us test these components more thoroughly. Further, it significantly reduces the time and effort needed to update these tests when there is a change. In many cases, we may be able to “reverse engineer” an initial model by importing existing test cases (see the sketch after this list).
- Lastly, for the remainder of the old tests, we can pick out the most critical tests to automate. We can also optimize the suite by removing redundant or unneeded tests. Gradual migration of the most important tests into MBT is recommended as part of reducing “technical debt” in testing. This can potentially be done in sync with addressing technical debt in the code.
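As a rough sketch of that reverse-engineering step, the code below merges the ordered steps of a few legacy test cases into an initial flow graph, with shared steps becoming shared nodes. The step names are hypothetical, and ARD’s test case import builds a far richer model than this.

```python
# A rough sketch of "reverse engineering" an initial model from existing test
# cases: each legacy test is an ordered list of steps, and shared steps become
# shared nodes in a flow graph. Step names are hypothetical.
LEGACY_TESTS = [
    ["Open app", "Log in", "Search product", "Add to cart", "Check out"],
    ["Open app", "Log in", "Search product", "View details"],
    ["Open app", "Log in", "View order history"],
]

def build_model(tests):
    """Merge ordered test steps into a graph of step -> set of next steps."""
    model = {}
    for steps in tests:
        for current, nxt in zip(steps, steps[1:]):
            model.setdefault(current, set()).add(nxt)
        model.setdefault(steps[-1], set())
    return model

for node, successors in build_model(LEGACY_TESTS).items():
    print(f"{node} -> {sorted(successors)}")
```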
Summary and Key Takeaways
This blog provides a high-level approach to using progressive modeling so we can improve our ability to do in-sprint testing. This approach enables testing to be more agile so it keeps pace with development. This progressive modeling also improves collaboration between various stakeholders, enhances quality, and significantly reduces testing effort and time. To summarize, here are the key takeaways from this approach:
- Like application development, modeling needs to be done incrementally, in sync with development as new software features are built. Modeling also needs to be done progressively, as we advance from left to right in the CI/CD lifecycle.
- Modeling benefits all stakeholders—including product owners, developers, release engineers, and more—not just testers. Stakeholders must be educated on the MBT approach and included in modeling efforts as described above.
- Modeling enables us to capture rich information about the intended behavior of the application in one place. This is the single “source of truth” from which most test assets can be generated. This approach is far superior to relying on extensive notes, pictures of whiteboards, and tribal memory.
- Through modeling, we can optimize different types of tests for distinct requirement types and execution contexts. For example, story-level tests are executed with full coverage in the development environment, but will have less coverage in subsequent phases of the lifecycle. Similarly, models for higher order requirements in the CD process are tuned to provide lower coverage, in keeping with the testing pyramid.
Happy progressive modeling! Please reach out to me if you have any questions or ideas for improvements.