Many teams are adopting model-based testing (MBT) to transform their application delivery. Through this increasingly popular approach, teams are realizing significant benefits, including broader test coverage, reduced testing effort, and higher quality.
In this blog, I’ll examine the use of MBT, with a particular focus on the objectives we most often hear from DevOps teams. For example, many teams we meet with are struggling because testing continues to lag behind development, resulting in delayed feedback loops that slow releases. These teams are eager to find out how MBT can help with their in-sprint testing.
Other teams ask me how they can best integrate MBT into their agile development and DevOps approaches, so it doesn’t just benefit testers, but all stakeholders, including product owners, developers, architects, software development engineers in test (SDETs), and release engineers.
Finally, other teams we work with need help optimizing their MBT implementations so they can derive maximum benefit. Often, we see improper implementations undermining these gains. For example, one customer I spoke to was developing test models only after application development was completed.
As teams look to complete testing in-sprint, they confront a number of obstacles. Following are some of the most common issues we hear about:
The reality is that MBT can be instrumental in addressing many of these challenges. In the following sections, I will describe our approach to progressive MBT (where we build models incrementally, in sync with lifecycle activities) in the context of agile development, with a particular focus on how it enables in-sprint testing.
Note that this blog is not intended to be an introduction to MBT; details about that approach are covered elsewhere. Instead, it examines how different aspects of MBT are exploited in different parts of the agile lifecycle, from the perspective of different personas, and how progressive, incremental modeling in tandem with code development enables in-sprint testing.
I will reference Broadcom’s Agile Requirements Designer (ARD) product capabilities as the MBT tool of choice, but other MBT tools offer similar capabilities as well.
The following diagram represents a high-level schematic of how and where MBT fits into the typical agile/DevOps lifecycle, specifically focused on the development and continuous integration (CI) process:
The following diagram represents testing in the CD process:
Let’s dive into each of the steps in the first diagram above.
Good testing starts with good requirements. In the MBT paradigm, this starts with requirements modeling. In agile, the “3 Amigos Meeting” is a popular way of doing backlog grooming. Typically, in this meeting, the product owner, developers, testers, and sometimes the SDET meet to flesh out the details behind features and stories. Individuals will look to define acceptance criteria, test scenarios, and other design elements.
As shown in the whiteboard above, participants often use flow diagrams, rather than extensive sets of text, to clarify the intended behavior of the application or system. We call this behavioral modeling, and it ties in nicely with MBT.
With MBT, we would capture this as a model in the MBT tool. At this point, we primarily need to capture the “happy path” flow associated with the story, as spelled out by the product owner. (In the example below, we offer a model of the simplest login process in ARD.)
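To make the idea concrete outside of any particular tool, a story-level model can be thought of as a small directed graph. The following Python sketch (node names are hypothetical, and this is not ARD’s API) captures just the happy path of the login story:

```python
# Minimal sketch of a story-level behavior model as a directed graph.
# Node names are illustrative, not ARD syntax.
login_model = {
    "Start": ["Enter credentials"],
    "Enter credentials": ["Validate credentials"],
    "Validate credentials": ["Show dashboard"],  # happy path only, for now
    "Show dashboard": ["End"],
    "End": [],
}

def happy_path(model, start="Start", end="End"):
    """Walk the single defined flow from start to end."""
    path, node = [start], start
    while node != end:
        node = model[node][0]  # follow the only transition defined so far
        path.append(node)
    return path

print(happy_path(login_model))
# ['Start', 'Enter credentials', 'Validate credentials', 'Show dashboard', 'End']
```

Error branches (invalid password, locked account, and so on) are added to the same graph in later refinements, which is what makes the model progressive.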
We should note that modeling isn’t restricted to story-level requirements as described above. Modeling can be done for other types of requirements, such as features, system integration, and end-to-end user scenarios (for example customer journeys), which are typically executed in the CD process (see step 4 below). See the following figure.
While the tests for upper-level requirements typically get built later in the process (see step 2b), MBT allows us to incrementally connect the flows for lower-level requirements (such as stories) to higher-level scenarios. For example, as depicted below, the login step can be described as part of a bigger model that represents the customer journey for online shopping.
Similarly, the sub-models for other parts of the journey may be built out incrementally as part of other stories/features and connected to the bigger flow. This approach also allows us to generate tests at different levels of optimization or coverage. Tests can vary depending on the type of the requirement and the stage of the lifecycle in which the tests will be run, according to the testing pyramid.
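As a rough sketch of how different coverage levels fall out of a single model: once an error branch is added, every distinct route through the graph is a candidate test case, and an optimizer can then select a subset for node or edge coverage. The model below is hypothetical, not ARD syntax:

```python
# Login model with an error branch added; names are illustrative.
model = {
    "Start": ["Enter credentials"],
    "Enter credentials": ["Validate"],
    "Validate": ["Show dashboard", "Show error"],  # two outcomes now
    "Show dashboard": ["End"],
    "Show error": ["End"],
    "End": [],
}

def all_paths(model, node="Start", end="End"):
    """Exhaustive path coverage: every distinct route through the model."""
    if node == end:
        return [[end]]
    return [[node] + rest
            for nxt in model[node]
            for rest in all_paths(model, nxt, end)]

paths = all_paths(model)
print(len(paths))  # 2: the happy path and the error path
```

A real MBT tool would go further, trimming the exhaustive set down to an optimized suite per the coverage target chosen for that stage of the pipeline.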
This approach to requirements specification is clearly beneficial for all personas, especially the product owner. It allows requirements to be specified unambiguously, promotes better brainstorming and clarifying discussions, and enables better online digital collaboration. This all beats capturing the whiteboard discussions as smartphone pictures and subsequently manipulating images, especially if teams are working remotely. By using MBT to do requirements modeling, teams can improve specificity and minimize ambiguity, and in the process reduce downstream defects by more than 80%.
This step is in preparation for the coding phase (step 2a). In this step, developers typically create the design for software updates. For example, teams define design classes and methods or develop API specifications. Teams also build the unit tests for the new code, following TDD principles.
The tester (or SDET) enhances the model as follows:
Note that there are other ways of generating lightweight mocks that are more developer friendly (such as using SV-as-code) or tester friendly (such as mock services in BlazeMeter). However, we recommend that this be integrated with the model so that all the test assets can be generated and managed together. See more on progressive virtual services in a later section of this blog.
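As a minimal illustration of the kind of lightweight synthetic mock discussed above, here is a standard-library Python sketch of a stubbed login endpoint. In practice such a mock would be generated from the model by an SV tool; the endpoint, users, and payloads here are all assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned request/response pairs; in practice these would be generated from
# the model's paths and test data. All names here are illustrative.
KNOWN_USERS = {
    "alice": {"token": "fake-token", "result": "ok"},
}

class LoginMock(BaseHTTPRequestHandler):
    """Synthetic stand-in for a login service that is not yet built."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = KNOWN_USERS.get(payload.get("user"))
        status = 200 if body else 401
        body = body or {"result": "invalid credentials"}
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

    def log_message(self, *args):  # keep console output quiet
        pass

# To serve the mock locally (port is illustrative):
# HTTPServer(("localhost", 8080), LoginMock).serve_forever()
```

A developer can point the application under test at this stub and exercise the happy path and the invalid-credentials branch before the real login service exists.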
All of the test assets generated from this step may be pushed into the source code management system tied to the story, so that they are available for developers as part of their coding effort.
Naturally, this is of great benefit to developers, since it eliminates the manual creation and maintenance of BDD feature files, test data, and service mocks. And it makes these assets available to developers at the time of development. This allows the developer to focus more on design and development activities and typically results in developer productivity improvement of at least 25%. In addition, this approach provides higher test coverage against the acceptance scenarios, which helps improve code quality.
During this step, the developer builds the code, does a local build, and runs unit and acceptance tests using the BDD tests generated in the step above. If required, the developer and the SDET may collaborate to refine the model, such as tweaking the acceptance scenarios or test data.
This is an important testing step that happens in parallel with coding (step 2a). The goal of this step is to support testing needs after code commit (CI process, step 3) and throughout the rest of the deployment pipeline (CD process, step 4).
To support step 3, the tester performs the following key functions:
All of these test assets may be pushed into the source code management system or another repository that the CI engine (such as Jenkins) can locate. This allows automated tests to be executed rapidly (and to provide fast feedback to developers) after the build is completed. All of the actions in this step can help significantly reduce elapsed testing time. These steps can also reduce feedback time to developers—generally considered the biggest bottleneck in DevOps—by greater than 60% compared to the traditional way of working.
See my previous blog post to learn more about better synchronizing code with models as part of steps 2a and 2b.
To support step 4, the tester performs the following key functions:
It is key that most of the testing in the CD process be automated and “touchless,” requiring no human intervention to start and monitor. Therefore, testers and SDETs need to ensure that all test assets generated in Step 2b are managed in a way that allows them to be provisioned and deployed automatically in sync with the deployment of the application assets. This can be done using CD tools like Continuous Delivery Director.
SDETs may need to collaborate with deployment or release engineers (or in some places, DevOps engineers) to ensure that the right test assets are deployed to appropriate environments with the correct builds. Since most test assets are generated from the model, this approach makes it easier for SDETs and deployment engineers to configure test environments correctly.
As we noted in every step above, service virtualization plays a key role in enabling continuous in-sprint testing. (See figure below.) In fact, there is a whole lifecycle around progressive virtualization in continuous testing that probably merits being the topic of a separate blog.
The key thing to note here is that just as the model evolves progressively along the lifecycle, so do the virtual services. They start out being lightweight and synthetic—and easier to build—during development/CI, and progressively become more robust in the CD lifecycle as they are enhanced with more scenarios and test data. Eventually, these early iterations are supplanted with real service recordings or the real service when available. The use of an integrated service repository or catalog ensures that virtual services can be leveraged by multiple teams (both developers and testers), progressively refined, and their usage can be tracked and governed. In this way, teams can ensure that appropriate virtual services are used in the right context.
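The progressive nature of a virtual service can be sketched as a simple lookup that prefers recorded real-service traffic when it exists and falls back to synthetic responses otherwise. Endpoint names and payloads here are illustrative:

```python
# Sketch of a virtual service that is progressively enriched: it starts
# with synthetic canned responses and, as recordings of the real service
# become available, replays those instead. All names are illustrative.
SYNTHETIC = {("POST", "/login"): {"result": "ok", "token": "fake"}}
RECORDINGS = {}  # filled in later from real-service capture

def virtual_service(method, path, recordings=RECORDINGS, synthetic=SYNTHETIC):
    key = (method, path)
    if key in recordings:  # prefer recorded real traffic when present
        return recordings[key]
    return synthetic.get(key, {"result": "not modeled"})

# Early in the sprint: the synthetic answer is served.
print(virtual_service("POST", "/login"))
# {'result': 'ok', 'token': 'fake'}

# Later: a recording of the real service supersedes the synthetic stub.
RECORDINGS[("POST", "/login")] = {"result": "ok", "token": "real-abc123"}
print(virtual_service("POST", "/login"))
# {'result': 'ok', 'token': 'real-abc123'}
```

A shared service catalog plays the role of the `RECORDINGS` store here, which is what allows multiple teams to refine and govern the same virtual services.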
The same principle also applies to progressive test data management, which we’ll also plan to cover in a separate blog.
The above approach works well for progressive testing of new functionality. But what if we have existing regression test cases that are not optimized, or worse, mostly manual? Executing such regression suites will slow down deployment or release because they may take a long time to run or require significant testing effort. There is no magical way to resolve this problem. However, we can take an incremental approach to address this situation as described below:
This blog provides a high-level approach to using progressive modeling so we can improve our ability to do in-sprint testing. This approach enables testing to be more agile so it keeps pace with development. This progressive modeling also improves collaboration between various stakeholders, enhances quality, and significantly reduces testing effort and time. To summarize, here are the key takeaways from this approach:
Happy progressive modeling! Please reach out to me if you have any questions or ideas for improvements.