What is Quality?
For those who are not familiar with the discipline of quality assurance, the American Society for Quality (ASQ) has a great primer on the differences between quality assurance and quality control. Quality is all-encompassing on any project; any step, process, or control from the beginning of a project to its end can be seen as part of quality, whether that's regularly scheduled meetings, code reviews, or how a database is designed. All of these practices ultimately play a part in maintaining the project's structure and supporting repeatable processes intended to produce successful outcomes.
Yet, while these high-level points tell both stakeholders and project members a good story, understanding how to enforce quality on a project (and certainly here at SemanticBits) means understanding how quality professionals execute testing. That testing is a bit of a double-edged sword: on the one hand, it points out failures or inconsistencies in the project; on the other, it can only, at best, recommend solutions. And, no matter what, it relies upon other team members, such as developers and project managers, to produce a way forward.
How is Quality Executed?
In a nutshell, quality professionals typically surround a particular process in an application (such as logging in, uploading files, creating reports, etc.) with a set of steps called a test case, a repeatable way to confirm that the action does what the user expects it to. That test case is often part of a related set of cases called a test suite. These suites themselves can ultimately be tied to higher-level application functionality to ensure that past changes continue to successfully operate—a process often called regression.
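The test case/test suite relationship above can be sketched with Python's built-in unittest module. The `login` function and its credentials are purely hypothetical stand-ins for real application code:

```python
import unittest

# Hypothetical application code under test; a real project would import this.
def login(username, password):
    """Return True when the credentials match a known account."""
    return username == "alice" and password == "s3cret"

# A test case: a repeatable set of steps confirming one process
# (here, logging in) does what the user expects.
class LoginTestCase(unittest.TestCase):
    def test_valid_credentials_succeed(self):
        self.assertTrue(login("alice", "s3cret"))

    def test_invalid_credentials_fail(self):
        self.assertFalse(login("alice", "wrong"))

# A test suite: a related set of cases that can be run together,
# e.g., as part of a regression pass over core functionality.
def build_suite():
    suite = unittest.TestSuite()
    suite.addTests(
        unittest.defaultTestLoader.loadTestsFromTestCase(LoginTestCase)
    )
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

In a real project the cases would drive a browser or an API client rather than call the function directly, but the case/suite structure is the same.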
Why is all of this important to understand? Because when the quality professional is tasked with creating these test cases, it is incumbent upon them to make the steps specific enough to exercise the functionality while keeping the details general enough that, when the inevitable stakeholder or user request modifies that function, the associated test case will not have to be completely redone.
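One common way to keep steps specific while details stay general is to pull page details into a single locator map, so a UI change means updating one dictionary rather than rewriting every test. Everything below is a hedged sketch: the CSS selectors are hypothetical, and `FakePage` is a stand-in for a real browser driver:

```python
# Hypothetical locator map: the test steps stay specific, while the
# page details live in one place. If the UI changes, only this dict changes.
LOCATORS = {
    "username_field": "#username",
    "password_field": "#password",
    "submit_button": "button[type=submit]",
}

def login_steps(page, username, password):
    """The specific, repeatable steps of a login test case."""
    page.fill(LOCATORS["username_field"], username)
    page.fill(LOCATORS["password_field"], password)
    page.click(LOCATORS["submit_button"])

# A minimal fake page object standing in for a real browser driver;
# it just records the actions taken so they can be inspected.
class FakePage:
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))
```

If a redesign renames the username field, only `LOCATORS` needs an edit; `login_steps` and every case built on it survive unchanged.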
If there is any truly negative part of a quality professional’s job, it would have to be test case maintenance. While projects are consistently dynamic and are meant to respond to changing needs on a Sprint-to-Sprint basis, all test cases are, on the other hand, inherently static—they never change unless someone makes them change. This is not a fundamentally resolvable problem; instead, this phenomenon is a tension that must always be maintained. If test cases weren’t static, there would be no way for the quality professional to objectively understand an operational baseline. But if test cases aren’t modified to reflect a function’s legitimate updates, that test case quickly becomes useless.
Aside from being a necessary part of a project's QA process, what does test case maintenance have to do with automation? For the quality professional, while there is no shortage of resources offering guidance on the subject, deciding what can be automated comes down to the degree to which a given suite of test cases changes.
But let's step back for a moment. Why automate anything? If tests must be maintained during practically every Sprint, what's the point of bringing in automation? That's where the business side comes in. From a stakeholder's perspective, the goal for any given project is twofold: bring the product to successful completion at least on time and on budget, and ideally in less time and under budget. So if the application can be tested more efficiently by a computer, that saves both time and money.
But just because a project benefits overall from more efficient testing, aren't automated tests subject to the same maintenance requirements as manual tests? Of course they are! And things get more interesting when the quality professional introduces automation to a project. The biggest challenge with any automation tool or framework is that it depends on stability to make its biggest impact on a project. So, if an area of the application is prone to change every Sprint, that area wouldn't be worth the effort of scoping out automation.
If an application is always changing, when can anything get automated? The good news for most projects is that there is almost always core functionality of the application that, if changed frequently, would so substantially change the application’s mission that it would perform altogether differently—it would become a different application! This core functionality therefore typically becomes the best candidate for automation, because it is the most regressed (as in, retested over and over again to prove the main application is still functional).
It can therefore be stated that any functionality that can be safely regression-tested can be safely automated. As the core functionality expands and stabilizes, with fewer changes per Sprint, it becomes the leading indicator that an automation framework can continue growing alongside the application. What's more, the regressed portion of the application is often the most onerous to test manually precisely because it is so repetitive (see this list of tips and best practices for more considerations).
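As a sketch of that decision, suppose the team tracks how often each suite's underlying feature has changed over recent Sprints. The suite names, change counts, and threshold below are illustrative assumptions, not data from any real project:

```python
# Hypothetical change tracking: Sprints (out of the last 10) in which each
# suite's underlying feature changed. All numbers are illustrative only.
CHANGES_PER_10_SPRINTS = {
    "login": 0,            # core functionality, essentially frozen
    "file_upload": 1,      # stable, occasionally tweaked
    "report_builder": 7,   # actively redesigned every other Sprint
}

def automation_candidates(change_counts, max_changes=2):
    """Return the suites stable enough to be worth automating.

    The max_changes threshold is a judgment call the quality
    professional makes per project, not a universal constant.
    """
    return sorted(
        suite
        for suite, changes in change_counts.items()
        if changes <= max_changes
    )
```

Under these assumed numbers, login and file upload would be automated first, while the report builder stays manual until it settles down.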
Humans are Essential
There is some fear in bringing automation to a project. Some think it might make a quality professional redundant, or that engineering an automation effort would instead become a developer project. As an answer to the former, remember that automation depends on stability: because applications keep changing, the framework itself needs ongoing maintenance, and that maintenance is quality work. For the latter, there is a world of difference between unit testing (testing the changed area of code, something every developer should be doing) and end-to-end testing (testing the overall application in light of Sprint changes, something every quality professional should be doing). Like a novelist and a proofreader, these two roles should remain separate and independent to achieve the best results. And even when the automation framework has to change, it will require that same quality professional to refactor it and prove that the application continues to work as expected; such engineering skills only add value.
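The unit-versus-end-to-end distinction can be sketched in a few lines. Both `validate_password` and `signup_flow` are hypothetical examples: one is the isolated piece of logic a developer unit-tests, the other is the whole user journey a quality professional tests end to end:

```python
import unittest

# Hypothetical application pieces; a real project would import these.
def validate_password(password):
    """A single unit of logic: passwords must be at least 8 characters."""
    return len(password) >= 8

def signup_flow(username, password):
    """A whole user journey: validate input, then report an outcome."""
    if not validate_password(password):
        return {"status": "rejected"}
    return {"status": "created", "user": username}

class UnitTests(unittest.TestCase):
    # Unit test: the developer checks one changed piece of logic in isolation.
    def test_short_password_rejected(self):
        self.assertFalse(validate_password("short"))

class EndToEndTests(unittest.TestCase):
    # End-to-end test: the quality professional checks the overall journey
    # a user would actually take, not just one function.
    def test_signup_creates_user(self):
        self.assertEqual(signup_flow("alice", "longenough")["status"], "created")

    def test_signup_rejects_weak_password(self):
        self.assertEqual(signup_flow("bob", "short")["status"], "rejected")
```

In practice the end-to-end test would drive a browser or API through the real signup screens, which is exactly why it belongs to the quality professional rather than the developer.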
As with any software engineering endeavor, a computer will only do what it’s told. Humans, however, possess an astounding capacity to imaginatively engage with software—intuitively coming up with scenarios that a specification or machine will never think of, many times arriving at ways to crash the application (for the quality professional, this is a crowning achievement!). It is this ability that really makes the professional shine.
Automation is a tool or framework that adds value to a project by offloading onerous, repetitive functions and quickly validating the stable part of an application, freeing the quality professional's creativity to concentrate on validating the areas that are changing. Properly maintained, manual and automated testing work together to help guide the project to a successful result.