Testing is a good thing. It has to be – the world is telling us it is. There are conferences and hundreds of meetups around the world just to discuss testing. There are workshops and countless testing tools. There are tons of books explaining testing down to the very last detail. And, most of all, there are QA jobs. Testing is so vital that an entire discipline of software development is dedicated to it. Clearly, testing is really important.
Nevertheless, people struggle with how to do testing well and at scale. Testers face increasingly tight deadlines, bigger deployments, multiple release platforms and the demand for continuously integrated development processes. Never has there been such a need for automation and process-driven testing methodologies. Yet most testers aren’t forced to tackle test scale until it has already become a problem. When that time comes, unscalable choices made early in the project make it even harder to react.
Compounding this problem is the fact that testing still gets short shrift, even in this day and age. Testing is work, work takes time and all too often a company will trade that time for more features in its product. It is crucial that the time spent on testing is as efficient as possible. And if you have any belief that your product is going to succeed and grow, you need to make sure that your testing is set up from the beginning to scale with your success.
Test scale comes in many flavors: load tests themselves, which directly test the capacity and scale of a system; integration and unit tests, whose numbers keep growing and which must all run as fast as possible; and the test development process itself, which must keep pace as the team or company grows.
Creating the right mindset and setting expectations are critically important. Each company has its own specific challenges with scaling, but here are a few suggestions to help you pull your organization toward a more efficient testing paradigm. This is something my team and my company feel passionate about and we look forward to hearing your thoughts as we begin a series of posts on the subject.
The trend in QA is so clearly towards automation that you’d be hard-pressed to find an engineer who has not heard this already, but everything must be automated. The tests themselves, the deployment of the application, the execution of the tests, the feedback to the team — all of it must flow automatically and quickly. If you are still stuck in a world of manual regression testing, then you are painfully aware that your ability to scale your testing is directly limited by the size of your workforce. You can scale it in a certain sense, but only at tremendous cost.
You may not need to scale your testing in the early days of a project, but you should not handcuff yourself from that eventuality.
Avoid tools that don’t have a scalable architecture. Usually, I look for the ability to run things in parallel or in a distributed fashion as a key indicator of whether I will use a tool. For instance, I sat through a load-testing tool demo recently where the tool had no concept of distributed test clients; all the load was generated from a single machine. So what happens when you need to test with more load than a single machine can generate? This tool didn’t have an answer and wasn’t the right solution for anything more than the short term.
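To make that architectural question concrete, here is a minimal sketch of the property to look for: the target load is split across independent workers (processes standing in for separate load-generating machines), so adding capacity means adding workers rather than hitting a single machine’s ceiling. All names here are illustrative, not taken from any particular tool.

```python
import multiprocessing as mp
import time


def generate_load(worker_id: int, requests: int) -> float:
    """One load-generating client; returns elapsed seconds.

    A real worker would issue requests against the system under test;
    here each "request" is a short sleep to keep the sketch self-contained.
    """
    start = time.monotonic()
    for _ in range(requests):
        time.sleep(0.001)  # stand-in for an actual request
    return time.monotonic() - start


def run_distributed(total_requests: int, workers: int) -> list:
    """Split the target load across workers, the way a distributed
    load tool splits it across client machines."""
    per_worker = total_requests // workers
    with mp.Pool(workers) as pool:
        return pool.starmap(
            generate_load, [(i, per_worker) for i in range(workers)]
        )


if __name__ == "__main__":
    timings = run_distributed(total_requests=100, workers=4)
    print(f"{len(timings)} workers finished; slowest took {max(timings):.3f}s")
```

A tool built this way scales by pointing more workers at the target; the single-machine tool from the demo above has no equivalent lever to pull.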
It’s crucial that you ask the question of scale when evaluating tools and frameworks; otherwise you may find yourself a year into the project, rebuilding your entire testing suite from scratch.
Generally, a test framework or tool for what you want to do already exists. Only in the extremely rare case that there is a true void in the testing toolset should you undertake to write a new tool. Even then, only do this if you intend the tool or framework for public use, because someone will fill that void just as soon as you have finished your project. What you want to do is focus on writing the tests. You absolutely do not need to invent a new way of executing those tests.
Once you do pick a tool, don’t fight it. If you feel like you’re forcing things, you’re probably doing something you shouldn’t. One common example is where someone subverts the notion of a test suite vs. a test case and starts using test suites as test cases. The rationale is often that the configuration of the suites can be expressed in some non-code format so non-coding team members can write tests, too. Everything is fine until they try to use the built-in parallelization or reporting features of their framework; now the framework doesn’t know that they’re treating test suites like test cases, and they end up having to rewrite large parts of the framework to make it work. If you want a testing framework that works for non-coders, try a BDD (behavior-driven development) framework. Trust me, the framework you want already exists; the testing community almost always has you covered.
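As a sketch of what “not fighting the framework” looks like, here individual tests stay test cases and the suite is assembled through the framework’s own loader, so built-in reporting (and, in frameworks that offer it, parallelization) still understands the structure. The `authenticate` function and its credentials are hypothetical stand-ins for a real system under test.

```python
import unittest

# Hypothetical system under test.
_VALID = {"alice": "s3cret"}


def authenticate(user: str, password: str) -> bool:
    return bool(password) and _VALID.get(user) == password


class LoginTests(unittest.TestCase):
    """Each method is a test *case*; the class just groups related cases."""

    def test_rejects_empty_password(self):
        self.assertFalse(authenticate("alice", ""))

    def test_accepts_valid_credentials(self):
        self.assertTrue(authenticate("alice", "s3cret"))


def build_suite() -> unittest.TestSuite:
    """A suite is a collection of cases, assembled via the framework's
    loader -- not a case repurposed to hold test logic."""
    loader = unittest.defaultTestLoader
    return unittest.TestSuite(loader.loadTestsFromTestCase(LoginTests))


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Because the suite/case boundary is respected, runners, loaders and reporters all see exactly the shapes they expect; nothing needs to be rewritten.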
Testing only scales if the entire development team buys into the concept. While a whole lot of people agree that testing is good, the very fact that companies hire a separate group of people to do the testing means that testing is viewed as an external process. This tends to leave the testers in a subservient role always trying to catch up with current development. As the codebase and the development team grow, testing gets further and further behind.
Instead, if the whole development team agrees that testing is everyone’s duty, testing can scale naturally with the project. Some teams have this easy: the development team already believes wholeheartedly in testing and works on it from the beginning. Other times you have some work to do to instill good testing ethics into the culture.
Creating that team-test culture can be tricky, but there are a few basic things to keep in mind. First, make the tests easy to understand and maintain by choosing tools and languages that are familiar to the development team. Next, force the issue by insisting that the tests block some part of your development process. Usually, this takes the form of running tests automatically in a continuous integration server after each developer commit and not deploying the code until all the tests pass. Hopefully, the development team realizes that it has to engage in testing because testing is vital to getting its work into the real world.
You might say, “hey, I live in the real world and I have short-term needs, I can’t be worried about all these long-term contingencies.”
We all make those tradeoffs between short-term and long-term goals—that’s part of the art of software engineering. The important thing is that you realize what tradeoffs you’re making. That’s why creating the mindset that your testing needs to scale is essential. With the context of scaling as a backdrop, you have a set of criteria with which to judge the decisions you need to make on a daily basis.
These issues are something that we think a lot about here at Rackspace, and my team has a lot more advice that we can share. We’re working on a series of posts that will dissect each of the problems I mention above in more detail. Hopefully, our insights and advice will prove useful for people struggling to bring their testing up to the next level.