
All About AngularJS Unit Testing – Part II

Posted on Jul 21, 2015

Testing for Success

In our last article on unit testing, we covered the common metrics used to gauge a unit testing suite's success. We demonstrated that each metric, on its own, has a fatal flaw that prevents you from being 100% certain of your test suite's quality even when the metric is met. That is not to say the metrics are completely useless, however. Below we'll touch on how the metrics should be viewed, then add a few best practices into the mix that, when followed, should make your code as robust as can be managed.

What do the Metrics Mean?

As we discussed previously, both code coverage and branch coverage have hidden deficiencies: 100% coverage in either, or even in both, cannot guarantee code quality. Full input domain coverage would mitigate this, but as we mentioned, it requires a number of tests far too large to be a useful target.
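
To make this concrete, here is a minimal, hypothetical Jasmine-style sketch; the applyDiscount function and its spec are invented for illustration, and show how a single test can report 100% line and branch coverage while still missing an input-domain bug:

```js
// Hypothetical example: a pricing helper with an input-domain bug.
function applyDiscount(price, percent) {
  // Bug: nothing prevents percent > 100, which produces a negative price.
  return price - (price * percent) / 100;
}

describe('applyDiscount', function () {
  it('reduces the price by the given percentage', function () {
    // This single spec executes every line (and there are no branches),
    // so both line and branch coverage report 100%.
    expect(applyDiscount(200, 10)).toBe(180);
  });
});

// Yet applyDiscount(200, 150) returns -100, a defect that only a test
// drawn from the wider input domain would have caught.
```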

Even given these caveats, however, it is important to note that code coverage and branch coverage are still significant. A high percentage of code coverage, coupled with a high percentage of branch coverage, is a sign of a healthy and well-tested code base. The problem only arises when you treat these metrics as proof that your code is fully tested. This is a matter of mindset: rather than chasing code coverage and branch coverage as goals in their own right, recognize what they truly represent, namely an indication of the extent of your unit testing suite. High code and branch coverage is a side effect of a good test base, not a priority in and of itself.

Unit Testing Best Practices

One of the ways to reach a high percentage of code and branch coverage – without focusing on those metrics exclusively – is to adhere to a good set of best practices. Below are some suggestions to use when unit testing your code:

  • Follow Test-Driven Development (TDD) practices to ensure that, at the very least, you are testing the major features of your application as you write them.
  • Treat your tests as a functionality contract. Make any implementation assumptions, such as required parameters or control flow dependencies, explicit through the use of a test that exercises those assumptions.
  • In addition to highly-specific unit tests, make use of functional and integration tests to test your code from end-to-end.
  • When a bug is found, write a test that exposes the bug before fixing the issue.
  • Make test passage a necessary condition for task completion. If completing a task causes existing tests to fail, that work should not be merged into the code repository until the code has been modified so that all tests in your suite pass.
  • Don’t test only success. Aim for a 2:1 ratio of failure-oriented tests to success-oriented tests. Understanding how your code fails gives you far more insight into its stability than focusing only on what happens when it works. Verify that the correct exceptions are thrown, and that side effects, such as database calls, do not happen when the code involved fails (see the sketch after this list).
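
As an illustration of that last point, here is a hedged Jasmine sketch of failure-oriented tests; the orderService, its orderDb collaborator, and the error message are all names invented for this example:

```js
// Hypothetical order service with a validation failure path.
function makeOrderService(orderDb) {
  return {
    saveOrder: function (order) {
      if (!order || !order.id) {
        throw new Error('order requires an id');
      }
      return orderDb.insert(order);
    }
  };
}

describe('orderService.saveOrder (failure paths)', function () {
  var orderDb, orderService;

  beforeEach(function () {
    // A Jasmine spy object stands in for the real database layer.
    orderDb = jasmine.createSpyObj('orderDb', ['insert']);
    orderService = makeOrderService(orderDb);
  });

  it('throws when the order has no id', function () {
    expect(function () { orderService.saveOrder({}); })
      .toThrowError('order requires an id');
  });

  it('does not touch the database when validation fails', function () {
    try { orderService.saveOrder(null); } catch (e) { /* expected */ }
    // The side effect (a database call) must not happen on failure.
    expect(orderDb.insert).not.toHaveBeenCalled();
  });
});
```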

Mixing Unit, Functional, and Integration Tests

In the best practices list above we touched on mixing unit, functional, and integration tests. You can reach full code and branch coverage through unit testing alone, but because a true unit test mocks out all of a function's external dependencies, you also need tests that exercise your functions as a group rather than as individual units. Functional tests are great for covering code paths and only require you to mock out external services. Adding these to your test suite makes your code far more robust than unit testing alone, while minimizing the side effects inherent in testing your code.
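
As a sketch of what that can look like in an AngularJS code base, the hypothetical test below (the module, service, and controller names are invented here) lets the real userService and UserListController run together while only the external HTTP service is faked via $httpBackend from angular-mocks:

```js
// Hypothetical AngularJS module under test.
angular.module('app', [])
  .service('userService', function ($http) {
    this.list = function () {
      return $http.get('/api/users').then(function (res) { return res.data; });
    };
  })
  .controller('UserListController', function (userService) {
    var vm = this;
    userService.list().then(function (users) { vm.users = users; });
  });

describe('UserListController + userService (functional)', function () {
  var $controller, $httpBackend;

  beforeEach(module('app'));
  beforeEach(inject(function (_$controller_, _$httpBackend_) {
    $controller = _$controller_;
    $httpBackend = _$httpBackend_;
  }));

  it('loads users through the real service, with only HTTP mocked', function () {
    $httpBackend.expectGET('/api/users').respond(200, [{ name: 'Ada' }]);

    var vm = $controller('UserListController');
    $httpBackend.flush(); // resolve the mocked HTTP call

    expect(vm.users.length).toBe(1);
    expect(vm.users[0].name).toBe('Ada');
  });

  afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });
});
```

Because only $http is stubbed, the test still exercises the real interaction between the controller and the service, which is exactly the code path a strict unit test would have mocked away.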

However, we must also test what happens when we interact with those external services for real, which is where integration tests come into play. Integration tests should run through all of the use cases of your software in an automated fashion, using as much of the real code and infrastructure as is feasible. The goal is to emulate a production environment as closely as possible, meaning that every service is exercised and every integration point hit. This can be done with a test runner such as Karma, or with external tools such as Selenium, SoapUI, and other automated testing tools.
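
For browser-level integration testing, a minimal Selenium WebDriver sketch might look like the following; the URL, selectors, and login flow are purely illustrative assumptions, not part of any real application:

```js
// Illustrative Selenium WebDriver (Node.js) script; every selector and
// URL below is a made-up assumption for the sake of the example.
const { Builder, By, until } = require('selenium-webdriver');

(async function loginSmokeTest() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Drive the real application, real backend and all.
    await driver.get('http://localhost:8000/#/login');
    await driver.findElement(By.css('input[name="username"]')).sendKeys('demo');
    await driver.findElement(By.css('input[name="password"]')).sendKeys('secret');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // The integration point is hit for real: the app must round-trip
    // through its services before the dashboard URL appears.
    await driver.wait(until.urlContains('/dashboard'), 5000);
    console.log('login flow passed');
  } finally {
    await driver.quit();
  }
})();
```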

Having a Robust QA Process

All of the above best practices are useless without a dedicated QA process. A QA resource on your team should be tasked both with verifying new functionality and with finding bugs in the existing code base through experimentation. While unit, functional, and integration tests can get you most of the way to a robust code base, you cannot be certain of its quality without the human element: someone attempting to use the product in ways you did not intend. Much as an author has an editor, you should want a QA resource to serve as a check on your development team.

Conclusion

There’s no single right answer to successful testing. While metrics such as code and branch coverage are useful for gauging quality, they should not be goals in and of themselves. By following testing best practices, you can get most of the way to high code and branch coverage without explicitly targeting those numbers. Furthermore, robust integration tests help exercise dependencies that your unit and functional tests might not otherwise hit. Finally, a dedicated QA resource brings a wealth of experience and skill to finding holes in your code base, making your product more robust. Following all of the above will not give you 100% certainty in the quality of your code, but it will get you as close as is humanly possible.