Part III: Automated Tests

Iván Guardado
Audiense Engineering
6 min read · Jul 2, 2018


One essential part of software development is the automation of code testing. This article is not meant to be a testing tutorial; rather, it shows some good practices for doing it in Node.js, building on the practices we have seen so far.

Unit Tests

Unit tests are in charge of testing a small portion of our code. This means taking a component, extracting it from the system and checking how it responds to different scenarios in isolation. A real-world analogy: take the shock absorber out of a car and put it in a machine that applies different kinds of vibrations to check whether it works properly. This does not guarantee that the whole car works; it does not even guarantee that the part will be connected to the right place in the car, but we can be sure that the piece itself works as expected.

How is this done in software? How can we separate a component from the rest of the system? If you are using dependency injection correctly you are already doing it, since all the components are decoupled from each other.
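
For instance, a component that receives its dependencies through the constructor can be tested in isolation. This is a minimal sketch; the names (GetUserProfile, userRepository) are illustrative, not taken from our codebase:

```javascript
// src/actions/get-user-profile.js — illustrative component with an injected dependency
class GetUserProfile {
  constructor(userRepository) {
    this.userRepository = userRepository; // injected, never required directly
  }

  async execute(userId) {
    const user = await this.userRepository.findById(userId);
    return { id: user.id, name: user.name };
  }
}

module.exports = GetUserProfile;
```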

Test doubles

Just as films use stunt doubles, actors who stand in for the real ones, we have an equivalent in unit tests. Depending on the kind of work they do, they get different names: spy, stub, mock… The most common in unit tests is the stub, which replaces a dependency with a predefined behaviour so we can check that the component we are testing (the Subject Under Test, or SUT) does what it is expected to do.

In OOP, a stub is frequently created by extending the class of the original dependency and overriding the methods needed to create a controlled scenario.

In JavaScript we can go for the simplest solution: faking only the parts of the dependency that we know the component is going to use.
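
As a minimal sketch, reusing the illustrative GetUserProfile component from above, a hand-rolled stub only needs to fake the single method the SUT calls:

```javascript
const assert = require('assert');
const GetUserProfile = require('../src/actions/get-user-profile');

describe('GetUserProfile', () => {
  it('returns the profile built from the repository data', async () => {
    // Hand-rolled stub: only the method used by the SUT is faked
    const userRepositoryStub = {
      findById: async (id) => ({ id, name: 'Jane Doe' })
    };

    const action = new GetUserProfile(userRepositoryStub);
    const profile = await action.execute('user-1');

    assert.strictEqual(profile.name, 'Jane Doe');
  });
});
```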

If we change the dependency interface, the test will fail because the service will call a nonexistent function or pass the wrong number of arguments.

The SinonJS library is a good choice for creating any kind of test double, but it's important to note that when we find ourselves building complex doubles, something is probably wrong and we should refactor the code to make it simpler.
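
For example, the same stub can be built with SinonJS, which also lets us verify how the dependency was called (again, the names are illustrative):

```javascript
const sinon = require('sinon');
const GetUserProfile = require('../src/actions/get-user-profile');

describe('GetUserProfile (with SinonJS)', () => {
  it('asks the repository for the requested user', async () => {
    const userRepository = {
      findById: sinon.stub().resolves({ id: 'user-1', name: 'Jane Doe' })
    };

    const action = new GetUserProfile(userRepository);
    await action.execute('user-1');

    // Verify the interaction with the stubbed dependency
    sinon.assert.calledOnce(userRepository.findById);
    sinon.assert.calledWith(userRepository.findById, 'user-1');
  });
});
```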

Integration Tests

As we saw in the shock absorber example, having a good battery of unit tests does not guarantee that the whole system will work properly, especially in programming languages that do not provide a type checker to ensure everything is wired together correctly.

That is why it is important to create integration tests. Unlike unit tests, we don't fake dependencies here (with some exceptions that we will see soon), because we want to check that the dependencies are injected and wired together correctly.

Are we allowed to connect to the database? Should we simulate HTTP requests to check that the API works? Can we access the message queue? Can we call third-party services?

These are some of the most common questions that arise when talking about integration testing, and I personally think it is up to each organization to establish a limit. I encourage you not to stretch that limit too far, so that we don't drift into the land of end-to-end tests. These are the rules we follow at Audiense for integration tests:

It is allowed

  • Access to databases (MySQL, MongoDB, Redis).

It is not allowed

  • Access to the message queue.
  • Calls to external services (APIs, cloud providers…).
  • Access to services that are too hard to set up for testing, such as SOLR.

What we do most often is create integration tests for actions (remember the use cases) to make sure the whole flow succeeds, while always keeping the previous restrictions in mind.
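
A minimal sketch of what such a test can look like, assuming a builder (index.js) that wires the real dependencies and a hypothetical db test helper; none of these names come from our actual code:

```javascript
const assert = require('assert');
const buildCreateUser = require('../../src/actions/create-user'); // builder (index.js)
const db = require('./helpers/db'); // hypothetical helper to access the test database

describe('CreateUser action (integration)', () => {
  beforeEach(() => db.clean('users'));

  it('persists the user in the database', async () => {
    const createUser = buildCreateUser(); // real dependencies injected by the builder

    await createUser.execute({ name: 'Jane Doe', email: 'jane@example.com' });

    const stored = await db.findOne('users', { email: 'jane@example.com' });
    assert.strictEqual(stored.name, 'Jane Doe');
  });
});
```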

Nock

This library allows us to intercept the HTTP requests made from our tests and save the responses so we can replay them in the future. This makes our tests run much faster (they skip the real HTTP connection) and deterministic, so they don't depend on an external service to confirm that our code works.

It is important to note that we are not trying to detect when the external service interface changes. We just want to validate our code for a known version of it. In case of API changes we would have to record the responses again.
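
Here is a minimal sketch of declaring an interceptor with nock; the endpoint, the payload and the fetchProfile module are illustrative assumptions:

```javascript
const nock = require('nock');
const assert = require('assert');
const fetchProfile = require('../../src/lib/fetch-profile'); // hypothetical module under test

describe('fetchProfile', () => {
  afterEach(() => nock.cleanAll());

  it('maps the API response to our profile model', async () => {
    // Intercept the outgoing request and answer with a known, recorded response
    nock('https://api.example.com')
      .get('/users/audiense')
      .reply(200, { id: '123', name: 'Audiense' });

    const profile = await fetchProfile('audiense');

    assert.strictEqual(profile.name, 'Audiense');
  });
});
```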

Proxyquire

There is another library that can help us bend the rules of integration tests. As I said, in these kinds of tests we should not fake dependencies; however, sometimes we need to, either to keep the tests deterministic or because a dependency connects to a forbidden service.

Following the proposed code architecture, proxyquire should always point to a builder file (index.js), since that is where we make use of “require”. Let's imagine we create an action that has a dependency on an external service we want to skip.
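
A sketch of how that could look with proxyquire; the paths and the pubsub module name are illustrative, not our actual code:

```javascript
const proxyquire = require('proxyquire');
const assert = require('assert');

// Replace the pubsub module required inside the builder with a no-op double;
// the key must match the path used in the builder's own require call.
const buildCreateUser = proxyquire('../../src/actions/create-user', {
  '../../infrastructure/pubsub': {
    publish: async () => {} // events are silently dropped during the test
  }
});

describe('CreateUser action (integration, PubSub disabled)', () => {
  it('creates the user without publishing real events', async () => {
    const createUser = buildCreateUser();
    const user = await createUser.execute({ name: 'Jane Doe' });
    assert.ok(user.id);
  });
});
```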

In this example we disable the PubSub behaviour, while the rest of the dependencies keep doing their job.

Proxyquire is very useful when we have to work with legacy code that does not allow dependency injection. Given that a big portion of our work involves this kind of code, it's worth knowing the library and how to use it.

Data Builders

Most of our projects need a large amount of data to verify that the code works well. This is much easier when we have a tool that generates that data, in memory or in the database, so we don't have to deal with it manually all the time.

There are libraries that help with this, such as faker.js, which generates random data. We can build our own helper functions on top of it to expose a simple API that our integration tests can use to create or clean up data.
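
A minimal sketch of such a builder on top of faker.js; the user shape and field names are illustrative assumptions:

```javascript
const faker = require('faker');

// Builds a valid user with random data; tests can override only the fields they care about
function buildUser(overrides = {}) {
  return {
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
    createdAt: new Date(),
    ...overrides
  };
}

module.exports = { buildUser };
```

Each test can then pin the one or two fields it actually asserts on, for example buildUser({ email: 'known@example.com' }), and let the builder fill in the rest.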

This is something we've done at Audiense, although our implementation is too tailored to our needs to be shared here. You should analyze the data-generation needs of your project and try to create an abstraction layer so you can reuse it or change it easily at any time.

Spec Helpers

Regardless of whether you are writing unit or integration tests, one of the biggest annoyances we have had was dealing with the file paths of the code we wanted to test. We had things like this:
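
(The paths below are an illustrative reconstruction, not our actual folder layout.)

```javascript
// Every test had to climb back up to the project root by hand
const CreateUser = require('../../../../src/application/actions/create-user');
const UserRepository = require('../../../../src/infrastructure/repositories/user-repository');
```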

Although code editors like VS Code help by autocompleting paths, it's clear that this is not very elegant. One of the problems is that it couples the tests to the folder structure of the code.

The solution came from the Ruby world. What we did was create a spec_helpers.js file that defines helper functions and is automatically available to all tests. The way to achieve this is with the --require argument in mocha, so if you are using a different test runner you'll have to check whether there is something similar.
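
As a sketch, assuming the helpers live in test/spec_helpers.js, the test script in package.json could pass that flag like this:

```json
{
  "scripts": {
    "test": "mocha --require ./test/spec_helpers.js --recursive"
  }
}
```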

In this helper file we have, among others, functions that point directly to certain code paths.
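
A minimal sketch of what such a file could contain; the folder names and the helper names (requireSrc, requireAction) are assumptions for illustration:

```javascript
// test/spec_helpers.js
const path = require('path');

const SRC = path.join(__dirname, '..', 'src');

// Load any component by its path relative to src/, from any test file
global.requireSrc = (relativePath) => require(path.join(SRC, relativePath));

// Load an action builder by name (assumes one folder per action with an index.js)
global.requireAction = (name) => require(path.join(SRC, 'actions', name));
```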

This way we are able to use these helper functions in our tests to load components in a much easier and decoupled way.
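
For example, with the hypothetical helpers above a test could load its components like this:

```javascript
// No relative-path gymnastics, no coupling to the test file's location
const CreateUser = requireAction('create-user');
const UserRepository = requireSrc('infrastructure/repositories/user-repository');
```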

From this moment we started to add other helpers to tasks we detected were adding friction when we wrote tests:

  • Functions to integrate nock easily in the integration tests.
  • Functions to access the data builders and generate data easily.

This is an example of how a simple solution can improve productivity. Sometimes the excuse of not knowing how to use nock or how to generate data is enough to skip writing a few tests.

Identify the points of friction when creating tests for your project, create functions that help reduce them and add them to the spec_helpers.js file so they are available to all future tests.

