Testing Error Handling in node.js
Working in a dynamic language like JavaScript has the advantage that you get stuff done quickly, but it comes at the price of intense unit testing to stay safe. And when it comes to achieving the coveted 100% coverage, not just for statements but also for branches, there is no way around thoroughly testing error conditions and error handling.

Whenever I’m writing node.js code, for instance to be executed in the serverless Adobe I/O Runtime, I’m paying extra attention not only to the regular, expected flow of the program, but even more to the error handling.
However, defensive programming means that you probably spend as much code on handling the most likely, expected case as on the many unlikely (and hard-to-reproduce) error conditions. So the question is: “How do I test hard-to-reach error conditions?”
I’ve found myself applying the following patterns when writing unit tests for error handling in node.js; they help me achieve 100% statement and branch coverage:
1. Don’t let yourself get away with less than 100%
My first step in getting superior test coverage is not letting myself get away with anything less than 100%.
Modifying my `package.json`’s `test` script, I set up `nyc`, which generates test coverage reports, with the following options:
- `--reporter=text` creates a nice textual summary at the end of every `npm test` run that shows files and lines that are not properly covered by the existing tests.
- `--reporter=lcov` also generates LCOV-style reports, which can be used with the Coverage Gutters VS Code extension to highlight code blocks that are not (or not fully) covered.
- `--check-coverage` enables coverage checking (and will cause `npm test` to fail when the thresholds set in the following arguments are not met).
- `--lines 100` enforces 100% line coverage, i.e., every non-comment line in your code must be covered.
- `--statements 100` enforces 100% statement coverage, which is a bit stricter than 100% line coverage, as a single line may contain multiple statements.
- `--branches 100` enforces 100% branch coverage, the harshest condition, ensuring that every path that your program can take is covered.
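Put together, the relevant part of my `package.json` looks something like this (a minimal sketch; using Mocha as the test runner is an assumption, any runner works the same way):

```json
{
  "scripts": {
    "test": "nyc --reporter=text --reporter=lcov --check-coverage --lines 100 --statements 100 --branches 100 mocha"
  }
}
```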
2. Always assert the exception
Test coverage tools like `nyc` can tell you whether a particular statement was run as part of the test, but tests without proper assertions are worse than no tests at all, as they mask the underlying issue (code that isn’t properly tested).
When it comes to writing assertions for errors and exceptions, there are two things to assert:
- That the function under test is not exiting normally, but throwing an error or exception.
- That the thrown exception matches the API contract for the error type in question.
The code example below does both:
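(A minimal sketch, assuming Mocha as the test runner and node’s built-in `assert` module; `functionundertest`, the input value, and the concrete error contract are made-up examples.)

```javascript
const assert = require('assert');
const { AssertionError } = assert;

const functionundertest = require('../index');

describe('functionundertest', () => {
  it('fails as expected for invalid input', async () => {
    try {
      // calling with input that violates the API contract;
      // an exception should be thrown here (the concrete
      // input value is a made-up example)
      await functionundertest('invalid-input');
      assert.fail('expected functionundertest to throw');
    } catch (e) {
      if (e instanceof AssertionError) {
        throw e; // bubble up the assert.fail from above
      }
      // assert the API contract of the thrown error
      assert.strictEqual(e.message, 'invalid input');
      assert.strictEqual(e.statusCode, 400);
    }
  });
});
```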
After calling the function under test in line 12, an exception should be thrown. If this does not happen, the function isn’t working as expected, and we therefore fail the test in line 13.
Now, the catch (pun intended) of this approach is that the `catch` branch will handle both whatever exception `functionundertest` is throwing (or is expected to throw) and the `AssertionError` that tells us things did not fail as expected.
The `if` block in lines 15 to 17 handles this scenario by simply bubbling up the `AssertionError`, making sure that the actual cause of the failing test (an exception was expected, but wasn’t thrown) is reported as it should be.
Finally, in line 19 the actual assertions start, making sure that the thrown exception has the correct error message, status code, etc.
3. Create drama in your HTTP responses
As long as your function is pure, without side effects and without external dependencies, it’s relatively easy to cover every possible case. But when working with REST APIs, it can get harder and harder to reproduce error states such as:
- 429: Too many requests (you don’t really want to melt your API server)
- 500: Internal server error
- 502: Bad gateway
- 503: Service unavailable
- 504: Gateway timeout
If your backend does not create sufficient errors for you with the reliability you need for a unit test or integration test, then create the responses you need using tooling like nock or Polly.JS, which temporarily replace node’s HTTP stack and allow intercepting requests and faking responses.
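Here is a minimal sketch of such a test, using nock on top of Mocha (the module path `../index` and the `'fallback'` return value are assumptions of this sketch):

```javascript
const assert = require('assert');
const nock = require('nock');
const functionundertest = require('../index');

describe('functionundertest (gateway timeout)', () => {
  let scope;

  before(() => {
    // intercept requests to the backend and simulate an
    // overloaded server: delay the response by two seconds,
    // then answer with a 504 Gateway Timeout
    scope = nock('https://api.example.com')
      .get('/test')
      .delay(2000)
      .reply(504);
  });

  it('handles a 504 without blocking', async () => {
    const start = Date.now();
    const result = await functionundertest();
    assert.strictEqual(result, 'fallback');
    assert.ok(Date.now() - start < 5000);
  });

  after(() => scope.done());
});
```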
The example above shows a simple test that simulates a 504 error, which can occur when the backend is overloaded. Proper client code should guard against this by setting timeouts and handling the non-2xx response status.
In this example, the `before` function is used to set up the test by listening to requests made to `https://api.example.com/test`, delaying whatever response is coming by two seconds before returning a status of 504. The actual test asserts:
- That the correct return value is given (line 21).
- That the function call does not block longer than five seconds (line 22).
- That the actual HTTP request has been made (line 25).
4. Fake what doesn’t fail
Now, external REST APIs are not the only way our functions under test are importing behavior that is prone to failure (but not always reproducible failure). The other most common way is through node modules that get `require`d.
In cases where my `functionundertest` depended on `somefunction` in `somemodule` that it imported through `require`, and I knew that `somefunction` might fail, perhaps because it was a wrapper around an external service itself, I’ve found myself creating reliable and repeatable tests for these failure states through a combination of proxyquire and Sinon.
- Proxyquire is a module that injects itself into node’s `require` function, enabling you to replace dependencies with objects and functions of your own.
- Sinon is a library that makes it easy to create fake functions and objects and verify that they have been called.
- Together, they help me cover some of the most hard-to-reproduce error cases.
Here is how this combination would look in a simple test scenario:
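(Again a minimal sketch on top of Mocha; the module path `../index`, the error message, and the `'fallback'` return value are made-up examples.)

```javascript
const assert = require('assert');
const sinon = require('sinon');
const proxyquire = require('proxyquire');

describe('functionundertest (failing dependency)', () => {
  let functionundertest;
  let somefunction;

  before(() => {
    // create a fake somefunction that reliably reproduces the
    // failure state: every call throws (the error message is a
    // made-up example; the real somemodule would fail with its
    // own error)
    somefunction = sinon.fake.throws(new Error('service unavailable'));

    // load the module under test, replacing what
    // require('somemodule') returns with our fake, so the
    // failure is reliable and repeatable
    functionundertest = proxyquire('../index', {
      somemodule: {
        somefunction,
      },
    });
  });

  it('handles a failing somefunction gracefully', () => {
    // the 'fallback' return value is an assumption of this sketch
    assert.strictEqual(functionundertest(), 'fallback');
  });

  after(() => {
    assert.ok(somefunction.calledOnce);
  });
});
```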
Most of the interesting set-up work is happening in `before`:
- Line 14 creates a fake `somefunction` using Sinon, and ensures it will always throw an `Error` when called.
- Line 19 imports our `functionundertest`, but instead of using the straight `require('../index')`, we use `proxyquire` and pass in an object with all dependencies that should get faked.
- The only dependency that we want to replace is `require('somemodule')`, which is exactly what the key in line 20 does.
- In line 21 we pass in `somefunction`, which we faked earlier in line 14.
When `functionundertest` gets called, it will behave normally, except that instead of calling the actual implementation of `somemodule`, it will use the fake implementation we’ve provided. In the `after` verification, we can even make sure that `somefunction` has been called once, and only once (line 32).
I’ve been using these four techniques in my node.js projects to achieve consistently high test coverage and to ensure my programs are not just working as expected, but also failing as expected.
Follow the Adobe Tech Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products.