Welcome back to a new issue of the “Build Your Own Testing Framework” series! While trying to implement better formatting, we discovered that some of our test suites do not run all of their tests! Today we are going to fix that, and we will make sure that such a test suite fails if it didn’t execute all of its tests.
This article is the sixth one in the “Build Your Own Testing Framework” series, so make sure to stick around for the next parts! Find all posts of this series here.
Shall we get started?
Verify All Tests Run
We will start from the RunTestSuiteTest and run the test suite with a single test. Then we are going to assert that the test with the name testOk has been reported as passing:
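A sketch of how such a test might look, assuming the suite-as-constructor shape used throughout this series; the reporter spy and its hasReportedAsPassed method are assumptions made for illustration:

```javascript
// Sketch only: `reporter` is a hypothetical reporter spy created by the suite,
// and `hasReportedAsPassed` is an assumed method on it.
this.testReportsPassingTestByName = function () {
    runTestSuite(function (t) {
        this.testOk = function () {
            t.assertTrue(true);
        };
    }, {reporter: reporter});

    t.assertTrue(reporter.hasReportedAsPassed("testOk"));
};
```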
If we run this test suite, we can see that only one test executes!
Oh, that is interesting. This test suite does not run. Upon investigating, it turns out that process.exit(0) is being called during the runTestSuite(...) function run. That is because of the latest feature that we have implemented - “exit with an appropriate exit code (zero for success, and one for failure).” We should be able to fix that by providing a process spy in the options of the runTestSuite function that we are calling from inside the individual tests in the RunTestSuiteTest test suite. And we ought to guard against this kind of mistake somehow - we need a mechanism that would alert us if not all tests have been run. Maybe something like a verifyAllTestsRun: true option for runTestSuite. For that, let’s write a test:
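A sketch of what this test might look like; the test and variable names are assumptions, and assertThrow(expectedMessage, action) is assumed to assert that the action throws an error with the given message:

```javascript
// Sketch only.
this.testFailsWhenNotAllTestsRun = function () {
    t.assertThrow("Expected all tests to run", function () {
        runTestSuite(function (t) {
            this.testWithInnerRunTestSuite = function () {
                // Deliberately no process spy in the options here.
                runTestSuite(function () {}, {reporter: reporter});
            };

            this.testEmptyButShouldAlsoExecute = function () {};
        }, {reporter: reporter, verifyAllTestsRun: true});
    });
};
```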
That might be a bit complex at first. Let’s take a closer look at how this test is supposed to work:
- First of all, we assert that there was an assertion failure about all tests being required to run.
- Inside of the action for this assertion, we create and run a new test suite with two tests:
  - a test with a runTestSuite call without a process spy provided;
  - an empty test that should also execute.
If we run this test, it will pass. That is unexpected because we wanted it to fail. Apparently, the innermost runTestSuite is doing process.exit(0).
For that to work, we will need to be able to provide a hook into the process.exit(code) function. For that, we would need to create a SimpleProcess class that allows installation of such hooks. Let’s test-drive it!
process.exit with hooks
First, we should start from the normal behavior without any hooks:
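A sketch of such a test; the constructor argument globalProcess and the spy shape are assumptions chosen to match the failure messages we will see below:

```javascript
// Sketch only.
function SimpleProcessTest(t) {
    this.testDelegatesExitToGlobalProcess = function () {
        var exitCode = null;
        var globalProcess = {
            exit: function (code) {
                exitCode = code;
            }
        };

        var process = new SimpleProcess(globalProcess);
        process.exit(0);

        t.assertEqual(0, exitCode);
    };
}
```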
When running this test, we will get a failure about SimpleProcess being undefined. So let’s define it:
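A sketch of the initial definition; the file name and export style are assumptions:

```javascript
// simple_process.js (sketch only)
function SimpleProcess(globalProcess) {
}

module.exports = SimpleProcess;
```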
If we run our test suite now, we will get an error TypeError: process.exit is not a function. To fix that failure, we will have to define the exit(code) method on our newly created class SimpleProcess:
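Something like an empty method is enough at this point (sketch):

```javascript
function SimpleProcess(globalProcess) {
    this.exit = function (code) {
        // does nothing yet
    };
}
```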
After doing that we will get an assertion failure Error: Expected to equal 0, but got: null, as expected. To make the test pass, it would be enough to call globalProcess.exit(0):
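Roughly, inside SimpleProcess (sketch):

```javascript
this.exit = function (code) {
    globalProcess.exit(0);
};
```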
If we run our test suite now, we will get no failures. That is great! Now, we can see that globalProcess.exit(0) is probably not exactly what we want to have there. We ought to pass the code parameter to the exit function. To test-drive this properly, we will have to triangulate, i.e., add another test with a different value of the code parameter:
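A sketch of the second test, mirroring the first one with an exit code of one:

```javascript
this.testDelegatesExitCodeOneToGlobalProcess = function () {
    var exitCode = null;
    var globalProcess = {
        exit: function (code) {
            exitCode = code;
        }
    };

    new SimpleProcess(globalProcess).exit(1);

    t.assertEqual(1, exitCode);
};
```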
That fails as expected: Error: Expected to equal 1, but got: 0. To make it pass, we can either write some weird “if” statement or pass the code parameter through to the globalProcess.exit function. The second option is simpler. According to the third rule of test-driven development, we should go for it:
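Roughly (sketch):

```javascript
this.exit = function (code) {
    globalProcess.exit(code);
};
```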
That change makes our test suite pass. We probably should refactor the test suite to reduce the level of duplication by extracting common variables from the tests:
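One possible shape of the refactored suite; it assumes the framework creates a fresh suite instance for every test, so sharing these variables at the constructor level is safe:

```javascript
// Sketch only.
function SimpleProcessTest(t) {
    var exitCode = null;
    var globalProcess = {
        exit: function (code) {
            exitCode = code;
        }
    };
    var process = new SimpleProcess(globalProcess);

    this.testDelegatesExitCodeZeroToGlobalProcess = function () {
        process.exit(0);
        t.assertEqual(0, exitCode);
    };

    this.testDelegatesExitCodeOneToGlobalProcess = function () {
        process.exit(1);
        t.assertEqual(1, exitCode);
    };
}
```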
At that point, we should move on to tests for the hook installation functionality. Because right now we need at most one hook, we will not support multiple hooks at the same time - only one:
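A sketch of such a test; judging by the “Expected to be called” message below, the original probably uses a spy assertion, while this sketch makes do with a plain boolean flag:

```javascript
this.testCallsInstalledHookOnExit = function () {
    var hookCalled = false;

    process.installHook(function () {
        hookCalled = true;
    });
    process.exit(0);

    t.assertEqual(true, hookCalled);
};
```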
When we run this test, it fails because the installHook function is not defined: TypeError: process.installHook is not a function. So we should define it:
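An empty method inside SimpleProcess will do for now (sketch):

```javascript
this.installHook = function (hookFunction) {
    // does nothing yet
};
```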
Upon running these tests, we get Error: Expected to be called because we didn’t call this hook yet. The simplest way to make it pass is to just call the hook from the installHook function:
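Roughly (sketch):

```javascript
this.installHook = function (hookFunction) {
    hookFunction();
};
```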
While that will make the tests pass, it is not the behavior that we are after. To drive out the correct behavior, we ought to check that the function is being called only after process.exit(..), not earlier. For that we will need a sanity-check assertion:
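A sketch of the extended test with the sanity-check assertion in the middle:

```javascript
this.testCallsInstalledHookOnlyOnExit = function () {
    var hookCalled = false;

    process.installHook(function () {
        hookCalled = true;
    });

    // Sanity check: installing the hook alone must not trigger it.
    t.assertEqual(false, hookCalled);

    process.exit(0);
    t.assertEqual(true, hookCalled);
};
```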
That fails as expected with the error Error: Expected not to be called. To make it pass, we need to store the function in a variable and call it from process.exit(..):
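A sketch of the whole class at this point; note that the hook is invoked before delegating to globalProcess.exit, which will matter later once the hook starts throwing:

```javascript
// Sketch only.
function SimpleProcess(globalProcess) {
    var hook = null;

    this.exit = function (code) {
        if (hook) {
            hook();
        }

        globalProcess.exit(code);
    };

    this.installHook = function (hookFunction) {
        hook = hookFunction;
    };
}
```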
All the tests pass now! Finally, we want to be able to uninstall the hook, so let’s write the test for it:
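A sketch of this test:

```javascript
this.testDoesNotCallUninstalledHook = function () {
    var hookCalled = false;

    process.installHook(function () {
        hookCalled = true;
    });
    process.uninstallHook();

    process.exit(0);

    t.assertEqual(false, hookCalled);
};
```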
To make it work, it is enough to introduce this function and set the hook variable back to null in it:
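Roughly (sketch):

```javascript
this.uninstallHook = function () {
    hook = null;
};
```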
And all the tests will pass. Now we also want to replace the default value of the options.process option with an instance of the SimpleProcess class. All the tests should keep working as before:
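A sketch of the relevant part of runTestSuite; a single shared default SimpleProcess instance (so that nested runTestSuite calls talk to the same hook) is an assumption, as are the require path and the SimpleReporter default:

```javascript
// Sketch only.
var SimpleProcess = require("./simple_process");   // path is an assumption

var defaultProcess = new SimpleProcess(process);

function runTestSuite(testSuiteConstructor, options) {
    options = options || {};
    var reporter = options.reporter || new SimpleReporter();
    var processWrapper = options.process || defaultProcess;

    // ...run the tests, then call processWrapper.exit(...) with the appropriate code...
}
```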
Installing the “verify all tests run” hook
Now, we can get back to our “verify all tests run” test. It still doesn’t fail as expected, so we need to install the hook, count all the tests, count the tests that have already run, and compare the two in the hook:
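A sketch of how this could look inside runTestSuite; every helper used here (getTestNames, runTest, SimpleReporter, defaultProcess, reporter.hasFailures) is an assumption standing in for whatever the framework already has:

```javascript
// Sketch only.
function runTestSuite(testSuiteConstructor, options) {
    options = options || {};
    var reporter = options.reporter || new SimpleReporter();
    var processWrapper = options.process || defaultProcess;

    var testNames = getTestNames(testSuiteConstructor);
    var totalTestCount = testNames.length;
    var executedTestCount = 0;

    if (options.verifyAllTestsRun) {
        processWrapper.installHook(function () {
            if (executedTestCount !== totalTestCount) {
                throw new Error("Expected all tests to run");
            }
        });
    }

    testNames.forEach(function (testName) {
        runTest(testSuiteConstructor, testName, reporter);
        executedTestCount++;
    });

    processWrapper.exit(reporter.hasFailures() ? 1 : 0);
}
```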
At this point, this throws an error Expected all tests to run, and the test finishes without our assertThrow(..) assertion ever seeing it. That happens because we catch this error in the runTest function, where we mark the test as failed, log the error, and ignore the error object itself from there. One way to solve this problem is to have a particular kind of error that can propagate up the stack:
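One way to sketch that: a dedicated error type thrown by the hook, which runTest recognizes and re-throws instead of swallowing. The names here (NotAllTestsRunError, assertions, the reporter methods) are assumptions:

```javascript
// Sketch only.
function NotAllTestsRunError(message) {
    this.message = message;
    this.stack = (new Error(message)).stack;
}
NotAllTestsRunError.prototype = Object.create(Error.prototype);
NotAllTestsRunError.prototype.constructor = NotAllTestsRunError;

// The hook now throws the dedicated error:
//     throw new NotAllTestsRunError("Expected all tests to run");

function runTest(testSuiteConstructor, testName, reporter) {
    var testSuite = new testSuiteConstructor(assertions);   // `assertions` is assumed

    try {
        testSuite[testName]();
        reporter.reportTestPassed(testName);                 // assumed reporter API
    } catch (error) {
        if (error instanceof NotAllTestsRunError) {
            // Let this particular error propagate up the stack.
            throw error;
        }

        reporter.reportTestFailed(testName, error);          // assumed reporter API
    }
}
```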
Now our current test is passing, and the next test is failing with the error Expected all tests to run. That happens because we have not uninstalled the hook as soon as it has triggered. Let’s do that:
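A sketch: the hook uninstalls itself first, and only then verifies the counters:

```javascript
if (options.verifyAllTestsRun) {
    processWrapper.installHook(function () {
        // Uninstall first, so the hook does not leak into the next run.
        processWrapper.uninstallHook();

        if (executedTestCount !== totalTestCount) {
            throw new NotAllTestsRunError("Expected all tests to run");
        }
    });
}
```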
That makes the next test run, succeed, and exit immediately after that with exit code zero. Let’s see what will happen if we put verifyAllTestsRun: true on the top-level test suite here:
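Something along these lines, at the bottom of the test suite file (sketch):

```javascript
runTestSuite(RunTestSuiteTest, {verifyAllTestsRun: true});
```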
That doesn’t work because we re-install a different hook inside of this test and, as soon as this test finishes, we uninstall it. So we have two ways out of this situation: allow multiple hooks, or move that single test to its own test suite file. I think the second option is much simpler. Also, we will add a test for the negative case, where all tests run correctly (when we provide a proper process spy):
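A condensed sketch of what the extracted test suite file might contain; the file name, test names, reporter spy and the shape of the process spies are all assumptions:

```javascript
// verify_all_tests_run_test.js (sketch only)
function VerifyAllTestsRunTest(t) {
    var reporter = new SimpleReporterSpy();               // assumed reporter test double

    this.testFailsWhenNotAllTestsRun = function () {
        t.assertThrow("Expected all tests to run", function () {
            runTestSuite(function (t) {
                this.testWithInnerRunTestSuite = function () {
                    // No process spy: the inner run exits through the shared default process.
                    runTestSuite(function () {}, {reporter: reporter});
                };

                this.testEmptyButShouldAlsoExecute = function () {};
            }, {reporter: reporter, verifyAllTestsRun: true});
        });
    };

    this.testPassesWhenAllTestsRun = function () {
        // SimpleProcess instances over a do-nothing global process serve as process spies here.
        var outerProcessSpy = new SimpleProcess({exit: function () {}});
        var innerProcessSpy = new SimpleProcess({exit: function () {}});

        t.assertNotThrow(function () {
            runTestSuite(function (t) {
                this.testWithInnerRunTestSuite = function () {
                    runTestSuite(function () {}, {reporter: reporter, process: innerProcessSpy});
                };

                this.testEmptyButShouldAlsoExecute = function () {};
            }, {reporter: reporter, verifyAllTestsRun: true, process: outerProcessSpy});
        });
    };
}
```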
And this new test suite passes as expected. Just to double-check that these tests verify anything at all, we can break them (change the expected error message and change assertNotThrow to assertThrow) and see if there is a failure and if it looks as expected:
And it fails as expected, which means that our refactored tests still work as they should.
We have just applied a neat technique here: whenever we do a major refactoring in tests, we need to make sure they are still functioning correctly. For that, we break every single one of them (by changing the assertion or breaking the production code). Then we see if they fail as we expect them to. When they don’t, we know that refactoring didn’t quite work.
Fixing test suites to run all tests
Now we can go back to the RunTestSuiteTest and see if the verification works as expected without that test. And it does: Error: Expected all tests to run. To fix that, we need to provide a process spy in every inner call to runTestSuite. For that, we will first extract {reporter: reporter} as a common variable of the test suite:
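For instance (sketch; the reporter spy construction is whatever the suite already uses):

```javascript
function RunTestSuiteTest(t) {
    var reporter = new SimpleReporterSpy();      // assumed reporter test double
    var options = {reporter: reporter};

    // ...each test now passes `options` into its inner runTestSuite(...) call...
}
```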
And to make the error go away, we can now create a process spy and provide it through the options:
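For example, a SimpleProcess wrapping a do-nothing global process can serve as the spy (sketch):

```javascript
var processSpy = new SimpleProcess({exit: function () {}});
var options = {reporter: reporter, process: processSpy};
```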
If we run the tests now, they all pass. And we can see that they all execute. Now we just need to double-check that all test suites that have inner calls to runTestSuite have the verifyAllTestsRun option enabled. The only other such test suite is the FailureTest. Adding the option does not produce a failure because this test suite already uses a process spy in all inner calls to runTestSuite.
Conclusion
Today we learned that it is tricky to work with process.exit or any other function that can exit our program in the middle of a test. Such functions need to be mocked out completely inside of the tests. Also, we learned that it is possible to make sure we don’t forget to do that. That is quite important because, if we do forget, everything seems to run smoothly, and we don’t know that we made a mistake.
There is still a lot to go through. In the next few episodes we will:
- Report OK and FAIL for each test;
- Output carefully formatted failures to the STDERR;
- Enable our testing framework to run multiple test suite files at once;
- Enable our testing framework to run in a browser (it is JavaScript, after all).
See you reading the next exciting article of the series: “Formatting the Output”!
Thanks
Thank you for reading, my dear reader. If you liked it, please share this article on social networks and follow me on Twitter: @tdd_fellow.
If you have any questions or feedback for me, don’t hesitate to reach out to me on Twitter: @tdd_fellow.