Double Negative

Software, code and things.

Functional testing with Phantom Mochachino

I previously wrote about the internals of Mocha. I was intrigued by Mocha's inner workings because I was in the process of building a functional testing tool - Phantom Mochachino.

Phantom Mochachino is an extension of Mocha which works with the PhantomJS headless web browser to offer an end-to-end functional testing utility. A detailed explanation of how you can get started using Phantom Mochachino can be found at the link above.

I am a big fan of PHP, and the site that I built Phantom Mochachino for is written in PHP. As such, I wrote a PHP script to run my functional tests with.

I am sharing my implementation to demonstrate how you can use Phantom Mochachino to make functional testing fun.


$testRunner = "FunctionalTestRunner.js";

$testFiles = array(
    'RegisterTest.js' => array(
        'paths' => array('/register'), // placeholder paths - substitute your own
        'useCookies' => true,
        'argumentsVar' => 'loginArguments'
    ),
    'LoginTest.js' => array(
        'paths' => array('/login'),
        'useCookies' => true,
        'argumentsVar' => 'loginArguments'
    ),
    'DoActionsTest.js' => array(
        'paths' => array('/actions'),
        'useCookies' => true
    )
);

while (true) {

    $loginArguments = array(
        'username' => substr(md5(rand()), 0, 7), //random username
        'password' => 'password'
    );

    foreach ($testFiles as $testFile => $dataArray) {

        echo "\n";
        echo ">> PHP TEST RUNNER - NEW FILE \n";
        echo ">> RUNNING " . $testFile . "\n";
        echo "\n";

        $testPaths = $dataArray['paths'];

        foreach ($testPaths as $path) {

            echo "\n";
            echo ">> Against " . $path . "\n";
            echo "\n";

            $commandParts = array();
            $commandParts[] = "phantomjs";

            if ($dataArray["useCookies"]) {
                $commandParts[] = "--cookies-file=cookies.txt";
            }

            $commandParts[] = $testRunner;
            $commandParts[] = $testFile;
            $commandParts[] = $path;

            if (isset($dataArray['argumentsVar'])) {
                $arguments = ${$dataArray['argumentsVar']};

                if (count($arguments) > 0) {
                    foreach ($arguments as $argument) {
                        $commandParts[] = $argument;
                    }
                }
            }

            $command = join(" ", $commandParts);

            passthru($command, $response);
        }
    }

    echo "\n";
    echo ">> Sleeping for 10 seconds";
    echo "\n";

    sleep(10);
}


The internals of the Mocha JavaScript testing framework

In an attempt to extend the functionality of Mocha for a specific use case, I needed to look into its source code.

I wanted to write the flow down to make it clearer to myself. Maybe someone else will find this useful too.


Firstly you have the Mocha 'object' which in many respects is a container for all the other constituent parts (outlined below).

An instance of Mocha is exposed as mocha. The mocha instance has some helper methods for setting Mocha up with the correct options - for example, the interface in which you will write your tests (BDD/TDD/qUnit).

One method on the Mocha prototype is run. Calling this instantiates a Reporter and a Runner (amongst other things).


A Runner object runs your tests. Mocha by default creates a base level suite with which the runner is instantiated.

Each suite requires a Context. This context references a Runnable, and provides various prototype methods to set context specific settings. Each Test is a Runnable, as is each Hook.

When you call mocha.run(), it creates this runner, sets various things up, and calls the runner's run prototype method.


A reporter programmatically specifies how the test output will be shown. A surprising amount of Mocha's codebase is different types of reporter. You can output your results in 'Nyan Cat' format if you so wish :)

On creation the Mocha object loads a reporter based on the passed options. When you run mocha the Reporter instance is instantiated, passing the Runner in.


Hooks are hooks. They are pieces of code that can be hooked in at different stages of the process. Mocha has hooks such as before(), after(), beforeEach(), and afterEach().

Behind the scenes, they are set up in exactly the same way as Tests.

Running your tests

Mocha relies heavily on Node's EventEmitter to pass messages around. For example, it is used to tell the reporter that the test suite has started running. When we run our Runner it emits a 'start' message. I won't explicitly mention these messages after this point - essentially, at each stage of the execution an appropriate message is sent, received, and acted upon.

When mocha.setup() is called, a UI is set up. The default is the BDD (behaviour driven development) UI. This essentially defines a number of methods, namely describe(), it(), and the various hook methods. These are what you use to write your tests.


When you call describe a new suite is created. What Mocha does here (if I have understood correctly) to allow for nested suites is very clever.

JavaScript executes synchronously, line after line. Within describe, a suite is created using the suite at the front of the locally available suites array as its parent. Once a suite has been created, it is added to the start of this array.

It then executes the passed in function body of the describe call. If it has nested describe calls, these now use the new suites[0] as their parent when the respective suite is created.

After the function has completed executing, the suite is removed from the suites array.
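The push/run/pop mechanism can be sketched roughly like this (a simplified illustration, not Mocha's actual source):

```javascript
// suites[0] is always the "current" parent suite
var root = { title: '', parent: null, suites: [] };
var suites = [root];

function describe(title, fn) {
    // create the suite with the current front of the array as its parent
    var suite = { title: title, parent: suites[0], suites: [] };
    suites[0].suites.push(suite);

    suites.unshift(suite); // nested describe() calls now see this as suites[0]
    fn();                  // the body runs synchronously, so nesting is preserved
    suites.shift();        // restore the previous parent once the body completes
}

describe('outer', function () {
    describe('inner', function () {});
});
```

Because the body runs synchronously before the shift(), arbitrarily deep nesting falls out of this one small trick.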

Another interesting tidbit is that suites inherit their parents context (ctx).

  var context = function() {};
  context.prototype = parentContext;
  this.ctx = new context();

This allows them to inherit timeout settings amongst other things.
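For illustration, here is how a setting propagates through that prototype chain (the property name is an example of mine, not Mocha's internal one):

```javascript
// a child context reads its parent's settings through the prototype chain
var parentContext = { timeoutMs: 2000 };

var context = function () {};
context.prototype = parentContext;
var childCtx = new context();

// inherited: the lookup falls through to parentContext
var inherited = childCtx.timeoutMs; // 2000

// shadowing: assigning on the child leaves the parent untouched
childCtx.timeoutMs = 500;
```
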


When you call it(), a new Test is created and added to the suite. it() is executed in the context of its suite by using call() from within describe().

The Runner has a runSuite method which is called with Mocha's initial base suite. This method runs the appropriate 'suite level' hooks and then runs any tests (runTests). runTests calls the beforeEach hooks which, on completion, run the test; the test, on completion, runs the afterEach hooks, which in turn run the next test. Once all the tests are completed, the callback from the level above is executed, which recursively executes runSuite on any nested suites.

The runTests method does something similar in the sense that it runs the respective hooks at the right times (beforeEach and afterEach). afterEach is passed a callback which is executed once the hook completes. This callback runs the next test, allowing for test recursion.
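The chaining described above can be sketched as follows (a simplified illustration; the function names are mine, not Mocha's):

```javascript
// Run an array of "tests" strictly in sequence. Each test is a function
// that receives a callback; calling that callback starts the next test.
function runTests(tests, done) {
    function next(i) {
        if (i >= tests.length) return done(); // nothing left: bubble back up
        tests[i](function () { next(i + 1); });
    }
    next(0);
}

var order = [];
runTests([
    function (cb) { order.push('test 1'); cb(); },
    function (cb) { order.push('test 2'); cb(); }
], function () { order.push('all done'); });
```

Hooks slot into the same chain: beforeEach is just another callback-taking step inserted before each test, and afterEach one inserted after it.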

The running of a test involves calling the run method on the test object.

The test object inherits this method from Runnable. Runnable has the method run which sets the test as the runnable within the Context object (initially set at the very lowest level in the instantiation of Mocha).

Now, one of the great features of Mocha is that you can have tests running asynchronous code. The way this works is simple. I initially suspected that it would be done with timers and regular polling. It is in fact a lot cleaner and simpler than that.

The execution of sequential tests, as mentioned above, is controlled by the previous test: within the run() call, fn is a callback which executes the following test. As such, the Runnable run method simply does not execute this callback until the test is complete (you have triggered done()) or until the test times out.
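A rough sketch of that behaviour, based on my reading of the source (the names are illustrative, not Mocha's):

```javascript
// Run a test function; the next test's trigger (onComplete) fires only
// when the test calls done(), and at most once even if done() is called twice.
function runRunnable(testFn, onComplete) {
    var finished = false;
    testFn(function done() {
        if (finished) return; // guard against double completion
        finished = true;
        onComplete();         // only now may the next test start
    });
}

var completions = 0;
runRunnable(function (done) {
    done();
    done(); // deliberately called twice; onComplete still fires once
}, function () { completions++; });
```

No polling is needed: an asynchronous test simply holds on to done() and the whole chain waits until it is invoked (or a timeout fires).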


So we have written our tests and can run them. At each step of the way we emit messages using Node's EventEmitter.

A reporter, in extremely simple terms, listens to these messages and produces an output.

In addition to that, the reporters format the output so that it is useful. Mocha comes with jsDiff so you can see visual diffs where appropriate. They also control important things like the colours of your output, and printing out cat faces.

Further reading

Mocha is quite complex under the hood, and makes use of a number of clever design patterns. This is a general, more in-depth overview of its internals. The next step is to read the source code - explaining all the fine details in written text would be more challenging than the concepts themselves.

If anything is not clear please do let me know.

PHPUnit, DBUnit and their quirks

I utilize PHPUnit for my backend testing and have noticed a number of things whilst using it. I have outlined these below - hopefully they will help someone.

DBUnit, composite datasets and foreign keys

There is a bug in the PHPUnit source code which means that composite datasets are truncated in the same order that they are created. If there are foreign key constraints between these tables, you will encounter a number of errors.

I have fixed this issue, and created a pull request here.

Properly tearing down database tests

Make sure you correctly tear down your database connections, otherwise you may encounter various errors. I overload tearDown() to do this. Make sure that you call the parent method so that any operation you define within getTearDownOperation() is called appropriately.

    public function tearDown() {
        $this->dbh = null;
        parent::tearDown();
    }

Open Files

If you have a lot of tests you may encounter the "Too many open files" error.
You can fix this by changing the number of files a process can open using ulimit -n 5000

Test annotations

An annotation block for a PHPUnit test is as follows:

    /**
     * @test
     */

It is not simply a comment block - the first line must have an extra asterisk. This is the same as for docblocks in phpDocumentor.

Test Size Annotations

PHPUnit provides the annotations @medium and @large which indicate the 'size' of a test. Unannotated tests are considered small.

You can configure the test runner to timeout if these tests take longer than expected.
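As a sketch, that configuration lives in phpunit.xml. The attribute names below are from the PHPUnit documentation, but enforcement also requires the pcntl extension, so check them against your version:

```xml
<phpunit enforceTimeLimit="true"
         timeoutForSmallTests="1"
         timeoutForMediumTests="10"
         timeoutForLargeTests="60">
    <!-- test suites, etc. -->
</phpunit>
```
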


The curveball in the mix is that as of version 4.2, all database tests are hard coded as large tests within the source code. I personally think that this should be changed such that database tests default to being 'large' but can still have their size overridden.

For now you'll either have to edit the source code yourself or run database testing independently of any other test setup where 'large' tests have a different nature.

If anyone else is aware of any quirks that it would be worth bringing to the attention of others, please let me know.

PhantomJS, Mocha, and Chai for functional testing

I have been playing around with a number of open source projects pertaining to testing different aspects of a web-based application. Over the past few days I have been experimenting with PhantomJS, Mocha, and Chai.

What is PhantomJS?

PhantomJS is a full stack headless web browser based on WebKit - the same browser engine used by Safari (and the one from which Chrome's Blink engine was forked).

ZombieJS was the other option that I considered. The difference is that Zombie works with JSDOM, a JavaScript implementation of the DOM.

I opted to use PhantomJS because Zombie is not a particularly stable product (in my opinion), and having tested both the 1.4.1 version and the alpha 2.0.0 version I encountered a number of issues. The biggest problem for me was that it did not work very well with my complex shimming of externally loaded Javascript files, nor did it play nicely with ReactJS.

The other obvious consideration is that I believe tests should be run in as real an environment as possible.

PhantomJS is very good - it is easy to install and set up, but the documentation is a little sparse.

What is Mocha?

Mocha is a JavaScript testing framework that can be used with Node OR in the browser. I want to use it in the browser, my browser being a PhantomJS browser instance.

Mocha allows you to hook in at various points to make sure for example that the necessary setup is complete before running your tests. It also has a really nice way of dealing with code which executes asynchronously. It is actively developed and has a good community around it.

before(function(done) {
    //run asynchronous setup

    //tell Mocha when you are done
    done();
});

//test code

What is Chai?

Chai is an assertion library. It essentially provides methods that can be used to assert that what you get is what you expect. Mocha and Chai work extremely well together.

I want to use Chai because it is extremely readable (BDD constructs) and very well documented.


Combining the three

This is where my adventure got a little tougher.

I want to essentially load a webpage (the page under test), execute a number of commands, and then assert that they did what I expected.

It is simple enough to create a PhantomJS browser instance and load a page, but how does one then load both Mocha and Chai and manipulate the page in a testable way?

When using Node you can simply require() dependencies; because we are driving PhantomJS from the command line, we cannot.

There is a PhantomJS runner available called mocha-phantomjs; however, I found it to be somewhat constraining. You call a file which contains the code you want to test, along with the libraries you want to test it with, and these are then run. I can see this being useful for unit testing, but I want to test an already built page without needing to adapt it for testing. It essentially takes control of the browser (PhantomJS) piece of the puzzle, which in my case is unsuitable.

My approach

PhantomJS has a webpage module that has an injectJs() method. I chose to utilize this to inject my test code (and all its requirements) into my page under test. This means I can utilize jQuery (which is already loaded on my page) to manipulate the DOM and access the elements, properties, and values that I want to test.

PhantomJS also provides a method on the client side, callPhantom(). This allows you to callback to the Phantom instance where it triggers the callback that you setup on page.onCallback().

As such my approach is to:

  • Run a PhantomJS browser instance and load the page I want to test
  • Inject my tests
  • Run my tests using Mocha and Chai
  • Pass the formatted response back to PhantomJS
  • Output the results on the command line.


Given the above, my execution is as follows:

var page = require("webpage").create();
var args = require('system').args;

//pass in the name of the file that contains your tests
var testFile = args[1];
//pass in the url you are testing
var pageAddress = args[2];

if (typeof testFile === 'undefined') {
    console.error("Did not specify a test file");
    phantom.exit(1);
}

page.open(pageAddress, function(status) {
    if (status !== 'success') {
        console.error("Failed to open", page.frameUrl);
        phantom.exit(1);
    }

    //Inject mocha and chai
    page.injectJs("../node_modules/mocha/mocha.js");
    page.injectJs("../node_modules/chai/chai.js");

    //inject your test reporter (placeholder filename)
    page.injectJs("Reporter.js");

    //inject your tests
    page.injectJs("mocha/" + testFile);

    page.evaluate(function() {
        mocha.run();
    });
});

page.onCallback = function(data) {
    data.message && console.log(data.message);
    data.exit && phantom.exit();
};

page.onConsoleMessage = function(msg, lineNum, sourceId) {
    console.log('CONSOLE: ' + msg + ' (from line #' + lineNum + ' in "' + sourceId + '")');
};
The only bit of the above code that I have yet to explain is reporters. Mocha provides a number of reporters for formatting your test results. Because of the nature of this setup you cannot simply use Mocha's reporters - you have to build your own. This is one benefit of mocha-phantomjs (see above): the author has successfully ported over the reporters for you to use.

My basic implementation of a reporter is as follows:

(function() {

    var color = Mocha.reporters.Base.color;

    function log() {

        var args = Array.apply(null, arguments);

        if (window.callPhantom) {
            window.callPhantom({ message: args.join(" ") });
        } else {
            console.log( args.join(" ") );
        }
    }

    var Reporter = function(runner) {
        Mocha.reporters.Base.call(this, runner);

        var out = [];
        var stats = { suites: 0, tests: 0, passes: 0, pending: 0, failures: 0 };

        runner.on('start', function() {
            stats.start = new Date;
            out.push([ "Testing", window.location.href, "\n" ]);
        });

        runner.on('suite', function(suite) {
            stats.suites++;
            out.push([ suite.title, "\n" ]);
        });

        runner.on('test', function(test) {
            stats.tests++;
        });

        runner.on("pass", function(test) {
            stats.passes++;
            if ('fast' == test.speed) {
                out.push([ color('checkmark', '  ✓ '), test.title, "\n" ]);
            } else {
                out.push([
                    color('checkmark', '  ✓ '),
                    test.title,
                    color(test.speed, test.duration + "ms"),
                    "\n"
                ]);
            }
        });

        runner.on('fail', function(test, err) {
            stats.failures++;
            out.push([ color('fail', '  × '), color('fail', test.title), ":\n    ", err, "\n" ]);
        });

        runner.on("end", function() {

            stats.end = new Date;
            stats.duration = new Date - stats.start;

            out.push([ stats.tests, "tests ran in", stats.duration, "ms" ]);
            out.push([ color('checkmark', stats.passes), "passed and", color('fail', stats.failures), "failed" ]);

            while (out.length) {
                log.apply(null, out.shift());
            }

            if (window.callPhantom) {
                window.callPhantom({ exit: true });
            }
        });
    };

    mocha.setup({
        ui: 'bdd',
        ignoreLeaks: true,
        reporter: Reporter
    });

})();

When I was playing with ZombieJS, my usage of React caused a number of issues. In my mind this was understandable - given how React works with the virtual DOM, I figured that a JavaScript DOM implementation might have problems with it.

There was however an issue using React with PhantomJS. This is outlined in detail here - you just need to polyfill the bind method. It occurs because PhantomJS uses an old version of WebKit. PhantomJS 2.0 will be coming at some point, which should resolve the issue. That update (when it comes) may change callPhantom() (discussed above), as the documentation notes that it is an experimental API.
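For completeness, the idea behind such a polyfill looks roughly like this (simplified from the common MDN-style pattern; real polyfills also handle construction with new):

```javascript
// Build a function that calls fn with a fixed `this` and optional preset args
function bindPolyfill(fn, thisArg /*, ...presetArgs */) {
    var preset = Array.prototype.slice.call(arguments, 2);
    return function () {
        var args = preset.concat(Array.prototype.slice.call(arguments));
        return fn.apply(thisArg, args);
    };
}

// Only install it where the native method is missing (e.g. old WebKit)
if (!Function.prototype.bind) {
    Function.prototype.bind = function (thisArg) {
        var args = [this].concat(Array.prototype.slice.call(arguments));
        return bindPolyfill.apply(null, args);
    };
}
```
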


Hopefully you find the above helpful. I'd be interested to hear people's thoughts on this approach, as well as any suggestions for improvements.