Saturday, March 14, 2015

A Karma, Mocha, Gulp and Browserify Workflow

The number of front-end frameworks at your disposal today is, frankly, obscene. What I hope to offer here is a workflow that leverages a slice of these frameworks to help you build a structured, tested web application. This isn't a guide for how to create and publish a node module from scratch, but an explanation of why I chose this set of frameworks. The goals are broken down as follows:

  • NodeJS Modules and npm packaging
  • Unit Testing with Mocha, Chai, and Karma
  • Running node modules in the browser with Browserify
  • Running automated tests against multiple browsers with Karma
  • More automation with Gulp

We'll save the deep dives into each component and its alternatives for future articles. Since the scope of this article is the workflow itself, you'll get just a high-level understanding of each piece.

Prerequisites: Clone the example-browserify and example-node-karma-gulp repos from my GitHub account. The following guide was written for Mac OS X users.

Packaging

Now, we won't be running any JavaScript server-side, but we will be following node module conventions. This will make our code more consistent with the thousands of modules in the npm registry. If you'd like a deeper understanding of node, npm, and why these conventions exist in the first place, see this article.

With the help of browserify, you'll have access to functions like require and the module.exports object in the browser itself. First, you'll have to clone the example-node-karma-gulp repo. This is a utility module written in node that we'll be importing into another repository.
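To make the convention concrete, here's a generic sketch of defining and importing a node-style module (these two files are illustrative, not part of either repo):

// add.js: define a module by assigning to module.exports
module.exports = function add(a, b) {
  return a + b;
};

// main.js: import it with require
var add = require('./add');
console.log(add(2, 3)); // 5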

Let's first take a look at what's contained in this directory.

|~example-node-karma-gulp/
| |~bench/
| | `-utils.js
| |~src/
| | `-utils.js
| |~test/
| | `-utils_test.js
| |-gulpfile.js
| |-index.js
| |-karma.conf.js
| |-package.json
| `-README.md

The first item we want to go over is the package.json file:

{
  "name": "example-node-karma-gulp",
  "author": "Rob Layton",
  "email": "",
  "version": "0.26.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "gulp test"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/roblayton/example-node-karma-gulp"
  },
  "keywords": [
    "html5",
    "canvas",
    "javascript"
  ],
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/roblayton/example-node-karma-gulp/issues"
  },
  "homepage": "https://github.com/roblayton/example-node-karma-gulp",
  "devDependencies": {
    "benchmark": "^1.0.0",
    "chai": "^2.1.1",
    "exec-sync": "^0.1.6",
    "gulp": "^3.8.11",
    "gulp-bench": "^1.1.0",
    "gulp-bump": "^0.2.2",
    "gulp-git": "^1.1.0",
    "gulp-jshint": "^1.9.2",
    "gulp-karma": "0.0.4",
    "karma": "^0.12.31",
    "karma-browserify": "^4.0.0",
    "karma-chrome-launcher": "^0.1.7",
    "karma-firefox-launcher": "^0.1.4",
    "karma-mocha": "^0.1.10",
    "karma-opera-launcher": "^0.1.0",
    "karma-safari-launcher": "^0.1.1",
    "mocha": "^2.2.0"
  }
}

This file is generated by running npm init at the root of the repository. You'll be presented with a series of prompts, and then you're free to modify the file any way you like from that point on. You may notice that my file has a longer list of devDependencies. The easiest way to populate this list is to add the --save-dev flag anytime you run npm install <MODULE> from within the repo. Not only will the module be added to your node_modules folder, but the package.json will get updated with the module and version you installed.

Think of this as the manifest of dependencies for your application. All someone has to do after cloning your repo is run npm install with no flags and all of the dependencies indicated here will be pulled in. This also means that you can add node_modules to your .gitignore.

Let's run npm install now to pull in all of the required modules.

Gulp

Gulp is a streaming build system for JavaScript applications that has been poised to overtake Grunt since its inception. Other build systems include Cake and Broccoli. Their aim is to automate build steps, in the same vein as Ant, Fabric, and SBT. I prefer gulp over the other task runners for its code-based task definition system. This differs from Grunt's configuration-style approach, which can get a little verbose (you always have to indicate a source and destination) and is hard to mix with other bits of code.

Gulp plugins are essentially through (transform) streams that read in and output data. Similar to Unix pipes, everything is processed in memory, with the output of one stream getting piped into the next. Gulp's in-memory operations have the added benefit of being less expensive than Grunt's file I/O.
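To make that concrete, here's a minimal gulpfile sketch in that style (the lint task and globs are assumptions for illustration; see the repo's gulpfile.js for the real tasks):

var gulp = require('gulp');
var jshint = require('gulp-jshint');

// A hypothetical lint task: each .pipe() call hands the in-memory
// stream of files to the next plugin, no temp files required.
gulp.task('lint', function() {
  return gulp.src(['src/**/*.js', 'test/**/*.js']) // read files into a stream
    .pipe(jshint())                                // run them through the linter
    .pipe(jshint.reporter('default'));             // pipe results to a reporter
});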

Now, it's up to you how you want to define your API. You can run gulp commands directly, or you can leverage the "scripts" property of your package.json file to abstract away what actually gets called. So whether you prefer Grunt or gulp, the API is still an npm command, which is easier for your team and your CI tool. That's ultimately what good workflows aim for: abstraction and the delegation of tasks, isolating each component so it can focus on what it does best.

Before we move onto the next section, let's take a look at some numbers.

Framework Popularity

Repo        Stars   Forks  Contributors  Issues Open  Issues Closed  PRs Open  PRs Closed
Ant            41      33            11          N/A            N/A         2           5
Grunt       9,224   1,094            54          100            836        21         345
Broccoli    1,974     132            24           34             90        17          97
Gulp       12,277     996           125           40            627         3         289

So, going off of the data at this point in time: while the Ant project has the highest number of commits, most of those have come from a single contributor. And gulp, being the new kid on the block, has already surpassed Grunt in stars from the community.

Testing with Mocha, Chai, and Karma

The unit tests for the module can and should run against the module itself. You wouldn't want to write your module in such a way that you'd have to import it into an app before you could test it. We've gone with Mocha here because it's the simplest to set up and the most flexible and intuitive. Alternatives include Jasmine and QUnit. Also worth mentioning: QUnit is marketed as a unit testing framework, while Jasmine is a behavior-driven testing framework.

Anyway, in typical TDD (Test-Driven Development) fashion, we write the unit test first. You'll notice that this test depends on chai to run. Chai is a BDD/TDD assertion library that gives you access to functions like expect. Alternatives include should, which can be installed through the npm registry.

'use strict';

var expect = require('chai').expect;
var utils = require('../src/utils');

describe("utils", function() {
  var arr;

  beforeEach(function() {
    arr = [3, 5, 6, 7];
  });
  
  afterEach(function() {
    arr = null;
  });

  describe("remove", function() {
    it("remove the indicated element", function() {
      expect(utils.remove(arr, 5)).to.eql([3, 6, 7]);
    });
  });
});

The TDD workflow encourages you to write your unit tests first and run them so they fail. Then you write your implementation to get the tests to pass. To run these tests, we'll be using the Karma test runner, which can be launched from the command line. This is a much better alternative to loading your test scripts in the browser and running them manually. In addition, Karma can be configured to run in the background as you make changes to your files, and it can launch multiple browsers concurrently.

You can run the unit tests for this repo with the gulp test or npm test command. They will all end up passing because I've already written the implementation. Also, feel free to go through the karma.conf.js file and read Karma's documentation to get a better idea of all the customization options.
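As a rough sketch of what that file covers (the values below are assumptions based on this repo's devDependencies; the actual karma.conf.js may differ):

module.exports = function(config) {
  config.set({
    frameworks: ['browserify', 'mocha'],    // from karma-browserify and karma-mocha
    files: ['test/**/*_test.js'],
    preprocessors: {
      'test/**/*_test.js': ['browserify']   // bundle require() calls for the browser
    },
    browsers: ['Chrome', 'Firefox'],        // launchers installed as devDependencies
    singleRun: true                         // run once and exit; false to keep watching
  });
};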

If you're interested in better coverage, you'll eventually want to look at Sinon, which will allow you to mock and stub different components. At some point, your tests are going to depend on the DOM, objects attached to the window like console, and so on. Rather than pull these actual objects in, which comes with a lot of overhead and can be hard to clean up between tests, you'll want to mock them. For example, to ensure that your function calls console.warn, you would replace console.warn with a fake that records calls and simply assert that the warn method was called.
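Sinon isn't among this repo's devDependencies, but a minimal sketch of that console.warn case might look like the following (removeOrWarn is a hypothetical function invented for this example):

var sinon = require('sinon');
var expect = require('chai').expect;
var utils = require('../src/utils');

// Hypothetical function under test: warns instead of failing on an empty array.
function removeOrWarn(arr, elem) {
  if (arr.length === 0) {
    console.warn('nothing to remove');
    return arr;
  }
  return utils.remove(arr, elem);
}

it("warns when given an empty array", function() {
  var warn = sinon.stub(console, 'warn');  // swap the real method for a recording fake

  removeOrWarn([], 5);

  expect(warn.calledOnce).to.be.true;
  warn.restore();                          // put the real console.warn back
});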

You'll also want to look at supertest, which makes it easy to test HTTP requests against your app. All of these libraries can be installed by running npm install <MODULE> --save-dev. Don't forget the --save-dev flag; it ensures the module is recorded in your package.json file.
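Neither supertest nor express ships with this repo, but typical usage looks something like this sketch:

var request = require('supertest');
var express = require('express');

// A tiny express app stands in for whatever server you'd be testing.
var app = express();
app.get('/ping', function(req, res) {
  res.status(200).send('pong');
});

describe("GET /ping", function() {
  it("responds with pong", function(done) {
    request(app)
      .get('/ping')
      .expect(200, 'pong', done);  // assert status and body, then signal mocha
  });
});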

The Node Module

The module itself lives inside the src folder and the unit tests live inside the test folder. All tests are named using the <name>_test.js convention.

If you were to open the utils.js file, you'd see an object that contains only one function, for removing an element from an array. That's because I intend to add a few more functions to this object in the future.

// src/utils.js

'use strict';

// Removes the first occurrence of elem from arr, mutating it in place.
module.exports = {
  remove: function(arr, elem) {
    for (var i = 0, len = arr.length; i < len; i++) {
      if (arr[i] === elem) {
        arr.splice(i, 1);
        return arr;
      }
    }
    return arr;
  }
};

Benchmarking

We keep our benchmarks inside the bench directory. All we're measuring here is how fast our single utils.remove method runs. If you'd like to compare two different functions, see the grunt-benchmark documentation on setting up a test suite (there's also a sketch after the benchmark output below).

'use strict';

var utils = require('../src/utils');
var arr = [1, 2, 3, 6, 4, 5, 7, 8, 9];

module.exports = {
  name: 'utils#remove',
  maxTime: 2,
  onComplete: function() {
    arr = null;
  },
  fn: function() {
    utils.remove(arr, 9);
  }
};

Here is our output after running gulp benchmark.

[20:13:50] Using gulpfile ~/repos/example-node-karma-gulp/gulpfile.js
[20:13:50] Starting 'benchmark'...
[20:13:50] Running benchmark utils#remove [/Users/roblayton/repos/example-node-karma-gulp/bench/utils.js]...
[20:13:53] utils#remove x 35,636,268 ops/sec ±2.07% (37 runs sampled)
[20:13:53] Finished 'benchmark' after 2.34 s
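To compare two implementations head-to-head, you'd export a suite instead of a single function. Here's a sketch in the grunt-benchmark suite format (filterRemove is a hypothetical alternative written just for comparison; check the gulp-bench docs for exact suite support):

'use strict';

var utils = require('../src/utils');
var arr = [1, 2, 3, 6, 4, 5, 7, 8, 9];

// Hypothetical alternative implementation to benchmark against.
function filterRemove(a, elem) {
  var found = false;
  return a.filter(function(x) {
    if (!found && x === elem) {
      found = true;
      return false;
    }
    return true;
  });
}

module.exports = {
  name: 'remove implementations',
  tests: {
    'splice-based': function() { utils.remove(arr, 9); },
    'filter-based': function() { filterRemove(arr, 9); }
  }
};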

Benchmarks are another way to gauge code quality and improve performance. With benchmarking in place, it's a lot easier to decide which implementation works best. It's up to you how often you'd like to run these. Depending on how long they take, you might not want to run them as often as your unit and e2e tests through your CI tool, but they should run periodically. You can opt to run them after you tag a release rather than every time you push to the develop branch, for instance.

There's also a gulp task for linting included in the gulpfile, as another way to improve code quality.

Browserify

What I like most about Browserify over RequireJS and CommonJS is that it's a module definition system that doesn't try to come up with its own conventions. Rather, it leverages Node's, allowing the code you write to fit in well with that ecosystem. You've already seen how simple it is to define and import modules; there's little more to learn here.

Now that we have our node module tested, benchmarked, and ready to go, it's time to run it in a browser. Let's clone a repo that will be leveraging our utils module. You can find it on my GitHub.

The next thing you'll want to do is install our utils module through npm by running npm install example-node-karma-gulp from the root of the example-browserify directory. Then, open the index file in the browser and you should see the following:

Original array contains: 1,2,3,4
New array contains: 1,3,4 after using utils.remove()

The index file is actually pulling in a file that was built by running browserify against index.js. The exact command was browserify index.js -o built.js -d, which bundled all of the necessary assets into a single browser-compatible file (the -d flag includes source maps for debugging).
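For reference, here's a sketch of what that index.js might contain (the import path and variable names are assumptions; see the example-browserify repo for the real file):

// index.js: the file browserify bundles into built.js
var utils = require('example-node-karma-gulp');  // assumed to expose the utils object

var original = [1, 2, 3, 4];
var removed = utils.remove(original.slice(), 2); // copy first so both arrays survive

document.body.innerHTML =
  'Original array contains: ' + original.join(',') + '<br>' +
  'New array contains: ' + removed.join(',') + ' after using utils.remove()';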

5 comments:

  1. Hi, there is a little error in your overall nice article. Please note that CommonJS is actually the format Node.js uses with its require and module.exports. So in the sentence "What I like most about Browserify over RequireJS and CommonJS is that it's a module definition system that doesn't try to come up with its own conventions," keep in mind that CommonJS is actually the convention used in Node.js.

    1. Thanks for the kind words. I was saying I like Browserify because it isn't trying to come up with its own conventions for module definition and importing. RequireJS and CommonJS do, though. RequireJS has its own syntax, and CommonJS has its own spec. Node implemented the CommonJS spec. Browserify could have come up with its own syntax, but it opted to just bring the Node style of requiring and importing to the browser.

      Browserify's website says the following about their approach:
      "Browsers don't have the require method defined, but Node.js does. With Browserify you can write code that uses require in the same way that you would use it in Node."

      I hope that makes sense.

  2. Just as an idea (I've spent the last 3 weeks pulling my hair out trying to make any number of combinations of these tools work): it'd be great if there were a central site with articles like yours for each combination of tools. I like your stack, but I use grunt, someone else uses a different assertion library, etc. Eventually one could just go download a working project that's kept up to date. That would allow many of us who frankly don't care to burn so much time figuring out so much technology just to test a project to move right along. Thank you for the article (I'm still digesting it).

    1. That really is a good idea: some sort of modular documentation or boilerplate generator where you can mix and match. Yeoman might be able to help you with some of your needs: http://yeoman.io/. Feel free to post a comment here if you hit any walls and I'll do my best to help you get through it. I've probably run into them myself.

  3. like `git clone .../stack-karma-mocha-chai-browserify-grunt-phantomjs` would be awesome
