Saturday, June 7, 2014

How to speed up resource loading when using Require.js

If you're building a single-page AJAX application using Require.js, odds are your boot sequence for loading the application in production looks something like this:
  • First, your index.html gets loaded.
  • At the head of your index.html you link CSS resources which get loaded.
  • There's a script tag referencing Require.js, then some inline script for configuring it and finally you kick it off by specifying which modules should be loaded.
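In markup, such a boot sequence looks roughly like this (a sketch with placeholder file names, not our actual index.html):

    <!DOCTYPE html>
    <html>
        <head>
            <!-- CSS referenced from the head -->
            <link rel="stylesheet" href="/static/app.css">

            <!-- the loader itself -->
            <script src="/static/require.js"></script>

            <!-- inline configuration, then kick off the main modules -->
            <script>
                require.config({ baseUrl: "/static" });
                require(["libs", "app"], function() {
                    // the application boots from here
                });
            </script>
        </head>
        <body></body>
    </html>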
This is pretty much what booting our application at Speakap looks like and it results in the following graph for loading resources:


Here you see the index.html getting loaded at the top from speakap.speakap.com. The CSS gets loaded next, and require.js is fetched in parallel because both are referenced straight from the index.html (note that in the graph above, all static file names are tagged with an MD5 hash for versioning). The file nr-411.min.js can be ignored in this discussion: it's used for performance analysis and error reporting by New Relic and is not a module of our own application. Finally, we see our two modules - app.js and libs.js - being loaded by Require.js.

This is not everything that happens during boot, but it's the main part. With the resources you saw above, the application is able to render its skeleton, and from a user's perspective the application has loaded and is visible, though data fetching still has to occur. This all happens in under a second on a typical DSL connection (the graph above was measured from my home connection). That's certainly not bad, but there's room for improvement.

The most obvious way to improve the boot time is to improve the parallelization of resource loading. Unfortunately, in the current setup the inline script which configures Require.js and tells it which modules to load is blocked by both the loading of the CSS (see here if you want to know why) and the loading of Require.js itself. Because of this, app.js and libs.js cannot be loaded by Require.js until after those two files have been loaded. Fortunately, the workaround is easy: who says these two modules have to be loaded by Require.js? We already know upfront that these two modules will be loaded, so why not just reference them straight from the HTML with a good old script tag, preferably with an async attribute?
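Concretely, the head then looks something like this (again a sketch with placeholder file names; the bundles themselves are unchanged):

    <head>
        <link rel="stylesheet" href="/static/app.css">

        <!-- let the browser start fetching the bundles right away,
             in parallel with the CSS and with require.js itself -->
        <script src="/static/libs.js" async></script>
        <script src="/static/app.js" async></script>

        <script src="/static/require.js"></script>
        <script>
            require.config({ baseUrl: "/static" });
            require(["libs", "app"], function() {
                // the application boots from here
            });
        </script>
    </head>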

This is exactly what I did, and the result is that app.js and libs.js are now fetched in parallel with the CSS and require.js. In a development setup (no latency) this saved me about 80ms, or about 15% of total page load time. In the common case where latency is an issue, the savings are even larger.

Before all is said and done, however, there is one important caveat that will ruin the plan if you're using a vanilla Require.js library. Require.js expects to be in full control of the loaded libraries and will happily ignore the fact that the modules it is instructed to load are already being loaded from a plain script tag. The result is that we end up with two script tags for the same module, and depending on the order in which they are actually loaded, global state may be overwritten, causing intermittent JavaScript errors. The fix to this problem is a rather simple check that makes Require.js aware of the already existing script tags and re-uses them when applicable. In the hope this might be useful to others, I have created a Pull Request for Require.js which adds this fix. I hope it gets merged into Require.js proper, but until then, feel free to try it out!
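To give an idea of the direction, the check boils down to something like this (an illustration of the idea, not the code from the Pull Request):

    // Illustration of the idea, not the actual patch: before appending a new
    // <script> node for a module, check whether one with the same URL is
    // already in the document, and hook the load listeners onto that one.
    function findExistingScript(url) {
        var scripts = document.getElementsByTagName("script");
        for (var i = 0; i < scripts.length; i++) {
            // script.src is always an absolute URL, so compare against a
            // resolved URL rather than the raw module path.
            if (scripts[i].src === url) {
                return scripts[i];
            }
        }
        return null;
    }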

Monday, February 3, 2014

C.I. Joe 0.0.3 "United We Stand" released

After a month of intense work and a big redesign — both mechanically and aesthetically — the time has come to release C.I. Joe 0.0.3 "United We Stand".

New in this release is a great C.I. Joe themed redesign created by my colleague Dennis de Jong (any possible remaining styling issues are due to my lack of CSS skills, not Dennis' design):



In addition, the concept of campaigns has been introduced as seen above. Campaigns provide an easy-to-use mechanism for scheduling multiple missions at once, to be executed one after another or in parallel.

Finally, I've made a start on writing the in-application Help manual.

As with previous releases, you should be aware this is an alpha release and far from feature complete. But for those wanting to try it out, you can download the new release here: https://github.com/arendjr/CI-Joe/releases/download/v0.0.3/ci-joe-0.0.3.tar.gz

Join, follow or contribute on GitHub: https://github.com/arendjr/

Friday, December 27, 2013

C.I. Joe 0.0.1 "The Germ" released

For a little while, I've been working on a side project called C.I. Joe, and today I'm ready to announce its very first alpha release, codenamed "The Germ".

This release is marked more by what it doesn't do than by what it actually does. So, to avoid any further disappointment, let's first get out of the way what it doesn't do:

  • No Scheduling
    Currently, every job you want to start has to be started manually. Better set your own watch.
  • No Version Control
    There's no integration with version control systems like Git or SVN yet. However, nothing is stopping you from making your own clone or checkout in the mission workspace and issuing a git pull or svn up at the start of a mission.
  • No Security
    Right now everything is publicly accessible, so anyone who can reach the server can do whatever he likes with it. So don't connect C.I. Joe to the internet just yet.
  • No Slave Configuration
    C.I. Joe comes with a single local slave pre-configured. If you edit the configuration, you can add additional slaves, but there's no UI for this yet, and only local slaves are supported thus far.
  • No Artifact Storage
    So far, every job you run will run in the same workspace, and no artifacts are kept from previous jobs. Console output from jobs is stored only in memory, so as soon as the server is restarted, all output is gone.

Having said all that, this very first release marks an important milestone in the development of C.I. Joe. Even though it's not yet fully featured, a functional master-slave architecture has been developed. There's an HTML5 web frontend that automatically updates in real-time. There's the beginning of a REST API, which is already used for configuring missions and starting jobs. The master communicates with slaves as well as with the web frontend through Socket.io connections. There is a testing framework, and the first integration test has been written to verify that a mission can be configured and run. The release process has been automated, including a build process that optimizes sources and removes development artifacts before distribution.

All of this means that from here on, development should speed up, and there can now be an actual prioritization of the features to develop next.

If you are interested in seeing what's next, check out the project page: https://github.com/arendjr/CI-Joe. Feel free to file issues with suggestions about where you would like this project to go next.

Finally, if you want to try out this very limited alpha yourself, here is the download link: https://github.com/arendjr/CI-Joe/raw/master/dist/ci-joe-0.0.1.tar.gz

Happy New Year!

Saturday, November 23, 2013

Writing a Backbone application without Backbone.js

Introduction

Recently, I was in the position of setting up the stack to use for one of my side projects. The project contains a single-page HTML5 frontend, so naturally the choice of which JavaScript MVC framework to use came up. I had quite a bit of experience using Backbone.js from my day job and I mostly like using it, so it was a natural choice. Still, I couldn't help but feel it had a few shortcomings too, as well as a decent amount of functionality that I never used anyway. Backbone.js would be a good start for me, but not a perfect match either.

Meanwhile, I also have another side project called Laces.js. It is a model micro-library and, in my opinion, its model implementation beats Backbone's hands down. However, there's a large gap in functionality between a micro-library that just provides a model and the complete framework provided by Backbone.js. Still, it got me thinking: how much work would it be to cross that gap, and what would the end result look like? I like programming with Backbone, but I also wanted something a bit more to my taste, and I really wanted to use Laces' models. I decided to take the plunge.

In this post I will describe, piece by piece, how I managed to replace Backbone.js with something eerily similar, yet feeling a lot more modern and improved in many ways. First, I will describe how I replaced Backbone Model and Backbone Collection with custom implementations based on Laces Model and Laces Array. Next, I will describe my replacement for Backbone View and show some of the potential of how it all comes together using my own ContinuousPager class. Finally, before I wrap up with a conclusion, I will give a little insight into how I took Backbone Router and used it in a more standalone fashion. But first...

Extending

Everyone who has worked a great deal with Backbone.js knows how to use the extend() method to create subclasses of models, collections and views. Personally, I've gotten used to this method of subclassing and I like how it works. I'm not going to throw out the baby with the bath water, so I wanted to keep this method of subclassing. Fortunately, it was easy to port using the following snippet, taken directly from Backbone.js:
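(This is the version from a Backbone.js 1.x-era source; it relies on LoDash.js or Underscore.js being loaded as _.)

    var extend = function(protoProps, staticProps) {
        var parent = this;
        var child;

        // Use the constructor from protoProps if one is given, otherwise
        // default to one that simply invokes the parent's constructor.
        if (protoProps && _.has(protoProps, "constructor")) {
            child = protoProps.constructor;
        } else {
            child = function() { return parent.apply(this, arguments); };
        }

        // Copy static properties over to the child.
        _.extend(child, parent, staticProps);

        // Set up the prototype chain without calling the parent's constructor.
        var Surrogate = function() { this.constructor = child; };
        Surrogate.prototype = parent.prototype;
        child.prototype = new Surrogate();

        // Mix the instance properties into the child's prototype.
        if (protoProps) {
            _.extend(child.prototype, protoProps);
        }

        // Convenience reference to the parent's prototype, for super calls.
        child.__super__ = parent.prototype;

        return child;
    };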

Models and Collections

My experience with Backbone Model and Backbone Collection has always been that I would create my own Model and Collection subclasses to handle common functionality across models and collections that is not found in Backbone.js itself. This time, I would again create my own Model and Collection classes, but there would be no Backbone Model and no Backbone Collection to start from. This meant I would have to implement more functionality myself, but it also meant I could improve on some things I didn't like about Backbone and I could integrate Laces.js into the mix. Specifically, there were a number of things I wanted to change:
  • Backbone's models do not support dot-notation for accessing attributes, instead developers always have to call the get() method. By letting my Model inherit from Laces Model, I automatically get support for using dot-notation. Having support for dot-notation means I never have to use toJSON() anymore before feeding my model to a template, but more on that later. And also note that Laces Model automatically has support for nested attributes, something for which Backbone would require use of the Backbone Deep Model plugin.
  • One of my main annoyances with Backbone was that Model and Collection were completely separated. In my experience, there's typically quite a bit of common ground between the two and there are often situations where you want to set attributes of a collection as if it were a model. For example, when a collection contains a subset of models stored on the server, you may want to have an attribute containing the total number of models on the server. Or when a collection contains search results, you may want to have attributes containing meta-data about the query. To enable these use-cases I made sure my Collection class inherited from Model. The collection's models are stored in a models property, just like with Backbone.js, except the property is now actually a Laces Array. Events generated by the models array are proxied by the Collection class.
  • Backbone.js provides excellent fetch() and save() methods, though experience taught me I would usually override these methods anyway to provide for some extra functionality, like preventing fetching of models which are already in the process of being fetched. In addition, I would normally provide a custom implementation of Backbone.sync(), which can now be handled straight in my own fetch and save implementations.
I have created two more snippets where you can see what these Model and Collection classes may look like. Here's a Model class with support for the fetch(), fire(), initialize(), isNew(), on(), off(), remove(), save(), set(), toJSON(), unset() and url() methods:
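In outline, it comes down to something like this (a heavily trimmed sketch: the Laces.js wiring and the Ajax handling are simplified, so treat those calls as indicative and see the gist for the full version):

    // Trimmed sketch; the real class also implements fire(), set(), unset()
    // and friends. Assumes a global LacesModel constructor whose instances
    // support set(key, value) and bind()/unbind(), and jQuery for Ajax.
    function Model(attributes) {
        LacesModel.call(this);

        var self = this;
        _.each(attributes || {}, function(value, key) {
            self.set(key, value);
        });

        this.initialize.apply(this, arguments);
    }

    Model.prototype = Object.create(LacesModel.prototype);
    Model.prototype.constructor = Model;

    _.extend(Model.prototype, {

        // Hook for subclasses; a no-op by default.
        initialize: function() {},

        // A model without an id has not been saved to the server yet.
        isNew: function() {
            return !this.id;
        },

        // Thin wrappers around Laces' own event methods.
        on: function(eventName, listener) {
            this.bind(eventName, listener);
        },

        off: function(eventName, listener) {
            this.unbind(eventName, listener);
        },

        // Simplified fetch(); the full version also guards against fetching
        // a model that is already in the process of being fetched.
        fetch: function() {
            var self = this;
            return $.getJSON(this.url()).then(function(data) {
                _.each(data, function(value, key) {
                    self.set(key, value);
                });
            });
        },

        // Simplified save(): POST new models, PUT existing ones.
        save: function() {
            return $.ajax({
                type: this.isNew() ? "POST" : "PUT",
                url: this.url(),
                contentType: "application/json",
                data: JSON.stringify(this.toJSON())
            });
        },

        // Naive snapshot of the model's own enumerable properties.
        toJSON: function() {
            return _.pick(this, _.keys(this));
        },

        // Subclasses declare a urlRoot property.
        url: function() {
            return this.isNew() ? this.urlRoot : this.urlRoot + "/" + this.id;
        }
    });

    // Subclassing goes through the extend() helper shown earlier.
    Model.extend = extend;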

And here's a Collection class with support for all the Model methods, in addition to add(), any(), at(), each(), find(), findIndex(), indexOf(), push(), reject(), remove(), shift(), slice(), splice() and unshift():
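Trimmed down in the same way, it looks roughly like this (again, the Laces Array specifics are indicative rather than exact):

    // Trimmed sketch; the real class also proxies the add/remove/change
    // events of the models array and implements the remaining array methods.
    var Collection = Model.extend({

        initialize: function() {
            // Like Backbone's collection.models, except it's a Laces Array
            // (assumed to behave like a regular Array, plus change events).
            this.models = new LacesArray();
        },

        add: function(model) {
            this.models.push(model);
            return model;
        },

        at: function(index) {
            return this.models[index];
        },

        each: function(callback) {
            _.each(this.models, callback);
        },

        find: function(predicate) {
            return _.find(this.models, predicate);
        },

        indexOf: function(model) {
            return _.indexOf(this.models, model);
        },

        remove: function(model) {
            var index = this.indexOf(model);
            if (index > -1) {
                this.models.splice(index, 1);
            }
        },

        // Simplified fetch(): expects an array of attribute objects from the
        // server; ModelClass is a placeholder for whatever model type the
        // concrete collection declares.
        fetch: function() {
            var self = this;
            return $.getJSON(this.url()).then(function(data) {
                _.each(data, function(attributes) {
                    self.add(new self.ModelClass(attributes));
                });
            });
        }
    });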

Looking at these gists, it is surprising how little actual code is needed to replicate basic Model and Collection functionality. Of course, to a large extent this is because any real complexity is solved by Laces.js and LoDash.js (or Underscore.js).

Views

Backbone Views are incredibly useful, but in my opinion there are two things missing:
  • Two-way data-binding. Having to manually update the DOM when a model changes is cumbersome, especially when there are libraries that can handle this for you. Fortunately, we can now use the Laces.js Tie add-on which provides exactly this.
  • Parent-child relationships. In real-world applications it is common for views to have subviews, but manually managing subviews (or child views) is a chore, especially when they need to be properly destroyed to avoid dangling event handlers. This is why I have built this functionality straight into my View class.
Here's my take on the View class, with support for the addChild(), delegateEvents(), initialize(), remove(), removeChild(), removeChildren(), render(), reparent(), setElement(), undelegateEvents() and $() methods:
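A trimmed sketch, leaving out reparent(), setElement() and some bookkeeping details (it assumes jQuery for DOM handling and event delegation):

    function View(options) {
        options = options || {};

        this.$el = options.$el || $("<div>");
        this.parent = options.parent || null;
        this.children = [];

        this.initialize(options);
        this.delegateEvents();
    }

    _.extend(View.prototype, {

        initialize: function() {},

        // Scoped DOM lookups, like Backbone's view.$().
        $: function(selector) {
            return this.$el.find(selector);
        },

        addChild: function(child) {
            child.parent = this;
            this.children.push(child);
            return child;
        },

        removeChild: function(child) {
            child.remove();
            this.children = _.without(this.children, child);
        },

        removeChildren: function() {
            var self = this;
            _.each(this.children.slice(), function(child) {
                self.removeChild(child);
            });
        },

        // Removing a view also removes its children, so no dangling handlers.
        remove: function() {
            this.removeChildren();
            this.undelegateEvents();
            this.$el.remove();
            if (this.parent) {
                this.parent.children = _.without(this.parent.children, this);
            }
        },

        // Event delegation for maps like { "click .save-button": "save" }.
        // Unlike Backbone, the events objects of the whole prototype chain
        // are merged, so subclasses only declare their own handlers.
        delegateEvents: function() {
            var events = {};
            for (var proto = Object.getPrototypeOf(this); proto;
                 proto = Object.getPrototypeOf(proto)) {
                _.defaults(events, _.result(proto, "events"));
            }

            var self = this;
            _.each(events, function(method, key) {
                var eventName = key.split(" ")[0] + ".delegated",
                    selector = key.split(" ").slice(1).join(" "),
                    handler = _.bind(self[method], self);
                if (selector) {
                    self.$el.on(eventName, selector, handler);
                } else {
                    self.$el.on(eventName, handler);
                }
            });
        },

        undelegateEvents: function() {
            this.$el.off(".delegated");
        },

        render: function() {
            return this.$el;
        }
    });

    View.extend = extend;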

As you can see, the amount of code needed is still only modest, but there's some complexity in reimplementing the delegateEvents()/undelegateEvents() methods. There's one notable improvement over the delegateEvents() method provided by Backbone.js though: the events object of any class is automatically merged with those of its superclasses (a common request, as evidenced by Backbone issue #244).

As an example, I will show the implementation of a ContinuousPager that's subclassed from the View class. This should give you an idea of how similar to Backbone it really is at a higher level. Note that while this continuous pager has been brought back to the bare essentials, it is still able to render collections, it updates automatically when models are added to or removed from the collection, and even the rendered items update automatically if any of their attributes change, by virtue of the two-way data-bindings provided by the Laces.js Tie add-on:
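Something like this (a bare-bones sketch; the itemTemplate handling and the exact Laces.js Tie calls are simplified):

    var ContinuousPager = View.extend({

        // Concrete pagers declare an itemTemplate (an HTML string, or a
        // compiled Handlebars.js/Hogan.js template).
        itemTemplate: "",

        initialize: function(options) {
            this.collection = options.collection;

            // Re-render when models are added or removed. (Relies on the
            // Collection proxying the events of its models array.)
            this.collection.on("add", _.bind(this.render, this));
            this.collection.on("remove", _.bind(this.render, this));
        },

        render: function() {
            var self = this;
            this.$el.empty();

            this.collection.each(function(model) {
                // Each item is tied to its own model, so attribute changes
                // show up in the DOM automatically.
                var tie = new LacesTie(model, self.itemTemplate);
                self.$el.append(tie.render());
            });

            return this.$el;
        }
    });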

One more thing to note here is that templates can be passed directly into the Laces Tie constructor, regardless of their type. Plain HTML strings, Handlebars.js templates, Hogan.js templates, etc. all work. Interestingly, you also never need to call toJSON() on your model when feeding it to one of the template engines. Because Laces.js models have native support for dot-notation, all attributes can be accessed as if the model were a plain JavaScript object.

Router

For the router functionality, I have settled on a small Router class that uses the same type of routes object for defining a mapping between paths and callbacks. I have even used Backbone's routeToRegExp() function to achieve compatibility. Here's a slimmed down version of the class I use:
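(A sketch along these lines; the _routeToRegExp() shown here is a simplified stand-in for Backbone's, supporting only :param placeholders and *splats.)

    function Router(options) {
        var self = this;

        // routes maps path patterns to callbacks, e.g.
        // { "missions/:id": function(id) { ... } }
        this._handlers = _.map(options.routes || {}, function(callback, route) {
            return { regex: self._routeToRegExp(route), callback: callback };
        });

        // Re-resolve the current path when the user navigates back/forward.
        window.addEventListener("popstate", function() {
            self.resolve(location.pathname);
        });
    }

    _.extend(Router.prototype, {

        // Navigate to a path with pushState and dispatch to its callback.
        navigate: function(path) {
            history.pushState({}, "", path);
            this.resolve(path);
        },

        resolve: function(path) {
            var handler = _.find(this._handlers, function(handler) {
                return handler.regex.test(path);
            });
            if (handler) {
                var params = handler.regex.exec(path).slice(1);
                handler.callback.apply(this, params);
            }
        },

        _routeToRegExp: function(route) {
            route = route.replace(/:\w+/g, "([^/]+)")
                         .replace(/\*\w+/g, "(.*?)");
            return new RegExp("^/?" + route + "/?$");
        }
    });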
You may notice I rely solely on HTML5 pushState for history support, which requires fairly recent browsers. Alternatively, there are plenty of other routing micro-frameworks you may opt to use.

Conclusion

I hope this article has been informative in showing what it takes to build a Backbone application without actually using Backbone.js, as well as giving some insight into the potential gains of this approach. To me, it was an interesting experiment, and while of course it took some more time to set up (which I could afford due to the nature of the side project), the extra features and flexibility I got were definitely worth it. I even find myself occasionally wishing I had the same stack available during my work projects.

Some people may be interested to see what the gain in file size is between using Backbone.js and using Laces.js. Backbone.js is 19KB minified (6.4KB gzipped), whereas Laces.js is 7.4KB minified (2.2KB gzipped). However, as soon as you include the Laces.js Tie add-on the total comes to 11KB minified (3.4KB gzipped). Personally I think Backbone itself is already small enough to not worry too much about its size, but any gains are nice of course.

Finally, I'd like to hear what your thoughts are! Did you find this approach interesting? Valuable? Are you eager to incorporate Laces.js into your own projects? Or should I have used some other libraries even? Should I consider wrapping up all the examples and make them available as a ready-to-use MVC framework? Discussion is welcome!


PS.: The side project I based this article on is C.I. Joe. It's my attempt at building a continuous integration solution, though it's currently pre-alpha so don't even try to use it yet. You may be interested to browse its sources though, so you can see the provided examples in a real-world context.

Wednesday, January 16, 2013

Two-way data-binding with Laces.js Tie

This post will sidetrack a bit from the conventional PlainText blog, to cover a JavaScript library I wrote to assist with the map editor. If you have no interest at all in JavaScript, feel free to skip this post :)

The PlainText map editor is an HTML5 application that has to maintain a complete model of the game world on the client. On startup, it fetches all Areas, Rooms and Portals from the server, and keeps them in its own model, feeding changes back to the server.

Apart from feeding back changes, there are various views that all need to properly reflect the model. There's the main canvas view showing a graphical layout. There's the sidebar showing information about the selected Room. There's an Areas menu and an Areas editor. There's a Portal editor. Keeping all these views in sync meant I needed a proper model, with proper data bindings, so that the various modules could be notified of changes. For this purpose I created my own model micro-framework: Laces.js.

Laces.js by itself provided just the model that I needed, but why stop there? With Laces.js, any module could bind to the model so that it could update its view. But why not create a direct binding between the model and the view, so that no (or at least less) manual updating is necessary? And what if changes to the view could be fed directly back to the model? This is called two-way data-binding, and it's traditionally the territory of MVC frameworks. But I didn't want a full-fledged MVC framework*, yet I did want two-way data-binding.

Meet the Laces.js Tie add-on.

Let's say we have a model variable that has a selectedRoom property. selectedRoom can be null, but it may also be a Room object, with properties such as id, name, description, etc. This model itself could be created like this (we'll ignore any other useful things the model might have for now):
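Roughly like this:

    var model = new LacesModel();
    model.set("selectedRoom", null);

    // Later, selecting a room is as simple as assigning to the property;
    // anything bound to the model gets notified of the change.
    // (The values here are just example data.)
    model.selectedRoom = { id: 123, name: "The Plaza", description: "A wide open plaza." };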

Now let's say we want to show the name and description of the currently selected room. We can do this using a Laces.js Tie:
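Roughly like this (the exact data attribute syntax is documented on the project page):

    var template = '<div class="room-info">' +
                   '    <h2 data-laces-property="selectedRoom.name"></h2>' +
                   '    <p data-laces-property="selectedRoom.description"></p>' +
                   '</div>';

    var tie = new LacesTie(model, template);
    document.body.appendChild(tie.render()); // or wherever the sidebar lives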

The above code will literally tie the HTML fragment to the Laces.js model, making sure the HTML stays up-to-date when the selectedRoom property or its subproperties change. But we can take it further by allowing inline editing of the properties:
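Something along these lines (again, see the project page for the exact attribute syntax):

    var template = '<div class="room-info">' +
                   '    <h2 data-laces="{ property: selectedRoom.name, editable: true }"></h2>' +
                   '    <p data-laces="{ property: selectedRoom.description, editable: true }"></p>' +
                   '</div>';

    var tie = new LacesTie(model, template);
    document.body.appendChild(tie.render());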

Now when someone double-clicks one of the properties, it will turn into an input field, and when they make changes, those will automatically be saved to the model. Of course you can customize which event you prefer to use for editing. So if you want to enable editing through a single click:
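For example (the option name here is indicative; check the project page for the exact spelling):

    // By default editing is triggered by a double-click; pass an option to
    // the tie to use a single click instead.
    var tie = new LacesTie(model, template, { editEvent: "click" });
    document.body.appendChild(tie.render());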

Besides plain HTML strings, you can also tie your model to a multitude of template engines. Laces.js Tie has already been confirmed to work in tandem with Handlebars.js, Hogan.js and Underscore.js' built-in template engine.

If you want to know more about Laces.js or the Laces.js Tie add-on, just check the project page.

*) Check the Rationale on the Laces.js project page for why I didn't want this.