appliness

TABLE OF CONTENTS

THE FIRST DIGITAL MAGAZINE FOR WEB APPLICATION DEVELOPERS
WELCOME TO APPLINESS. THE CONTRIBUTORS OF THIS FREE MAGAZINE ARE ALL PROFESSIONAL AND PASSIONATE DEVELOPERS. IF YOU WANT TO WRITE A TUTORIAL OR SHOWCASE YOUR APPLICATION, CONTACT US ON OUR WEBSITE APPLINESS.COM. WE HOPE THAT YOU ENJOY READING THE 7TH ISSUE OF THIS MAGAZINE.

TUTORIALS


YEOMAN AT YOUR SERVICE: TOOLING AND FRAMEWORKS
by Andy Matthews

JAVASCRIPT SCOPING AND HOISTING
by Ben Cherry

CREATING A REST API USING NODE.JS, EXPRESS AND MONGODB
by Christophe Coenraets

POSTING DATA FROM A PHONEGAP APP TO A SERVER USING JQUERY
by Sam Croft

LOADING DATA INTO, AND POSTING DATA FROM, AN IOS CORDOVA APP
by Sam Croft

KEEPING YOUR KNOCKOUT.JS APP PERFORMING LIKE A CHAMP
by Ryan Niemeyer

XPLATFORM PHONEGAP TEMPLATES
by Holly Schinsky

NEW APIS OF IOS6 FOR HTML DEVELOPERS
by Maximiliano Firtman

HOW I ENDED UP ENJOYING JAVASCRIPT
by Omar Gonzales

CATEGORIZING VALUES IN JAVASCRIPT
by Dr. Axel Rauschmayer

PARSE.COM JAVASCRIPT WITH OFFLINE SUPPORT
by Raymond Camden

WEB FONT PERFORMANCE
by Ilya Grigorik

LATENCY: THE NEW WEB PERFORMANCE BOTTLENECK
by Ilya Grigorik

CHROME NETWORKING
by Ilya Grigorik

ILYA GRIGORIK - THE WEB PERFORMER
Exclusive interview by Maile Valentine & Michaël Chaize

LIBRARY OF THE MONTH

BROADSTREET HTML5
by Darren Hurst

TRELLO
by Trello

BOBBY GRACE, THE DEV BEHIND TRELLO
Exclusive interview by Maile Valentine

HELTER SKELTER NEWS

NEWS ABOUT HTML AND JAVASCRIPT
by Brian Rinaldi


appliness

DON’T WORRY, BE APPLI

YEOMAN AT YOUR SERVICE: TOOLING AND FRAMEWORKS FOR BEAUTIFUL WEB APPLICATIONS
YEOMAN IS HERE...

CLICK HERE TO CHECK OUT ANDY’S ORIGINAL ARTICLE ON THE ADOBE DEVELOPER CONNECTION.

It used to be acceptable to write some CSS, HTML and a little bit of JavaScript, upload it to your web server, and you had a website. That’s no longer the case, as people are now hitting your site from a myriad of different devices with names that start with “i”, and “Galaxy”, and “Nexus”. These new devices are fully capable of running the most modern of code, but their bandwidth is limited. This places responsible developers in a bit of a quandary. How can you write modern applications while keeping bandwidth usage to a minimum? “Compress your PNG files with XYZ lib from the command line,” some people say. “Minimize your CSS,” others say, and, “Concatenate and lint and uglify your JavaScript.” What they don’t tell you is how to perform all of those time-consuming actions within the context of your existing development process. This leaves earnest developers wondering what to do. The marketplace tells you that you must improve the user’s experience, but in today’s fast-paced tech companies, taking an extra hour or more of development time just to gain a few speed improvements can be frowned upon. What we need is a tool that will do all of these things for us, while getting out of our way and letting us write code. That tool is here and it is called Yeoman.

Playground
Tools: Node - Mocha - Jasmine
Difficulty: rookie - intermediate - expert
Todo list: compile - test - play bridge

by Andy Matthews

INTRODUCING YEOMAN
What’s Yeoman? From the Yeoman website (yeoman.io), Yeoman is “a robust and opinionated client-side stack, comprised of tools and frameworks that can help developers quickly build beautiful web applications.” In layman’s terms, Yeoman is a series of tools built upon Node.js that performs all of those tedious tasks we mentioned earlier. With just a few commands, Yeoman can create application skeletons, minimize and concatenate your CSS and JavaScript, and compress your images. Yeoman can fire up a simple web server in the current directory, run your unit tests, and more. Yeoman can help you streamline your workflow by quickly creating Model, View, and Controller boilerplate code for today’s most popular JavaScript frameworks. Yeoman can also act as a package manager for many well-known JavaScript plugins and libraries. At the moment, Yeoman is Mac/Linux only, but Windows support is coming soon.

It’s easy to get up and running with Yeoman. First, open up a Terminal window. I know the command line may seem scary. I used to be afraid of it myself, but Yeoman makes it easy to use. Take a look at the following line to install Yeoman. This single command will do everything for you: download, install and configure, all in one.

$ curl -L get.yeoman.io | sh

That wasn’t so bad, was it? In addition to using Node.js, Yeoman also installs libraries that allow it to compile CoffeeScript files and Sass/SCSS files, as well as a powerful tool called PhantomJS, which allows Yeoman to run unit tests in a headless version (no GUI) of WebKit. Run the command, then sit back and let Yeoman set everything up.

ASKING YEOMAN FOR HELP
While the installation process runs in the background, let’s shift gears for a moment. With any new tool there’s a bit of a learning curve. Yeoman reduces that learning curve by letting you ask it questions anytime you’re not sure of something. If there’s a command that you don’t understand, just ask Yeoman. If you’d like to know what options a certain command offers, just ask Yeoman. The easiest way to ask for help is just to call for Yeoman. Type the following command and Yeoman will display a list of command line options. Try it.

$ yeoman

In fact, almost every command also has its own help option. Ask Yeoman for help with a specific command by appending the --help flag, as in the following:

$ yeoman init --help

You can also ask Yeoman for help simply by using the help command:

$ yeoman help


PUTTING YEOMAN TO WORK
Once the install script is complete, you’re ready to use Yeoman. Why don’t you start off by creating an application for your Mom’s Bridge Club? They meet once a month and need a way to post their scores online. Open a Terminal window and create a new folder in the location of your choice, your Desktop for example:

$ cd ~/Desktop
$ mkdir bridgeclub && cd bridgeclub

Every command you give to Yeoman begins with the yeoman keyword: think of it as asking one of your teammates for help. The first Yeoman command you run is yeoman init. It’s the front door to the whole Yeoman ecosystem. Start by running it:

$ yeoman init

Yeoman will ask you a series of questions; simply answer yes or no based on your project requirements. For this article, you can answer whatever you like.

Figure 1: Initializing Yeoman

First, Yeoman will ask if you’d like to include the Twitter Bootstrap styles and plugins, and where you’d like to place them. Next, Yeoman will ask if you’d like to include RequireJS (for AMD support). You can read more about RequireJS in a recent Adobe Developer Connection article by Aaron Hardy. Finally, Yeoman asks if you’d like to support writing ECMAScript 6 modules. You can read more about modules in an article by Addy Osmani, ECMAScript 6 Resources For The Curious JavaScripter. After giving you a chance to change your answers, Yeoman gets to work. It’s worth pointing out that Yeoman even offers you a way to skip past all these questions and create an application with just the basics: HTML5 Boilerplate, Modernizr, and jQuery. If that’s more your speed, enter the following command and you’re off to the races.


$ yeoman init quickstart

After a short while, Yeoman will finish its work and your Bridge Club application will be ready to develop. The project directory now contains a number of files and folders. Let’s take a quick look at them all.

Figure 2. Your bridgeclub project after running Yeoman

Starting from the bottom, Yeoman includes a test directory that contains unit tests and spec documentation. By default, Yeoman uses the Mocha test framework to run all of your unit tests, but you can opt for Jasmine instead by passing an additional option to the init command:

$ yeoman init --test-framework jasmine

You can, of course, use whatever test framework you like after Yeoman completes its initial site build. To get a full list of flags for the init command, remember to ask Yeoman for help:

$ yeoman init app --help

You’ll also find a package.json file, which helps you document your application’s author (you, of course) and version number. package.json is also used to indicate any dependencies that your app might have on other libraries. Using this file is not required, but it is becoming quite common to have this level of documentation in a single location.

You will also see a directory called app. That directory is where all of the source files for your application are contained: raw Sass files, CoffeeScript, images, original JS files, and more. The app directory is where you write your code, and it’s where Yeoman pulls from when running its build process (more on that later).

Also notice a file called Gruntfile.js. Yeoman is built on top of another project called Grunt, which uses the Gruntfile to specify configuration options. You can also create your own unique tasks or groups of tasks, or pull from the lengthy list of available Grunt plugins, by adding them to the Gruntfile that every Yeoman project contains. Speaking of tasks, let’s run a few to see what they do!
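A Gruntfile is plain JavaScript. As a rough sketch only (the `greet` task name and the concat paths below are made up for illustration; they are not what Yeoman generates for you), configuring a plugin-provided task and registering a custom one looks like this:

```javascript
// Gruntfile.js: an illustrative sketch, not the file Yeoman generates.
// The concat paths and the 'greet' task are hypothetical examples.
module.exports = function (grunt) {
    grunt.initConfig({
        // Configuration for a hypothetical concat target.
        concat: {
            dist: {
                src: ['app/scripts/*.js'],
                dest: 'dist/scripts/app.js'
            }
        }
    });

    // A custom task of your own: running `grunt greet` prints a message.
    grunt.registerTask('greet', 'Say hello from the build.', function () {
        grunt.log.writeln('Hello from a custom Grunt task!');
    });
};
```

Anything you register this way sits alongside the tasks Yeoman already wired up, so the build remains one coherent pipeline.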


RUNNING UNIT TESTS
Not every developer uses unit tests, but I bet many of them say they would like to. Until recently, unit testing was a pain to set up and run, but there are now so many frameworks to aid you in your testing that it’s a crime not to use one. As mentioned before, Yeoman uses the Mocha framework, which allows for asynchronous testing: perfect for JavaScript and those pesky AJAX calls. In the root of your project, enter the following command:

$ yeoman test

Yeoman fires up an instance of PhantomJS, loads your test files, runs them for you, and reports back the successes or failures. You can, of course, still load the tests and run them in a regular browser window, but sometimes you just want to make sure they’re passing without seeing all the details. Yeoman helps you do that. Additionally, you could add yeoman test as part of a longer set of commands, and take action based on the success or failure of your test suite.

RUNNING A LOCAL SERVER WITH YEOMAN
Yeoman also helps you out in other ways, such as running a local web server in your project directory. You probably have at least a passing familiarity with Apache and vhost blocks, and maybe even adding DNS entries to your hosts file. That’s all well and good, but it’s a pain when you just want to quickly check if an AJAX call is working. We all know the browser restrictions on AJAX calls within file:// URLs, but running within the context of a web server makes those disappear. It also allows you to store your web projects anywhere on your hard drive, and not just within your www or htdocs directories. Tell Yeoman to fire up a simple Python-based web server with a single command and run your entire site as if it were in production. Let’s try it out right now.

$ yeoman server

Type the above command into your Terminal window and hit enter. After a second or two of processing, Yeoman lets you know that it’s starting a static web server on port 3501. It even opens a browser window with the correct URL for you. That’s great, but there’s more: Yeoman uses a library called LiveReload to watch your project directory for changes, compile those changes, and reload the browser window for you.

Let’s put the LiveReload feature in Yeoman to the test. With the Yeoman server still running, browse to the app folder in your project directory and open index.html in a text editor. Place your cursor anywhere inside the body tag, add some new HTML, then hit save. Make sure you’re watching the browser window, because it happens fast. One second your change isn’t there and the next it is, all without you having to do a thing. The same is true for JavaScript files, CoffeeScript files and even CSS/Sass files. Let’s try adding a style. Open up /app/styles/main.css, add the following declaration, then hit save:

body { background: red; }

Yeoman doesn’t even reload the browser window for changes to stylesheets. It uses JavaScript to change the stylesheet in the browser. It’s fast too.
Change a style, save it, and Yeoman instantly shows the change in the browser window. Reloading the browser window isn’t hard, or time-consuming, if you’re just doing it a few times. But developers like us do it hundreds of times a day, or more. LiveReload, courtesy of the LiveReload project, really frees you to make quick changes and see them working immediately. It’s a breath of fresh air.

INSTALLING REMOTE PACKAGES WITH YEOMAN
We’re not done explaining Yeoman yet. How about finding and installing plugins, libraries or frameworks? Normally, you have to Google for the URL, find the right download link for the correct version of the file, then save it somewhere in your project. If you are hardcore, you might get it from the developer’s Git repo and compile it. That’s cool and all, but with Yeoman you have a teammate who loves helping out. Why not send Yeoman on an errand for the files you need? Yeoman includes a package manager called Bower (built by some of the developers at Twitter). Just give Yeoman a command and a package name, and it gets that package for you and stores it locally. Let’s try installing the popular JavaScript application framework Backbone. Enter the following command and watch the magic:

$ yeoman install backbone

Yeoman does two things for you. It downloads the raw source for Backbone into a folder named components in the root of your project. It then moves just the working copy of the file into /app/scripts/vendor. This way you’ve got the documentation for Backbone, but the full repository isn’t clogging up your application’s codebase.

COMPILING PRODUCTION-READY FILES WITH YEOMAN
At some point, all good projects need to launch. Mom’s Bridge Club has been asking when the application will be ready, and you don’t have any more time to spend. This is where Yeoman really shines. The build process is complicated and intricate and takes quite some time...for Yeoman, that is. All you need to do is type the following command, then go get that Iced Triple Caramel Macchiato you’ve been thinking about all morning.

$ yeoman build

Rather than ask what Yeoman is doing for you here, it’s probably easier to ask what Yeoman isn’t doing for you. Remember the Gruntfile we talked about earlier? It contains a list of tasks that Yeoman can execute, and during the build process Yeoman runs every one of them. Yeoman will:
• Run all of your unit tests.
• Lint, concatenate and minify your JavaScript files, as well as rev their version numbers if need be.
• Condense all of your CSS files.
• Compress your images.
• Timestamp all your files to prevent browser caching.
• Sift through your HTML files and make sure file references are correct (after concatenation).

After the build process is complete, notice that Yeoman added two new folders to your project root: temp and dist. The temp folder acts like a cache, containing compiled files which may or may not change again. The dist folder pulls from temp, and contains all of the minimized, compiled, compressed, linted, smothered, covered and chunked application files. This means that you don’t have to bother with storing the final files in your repo if you don’t want to. It also means that your application is production quality, squeaky clean, and as light and tight as can be. Yeoman is pretty clever isn’t it?

WHERE TO GO FROM HERE
Yeoman can do much more than this, but even if you only use it for the things we talked about here, it is well worth your time. Oh wait...I forgot to mention that Yeoman is free and open source. Other tools out there do the same thing; some are free, while some cost money. But Yeoman is the only one that you can fork and change. If it does something you don’t like, change it. If you want to add a feature, write the code for it, then submit it back to the project for everyone else to benefit from. I hope you’ve decided to try out Yeoman for yourself. You won’t regret it. Additional information can be found at Yeoman.io, GitHub, and their documentation page. All illustrations courtesy of Leonardo Maia.

ABOUT THIS ARTICLE
Andy is currently a Senior Web Developer for Dealerskins, Inc. and runs commadelimited.com. He’s the author of open source ColdFusion libraries: PicasaCFC, & ShrinkURL. He’s written an HTML/JS Adobe AIR Application called Shrinkadoo (in AIR Marketplace), & speaks at RIA User Groups or conferences. http://andymatthews.net/

ONLINE RESOURCES Yeoman.io http://yeoman.io/
Yeoman on Github https://github.com/yeoman/ Yeoman Documentation https://github.com/yeoman/yeoman/tree/master/docs/cli

@commadelimited


JAVASCRIPT SCOPING AND HOISTING

IN THIS ARTICLE, BEN CHERRY DIGS DEEPER INTO THE WORLD OF JAVASCRIPT BY EXPLORING SCOPING AND HIS NEWLY COINED TERM “HOISTING”.

Do you know what value will be alerted if the following is executed as a JavaScript program?

var foo = 1;
function bar() {
    if (!foo) {
        var foo = 10;
    }
    alert(foo);
}
bar();

If it surprises you that the answer is “10”, then this one will probably really throw you for a loop:

var a = 1;
function b() {
    a = 10;
    return;
    function a() {}
}
b();
alert(a);

Playground
Topics: JavaScript - functions - declarations
Difficulty: rookie - intermediate - expert
Todo list: scope - hoist - code

by Ben Cherry

Here, of course, the browser will alert “1”. So what’s going on here? While it might seem strange, dangerous, and confusing, this is actually a powerful and expressive feature of the language. I don’t know if there is a standard name for this specific behavior, but I’ve come to like the term “hoisting”. This article will try to shed some light on this mechanism, but first let’s take a necessary detour to understand JavaScript’s scoping.

SCOPING IN JAVASCRIPT
One of the biggest sources of confusion for JavaScript beginners is scoping. Actually, it’s not just beginners. I’ve met a lot of experienced JavaScript programmers who don’t fully understand scoping. The reason scoping is so confusing in JavaScript is that it looks like a C-family language. Consider the following C program:

#include <stdio.h>
int main() {
    int x = 1;
    printf("%d, ", x); // 1
    if (1) {
        int x = 2;
        printf("%d, ", x); // 2
    }
    printf("%d\n", x); // 1
}

The output from this program will be 1, 2, 1. This is because C, and the rest of the C family, has block-level scope. When control enters a block, such as the if statement, you can declare new variables within that scope, without affecting the outer scope. This is not the case in JavaScript. Try this in Firebug:

var x = 1;
console.log(x); // 1
if (true) {
    var x = 2;
    console.log(x); // 2
}
console.log(x); // 2

In this case, Firebug will show 1, 2, 2. This is because JavaScript has function-level scope. This is radically different from the C family. Blocks, such as if statements, do not create a new scope. Only functions create a new scope. To a lot of programmers who are used to languages like C, C++, C#, or Java, this is unexpected and unwelcome. Luckily, because of the flexibility of JavaScript functions, there is a workaround. If you must create temporary scopes within a function, do the following:

function foo() {
    var x = 1;
    if (x) {
        (function () {
            var x = 2;
            // some other code
        }());
    }
    // x is still 1.
}


This method is actually quite flexible, and can be used anywhere you need a temporary scope, not just within block statements. However, I strongly recommend that you take the time to really understand and appreciate JavaScript scoping. It’s quite powerful, and one of my favorite features of the language. If you understand scoping, hoisting will make a lot more sense to you.

DECLARATIONS, NAMES, AND HOISTING
In JavaScript, a name enters a scope in one of four basic ways:
1. Language-defined: All scopes are, by default, given the names this and arguments.
2. Formal parameters: Functions can have named formal parameters, which are scoped to the body of that function.
3. Function declarations: These are of the form function foo() {}.
4. Variable declarations: These take the form var foo;.

Function declarations and variable declarations are always moved (“hoisted”) invisibly to the top of their containing scope by the JavaScript interpreter. Function parameters and language-defined names are, obviously, already there. This means that code like this:

function foo() {
    bar();
    var x = 1;
}

is actually interpreted like this:

function foo() {
    var x;
    bar();
    x = 1;
}

It turns out that it doesn’t matter whether the line that contains the declaration would ever be executed. The following two functions are equivalent:

function foo() {
    if (false) {
        var x = 1;
    }
    return;
    var y = 1;
}

function foo() {
    var x, y;
    if (false) {
        x = 1;
    }
    return;
    y = 1;
}

Notice that the assignment portion of the declarations was not hoisted. Only the name is hoisted. This is not the case with function declarations, where the entire function body will be hoisted as well. But remember that there are two normal ways to declare functions. Consider the following JavaScript:

function test() {
    foo(); // TypeError "foo is not a function"
    bar(); // "this will run!"
    var foo = function () { // function expression assigned to local variable 'foo'
        alert("this won't run!");
    }
    function bar() { // function declaration, given the name 'bar'
        alert("this will run!");
    }
}
test();

In this case, only the function declaration has its body hoisted to the top. The name ‘foo’ is hoisted, but the body is left behind, to be assigned during execution. That covers the basics of hoisting, which is not as complex or confusing as it seems. Of course, this being JavaScript, there is a little more complexity in certain special cases.
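A runnable way to observe the same split without triggering any errors is to check typeof before the definitions execute (this variation is mine, not from the original article):

```javascript
function probe() {
    // At this point both names exist in scope, but only the function
    // declaration's body has been hoisted along with its name.
    var kinds = [typeof foo, typeof bar]; // ['undefined', 'function']
    var foo = function () {}; // expression: only the name 'foo' was hoisted
    function bar() {}         // declaration: name and body were hoisted
    return kinds;
}
console.log(probe()); // [ 'undefined', 'function' ]
```

Calling probe() shows that bar is already callable at the top of the function, while foo is merely a declared, still-undefined name.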

NAME RESOLUTION ORDER
The most important special case to keep in mind is name resolution order. Remember that there are four ways for names to enter a given scope. The order I listed them above is the order they are resolved in. In general, if a name has already been defined, it is never overridden by another declaration of the same name. This means that a function declaration takes priority over a variable declaration. This does not mean that an assignment to that name will not work, just that the declaration portion will be ignored. There are a few exceptions:
• The built-in name arguments behaves oddly. It seems to be declared following the formal parameters, but before function declarations. This means that a formal parameter with the name arguments will take precedence over the built-in, even if it is undefined. This is a bad feature. Don’t use the name arguments as a formal parameter.
• Trying to use the name this as an identifier anywhere will cause a SyntaxError. This is a good feature.
• If multiple formal parameters have the same name, the one occurring latest in the list will take precedence, even if it is undefined.

NAMED FUNCTION EXPRESSIONS
You can give names to functions defined in function expressions, with syntax like a function declaration. This does not make it a function declaration, and the name is not brought into scope, nor is the body hoisted. Here’s some code to illustrate what I mean:
foo();  // TypeError "foo is not a function"
bar();  // valid
baz();  // TypeError "baz is not a function"
spam(); // ReferenceError "spam is not defined"

var foo = function () {};     // anonymous function expression ('foo' gets hoisted)
function bar() {};            // function declaration ('bar' and the function body get hoisted)
var baz = function spam() {}; // named function expression (only 'baz' gets hoisted)

foo();  // valid
bar();  // valid
baz();  // valid
spam(); // ReferenceError "spam is not defined"

HOW TO CODE WITH THIS KNOWLEDGE
Now that you understand scoping and hoisting, what does that mean for coding in JavaScript? The most important thing is to always declare your variables with a var statement. I strongly recommend that you have exactly one var statement per scope, and that it be at the top. If you force yourself to do this, you will never have hoisting-related confusion. However, doing this can make it hard to keep track of which variables have actually been declared in the current scope. I recommend using JSLint (http://www.jslint.com) with the onevar option to enforce this. If you’ve done all of this, your code should look something like this:

/*jslint onevar: true [...] */
function foo(a, b, c) {
    var x = 1,
        bar,
        baz = "something";
}

WHAT THE STANDARD SAYS
I find that it’s often useful to just consult the ECMAScript Standard (pdf) directly to understand how these things work. Here’s what it has to say about variable declarations and scope (section 12.2.2):

“If the variable statement occurs inside a FunctionDeclaration, the variables are defined with function-local scope in that function, as described in section 10.1.3. Otherwise, they are defined with global scope (that is, they are created as members of the global object, as described in section 10.1.3) using property attributes { DontDelete }. Variables are created when the execution scope is entered. A Block does not define a new execution scope. Only Program and FunctionDeclaration produce a new scope. Variables are initialised to undefined when created. A variable with an Initialiser is assigned the value of its AssignmentExpression when the VariableStatement is executed, not when the variable is created.”
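That last sentence is straightforward to confirm in a few lines (my example): the name is created and initialised to undefined on scope entry, but the initialiser only runs when its statement executes:

```javascript
function demo() {
    // 'x' already exists here: created and initialised to undefined
    // when the function's execution scope was entered.
    var before = typeof x; // 'undefined'
    var x = 5;             // the AssignmentExpression runs only now
    return [before, x];
}
console.log(demo()); // [ 'undefined', 5 ]
```

Nothing here is exotic; it is exactly the behavior the standard describes, visible in two statements.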

I hope this article has shed some light on one of the most common sources of confusion to JavaScript programmers. I have tried to be as thorough as possible, to avoid creating more confusion. If I have made any mistakes or have large omissions, please let me know.

ABOUT THIS ARTICLE
Ben lives in San Francisco, CA, working as an entrepreneur and engineer. He’s led front-end engineering teams at Twitter, and more recently has co-founded a startup. On the side, he’s been learning iOS and Node.js. Ben gives decent programming advice on his blog, at http://www.adequatelygood.com.

ONLINE RESOURCES Ben’s Github https://github.com/bcherry
Talks by Ben http://www.bcherry.net/talks/ Subscribe to Ben’s latest posts http://feeds.feedburner.com/adequatelygood

http://www.adequatelygood.com/

@bcherry


CREATING A REST API USING NODE.JS, EXPRESS, AND MONGODB

I RECENTLY USED NODE.JS, EXPRESS, AND MONGODB TO REWRITE A RESTFUL API I HAD PREVIOUSLY WRITTEN IN JAVA AND PHP WITH MYSQL (JAVA VERSION, PHP VERSION), AND I THOUGHT I’D SHARE THE EXPERIENCE… HERE IS A QUICK GUIDE SHOWING HOW TO BUILD A RESTFUL API USING NODE.JS, EXPRESS, AND MONGODB.

INSTALLING NODE.JS
1. Go to http://nodejs.org and click the Install button.
2. Run the installer that you just downloaded. When the installer completes, a message indicates that Node was installed at /usr/local/bin/node and npm was installed at /usr/local/bin/npm.

At this point Node.js is ready to use. Let’s implement the web server application from the nodejs.org home page. We will use it as a starting point for our project: a RESTful API to access data (retrieve, create, update, delete) in a wine cellar database.

1. Create a folder named nodecellar anywhere on your file system.
2. In the nodecellar folder, create a file named server.js.
3. Code server.js as follows:

Playground
Tools: NodeJS - Express - MongoDB
Difficulty: rookie - intermediate - expert
Todo list: install - deploy - REST

by Christophe Coenraets

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(3000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:3000/');

We are now ready to start the server and test the application. To start the server, open a shell, cd to your nodecellar directory, and start your server as follows:

node server.js

To test the application, open a browser and access http://localhost:3000

INSTALLING EXPRESS
Express is a lightweight Node.js web application framework. It provides the basic HTTP infrastructure that makes it easy to create REST APIs. To install Express in the nodecellar application:

1. In the nodecellar folder, create a file named package.json defined as follows:

{
    "name": "wine-cellar",
    "description": "Wine Cellar Application",
    "version": "0.0.1",
    "private": true,
    "dependencies": {
        "express": "3.x"
    }
}

2. Open a shell, cd to the nodecellar directory, and execute the following command to install the Express module:

npm install

A node_modules folder is created in the nodecellar folder, and the Express module is installed in a subfolder of node_modules.

Now that Express is installed, we can stub a basic REST API for the nodecellar application:

1. Open server.js and replace its content as follows:

var express = require('express');
var app = express();

app.get('/wines', function(req, res) {
    res.send([{name:'wine1'}, {name:'wine2'}]);
});

app.get('/wines/:id', function(req, res) {
    res.send({id:req.params.id, name: "The Name", description: "description"});
});

app.listen(3000);
console.log('Listening on port 3000...');

2. Stop (CTRL+C) and restart the server:

node server

3. To test the API, open a browser and access the following URLs:
Get all the wines in the database: http://localhost:3000/wines
Get the wine with a specific id (for example, 1): http://localhost:3000/wines/1

USING NODE.JS MODULES
In a large application, things could easily get out of control if we keep adding code to a single JavaScript file (server.js). Let’s move the wine-related code into a wines module that we then declare as a dependency in server.js.

1. In the nodecellar folder, create a subfolder called routes.
2. In the routes folder, create a file named wines.js defined as follows:

exports.findAll = function(req, res) {
    res.send([{name:'wine1'}, {name:'wine2'}, {name:'wine3'}]);
};

exports.findById = function(req, res) {
    res.send({id:req.params.id, name: "The Name", description: "description"});
};

3. Modify server.js as follows to delegate the routes implementation to the wines module:

var express = require('express'),
    wines = require('./routes/wines');
var app = express();

app.get('/wines', wines.findAll);
app.get('/wines/:id', wines.findById);

app.listen(3000);
console.log('Listening on port 3000...');

4. Restart the server and test the APIs:
Get all the wines in the database: http://localhost:3000/wines
Get the wine with a specific id (for example, 1): http://localhost:3000/wines/1

The next step is to replace the placeholder data with actual data from a MongoDB database.

INSTALLING MONGODB
To install MongoDB on your specific platform, refer to the MongoDB QuickStart. Here are some quick steps to install MongoDB on a Mac:

1. Open a terminal window and type the following command to download the latest release:

curl http://downloads.mongodb.org/osx/mongodb-osx-x86_64-2.2.0.tgz > ~/Downloads/mongo.tgz

Note: You may need to adjust the version number. 2.2.0 is the latest production version at the time of this writing.

2. Extract the files from the mongo.tgz archive:

cd ~/Downloads
tar -zxvf mongo.tgz

3. Move the mongo folder to /usr/local (or another folder according to your personal preferences):

sudo mv -n mongodb-osx-x86_64-2.2.0/ /usr/local/

4. (Optional) Create a symbolic link to make it easier to access:

sudo ln -s /usr/local/mongodb-osx-x86_64-2.2.0 /usr/local/mongodb

5. Create a folder for MongoDB’s data and set the appropriate permissions:

sudo mkdir -p /data/db
sudo chown `id -u` /data/db

6. Start mongod:

cd /usr/local/mongodb
./bin/mongod

7. You can also open the MongoDB Interactive Shell in another terminal window to interact with your database using a command line interface:

cd /usr/local/mongodb
./bin/mongo

Refer to the MongoDB Interactive Shell documentation for more information.

INSTALLING THE MONGODB DRIVER FOR NODE.JS
There are different solutions offering different levels of abstraction to access MongoDB from Node.js (for example, Mongoose and Mongolia). A comparison of these solutions is beyond the scope of this article. In this guide, we use the native Node.js driver. To install the native Node.js driver, open a terminal window, cd to your nodecellar folder, and execute the following command:

npm install mongodb


IMPLEMENTING THE REST API
The full REST API for the nodecellar application consists of the following methods:
Method   URL                                Action
GET      /wines                             Retrieve all wines
GET      /wines/5069b47aa892630aae000001    Retrieve the wine with the specified _id
POST     /wines                             Add a new wine
PUT      /wines/5069b47aa892630aae000001    Update the wine with the specified _id
DELETE   /wines/5069b47aa892630aae000001    Delete the wine with the specified _id
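This mapping is essentially a dispatch table: an HTTP verb plus a URL pattern selects an action. As a sketch, here is that table expressed as plain data in JavaScript; the action names mirror the handlers the article wires up, but the dispatch helper itself is purely illustrative, since Express does this wiring for you:

```javascript
// A route table mapping verb + URL pattern to an action name.
var routes = [
  { method: 'GET',    pattern: /^\/wines$/,          action: 'findAll'    },
  { method: 'GET',    pattern: /^\/wines\/[^/]+$/,   action: 'findById'   },
  { method: 'POST',   pattern: /^\/wines$/,          action: 'addWine'    },
  { method: 'PUT',    pattern: /^\/wines\/[^/]+$/,   action: 'updateWine' },
  { method: 'DELETE', pattern: /^\/wines\/[^/]+$/,   action: 'deleteWine' }
];

// Return the matching action name, or null (a real server would send a 404).
function dispatch(method, url) {
  for (var i = 0; i < routes.length; i++) {
    if (routes[i].method === method && routes[i].pattern.test(url)) {
      return routes[i].action;
    }
  }
  return null;
}

console.log(dispatch('GET', '/wines'));                              // 'findAll'
console.log(dispatch('DELETE', '/wines/5069b47aa892630aae000001'));  // 'deleteWine'
```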

To implement all the routes required by the API, modify server.js as follows:

var express = require('express'),
    wine = require('./routes/wines');
var app = express();

app.configure(function () {
    app.use(express.logger('dev'));  /* 'default', 'short', 'tiny', 'dev' */
    app.use(express.bodyParser());
});

app.get('/wines', wine.findAll);
app.get('/wines/:id', wine.findById);
app.post('/wines', wine.addWine);
app.put('/wines/:id', wine.updateWine);
app.delete('/wines/:id', wine.deleteWine);

app.listen(3000);
console.log('Listening on port 3000...');

To provide the data access logic for each route, modify wines.js as follows:

var mongo = require('mongodb');

var Server = mongo.Server,
    Db = mongo.Db,
    BSON = mongo.BSONPure;

var server = new Server('localhost', 27017, {auto_reconnect: true});
var db = new Db('winedb', server);

db.open(function(err, db) {
    if (!err) {
        console.log("Connected to 'winedb' database");
        db.collection('wines', {safe: true}, function(err, collection) {
            if (err) {
                console.log("The 'wines' collection doesn't exist. Creating it with sample data...");
                populateDB();
            }
        });
    }
});


exports.findById = function(req, res) {
    var id = req.params.id;
    console.log('Retrieving wine: ' + id);
    db.collection('wines', function(err, collection) {
        collection.findOne({'_id': new BSON.ObjectID(id)}, function(err, item) {
            res.send(item);
        });
    });
};

exports.findAll = function(req, res) {
    db.collection('wines', function(err, collection) {
        collection.find().toArray(function(err, items) {
            res.send(items);
        });
    });
};

exports.addWine = function(req, res) {
    var wine = req.body;
    console.log('Adding wine: ' + JSON.stringify(wine));
    db.collection('wines', function(err, collection) {
        collection.insert(wine, {safe: true}, function(err, result) {
            if (err) {
                res.send({'error': 'An error has occurred'});
            } else {
                console.log('Success: ' + JSON.stringify(result[0]));
                res.send(result[0]);
            }
        });
    });
};

exports.updateWine = function(req, res) {
    var id = req.params.id;
    var wine = req.body;
    console.log('Updating wine: ' + id);
    console.log(JSON.stringify(wine));
    db.collection('wines', function(err, collection) {
        collection.update({'_id': new BSON.ObjectID(id)}, wine, {safe: true}, function(err, result) {
            if (err) {
                console.log('Error updating wine: ' + err);
                res.send({'error': 'An error has occurred'});
            } else {
                console.log('' + result + ' document(s) updated');
                res.send(wine);
            }
        });
    });
};

exports.deleteWine = function(req, res) {
    var id = req.params.id;
    console.log('Deleting wine: ' + id);
    db.collection('wines', function(err, collection) {
        collection.remove({'_id': new BSON.ObjectID(id)}, {safe: true}, function(err, result) {
            if (err) {
                res.send({'error': 'An error has occurred - ' + err});
            } else {
                console.log('' + result + ' document(s) deleted');
                res.send(req.body);
            }
        });
    });
};
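Note that these handlers construct a BSON ObjectID straight from the URL parameter; if a client sends a malformed id, new BSON.ObjectID(id) will throw. A small hedged sketch of a guard you might add first (the 24-hex-character check matches the ObjectID string format; the helper name is my own):

```javascript
// An ObjectID's string form is 24 hexadecimal characters
// (a 12-byte value rendered as hex). Validate before converting.
function isValidObjectId(id) {
  return /^[0-9a-fA-F]{24}$/.test(id);
}

console.log(isValidObjectId('5069b47aa892630aae000001')); // true
console.log(isValidObjectId('1'));                        // false
```

With a guard like this, a route can respond with a 400-style error instead of crashing on bad input.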

// Populate database with sample data -- Only used once: the first time the application is started.
// You'd typically not find this code in a real-life app, since the database would already exist.
var populateDB = function() {
    var wines = [
        {
            name: "CHATEAU DE SAINT COSME",
            year: "2009",
            grapes: "Grenache / Syrah",
            country: "France",
            region: "Southern Rhone",
            description: "The aromas of fruit and spice...",
            picture: "saint_cosme.jpg"
        },
        {
            name: "LAN RIOJA CRIANZA",
            year: "2006",
            grapes: "Tempranillo",
            country: "Spain",
            region: "Rioja",
            description: "A resurgence of interest in boutique vineyards...",
            picture: "lan_rioja.jpg"
        }];
    db.collection('wines', function(err, collection) {
        collection.insert(wines, {safe: true}, function(err, result) {});
    });
};

Restart the server to test the API.


TESTING THE API USING CURL
If you want to test your API before using it in a client application, you can invoke your REST services straight from a browser address bar. For example, you could try:

- http://localhost:3000/wines
- http://localhost:3000/wines/search/Chateau

You will only be able to test your GET services that way. A more versatile solution for testing RESTful services is cURL, a command line utility for transferring data with URL syntax. For example, using cURL, you can test the Wine Cellar API with the following commands:

Get all wines:
curl -i -X GET http://localhost:3000/wines

Get the wine with an _id value of 5069b47aa892630aae000007 (use a value that exists in your database):
curl -i -X GET http://localhost:3000/wines/5069b47aa892630aae000007

Delete the wine with an _id value of 5069b47aa892630aae000007:
curl -i -X DELETE http://localhost:3000/wines/5069b47aa892630aae000007

Add a new wine:
curl -i -X POST -H 'Content-Type: application/json' -d '{"name": "New Wine", "year": "2009"}' http://localhost:3000/wines

Modify the wine with an _id value of 5069b47aa892630aae000007:
curl -i -X PUT -H 'Content-Type: application/json' -d '{"name": "New Wine", "year": "2010"}' http://localhost:3000/wines/5069b47aa892630aae000007

ABOUT THIS ARTICLE
Christophe Coenraets is a Technical Evangelist for Adobe where he focuses on Mobile and Rich Internet Applications for the Enterprise. He has been working on Flex since the early days of the product in 2003. Christophe has been a regular speaker at conferences worldwide for the last 15 years.

ONLINE RESOURCES
Christophe's blog http://coenraets.org
Node.js http://nodejs.org/
MongoDB http://www.mongodb.org/

http://coenraets.org/

@ccoenraets

appliness

DON’T WORRY, BE APPLI

POSTING DATA FROM A PHONEGAP APP TO A SERVER USING JQUERY
IN THIS ARTICLE, SAM CROFT DESCRIBES THE STEPS HE TAKES TO POST DATA FROM A PHONEGAP APP TO A SERVER.


Recently I’ve had several requests to create an article about posting data to a server from a PhoneGap app, so I thought I’d cover the steps I go through when dealing with this kind of requirement. The method is extremely simple, providing a few important steps are followed. Important: Although not necessary, the context of this article will be better understood if you have read my previous article, Updated: loading external data into an iOS PhoneGap app using jQuery 1.5. The reference to landmarks throughout this article refers to a barebones app I created in the aforementioned article, where I was listing the geo-coordinates of several well-known UK landmarks. • App and server source code on GitHub

Playground
- PhoneGap
- jQuery
- Xcode

Difficulty
- rookie
- intermediate
- expert

Todo list
- create
- post
- test
by Sam Croft

CREATING AN IOS PHONEGAP APP WITH XCODE 4
The first step is to create a new iOS PhoneGap app. Open the project location in Finder, create your www folder and add it to the project in Xcode. You will also need the phonegap.js file if one isn’t already built when you compile. Make sure you use the corresponding version for your version of PhoneGap—I’m using phonegap-1.2.0.js. Although I’m using iOS as the example, the project www files would be the same for Android and indeed any other supported platform.

CREATING THE FORM TO POST DATA FROM
Continuing with the example I created in my article about loading data from a server in a PhoneGap app, where I was loading geo-coordinates for a few famous UK landmarks, I can expand on this and add a form for each landmark to submit data back to the server.

A SIMPLE COMMENTS FORM
I'm going to use a really simple form for each landmark so users can leave a comment about it.

<div id="landmark-1" data-landmark-id="1">
    <form>
        <label for="email">
            <b>Email</b>
            <input type="email" id="email" name="email">
        </label>
        <label for="comment">
            <b>Comment</b>
            <textarea id="comment" name="comment" cols="30" rows="10"></textarea>
        </label>
        <input type="submit" value="Save">
    </form>
</div>

I have used my preferred method of form markup. Note: the intended use of this markup is that it is generated, or at least made visible, when a user taps an 'add comment' button. When this function is called, the id for the landmark the user is commenting on is available in the parent div as an id and custom data attribute. It could also be a hidden form field, but seeing as the id for the landmark is already available, that seems like unnecessary bloat. How you go about integrating this is entirely based on your app and what fits best; I am merely describing what fits best with my landmarks app.


USING JQUERY TO HANDLE AND POST FORM DATA
As with my previous articles, I'm going to use jQuery for the JavaScript library, but it's worth mentioning that something like xui.js may be more suitable if you're trying to create an app with as small a footprint as possible.

USING THE .SUBMIT() EVENT HANDLER
While not strictly necessary, it's certainly considered correct to use jQuery's .submit() event handler rather than an HTML button or link when dealing with forms. A basic function assigned to the .submit() event handler:

 1  $('form').submit(function(){
 2      var landmarkID = $(this).parent().attr('data-landmark-id');
 3      var postData = $(this).serialize();
 4
 5      $.ajax({
 6          type: 'POST',
 7          data: postData + '&lid=' + landmarkID,
 8          url: 'http://your-domain.com/comments/save.php',
 9          success: function(data){
10              console.log(data);
11              alert('Your comment was successfully added');
12          },
13          error: function(){
14              alert('There was an error adding your comment');
15          }
16      });
17
18      return false;
19  });

There are three important things going on in this function:

Line 2: firstly, I am grabbing the id for the landmark that is being commented on. I'm accessing this value from the custom data-landmark-id attribute in the parent div element.

Line 3: secondly, I am using jQuery's serialize() method to gather all the data and values from the form. This is somewhat easier than stepping through each field and using the val() method. If I were to log the postData variable, the output would be something like: email=me@site.com&comment=I like this location!

Lines 5-16: finally, I am using jQuery's ajax() function to POST the data to a server. A successful POST will trigger one function while an error will trigger another.

This is the foundation for posting form data from a PhoneGap app. Note: if you are creating your forms on-the-fly then you will need to use the live() event handler with your submit() event handler.
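serialize() walks the form's named fields and URL-encodes each name/value pair, joining them with &. A minimal stand-alone sketch of that behaviour, using a hypothetical helper that works on a plain object rather than a DOM form:

```javascript
// URL-encode an object of field names/values the way form serialization
// does. jQuery's serialize() also encodes spaces as '+', mirrored here.
function serializeFields(fields) {
  return Object.keys(fields).map(function(name) {
    return encodeURIComponent(name) + '=' +
           encodeURIComponent(fields[name]).replace(/%20/g, '+');
  }).join('&');
}

var postData = serializeFields({ email: 'me@site.com', comment: 'I like this location!' });
console.log(postData);
// email=me%40site.com&comment=I+like+this+location!
```

Appending '&lid=' + landmarkID to a string built this way is exactly what line 7 of the submit handler does.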


USING PHP TO CREATE A SERVER-SIDE COMPONENT TO STORE THE DATA
My previous article about loading data from a server used PHP and MySQL, so I'm going to continue with the same example and build on the database.

CREATING A TABLE TO STORE THE COMMENTS
First I'm going to create a small table that will contain all of the submitted comments.

CREATE TABLE `comments` (
    `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
    `location_id` int(11) DEFAULT NULL,
    `email` varchar(45) DEFAULT NULL,
    `comment` text,
    PRIMARY KEY (`id`)
)

This is fairly self-explanatory. I'm going to store the email address, the comment and the location id.

CREATING A TINY PHP SCRIPT TO HANDLE AND STORE THE COMMENTS
The next part of submitting a comment is to create a PHP script that takes the data from the app, sanitises it and stores it in MySQL.

 1  $server   = "";
 2  $username = "";
 3  $password = "";
 4  $database = "";
 5
 6  $con = mysql_connect($server, $username, $password) or die ("Could not connect: " . mysql_error());
 7
 8  mysql_select_db($database, $con);
 9
10  $locationID = $_POST["lid"];
11  $email = mysql_real_escape_string($_POST["email"]);
12  $comment = mysql_real_escape_string($_POST["comment"]);
13
14  $sql = "INSERT INTO comments (location_id, email, comment) ";
15  $sql .= "VALUES ($locationID, '$email', '$comment')";
16
17  if (!mysql_query($sql, $con)) {
18      die('Error: ' . mysql_error());
19  } else {
20      echo "Comment added";
21  }
22
23  mysql_close($con);

Lines 1-4: set up the database connection details.
Lines 6 & 8: connect to the MySQL server and select the database that will be used.
Lines 10-12: grab the POSTed form data from the PhoneGap app and sanitise any strings.
Lines 14-21: create a string with an SQL insert query, then execute the query and return a success or error message.
Line 23: close the MySQL connection.
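mysql_real_escape_string backslash-escapes the characters that could break out of a quoted SQL string (quotes, backslashes, NUL, newlines and so on). A rough JavaScript illustration of that idea, for intuition only; in real code you should rely on the database's own escaping or, better still, parameterised queries:

```javascript
// Backslash-escape the characters mysql_real_escape_string handles:
// NUL, \n, \r, backslash, single quote, double quote and Ctrl-Z.
function escapeSqlString(value) {
  return value.replace(/[\0\n\r\\'"\x1a]/g, function(ch) {
    switch (ch) {
      case '\0':   return '\\0';
      case '\n':   return '\\n';
      case '\r':   return '\\r';
      case '\x1a': return '\\Z';
      default:     return '\\' + ch;   // backslash, ' and "
    }
  });
}

console.log(escapeSqlString("I like this location!"));  // unchanged
console.log(escapeSqlString("O'Brien"));                // O\'Brien
```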


TESTING THE APP IN THE IPHONE SIMULATOR
The final step is to build and compile the app and see how it works in the iPhone simulator. Bear in mind the following couple of things, and avoid one common oversight, and you’ll be submitting data to your server with no problems.

If all goes well, you should see this alert box when you submit a comment.


CONSOLE LOGGING IS YOUR FRIEND
You'll notice in the JS function that I have a couple of console.logs that return the status of the comment save. These are great when you're testing in your browser, as you can see them in the developer console. They also appear in the Xcode debug console (shift+cmd+c), so use them to step through your code if you are having issues.

ADD YOUR SERVER TO YOUR APP'S SERVER WHITELIST
With PhoneGap 1.0 came the addition of server whitelists, whereby you must declare any external hosts that you will be accessing data from or sending data to. To add your server as an external host, view the PhoneGap.plist in Xcode (project navigator > project > supporting files) and add a new item within the ExternalHosts array for your server. In this example I'm using localhost, so I'm going to go ahead and add a string value of localhost. Because I'm using localhost I'm also going to add another string value, debug.phonegap.com. This explicitly states I am in debug mode, rather than release. If I were using my own website as a data server I would add a string value of samcroft.co.uk and drop the debug.phonegap.com value.

THIS IS JUST THE BASIC OUTLINE
One final note: this is just the basic outline of submitting data to a server from a PhoneGap app. Taking it further, you would have to consider how best to integrate it with your app, create client- and server-side validation, show loading indicators, visually handle a completed form submission, and maybe trigger a function that gathers the new data and appends it to any data already visible. I will cover all of this in another article if there is any interest. • App and server source code on GitHub

ABOUT THIS ARTICLE
Sam Croft is co-founder of Running in the Halls, an ambitious and creative web, apps and games studio based in Huddersfield, England. Sam has over 12 years experience in creating data-driven web applications. Since using PhoneGap for the first time in 2009, he has been an advocate for creating web/hybrid apps rather than native apps. He writes about Cordova/PhoneGap on his blog.

ONLINE RESOURCES
Sam Croft's GitHub https://github.com/samcroft
jQuery http://jquery.com/
PhoneGap http://phonegap.com/

http://samcroft.co.uk/

@samcroft

appliness

DON’T WORRY, BE APPLI

LOADING DATA INTO, AND POSTING DATA FROM, AN IOS CORDOVA APP

IN THIS ARTICLE, SAM CROFT SHARES TECHNIQUES FOR WORKING WITH DATA IN PHONEGAP/CORDOVA APPS. TO ILLUSTRATE THESE CONCEPTS, HE BUILDS THE FRAMEWORK OF A BUG TRACKING APP FOR A TEAM OF DEVELOPERS.


When I downloaded PhoneGap in 2009, the first thing I wanted to learn was how to load external data from a server or API into an app. Over the last three years I’ve built several large PhoneGap/Cordova apps that rely completely on data being loaded into and posted from an app. In this article I would like to explore and share the techniques that I use to accomplish these tasks. All of the code used in this article is available in its own GitHub repository; each code snippet has its own commit.

Playground
- Cordova
- Zepto
- JSON

Difficulty
- rookie
- intermediate
- expert

Todo list
- load
- post
- track bugs
by Sam Croft

APP CONTEXT
For the purpose of this article I’ll use the context of an app which allows users to view and add to a list of code bugs which will then form the framework of a bug tracking app for internal use by a team of developers. A bug app. Let’s call it BAPP. BAPP will be an iOS Cordova app.


APP FRAMEWORK
I am going to focus on two functions of this app: 1. viewing a list of open bugs 2. adding a new bug If BAPP was a real app there would be additional functions such as user and project management, viewing bug details, actioning bugs and marking bugs as fixed. But for the purposes of this article we’ll focus on the essentials; viewing and adding bugs.

APP COMPONENTS
The app will rely on a number of components: 1. a PHP/MySQL server environment 2. JavaScript ajax calls to the server 3. PHP scripts to select open and add new bugs

SETTING UP A SERVER ENVIRONMENT TO MAINTAIN ALL OF THE BUGS
Before getting into the app itself I’m going to create a database on a server that will contain a list of all of the bugs we want to track in BAPP. This article isn’t really about server side technologies so I’m going to assume a basic level of knowledge about this.


CREATING THE BUGS DATABASE
I'm going to create a new database called bapp, on my local MySQL server, and add the following bugs table:

[mysql]
CREATE TABLE `bugs` (
    `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
    `title` varchar(140) DEFAULT NULL,
    `details` text,
    `priority` tinyint(1) DEFAULT '1',
    `open` tinyint(1) DEFAULT '1',
    `date_time_added` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=5 DEFAULT CHARSET=latin1;
[/mysql]

In a real-world app you'd have a lot more fields here, but these are all we need for now.

MAKING A JSONP RESOURCE TO FETCH THE OPEN BUGS
With the database set up, the next thing to do is ensure there is a method of fetching all of the open bugs from the database. To keep things simple and easy to follow, the PHP script that we're going to make here will be running on the same server as the database (localhost for me). I'm going to create a directory in my localhost htdocs folder called bapp for this file.

localhost/bapp/bugs.php

[php]
<?php
$server   = "localhost";
$username = "root";
$password = "";
$database = "bapp";

$con = mysql_connect($server, $username, $password) or die ("Could not connect: " . mysql_error());

mysql_select_db($database, $con);

$sql = "SELECT id, title, priority, date_time_added FROM bugs WHERE open = TRUE ORDER BY priority, date_time_added";
$result = mysql_query($sql) or die ("Query error: " . mysql_error());

$bugs = array();
while($row = mysql_fetch_assoc($result)) {
    $bugs[] = $row;
}

echo $_GET['jsoncallback'] . '(' . json_encode($bugs) . ');';

mysql_close($con);
?>
[/php]

https://github.com/samcroft/bapp/commit/e83964faf3d71820bf6864db89cfa87414129682
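The echo near the end of bugs.php is what turns plain JSON into JSONP: the response is the JSON array wrapped in a call to whatever function name the client passed as jsoncallback. A tiny sketch of that wrapping step, expressed in JavaScript for illustration (the helper name and callback name are hypothetical):

```javascript
// Wrap a payload in the caller-supplied callback name, JSONP-style.
function jsonpWrap(callbackName, payload) {
  return callbackName + '(' + JSON.stringify(payload) + ');';
}

var body = jsonpWrap('Zepto123456', [{ id: '3', title: 'Third bug' }]);
console.log(body);
// Zepto123456([{"id":"3","title":"Third bug"}]);
```

When the browser (or app) receives this response, it simply executes it as a script, which calls the named function with the decoded data.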


The order I am selecting the bugs in is by priority and then the date they were added, so the urgent bugs appear first. When this PHP script is called it will output the bugs in the following format, forming the JSON object:

[json]
[
    {
        "id": "3",
        "title": "Third bug",
        "priority": "1",
        "date_time_added": "2012-09-27 16:19:05"
    },
    {
        "id": "1",
        "title": "First bug",
        "priority": "5",
        "date_time_added": "2012-09-27 16:18:12"
    },
    {
        "id": "2",
        "title": "Second bug",
        "priority": "5",
        "date_time_added": "2012-09-27 16:18:14"
    }
]
[/json]
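The ORDER BY priority, date_time_added clause can also be mirrored client-side. Here is a hedged sketch of the same two-key ordering in JavaScript, in case you ever need to re-sort the loaded array inside the app (the function name is my own):

```javascript
// Sort bugs by priority (ascending), then by date added, matching
// the SQL 'ORDER BY priority, date_time_added'.
function sortBugs(bugs) {
  return bugs.slice().sort(function(a, b) {
    var p = Number(a.priority) - Number(b.priority);
    if (p !== 0) return p;
    // 'YYYY-MM-DD HH:MM:SS' strings compare correctly lexicographically
    return a.date_time_added < b.date_time_added ? -1 :
           a.date_time_added > b.date_time_added ? 1 : 0;
  });
}

var sorted = sortBugs([
  { id: '1', priority: '5', date_time_added: '2012-09-27 16:18:12' },
  { id: '3', priority: '1', date_time_added: '2012-09-27 16:19:05' },
  { id: '2', priority: '5', date_time_added: '2012-09-27 16:18:14' }
]);
console.log(sorted.map(function(b) { return b.id; })); // [ '3', '1', '2' ]
```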

SETTING UP THE APP
When you have set up your Cordova project you will have a www folder that contains index.html. The first thing we need to do is open up index.html in your text editor of choice and delete the default Cordova HTML body content. The result should be something like this between the body tags:

www/index.html

[html]
<body>
    <script src="cordova-2.0.0.js"></script>
    <script src="js/index.js"></script>
    <script>
        app.initialize();
    </script>
</body>
[/html]

https://github.com/samcroft/bapp/commit/9ceefb9969ba00882fc1024d9fe8009ac76df188

While I'm at it, I'm also going to completely empty Cordova's default index.css file. At this point you could use a mobile framework such as jQuery Mobile, jQTouch or Sencha Touch. But as I'm only going to be using a couple of pages/sections, I'm going to keep things simple and just use a tiny bit of CSS to toggle the visibility of two different pages. The following markup consists of two pages/sections:

www/index.html

[html]
<div class="page current" id="bugs">
    <header>
        <h1>Open bugs</h1>
    </header>
    <ul></ul>
</div>
<div class="page" id="add-bug">
    <header>
        <h1>Add a new bug</h1>
    </header>
    <form>
        <label for="bug-title">Bug title</label>
        <input type="text" id="bug-title" name="bug-title">
        <label for="bug-details">Bug details</label>
        <textarea id="bug-details" name="bug-details" cols="30" rows="10"></textarea>
        <label for="bug-priority">Bug priority</label>
        <select id="bug-priority" name="bug-priority">
            <option value="1">Urgent</option>
            <option value="2">High</option>
            <option value="3">Medium</option>
            <option value="4">Low</option>
        </select>
        <input type="submit" value="Add bug">
    </form>
</div>
[/html]

https://github.com/samcroft/bapp/commit/76acbc3891fc998123d38532c672c3effc0ade0f

Here we have added the page for displaying the open bugs and a page with a form for submitting new bugs. Note: I'm not going to cover how to handle the page/view switching, but you can download the entire BAPP project from its GitHub page to see how it works (it's pretty simple).

LOADING THE OPEN BUGS INTO THE APP
So now the database is set up and we have a JSON resource to call from within the app, we can get to the main part of this article: dealing with data in an app. Here I'm going to use Zepto.js to load a JSON data object into the app and append the results to an unordered list in our HTML index file. First I need to include the Zepto library:

www/index.html

[html]
<script src="js/zepto.min.js"></script>
[/html]

https://github.com/samcroft/bapp/commit/e6ffcb8874394b63a5986165c97e1f7ff6fdc227

The reason I'm using Zepto.js rather than jQuery is that it is so much lighter in file size, weighing in at a paltry 24KB (when using the minified version). jQuery is an enormous library containing a great many features that you wouldn't ordinarily be leveraging in an app. That's a lot of unnecessary bloat. Zepto, on the other hand, is like all of the functions from jQuery that you would likely be using in an app, plus a handy (no pun intended) array of touch events. If you're more comfortable with jQuery, that's OK; the JavaScript in this article remains the same, it's just the include that is different. I do, however, use a tap event in the final build of BAPP that you can download from its GitHub page, so be aware of that if you do use jQuery.

When the app is opened it's going to automatically load the open bugs from the database and place them in the unordered list on the first page. This can be accomplished by using Zepto's ajax() function.

USING ZEPTO’S .AJAX() FUNCTION TO CREATE A BASIC DATA REQUEST
Almost all of the Cordova/PhoneGap apps I've built have used the .ajax() function extensively. A basic ajax() function setup, to load data, might look like this:

[js]
$.ajax({
    type: 'GET',
    url: 'http://your-server.com/your-resource.php',
    dataType: 'json',
    success: function(data) {
        // do something with the data
    },
    error: function(data) {
        // do something if there is an error
    }
});
[/js]


Here we'd be using a GET request to access a server resource and expecting to receive back a JSON object. If it's successful, we can do something with the loaded data. If there is an error, we can display an appropriate message.

Within BAPP we're dealing with a JSON data object, but more specifically JSONP. JSONP is used to load JSON data from a different server/location to where the requesting page is hosted. Cross-domain concerns aren't much of a worry inside a Cordova app, as it uses the file protocol, but using JSONP requests means that you're building a web app that isn't constrained to running in Cordova's environment. While you might not necessarily be releasing your app as a mobile browser app, you will almost certainly be spending hours debugging it in your favourite WebKit browser, so you're going to want to build things with cross-domain access in mind.

So with that out of the way, we just need to make a few tweaks to the basic .ajax() function above, for use in BAPP, to load those glorious bugs. I'm going to add this request to the default index.js file that Cordova makes for you. I'm also going to place this ajax request in its own function so that we can call it whenever we want:

www/js/index.js

[js]
function loadBugs() {
    var bugs = $('#bugs ul');
    $.ajax({
        type: 'GET',
        url: 'http://localhost/bapp/bugs.php?&jsoncallback=?',
        dataType: 'jsonp',
        timeout: 5000,
        success: function(data) {
            $.each(data, function(i, item){
                bugs.append('<li>' + item.title);
            });
        },
        error: function(data) {
            bugs.append('<li>There was an error loading the bugs');
        }
    });
}

loadBugs();
[/js]

https://github.com/samcroft/bapp/commit/509061f3cf240d699781bc2d0db8a1e8e8d49d5e

This will make a request to the server that hosts the bugs.php file which, in turn, selects the bugs from the database, encodes the records as JSON and then returns them to the function invoked via the success property of the ajax function. The most important aspect of this particular ajax request is the callback parameter in the url string. This is what makes the JSONP request function correctly. Without this parameter, the request will fail. The callback parameter appends an automatically generated string to the requested url, which is then returned wrapping the JSON object. You can call the callback parameter anything you like, but you must reference the same name in your PHP script (see above).
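Under the hood, the library replaces the ? placeholder in jsoncallback=? with a generated function name before the request is made. A hedged sketch of that substitution step; the name format here is illustrative, as each library generates its own:

```javascript
// Replace the '?' placeholder in a jsonp callback parameter with a
// generated function name, roughly as Zepto/jQuery do before issuing
// the request.
var counter = 0;
function prepareJsonpUrl(url) {
  var name = 'jsonp' + (++counter) + '_' + Date.now();
  return { url: url.replace(/=\?/, '=' + name), callbackName: name };
}

var req = prepareJsonpUrl('http://localhost/bapp/bugs.php?&jsoncallback=?');
console.log(req.url); // the '?' placeholder is now a unique function name
```

The library then defines a global function with that name; when the script response arrives and executes, it calls that function with the data, which is how your success callback ultimately receives the JSON.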

Note: with jQuery, you can add the callback parameter as a property of the ajax function itself, rather than appending it to the url string. Once we have the JSON object passed to the success callback function in the ajax() function, it's just a case of looping through each of the bugs and appending the relevant details to the unordered list we created in the index.html file. Zepto's .each() function makes that a breeze. The PHP script has already ensured that the bugs will appear in the correct order, so all that is required is to append each bug to the unordered list we created earlier.

Testing this in the browser should load the list of bugs. If the error message is displayed, check your JavaScript console and see what kind of error is being thrown. If it's syntax related, it is most likely a code issue with how things have been set up. If it's showing an error code, check the network tab to see whether or not the JSON resource is actually being called. You should be able to diagnose any issues from there.

Finally, before building the Cordova app and running it in the simulator, or on your device, you will need to whitelist the server that you are connecting to for the data. Using my example, you will need to add localhost as an ExternalHost in Cordova.plist. This is Cordova's method of ensuring that you only pull data from websites that you have explicitly stated you will be interacting with.

Running BAPP should now produce a lovely list of bugs. Beautiful bugs. That is, if we had some in the bugs database! Right now it’s empty, so let’s go to the next section to add some bugs.

ADDING A NEW BUG AND POSTING IT TO THE SERVER DATABASE
The second part of this article concerns posting data from an app to a server. Here are the steps I’m going to go through: 1. use Zepto’s submit event handler to grab the new bug information 2. use Zepto’s ajax function to send the data to the server 3. create a PHP script to receive the data and store it in the database


THE FORM
I added the form markup right at the start of this article; let's look at it again:

[html]
<form>
    <label for="bug-title">Bug title</label>
    <input type="text" id="bug-title" name="bug-title">
    <label for="bug-details">Bug details</label>
    <textarea id="bug-details" name="bug-details" cols="30" rows="10"></textarea>
    <label for="bug-priority">Bug priority</label>
    <select id="bug-priority" name="bug-priority">
        <option value="1">Urgent</option>
        <option value="2">High</option>
        <option value="3">Medium</option>
        <option value="4">Low</option>
    </select>
    <input type="submit" value="Add bug">
</form>
[/html]

Here we just have three fields: the bug title (text input), bug details (textarea) and the bug priority (select menu). Although I'm not using the bug-details field or data anywhere else in the app, I thought it was good to leave it in to show that handling multiple fields is relatively easy.

USING THE AJAX() FUNCTION TO GRAB THE FORM DATA AND SEND IT TO THE SERVER
Here I'm going to use the ajax() function again, but this time to POST data rather than load it. While the basic setup of the ajax function is the same, it requires an additional parameter, data, where the form data is supplied:

www/js/index.js

[js]
$('#add-bug form').submit(function(){
    var postData = $(this).serialize();
    $.ajax({
        type: 'POST',
        data: postData,
        url: 'http://localhost/bapp/add-bug.php',
        success: function(data){
            console.log('Bug added!');
        },
        error: function(){
            console.log('There was an error');
        }
    });
    return false;
});
[/js]

https://github.com/samcroft/bapp/commit/eaab3f668eb43dcb8bc04b3717d6f2f2bc727553


Here I'm using the submit event to start a function that collects the form data via the serialize() method, and then calling a new ajax() function where the type is set to POST and a parameter called data is given the serialised data. The serialize() method goes through all the form fields and collects their respective values, which is easier than selecting the values one by one. For now I'll leave the success function empty and come back to it after detailing the PHP script that's going to receive this POSTed data.

CREATING A PHP SCRIPT TO HANDLE AND STORE THE BUGS
The next part of submitting a bug is to create a PHP script that takes the data from the app and stores it in the MySQL database we created right at the start.

localhost/bapp/add-bug.php

[php]
<?php
$server   = "localhost";
$username = "root";
$password = "";
$database = "bapp";

$con = mysql_connect($server, $username, $password) or die ("Could not connect: " . mysql_error());

mysql_select_db($database, $con);

$bugTitle = mysql_real_escape_string($_POST["bug-title"]);
$bugDetails = mysql_real_escape_string($_POST["bug-details"]);
$bugPriority = mysql_real_escape_string($_POST["bug-priority"]);

$sql = "INSERT INTO bugs (title, details, priority) ";
$sql .= "VALUES ('$bugTitle', '$bugDetails', $bugPriority)";

if (!mysql_query($sql, $con)) {
    die('Error: ' . mysql_error());
} else {
    echo "Bug added";
}

mysql_close($con);
?>
[/php]

https://github.com/samcroft/bapp/commit/aa57a2ae75e9dcea2ada97453e93c2ee1f1dde64


This connects to our MySQL server, with the same connection strings as the previous script, and selects the bapp database. Next I need to get the bug details that have been submitted from the app. In the previous step I used Zepto's serialize function to make things a bit easier client side. To get the values into our PHP script we need to use PHP's $_POST variable. The $_POST variable is an array that contains keys and values of data that has been sent to the PHP script via the POST method. The key is the name of the form field and the value is... its value. Therefore, to obtain the bug title we need to use the key bug-title, and for the bug details the key bug-details. One other thing I'm doing here is using MySQL's mysql_real_escape_string function to strip the values of any malicious code that could have been submitted via the app. Note that to use mysql_real_escape_string, you must call it after you have connected to MySQL. Next I'm just executing a simple INSERT sql statement. Providing your code is error free, this will insert the new bug into the database.
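PHP builds $_POST by decoding the URL-encoded body the app sent (bug-title=...&bug-details=...&bug-priority=...). Here is a stand-alone sketch of that decoding step in JavaScript, for intuition only; the helper name is my own:

```javascript
// Decode an application/x-www-form-urlencoded body into a key/value
// object, roughly the way PHP populates $_POST.
function parsePostBody(body) {
  var fields = {};
  body.split('&').forEach(function(pair) {
    var i = pair.indexOf('=');
    var key = decodeURIComponent(pair.slice(0, i).replace(/\+/g, ' '));
    var value = decodeURIComponent(pair.slice(i + 1).replace(/\+/g, ' '));
    fields[key] = value;
  });
  return fields;
}

var post = parsePostBody('bug-title=First+bug&bug-details=It+crashes&bug-priority=1');
console.log(post['bug-title']);    // First bug
console.log(post['bug-priority']); // 1
```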


Now you can add bugs and view the current open bugs.

Note: these screenshots have CSS applied that I have not covered in this article. The CSS is included in the project on GitHub, however. While BAPP just covers the basics, a real-world instance of this app might allow you to tap each bug to view more details, action the bug, leave comments, change its status and, of course, mark it as closed.


FINISHING TOUCHES
That completes all of the server side code for this app. The final step is to add a few checks to the submit function written earlier. Expected behaviour is that after you have submitted a bug you are taken back to the open bug list, which is refreshed with the newly added bug visible. So, revisiting the submit event:

[js]
$('#add-bug form').submit(function(){
	var loading = $(this).find('input[type="submit"]');
	loading.addClass('loading');
	var postData = $(this).serialize();
	$.ajax({
		type: 'POST',
		data: postData,
		url: 'http://localhost/bapp/add-bug.php',
		success: function(data){
			loadBugs();
			$('#bugs').addClass('current');
			$('#add-bug').removeClass('current');
			loading.removeClass('loading');
			console.log('Bug added!');
		},
		error: function(){
			loading.removeClass('loading');
			console.log('There was an error');
		}
	});
	return false;
});
[/js]

https://github.com/samcroft/bapp/commit/90287ba388d3871a50790e18d16aaafa1954eac0

To do this, we just need to switch page views and call the bug list fetch function written earlier. I'm also adding a loading indicator while the form is being submitted and the data sent to the server. Finally, I'm tweaking the loadBugs function so that when it is called it first empties the list before making the ajax call to the server:

[js]
var bugs = $('#bugs ul').empty();
[/js]

https://github.com/samcroft/bapp/commit/909ff6238194336eb89fa57f6ce16b1bc2024488

Note: if BAPP were a real-world app, there would be more efficient ways of accomplishing this. You could append the newly created bug to the existing list without making an ajax call at all.
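As a sketch of that optimization (the markup here is hypothetical, since the article doesn't cover BAPP's exact list markup), you could build the new list item from the data you already have on the client:

```javascript
// Hypothetical markup builder: constructs the <li> for a newly added
// bug from the submitted values, so it can be appended to the existing
// list without re-fetching everything from the server.
function bugListItem(bug) {
  return "<li><h2>" + bug.title + "</h2>" +
         "<p>" + bug.details + "</p></li>";
}

var li = bugListItem({ title: "Broken link", details: "404 on the help page" });
// li could then be appended with something like $('#bugs ul').append(li);
```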

TESTING THE APP
With all of the pieces in place, it's just a case of testing the app. There are a few things to be aware of here regarding how you test it.

TESTING IN A BROWSER
If you're loading the index.html file in your browser, be aware that where the file is located and how you open it will affect the POST ajax call. Simply opening index.html from the filesystem will cause the POST ajax call to fail, as it is not running on a server. To test the file in a browser you will need to run it on a server, either locally on your machine or on a domain you have access to. This does not apply to the GET ajax call, however, provided the PHP script is running on a server somewhere. If you have downloaded the entire project from GitHub and want to test it in your browser, you will also need to change the tap event, on line 72 of www/js/index.js, to a click event.

TESTING IN THE IPHONE SIMULATOR OR ON A DEVICE
Of course, testing in Xcode and the iPhone simulator, you won't run into any of the above issues. There is one thing to be aware of though, regarding Zepto/jQuery. Ordinarily you'd wait until the DOM was ready before running your JavaScript. With Cordova, however, we need to wait until Cordova is ready, which occurs after the DOM is ready. Waiting for Cordova to be ready means that you can use Cordova's API to access various device and native functionality. To do this we can just change the normal DOM-ready check:

[js]
$(document).ready(function(){
	//your code
});
[/js]

...to:

[js]
$(document).on('deviceready', function() {
	//your code
});
[/js]

https://github.com/samcroft/bapp/commit/570e0d2ba7186f71c00868043e7602a0af081b2c

The latter method is, of course, how you should have things set up if you were building BAPP for release.


FINAL NOTES
I’ve tried to keep on topic by deliberately not mentioning anything about CSS and how BAPP might look - that’s not really what this article is about. I have, however, added some basic CSS to BAPP if you download the project. You can download all of the source code and the Cordova iOS project through a github repository for BAPP - https://github.com/samcroft/bapp - each code snippet above has its own commit.

ABOUT THIS ARTICLE
Sam Croft is co-founder of Running in the Halls, an ambitious and creative web, apps and games studio based in Huddersfield, England. Sam has over 12 years' experience in creating data-driven web applications. Since using PhoneGap for the first time in 2009, he has been an advocate for creating web/hybrid apps rather than native apps. He writes about Cordova/PhoneGap on his blog.

ONLINE RESOURCES
Sam Croft's blog http://samcroft.co.uk/
Sam Croft on Twitter https://twitter.com/samcroft (@samcroft)
Apache Cordova http://incubator.apache.org/cordova/
PhoneGap http://phonegap.com/


KEEPING YOUR KNOCKOUT.JS APPLICATION PERFORMING LIKE A CHAMP

KNOCKOUT.JS IS A JAVASCRIPT LIBRARY THAT ALLOWS YOU TO EASILY BUILD COMPLEX DATA-DRIVEN WEB APPLICATIONS. WHEN USING KNOCKOUT IT DOESN’T TAKE LONG BEFORE YOUR UI COMES ALIVE BASED ON UPDATES TO YOUR DATA OR ACTIONS TAKEN BY YOUR USERS. HOWEVER, TO ENSURE THAT YOUR KNOCKOUT APPLICATION HAS GREAT PERFORMANCE, IT IS USEFUL TO UNDERSTAND SOME COMMON PITFALLS AS WELL AS SEVERAL FEATURES THAT CAN MAKE YOUR PAGES MORE EFFICIENT.

#1 WORKING WITH THE UNDERLYING ARRAY
An observableArray is simply an observable with some additional methods added to deal with common array manipulation tasks. These functions get a reference to the underlying array, perform an action on it, and notify subscribers before and after the array changes. For example, doing a push on an observableArray essentially does this:

ko.observableArray.fn.push = function () {
    var underlyingArray = this();
    this.valueWillMutate();
    var result = underlyingArray.push.apply(underlyingArray, arguments);
    this.valueHasMutated();
    return result;
};
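As an illustration of that notify-on-mutation pattern, here is a tiny stand-in (not Knockout itself, just a sketch of the idea) showing how a wrapped push applies the change to the underlying array and then notifies subscribers:

```javascript
// Miniature sketch of the pattern above: mutate the underlying array,
// then tell every subscriber the value has changed.
function miniObservableArray(initial) {
  var underlying = initial.slice();
  var subscribers = [];

  function accessor() { return underlying; }

  accessor.subscribe = function (fn) { subscribers.push(fn); };

  accessor.valueHasMutated = function () {
    subscribers.forEach(function (fn) { fn(underlying); });
  };

  accessor.push = function () {
    var result = underlying.push.apply(underlying, arguments);
    accessor.valueHasMutated(); // one notification per push
    return result;
  };

  return accessor;
}
```

Each push costs one notification to every subscriber; that cost is exactly what the batching patterns below aim to reduce.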

by Ryan Niemeyer

Let's consider a common scenario where we have a collection of objects along with a computed observable to track a filtered array of those objects.

var ViewModel = function() {
    this.items = ko.observableArray([
        new Item("Task One", "high"),
        new Item("Task Two", "normal"),
        new Item("Task Three", "high")
    ]);

    this.highPriorityItems = ko.computed(function() {
        return ko.utils.arrayFilter(this.items(), function(item) {
            return item.priority() === "high";
        });
    }, this);
};

Suppose that we want to add additional data from the server to the items observableArray. It is easy enough to loop through the new data and push each mapped item to our observableArray.

this.addNewDataBad = function(newData) {
    var item;
    for (var i = 0, j = newData.length; i < j; i++) {
        item = newData[i];
        self.items.push(new Item(item.name, item.priority));
    }
};

Consider what happens, though, each time that we call .push(). The item is added to our underlying array and any subscribers are notified of the change. Each time that we push, our highPriorityItems filter code will run again. Additionally, if we are binding our UI to the items observableArray or against highPriorityItems, then the template binding has to do work each time to determine that only the one new item was added. A better pattern is to get a reference to our underlying array, push to it, then call .valueHasMutated() at the end. Now, our subscribers only receive one notification indicating that the array has changed.

this.addNewDataGood = function(newData) {
    var item, underlyingArray = self.items();
    for (var i = 0, j = newData.length; i < j; i++) {
        item = newData[i];
        underlyingArray.push(new Item(item.name, item.priority));
    }
    self.items.valueHasMutated();
};

This can even be simplified down to:

this.addNewData = function(newData) {
    var newItems = ko.utils.arrayMap(newData, function(item) {
        return new Item(item.name, item.priority);
    });
    //take advantage of push accepting variable arguments
    self.items.push.apply(self.items, newItems);
};
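The push.apply trick works because Array#push (and observableArray's wrapper around it) accepts variable arguments. In plain JavaScript:

```javascript
// Array#push accepts any number of arguments, so apply() lets us
// append a whole batch of items with a single call.
var items = ["Task One"];
var newItems = ["Task Two", "Task Three"];
items.push.apply(items, newItems);
```

With an observableArray, that single call also means a single notification to subscribers, no matter how large the batch.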


#2 IF/WITH - CONTROL-FLOW
The control flow bindings introduced in Knockout 2.0 are really just simple wrappers around the template binding. They use the children of the element as their "template" (what we call anonymous templates) and will re-render the contents whenever triggered. If you are not careful, the if and with bindings may be causing re-renders much more often than you realize. A common scenario where you might use the if binding is when you want to display a section only when an observableArray actually contains items. Something like:

<!-- ko if: items().length -->
<h2>Items</h2>
<ul data-bind="foreach: items">
    <li data-bind="text: name"></li>
</ul>
<!-- /ko -->

The problem in this case is that the if binding depends on the items observableArray and will be re-evaluated each time that items is updated. Rather than only reacting to the number of items moving between 0 and 1, the binding re-renders its "template" on every change to items. This issue will be addressed in Knockout version 2.2 by only re-rendering as necessary. Prior to 2.2, an option is to store a boolean in a separate observable that is updated by a manual subscription. A normal observable is preferable in this case, as computed observables notify on all re-evaluations rather than just when their value changes. An example of the subscription might look like:

this.items = ko.observableArray();
this.hasItems = ko.observable(false);
this.items.subscribe(function(newValue) {
    this.hasItems(newValue && newValue.length ? true : false);
}, this);

Now the if binding can be bound against hasItems, and it will only trigger a re-render when it moves between true and false.

#3 ALL BINDINGS FIRE TOGETHER
When using multiple bindings on a single element, it is important to understand how Knockout triggers updates to bindings, to avoid potential performance issues. For example, a common binding might look like:

<select data-bind="options: choices, value: selectedValue, visible: showChoices"></select>

When Knockout determines that an element has bindings, a computed observable is created to aid in tracking dependencies. Inside the context of this computed observable, Knockout parses the data-bind attribute's value to determine which bindings to run and the arguments to pass. As the init and update functions for each binding are executed, the computed observable takes care of accumulating dependencies on any observables that have their value accessed.


There are a couple of important points to understand here:
• The init function for each binding is only executed once. However, currently this does happen inside the computed observable that was created to track this element's bindings. This means that you can trigger the binding to run again (only its update function) based on a dependency created during initialization. Since the init function won't run again, this dependency will likely be dropped on the second execution (unless it was also accessed in the update function).
• There is currently only one computed observable used to track all of an element's bindings. This means that the update function for every one of an element's bindings will run again when any of the dependencies are updated. This is definitely something to consider when writing custom bindings where your update function does a significant amount of work. Whenever bindings are triggered for the element, your update function will run, even if none of its observables were triggered. If possible, it is a good idea to check whether you need to do work before actually executing your update logic.

Here are a couple of techniques that you can use to help minimize this performance concern:
• Put bindings that will be triggered frequently on a parent or wrapper element rather than with other bindings that perform a lot of work in their update function.
• In a custom binding, use one or more computed observables in the init function to handle the logic that would normally reside in the update function. These computed observables allow you to perform your updates independently of the other bindings on the element.

Additionally, the idea of automatically processing each binding independently is being explored for a future version of Knockout.

#4 EVERYTHING DOES NOT NEED TO BE OBSERVABLE

When creating your view model structures, it is useful to determine which properties actually need to be observable. Knockout can easily bind against plain JavaScript objects/properties, even in two-way scenarios where your model data needs to be updated by user input. If you do not need your UI to react to changes to the properties, and you do not need to programmatically react to changes either through manual subscriptions or computed observables, then you can avoid the minor overhead of using an observable. Consider a scenario where you have an array of read-only data where a user can drill down into an individual item. In that case, you can use an observable for the currently selected item, but leave the actual data as plain JavaScript objects.

var viewModel = {
    posts: [
        { title: "Knockout Performance Tips", description: "..." },
        { title: "Knockout Custom Bindings", description: "..." },
        { title: "Using Knockout with Require.js", description: "..." }
    ],
    selectedPost: ko.observable()
};

There is no need for posts to be an observableArray, unless you will be dynamically adding or removing items from the array. Likewise, the title and description properties would not need to be observable, unless their value could change and you need to react to that change. When you select a post, the selectedPost observable can be populated with the selected item and a section of your UI can react to this change by rendering the details of the post.


#5 COMPUTEDS ARE NOT MAGICAL
A computed observable is a handy tool to create filtered, mapped, and formatted versions of your data. However, computeds can be a source of performance issues if they are re-evaluated too often or do too much work on each re-evaluation. It is helpful to take note of your computed's dependencies, how often any of those dependencies are updated, and how much work your computed must do to determine a new value each time. Suppose that you have a computed observable to determine if all items in a collection are checked. It might look like:

this.allItemsChecked = ko.computed(function() {
    var i, j, items = this.items();
    for (i = 0, j = items.length; i < j; i++) {
        if (!items[i].selected()) {
            return false;
        }
    }
    return true;
}, this);

In this case, we are simply looping through all of our items until we find any item that is not checked. At that point we can quit searching, because we know that not all items are checked. This computed observable will have a dependency on the items observableArray, along with each selected observable up until the point that we find an item that is not selected. So, if we wrote a checkAll function that looped through and marked all items as selected, we would actually trigger this computed to re-evaluate again and again, which could be costly if we have a large list of items to loop through each time. To remedy this issue we could use the throttle extender (described later in #7) or use another observable as a flag to pause and resume evaluation.

A related scenario that can also be problematic is when you perform some type of mapping on a complex structure within a computed observable. For example, you might have a hierarchical structure that includes observables, and then create a computed observable that flattens the objects into an array more suitable for binding with Knockout's foreach binding. If this computed observable references many observables from each level of nesting, then any update to those observables will trigger the computed to perform the mapping again. A better pattern for this scenario is to map your data into a more suitable format up front, when you receive it from the server. If you need to post your view model data back to the server, you can do a mapping in the other direction to match the original format, as necessary. This means that rather than having a computed do this mapping constantly as your data changes, you reduce it to mapping the data only when you are communicating with the server, and you can potentially keep this mapping code separated from the view model.
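A rough sketch of that up-front mapping, assuming a hypothetical {name, children} shape for the server payload: flatten once on receipt, rather than inside a computed that re-runs on every observable change.

```javascript
// One-time flattening of a hypothetical nested payload into an array
// suitable for a foreach binding. Run this when data arrives from the
// server, not inside a computed observable.
function flattenNodes(nodes) {
  var flat = [];
  nodes.forEach(function (node) {
    flat.push(node.name);
    if (node.children) {
      flat = flat.concat(flattenNodes(node.children));
    }
  });
  return flat;
}
```

The flattened result can then be wrapped in observables only where the UI actually needs to react to changes.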


#6 UNOBTRUSIVE EVENT HANDLING (EVENT DELEGATION)

When dealing with hierarchical data, you often bind against an array of items and even multiple levels of child arrays. When using the click and event bindings against these nested objects, you will potentially create and attach a large number of event handlers to cover the events for each element, which can have a definite performance impact. A better alternative in this case is event delegation, where handlers are attached at a higher level and respond to events triggered by the nested elements. This can drastically reduce the number of handlers attached and the time that it takes to wire them up. To facilitate this type of interaction, Knockout has two utility functions, ko.dataFor and ko.contextFor, that allow you to determine the view model data relevant to an element. When used against the element that triggered an event, these functions allow you to perform an action on the associated data without using the click or event bindings. For example, suppose that we have a simple view model with an array of items and a function to remove items at the root level:

var ViewModel = function() {
    this.items = ko.observableArray([
        { name: "one" },
        { name: "two" },
        { name: "three" }
    ]);

    this.removeItem = function(item) {
        this.items.remove(item);
    };
};

Rather than using the click binding on a link or button inside the template rendered for each item, we can instead add a handler to their parent element that will respond to events that bubble up from their children. Using jQuery's on function, this might look like:

$(".items").on("click", "a", function(event) {
    var context = ko.contextFor(this);
    context.$root.removeItem(context.$data);
    event.preventDefault();
});

Calling ko.contextFor on the element that triggered the event gives us the binding context available to that element. Calling ko.dataFor on the element just returns the data that was bound against the element (the same as the context's $data). Using delegated event handling can have a significant impact, especially in older browsers, when you are dealing with a large collection with many handlers/actions on each item. If you are targeting older versions of Internet Explorer, it is beneficial to test early with these browsers to help understand where you may want to employ delegated handlers.
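The core of delegation is just a walk from the event target up to the delegation root. A framework-free sketch of that lookup (the DOM-free shape here, with a parent pointer and a predicate, is purely for illustration):

```javascript
// Walk from the node that received the event up to (but not including)
// the delegation root, returning the first node the predicate matches,
// or null if nothing between target and root matches.
function findDelegateTarget(target, root, matches) {
  var el = target;
  while (el && el !== root) {
    if (matches(el)) {
      return el;
    }
    el = el.parent;
  }
  return null;
}
```

In the browser, event.target, parentNode, and Element#matches supply the three pieces; the single delegated handler then calls ko.contextFor on the matched node exactly as shown above.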


#7 THROTTLING
Knockout includes an extender that helps throttle the number of notifications that observables and computed observables generate. This extender can be a remedy for some of the issues listed earlier. One common use case for the throttle extender is to prevent unnecessary AJAX requests by waiting for a user to finish their action before actually triggering a request. Suppose that we have a simple search box that looks like:

<input data-bind="value: searchTerm, valueUpdate: 'afterkeydown'" />

We might want to go to the server to start searching for potential matches as the user types. However, if the user is typing at a normal pace, we really don't want to trigger a request for each change to the searchTerm observable. Instead, we can use the throttle extender to ensure that we do not actually update the observable until it stops changing for a certain period of time.

var ViewModel = function() {
    this.searchTerm = ko.observable().extend({ throttle: 250 });
    this.searchTerm.subscribe(function(newValue) {
        //trigger AJAX requests to retrieve search results
    });
};

Now as the user types, the search will only be triggered when the user stops typing for more than 250 milliseconds. When used with an observable or writeable computed observable, the throttle extender prevents writes from being committed until there are no further writes for the specified time period. Additionally, when used with a computed observable, the throttle extender prevents re-evaluations of the computed observable until its dependencies stop updating for that time period. A common use case for throttling a computed observable is when you need to update multiple observables that are dependencies of the computed and do not want to trigger multiple re-evaluations. Suppose that we are binding against a computed observable that represents the location of an element, and we want to update both the horizontal and vertical position without causing the UI to update twice.

var ViewModel = function() {
    this.x = ko.observable(3);
    this.y = ko.observable(2);

    this.reset = function() {
        this.x(1);
        this.y(1);
    };

    this.position = ko.computed(function() {
        return {
            left: this.x() * 50 + "px",
            top: this.y() * 50 + "px"
        };
    }, this).extend({ throttle: 1 });
};
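The throttle extender's timing behaves like a classic debounce. A minimal sketch of that mechanism in plain JavaScript (not Knockout's implementation, just the idea):

```javascript
// Debounce: each call resets the timer, so fn only fires once the
// calls stop arriving for `wait` milliseconds.
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(self, args);
    }, wait);
  };
}

var calls = 0;
var search = debounce(function () { calls++; }, 250);
search();
search();
search(); // three rapid calls collapse into one, 250ms after the last
```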


In the reset function, we can now update both the x and y observables without worrying about position being updated multiple times, which would have caused our UI to repaint unnecessarily more than once. The throttle extender can be useful in these scenarios, but you should be careful not to overuse this technique when you have cascading dependencies. The execution ends up happening in a setTimeout, and if the result triggers other throttled re-evaluations, you could end up in a situation where the chain of asynchronous updates actually leads to delays in the UI. While throttling would generally be fine in the above case, a simple alternative is to store x and y as properties of an object held in a single observable, unless you really need to update each property independently.

var ViewModel = function() {
    this.location = ko.observable({ x: 3, y: 2 });

    this.reset = function() {
        this.location({ x: 1, y: 1 });
    };

    this.position = ko.computed(function() {
        var location = this.location();
        return {
            left: location.x * 50 + "px",
            top: location.y * 50 + "px"
        };
    }, this).extend({ throttle: 1 });
};

In this case, you would need to update location atomically or update the object’s x and y properties and then call valueHasMutated() on the observable to trigger notifications to any subscribers.

ABOUT THIS ARTICLE
Ryan Niemeyer has over 13 years of experience as a technical software tester focusing on .NET and web-based technologies. In his spare time, he is active in open source development. As a member of the Knockout.js core team, he focuses primarily on community support, education and documentation. He also develops a number of Knockout plugins and blogs about his experiences at KnockMeOut.net

ONLINE RESOURCES
Knockout.js Home http://knockoutjs.com/
Knockout.js Observables http://knockoutjs.com/documentation/observables.html
Knockout.js Documentation http://knockoutjs.com/documentation/introduction.html
Ryan's blog http://knockmeout.net/
@RPNiemeyer


CROSS-PLATFORM PHONEGAP (AKA CORDOVA) PROJECT TEMPLATES IN A JIFFY!

THERE’S A HELPFUL NEW TOOL AVAILABLE FROM OUR OWN ADOBE ENGINEERS TO HELP YOU SETUP A CROSS-PLATFORM PROJECT TEMPLATE QUICKLY.

BUILD YOUR PHONEGAP PROJECTS
There's a helpful new tool available from our own Adobe engineers to help you set up a cross-platform project template quickly when PhoneGap Build might not be the best option for you to use (for example, when you want more control over your development and testing between platforms, or when using unsupported plugins). This is also a good choice to be aware of even if you are planning to use PhoneGap Build but want to do some specific development and testing first.

The tool is called cordova-client, and it allows you to edit one code base and maintain and manage your overall cross-platform development project with almost no setup hassle or IDE (Eclipse/XCode) overhead. It’s run from the command line and takes advantage of the underlying Cordova command line interface recently released. It’s basically a set of scripts that do all the hassle work for you and keep the code base consistent by copying your latest source into the different supported platforms at build time.

It's very easy to set up and use, and this post is intended to help you get started with it right away.

by Holly Schinsky

I was very excited when I saw this demo'd at

PhoneGap Day in Portland recently, because I have also felt that developers needed something like this to help them become uber-productive in their mobile development, so please read on. The setup can take 10 minutes or less depending on what you already have installed from the prerequisites, but please see the main GitHub site for the requirements and installation instructions, and install before continuing.

IMPORTANT NOTE
Be sure to have android-sdk/tools and android-sdk/platform-tools on your system path, accessible globally, before running the cordova commands for Android; the cordova scripts require access to those tools.

CREATE, DEBUG AND LAUNCH A CROSS-PLATFORM APPLICATION
1. Create a new project by entering the following command in the terminal window starting in the directory where you want to create the new project: cordova create MyCordovaProject org.devgirl.my-cordova-app MyCordovaApp In my terminal window, it looks like this (starting after the hollyschinsky$):

This creates a folder named MyCordovaProject in the current directory, with an application name of MyCordovaApp and a package of org.devgirl.my-cordova-app (also the bundle identifier for iOS).

2. cd into the new project folder (MyCordovaProject in this case) and list the contents. It should show the following:

3. Note: currently the platforms and plugins folders are empty because we haven't specified that any be added yet. The default www folder, however, contains a basic PhoneGap application structure preset for you (it includes js, css and img folders, links to them from index.html, and a config.xml for PhoneGap Build use). It's important to note that this is the folder where you will be developing. It is copied down into the different platforms when you build and launch your application, so the latest source is always used.

Note: If you're not familiar enough with PhoneGap/Cordova to know what this www folder means, it's basically the path to the index.html and associated source code and assets for all PhoneGap/Cordova projects. If you think about the overall architecture of PhoneGap and how it needs to work with different platforms, it makes sense that there needs to be an entry point to bootstrap into index.html. The www folder was chosen as the way to ensure consistency across platforms. For the super-curious (or geeky, you choose), the following is the code used to bootstrap into PhoneGap from iOS (AppDelegate.m) and Android (MyCordovaAppActivity.java) respectively:

iOS Native Hook

self.viewController.wwwFolderName = @"www";
self.viewController.startPage = @"index.html";

Android Native Hook

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    super.loadUrl("file:///android_asset/www/index.html");
}

4. Add your desired platforms. Here I am adding ios, then going into the platforms folder to show that it has been added, then back out and adding android, showing how it exists after the command is run:

cordova platform add ios
cordova platform add android

5. Now you have a basic application scaffolding in place with which you could build and run on all added platforms to see a default Cordova project in action. If one of your platforms is android, you'll need to first either connect an actual Android device or start up an Android virtual device you've previously created, by navigating to MyCordovaProject/platforms/android and typing cordova/emulate. Note: if you don't have an Android virtual device set up or a device of your own to test with, you can create one using the android tool that comes with the required SDK in the prerequisites for this tool. For more details, see this link. Nothing needs to be done before building and emulating for iOS, except, of course, having Xcode (version 4.x) installed. I have not tested with BlackBerry yet, so I cannot speak for that platform. Navigate to your project's platforms folder and type the following:

cordova build && cordova emulate

The result should be an app that looks like the following screenshot:


Note: on Android you *may* see the application get installed but not run immediately, and have to tap to start it. Look for its name in your applications, with a logo that looks like the one above. 6. Now you can go back into the ../MyCordovaProject/www folder that I described above as the base, make the code changes you want, and simply repeat step 5 to see your changes cross-platform! I suggest trying this quickly by making a simple change to index.html and rerunning cordova build && cordova emulate from within the platforms folder to see how easy it is.

ADDITIONAL NOTES
- If you're perusing the generated code, you may notice there's a cordova subfolder created under each of the platforms (../android/cordova, ../ios/cordova etc). Those are created as part of the PhoneGap/Cordova command line interface that this tool basically wraps. The subfolder cordova scripts are intended to do a debug or emulation for just that one particular platform, whereas the cordova-client commands covered above (build, emulate) work at the project level to cover all platforms.
- The cordova-client create command can also be run with just a folder name, and will default the application name to Hello Cordova and the package to io.cordova.hello-cordova, but since the extra parameters to name it yourself aren't much hassle, I showed them in the example above. I also think it helps you understand what the scripts generated when you know exactly what you entered. Also, there's currently a bug in this step where the generated .app file is named Hello Cordova.app, so you need to rename it to Hello_Cordova.app (with the underscore) before you run the emulate command or it will not


find it.
- If you want to get to the files that result from a build (the .apk or .app), you can find them in the following paths:
../MyCordovaProject/platforms/android/bin
../MyCordovaProject/platforms/ios/build
- If you're curious exactly where cordova is installed, the node package manager (npm) installs it on Mac OS to /usr/local/lib/node_modules/cordova. If you end up with any errors about a cordova-2.0.1.jar file missing during the process, check that folder, and if it's missing, copy it from your {phonegap 2.0.1-path}/lib/android folder to /usr/local/lib/node_modules/cordova/lib/android/framework/cordova-2.1.0.jar.

ABOUT THIS ARTICLE
Holly Schinsky is an Adobe Developer Evangelist focused on mobile development using PhoneGap, HTML/CSS & JavaScript. She has 16+ years' experience in software development, including a large focus on Adobe Flex and AIR for front-end and mobile development and Java for development on the server side. She has helped build developer tools such as Tour de Flex, Adobe AIR Launchpad and PhoneGap templates.

ONLINE RESOURCES
PhoneGap Build https://build.phonegap.com/
Cordova client https://github.com/filmaj/cordova-client
HTML & Adobe http://html.adobe.com
Holly's blog http://devgirl.org/
@devgirlFL

FO LL OW

A

PP
JOIN THE APPLINESS COMMUNITY
APPLINESS IS A DIGITAL MAGAZINE WRITTEN BY PASSIONATE WEB DEVELOPERS. YOU CAN FOLLOW US ON YOUR FAVORITE SOCIAL NETWORK. YOU CAN SUBSCRIBE TO OUR MONTHLY NEWSLETTER. YOU CAN CONTRIBUTE AND PUBLISH ARTICLES. YOU CAN SHARE YOUR BEST IDEAS. YOU CAN BECOME APPLI.

- FACEBOOK PAGE - ON TWITTER - ON GOOGLE+ - OUR NEWSLETTER

- CONTACT US - SHARE AN IDEA - WRITE AN ARTICLE - BASH US

appliness

DON’T WORRY, BE APPLI

IPHONE 5 AND IOS 6 FOR HTML5 DEVELOPERS, A BIG STEP FORWARD: WEB INSPECTOR, NEW APIS AND MORE

THE NEW MAJOR VERSION OF APPLE’S IOS IS WITH US, ALONG WITH THE NEW IPHONE 5 AND THE FIFTH-GENERATION IPOD TOUCH. AS WITH EVERY BIG CHANGE, A LOT OF NEW STUFF IS AVAILABLE FOR HTML5 DEVELOPERS AND -AS ALWAYS- NOT MUCH OFFICIAL INFORMATION IS AVAILABLE.


Other Smaller Updates

iPhone 5

Still Waiting

iOS6 & HTML5 Dev

Final Thoughts

Playground

Difficulty
- rookie - intermediate - expert

- HTML5 - Safari - JS

Todo list
- test - compare - adapt
by Maximiliano Firtman

QUICK REVIEW
I’m going to divide this post into two parts: iPhone 5 and iOS 6 new stuff.

On iPhone 5:
• New screen size
• New simulator
• What you need to do
• Problems

New features on iOS 6:
• File uploads and camera access with Media Capture and File API
• Web Audio API
• Smart App Banners for native app integration
• CSS 3 Filters
• CSS 3 Cross Fade
• CSS Partial Image support
• Full screen support
• Animation Timing API
• Multi-resolution image support
• Passbook coupons and passes delivery
• Storage APIs and web app changes
• Web View changes for native web apps
• Debugging with Remote Web Inspector
• Faster JavaScript engine and other news

IPHONE 5
The new iPhone 5 -along with the iPod Touch 5th generation- has only one big change in terms of web development: screen resolution. These devices have a wide 4” screen, WDVGA (Wide Double VGA), 640×1136 pixels at 326 DPI -Retina Display, as Apple calls it. These devices have the same width as the iPhone 4/4S but 176 more pixels of height in portrait mode.

NEW SIMULATOR
The new Xcode 4 (available on the Mac App Store) includes the updated iOS Simulator. The new version has three options for iPhone simulation:
• iPhone: iPhone 3GS, iPod Touch 1st-3rd generation
• iPhone Retina 3.5″: iPhone 4, iPhone 4S, iPod Touch 4th generation
• iPhone Retina 4″: iPhone 5, iPod Touch 5th generation
The new simulator also includes Passbook and the new Maps application that replaces Google Maps by default.

WHAT YOU NEED TO DO FOR THE NEW DEVICES
Usually, if your website/app is optimized for vertical scrolling, you should not have any problem. The same viewport, icons and techniques as for iPhone 4/4S should work properly. Remember, when updating the iOS, you are also updating the Web View: that means that all the native web apps -such as PhoneGap/Apache Cordova apps- and pseudo-browsers such as Google Chrome for iOS are also updated. However, if your solution is height-dependent, then you may have a problem. Just look at the following example of the Google Maps website on iPhone 4 and iPhone 5. As it is taking the height as a constant, the status bar is not hidden and there is a white bar at the bottom. If you are using Responsive Web Design you should not have too much trouble, as RWD techniques usually use the width and not the height for conditionals.
iOS Simulator on Xcode 4 includes iPhone 5 emulation

Be careful if you’ve designed for a specific height, as Google Maps did. As you can see (the right capture is from an iPhone 5), there is a white bottom bar and the URL bar can’t be hidden, as there is not enough content.


DEVICE DETECTION
At the time of this writing there are no iPhone 5 devices on the street yet. However, in every test I could run, there is no way to detect the iPhone 5 server-side. The user agent only specifies an iPhone with iOS 6, and the exact same user agent is used for an iPhone 4S with iOS 6 and an iPhone 5:

Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A403 Safari/8536.25

Therefore, the only way to detect the presence of a 4″ iPhone device is to use JavaScript and/or media queries, client-side. If you need to know server-side, you can plant a cookie from the client for the next load. Remember that these devices are 1136 pixels high, but in terms of CSS pixels (device-independent pixels) we are talking about 568 pixels of height, as these devices have a pixel ratio of 2.

isPhone4inches = (window.screen.height==568);

Using CSS Media Queries and Responsive Web Design techniques, we can detect the iPhone 5 using:

@media (device-height: 568px) and (-webkit-min-device-pixel-ratio: 2) { /* iPhone 5 or iPod Touch 5th generation */ }

HOME SCREEN WEBAPPS
For Home Screen webapps the problem seems important. I reported the problem while under NDA, without any answer from Apple yet. Basically, when you add a website that supports the apple-mobile-web-app-capable meta tag to the Home Screen, your webapp works only in iPhone 3.5″ emulation mode (it’s not taking the whole height), as you can see in the following example from the Financial Times webapp. While it’s a good idea not to add more height to a webapp if the OS is not sure about its compatibility on a taller screen, as far as I could test there is no way to declare that our webapp is 4″ compatible.
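Putting the client-side checks above together, here is a hedged sketch that detects a 4″ device and plants a cookie so the server can know on the next load (the cookie name is an assumption, not any standard):

```javascript
// Sketch: detect the 4" devices client-side (568 CSS px high, pixel ratio 2)
function isFourInchScreen(cssHeight, pixelRatio) {
  return cssHeight === 568 && pixelRatio === 2;
}

if (typeof window !== "undefined") {
  var fourInch = isFourInchScreen(window.screen.height, window.devicePixelRatio);
  // Plant a cookie so the server can branch on the next request
  document.cookie = "is4inch=" + (fourInch ? "1" : "0") + "; path=/";
}
```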
I’ve tried as many combinations as I could think of, and if you provide an apple-touch-startup-image of 640×1096, the iPhone 5 takes your splash screen but it’s resized to 640×920, at least in the Simulator for the GM build (almost the final version).

UPDATE 9/20: Solution found. Thanks to some readers who were pointing at possible solutions, I’ve found the trick. As weird as it sounds, you need to forget about a viewport with width=device-width or width=320. If you don’t provide a viewport, it will work properly. The same goes if you use properties other than width; if you don’t want your viewport to be the default 980px, the way to do it is:

<meta name="viewport" content="initial-scale=1.0">


The letterbox black bars you see here are not a problem in the image. That is how a full-screen webapp is being launched by default on iPhone 5 and the new iPod Touch.

Even if you use a viewport for a specific size different than 320, the letterbox will not be present:

<meta name="viewport" content="width=320.1">

Instead of changing all your viewports right now, the following script will do the trick, changing it dynamically:

if (window.screen.height==568) { // iPhone 4”
  document.querySelector("meta[name=viewport]").content="width=320.1";
}

The startup image has nothing to do with the letterbox, as some developers were reporting. Of course, if you want to provide your launch startup image it has to be 640×1096, and you can use media queries to use different images on different devices. Some reports were saying that you need to name the launch image as in native apps, “Default-568h@2x.png”, but that’s not true. You can name it however you want. The sizes attribute is completely ignored. You can use media queries to provide different startup images:

<link href="startup-568h.png" rel="apple-touch-startup-image" media="(device-height: 568px)">
<link href="startup.png" rel="apple-touch-startup-image" sizes="640x920" media="(device-height: 480px)">

If you want to provide an alternative version for low-resolution devices, you can use the -webkit-device-pixel-ratio conditional too. If you are wondering why 568px and not 1136px, remember that we are using CSS pixels, and on these devices the pixel ratio is 2. The trick is the viewport. Why? I don’t really know. For me, it’s just a bug. But it’s the only solution I’ve found so far. The other problem is with Home Screen icons that you already have before buying your new device. iTunes will install the shortcut icon again from your backup, and it’s not clear if we are going to have a way to upgrade the compatibility. Even if you change the viewport, if the icon was installed before the change you will get the letterbox.
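The startup-image choice above can also be computed in JavaScript; a hedged sketch follows (the file names are assumptions, and note that Safari may only honor startup images declared in the initial HTML, so the injection part is speculative):

```javascript
// Sketch: pick a startup image by CSS height and pixel ratio
// (568 = 4" devices, 480 = 3.5" devices; file names are assumptions)
function startupImageFor(cssHeight, pixelRatio) {
  if (cssHeight === 568) return "startup-568h.png";
  return pixelRatio === 2 ? "startup@2x.png" : "startup.png";
}

if (typeof document !== "undefined") {
  var link = document.createElement("link");
  link.rel = "apple-touch-startup-image";
  link.href = startupImageFor(window.screen.height, window.devicePixelRatio);
  document.getElementsByTagName("head")[0].appendChild(link);
}
```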

IOS 6 AND HTML5 DEVELOPMENT
iOS 6 is available as a free update for every iOS 5 device except the first-generation iPad, so we will see this version browsing the web really soon, and the iPad market is being fragmented for the first time. The following findings are useful for all iOS devices that are taking the iOS 6 upgrade. As always -and unfortunately- Apple gives us just partial and incomplete updates on what’s new in Safari, and I -as always- enter the hard work of digging into the DOM and other tricks to find new compatibility.

FILE MANAGEMENT
Finally! Safari for iOS 6 supports the file upload input type, with partial HTML Media Capture support. A simple file input like the following will ask the user for a file, from the Camera or the Gallery, as you can see in the figure. I really like how Safari shows you thumbnails instead of a temporary filename after selecting your image.

<label>Single file</label>
<input type="file">

We can also request multiple files using the new HTML5 multiple boolean attribute. In this case, the user can’t use the camera as a source.

<label>Multiple files</label>
<input type="file" multiple>

We can access the camera and gallery using file uploads
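Once the user has picked an image or video, a common next step is posting it to a server. A hedged sketch using XMLHttpRequest 2 with upload progress (the input id "picker" and the /upload endpoint are assumptions):

```javascript
// Helper: turn loaded/total bytes into a whole percentage
function percentDone(loaded, total) {
  return total ? Math.round((loaded / total) * 100) : 0;
}

if (typeof document !== "undefined") {
  document.getElementById("picker").addEventListener("change", function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload");
    // XHR2 exposes upload progress events
    xhr.upload.onprogress = function (e) {
      console.log(percentDone(e.loaded, e.total) + "% uploaded");
    };
    var form = new FormData();
    form.append("media", this.files[0]); // the file the user just picked
    xhr.send(form);
  });
}
```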

There is no way to force the camera, such as by using capture="camcorder". However, we can specify whether we want to capture images or videos only, using the accept attribute:

<input type=file accept="video/*">
<input type=file accept="image/*">

There is no support for other kinds of files, such as audio, Pages documents or PDFs. There is no support for getUserMedia for live camera streaming. What can you do with the image or video after it’s selected?
• Send it using a multipart POST form action (the old-fashioned upload mechanism)
• Use XMLHttpRequest 2 to upload it using AJAX (even with progress support)
• Use the File API, available on iOS 6, which allows JavaScript to read the bytes directly and manipulate the file client-side. There is a good example of this API in action on HTML5Rocks.

WEB AUDIO API
HTML5 game developers should be happy! The Web Audio API appears in a mobile browser for the first time. This API allows us to process and synthesize audio in JavaScript. If you have never played with low-level audio, the API may seem a little weird, but after a while it is not so hard to understand. Again, HTML5Rocks has a great article for getting started with the Audio API. More information and news on the API at http://www.html5audio.org

SMART APP BANNERS
Website or native app? If we have both, now we can join efforts and connect our website with our native app. With Smart App Banners, Safari can show a banner when the current website has an associated native app. The banner shows an “INSTALL” button if the user doesn’t have the app installed, or a “VIEW” button to open it if installed. We can also send arguments from the web to the native app. The typical case is to open the native app on the same content that the user was seeing on the website. To define a Smart App Banner we need to create a meta tag with name="apple-itunes-app". We first need to search for our app on iTunes Link Maker and take the app ID from there.

<meta name="apple-itunes-app" content="app-id=9999999">

We can provide a string value for arguments using app-argument, and if we participate in the iTunes Affiliate program we can also add affiliate-data in the same meta tag:

<meta name="apple-itunes-app" content="app-id=9999999, app-argument=xxxxxx">
<meta name="apple-itunes-app" content="app-id=9999999, app-argument=xxxxxx, affiliate-data=partnerId=99&siteID=XXXX">

The banner takes 156 pixels (312 on hi-dpi devices) at the top until the user taps on the banner or on the close button, and then your website gets the full height back. It acts like a DOM object at the top of your HTML, but it’s not really in the DOM. On the iPad -and more so in landscape- it seems a little space-wasting.

With Smart App Banners, the browser will automatically invite the user to install or open a native app

For a few seconds, the banner shows a “loading” animation while the system verifies that the suggested app is valid for the current user’s device and App Store. If it’s not valid, the banner hides automatically; for example, if it’s an iPad-only app and you are browsing with an iPhone, or if the app is available only in the German App Store and your account is in the US.

CSS 3 FILTERS
CSS 3 Filters are a set of image operations (filters) that we can apply using CSS functions, such as grayscale, blur, drop-shadow, brightness and other effects. These functions are applied before the content is rendered on screen. We can chain multiple filters using spaces (similar to transforms). You can try a nice demo here. A quick example of what it looks like:

-webkit-filter: blur(5px) grayscale(.5) opacity(0.66) hue-rotate(100deg);

CSS 3 CROSS-FADE
iOS 6 starts supporting some of the new CSS Image Values standard, including the cross-fade function. With this function, we can overlay two images in the same place with different levels of opacity, and it can even be part of a transition or animation. Quick example:

background-image: -webkit-cross-fade(url("logo1.png"), url("logo2.png"), 50%);
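Both features can also be applied from JavaScript by setting the prefixed style properties; a hedged sketch (the element id and image names are assumptions):

```javascript
// Helper: build a chained -webkit-filter value from parameters
function filterValue(blurPx, gray) {
  return "blur(" + blurPx + "px) grayscale(" + gray + ")";
}

if (typeof document !== "undefined") {
  var photo = document.getElementById("photo");
  // Chained filters, applied before the element is rendered
  photo.style.webkitFilter = filterValue(5, 0.5);
  // Cross-fade two images at 50% as the element background
  photo.style.backgroundImage =
    "-webkit-cross-fade(url('logo1.png'), url('logo2.png'), 50%)";
}
```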


FULL SCREEN IN SAFARI
Besides the chrome-less home screen meta tag, the iPhone and iPod Touch (not the iPad) now support a full-screen mode when in landscape. This is perfect for immersive experiences such as games or multimedia apps. There is no way to force full-screen mode; it needs to be launched by the user (the last icon on the toolbar). However, we can invite the user to rotate to landscape first and press the full-screen icon to activate our app. If we mix this with some touch event handling, we can hide the URL bar and provide a good interface until the user escapes from full-screen mode.

Fullscreen navigation on iPhone and iPod Touch

You will always find two or three overlay buttons at the bottom that your design should be aware of: the back button, the optional forward button and the cancel full-screen button. You can use the onresize event to detect whether the user is changing to full-screen while in landscape.

ANIMATION TIMING API
Game developers, again, you are lucky. iOS 6 supports the Animation Timing API, also known as requestAnimationFrame, a new way to manage JavaScript-based animations. It’s webkit-prefixed, and for a nice demo and a more detailed explanation check this post from Paul Irish.

CSS IMAGE SET
This is not part of any standards group yet. It’s a new image function, called image-set, receiving a group of images with conditions to be applied. The only compatible conditions right now seem to be 1x and 2x, for low-density and high-density devices respectively. With this new function we don’t need to use media queries to define different images for different resolutions. The working syntax is:

-webkit-image-set(url(low.png) 1x, url(hi.jpg) 2x)

It works in CSS, for instance as a background-image. I couldn’t make it work on the HTML side, for the src attribute of an img element or the new proposed picture element. With this new syntax we have a clearer multi-resolution image definition, as we don’t need to use media queries and background-size values.
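The Animation Timing API above boils down to a loop like this minimal sketch (the setTimeout fallback for non-supporting environments is an assumption, not part of the API):

```javascript
// Request one animation frame, preferring the standard and webkit-prefixed
// APIs and falling back to a ~60fps timer elsewhere
function raf(callback) {
  if (typeof window !== "undefined" && window.requestAnimationFrame) {
    return window.requestAnimationFrame(callback);
  }
  if (typeof window !== "undefined" && window.webkitRequestAnimationFrame) {
    return window.webkitRequestAnimationFrame(callback);
  }
  return setTimeout(function () { callback(Date.now()); }, 1000 / 60);
}

// A tiny frame counter to drive the loop
function makeFrameCounter() {
  var frames = 0;
  return function () { frames += 1; return frames; };
}

var nextFrame = makeFrameCounter();
raf(function loop() {
  nextFrame();   // update and draw one frame here
  // raf(loop);  // re-request to keep the animation going
});
```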


PASSBOOK COUPONS AND PASSES DELIVERY
Passbook is a new app in iOS that works as a virtual container for all your passes, tickets, discount coupons, loyalty cards and gift cards. As a web developer you may want to serve the user a discount coupon, a ticket to an event, an e-ticket for the next flight or a loyalty card. Apple allows websites to deliver these kinds of passes without the need for a native app. To deliver the pass on your website you just need to use the MIME type application/vnd.apple.pkpass, or send it through email. Apple provides a tool that you can install on your server to package and sign customized passes on the fly that may include current user information. The pass file is just a JSON meta-data file and a couple of images. We need to package the file and sign it. Unfortunately, to sign the pass we need a signature from Apple, and that means that the web developer needs an iOS Developer Program account ($99/year). If you receive the pass already signed, you can just serve it from your own site. One of the great features of passes is that once a pass is installed, you can provide some web services on your end and, through the Push Notification Service, the operating system will call your web services to update the information on the pass. More information at developer.apple.com/passbook

STORAGE APIS AND WEBAPP UPDATES
No, there is no new storage API available. There is no support for IndexedDB yet. However, there are some changes you should take into consideration:
• The Application Cache limit was increased to 25Mb.
• Chromeless webapps (using the apple-mobile-web-app-capable meta tag) now have their own storage sandbox. That means that even if they are served from the same domain, the web app on the Home Screen will have its own persistent Local and SQL Storage. Even if you install the icon many times, every icon will have its own sandbox.
While this is good news for apps, it may also be a problem for some apps if you are passing information from the website to the home screen widgets through storage. Credit for this finding goes to George Henne in his post.
• There is a new undocumented meta tag that can be used on any website (whether it has the apple-mobile-web-app-capable meta tag or not) that allows us to define a different title for the Home Screen icon. As you may know, by default Safari takes the document’s title and crops it to 13 characters. Now we can define an alternative title for the Home Screen using:

<meta name="apple-mobile-web-app-title" content="My App Name">

I’ve also found a meta tag called apple-mobile-web-app-orientations accepting the possible values portrait, portrait-upside-down, landscape-right, landscape-left, portrait-any. Unfortunately, I couldn’t make it work. If you have any luck, feel free to comment.

WEB VIEW UPDATES
In a Web View (pseudo-browsers, PhoneGap/Cordova apps, embedded browsers) JavaScript now runs 3.3x slower (or let’s say that the Nitro engine in Safari and web apps is 3.3x faster). Be careful about the 3.3x: that is just the difference when running SunSpider on the same device in a Web View and in Safari. However, SunSpider does not cover every possible kind of app, and your total rendering time is not just JavaScript, so


this doesn’t mean that your app runs 3.3x slower. We can find some other good news:
• Remote Web Inspector for webapp debugging
• A new suppressesIncrementalRendering Boolean attribute that can eliminate the partial rendering mechanism. I believe this feature is useful to reduce the perception of loading a web page instead of being in an app.
• A new WebKitStoreWebDataForBackup Info.plist Boolean feature where we can define that we want localStorage and Web SQL databases to be stored in a place that gets backed up, such as iCloud. This problem appeared in iOS 5.0.1; now it’s solved.
• Changes in the developer agreement: it seems that the restriction of using only the native Web View to parse HTML and JS is gone. It would be good if someone from Apple could confirm this. The only mention of the internal WebKit engine is that it’s the only engine allowed to download and execute new code; that’s the anti-Chrome statement. You can use your own engine, but only if you are not downloading the code from the web. This may be opening a door... such as delivering our own engine, for example, with WebGL support.

REMOTE DEBUGGING
I’ve kept this topic for the end because it’s a huge change for web developers. For the first time, Safari on iOS includes an official Remote Web Inspector. Therefore tools such as iWebInspector or Weinre become obsolete as of this version. The Remote Debugger works with the Simulator and with real devices via USB connection only. To start a remote inspection session you need Safari 6 for desktop. Here comes the bad news: you can only debug your webapp from a Mac desktop computer. It was a silent change, but Safari for Windows is not available anymore, so it’s stuck at 5.x. Therefore, only with a Mac OS computer can you run web debugging sessions on your iOS devices (at least officially, for now). For security reasons, you first need to enable the Web Inspector from Settings > Safari > Advanced.
The new Inspector means that the old JavaScript console is not available anymore.

You can start a debugging session with:
• A Safari window on your iOS device or simulator
• A chrome-less webapp installed on your iOS device or simulator
• A native app using a Web View, such as Apache Cordova/PhoneGap apps
When talking about native apps, you can only inspect apps that were installed on the device by Xcode (your own apps). Therefore, there is no way to inspect websites in Google Chrome for iOS, for example.

If you are used to the WebKit Inspector -Safari 5 or Chrome-, you are going to see a completely redesigned version of the inspector in Safari 6, based on the Xcode native development UI. You will be lost for a while understanding the new UI. With an inspector session, you can:
• See and make live changes to your HTML and CSS
• Access your storage: cookies, local storage, session storage and SQL databases
• Profile your webapp, including performance reports for network requests, layout & rendering, JavaScript and events. This is a big step in terms of performance tools.
• Search your DOM
• See all the warnings and errors in one place
• Manage your workers (threads)
• Manage JavaScript breakpoints, and define an uncaught-exception breakpoint
• Access the console and execute JavaScript
• Debug your JavaScript code
• Touch to inspect: there is a little hand icon inside the inspector that allows you to touch on your device and find that DOM element in the inspector
Well done Apple, we were waiting for this on iOS for a long time. Apache Cordova users should also be happy with this feature.

OTHER SMALLER UPDATES
• Apple claims to have a faster JavaScript engine, and it seems to be true. On the SunSpider test I’m seeing a 20% improvement in JavaScript performance on the same device between iOS 5.1 and iOS 6.
• Google Maps is not available anymore on iOS 6; now http://maps.google.com redirects to the Google Maps website and not the native app. Therefore, there is a new URL scheme, maps, that will open the new native Maps application. The syntax is maps:?q=<query>, and query can be just a search, or latitude and longitude separated by a comma. To initiate route navigation, the parameters are: maps:?saddr=<source>&daddr=<destination>.
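The maps: URL scheme above is easy to build in JavaScript; a small sketch (the query strings are illustrative):

```javascript
// Build a maps: search URL (maps:?q=<query>)
function mapsSearchUrl(query) {
  return "maps:?q=" + encodeURIComponent(query);
}

// Build a maps: route URL (maps:?saddr=<source>&daddr=<destination>)
function mapsRouteUrl(source, destination) {
  return "maps:?saddr=" + encodeURIComponent(source) +
         "&daddr=" + encodeURIComponent(destination);
}

// Usage in a page would be something like:
// location.href = mapsSearchUrl("coffee");
```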


• XHR2: the XMLHttpRequestProgressEvent is now supported
• The autocomplete attribute of the input element is officially in the DOM
• Mutation Observers from DOM4 are now implemented. You can catch a change in the DOM using the WebKitMutationObserver constructor
• Safari no longer always creates hardware-accelerated layers for elements with the -webkit-transform: preserve-3d option. We should stop using it for performance techniques.
• Selection API through window.getSelection()
• The <keygen> element
• Canvas updates: createImageData now has a one-parameter version, and there are two new functions whose names suggest they are prepared to provide high-resolution images: webkitGetImageDataHD and webkitPutImageDataHD
• Updates to the SVG processor and event constructors
• New CSS viewport-related units: vh (viewport height), vw (viewport width) and vmin (the minimum of vw and vh)
• CSS3 Exclusions and CSS Regions were available in beta 1 but were removed from the final version. It’s a shame, although they were too new and not mature enough.
• iCloud tabs: you can synchronize your tabs between all your devices, including Macs, iPhones and iPads, so the same URL will be distributed to all devices. Be careful with your mobile web architecture!
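The Mutation Observers bullet above can be sketched like this (the counting helper is mine, not part of the API):

```javascript
// Summarize a batch of mutation records by type
function countByType(mutations) {
  var counts = {};
  mutations.forEach(function (m) {
    counts[m.type] = (counts[m.type] || 0) + 1;
  });
  return counts;
}

// Observe the document with the webkit-prefixed constructor where available
if (typeof WebKitMutationObserver !== "undefined") {
  var observer = new WebKitMutationObserver(function (mutations) {
    console.log(countByType(mutations));
  });
  observer.observe(document.body, { childList: true, subtree: true });
}
```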

WHAT WE ARE STILL WAITING FOR
There are a couple of things that we still need to wait for in a next version, such as:
• IndexedDB
• FileSystem API
• Performance Timing API
• WebRTC and getUserMedia
• WebGL (still there and still disabled)
• Orientation Lock API for gaming/immersive apps
• Integration with the native Facebook and Twitter account APIs, so we can use the current OS user’s credentials automatically instead of forcing the user to log in again

FINAL THOUGHTS
Safari on iOS 6 is a big step for HTML5 developers: debugging tools, new APIs, better JavaScript performance. However, I must say that Apple is still forgetting about documentation updates and proper communication with web developers. There are almost no answers on the discussion forum and no updates to the Safari documentation (some docs are really too old right now). I believe Apple must do a better job of supporting web developers. Did you find any other new stuff? Another problem or bug? Anything else you were waiting for? Feel free to comment in the section below or contact me on Twitter.

appliness

DON’T WORRY, BE APPLI

HOW I ENDED UP ENJOYING JAVASCRIPT
OMAR WRITES ABOUT THE TOOLS HE USED TO TRANSITION FROM FLEX TO JAVASCRIPT...

Eleven months ago I would have cringed at the thought of having to build large web applications, like the ones I’d usually built in Flex, using JavaScript. Over the years, my JS usage became less and less; embed a SWF, fix an IE bug. At the same time, the opportunity to work on a big web app that would need to be written in JavaScript was becoming a real possibility. I began to research the latest JavaScript techniques, tools and best practices being used by the JavaScript community.

What I’ve learned over the last eleven months is that I am also capable of enjoying big development projects in JavaScript, even after being mainly an ActionScript developer for close to 10 years, and even after being so adamantly opposed to dynamic, loosely typed languages like JavaScript after having grown so accustomed to a strictly typed language like ActionScript. With time and some good tool choices, I’ve now gotten to the point where I can enjoy a JavaScript project just as much as any Flash, Flex or other project I’ve coded throughout the years. Below I’ll talk about some of the tool choices I’ve made to help make my life in the JavaScript world more enjoyable.

The first thing that I noticed in the modern JavaScript world was that there are about as many frameworks as there are JavaScript developers. It’s like there is a new framework named after some fruit or celebrity about every day. Going through so many frameworks and libraries is a daunting task. I spent a lot of hours reading and watching video tutorials about as many frameworks as I could possibly find. Below I will go into my toolset of choice. I am not saying this combination will solve everyone’s problems, or that this is the best combination for you. I will instead present the toolset that has made me a happy JS dev, explain why I chose it, and list some alternatives where I can. So here goes!

Playground

- Require.js - Node.js - Angular.js

Difficulty
- rookie - intermediate - expert

Todo list
- load - test - lint
by Omar Gonzalez

SCRIPT LOADERS
One of the first things I knew could be a mess with JavaScript development is script management. The old days of having over 20 script tags in an HTML file and having to manage those was not something I was looking forward to dealing with. Scripts arranged this way can be fragile. Are they in the right order? Will scripts break if I have to add more code? Script loaders help you keep your JS code organized into “class” modules. My other goal in finding a script loader was being able to use a single script tag.

REQUIRE.JS
I’ve been using require.js to do all my script loading. Aside from giving me a more maintainable set of <script/> tags in my HTML pages, it also provides a great file concatenator and optimizer that will package all of your JavaScript files into a single file and then minify it with Uglify.js or the Google Closure Compiler. Require.js is an AMD module loader; AMD is a JavaScript module pattern that helps you keep your code out of the JavaScript global scope to prevent function name collisions. There are other module patterns, like the CommonJS module approach, which is the pattern used in node.js scripts. The require.js approach can also be used in a node.js context.

Alternatives: Shepherd.js, Browserify.js, Common.js
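A minimal sketch of the AMD pattern require.js loads; writing the module body as a plain factory function keeps the logic out of the global scope and testable outside the browser (the module and function names are assumptions):

```javascript
// The module body as a plain factory: no globals, easy to unit test
function greeterFactory() {
  return {
    greet: function (name) { return "Hello, " + name; }
  };
}

// Register as an AMD module when a loader like require.js is present
if (typeof define === "function" && define.amd) {
  define("greeter", [], greeterFactory);
}

// A consumer loaded from the single entry-point script would then write:
// require(["greeter"], function (greeter) {
//   console.log(greeter.greet("appliness"));
// });
```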

AUTOMATED TESTING
Running automated tests is important for any software development. When you’re coding in a language like JavaScript, I would say it is even more so. This is a space where JavaScript has very many options to choose from. In my experience, looking through all of the options, this really becomes a matter of syntactical personal preference.

JASMINE
For unit testing my JavaScript code I use the Jasmine BDD-style framework. I really like its readability; the API it provides creates what I think is a very readable set of tests. It comes with its own mocking/spy framework, or you can use something like Sinon.js to create your mocks and spies. There are also different ways to write Jasmine tests and execute them. You can set up your Jasmine test runner in an HTML file and just navigate to the page. I prefer to use jasmine-node, a node.js-based Jasmine spec runner. I prefer this approach because I don’t have to manually add to a test suite. I can simply set a spec path and jasmine-node will run all the spec files it finds.

Alternatives: QUnit, Mocha, YUI Test, Buster.js

JASMINE-NODE
The jasmine-node npm package lets you run Jasmine specs in a node environment. More importantly, if you’re writing require.js modules, you need your AMD modules to run in node, and node’s require expects CommonJS modules (module.exports = {}). Using the jasmine-node options you can configure it to use require.js. You can then write your browser JS modules as usual in closures with define() calls. Then in your Jasmine specs, simply wrap them in a require() call and run jasmine-node, like this:

require(["path/to/ClassUnderTest"], function (ClassUnderTest) {
  describe("ClassUnderTest()", function () {
    it("does stuff", function () {
      // test something
    });
  });
});


This lets you run your AMD modules through Jasmine on node with jasmine-node, and then you don’t have to manually create suites or test runner HTML pages. I find it more productive. Check out the article “Testing with Node, Jasmine and Require.js” below for info on resolving global objects some of your scripts may depend on, like window, document, etc. Angular.js also solves some of these issues with the abstracted services it provides, like $window and $document, and its dependency injection system; more on that below.

CUCUMBER.JS
Unit tests are cool, but they’re not going to ensure your JS modules are all tied together and working as expected once you’ve integrated all your units and run them in the browser. Cucumber.js is a JavaScript port of the popular Ruby BDD testing framework Cucumber. It lets you write Gherkin-based application specifications that you then link with Cucumber.js in what are referred to as step definitions, to actually test your application specification. I’ve used Cucumber.js for web apps, doing headless browser testing with Zombie.js running on Node.js. I’ve also used it to spec, build, and test CLI apps built on node.js. In fact, Cucumber.js can even test itself!

Alternatives: None that I know of.

CSS PREPROCESSORS
With all of the experimentation going on at the different browser vendors, it is increasingly difficult to keep track of things like browser-specific prefixes and which platforms they actually work on. CSS preprocessors help keep your CSS maintainable, not only by helping to normalize browser-specific prefixes but also by introducing language features not currently present in CSS, such as easy-to-use variables and better CSS specificity in your style definitions.

LESS
My preprocessor of choice is the LESS preprocessor. I haven’t found much difference in the usage itself between LESS and alternatives like SASS. LESS chooses a syntax that allows all CSS syntax to be valid, and then adds to it with variables, mixins, and nested rules. LESS has functions; SASS has inheritance and a brace-less syntax option (a la CoffeeScript, Haml, etc.). The main two reasons I chose LESS were a build tool called LESSLESS, which lets me easily compile LESS code into CSS at build time, and less-elements, which solved my prefix concerns by providing a set of mixins ready for use. Nice.

Alternatives: SASS, Stylus, Compass


APPLICATION FRAMEWORKS
This decision took the longest for me, as I suspect it will for anyone currently looking to switch from another language or framework to something different or new. For me the decision came down to three choices: Ext.js, Ember.js and Angular.js.

ANGULAR.JS
Ultimately I chose Angular.js for a few reasons. The first doesn't appear to be as big an issue as it was about 10 months ago, but at that time the documentation for Ember.js was not nearly as thorough as it was for Angular.js. Angular.js has a great tutorial, dev guide and API reference, which were mostly in place when I was evaluating these frameworks.

Testability was also a determining factor for me. Even in its earliest days, Angular.js has always had a lot of examples on unit testing its parts, and all of the patterns in the framework are written in very testable forms. From the dependency injection it provides, to built-in framework mocks, Jasmine test runners and test samples, it really proved to me that the Angular.js team held testability as a first-class citizen in the framework, and not just an afterthought. As their tutorial teaches you how to use Angular.js, it also teaches you how to test your code along with it. In evaluating Ext.js, there is no doubt that it provides a great component set. I did not like what I saw of the MVC it comes with, which is optional. That said, Angular.js is extremely flexible, and so Ext.js components can be easily used in Angular.js directives, giving you a truly declarative and decoupled interface layer. Ext.js says it's declarative, but writing markup in a JavaScript string doesn't make it declarative. At least not in my opinion. But because I could still use Ext.js components, or Twitter Bootstrap, or jQuery UI, at my discretion in Angular.js, I ultimately chose Angular.js. Alternatives: Ember.js, Ext.js, Backbone.js, Meteor.js, Knockout.js, Sammy.js

LINTING
Linting is the process of going through your code and identifying potentially dangerous or buggy code, usually based on static analysis. It helps to identify some of the really common pitfalls in JavaScript and in coding in general: things like accidentally using "=" in an if comparison, or using "==" instead of "===".

JSHINT
The two popular choices are JSLint and JSHint. I chose JSHint because it provides more options, so you can configure it to your liking. You can get as strict as you want or relax some of the rules that it imposes. Alternatives: JSLint, Google Closure Linter
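For instance, a small .jshintrc sketch; these are real JSHint option names, but the particular selection is my own illustration (JSHint permits comments in this file):

```json
{
  // require === and !== instead of == and !=
  "eqeqeq": true,
  // require curly braces around all blocks
  "curly": true,
  // prohibit use of undeclared variables
  "undef": true,
  // keep warning about assignments in conditions, e.g. if (x = y)
  "boss": false
}
```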


THAT’S A LOT OF STUFF TO RUN
Each one of these tools provides valuable benefits. However, running all of them manually to reap those benefits is not productive. The idea is to make the development process more enjoyable, not more burdensome. Automation does not stop at the testing layer. All of these steps, from compiling LESS and switching .less file links to .css links, to concatenating JavaScript, running unit tests, running Cucumber suites, minifying CSS and JS files, and linting code, can be easily automated with one of many build tools.

GRUNT.JS AND NODE.JS
A big part of why Node.js became so popular was its use as a quick web server and the many possibilities it opened to JavaScript developers on the server side. But a side of Node.js not often talked about is its role as a great build environment. I run unit tests with Jasmine using the jasmine-node module for Node.js. I run Cucumber.js tests using its command line interface, which runs on Node.js. I use a Node-based web server to serve my static HTML files to Zombie.js, a Node-based headless web browser you can script in JavaScript from within Cucumber.js step definitions. And finally, I use grunt.js to write my build scripts in JavaScript, also running on Node.js. Not only are my builds extremely fast on Node, but they're easy to maintain. The grunt.js framework has over 100 plugins already. I wrote the grunt-jasmine-node plugin, and also the grunt-cucumber plugin. It's fast and easy to set up a grunt file, in JavaScript, and automate your builds. I like being able to have one language for the entire toolchain that I use, even though I know other languages and don't have an issue switching. Especially in big projects, it reduces the requirements for a developer to be effective across your entire project. Alternatives: ANT, make, rake, jake
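As an illustrative sketch only, a Gruntfile wiring a few of these tools together; the plugin names are modeled on the grunt-jasmine-node and grunt-cucumber plugins mentioned above, while the config keys and file paths are assumptions, not taken from the article:

```javascript
// Gruntfile sketch: one "default" task that lints, runs unit
// specs, then runs Cucumber features. Config keys and paths
// are illustrative assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      all: ["src/**/*.js"]
    },
    jasmine_node: {
      specFolders: ["specs"]
    },
    cucumberjs: {
      files: "features"
    }
  });

  grunt.loadNpmTasks("grunt-contrib-jshint");
  grunt.loadNpmTasks("grunt-jasmine-node");
  grunt.loadNpmTasks("grunt-cucumber");

  // `grunt` on the command line now runs all three in order.
  grunt.registerTask("default", ["jshint", "jasmine_node", "cucumberjs"]);
};
```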

WAT?
That is a lot of frameworks and libraries to learn, but I've found them to be very complementary, and so far they have been effective for my needs. It can't hurt to have a great IDE to enjoy coding in JavaScript either. My choice is IntelliJ IDEA, or WebStorm (can be loaded as a plugin to IDEA). Another thing to keep in mind is that you won't enjoy JavaScript if you're trying to force patterns and practices you're used to from other languages like ActionScript 3 or Java. Within all of JavaScript's ugliness it is entirely possible to write clean, trustworthy, maintainable and elegant JavaScript code. You just have to find the patterns and designs that make you the most productive. Learning the language's design and its capabilities will allow you to create new workflows and patterns that you might end up liking! So far I am very happy with the toolset that I've chosen, but I'd love to hear about your choices as well! There's always something to learn somewhere…


SUGGESTED READING
• Test Driven JavaScript Development
• 20 JavaScript Frameworks Worth Checking Out
• Ten Reasons You Should Be Using a CSS Preprocessor
• Cucumber and JS: Getting Started with Cucumber.js
• JavaScript The Right Way
• AngularJS Tutorial
• Meet Grunt: The Build Tool for JavaScript
• Testing Your JavaScript with Jasmine
• Testing with Node, Jasmine and Require.js
• Specification by Example

ABOUT THIS ARTICLE
Omar Gonzalez is a Senior Software Architect at Almer/Blank and has done web development for 10+ years. An avid BDD developer, he has developed apps for companies like eHarmony and Ford, and has spoken at conferences like FITC. Omar is an active contributor to the open source community and is a member of the Apache Flex PPMC. He has developed web, desktop and mobile apps using Flash technology via AIR, as well as apps based on web standards. Omar co-authored a book on advanced Flex development for Friends of ED.

ONLINE RESOURCES
Omar's Github https://github.com/s9tpepper
Require.js http://requirejs.org/
Node.js http://nodejs.org/

http://omar.gy/

@s9tpepper

appliness

DON’T WORRY, BE APPLI

CATEGORIZING VALUES IN JAVASCRIPT

IN THIS ARTICLE, DR. AXEL RAUSCHMAYER TAKES A DEEPER LOOK AT FOUR WAYS IN WHICH JAVASCRIPT VALUES CAN BE CATEGORIZED.

INTRODUCTION
In this article, we examine what kinds of values JavaScript has and how one can categorize them. That will help you better understand how the language works. It will also help you with advanced programming tasks, such as writing a library, where you often have to deal with all kinds of values being passed to you. With the knowledge gained here, you will be able to avoid bugs caused by subtle differences between values. We’ll look at four ways in which values can be categorized: via the hidden property [[Class]], via the typeof operator, via the instanceof operator and via the function Array.isArray(). We’ll also explore the prototype objects of built-in constructors, which produce unexpected categorization results.

Playground
Difficulty: rookie - intermediate - expert
Topics: Arrays, Classes, Operators
Todo list: construct, classify, categorize

by Dr. Axel Rauschmayer

REQUIRED KNOWLEDGE
Before we can get started with the actual topic, we have to review some required knowledge.

PRIMITIVES VERSUS OBJECTS
All values in JavaScript are either primitives or objects.

Primitives. The following values are primitive:
• undefined
• null
• Booleans
• Numbers
• Strings

Primitives are immutable; you can't add properties to them:

> var str = "abc";
> str.foo = 123;
123
> str.foo
undefined

Also, primitives are compared by value, meaning they are considered equal if they have the same content:

> "abc" === "abc"
true

Objects. All non-primitive values are objects. Objects are mutable:

> var obj = {};
> obj.foo = 123; // try to add property "foo"
123
> obj.foo // property "foo" has been added
123

Objects are compared by reference. Each object has its own identity and, because of this, two objects are only considered equal if they are, in fact, the same object:

> {} === {}
false
> var obj = {};
> obj === obj
true


WRAPPER OBJECT TYPES
The primitive types boolean, number and string have the corresponding wrapper object types Boolean, Number and String. Instances of the latter types are objects and different from the primitives that they are wrapping:

> typeof new String("abc")
'object'
> typeof "abc"
'string'
> new String("abc") === "abc"
false

Wrapper object types are rarely used directly, but their prototype objects define the methods of primitives. For example, String.prototype is the prototype object of the wrapper type String. All of its methods are also available for strings. Take the wrapper method String.prototype.indexOf. Primitive strings have the same method: not a different method with the same name, but literally the same method:

> String.prototype.indexOf === "".indexOf
true

INTERNAL PROPERTIES
Internal properties are properties that cannot be directly accessed from JavaScript, but influence how it works. The names of internal properties start with an uppercase letter and are written in double square braces. As an example, [[Extensible]] is an internal property that holds a boolean flag determining whether or not properties can be added to an object. Its value can only be manipulated indirectly: Object.isExtensible() reads it, while Object.preventExtensions() sets it to false. Once it is false, there is no way to change it back to true.

TERMINOLOGY: PROTOTYPES VERSUS PROTOTYPE OBJECTS
In JavaScript, the term prototype is unfortunately a bit overloaded:

1. On one hand, there is the prototype-of relationship between objects. Each object has a hidden property [[Prototype]] that either points to its prototype or is null. The prototype is a continuation of the object: if a property of an object is accessed and it can't be found in the latter, the search continues in the former. Multiple objects can have the same prototype.
2. On the other hand, if, for example, a type is implemented by a constructor Foo, then that constructor has a property Foo.prototype that holds the type's prototype object.

To make the distinction clear we call (1) "prototypes" and (2) "prototype objects". Three methods help in dealing with prototypes:

• Object.getPrototypeOf(obj) returns the prototype of obj:

> Object.getPrototypeOf({}) === Object.prototype
true

• Object.create(proto) creates an empty object whose prototype is proto:

> Object.create(Object.prototype)
{}

Object.create() can do more, but that is beyond the scope of this article.

• proto.isPrototypeOf(obj) returns true if proto is a prototype of obj (or a prototype of a prototype, etc.):

> Object.prototype.isPrototypeOf({})
true

THE PROPERTY "CONSTRUCTOR"
Given a constructor function Foo, the prototype object Foo.prototype has a property Foo.prototype.constructor that points back to Foo. That property is set up automatically for each function:

> function Foo() { }
> Foo.prototype.constructor === Foo
true
> RegExp.prototype.constructor === RegExp
true

All instances of a constructor inherit that property from the prototype object. Thus, we can use it to determine which constructor created an instance:

> new Foo().constructor
[Function: Foo]
> /abc/.constructor
[Function: RegExp]

CATEGORIZING VALUES
Let's look at four ways of categorizing values:

• [[Class]] is an internal property with a string that classifies an object
• typeof is an operator that categorizes primitives and helps distinguish them from objects
• instanceof is an operator that categorizes objects
• Array.isArray() is a function that determines whether a value is an array

[[CLASS]]
[[Class]] is an internal property whose value is one of the following strings: "Arguments", "Array", "Boolean", "Date", "Error", "Function", "JSON", "Math", "Number", "Object", "RegExp", "String". The only way to access it from JavaScript code is via the default toString() method, which can be invoked generically like this:

Object.prototype.toString.call(value)

Such an invocation returns:

• "[object Undefined]" if value is undefined,
• "[object Null]" if value is null,
• "[object " + value.[[Class]] + "]" if value is an object,
• "[object " + value.[[Class]] + "]" if value is a primitive (it is converted to an object and handled like in the previous rule).

Examples:

> Object.prototype.toString.call(undefined)
'[object Undefined]'
> Object.prototype.toString.call(Math)
'[object Math]'
> Object.prototype.toString.call({})
'[object Object]'

Therefore, the following function can be used to retrieve the [[Class]] of a value x:

function getClass(x) {
    var str = Object.prototype.toString.call(x);
    return /^\[object (.*)\]$/.exec(str)[1];
}

Here is that function in action:

> getClass(null)
'Null'
> getClass({})
'Object'
> getClass([])
'Array'
> getClass(JSON)
'JSON'
> (function () { return getClass(arguments) }())
'Arguments'
> function Foo() {}
> getClass(new Foo())
'Object'


TYPEOF
typeof categorizes primitives and allows us to distinguish between primitives and objects, using the following syntax:

typeof value

It returns one of the following strings, depending on the operand value:

Operand            Result
null               "object"
Boolean value      "boolean"
Number value       "number"
String value       "string"
Function           "function"
All other values   "object"

typeof returning "object" for null is a bug in JavaScript. Unfortunately, it can't be fixed, because that would break existing code. Note that while a function is also an object, typeof makes a distinction. Arrays, on the other hand, are considered plain objects.

INSTANCEOF
instanceof checks whether a value is an instance of a type, with the syntax:

value instanceof Type

The operator looks at Type.prototype and checks whether it is in the prototype chain of value. That is, if we were to implement instanceof ourselves, it might look something like this (minus some error checks, such as for Type being null):

function myInstanceof(value, Type) {
    return Type.prototype.isPrototypeOf(value);
}

instanceof always returns false for primitive values:

> "" instanceof String
false
> "" instanceof Object
false
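The quirks above can be checked directly in Node; myInstanceof restates the sketch from the text:

```javascript
// typeof quirks, plus a hand-rolled instanceof built on isPrototypeOf.
function myInstanceof(value, Type) {
  return Type.prototype.isPrototypeOf(value);
}

console.log(typeof null);           // "object" (the well-known bug)
console.log(typeof function () {}); // "function"
console.log(typeof []);             // "object": typeof does not single out arrays

console.log(myInstanceof([], Array));  // true
console.log(myInstanceof("", String)); // false: primitives are never instances
```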


ARRAY.ISARRAY()
Array.isArray() exists because of one particular problem in browsers: each frame has its own global environment. Given a frame A and a frame B (where either one can be the document), code in frame A can pass a value to code in frame B. The code in frame B cannot use instanceof Array to check whether the value is an array, because B's Array is different from A's Array. An example:

<html>
<head>
<script>
// test() is called from the iframe
function test(arr) {
    var iframeWin = frames[0];
    console.log(arr instanceof Array); // false
    console.log(arr instanceof iframeWin.Array); // true
    console.log(Array.isArray(arr)); // true
}
</script>
</head>
<body>
<iframe></iframe>
<script>
// Fill the iframe
var iframeWin = frames[0];
iframeWin.document.write(
    '<script>window.parent.test([])</'+'script>');
</script>
</body>
</html>

Therefore, ECMAScript 5 introduced Array.isArray(), which uses [[Class]] to determine whether a value is an array. Nonetheless, the problem described above in our frames example exists for all types when used in conjunction with instanceof.

BUILT-IN PROTOTYPE OBJECTS
The prototype objects of built-in types are strange values: they behave like instances of the types, but when examined via instanceof, they are not instances. A few other categorization results for prototype objects are also unexpected. By trying to understand what happens, we can deepen our understanding of categorization.

OBJECT.PROTOTYPE
Object.prototype is an empty object: it is printed as one and does not have any enumerable own properties.

> Object.prototype
{}
> Object.keys(Object.prototype)
[]


Unexpected. Object.prototype is an object, but it is not an instance of Object. On one hand, both typeof and [[Class]] recognize it as an object:

> getClass(Object.prototype)
'Object'
> typeof Object.prototype
'object'

On the other hand, instanceof does not consider it an instance of Object:

> Object.prototype instanceof Object
false

In order for the above result to be true, Object.prototype would have to be in its own prototype chain, causing a cycle starting and ending with Object.prototype. The prototype chain would not be linear anymore, which is not something you want for a data structure that has to be easy to traverse. Therefore, Object.prototype does not have a prototype; it is the only built-in object that doesn't have one.

> Object.getPrototypeOf(Object.prototype)
null

This kind of paradox holds true for all built-in prototype objects: they are considered instances of their type by all mechanisms except instanceof.

Expected. [[Class]], typeof and instanceof agree on most other objects:

> getClass({})
'Object'
> typeof {}
'object'
> {} instanceof Object
true

FUNCTION.PROTOTYPE
Function.prototype is itself a function. It accepts any arguments and returns undefined:

> Function.prototype("a", "b", 1, 2)
undefined

Unexpected. Function.prototype is a function, but not an instance of Function. On one hand, typeof, which checks whether an internal [[Call]] method is present, says that Function.prototype is a function:

> typeof Function.prototype
'function'

The [[Class]] property says the same:

> getClass(Function.prototype)
'Function'

On the other hand, instanceof says that Function.prototype is not an instance of Function:

> Function.prototype instanceof Function
false

That's because it doesn't have Function.prototype in its prototype chain. Instead, its prototype is Object.prototype:

> Object.getPrototypeOf(Function.prototype) === Object.prototype
true

Expected. With all other functions, there are no surprises:

> typeof function () {}
'function'
> getClass(function () {})
'Function'
> function () {} instanceof Function
true

Function is also always considered a function in every case:

> typeof Function
'function'
> getClass(Function)
'Function'
> Function instanceof Function
true

ARRAY.PROTOTYPE
Array.prototype is an empty array: it is displayed that way and has a length of 0.

> Array.prototype
[]
> Array.prototype.length
0

[[Class]] also considers it an array:

> getClass(Array.prototype)
'Array'

So does Array.isArray(), which is based on [[Class]]:

> Array.isArray(Array.prototype)
true

Naturally, instanceof doesn't:

> Array.prototype instanceof Array
false

So as not to be redundant, we won't mention prototype objects not being instances of their type again for the remainder of this section.


REGEXP.PROTOTYPE
RegExp.prototype is a regular expression that matches everything:

> RegExp.prototype.test("abc")
true
> RegExp.prototype.test("")
true

RegExp.prototype is also accepted by String.prototype.match, which checks whether its argument is a regular expression via [[Class]]. That check is positive for both regular expressions and the prototype object:

> getClass(/abc/)
'RegExp'
> getClass(RegExp.prototype)
'RegExp'

Excursion: the empty regular expression. RegExp.prototype is equivalent to the "empty regular expression". That expression is created in either one of two ways:

new RegExp("") // constructor
/(?:)/         // literal

You should only use the RegExp constructor if you are dynamically assembling a regular expression. Alas, expressing the empty regular expression via a literal is complicated by the fact that you can't use //, which would start a comment. The empty non-capturing group (?:) behaves the same as the empty regular expression: it matches everything and does not create captures in a match.

> new RegExp("").exec("abc")
[ '', index: 0, input: 'abc' ]
> /(?:)/.exec("abc")
[ '', index: 0, input: 'abc' ]

An empty group, in contrast, produces not only the complete match at index 0, but also the capture of that (first) group at index 1:

> /()/.exec("abc")
[ '', // index 0
  '', // index 1
  index: 0, input: 'abc' ]

Interestingly, both an empty regular expression created via the constructor and RegExp.prototype are displayed as the empty literal:

> new RegExp("")
/(?:)/
> RegExp.prototype
/(?:)/


DATE.PROTOTYPE
Date.prototype is also a date:

> getClass(new Date())
'Date'
> getClass(Date.prototype)
'Date'

Dates wrap numbers. Quoting the ECMAScript 5.1 specification:

"A Date object contains a Number indicating a particular instant in time to within a millisecond. Such a Number is called a time value. A time value may also be NaN, indicating that the Date object does not represent a specific instant of time. Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC."

Two common ways of accessing the time value are calling valueOf() or coercing a date to number:

> var d = new Date(); // now
> d.valueOf()
1347035199049
> Number(d)
1347035199049

The time value of Date.prototype is NaN:

> Date.prototype.valueOf()
NaN
> Number(Date.prototype)
NaN

Date.prototype is displayed as an invalid date, the same as dates that have been created via NaN:

> Date.prototype
Invalid Date
> new Date(NaN)
Invalid Date

NUMBER.PROTOTYPE
Number.prototype is roughly the same as new Number(0):

> Number.prototype.valueOf()
0

The conversion to number returns the wrapped primitive value:

> +Number.prototype
0


Compare that to:

> +new Number(0)
0

STRING.PROTOTYPE
Similarly, String.prototype is roughly the same as new String(""):

> String.prototype.valueOf()
''

The conversion to string returns the wrapped primitive value:

> "" + String.prototype
''

Compare that to:

> "" + new String("")
''

BOOLEAN.PROTOTYPE
Boolean.prototype is roughly the same as new Boolean(false):

> Boolean.prototype.valueOf()
false

Boolean objects can be coerced to primitive boolean values, but the result of that coercion is always true, because converting any object to boolean always yields true.

> !!Boolean.prototype
true
> !!new Boolean(false)
true
> !!new Boolean(true)
true

That is different from how objects are converted to numbers or strings. If an object wraps one of these primitives, the result of the conversion is the wrapped primitive.

RECOMMENDATIONS
We'll conclude this post with recommendations for categorizing values.

TREATING PROTOTYPE OBJECTS AS PRIMAL MEMBERS OF THEIR TYPES
Is a prototype object always a primal member of its type? No, that only holds true for the built-in types. In general, that behavior of prototype objects is merely a curiosity; it is better to think of them as analogs to classes: they contain properties that are shared by all instances (usually methods).

WHICH CATEGORIZATION MECHANISMS TO USE
When deciding on how to best use JavaScript's quirky categorization mechanisms, you have to distinguish between normal code and code that might encounter values from other frames.

Normal code. For normal code, use typeof and instanceof and forget about [[Class]] and Array.isArray(). You have to be aware of typeof's quirks: that null is considered an "object" and that there are two non-primitive categories, "object" and "function". For example, a function for determining whether a value is an object could be implemented as follows:

function isObject(v) {
    return (typeof v === "object" && v !== null)
        || typeof v === "function";
}

Trying this method out:

> isObject({})
true
> isObject([])
true
> isObject("")
false
> isObject(undefined)
false

Code that works with values from other frames. If you expect to receive values from other frames, then instanceof is not reliable. You have to consider [[Class]] and Array.isArray(). An alternative is to work with the name of an object's constructor, but that is a brittle solution, because not all objects record their constructor, not all constructors have a name and there is the risk of name clashes. The following function shows how to retrieve the name of the constructor of an object:

function getConstructorName(obj) {
    if (obj.constructor && obj.constructor.name) {
        return obj.constructor.name;
    } else {
        return "";
    }
}

Another thing worth pointing out is that the name property of functions (such as obj.constructor) is non-standard and, for example, not supported by Internet Explorer. Trying it out:

> getConstructorName({})
'Object'
> getConstructorName([])
'Array'
> getConstructorName(/abc/)
'RegExp'
> function Foo() {}
> getConstructorName(new Foo())
'Foo'


If you apply getConstructorName() to a primitive value, you get the name of the associated wrapper type:

> getConstructorName("")
'String'

That's because the primitive value gets the property constructor from the wrapper type:

> "".constructor === String.prototype.constructor
true

ABOUT THIS ARTICLE
Dr. Axel Rauschmayer is a consultant and trainer for JavaScript, web technologies and information management. He has been programming since 1985, developing web applications since 1995 and held his first talk on Ajax in 2006. In 1999, he was technical manager at an internet startup that later expanded internationally.

ONLINE RESOURCES
ECMA International http://ecma-international.org/ecma-262/5.1/
Prototypes as Classes http://www.2ality.com/2011/06/prototypes-as-classes.html
Mozilla Developer Network https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Object/create

http://www.2ality.com/

@rauschma

appliness

DON’T WORRY, BE APPLI

EXAMPLE OF A PARSE.COM JAVASCRIPT APPLICATION WITH OFFLINE SUPPORT

RAY EXPLAINS HOW TO ENABLE OFFLINE STORAGE OF DATA COMING FROM THE CLOUD SERVICE PARSE.COM

THE CHALLENGE
This morning I got a seemingly innocent question from a reader:
Came across your blog post on Parse + PhoneGap and wanted to get your opinion on the following use-case for that combo...

I’ve been exploring the possibilities of an app that essentially has a web form (similar to the contact form you’ve got right here, actually) that would store the resulting data via Parse. The reason being...it would be important that the app would allow a form to submit, even if there wasn’t an active Internet connection available.

So, just wanted your thoughts on whether I am looking in the right direction to accomplish this. Don’t have much experience in the way of iOS apps, but have to start somewhere, right?

I replied to him with a basic outline:
- When you submit the form, hit Parse if you are online, hit WebSQL if not.

Playground
Difficulty: rookie - intermediate - expert
Topics: jQuery, Parse.com, HTML5
Todo list: get data, cache data, display

by Raymond Camden

- When the application starts, see if you have data in WebSQL, and if you are online, push it to Parse.

That seemed simple enough, but I figured I might as well build a real demo just to prove it can be done. This is what I came up with. It's got some issues (don't we all?) but it covers the basics. As always, though, I'm open to suggestions for how this could be done better. I began by creating the layout for an application. Since the reader just mentioned a form, I built the entire application around one form. I decided to build a simple UFO Report Form. It has a field for the number of UFOs, your name, and the description. I didn't make use of any UI framework but instead directed my incredible design skills at the task.

Here's the HTML behind the form, just in case you're curious:

<h2>Sighting Reporter</h2>
<form id="sightForm">
Number of UFOs: <input type="number" id="numufos"><br/>
Your Name: <input type="text" id="yourname"><br/>
Description: <textarea id="description"></textarea><br/>
<input type="submit" value="Send Sighting">
</form>

Fancy, eh? Ok, now it's time to get into the code. I'm going to tackle this piece by piece, and it may get a bit confusing, but I'll post the entire file in one chunk at the end for your perusal. Whether or not we are online, we need to set up the database. This is done via the WebSQL API. While this API is deprecated, it is fully supported in PhoneGap and works great in Chrome, the main browser I use for testing.


$(document).ready(function() { //initialize db db = window.openDatabase(“sightingdb”, “1.0”, “Sighting Database”, 200000); db.transaction(createDB, errorCB, initApp); function createDB(trans) { trans.executeSql(“create table if not exists sighting(id INTEGER PRIMARY KEY,numufos,yourname,description)”); } }); I’m not going to detail how this works as I’ve covered it before (Example of PhoneGap’s Database Support), but even if this is brand new to you I think you can get the idea. After the database is set up, our application needs to upload any existing data to Parse. We’re going to skip that now though and look at the basic form handling aspects of the code. I wrote a function to wrap my check for online/offline support. Why? I wrote this demo without actually building it as a PhoneGap application. It should work fine when converting into a mobile application, and at that point my wrapper function can be modified to use PhoneGap’s API, but for my initial testing I just wanted to use the navigator.onLine property. Having a wrapper also let me easily add in a hack (see the commented out line) to test being offline. function online() { //return false; return navigator.onLine; } If we are online, I need to initialize Parse support. I won’t repeat what is already covered in the Parse JavaScript Guide. Instead, this is just an example of how I initialize Parse.com with my API keys and define an object type I’m calling SightingObject (as in UFO sighting). Parse.initialize(“8Y0x2rCA0jKYdiC7wLKQuF9nQqKGFKdpqUHMfue3”, “8m7ng0w9UirTV6k4ExsJ0WsmPGeZMsJd5hcu54Oq”); SightingObject = Parse.Object.extend(“SightingObject”); Now let’s look at the form handler. Remember, this needs to either save to Parse or to the database. 
$(“#sightForm”).on(“submit”, function(e) { e.preventDefault(); /* gather the values - normally we’d do a bit of validation, but since UFO chasers are known for their rigorous and rational pursuit of science, this will not be necessary */ var report = {}; report.numufos = $(“#numufos”).val(); report.yourname = $(“#yourname”).val(); report.description = $(“#description”).val(); console.log(“To report: “,report); //ok, disable the form while submitting and show a loading gfx
3/6

$(this).attr(“disabled”,”disabled”); $(“#loadingGraphic”).show(); if(online()) { console.log(“I’m online, send to parse”); saveToParse(report,resetForm); } else { console.log(“I’m offline, save to WebSQL”); db.transaction(function(trans) { trans.executeSql(“insert into sighting(numufos,yourname,description) values(?,?,?)”, [report.numufos, report.yourname, report.description]); }, errorCB, resetForm); } }); This code block is a bit large, so let’s break it down. The first thing I do is grab the values from the form. As mentioned in the comments, it would probably make sense to do some basic validation. Screw validation - this is a demo. Next I do some basic UI stuff to let the user know that exciting things are happening in the background (although in theory, not as exciting as the UFO in front of them). Then we have the online/offline block. I’ve taken the Parse logic out into another function that I’ll show in a moment. The other part of the conditional simply writes it out to the database. In both cases we run a function, resetForm, that handles resetting my UI. Here is saveToParse. Notice how darn easy this is. Just in case it isn’t obvious - this is all the code I need to store my data, permanently, in the cloud. It would only be easier if the Parse.com engineers fed me grapes and lime jello shots while I wrote the code. function saveToParse(ob,successCB) { var sightingObject = new SightingObject(); sightingObject.save(ob, { success: function(object) { console.log(“Saved to parse.”); console.dir(object); successCB(object); }, error: function(model, error) { console.log(“Error!”); console.dir(error); } }); } Before we get into the synchronization aspect, here is resetForm. Again, it just handles updating the UI and letting the user know something happened with their important data. 
//handles removing the disabled form stuff and loading gfx
function resetForm() {
    $("#numufos").val("");
    $("#yourname").val("");
    $("#description").val("");
    $("#sightForm").removeAttr("disabled");
    $("#loadingGraphic").hide();
    var status = $("#status");
    if (online()) {
        status.fadeIn().html("Your sighting has been saved!").fadeOut(4000);
    } else {
        status.fadeIn().html("Your sighting has been saved locally and will be uploaded next time you are online!").fadeOut(4000);
    }
}

I did some quick testing and confirmed it was working. I used Parse.com's online data browser first:

I then tested offline storage. Chrome makes it easy to check since it has a database viewer built in:

That's almost it. The final piece of the puzzle is handling uploading the database data. This turned out to be simple too. If we are online, we run a SQL query against the table. If anything exists, we upload it and remove it.

//do we have existing objects in the db we can upload?
db.transaction(function(trans) {
    trans.executeSql("select * from sighting", [], function(trans, result) {
        //do we have rows?
        if (result.rows.length > 0) {
            console.log("Ok, we need to push stuff up");
            for (var i = 0, len = result.rows.length; i < len; i++) {
                var row = result.rows.item(i);
                (function(row) {
                    //Parse will try to save everything, including ID, so make a quick new ob
                    var report = {};
                    report.numufos = row.numufos;
                    report.yourname = row.yourname;
                    report.description = row.description;
                    saveToParse(report, function() {
                        console.log("i need to delete row " + row.id);
                        db.transaction(function(trans) {
                            trans.executeSql("delete from sighting where id = ?", [row.id]);
                        }, errorCB);
                    });
                }(row));
            }
        }
    }, errorCB);
}, errorCB, function() {
    console.log("Done uploading the old rows");
});

That's basically it. The biggest issue with this code is that it doesn't handle a change to your online/offline status: specifically, if you start the application offline, save some sightings, and then come back online, it won't upload the old rows. That wouldn't be too hard to fix, but I was trying to keep it simple. At minimum, the next time you run the application it will upload those old records. For folks who want to see the entire code base, simply view the gist here:

https://gist.github.com/3723074
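One helper the code relies on throughout but this excerpt doesn't show is online(). A minimal sketch of what it might look like, assuming the standard navigator.onLine flag (the optional parameter is only there so the function can be exercised outside a browser - the real implementation is in the gist):

```javascript
// Minimal sketch of the online() helper used above (assumed implementation).
// In a page it would simply consult navigator.onLine; accepting a
// navigator-like object as an optional parameter keeps it testable
// outside the browser.
function online(nav) {
    nav = nav || (typeof navigator !== "undefined" ? navigator : { onLine: false });
    return !!nav.onLine;
}
```

A PhoneGap app could also listen for Cordova's "online"/"offline" document events to trigger the upload as soon as connectivity returns - the fix the article mentions above.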

ILYA GRIGORIK EXCLUSIVE “MAKE THE WEB FAST”
WEB FONTS PERFORMANCE: MAKING PRETTY, FAST
by Ilya Grigorik

LATENCY: THE NEW WEB PERFORMANCE BOTTLENECK
by Ilya Grigorik

CHROME NETWORKING: DNS PREFETCH & TCP PRECONNECT
by Ilya Grigorik

ILYA GRIGORIK: EXCLUSIVE INTERVIEW
Exclusive interview by Maile Valentine & Michaël Chaize

appliness

MAKE THE WEB FAST

WEB FONTS PERFORMANCE: MAKING PRETTY, FAST

ILYA GRIGORIK EXPLORES THE IMPACT THAT WEB FONTS HAVE HAD ON WEB PERFORMANCE AND DISCUSSES THE FUTURE DIRECTION OF WEB FONTS AND THEIR EFFECT ON THE WORLD WIDE WEB.

The use of web fonts is surging. Over the last year alone, web font usage has doubled from ~6% to over 12% of sites, according to the HTTP Archive. In the same period, Google Web Fonts has seen a 10x increase in requests, recently crossing 1B font views per day across 100M+ web pages. And there are no signs of adoption slowing down. Historically, web fonts have not had a great performance story, but this is definitely changing fast: better compression formats, improved browser handling, unicode and character subsetting, and the list goes on. Not to mention the many accessibility, indexing, translation, file size, zoom, and high-DPI-device benefits of rendering text as, well, text! To discuss this, and more, we sat down with David Kuettel from the Google Web Fonts team for an in-depth look at web fonts.


by Ilya Grigorik

OPTIMIZING WEB FONTS
Serving a web font is deceptively simple: download the file, put it on the local server, and we're done? It turns out to be much more interesting than that. First, there are four different formats (woff, ttf, eot, svg), and no single one is universally supported. To provide a consistent experience across all platforms we have to serve multiple formats. In the long run, the goal is to have a single, well-supported format, but in the meantime we have to support all the legacy browsers. Next, the font file itself can be massive: Arial Unicode, which supports nearly all languages, weighs in at over 22MB! Of course, an average page does not need the entire unicode character set, hence we need a mechanism to restrict the font to a character subset (e.g. latin or cyrillic only). Open Sans, one of the most popular Google web fonts, supports 20+ languages and comes in at 217kB total, but only 36kB when restricted to the latin subset. Sidenote: the average font served by Google Web Fonts today is ~35kB.


<!-- Serve Open Sans font family, but only the latin character set -->
<link href="http://fonts.googleapis.com/css?family=Open+Sans&subset=latin" rel="stylesheet" />

Next, the font file size can be further reduced by eliminating font hinting metadata for platforms that do not support it. With that out of the way, we can apply optimized compression algorithms for a ~15% win over simple gzip compression. With WOFF 2.0 in the pipeline, we should see another 30%+ compression improvement in the foreseeable future. Put all of these optimizations together, and it translates to 30+ static variants of each web font, custom tailored to each platform and user agent. So much for serving one file! Not to mention the dynamic optimizations, such as character subsetting, which allows you to specify the individual characters to meet your exact needs for a small headline or a similar use case.

<!-- Serve Inconsolata font family, but only provide "H", "e", "l", "o" characters -->
<link href="http://fonts.googleapis.com/css?family=Inconsolata&text=Hello" rel="stylesheet" />
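The query parameters in the two link tags above compose. A small helper - hypothetical, not part of the Google Fonts API itself - makes the URL pattern explicit:

```javascript
// Hypothetical helper that builds a Google Web Fonts stylesheet URL from a
// family name plus optional subset= and text= parameters, mirroring the
// <link> examples above. encodeURIComponent handles punctuation; the API
// expects "+" for spaces in family names.
function fontCssUrl(family, opts) {
    opts = opts || {};
    var url = "http://fonts.googleapis.com/css?family=" +
        encodeURIComponent(family).replace(/%20/g, "+");
    if (opts.subset) url += "&subset=" + encodeURIComponent(opts.subset);
    if (opts.text) url += "&text=" + encodeURIComponent(opts.text);
    return url;
}

console.log(fontCssUrl("Open Sans", { subset: "latin" }));
// http://fonts.googleapis.com/css?family=Open+Sans&subset=latin
```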

SERVING WEB FONTS
All Google web fonts are free and open source, which enables very effective cross-site caching: the Open Sans served on this site is the same Open Sans served across 1M+ other domains using the font. In fact, the top 40 Google web fonts are shared by 100K+ domains, and the top 300 by 10K+. In other words, using a popular web font will likely translate to a browser cache hit for the font, even if this is your first time visiting my site (which uses Open Sans). The wider the adoption, the higher the likelihood of a cache hit, the better the performance! Let's dissect a simple example of how a Google web font is served:

The CSS stylesheet provided by Google Web Fonts is a dynamic stylesheet which specifies the optimal file format for the visitor's platform and browser - as determined by the combination of the optimizations we discussed earlier. This stylesheet is cached for 24 hours. Inside the stylesheet is a URL reference to the web font resource itself. Why not inline the web font? Well, given the growing popularity of web fonts, the bet is that you already have a copy in your cache - no request is better than any request. Instead of storing N copies of Open Sans, one for each site, the browser maintains a single copy across all sites. Leveraging the 24-hour cache on the CSS, plus the one-year cache on the web font itself, allows quick and easy rollout of updates while optimizing for a fast browsing experience - the vast majority of page renders will require zero requests for the fonts. The combination of a global CDN, optimized file formats, and a shared global cache pays high dividends when it comes to performance.

WEB FONTS ARE HERE TO STAY
Web fonts are here to stay, and that is a good thing - yes, even for performance. Better accessibility, zoom and high-DPI friendliness, optimized compression, and improved handling from all browser vendors are all working in favor of growing adoption. Not to mention the ability to combine web fonts with CSS3 for some spectacular visuals. Having said that, serving web fonts definitely has its gotchas. If you are going the DIY route, then make sure you support all of the latest formats, optimize your fonts, serve the appropriate versions, and keep up with the latest developments - lots of things to get right! For the rest of us, a web font provider like Typekit or Google Web Fonts is a much better bet.

ABOUT THIS ARTICLE
Ilya Grigorik is a web performance engineer and developer advocate on the Make The Web Fast team at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices. When not thinking about web performance or analytics, Ilya contributes to open-source projects, reads, or builds fun projects like VimGolf, GitHub Archive, and others.

ONLINE RESOURCES
Make the Web Faster - Google Developers
https://developers.google.com/speed/
Let's Make the Web Faster - Official blog
http://googleblog.blogspot.com/2009/06/lets-make-web-faster.html
Web Performance Best Practices
https://developers.google.com/speed/docs/best-practices/rules_intro

http://www.igvita.com/

@ igrigorik

appliness

MAKE THE WEB FAST

LATENCY: THE NEW WEB PERFORMANCE BOTTLENECK
ILYA GRIGORIK EXPLORES THE IMPORTANCE OF LATENCY AND ITS IMPACT ON WEB PERFORMANCE.

If we want a faster browsing experience, then reducing the round trip time (RTT) should be near the top of our list. Or, as Mike Belshe put it: more bandwidth doesn't matter (much). Now, let's be clear: higher bandwidth is never a bad thing, especially for use cases that require bulk transfer of data - video streaming, large downloads, and so on. However, when it comes to your web browsing experience, it turns out that latency, not bandwidth, is likely the constraining factor today. As a consumer, did you consider this when you picked your ISP? Likely not - I have yet to see any provider mention, let alone advertise, latency.


by Ilya Grigorik

BANDWIDTH VS. LATENCY
Akamai's State of the Internet stats show that as of 2011 the average US consumer is accessing the web on a 5Mbps+ pipe. In fact, with the fast growth of broadband worldwide, many other countries are hovering around the same number, or quickly catching up. As it happens, 5Mbps is an important threshold.

The two graphs above show the results of varying bandwidth and latency on the page load time (PLT). Upgrading your connection from 1Mbps to 2Mbps halves the PLT, but quickly thereafter we are into diminishing returns. In fact, upgrading from 5Mbps to 10Mbps results in a mere 5% improvement in page loading times! In other words, an average consumer in the US would not benefit much from upgrading their connection when it comes to browsing the web. However, the latency graph tells an entirely different story. For every 20ms improvement in latency, we have a linear improvement in page loading times. There are many good reasons for this: an average page is composed of many small resources, which require many connections, and TCP performance of each is closely tied to RTT.

DEVELOPING AN INTUITION FOR LATENCY
So what is 20ms of latency? Most of us have a pretty good mental model for bandwidth as we are used to thinking in megabytes and file sizes. For latency, distance travelled by light is our best proxy (click here to see Ilya’s original blog post):


Latency is constrained by the speed of light. Hence, 20ms RTT is equivalent to ~3000km, or a 1860 mile radius for light traveling in vacuum. We can’t do any better without changing the laws of physics. Our data packets travel through fiber, which slows us down by a factor of ~1.52, which translates to 2027 km, or a 1260 mile radius. What is remarkable is that we are already within a small constant factor of the theoretical limit. The above map models a simple, but an unrealistic scenario: you are at the center of the circle, what is the maximum one-way distance that the packet can travel for an X ms RTT? Unrealistic because the “Fiber RTT” assumes we have a direct fiber link between the center and the edge, but this nonetheless gives us a good tool to sharpen our intuition. For example, sending a packet from San Francisco to NYC carries a minimum 40ms RTT. Grab the center of the circle, move it around, and you’ll notice something very important: the Mercator projection we are all so used to seeing on our maps introduces a lot of distortion. The same 20ms at the equator covers a much larger “surface area” when moved either North or South.
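The radius arithmetic above is easy to reproduce. A quick sketch of the calculation (the only inputs are the speed of light and the ~1.52 fiber slowdown factor cited in the text; the rounding differs slightly from the article's 2,027 km figure):

```javascript
// Reproduce the latency-radius arithmetic from the text: given a round-trip
// time, how far can a packet travel one way, in vacuum and in fiber?
var SPEED_OF_LIGHT_KM_S = 299792; // km per second, in vacuum
var FIBER_SLOWDOWN = 1.52;        // refractive-index penalty cited above

function maxOneWayKm(rttMs, inFiber) {
    var oneWaySeconds = (rttMs / 2) / 1000;  // the packet gets half the RTT each way
    var speed = SPEED_OF_LIGHT_KM_S / (inFiber ? FIBER_SLOWDOWN : 1);
    return speed * oneWaySeconds;
}

// 20ms RTT: ~3000 km radius in vacuum, ~2000 km in fiber
console.log(Math.round(maxOneWayKm(20, false))); // 2998
console.log(Math.round(maxOneWayKm(20, true)));  // 1972
```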

IMPROVING BANDWIDTH & LATENCY
Bandwidth demand is growing fast, but the good news is that we still have plenty of capacity within current fiber (~20% of lit capacity used), and even more importantly, the capacity can be improved by upgrades to existing submarine cables: capacity of trans-Pacific links tripled between 2007 and 2011, most of it through upgrades. Latency on the other hand affords no such “easy” wins. Yes, the equipment can be improved to shave off a few milliseconds, but if you want significant improvements, then the answer is simple: you need new, shorter cables to reduce the propagation delay. As you may have guessed, this is an expensive proposition:

“Huawei is working with another company, Hibernia Atlantic, to lay the first transatlantic fiber-optic submarine cable in a decade, a $400-million-plus project that will save traders five milliseconds. To do this, Hibernia is laying nearly 3,000 miles of cable across the Grand Banks off Canada and the North Atlantic, a shorter route that most companies have avoided because it traverses relatively shallow waters.”

That's $80M+ per millisecond saved; latency is expensive - literally and figuratively. Even more impressive, the theoretical limit between New York and London is 37.2ms, which means this new cable (60ms RTT) carries only ~38% overhead for switching and amplification.


“LAST MILE” LATENCY & YOUR ISP
40ms between NYC and London is great in theory, but in practice our ping times are much higher. When Mike Belshe published his study in 2010, average worldwide RTT to Google was ~100ms, and ~60-70ms within the US. In 2012, the average worldwide RTT to Google is still ~100ms, and ~50-60ms within the US. That’s a positive trend within the US, but there is still a lot of room for improvement. Turns out, the “last mile” connection between your house and your ISP is often a significant bottleneck. According to FCC’s recent Measuring Broadband America report, during peak hours:

“Fiber-to-the-home services provided 17 milliseconds (ms) round-trip latency on average, while cable-based services averaged 28 ms, and DSL-based services averaged 44 ms.”

That's 17-44ms of latency just to the closest measuring node within your ISP, before your packet hits any internet backbone. You can run a simple traceroute to test your own ISP; my first hop (<60 miles) to Comcast fluctuates in the 15-45ms range (ouch):

$> traceroute google.com
traceroute to google.com (74.125.224.102), 64 hops max, 52 byte packets
 1  10.1.10.1 (10.1.10.1)  2.322 ms  1.084 ms  1.002 ms       # local router
 2  96.157.100.1 (96.157.100.1)  38.253 ms  16.489 ms  24.702 ms  # comcast
 ...

MOBILE WEB LATENCY: 100-1000MS
The mobile web is a whole different game, and not one for the better. If you are lucky, your radio is on, and depending on your network, quality of signal, and time of day, then just traversing your way to the internet backbone can take anywhere from 50 to 200ms+. From there, add backbone time and multiply by two: we are looking at 100-1000ms RTT range on mobile. Here’s some fine print from the Virgin Mobile (owned by Sprint) networking FAQ:

“Users of the Sprint 4G network can expect to experience average speeds of 3Mbps to 6Mbps download and up to 1.5Mbps upload with an average latency of 150ms. On the Sprint 3G network, users can expect to experience average speeds of 600Kbps - 1.4Mbps download and 350Kbps - 500Kbps upload with an average latency of 400ms.”

To add insult to injury, if your phone has been idle and the radio is off, then you have to add another 1000-2000ms to negotiate the radio link. Testing my own Galaxy Nexus, which is running on Sprint, shows an average first hop of 95ms. If latency is important on wired connections, then it is a critical bottleneck for the mobile web.

“HIGH SPEED” INTERNET
If you are tasked with optimizing your site or service, then make sure to investigate latency. Google Analytics Site Speed reports can sample and report the real, user-experienced latency of your visitors. Corollary: as a consumer, check the latency numbers in addition to the size of the pipe. Finally, if we can't make the bits travel faster, then the only way to improve the situation is to move the bits closer: place your servers closer to your users, leverage CDNs, and reuse connections where possible (TCP slow start). And of course, no bit is faster than no bit - send fewer bits. We need more focus, tools, and discussion around the impact of latency, especially on high-latency links such as the mobile web. "High speed" connectivity is not all about bandwidth, despite what many of our ISPs would like to promote.


appliness

MAKE THE WEB FAST

CHROME NETWORKING: DNS PREFETCH & TCP PRECONNECT

ILYA GRIGORIK EXPLORES TODAY’S RICHER CLIENT-SIDE BROWSER APPS AND SOME OF THE INNER WORKINGS OF HOW GOOGLE CHROME IS DESIGNED TO MANAGE RESOURCES AND OPTIMIZE PERFORMANCE.

When you think about browser performance, the JavaScript VM wars are the first thing that comes to mind - arguably rightfully so, since we are building far richer and more ambitious client-side apps in the browser. In fact, according to the HTTP Archive, the average page has nearly doubled its amount of JavaScript code in just the past year (up to 194kB). However, during that same time the size of an average page has grown to 1059kB (over 1MB!) and is now composed of over 80 subresource requests - let that sink in for a minute. Fetching all of this content is anything but free, and as it turns out, the networking stack of a browser like Chrome is itself an increasingly important component to understand when it comes to optimizing web performance.


by Ilya Grigorik

CHROME & MULTI-PROCESS RESOURCE LOADING
Each browser tab in Chrome is its own isolated process, which gives us strong isolation and many security benefits. However, all network communication is handled by the main browser process. Whenever a tab needs to fetch a remote resource, it sends an IPC request to the host process and waits for a response. This may seem counter-productive at first, but there are many good reasons for this architecture: the browser is able to control the network activity of each tab (security), it can limit the number of connections per host as well as provide connection pooling and re-use, and it can maintain a consistent HTTP cache and session state (cookies and other cached data). Managing all of these interactions is a non-trivial task to begin with, but what is even more interesting is the level of optimization that goes into this layer to hide networking latency: given a remote URL, we need to resolve it (DNS), perform the TCP handshake, and only then can we send the request. An average DNS lookup takes 60-120ms, followed by a full round trip (RTT) to perform the TCP handshake - combined, that creates 100-200ms of latency before we can even send the request! So, what could the browser do to help us offset this cost?

DNS PREFETCHING & TCP PRECONNECT
At the core of the Chrome networking stack is a single Predictor object (predictor.h), whose sole responsibility is to anticipate user behavior, as well as the resource requests each tab may need in the near future. Put differently, Chrome learns the network topology as you use it. If it does its job right, then it can speculatively pre-resolve hostnames (DNS prefetching), as well as open connections (TCP preconnect) ahead of time. To do so, the Predictor needs to optimize against a large number of constraints: speculative prefetching and preconnecting should not impact the current loading performance, being too aggressive may fetch unnecessary resources, and we must also guard against overloading the actual network. To manage this process, the Predictor relies on historical browsing data, heuristics, and many other hints from the browser to anticipate the requests.

BUILDING THE CHROME PREDICTOR
The "browser startup experience" has its own separate cache, where Chrome learns the first ten visited URLs across all of your sessions. Whenever you do a fresh boot of your browser, it immediately resolves all of those hostnames - a nice optimization to speed up your morning routine! Next, you focus on the omnibar and begin typing. If the input looks like a search query, then we can preconnect to the default search engine in anticipation of the query. Alternatively, if the input or the suggestion is a high-likelihood URL, then we can preconnect directly to the host. If we guess right, the DNS and TCP handshake may complete before we even hit enter! Next, we request the URL and the parser begins tokenizing the incoming bytes to incrementally build up the DOM tree. Due to the deterministic concurrency model in the browser, many subresources are blocking - the parser must stop and wait for the resource. To help eliminate the network wait time imposed by this model, WebKit uses a speculative PreloadScanner (HTMLPreloadScanner.cpp) which "looks ahead" in the document and queues up remote resources. This allows us to resolve and fetch some of the resources before the parser even sees them. But even that is suboptimal. Ideally, we should be able to learn and anticipate the subresource connections! In fact, that is exactly what Chrome does: it learns the resource domains for each visited hostname, and on repeat visits it can preemptively resolve and preconnect to these resource hosts before the parser even sees the first byte of the document. The image below shows the inferred resource domains, as well as the stats for igvita.com.

chrome://dns - Chrome learns subresource domains
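A page doesn't have to rely solely on the predictor's learned data: it can hint upcoming hostnames itself with dns-prefetch link tags. A minimal sketch (the helper name is ours; the rel="dns-prefetch" mechanism is the browser's):

```javascript
// Build a DNS-prefetch hint for a hostname the page knows it will need soon.
// Appending the returned markup to <head> (or creating the equivalent <link>
// element via the DOM) lets the browser resolve the hostname ahead of time,
// before the first request to that host is ever made.
function dnsPrefetchTag(hostname) {
    return '<link rel="dns-prefetch" href="//' + hostname + '">';
}

console.log(dnsPrefetchTag("fonts.googleapis.com"));
// <link rel="dns-prefetch" href="//fonts.googleapis.com">
```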

Finally, as you explore the rendered page, actions such as hovering over a link can also kick off a prefetch. Each of these signals goes into an internal FIFO prefetch queue and gets tagged with a ResolutionMotivation (url_info.h), which allows Chrome to re-order and optimize the resource load order:

enum ResolutionMotivation {
  MOUSE_OVER_MOTIVATED,     // Mouse-over link induced resolution.
  PAGE_SCAN_MOTIVATED,      // Scan of rendered page induced resolution.
  LINKED_MAX_MOTIVATED,     // enum demarkation above motivation from links.
  OMNIBOX_MOTIVATED,        // Omni-box suggested resolving this.
  STARTUP_LIST_MOTIVATED,   // Startup list caused this resolution.
  NO_PREFETCH_MOTIVATION,   // Browser navigation info (not prefetch related).
  EARLY_LOAD_MOTIVATED,     // In some cases we use the prefetcher to warm up
                            // the connection in advance of issuing the real
                            // request.
  UNIT_TEST_MOTIVATED,

  // The following involve predictive prefetching, triggered by a navigation.
  // The referring_url_ is also set when these are used.
  STATIC_REFERAL_MOTIVATED,   // External database suggested this resolution.
  LEARNED_REFERAL_MOTIVATED,  // Prior navigation taught us this resolution.
  SELF_REFERAL_MOTIVATED,     // Guess about need for a second connection.

  MAX_MOTIVATED               // Beyond all enums, for use in histogram bounding.
};

Best of all, we can inspect all of these historical and runtime caches right in the browser (copy the chrome:// links below, and open in a new tab):

• chrome://network-action-predictor - omnibox predictor stats (tip: check 'Filter zero confidences')
• chrome://net-internals/#sockets - current socket pool status
• chrome://net-internals#dns - Chrome's in-memory DNS cache
• chrome://histograms/DNS - histograms of your DNS performance
• chrome://dns - startup prefetch list and subresource host cache

DNS RESOLUTION IN CHROME
DNS resolution in Chrome deserves its own in-depth treatment, but it is worth mentioning that after much deliberation the Chrome team is now experimenting with building its own DNS resolver. Currently, Chrome relies on the OS to perform DNS resolution, and to do so it maintains a pool of 8 threads dedicated to this task. Each getaddrinfo() call is blocking, which also puts a hard cap on concurrency. Why 8? It is an empirical number based on the least common denominator of hardware - higher numbers can overload some home routers. With the new async resolver in place, the limit could be dropped in favor of a dynamic one, and Chrome would also be able to manage its own DNS cache and perform further optimizations, such as preemptive refresh of popular or expiring hostnames. For more details, read Will Chan's post on Google+: Host resolution in Chromium. If you are curious, you can enable the new resolver in chrome://flags (under "Built-in Asynchronous DNS") and explore the performance of your current DNS stack via the built-in Chrome histograms. In the session below, an average DNS lookup was taking 84.9ms (ouch):

chrome://histograms/DNS.PrefetchResolution

NETWORK LATENCY, MOBILE AND CHROME
If the Chrome predictor does its job right, then some of the cost of networking latency can be hidden from the user. The above heuristics and algorithms have proven to yield great results, but there is still a lot of work to be done. As most of us know first-hand, the mobile experience today is often excruciatingly slow, in large part due to the much higher RTTs (200-1000ms) on wireless networks. In fact, it is likely that the single best optimization you can make for mobile today is to reduce the number of outbound connections and the total byte size of your pages. Network latency is anything but free. The browser can definitely help, as we saw above, but do check your network waterfall chart - your users will thank you for it.

Ilya Grigorik
THE WEB PERFORMER
PHOTO BY: Umesh Nair

HI ILYA, WE ARE VERY EXCITED TO HAVE YOU WITH US. THANK YOU SO MUCH FOR GIVING US SOME OF YOUR TIME! CAN YOU TELL OUR READERS A BIT ABOUT YOURSELF? WHAT DO YOU DO FOR WORK? FOR FUN?
Appreciate the opportunity! I am a Developer Advocate for the "Make the Web Fast" team at Google, where my focus is everything related to, you guessed it, web performance. As "the performance guy", I also spend a lot of time working with the Google Chrome team, which is a lot of fun since the Chrome team is always incubating dozens of ideas to improve performance. In a nutshell, my job is to research current problems and performance bottlenecks, experiment with and evaluate new ideas and proposals, work with the developer community to gather feedback, and work with the Google engineers to improve the state of the art. I learn new things every day and get to work with the smartest people working on these problems, both in and outside of Google. Somehow, someone classified this as a "job" - in my book, it qualifies as fun.

HOW DID YOU COME TO WORK FOR GOOGLE?
I started PostRank back in early 2007 with the vision of building a "PageRank powered by social signals" - that is, instead of just looking at the link graph, our idea was to aggregate comments, votes, reshares, and other social signals, and use them to build a better ranking function for the web. At the time, blogging was all the rage, hence PostRank! We were so clever... Fast forward to early 2011, and one of the products our company developed was a social analytics service for online publishers. Within PostRank Analytics, we leveraged the Google Analytics API extensively and layered our own social data on top. We got to know the Google Analytics team pretty well, as we often found ourselves at the same events and conferences. At one point, we found ourselves in a Google conference room in Mountain View, talking about a "deeper partnership". The rest, as they say, is history. Our entire PostRank team (14 people) relocated from Waterloo (Canada) to Mountain View to join the Google Analytics team, and nowadays the PostRank Analytics product is much better known as "Social Reporting" within Google Analytics.

SO, POSTRANK, THEN WORKING WITH GOOGLE ANALYTICS, AND NOWADAYS YOU ARE FOCUSING ON WEB PERFORMANCE?
Web performance has always been a passion. In fact, analytics and web performance go hand in hand, and the jump is not nearly as big as you would imagine. One of the great opportunities that a company like Google can afford you is the ability to explore your passions across many different fields, and I took it - working with the "Make the Web Fast" team has been a blast. Instead of working on analytics during the day, and thinking and talking about web performance at nights and on weekends, the relationship is now reversed. I still work very closely with the Google Analytics team - they are building a number of amazing tools to help measure and quantify the impact of your site's performance.

PHOTO BY: Umesh Nair


PHOTO BY: Umesh Nair

ARE THERE ANY SPECIAL PROJECTS CLOSE TO YOUR HEART THAT YOU ARE WORKING OR WOULD LIKE TO HAVE TIME TO DO?
I am a big believer in lifelong learning: I read books, I listen to books, and I consume 10+ hours of video and lecture content every week. I find it incredibly exciting to watch organizations like Khan Academy, Udacity, and Coursera, amongst many others, start to change how we learn, when we learn, and who (everybody!) can learn any subject their heart desires. In the long term, I hope I can contribute to these trends in my own ways - both big and small. Amongst other things, I'm somewhat of a history geek. I love learning how the technologies we have come to take for granted came to be, why they are the way they are, and what drove these innovations at the time. One personal project I have started is to record a series of short videos (inspired by Khan Academy) covering the origins of early networks and protocols: how did the telegraph come to be, what is the history of ARPANET, why did we invent TCP, and so on. I hope someone will find these useful. If nothing else, I'm having a ton of fun doing the research and recording them. In parallel, I've started working on a "Networking in the Browser" book with the O'Reilly team. Turns out, many web developers have never taken a course on networking and don't have a solid understanding of the performance implications and limitations of the underlying network. This knowledge gap is actually a testament to the great job we've done to make the technology easy and accessible (yay!), but when it comes to building well-performing apps, an understanding of the underlying network layers is still a crucial piece of the performance puzzle.

CAN YOU TELL US A BIT ABOUT THE MAKE THE WEB FASTER INITIATIVE? WHAT IS ITS GOAL?
Make the Web Fast is a Google-wide team of performance gearheads with a single mission - build a faster web. What does this mean? We work on identifying existing bottlenecks, developing better tools and instrumentation, working with standards groups and partners to promote performance-oriented contributions, and writing code to help accelerate the performance of the web. From optimizing and improving the performance of TCP, to developing new protocols like SPDY, all the way up to improving JavaScript and GPU performance in modern browsers - we are working on all layers of the stack. Best of all, Google products and infrastructure are frequently our testbed, but the end goal for the team is to improve the performance of the entire web: our team was one of the founders of the W3C Performance Group, our PageSpeed optimization libraries are open source and have been adopted by many partners, and our kernel patches are making their way upstream into all the Linux distributions. Not to mention Chrome, Google Public DNS, and two dozen other projects. It’s a large, high-profile effort within Google.

WHAT TOOLS CAN WE USE TO DETECT THE CAUSE OF BAD PERFORMANCE?
There are dozens of very good ones, each optimized for a specific use case. But to start, every team should invest into understanding their full performance profile: your server may be slow, the connectivity of your clients may not be very good, or maybe it is the front-end code that’s performing poorly. You fix one thing, another problem rears its head. Performance is a process, not a destination. If there is one tool, or acronym, then it would be “RUM”, which stands for Real User Measurement. Benchmarks and synthetic tests are great, but they don’t show you the real performance data of your users. Most modern browsers now support Navigation Timing, which allows you to gather real network and front-end data. Check that first. Protip: Google Analytics collects this data for you!
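Navigation Timing exposes each phase of the page load as a timestamp on `performance.timing`. As a minimal sketch of RUM collection (the metric names and the `extractTimingMetrics` helper are illustrative, not part of any standard or analytics product), you might compute the key durations like this:

```javascript
// Minimal sketch of deriving Real User Measurement (RUM) metrics from a
// Navigation Timing object. All timestamps are in milliseconds since the
// Unix epoch, so differences between them give per-phase durations.
function extractTimingMetrics(timing) {
  return {
    dns: timing.domainLookupEnd - timing.domainLookupStart,   // DNS lookup
    tcp: timing.connectEnd - timing.connectStart,             // TCP handshake
    ttfb: timing.responseStart - timing.requestStart,         // time to first byte
    pageLoad: timing.loadEventStart - timing.navigationStart  // full page load
  };
}

// In a browser you would feed it the real data, e.g.:
//   window.addEventListener('load', function () {
//     var metrics = extractTimingMetrics(window.performance.timing);
//     // ...beacon `metrics` to your analytics endpoint of choice
//   });
```

The beaconing step is left out deliberately: where the numbers go depends entirely on your analytics setup.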

IN YOUR OPINION, WHAT ARE THE TOP 3 CAUSES OF BAD PERFORMANCE ON A WEBSITE?
Opinions are a dime a dozen. It’s always fun to talk in generalities, but if you want to have a real impact on the bottom line of your site or business, then measure first, measure again, then optimize. Done? Great! Lather, rinse, repeat! I’m a big proponent of treating speed as a product feature - it should be measured and prioritized as such. Quick tip, based on my experience in the analytics field: if you want your boss, team, or company to care about and invest in performance work, then you should spend some time quantifying its impact. Slow pages could be affecting your conversion rate, leading to higher bounce rates, and so forth - don’t optimize just because you can, optimize because you can make a real, positive impact on the business. Do this, and you will end up looking like a hero! With that in mind, front-end performance - meaning the loading of your JavaScript, CSS and other resources - is now often the bottleneck for many sites.

WHICH WEB STANDARDS OR TECHNOLOGIES HAVE EXPERIENCED THE MOST GAINS IN PROVIDING A FASTER WEB EXPERIENCE FOR END USERS?
A modern browser is an operating system with an optimized VM, a JIT, and machine learning to optimize your browsing on the fly. It is not fair to single out just the browser, because there have been so many improvements within the past few years, but without a doubt, browsers have made one of the largest contributions to the improved performance of our browsing. As users, we all benefit from the friendly competition between Chrome, Firefox, Opera, and IE... just to name a few! On top of that: much, much better tooling and instrumentation. We still have a long way to go, but we have made great progress in this space by enabling Navigation Timing, memory profiling, snapshots, and GPU and detailed network stats. With the rise of mobile, we are also seeing a lot more developers asking for better instrumentation to help debug their mobile apps on native hardware. We still have a lot of work ahead of us.

WHICH WEB STANDARDS OR TECHNOLOGIES ARE THE KEY TARGETS FOR IMPROVEMENT IN ORDER TO MAKE THE WEB FASTER?
More and better automation. Optimizations such as bundling resources, minification, spriting, inlining small requests, domain sharding, and so on... These can, and should, all be automated. We shouldn’t have to think about it. This is why our team invested heavily into projects such as mod_pagespeed, an open-source Apache module that can do all of the above for you and more: on-the-fly image optimization, rewriting and optimizing HTML and CSS, deferring image and JavaScript asset loading, and the list goes on. And for the long term: SPDY, or rather, HTTP 2.0. Many of the above optimizations are artifacts of the limitations of the HTTP 1.1 specification. The HTTPbis working group is now working on the HTTP 2.0 specification, and will be using SPDY v2 as the starting point. It might take a couple of years before we see an official HTTP 2.0 announcement but, in the meantime, you can already use SPDY on your server - there are plugins for Apache, nginx, Node.js and many others.
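To give a feel for that automation, a basic mod_pagespeed setup can be a handful of directives in the Apache configuration. The filter selection below is only an example of the kinds of rewriters the module offers, not a recommendation; consult the mod_pagespeed documentation for the full filter list and defaults:

```apache
# Illustrative mod_pagespeed configuration for Apache.
ModPagespeed on
# Combine and rewrite CSS/JS so fewer, smaller requests hit the wire.
ModPagespeedEnableFilters combine_css,rewrite_css,rewrite_javascript
# Recompress images on the fly and extend cache lifetimes of rewritten assets.
ModPagespeedEnableFilters rewrite_images,extend_cache
```

Once enabled, the module rewrites pages as they are served, so the optimizations apply without changing the site’s source.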

PHOTO BY: Ben Margolin

DO DEVELOPERS NEED TO CONSIDER THE DIFFERENCES IN HOW THE DIFFERENT BROWSER VENDORS MANAGE OPTIMIZATION (E.G., PAGE & RESOURCE LOADING, DNS PREFETCHING, ETC.)?
Yep. You shouldn’t be optimizing for any particular browser, but it is a good practice to make sure that you don’t have any performance outliers. Protip: Google Analytics > Site Speed reports, and look at the browser report, or use advanced segments to slice and dice by ISP, geography, or any other variable.

WHAT IS STILL MISSING IN ORDER TO MAKE THE WEB RUN FASTER AND MORE EFFICIENTLY?
Are we there yet? There is an even better question... Are we going in the right direction to begin with? For example, it may be hard to imagine an internet that does not run on TCP, but what if we dare to dream? Could we make it more efficient, could we resolve the current bottlenecks? This may seem too “blue sky”, but I think it’s important to allow for and to explore these questions. I think we have only scratched the surface of what’s possible. One thing is for sure: the adoption of the web will continue to grow at an exponential rate, bandwidth will increase, latency will improve within a small constant factor, and the pages and the applications will continue to grow in their complexity and scope. If we want to get to the “250 millisecond load time” for every page, we’ll have to make some dramatic changes in how the web is built. Roll up your sleeves, we have a lot to do!

DO YOU AND YOUR TEAM FOLLOW A METHODOLOGY OR SET OF BEST PRACTICES TO IDENTIFY AREAS WHERE THE WEB IS NOT RUNNING AS EFFICIENTLY AS IT COULD BE? IN RETURN, ARE THERE KEY STEPS TO FOLLOW TO IMPLEMENT IMPROVEMENTS IN THOSE AREAS?
We use almost every tool under the sun to help us understand the trends. In no particular order: HTTP Archive, WebPageTest.org, BrowserScope, PageSpeed Insights + Critical Path Explorer, Chrome stats, and the list goes on. Whenever I analyze a specific site, I always start with PageSpeed Insights, which provides customized recommendations, record and analyze some network + memory + rendering traces in DevTools, and also run it through WebPageTest to identify performance within different browsers. From there: fix, measure, iterate!

HOW IMPORTANT IS IT TO PLAN FOR MANAGING CSS FOR SITE OPTIMIZATION? WHAT ARE SOME KEY POINTS TO CONSIDER WHEN PLANNING YOUR CSS?
Browsers consider CSS as a “blocking resource” - meaning, nothing will be painted to the screen until the CSS is downloaded, parsed, and is applied. Put your CSS early in the document, and don’t forget to compress your CSS! (Some browsers do have a timeout on CSS to accelerate the paint time, but nonetheless...)
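Concretely, the common pattern (a generic sketch, with placeholder file names, not a prescription from the interview) is to reference stylesheets in the head and keep scripts out of the critical path, so the blocking CSS arrives as early as possible:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- CSS first: it blocks rendering, so let the browser fetch it immediately -->
  <link rel="stylesheet" href="styles.min.css">
</head>
<body>
  <p>Content can be painted as soon as the stylesheet above has arrived.</p>
  <!-- Scripts last (or async), so they don't delay the first paint -->
  <script src="app.min.js" async></script>
</body>
</html>
```

The same reasoning is why minifying and compressing the CSS pays off: every byte shaved off the stylesheet moves the first paint earlier.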

WHAT ARE SOME COMMON MISTAKES WEB DESIGNERS AND DEVELOPERS MAKE WHEN IT COMES TO OPTIMIZING THEIR WEBSITES AND APPS?
No performance optimization survives contact with the real user. Too often we run through the performance checklist, hand over the keys, and sign off. Then the real user picks up the app, adds two dozen widgets, imports a few photos from their camera, adds a blocking script or two, and that’s the end of the performance story. Instead, we should have better automation. Most of the content on the web is not authored by performance-conscious gurus, and even more importantly, we can’t expect them to learn or care about this stuff. We need to build better tools and processes that can automatically address these problems.

WHAT CAN DEVELOPERS AND DESIGNERS DO TO VERIFY WHICH OF THE ASSERTIONS THEY COME ACROSS ON THE WEB ARE VALID OR INVALID? WHICH SHOULD THEY ACT ON?
Instead of relying on hearsay and anecdotes, measure it on your site first, then check if that’s your biggest problem, and then invest some time to understand the underlying problem. Let me elaborate:

1) The plural of anecdote is not data. Before you invest hours into optimizing your CSS, you should measure what impact your CSS has on the overall performance profile of your site.

2) Before you dive into optimizing some sexy problem you are curious to learn about - is it the biggest problem? Too often we micro-optimize exactly the wrong thing. It is exciting to try out that new async JavaScript loader for your widgets, but what you may be missing is that the widget adds no value to begin with.

3) Once you are sure about the problem and you can’t find a well-researched guide or a solution... I kid you not, find the right file in the WebKit or Firefox source and just read the comments - you’ll learn a ton, and you won’t fall prey to some odd browser quirk.

WHAT STEPS CAN (OR SHOULD) DEVELOPERS TAKE FOR ISSUES THAT MAY BE BEYOND THEIR CONTROL (E.G., SLOW NETWORKS & DEVICES, SMALL CACHES, FLAKY DNS SERVERS, ETC.)?
Measure them. Once you know that DNS is a potential problem, you can find a better DNS provider. Slow networks? That’s why CDNs exist - move the content closer to your users. Lots of cache misses on mobile devices? You can investigate what you’re caching, or even leverage APIs like localStorage to improve performance. Found some browser or device quirk? Please submit a bug!
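As a sketch of the localStorage idea, a small read-through cache can paper over tiny HTTP caches on mobile. Everything here is hypothetical scaffolding: `cachedFetch`, the injected `storage` object, and `fetchFn` are illustrative names, not an API from the interview.

```javascript
// Hedged sketch of a read-through cache over localStorage. `storage` is
// injected so the same logic works with window.localStorage or any object
// exposing getItem/setItem; `fetchFn` stands in for your real data fetch.
function cachedFetch(key, fetchFn, storage) {
  var cached = storage.getItem(key);
  if (cached !== null && cached !== undefined) {
    return JSON.parse(cached);        // cache hit: skip the network entirely
  }
  var fresh = fetchFn(key);           // cache miss: go to the source
  storage.setItem(key, JSON.stringify(fresh));
  return fresh;
}

// In the browser:
//   var profile = cachedFetch('profile', loadProfileFromServer, window.localStorage);
```

In production you would also want cache invalidation (e.g. a version key or timestamp), which is omitted here to keep the sketch short.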


ARE THERE DIFFERENCES IN PLANNING FOR A FASTER WEB EXPERIENCE FOR STANDARD WEB APPS VS. MOBILE APPS?
Yes, of course. Mobile devices have slower connections, smaller caches, slower CPUs - basically, assume it’s “slower everything”. It is important to test on real hardware where possible to ensure that you meet your performance targets. One tip for network testing: find a “network conditioner” application for your platform and test your application with various network speeds. Try loading your application with 300-500ms RTT delay (a common 3G experience), decrease it down to 50ms RTT (cable), and optimize from there. Tip: OS X ships with the “Network Link Conditioner” as part of Xcode - try it! It’s an invaluable tool.

TIPS, FINAL THOUGHTS?
By now, I think you have figured out my angle: identify and measure the problem, quantify its impact in light of other features or other work, and prioritize performance work like any other task or feature. Performance is not a destination, it’s a process. If you want to succeed in the long term, make sure you can justify your work to your team and company. Also, invest in good tools and monitoring infrastructure, and work on building a culture of performance across your entire team. A shared dashboard that captures the performance metrics you care about, alongside other actionable metrics, can do wonders.



LIBRARY OF THE MONTH

BROADSTREET HTML5

BROADSTREET IS AN HTML5 MOBILE FRAMEWORK THAT LEVERAGES BACKBONE.JS TO PROVIDE AN MVC-LIKE FRAMEWORK ENVIRONMENT.

THE FRAMEWORK
Broadstreet is an HTML5 mobile framework that leverages Backbone.js to provide an MVC-like framework environment. In Backbone you extend views, models, and collections to build robust client-side web applications. We do this using jQuery and Backbone with an OOP/OOD pattern.

It creates reusable controls using Backbone views, forming a reusable mobile framework. Style the application using Sass and Compass, and leverage Lo-Dash, Modernizr, RequireJS and Raphaël.js, with a simple r.js build tool to minify and uglify your production scripts.

Playground
- Backbone - RequireJS - MVC

Difficulty
- rookie - intermediate - expert

Todo list
- controls - navigation - structure

by Darren Hurst

A MAIN VIEW
Require the control:

    'controls/viewController/mainView',

    createMainView: function(){
        // create a main page view
        this.app = new MainView().render(this.$el.selector);
    },

    render: function(){
        // start app
        this.createMainView();
    }

ADD AN INSTANCE OF A CHILD PAGE
    this.page2 = this.app.setPage(this.app);
    var page2obj = new Page2(this);
    this.page2.setHtml(page2obj.template);
    page2obj.render();

The child page objects load at the same time. The important thing to recognize here is that your application could have many MainView and ChildView sets. MainViews can be set up in the router; they will be available to navigate to by choosing the hash associated with the MainView. ChildViews can be shown by:

    routes: {
        '': 'home',
        'twitter': 'twitter' // twitter kitchen sink
    },
    'home': function(){
        var mainView = new MainView();
        mainView.render();
    },
    'twitter': function(){
        var twitter = new TwitterView();
        twitter.render();
    }


NAVIGATION
Mobile applications need to navigate between views, other view controllers and external webView requests. By calling a navigation item of #twitter you would load the twitter MainView.

    this.navbar = new NavBar().render(this.app);
    this.navbar.addButton("twitter", "Twitter Example", {"link": "#twitter"}); // link accepts http:

You can also navigate from the parent MainView to one of its ChildViews:

    this.navbar.addButton("gear", "page2 nav example", {"nav": this.page2});

navBarController
Method:

    .addButton(SVG icon name, Title, Nav Map)

Nav Map is a JSON object:

    nav  = internal object navigation
    link = hash location or external web link, e.g. http://www.google.com

Currently we have added two navigation controllers: header and navBar (footer).

CONTROLS
Most of the basic mobile controls are available under the /Controls folder.

Text Input Control

Instantiate a control:

    var TextInput = new Input("controls_right", this).render();
    TextInput.setTitle("Your Name - will alert on blur");
    TextInput.setEvent(this, "alertText");

Methods:

    new Input(control placement, the calling view)
    .setTitle(the text above the control)
    .setEvent(parent view, the function to execute)

ListView Controller
In this example we place a listView control into a scrollView control:

    var list = new listView().render(this.ScrollView1.getId());
    for (x in data) {
        // console.log(data[x].text);
        list.addRow(this.buildTweetBlock(data[x]));
    }

All controls in the project have a method .setClass(classname). This allows you to override the class names and, later on, will allow us to build a theme manager controller.

DOWNLOAD AND TRY BROADSTREET

Project info: https://github.com/BroadstreetMobile/BroadStreet

INTERACTIVE SAMPLE
Try an interactive application built with Broadstreet on the next page:


SAVE THE DATE

CREATE THE WEB TOUR

Find out how Adobe is helping move the web forward and get a sneak peek at new tools, technologies and services for web designers and developers. Join Adobe for a series of free full-day events to learn about the latest tools and techniques for creating content for the modern web. Topics covered include HTML, CSS, motion graphics, web development and more.

London - October 2nd
San Diego - October 9th
Tokyo - October 9th
Sydney - October 11th

DATE       EVENT CITY    LOCATION
10/04/12   Kansas City   United States    Register >
10/09/12   San Diego     United States    Register >
10/17/12   Oslo          Norway           Register >
10/19/12   Barcelona     Spain            Register >
10/23/12   Las Vegas     United States    Register >
10/25/12   Atlanta       United States    Register >
10/25/12   Paris         France           Register >
10/25/12   Rome          Italy            Register >
10/27/12   Beijing       China            Register >
11/10/12   Guangzhou     China            Register >


SHOWCASE

TRELLO

TRELLO IS A COLLABORATION TOOL THAT ORGANIZES YOUR PROJECTS INTO BOARDS. IN ONE GLANCE, TRELLO TELLS YOU WHAT’S BEING WORKED ON, WHO’S WORKING ON WHAT, AND WHERE SOMETHING IS IN A PROCESS.

Appliness uses Trello every day. Actually, this web application is at the center of the creation of our magazine. It’s one of the most impressive web applications I’ve ever seen: real-time sharing, real-time notifications and updates, public and private data, mobile applications... And it’s 100% free. Trello is also used by engineers to manage IT projects, which is why we wanted to showcase this application. We contacted the company and interviewed the designer and the developer behind this tool. If you don’t know Trello, check out the video presentation, then move to the next page to read the interview with Bobby Grace, the designer/developer behind Trello.


BOBBY GRACE
THE DESIGNER/DEVELOPER BEHIND TRELLO

CAN YOU DESCRIBE THE TRELLO APP FOR OUR READERS?
The basic idea is that you have a board, which might be a project, which contains lists, which are like phases (“To-do”, “Doing”, and “Done”, for example). Those lists contain cards, which are like tasks. You drag cards between lists to show progression. You can add others to the board, assign them to cards, plus comment, vote, attach files, add checklists, and more. All updates happen in real time and the site is a super fast, single-page app. The normal line we tell people is that Trello is a collaboration tool that organizes your projects into boards, but really it’s just a list of lists. It can be used for just about anything. You can plan a vacation with your family, manage a software project at work, or just keep track of recipes.

WHAT MAKES IT UNIQUE?
Trello lets you see everything about your project at once. You instantly see who’s working on what and where it is in the process. It’s got a very visual interface, especially with card covers, the images on the front of cards. We felt like a lot of task managers were just long, dull lists of tasks. Trello came from another idea for a product we had called “Five Things”. The idea was that each person on the team would list out five and only five things they were doing, and everybody would be able to see it and share their five things. Trello takes that idea of limiting tasks, since there’s a practical limit to the number of cards you can have on a board. And since a card has to be in a list, it is forced to be sorted in the process. Of course, I use “limit” and “force” in the fun sense.

WHO USES IT?
It’s something that everyone can use: both developers and managers at work, and roommates in an apartment. Your whole family planning a vacation. We’ve got a lot of big names using it, too, like Adobe, The New York Times, The Verge, Tumblr, Fresh Direct, and Khan Academy.
HAVE YOU FOUND PEOPLE USING TRELLO IN SURPRISING OR UNCONVENTIONAL WAYS THAT YOU DID NOT ANTICIPATE WHEN DESIGNING THE APPLICATION?
We knew we were developing a horizontal product, but we still love seeing all the interesting ways people use it. We’ve seen it used for publishing, recruiting, software development, wedding planning, home projects, research, fitness, event planning, job hunting, CRM, meet-ups, and lists of stuff to buy. At Fog Creek, we use it for Trello itself. We have a public development board at trello.com/dev. We also use it in sales, support and marketing, to plan intern events, and for our other products, FogBugz and Kiln. Pretty much on every team. We have a kitchen board where people can request food. They’ll add a “Clif bars” card to the requested list, subscribe to the card, and get notified when it gets moved to the delivered list. We also use Trello to map out machines in our server racks and track virtual machine usage. Everyone has personal boards, too.

CURRENTLY, TRELLO IS FREE TO USE. DO YOU SEE THAT EVER CHANGING IN THE FUTURE? IF SO, HOW?
Trello will always be free. Everything available for free today will be free forever. We plan to add premium, for-pay features in the future. We have some announcements coming up, but I can’t talk about anything just yet.


TRELLO IS 100% HOSTED. WILL THERE EVER BE ANY SOFTWARE TO INSTALL? WHAT ADVANTAGES DOES THIS MODEL PROVIDE?
We have no plans to release an installable version of Trello. We understand this means we’re cutting off some users who can’t ever use hosted software, but it makes life so much better for the vast majority. Developers get to focus on new features and have a single, predictable place to deploy. We don’t have to spend half of our time tracking down mysterious bugs on some arcane architecture. When somebody reports a bug, we can get a fix out for everybody almost instantly. We don’t have to worry about backwards compatibility or out-of-date versions. We can easily backfill changes. This ultimately makes life better for users. They get new features and fixes faster.

TO STAY AHEAD OF THE TECHNOLOGY CURVE, YOU EXPERIMENT A LOT WITH BLEEDING-EDGE TECHNOLOGIES. HOW HAVE YOU GOTTEN HURT ALONG THE WAY? ONCE YOU’VE STOOD UP AND DUSTED OFF, WHAT REWARDS HAVE YOU EXPERIENCED AS A RESULT?
Most of the technology we use is very new, no older than 5 years. We’ve had some nicks and bruises, but no broken bones. All in all, we’re happy with the ease of development and the speed of our app. One fun mistake we made early on was with deploys. Because we have a constantly open connection with the browser, we had the ability to refresh the page when a new version got deployed. This meant you could have the Trello tab open all week and you would always have the most current version, which we thought was pretty cool. What actually happened was that the flood of new connections would bring down the service. We tried staggering the refreshes, but that didn’t scale either. Most of our deploys happen in the background now. You get the updates when you refresh. We also had some trouble early on with MongoDB, but that was mostly due to running it on unsupported architecture.
We also battled socket.io for our real-time updates and ended up using a hacked-up version that only uses WebSockets and AJAX polling.

WHAT IS THE STRATEGY BEHIND THIS APPROACH?
Fog Creek co-founder Joel Spolsky said early on to target technology and browsers that were going to be around in 3 years. When Trello is really popular down the line, that old tech isn’t going to matter. It’s paid off in ease of development and developer happiness. Not having to support IE8, for instance, has saved us an incalculable amount of time. Trying things like Backbone.js has made sense of our client-side codebase. Using CoffeeScript lets us write sensible, clean JavaScript.

HOW DO YOU MANAGE PUSHING NEW RELEASES TO YOUR USERS?
We push about 3 times a week, sometimes more for important fixes. It’s a smooth process. We compile and concatenate all the LESS and CoffeeScript files into a single CSS and a single JS file, then send them to Amazon S3. For the server, we have a few shell scripts that handle deploys. We distribute Node.js processes across multiple boxes and use our load balancer to take a box out of service while the version is being updated. Again, people tend to leave a Trello tab open all day. The average time of a session on our site is around 60 minutes. The client can get out of date if we push an update, but since we have a constant connection, we can refresh the page if it’s needed.

WHAT FRAMEWORKS / JAVASCRIPT LIBRARIES ARE YOU USING?
We’re using Node.js on the server side. We needed something that would deliver a zillion little API calls and hold a lot of connections, so an event-driven, non-blocking server like Node.js made sense. We also make good use of the node modules express, connect, async, and mongoose. Backbone.js stitches the whole front-end together. It handles models, views, and updates to the DOM. We use Mustache for templates. We love the logic-less-ness of it. And of course, jQuery. We use CoffeeScript on both the front-end and back-end.


THE MOBILE APPLICATION IS VERY CLOSE TO THE BROWSER EXPERIENCE. IS IT A HYBRID APP? HOW DID YOU DEVELOP IT?
Both our Android and iPhone apps are entirely native, aside from the log-in and sign-up views. We have a couple of Android developers and three iOS developers, although everybody on the team is a generalist and our time is split. They both use our public API. We had betas for both apps, which helped us find bugs and smooth out development.

HOW DOES HTML5 FACTOR IN TO CREATING SUCH A LIGHT CLIENT FOR TRELLO? WHAT OTHER WEB COMPONENTS OR LIBRARIES ARE USED TO ACHIEVE THIS?
Backbone.js does most of the work here. All the templates and views are in the single JS file that makes up the client. It’s just a matter of sending messages down WebSockets or AJAX polling for JSON. Backbone will update the DOM. Cutting out super fancy CSS3 features like transforms, transitions and box shadows helps performance quite a lot. We went wild with them early on, and performance definitely suffered. They had poor performance on some browsers and older machines.

WHAT TECHNOLOGIES OR METHODS DO YOU USE TO INCREASE SERVER-SIDE PERFORMANCE?
We recently started sharding MongoDB, which has given us more room to scale and cut out strain on our servers. Staying on top of open-source software is important. Updating Node.js from 0.6 to 0.8 gave us a 20% speed increase, for instance. We’re now running everything through our new API, which helps combine requests and reduce duplicate calls.

HOW IMPORTANT ARE REAL-TIME UPDATES IN COLLABORATION SOFTWARE? WHAT ARE THE KEY COMPONENTS THAT MAKE THIS HAPPEN IN TRELLO?
Watching cards and comments show up instantly is a magical experience. We couldn’t imagine Trello as anything other than a real-time app. It’s something we had to have from the beginning and we’ve built it into the core of the app.
It works with a combination of Redis for our pub-sub layer, WebSockets to cut down on HTTP requests, and Backbone to update the DOM.

BROWSER CLIENTS CERTAINLY ARE NOT ALL CREATED EQUAL. WHAT WORKAROUNDS HAVE YOU HAD TO MAKE BECAUSE OF THIS TO STILL MAINTAIN THE FUNCTIONALITY AND RESPONSIVENESS TRELLO USERS HAVE BECOME ACCUSTOMED TO?
When we started the project, we agreed to only support the latest browsers at the time. That meant no IE8 support, which helped a lot. We also cut out a lot of the super fancy CSS3 features like transforms, transitions and box shadows, except where they actually help the app. They had poor performance on some browsers and older machines. We also fall back to AJAX polling when WebSockets are not available. Sacrificing real-time updates in the app was not an option, but WebSockets are still a nascent technology that is not widely supported, so AJAX polling is a nice fallback.
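The WebSocket-with-polling-fallback decision can be sketched as a feature test. This is a hedged illustration of the general pattern, not Trello’s actual code: `chooseTransport`, the `env` parameter (standing in for `window`), and the transport names are all made up here for testability.

```javascript
// Sketch of the classic real-time transport choice: prefer WebSockets,
// fall back to AJAX polling when the browser does not support them.
// `env` stands in for the global object (window in a browser) so the
// logic can be exercised outside a browser.
function chooseTransport(env) {
  if (typeof env.WebSocket === 'function') {
    return 'websocket';    // full-duplex connection, fewest HTTP requests
  }
  return 'ajax-polling';   // periodic XHRs: higher latency, works everywhere
}

// In the browser: var transport = chooseTransport(window);
```

Real implementations also handle WebSocket connection failures at runtime (e.g. proxies that block the upgrade), downgrading to polling even when the constructor exists.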


HELTER SKELTER NEWS

Fresh news about HTML and JavaScript collected by Brian Rinaldi - remotesynthesis.com


Combining Edge Animate and CSS FilterLab via Terry Ryan

5 HTML5 APIs You Didn’t Know Existed via David Walsh

Default CSS Display Values for Different HTML Elements via Louis Lazaris

Partial Application in JavaScript via Ben Alman

CSS3 Conditional Statements via Johnny Simpson

Linking the Hash Map via Justin Naifeh

Advanced Uploading Techniques - Part 2 via CreativeJS

Custom Scrollbars in WebKit by David Walsh

ECMAScript 6 collections, Part 1: Sets via Nicholas Zakas


Responsive Images for HTML5 via Hans Muller

HTML5 hidden Attribute via David Walsh

Looking for a Face.com API replacement? Try ReKognition. via Raymond Camden

Resizing images before upload using HTML5 canvas via Joseph Richter

Rundown of Handling Flexible Media via Chris Coyier

Finding Yourself Using Geolocation and the Google Maps API via Colin Ihrig

Encapsulation Breaking via Justin Naifeh

CSS3 2D and 3D graphics and animation effects via Oswald Campesato

Computer science in JavaScript: Insertion sort by Nicholas Zakas

MORE NEWS ON REMOTESYNTHESIS.COM