
MODERN WEB

DEVELOPMENT
BY FLAVIO COPES
HTTPS://FLAVIOCOPES.COM

UPDATED 21 MAR 2018


Index

JavaScript
Introduction to JavaScript
ECMAScript
Lexical Structure
Variables
Types
Expressions
Arrays
Promises
Template Literals
The Set Data Structure
The Map Data Structure
Loops and Scope
Async and Await
Functional Programming

Web Platform
Progressive Web Apps
Service Workers
Fetch API
Channel Messaging API
Cache API
Push API
Notifications API
IndexedDB
Selectors API

Frontend Dev Tools


Webpack
Babel
npm
Yarn
Jest
ESLint
Prettier
Browser DevTools

React and Redux


React
JSX
Styled Components
Redux
Redux Saga

GraphQL
GraphQL
Apollo

Git and GitHub


Git
GitHub
A Git cheat sheet

CSS
CSS Grid
Flexbox
CSS Variables
PostCSS
How to center things in modern CSS
The CSS margin property

Deployment
Netlify
Firebase Hosting

Tutorials
Make a CMS-based website work offline
Create a Spreadsheet using React
JAVASCRIPT
INTRODUCTION TO THE JAVASCRIPT
PROGRAMMING LANGUAGE
JavaScript is one of the most popular programming languages in the
world.

Introduction
Basic definition of JavaScript
JavaScript versions

Introduction
JavaScript is one of the most popular programming languages in the world.

Created more than 20 years ago, it's gone a very long way since its humble
beginnings.

Being the first - and the only - scripting language that was supported
natively by web browsers, it simply stuck.

In the beginning, it was not nearly as powerful as it is today, and it was
mainly used for fancy animations and the marvel known at the time as
DHTML.

As the needs of the web platform grew, JavaScript had to grow as well, to
accommodate the needs of one of the most widely used ecosystems in the
world.

Many things were introduced in the platform, such as browser APIs, but the
language itself grew quite a lot as well.

JavaScript is now widely used also outside of the browser. The rise of
Node.js in the last few years unlocked backend development, once the
domain of Java, Ruby, Python, PHP and other more traditional server-side
languages.

JavaScript is now also the language powering databases and many more
applications, and it's even possible to develop embedded applications,
mobile apps, TV apps and much more. What started as a tiny language
inside the browser is now the most popular language in the world.

Basic definition of JavaScript


JavaScript is a programming language that is:

high level: it provides abstractions that allow you to ignore the
details of the machine it's running on. It manages memory
automatically with a garbage collector, so you can focus on the code
instead of managing memory locations, and provides many constructs
which allow you to deal with highly powerful variables and objects.
dynamic: opposed to static programming languages, a dynamic language
executes at runtime many of the things that a static language does at
compile time. This has pros and cons, and it gives us powerful
features like dynamic typing, late binding, reflection, functional
programming, object runtime alteration, closures and much more.
dynamically typed: a variable does not enforce a type. You can
reassign any type to a variable, for example assigning an integer to a
variable that holds a string.
weakly typed: as opposed to strong typing, weakly (or loosely) typed
languages do not enforce the type of an object, allowing more
flexibility but denying us type safety and type checking (something
that TypeScript and Flow aim to improve).
interpreted: it’s commonly known as an interpreted language, which
means that it does not need a compilation stage before a program can
run, as opposed to C, Java or Go for example. In practice, browsers do
compile JavaScript before executing it, for performance reasons, but
this is transparent to you: there is no additional step involved.
multi-paradigm: the language does not enforce any particular
programming paradigm, unlike Java for example which forces the use of
object-oriented programming, or C that forces imperative programming.
You can write JavaScript using an object-oriented paradigm, using
prototypes and the new (as of ES6) classes syntax. You can write
JavaScript in a functional programming style, with its first-class
functions, or even in an imperative style (C-like).
In case you’re wondering, JavaScript has nothing to do with Java, it’s a
poor name choice but we have to live with it.

JavaScript versions
Let me introduce the term ECMAScript here. We have a complete guide
dedicated to ECMAScript where you can dive into it more, but to start
with, you just need to know that ECMAScript (also called ES) is the name
of the JavaScript standard.

JavaScript is an implementation of that standard. That's why you'll hear
about ES6, ES2015, ES2016, ES2017, ES2018 and so on.

For a very long time, the version of JavaScript that all browsers ran was
ECMAScript 3. Version 4 was cancelled due to feature creep (they were
trying to add too many things at once), while ES5 was a huge version for
JS.

ES2015, also called ES6, was huge as well.

Since then, the ones in charge decided to release one version per year, to
avoid having too much time idle between releases, and have a faster
feedback loop.

Currently the latest approved JavaScript version is ES2017.


ECMASCRIPT 2015-2017
Discover everything about ECMAScript, and the latest features added in
ES6, 7 and 8

Introduction
Current ECMAScript version
When is the next version coming out?
What is TC39
ES Versions
ES Next
ES2015 aka ES6
Arrow Functions
A new this scope
Promises
Generators
let and const
Classes
Constructor
Super
Getters and setters
Modules
Importing modules
Exporting modules
Multiline strings
Template Literals
Default parameters
Destructuring assignments
Enhanced Object Literals
Simpler syntax to include variables
Prototype
super()
Dynamic properties
For-of loop
Map and Set
ES2016 aka ES7
Array.prototype.includes()
Exponentiation Operator
ES2017 aka ES8
String padding
Object.values()
Object.entries()
getOwnPropertyDescriptors()
In what way is this useful?
Trailing commas
Async functions
Why they are useful
A quick example
Multiple async functions in series
Shared Memory and Atomics

Introduction
Whenever you read about JavaScript you’ll inevitably see one of these
terms:

ES3
ES5
ES6
ES7
ES8
ES2015
ES2016
ES2017
ECMAScript 2017
ECMAScript 2016
ECMAScript 2015

What do they mean?

They are all referring to a standard, called ECMAScript.

ECMAScript is the standard upon which JavaScript is based, and it’s often
abbreviated to ES.
Besides JavaScript, other languages implement(ed) ECMAScript, including:

ActionScript (the Flash scripting language), which is losing
popularity since Flash will be officially discontinued in 2020
JScript (the Microsoft scripting dialect): since at the time
JavaScript was supported only by Netscape and the browser wars were at
their peak, Microsoft had to build its own version for Internet
Explorer

but of course JavaScript is the most popular and widely used
implementation of ES.

Why this weird name? Ecma International is a Swiss standards
association that is in charge of defining international standards.

When JavaScript was created, it was presented by Netscape and Sun
Microsystems to Ecma, and they gave it the name ECMA-262, alias ECMAScript.

This press release by Netscape and Sun Microsystems (the maker of Java)
(https://web.archive.org/web/20070916144913/http://wp.netscape.com/newsref/pr/
might help explain the name choice, which might involve legal and branding
issues raised by Microsoft, which was in the committee, according to
Wikipedia (https://en.wikipedia.org/wiki/ECMAScript) .

After IE9, Microsoft stopped branding its ES support in browsers
as JScript and started calling it JavaScript (at least, I could not find
references to it any more)

So today, the only popular language supporting the ECMAScript spec is
JavaScript.

Current ECMAScript version

The current ECMAScript version as of the time of writing (Sep 2017) is
ES2017, aka ES8.

It was released in June 2017.


When is the next version coming out?

Historically, JavaScript editions have been standardized in June, so we can
expect ECMAScript 2018 (named ES2018 or ES9) to be released in June 2018,
but this is just speculation.

What is TC39

TC39 is the committee that evolves JavaScript.

The members of TC39 are companies involved in JavaScript and browser
vendors, including Mozilla, Google, Facebook, Apple, Microsoft, Intel,
PayPal, SalesForce and others.

Every standard version proposal must go through various stages, which are
explained here (https://tc39.github.io/process-document/) .

ES Versions

I found it puzzling that sometimes an ES version is referenced by edition
number and sometimes by year, and the last digit of the year by chance
being one less than the edition number adds to the general confusion
around JS/ES.

Before ES2015, ECMAScript specifications were commonly called by their
edition. So ES5 is the official name for the ECMAScript specification
update published in 2009.

Why does this happen? During the process that led to ES2015, the name was
changed from ES6 to ES2015, but since this was done late, people still
referenced it as ES6, and the community has not left the edition naming
behind - the world is still calling ES releases by edition number.

This table should clear things a bit:

Edition   Official name   Date published

ES8       ES2017          June 2017
ES7       ES2016          June 2016
ES6       ES2015          June 2015
ES5.1     ES5.1           June 2011
ES5       ES5             December 2009
ES4       ES4             Abandoned
ES3       ES3             December 1999
ES2       ES2             June 1998
ES1       ES1             June 1997

ES Next

ES.Next is a name that always indicates the next version of JavaScript.

So at the time of writing, ES8 has been released, and ES.Next is ES9.

ES2015 aka ES6


ECMAScript 2015, also known as ES6, is a fundamental version of the
ECMAScript standard.

Published 4 years after the previous standard revision, ECMAScript 5.1, it
also marked the switch from edition number to year number.

So it should not be called ES6 (although everyone calls it that) but
ES2015 instead.

Since so much time passed between releases, this version is full of
important new features and major changes in suggested best practices for
developing JavaScript programs. To understand how fundamental ES2015 is,
just keep in mind that with this version, the specification document grew
from 250 pages to ~600.

The most important changes in ES2015 include:

Arrow functions
Promises
Generators
let and const
Classes
Modules
Multiline strings
Template literals
Default parameters
Destructuring assignment
Enhanced object literals
The for..of loop
Map and Set

Each of them has a dedicated section in this article.

Arrow Functions

Arrow functions have changed how most JavaScript code looks (and works)
since their introduction.

Visually, it’s a simple and welcome change, from:

const foo = function foo() {
  //...
}

to

const foo = () => {
  //...
}

And if the function body is a one-liner, just:

const foo = () => doSomething()

Also, if you have a single parameter, you could write:

const foo = param => doSomething(param)


This is not a breaking change; regular functions will continue to work
just as before.

A new this scope

The this scope with arrow functions is inherited from the context.

With regular functions, this always refers to the nearest function,
while with arrow functions this problem is removed, and you won't need to
write var that = this ever again.
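
A minimal sketch of the difference (the counter object and its delayed
increment are made up for illustration):

const counter = {
  count: 0,
  start() {
    // the arrow function inherits `this` from start(),
    // so `this.count` refers to counter.count
    setTimeout(() => {
      this.count++
      console.log(this.count) // 1
    }, 1000)
    // with a regular function() callback, `this` would not point to
    // counter, and pre-ES6 code had to save it: var that = this
  }
}
counter.start()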

Promises

Promises (check the full guide to promises) allow us to eliminate the
famous “callback hell”, although they introduce a bit more complexity
(which has been solved in ES2017 with async, a higher level construct).

Promises have been used by JavaScript developers well before ES2015, with
many different library implementations (e.g. jQuery, q, deferred.js,
vow…), and the standard put a common ground across differences.

By using promises you can rewrite this code

setTimeout(function() {
console.log('I promised to run after 1s')
setTimeout(function() {
console.log('I promised to run after 2s')
}, 1000)
}, 1000)

as

const wait = () => new Promise((resolve, reject) => {
  setTimeout(resolve, 1000)
})

wait().then(() => {
console.log('I promised to run after 1s')
return wait()
})
.then(() => console.log('I promised to run after 2s'))

Generators

Generators are a special kind of function with the ability to pause
themselves, and resume later, allowing other code to run in the meantime.

The code decides that it has to wait, so it lets other code “in the queue”
to run, and keeps the right to resume its operations “when the thing it’s
waiting for” is done.

All this is done with a single, simple keyword: yield . When a generator
contains that keyword, the execution is halted.

A generator can contain many yield keywords, thus halting itself
multiple times, and it's identified by the *function keyword, which is
not to be confused with the pointer dereference operator used in lower
level programming languages such as C, C++ or Go.

Generators enable whole new paradigms of programming in JavaScript,


allowing:

2-way communication while a generator is running
long-lived while loops which do not freeze your program

Here is an example of a generator which explains how it all works.

function *calculator(input) {
var doubleThat = 2 * (yield (input / 2))
var another = yield (doubleThat)
return (input * doubleThat * another)
}

We initialize it with

const calc = calculator(10)

Then we start the iterator on our generator:

calc.next()
This first iteration starts the iterator. The code returns this object:

{
  done: false,
  value: 5
}

What happens is: the code runs the function, with input = 10 as it was
passed in the generator constructor. It runs until it reaches the yield ,
and returns the content of the yield : input / 2 = 5 . So we get a value of
5, and the indication that the iteration is not done (the function is just
paused).

In the second iteration we pass the value 7 :

calc.next(7)

and what we got back is:

{
  done: false,
  value: 14
}

7 was placed as the value of doubleThat . Important: you might think that
input / 2 was the argument, but that's just the return value of
the first iteration. We now skip that, and use the new input value, 7 ,
and multiply it by 2.

We then reach the second yield , and that returns doubleThat , so the
returned value is 14 .

In the next, and last, iteration, we pass in 100

calc.next(100)

and in return we got


{
  done: true,
  value: 14000
}

The iteration is done (no more yield keywords are found) and we just return
(input * doubleThat * another) , which amounts to 10 * 14 * 100 .

let and const

var is traditionally function scoped.

let is a new variable declaration which is block scoped.

This means that declaring let variables in a for loop, inside an if or
in a plain block is not going to let that variable “escape” the block,
while var s are hoisted up to the function definition.

const is just like let , but immutable: the variable can never be reassigned.

In JavaScript moving forward, you'll see little to no var declarations
any more, just let and const .

const in particular, maybe surprisingly, is very widely used nowadays,
with immutability being very popular.
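
A minimal sketch of the difference in scoping:

for (let i = 0; i < 3; i++) {
  // i is only visible inside the loop body
}
// console.log(i) // ReferenceError: i is not defined

for (var j = 0; j < 3; j++) {
  // j is hoisted up to the enclosing function (or global) scope
}
console.log(j) // 3 — j "escaped" the loop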

Classes

Traditionally JavaScript is the only mainstream language with prototype-
based inheritance. Programmers switching to JS from class-based languages
found it puzzling, but ES2015 introduced classes, which are just syntactic
sugar over the inner workings, but changed a lot how we build JavaScript
programs.

Now inheritance is very easy and resembles other object-oriented
programming languages:
class Person {
constructor(name) {
this.name = name
}

hello() {
return 'Hello, I am ' + this.name + '.'
}
}

class Actor extends Person {
  hello() {
    return super.hello() + ' I am an actor.'
  }
}

var tomCruise = new Actor('Tom Cruise')
tomCruise.hello()

(the above program returns “Hello, I am Tom Cruise. I am an actor.”)

Classes do not have explicit class variable declarations, but you must
initialize any variable in the constructor.

Constructor

Classes have a special method called constructor which is called when a


class is initialized via new .

Super

The parent class's constructor can be called using super() , and its methods accessed with super.methodName() .

Getters and setters

A getter for a property can be declared as

class Person {
get fullName() {
return `${this.firstName} ${this.lastName}`
}
}

Setters are written in the same way:

class Person {
set age(years) {
this.theAge = years
}
}
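
Getters and setters are then accessed like plain properties, without
parentheses. A minimal usage sketch, assuming a Person class that
combines the getter and setter examples above:

const flavio = new Person()
flavio.firstName = 'Flavio'
flavio.lastName = 'Copes'
flavio.age = 35              // runs the setter, storing 35 in this.theAge
console.log(flavio.fullName) // 'Flavio Copes' — runs the getter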

Modules

Before ES2015, there were at least 3 major competing module standards,
which fragmented the community:

AMD
RequireJS
CommonJS

ES2015 standardized these into a common format.

Importing modules

Importing is done via the import ... from ... construct:

import * as mymodule from 'mymodule'
import React from 'react'
import React, { Component } from 'react'
import MyLibrary from 'react' //a default export can be imported under any name

Exporting modules

You can write modules and export anything to other modules using the
export keyword:

export var foo = 2
export function bar() { /* ... */ }

Multiline strings

ES2015 introduced a new way to define a string, using backticks.

This allows you to write multiline strings very easily:

const str = `One
Two
Three`
Compare it to pre-ES2015:

var str = 'One\n' +
  'Two\n' +
  'Three'

Template Literals

Template literals are a new syntax to create strings:

const aString = `A string`

They provide a way to embed expressions into strings, effectively
interpolating the values, by using the ${a_variable} syntax:

const aVariable = 'test'
const string = `something ${aVariable}` //something test

You can perform more complex expressions as well:

const string = `something ${1 + 2 + 3}`
const string2 = `something ${foo() ? 'x' : 'y' }`

and strings can span over multiple lines:

const string3 = `Hey
this
string
is awesome!`

See this post for an in-depth guide on template literals

Default parameters

Functions now support default parameters:


const foo = function(index = 0, testing = true) { /* ... */ }
foo()
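
Omitted arguments fall back to the defaults; a quick sketch of how the
foo above behaves:

foo()          // index = 0, testing = true
foo(5)         // index = 5, testing = true
foo(5, false)  // index = 5, testing = false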

Destructuring assignments

Given an object, you can extract just some values and put them into named
variables:

const person = {
firstName: 'Tom',
lastName: 'Cruise',
actor: true,
age: 54, //made up
}

const {firstName: name, age} = person

name and age contain the desired values.

The syntax also works on arrays:

const a = [1, 2, 3, 4, 5]
const [first, second, , , fifth] = a

Enhanced Object Literals

In ES2015 Object Literals gained superpowers.

Simpler syntax to include variables

Instead of doing

const something = 'y'
const x = {
  something: something
}

you can do

const something = 'y'
const x = {
  something
}

Prototype

A prototype can be specified with

const anObject = { y: 'y' }
const x = {
  __proto__: anObject
}

super()

const anObject = { y: 'y', test: () => 'zoo' }
const x = {
  __proto__: anObject,
  test() {
    return super.test() + 'x'
  }
}
x.test() //zoox

Dynamic properties

const x = {
['a' + '_' + 'b']: 'z'
}
x.a_b //z

For-of loop

ES5 back in 2009 introduced forEach() loops. While nice, they offered
no way to break, like for loops always did.

ES2015 introduced the for-of loop, which combines the conciseness of
forEach with the ability to break:

//iterate over the value
for (const v of ['a', 'b', 'c']) {
  console.log(v);
}

//get the index as well, using `entries()`
for (const [i, v] of ['a', 'b', 'c'].entries()) {
  console.log(i, v);
}
Map and Set

Map and Set (and their garbage-collectable counterparts WeakMap and WeakSet)
are the official implementations of two very popular data structures.

ES2016 aka ES7


ES7, officially known as ECMAScript 2016, was finalized in June 2016.

Compared to ES6, ES7 is a tiny release for JavaScript, containing just two
features:

Array.prototype.includes
Exponentiation Operator

Array.prototype.includes()

This feature introduces a more readable syntax for checking if an array
contains an element.

With ES6 and lower, to check if an array contained an element you had to
use indexOf , which checks the index in the array, and returns -1 if
the element is not there.

Since -1 is evaluated as a truthy value, you could not, for example, do

if (![1,2].indexOf(3)) {
console.log('Not found')
}

With this feature introduced in ES7 we can do

if (![1,2].includes(3)) {
console.log('Not found')
}
Exponentiation Operator

The exponentiation operator ** is the equivalent of Math.pow() , but
brought into the language instead of being a library function.

Math.pow(4, 2) == 4 ** 2

This feature is a nice addition for math intensive JS applications.

The ** operator is standardized across many languages including Python,
Ruby, MATLAB, Lua, Perl and many others.

ES2017 aka ES8


ECMAScript 2017, edition 8 of the ECMA-262 Standard (also commonly called
ES2017 or ES8), was finalized in June 2017.

Compared to ES6, ES8 is a tiny release for JavaScript, but still it
introduces very useful features:

String padding
Object.values
Object.entries
Object.getOwnPropertyDescriptors()
Trailing commas in function parameter lists and calls
Async functions
Shared memory and atomics

String padding

The purpose of string padding is to add characters to a string, so it
reaches a specific length.

ES2017 introduces two String methods: padStart() and padEnd() .

padStart(targetLength [, padString])
padEnd(targetLength [, padString])
Sample usage:

padStart()

'test'.padStart(4)         //'test'
'test'.padStart(5)         //' test'
'test'.padStart(8)         //'    test'
'test'.padStart(8, 'abcd') //'abcdtest'

padEnd()

'test'.padEnd(4)         //'test'
'test'.padEnd(5)         //'test '
'test'.padEnd(8)         //'test    '
'test'.padEnd(8, 'abcd') //'testabcd'

Object.values()

This method returns an array containing all the object's own property
values.

Usage:

const person = { name: 'Fred', age: 87 }
Object.values(person) // ['Fred', 87]

Object.values() also works with arrays:

const people = ['Fred', 'Tony']
Object.values(people) // ['Fred', 'Tony']

Object.entries()

This method returns an array containing all the object own properties, as
an array of [key, value] pairs.

Usage:
const person = { name: 'Fred', age: 87 }
Object.entries(person) // [['name', 'Fred'], ['age', 87]]

Object.entries() also works with arrays:

const people = ['Fred', 'Tony']
Object.entries(people) // [['0', 'Fred'], ['1', 'Tony']]

getOwnPropertyDescriptors()

This method returns all own (non-inherited) property descriptors of an
object.

Any object in JavaScript has a set of properties, and each of these
properties has a descriptor.

A descriptor is a set of attributes of a property, and it's composed of a
subset of the following:

value: the value of the property
writable: true if the property can be changed
get: a getter function for the property, called when the property is
read
set: a setter function for the property, called when the property is
set to a value
configurable: if false, the property cannot be removed nor can any
attribute be changed, except its value
enumerable: true if the property is enumerable

Object.getOwnPropertyDescriptors(obj) accepts an object, and
returns an object with the set of descriptors.

In what way is this useful?

ES2015 gave us Object.assign() , which copies all enumerable own
properties from one or more objects, and returns the target object.

However there is a problem with that, because it does not correctly copy
properties with non-default attributes.
If an object for example has just a setter, it's not correctly copied to a
new object using Object.assign() .

For example with

const person1 = {
set name(newName) {
console.log(newName)
}
}

This won’t work:

const person2 = {}
Object.assign(person2, person1)

But this will work:

const person3 = {}
Object.defineProperties(person3,
Object.getOwnPropertyDescriptors(person1))

As you can see with a simple console test:

person1.name = 'x' // "x" is logged
person2.name = 'x' // nothing is logged
person3.name = 'x' // "x" is logged

person2 misses the setter: it was not copied over.

The same limitation goes for shallow cloning objects with Object.create().

Trailing commas

This feature allows trailing commas in function declarations, and
in function calls:
const doSomething = (var1, var2,) => {
//...
}

doSomething('test2', 'test2',)

This change will encourage developers to stop the ugly “comma at the start
of the line” habit.

Async functions

Check the dedicated post about async/await

ES2017 introduced the concept of async functions, and it’s the most
important change introduced in this ECMAScript edition.

Async functions are a combination of promises and generators to reduce the
boilerplate around promises, and the “don't break the chain” limitation of
chaining promises.

Why they are useful

It’s a higher level abstraction over promises.

When Promises were introduced in ES2015, they were meant to solve a
problem with asynchronous code, and they did, but over the 2 years that
separated ES2015 and ES2017, it was clear that promises could not be the
final solution. Promises were introduced to solve the famous callback hell
problem, but they introduced complexity of their own, and syntactic
complexity. They were good primitives around which a better syntax could
be exposed to developers: enter async functions.

A quick example

Code making use of asynchronous functions can be written as

function doSomethingAsync() {
return new Promise((resolve) => {
setTimeout(() => resolve('I did something'), 3000)
})
}

async function doSomething() {
  console.log(await doSomethingAsync())
}

console.log('Before')
doSomething()
console.log('After')

The above code will print the following to the browser console:

Before
After
I did something //after 3s

Multiple async functions in series

Async functions can be chained very easily, and the syntax is much more
readable than with plain promises:

function promiseToDoSomething() {
return new Promise((resolve)=>{
setTimeout(() => resolve('I did something'), 10000)
})
}

async function watchOverSomeoneDoingSomething() {
  const something = await promiseToDoSomething()
  return something + ' and I watched'
}

async function watchOverSomeoneWatchingSomeoneDoingSomething() {
  const something = await watchOverSomeoneDoingSomething()
  return something + ' and I watched as well'
}

watchOverSomeoneWatchingSomeoneDoingSomething().then((res) => {
console.log(res)
})

Shared Memory and Atomics

WebWorkers are used to create multithreaded programs in the browser.

They offer a messaging protocol via events. Since ES2017, you can create a
shared memory array between web workers and their creator, using a
SharedArrayBuffer .
Since it’s unknown how much time writing to a shared memory portion takes
to propagate, Atomics are a way to enforce that when reading a value, any
kind of writing operation is completed.

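A minimal sketch of the idea; the worker.js file name and the stored
value are made up for illustration:

// main thread: allocate shared memory and hand it to a worker
const buffer = new SharedArrayBuffer(4) // room for one 32-bit integer
const shared = new Int32Array(buffer)
const worker = new Worker('worker.js')
worker.postMessage(buffer)
Atomics.store(shared, 0, 42) // atomic write

// worker.js: an atomic read is guaranteed to see a completed write
onmessage = (e) => {
  const shared = new Int32Array(e.data)
  console.log(Atomics.load(shared, 0)) // 42, once the write has landed
}
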
Any more detail on this can be found in the spec proposal
(https://github.com/tc39/ecmascript_sharedmem/blob/master/TUTORIAL.md) ,
which has since been implemented.
LEXICAL STRUCTURE OF JAVASCRIPT
A deep dive into the building blocks of JavaScript

Unicode
Semicolons
White space
Case sensitive
Comments
Literals and Identifiers
Reserved words

Unicode
JavaScript is written in Unicode. This means you can use Emojis as
variable names, but more importantly, you can write identifiers in any
language, for example Japanese or Chinese.

Semicolons
JavaScript has a very C-like syntax, and you might see lots of code
samples that feature semicolons at the end of each line.

Semicolons are’t mandatory, and JavaScript does not have any problem in
code that does not use them, and lately many developers, especially those
coming from languages that do not have semicolons, started avoiding using
them.

You just need to avoid doing strange things like typing statements on
multiple lines

return
variable
or starting a line with a parenthesis or square bracket ( ( or [ ) and
you'll be safe 99.9% of the time (and your linter will warn you).
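
For example, here's a minimal sketch of the bracket pitfall — without a
semicolon the two lines are parsed as one statement:

const a = {}
['a', 'b'].forEach(x => console.log(x))
// parsed as: const a = {}['a', 'b'].forEach(...)
// and throws a TypeError instead of logging 'a' and 'b'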

It comes down to personal preference, and lately I have decided to never
add useless semicolons, so on this site you'll never see them.

White space
JavaScript does not consider white space meaningful. Spaces and line
breaks can be added in any fashion you might like, at least in theory.

In practice, you will most likely keep a well defined style and adhere to
what people commonly use, and enforce this using a linter or a style tool
such as Prettier.

For example, I like to always use 2 characters to indent.

Case sensitive
JavaScript is case sensitive. A variable named something is different
from Something .

The same goes for any identifier.

Comments
You can use two kinds of comments in JavaScript:

/* */

//

The first can span over multiple lines and needs to be closed.

The second comments everything that’s on its right, on the current line.
Literals and Identifiers
We define as literal a value that is written in the source code, for
example a number, a string, a boolean or also more advanced constructs,
like Object Literals or Array Literals:

5
'Test'
true
['a', 'b']
{color: 'red', shape: 'Rectangle'}

An identifier is a sequence of characters that can be used to identify a
variable, a function, or an object. It can start with a letter, the dollar
sign $ or an underscore _ , and it can contain digits. Using Unicode, a
letter can be any allowed char, for example an emoji.

Test
test
TEST
_test
Test1
$test

The dollar sign is commonly used to reference DOM elements.

Reserved words
You can’t use as identifiers any of the following words:

break
do
instanceof
typeof
case
else
new
var
catch
finally
return
void
continue
for
switch
while
debugger
function
this
with
default
if
throw
delete
in
try
class
enum
extends
super
const
export
import
implements
let
private
public
interface
package
protected
static
yield

because they are reserved by the language.


JAVASCRIPT VARIABLES
A variable is a literal assigned to an identifier, so you can reference
and use it later in the program. Learn how to declare one with
JavaScript

Introduction to JavaScript Variables


Using var
Using let
Using const

Introduction to JavaScript Variables


A variable is a literal assigned to an identifier, so you can reference
and use it later in the program.

Variables in JavaScript do not have any type attached. Once you assign a
specific literal type to a variable, you can later reassign the variable
to host any other type, without type errors or any issue.

This is why JavaScript is sometimes referenced as “untyped”.

A variable must be declared before you can use it. There are 3 ways to do
it, using var , let or const , and those 3 ways differ in how you can
interact with the variable later on.

Using var
Until ES2015, var was the only construct available for defining
variables.

var a = 0
If you forget to add var you will be assigning a value to an undeclared
variable, and the results might vary.

In modern environments, with strict mode enabled, you will get an error.
In older environments (or with strict mode disabled) this will simply
initialize the variable and assign it to the global object.

If you don’t initialize the variable when you declare it, it will have the
undefined value until you assign a value to it.

var a //typeof a === 'undefined'

You can redeclare the variable many times, overriding it:

var a = 1
var a = 2

You can also declare multiple variables at once in the same statement:

var a = 1, b = 2

The scope is the portion of code where the variable is visible.

A variable initialized with var outside of any function is assigned to
the global object, has a global scope and is visible everywhere. A
variable initialized with var inside a function is assigned to that
function, it's local and is visible only inside it, just like a function
parameter.

Any variable defined into a function with the same name of a global
variable takes precedence over the global variable, shadowing it.

It’s important to understand that a block (identified by a pair of curly


braces) does not define a new scope. A new scope is only created when a
function is created, because var has not block scope, but function
scope.
Inside a function, any variable defined in it is visible throughout all
the function code: even if the variable is declared at the end of the
function, it can still be referenced at the beginning, because before
executing the code JavaScript actually moves all variable declarations to
the top (something that is called hoisting). To avoid confusion, always
declare variables at the beginning of a function.
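
A minimal sketch of hoisting — the declaration moves up, the assignment
does not:

function hoisting() {
  console.log(a) // undefined, not a ReferenceError: the declaration was hoisted
  var a = 1
  console.log(a) // 1
}
hoisting()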

Using let
let is a new feature introduced in ES2015 and it’s essentially a block
scoped version of var . Its scope is limited to the block, statement or
expression where it’s defined, and all the contained inner blocks.

Modern JavaScript developers might choose to only use let and completely
discard the use of var .

If let seems an obscure term, just read let color = 'red' as
let the color be red and it all makes much more sense

Defining let outside of any function - contrary to var - does not
create a global variable.

Using const
Variables declared with var or let can be changed later on in the
program, and reassigned. Once a const is initialized, its value can
never be changed again, and it can't be reassigned to a different value.

const a = 'test'

We can’t assign a different literal to the a const. We can however mutate


a if it’s an object that provides methods that mutate its contents.

const does not provide immutability, just makes sure that the reference
can’t be changed.
const has block scope, same as let .
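
A minimal sketch of this distinction:

const a = ['one']
a.push('two')  // fine: the array contents are mutated
console.log(a) // ['one', 'two']
a = ['three']  // TypeError: Assignment to constant variable.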

Modern JavaScript developers might choose to always use const for
variables that don't need to be reassigned later in the program.

Why? Because we should always use the simplest construct available to
avoid making errors down the road.
JAVASCRIPT TYPES
You might sometimes read that JS is untyped, but that's incorrect. It's
true that you can assign all sorts of different types to a variable,
but JavaScript has types. In particular, it provides primitive types,
and object types.

Primitive types
Numbers
Strings
Template strings
Booleans
null
undefined
Object types

Primitive types
Primitive types are

Numbers
Strings
Booleans

And two special types:

null
undefined

Let’s see them in details in the next sections.

Numbers
Internally, JavaScript has just one type for numbers: every number is a
float.
A numeric literal is a number represented in the source code, and
depending on how it's written, it can be an integer literal or a floating
point literal.

Integers:

10
5354576767321
0xCC //hex

Floats:

3.14
.1234
5.2e4 //5.2 * 10^4

Strings
A string type is a sequence of characters. It's defined in the source code
as a string literal, which is enclosed in single or double quotes

'A string'
"Another string"

Strings can span across multiple lines by using the backslash

"A \
string"

A string can contain escape sequences that can be interpreted when the
string is printed, like \n to create a new line. The backslash is also
useful when you need to enter for example a quote in a string enclosed in
quotes, to prevent the char from being interpreted as a closing quote:

'I\'m a developer'

Strings can be joined using the + operator:


"A " + "string"

Template strings

Introduced in ES2015, template strings are string literals that allow a
more powerful way to define strings.

`a string`

You can perform string substitution, embedding the result of any JS
expression:

`a string with ${something}`
`a string with ${something+somethingElse}`
`a string with ${obj.something()}`

You can have multiline strings easily:

`a string
with
${something}`

Booleans
JavaScript defines two reserved words for booleans: true and false. Many
comparison operations == === < > (and so on) return either one or the
other.

if , while statements and other control structures use booleans to
determine the flow of the program.

They don’t just accept true or false, but also accept truthy and falsy
values.

Falsy values, values interpreted as false, are:

0
-0
NaN
undefined
null
'' //empty string

All the rest is considered a truthy value.
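
A minimal sketch:

if ('') console.log('never printed')    // '' is falsy
if ('hi') console.log('always printed') // any non-empty string is truthy
if (0) console.log('never printed')     // 0 is falsy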

null
null is a special value that indicates the absence of a value.

It’s a common concept in other languages as well, can be known as nil or


None in Python for example.

undefined
undefined indicates that a variable has not been initialized and the
value is absent.

It’s commonly returned by functions with no return value. When a


function accepts a parameter but that’s not set by the caller, it’s
undefined.

To detect if a value is undefined , you use the construct:

typeof variable === 'undefined'

Object types
Anything that’s not a primitive type is an object type.

Functions, arrays and what we call objects are object types. They are
special on their own, but they inherit many properties of objects, like
having properties and also having methods that can act on those
properties.
JAVASCRIPT EXPRESSIONS
Expressions are units of code that can be evaluated and resolve to a
value. Expressions in JS can be divided in categories.

Arithmetic expressions
String expressions
Primary expressions
Array and object initializers expressions
Logical expressions
Left-hand-side expressions
Property access expressions
Object creation expressions
Function definition expressions
Invocation expressions

Arithmetic expressions
Under this category go all expressions that evaluate to a number:

1 / 2
i++
i -= 2
i * 2

String expressions
Expressions that evaluate to a string:

'A ' + 'string'
str += 'string' //where str is a string variable

Primary expressions
Under this category go variable references, literals and constants:
2
0.02
'something'
true
false
this //the current object
undefined
i //where i is a variable or a constant

but also some language keywords:

function
class
function* //the generator function
yield //the generator pauser/resumer
yield* //delegate to another generator or iterator
async function* //async function expression
await //async function pause/resume/wait for completion
/pattern/i //regex
() // grouping

Array and object initializers expressions


[] //array literal
{} //object literal
[1,2,3]
{a: 1, b: 2}
{a: {b: 1}}

Logical expressions
Logical expressions make use of logical operators and resolve to a boolean
value:

a && b
a || b
!a

Left-hand-side expressions
new //create an instance of a constructor
super //calls the parent constructor
...obj //expression using the spread operator

Property access expressions
object.property //reference a property (or method) of an object
object[property]
object['property']

Object creation expressions


new object()
new a(1)
new MyRectangle('name', 2, {a: 4})

Function definition expressions


function() {}
function(a, b) { return a * b }
(a, b) => a * b
a => a * 2
() => { return 2 }

Invocation expressions
The syntax for calling a function or method

a.x(2)
window.resize()
JAVASCRIPT ARRAYS
Common array operations using every feature available up to ES7

Initialize array
Get length of the array
Iterating the array
Every
Some
Iterate the array and return a new one with the returned result
of a function
Filter an array
Reduce
forEach
for..of
for
@@iterator
Adding to an array
Add at the end
Add at the beginning
Removing an item from an array
From the end
From the beginning
At a random position
Remove and insert in place
Join multiple arrays
Lookup the array for a specific element
Get a portion of an array
Sort the array
Get a string representation of an array
Copy an existing array by value
Copy just some values from an existing array
Copy portions of an array into the array itself, in other positions

JavaScript arrays gained more and more features over time, and sometimes
it's tricky to know when to use one construct versus another. This post
aims to explain what you should use, as of 2017.

Initialize array
const a = []
const a = [1,2,3]
const a = Array.of(1,2,3)
const a = Array(6).fill(1); //init an array of 6 items of value 1

Don’t use the old syntax (just use it for typed arrays)

const a = new Array() //never use
const a = new Array(1,2,3) //never use

Get length of the array


const l = a.length

Iterating the array

Every

a.every(f)

Iterates a until f() returns false

Some

a.some(f)

Iterates a until f() returns true

Iterate the array and return a new one with the returned result
of a function
const b = a.map(f)

Iterates a and builds a new array with the result of executing f() on
each a element

Filter an array

const b = a.filter(f)

Iterates a and builds a new array with elements of a that returned true
when executing f() on each a element

Reduce

a.reduce((accumulator, currentValue, currentIndex, array) => {
  //...
}, initialValue)

reduce() executes a callback function on all the items of the array and
allows to progressively compute a result. If initialValue is specified,
accumulator in the first iteration will equal that value.

Example:

[1,2,3,4].reduce((accumulator, currentValue, currentIndex, array) => {
  return accumulator * currentValue
}, 1)

// iteration 1: 1 * 1 => return 1
// iteration 2: 1 * 2 => return 2
// iteration 3: 2 * 3 => return 6
// iteration 4: 6 * 4 => return 24

// return value is 24

forEach

ES5
a.forEach(f)

Runs f on every element of a , without a way to stop

Example:

a.forEach(v => {
console.log(v)
})

for..of

ES6

for (let v of a) {
console.log(v);
}

for

for (let i = 0; i < a.length; i += 1) {
  //a[i]
}

Iterates a , can be stopped using return or break and an iteration can
be skipped using continue

@@iterator

ES6

Getting the iterator from an array returns an iterator of values

const a = [1, 2, 3]
let it = a[Symbol.iterator]();

console.log(it.next().value); //1
console.log(it.next().value); //2
console.log(it.next().value); //3

.entries() returns an iterator of key/value pairs

let it = a.entries();

console.log(it.next().value); //[0, 1]
console.log(it.next().value); //[1, 2]
console.log(it.next().value); //[2, 3]

.keys() allows to iterate on the keys:

let it = a.keys();

console.log(it.next().value); //0
console.log(it.next().value); //1
console.log(it.next().value); //2

.next() returns undefined as the value when the array ends. You can also
detect if the iteration ended by looking at the { value, done } pair
returned by it.next() : done is false for every element, and becomes
true once the iteration is complete.

Adding to an array

Add at the end

a.push(4)

Add at the beginning

a.unshift(0)
a.unshift(-2, -1)

Removing an item from an array


From the end

a.pop()

From the beginning

a.shift()

At a random position

a.splice(0, 2); // remove the first 2 items
a.splice(3, 2); // remove the 2 items starting from index 3

Do not use the delete keyword for this, as it leaves behind undefined values.

Remove and insert in place

a.splice(2, 3, 'a', 'b'); //removes 3 items starting from
                          //index 2, and adds 2 items,
                          //still starting from index 2

Join multiple arrays


const a = [1, 2]
const b = [3, 4]
a.concat(b) // 1, 2, 3, 4

Lookup the array for a specific element


a.indexOf(value)

Returns the index of the first matching item found, or -1 if not found

a.lastIndexOf(value)
Returns the index of the last matching item found, or -1 if not found

ES6

a.find((element, index, array) => {
  //return true or false
})

Returns the first item that returns true. Returns undefined if not found.

a.findIndex((element, index, array) => {
  //return true or false
})

Returns the index of the first item that returns true. Returns -1 if not
found.

ES7

a.includes(value)

Returns true if a contains value .

a.includes(value, i)

Returns true if a contains value after the position i .

Get a portion of an array


a.slice(start, end) //end index not included

Sort the array


Sort alphabetically (by ASCII value: 0-9A-Za-z ):

const a = [1,2,3,10,11]
a.sort() //1, 10, 11, 2, 3

const b = [1,'a','Z',3,2,11]
b.sort() //1, 11, 2, 3, Z, a

Sort by a custom function

const a = [1,10,3,2,11]
a.sort((a, b) => a - b) //1, 2, 3, 10, 11

Reverse the order of an array

a.reverse()

Get a string representation of an array


a.toString()

Returns a string representation of an array

a.join()

Returns a string concatenation of the array elements. Pass a parameter to
add a custom separator:

a.join(', ')

Copy an existing array by value


const b = Array.from(a)
const b = Array.of(...a)

Copy just some values from an existing array


const b = a.filter(x => x % 2 === 0) //keeps only the even numbers
Copy portions of an array into the array itself,
in other positions
const a = [1, 2, 3, 4]
a.copyWithin(0,2) // [3, 4, 3, 4]
const b = [1, 2, 3, 4, 5]
b.copyWithin(0,2) // [3, 4, 5, 4, 5]
//0 is where to start copying into,
// 2 is where to start copying from
const c = [1, 2, 3, 4, 5]
c.copyWithin(0, 2, 4) // [3, 4, 3, 4, 5]
//4 is the end index (not included)
PROMISES
Promises are one way to deal with asynchronous code in JavaScript,
without writing too many callbacks in your code.

Introduction to promises
How promises work, in brief
Which JS APIs use promises?
Creating a promise
Consuming a promise
Chaining promises
Example of chaining promises
Handling errors
Cascading errors
Orchestrating promises
Promise.all()
Promise.race()

Introduction to promises
A promise is commonly defined as a proxy for a value that will eventually
become available.

Promises are one way to deal with asynchronous code, without writing too
many callbacks in your code.

Although they have been around for years, they were standardized and
introduced with ES2015, and have now been superseded in ES2017 by async
functions.

Async functions use the promises API as their building block, so
understanding them is fundamental even if in newer code you'll likely use
async functions instead of promises.
How promises work, in brief

Once a promise has been called, it will start in pending state. This means
that the caller function continues the execution, while it waits for the
promise to do its own processing, and give the caller function some
feedback.

At this point the caller function waits for it to either return the
promise in a resolved state, or in a rejected state, but as you know
JavaScript is asynchronous, so the function continues its execution while
the promise does its work.

Which JS APIs use promises?

In addition to your own code, and libraries code, promises are used by
standard modern Web APIs such as:

the Battery API
the Fetch API
Service Workers

It’s unlikely that in modern JavaScript you’ll find yourself not using
promises, so let’s start diving right into them.

Creating a promise
The Promise API exposes a Promise constructor, which you initialize using
new Promise() :

let done = true

const isItDoneYet = new Promise(
  (resolve, reject) => {
    if (done) {
      const workDone = 'Here is the thing I built'
      resolve(workDone)
    } else {
      const why = 'Still working on something else'
      reject(why)
    }
  }
)

As you can see the promise checks the done global constant, and if
that’s true, we return a resolved promise, otherwise a rejected promise.

Using resolve and reject we can communicate back a value, in the above
case we just return a string, but it could be an object as well.

Consuming a promise
In the last section we introduced how a promise is created.

Now let’s see how the promise can be consumed, or used.

const isItDoneYet = new Promise(
  //...
)

const checkIfItsDone = () => {
  isItDoneYet
    .then((ok) => {
      console.log(ok)
    })
    .catch((err) => {
      console.error(err)
    })
}

Running checkIfItsDone() will execute the isItDoneYet promise
and will wait for it to resolve, using the then callback, and if there
is any error, it will handle it in the catch callback.

Chaining promises
A promise can return another promise, creating a chain of
promises.

A great example of chaining promises is given by the Fetch API, a layer on
top of the XMLHttpRequest API, which we can use to get a resource and
queue a chain of promises to execute when the resource is fetched.

The Fetch API is a promise-based mechanism, and calling fetch() is
equivalent to defining our own promise using new Promise() .

Example of chaining promises

const status = (response) => {
  if (response.status >= 200 && response.status < 300) {
    return Promise.resolve(response)
  }
  return Promise.reject(new Error(response.statusText))
}

const json = (response) => response.json()

fetch('/todos.json')
.then(status)
.then(json)
.then((data) => { console.log('Request succeeded with JSON response', data) })
.catch((error) => { console.log('Request failed', error) })

In this example, we call fetch() to get a list of TODO items from the
todos.json file found in the domain root, and we create a chain of
promises.

Running fetch() returns a response
(https://fetch.spec.whatwg.org/#concept-response) , which has many
properties, and within those we reference:

status , a numeric value representing the HTTP status code
statusText , a status message, which is OK if the request succeeded

response also has a json() method, which returns a promise that will
resolve with the content of the body processed and transformed as JSON.
So given those premises, this is what happens: the first promise in the
chain is a function that we defined, called status() , that checks the
response status and if it’s not a success response (between 200 and 299),
it rejects the promise.

This operation will cause the promise chain to skip all the chained
promises listed and will skip directly to the catch() statement at the
bottom, logging the Request failed text along with the error message.

If that succeeds instead, it calls the json() function we defined. Since
the previous promise, when successful, returned the response object, we
get it as an input to the second promise.

In this case we return the data JSON processed, so the third promise
receives the JSON directly:

.then((data) => {
console.log('Request succeeded with JSON response', data)
})

and we simply log it to the console.

Handling errors
In the example in the previous section we had a catch that was appended
to the chain of promises.

When anything in the chain of promises fails and raises an error or
rejects the promise, the control goes to the nearest catch() statement
down the chain.

new Promise((resolve, reject) => {
  throw new Error('Error')
})
.catch((err) => { console.error(err) })

// or
new Promise((resolve, reject) => {
reject('Error')
})
.catch((err) => { console.error(err) })

Cascading errors

If inside the catch() you raise an error, you can append a second
catch() to handle it, and so on.

new Promise((resolve, reject) => {
  throw new Error('Error')
})
.catch((err) => { throw new Error('Error') })
.catch((err) => { console.error(err) })

Orchestrating promises

Promise.all()

If you need to synchronize different promises, Promise.all() helps you
define a list of promises, and execute something when they are all
resolved.

Example:

const f1 = fetch('/something.json')
const f2 = fetch('/something2.json')

Promise.all([f1, f2]).then((res) => {
  console.log('Array of results', res)
})
.catch((err) => {
  console.error(err)
})

The ES2015 destructuring assignment syntax allows you to also do
console.log('Results', res1, res2)
})

You are not limited to using fetch of course, any promise is good to go.

Promise.race()

Promise.race() runs as soon as the first of the promises you pass to it
resolves, and it runs the attached callback just once, with the result of
that first resolved promise.

Example:

const f1 = fetch('/something.json')
const f2 = fetch('/something2.json')

Promise.race([f1, f2]).then((res) => {
  console.log(res)
})
.catch((err) => {
  console.error(err)
})
JAVASCRIPT TEMPLATE LITERALS
Introduced in ES2015, aka ES6, Template Literals offer a new way to
declare strings, but also some new interesting constructs which are
already widely popular.

Introduction to Template Literals


Multiline strings
Interpolation
Template tags

Introduction to Template Literals


Template Literals are a new ES2015 / ES6 feature that allows you to work
with strings in a novel way compared to ES5 and below.

The syntax at a first glance is very simple, just use backticks instead of
single or double quotes:

const a_string = `something`

They are unique because they provide a lot of features that normal strings
built with quotes do not, in particular:

they offer a great syntax to define multiline strings
they provide an easy way to interpolate variables and expressions in
strings
they allow you to create DSLs with template tags

Let’s dive into each of these in details.

Multiline strings
Pre-ES6, to create a string spanned over two lines you had to use the \
character at the end of a line:
const string = 'first part \
second part'

This allows you to create a string on 2 lines, but it's rendered on just
one line:

first part second part

To render the string on multiple lines as well, you explicitly need to add
\n at the end of each line, like this:

const string = 'first line\n \
second line'

or

const string = 'first line\n' +
  'second line'

Template literals make multiline strings much simpler.

Once a template literal is opened with the backtick, you just press enter
to create a new line, with no special characters, and it’s rendered as-is:

const string = `Hey
this
string
is awesome!`

Keep in mind that space is meaningful, so doing this:

const string = `First
                Second`

is going to create a string like this:

First
                Second
An easy way to fix this problem is by having an empty first line, and
appending the trim() method right after the closing backtick, which will
eliminate any space before the first character:

const string = `
First
Second`.trim()

Interpolation
Template literals provide an easy way to interpolate variables and
expressions into strings.

You do so by using the ${...} syntax:

const aVariable = 'test'
const string = `something ${aVariable}` //something test

inside the ${} you can add anything, even expressions:

const string = `something ${1 + 2 + 3}`
const string2 = `something ${foo() ? 'x' : 'y' }`

Template tags
Tagged templates is one feature that might sound less useful to you at
first, but it's actually used by lots of popular libraries, like
Styled Components or Apollo, the GraphQL client/server lib, so it's
essential to understand how it works.

In Styled Components template tags are used to define CSS strings:

const Button = styled.button`
  font-size: 1.5em;
  background-color: black;
  color: white;
`;
In Apollo template tags are used to define a GraphQL query schema:

const query = gql`
  query {
    ...
  }
`

The styled.button and gql template tags highlighted in those examples
are just functions:

function gql(literals, ...expressions) {

this function returns a string, which can be the result of any kind of
computation.

literals is an array containing the template literal content tokenized
by the expressions interpolations.

expressions contains all the interpolations.

If we take an example above:

const string = `something ${1 + 2 + 3}`

literals is an array with two items. The first is something , the
string until the first interpolation, and the second is an empty string,
the space between the end of the first interpolation (we only have one)
and the end of the string.

expressions in this case is an array with a single item, 6 .

A more complex example is:

const string = `something
another ${'x'}
new line ${1 + 2 + 3}
test`
in this case literals is an array where the first item is:

`something
another `

the second is:

`
new line `

and the third is:

`
test`

expressions in this case is an array with two items, x and 6 .

The function that is passed those values can do anything with them, and
this is the power of this kind of feature.

The most simple example is replicating what the string interpolation does,
by simply joining literals and expressions :

const interpolated = interpolate`I paid ${10}€`

and this is how interpolate works:

function interpolate(literals, ...expressions) {
  let string = ``
  for (const [i, val] of expressions.entries()) {
    string += literals[i] + val
  }
  string += literals[literals.length - 1]
  return string
}
THE SET JAVASCRIPT DATA STRUCTURE
Discover the Set data structure introduced in ES6

What is a Set
Initialize a Set
Add items to a Set
Check if an item is in the set
Delete an item from a Set by key
Determine the number of items in a Set
Delete all items from a Set
Iterate the items in a Set
Initialize a Set with values
Convert to array
Convert the Set keys into an array
A WeakSet

What is a Set
A Set data structure allows you to add data to a container.

ECMAScript 6 (also called ES2015) introduced the Set data structure to the
JavaScript world, along with Map

A Set is a collection of objects or primitive types (strings, numbers or
booleans), and you can think of it as a Map where values are used as map
keys, with the map value always being a boolean true.

Initialize a Set
A Set is initialized by calling:

const s = new Set()


Add items to a Set

You can add items to the Set by using the add method:

s.add('one')
s.add('two')

A set only stores unique elements, so calling s.add('one') multiple
times won't add new items.

Check if an item is in the set

Once an element is in the set, we can check if the set contains it:

s.has('one') //true
s.has('three') //false

Delete an item from a Set by key

Use the delete() method:

s.delete('one')

Determine the number of items in a Set

Use the size property:

s.size

Delete all items from a Set

Use the clear() method:

s.clear()
Iterate the items in a Set

Use the keys() or values() methods - they are equivalent:

for (const k of s.keys()) {
  console.log(k)
}

for (const k of s.values()) {
  console.log(k)
}

The entries() method returns an iterator, which you can use like this:

const i = s.entries()
console.log(i.next())

calling i.next() will return each element as a { value, done: false }
object until the iterator ends, at which point done is true .

You can also use the forEach() method on the set:

s.forEach((v) => console.log(v))

or you can just use the set in a for..of loop:

for (const k of s) {
console.log(k)
}

Initialize a Set with values


You can initialize a Set with a set of values:

const s = new Set([1, 2, 3, 4])

Convert to array
Convert the Set keys into an array

const a = [...s.keys()]

// or

const a = [...s.values()]
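
Since a Set only stores unique values, a handy trick (a minimal sketch) is
to use it to remove duplicates from an array:

const numbers = [1, 1, 2, 3, 3]
const unique = [...new Set(numbers)]
// unique = [1, 2, 3]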

A WeakSet
A WeakSet is a special kind of Set.

In a Set, items are never garbage collected. A WeakSet instead lets all
its items be freely garbage collected. Every key of a WeakSet is an
object. When the reference to this object is lost, the value can be
garbage collected.

Here are the main differences:

1. you cannot iterate over the WeakSet
2. you cannot clear all items from a WeakSet
3. you cannot check its size

A WeakSet is generally used by framework-level code, and only exposes
these methods:

add()
has()
delete()
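
A minimal usage sketch: keys must be objects, and the WeakSet will not keep
them alive for the garbage collector.

const ws = new WeakSet()
const obj = {}

ws.add(obj)
ws.has(obj) //true
ws.delete(obj)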
THE MAP JAVASCRIPT DATA STRUCTURE
Discover the Map data structure introduced in ES6

What is a Map
Before ES6
Enter Map
Add items to a Map
Get an item from a map by key
Delete an item from a map by key
Delete all items from a map
Check if a map contains an item by key
Find the number of items in a map
Initialize a map with values
Map keys
Weird situations you’ll almost never find in real life
Iterating over a map
Iterate over map keys
Iterate over map values
Iterate over map key, value pairs
Convert to array
Convert the map keys into an array
Convert the map values into an array
WeakMap

What is a Map
A Map data structure allows you to associate data to a key.

Before ES6
ECMAScript 6 (also called ES2015) introduced the Map data structure to the
JavaScript world, along with Set.

Before its introduction, people generally (ab)used objects as maps, by
associating some object or value to a specific key value:

const car = {}
car['color'] = 'red'
car.owner = 'Flavio'
console.log(car['color']) //red
console.log(car.color) //red
console.log(car.owner) //Flavio
console.log(car['owner']) //Flavio

Enter Map
ES6 introduced the Map data structure, providing us a proper tool to
handle this kind of data organization.

A Map is initialized by calling:

const m = new Map()

Add items to a Map

You can add items to the map by using the set method:

m.set('color', 'red')
m.set('age', 2)

Get an item from a map by key

And you can get items out of a map by using get :

const color = m.get('color')
const age = m.get('age')

Delete an item from a map by key

Use the delete() method:


m.delete('color')

Delete all items from a map

Use the clear() method:

m.clear()

Check if a map contains an item by key

Use the has() method:

const hasColor = m.has('color')

Find the number of items in a map

Use the size property:

const size = m.size

Initialize a map with values


You can initialize a map with a set of values:

const m = new Map([
  ['color', 'red'],
  ['owner', 'Flavio'],
  ['age', 2],
])

Map keys
Just like any value (object, array, string, number) can be used as the
value of the key-value entry of a map item, any value can be used as the
key, even objects.
If you try to get a non-existing key using get() out of a map, it will
return undefined .
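
For example, here is an object used as a key. Note that map lookup is by
reference, which is also why a different but identical-looking object does
not match:

const m = new Map()
const key = { name: 'Flavio' }

m.set(key, 'some value')

m.get(key) // 'some value'
m.get({ name: 'Flavio' }) // undefined, a different object reference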

Weird situations you'll almost never find in real life
const m = new Map()
m.set(NaN, 'test')
m.get(NaN) //test

const m = new Map()
m.set(+0, 'test')
m.get(-0) //test

Iterating over a map

Iterate over map keys

Map offers the keys() method we can use to iterate on all the keys:

for (const k of m.keys()) {
  console.log(k)
}

Iterate over map values

Map offers the values() method we can use to iterate on all the values:

for (const v of m.values()) {
  console.log(v)
}

Iterate over map key, value pairs

Map offers the entries() method we can use to iterate on all the key-value pairs:

for (const [k, v] of m.entries()) {
console.log(k, v)
}

which can be simplified to

for (const [k, v] of m) {
  console.log(k, v)
}

Convert to array

Convert the map keys into an array

const a = [...m.keys()]

Convert the map values into an array

const a = [...m.values()]
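
Since the map itself is iterable, you can also get an array of [key, value]
pairs:

const pairs = [...m]

// or

const pairs = [...m.entries()]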

WeakMap
A WeakMap is a special kind of map.

In a Map, items are never garbage collected. A WeakMap instead lets all
its items be freely garbage collected. Every key of a WeakMap is an
object. When the reference to this object is lost, the value can be
garbage collected.

Here are the main differences:

1. you cannot iterate over the keys or values (or key-value pairs) of a
WeakMap
2. you cannot clear all items from a WeakMap
3. you cannot check its size
A WeakMap exposes those methods, which are equivalent to the Map ones:

get(k)
set(k, v)
has(k)
delete(k)

The use cases of a WeakMap are less evident than the ones of a Map, and
you might never find the need for them, but essentially it can be used to
build a memory-sensitive cache that is not going to interfere with garbage
collection, or for careful encapsulation and information hiding.
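
As a hedged sketch of the information hiding use case, a module-level
WeakMap can hold private data for class instances; when an instance is
garbage collected, its entry goes away with it:

const privateData = new WeakMap()

class Person {
  constructor(name) {
    // the instance itself is the key, the private data is the value
    privateData.set(this, { name })
  }
  getName() {
    return privateData.get(this).name
  }
}

const p = new Person('Flavio')
p.getName() // 'Flavio'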
JAVASCRIPT LOOPS AND SCOPE
Learn some tricks about JavaScript loops and scoping with var and let

There is one feature of JavaScript that might cause a few headaches to
developers, related to loops and scoping.

Take this example:

const operations = []

for (var i = 0; i < 5; i++) {
  operations.push(() => {
    console.log(i)
  })
}

for (const operation of operations) {
  operation()
}

It iterates 5 times, and each time it adds to the operations array a
function that simply logs the loop index variable i .

Later it runs these functions.

The expected result here should be:

0
1
2
3
4

but actually what happens is this:

5
5
5
5
5
Why is this the case? Because of the use of var .

Since var declarations are hoisted, the above code is equivalent to:

var i;
const operations = []

for (i = 0; i < 5; i++) {
  operations.push(() => {
    console.log(i)
  })
}

for (const operation of operations) {
  operation()
}

So, in the for-of loop, i is still visible: it's equal to 5 , and every
reference to i in the functions is going to use this value.

So what should we do to make things work as we want?

The simplest solution is to use let declarations. Introduced in ES2015,
they are a great help in avoiding some of the weird things about var
declarations.

Simply changing var to let in the loop variable is going to work fine:

const operations = []

for (let i = 0; i < 5; i++) {
  operations.push(() => {
    console.log(i)
  })
}

for (const operation of operations) {
  operation()
}

Here’s the output:

0
1
2
3
4
How is this possible? This works because on every loop iteration i is
created as a new variable each time, and every function added to the
operations array gets its own copy of i .

Keep in mind you cannot use const in this case, because there would be
an error as for tries to assign a new value in the second iteration.

Another way to solve this problem, very common in pre-ES6 code, is to use
an Immediately Invoked Function Expression (IIFE).

In this case you wrap the whole function in an IIFE and bind i to it:
since the wrapping function executes immediately, it returns a new
function, which we can execute later:

const operations = []

for (var i = 0; i < 5; i++) {
  operations.push(((j) => {
    return () => console.log(j)
  })(i))
}

for (const operation of operations) {
  operation()
}
MODERN ASYNCHRONOUS JAVASCRIPT
WITH ASYNC AND AWAIT

Introduction
Why was async/await introduced?
How it works
A quick example
Promise all the things
The code is much simpler to read
Multiple async functions in series

Introduction
JavaScript evolved in a very short time from callbacks to promises
(ES2015), and since ES2017 asynchronous JavaScript is even simpler with
the async/await syntax.

Async functions are a combination of promises and generators, and
basically they are a higher level abstraction over promises. Let me
repeat: async/await is built on promises.

Why was async/await introduced?


They reduce the boilerplate around promises, and the “don’t break the
chain” limitation of chaining promises.

When Promises were introduced in ES2015, they were meant to solve a
problem with asynchronous code, and they did, but over the 2 years that
separated ES2015 and ES2017, it was clear that promises could not be the
final solution.
Promises were introduced to solve the famous callback hell problem, but
they introduced complexity on their own, and syntax complexity.

They were good primitives around which a better syntax could be exposed to
the developers, so when the time was right we got async functions.

They make the code look like it's synchronous, but it's asynchronous and
non-blocking behind the scenes.

How it works
An async function returns a promise, like in this example:

function doSomethingAsync() {
return new Promise((resolve) => {
setTimeout(() => resolve('I did something'), 3000)
})
}

When you want to call this function you prepend await , and the calling
code will stop until the promise is resolved or rejected. One caveat: the
client function must be defined as async . Here’s an example:

async function doSomething() {
  console.log(await doSomethingAsync())
}
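
Since await waits until the promise is resolved or rejected, a rejected
promise throws inside the async function, so a plain try/catch handles
errors (a minimal sketch):

async function doSomething() {
  try {
    console.log(await doSomethingAsync())
  } catch (err) {
    // runs if the awaited promise rejects
    console.error(err)
  }
}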

A quick example
This is a simple example of async/await used to run a function
asynchronously:

function doSomethingAsync() {
return new Promise((resolve) => {
setTimeout(() => resolve('I did something'), 3000)
})
}

async function doSomething() {
  console.log(await doSomethingAsync())
}

console.log('Before')
doSomething()
console.log('After')

The above code will print the following to the browser console:

Before
After
I did something //after 3s

Promise all the things


Prepending the async keyword to any function means that the function
will return a promise.

Even if it’s not doing so explicitly, it will internally make it return a


promise.

This is why this code is valid:

async function aFunction() {
  return 'test';
}

aFunction().then(alert); // This will alert 'test'

and it’s the same as:

async function aFunction() {
  return Promise.resolve('test');
}

aFunction().then(alert); // This will alert 'test'

The code is much simpler to read


As you can see in the example above, our code looks very simple. Compare
it to code using plain promises, with chaining and callback functions.

And this is a very simple example; the major benefits will arise when the
code is much more complex.

For example here’s how you would get a JSON resource, and parse it, using
promises:

const getFirstUserData = () => {
  return fetch('/users.json') // get users list
    .then(response => response.json()) // parse JSON
    .then(users => users[0]) // pick first user
    .then(user => fetch(`/users/${user.name}`)) // get user data
    .then(userResponse => userResponse.json()) // parse JSON
}

getFirstUserData();

And here is the same functionality provided using await/async:

async function getFirstUserData() {
  const response = await fetch('/users.json') // get users list
  const users = await response.json() // parse JSON
  const user = users[0] // pick first user
  const userResponse = await fetch(`/users/${user.name}`) // get user data
  const userData = await userResponse.json() // parse JSON
  return userData
}

getFirstUserData();

Multiple async functions in series


Async functions can be chained very easily, and the syntax is much more
readable than with plain promises:

function promiseToDoSomething() {
return new Promise((resolve)=>{
setTimeout(() => resolve('I did something'), 10000)
})
}

async function watchOverSomeoneDoingSomething() {
  const something = await promiseToDoSomething()
  return something + ' and I watched'
}

async function watchOverSomeoneWatchingSomeoneDoingSomething() {
  const something = await watchOverSomeoneDoingSomething()
  return something + ' and I watched as well'
}

watchOverSomeoneWatchingSomeoneDoingSomething().then((res) => {
console.log(res)
})

Will print:

I did something and I watched and I watched as well
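
When the async operations don't depend on each other, they don't have to
run in series. A hedged sketch, assuming two independent promise-returning
functions fetchUsers and fetchPosts (hypothetical names), runs them in
parallel with Promise.all :

async function loadEverything() {
  // both operations start immediately; we wait for both to resolve
  const [users, posts] = await Promise.all([fetchUsers(), fetchPosts()])
  return { users, posts }
}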


FUNCTIONAL PROGRAMMING
Getting started with the main concepts of Functional Programming in the
JavaScript Programming Language

Introduction to Functional Programming


First class functions
They can be assigned to variables
They can be used as an argument to other functions
They can be returned by functions
Higher Order Functions
Declarative programming
Declarative vs Imperative
Immutability
const
Object.assign()
concat()
filter()
Purity
Data Transformations
Array.map()
Array.reduce()
Recursion
Composition
Composing in plain JS
Composing with the help of lodash

Introduction to Functional Programming


Functional Programming (FP) is a programming paradigm with some particular
techniques.

In programming languages, you’ll find purely functional programming


languages as well as programming languages that support functional
programming techniques.

Haskell, Clojure and Scala are some of the most popular functional
programming languages.

Popular programming languages that support functional programming
techniques are JavaScript, Python, Ruby and many others.

Functional Programming is not a new concept: its roots go back to the
1930s, when lambda calculus was born, and it has influenced many
programming languages.

FP has been gaining a lot of momentum lately, so it’s the perfect time to
learn about it.

In this course I’ll introduce the main concepts of Functional Programming,


by using in the code examples JavaScript.

First class functions


In a functional programming language, functions are first class citizens.

They can be assigned to variables

const f = (m) => console.log(m)
f('Test')

Since a function is assignable to a variable, functions can be added to
objects:

const obj = {
f(m) {
console.log(m)
}
}
obj.f('Test')

as well as to arrays:
const a = [
m => console.log(m)
]
a[0]('Test')

They can be used as an argument to other functions

const f = (m) => console.log(m)
const f2 = (f3) => f3()
f2(() => f('Test')) // pass a function, not the result of calling it

They can be returned by functions

const createF = () => {
  return (m) => console.log(m)
}

const f = createF()
f('Test')

Higher Order Functions


Functions that accept or return functions, as we just saw, are called
Higher Order Functions.

Examples in the JavaScript standard library include Array.map() and
Array.reduce() , which we'll see in a bit.

Declarative programming
You may have heard the term “declarative programming”.

Let’s put that term in context.

The opposite of declarative is imperative.

Declarative vs Imperative
An imperative approach is when you tell the machine (in general terms),
the steps it needs to take to get a job done.

A declarative approach is when you tell the machine what you need to do,
and you let it figure out the details.

You start thinking declaratively when you have a high enough level of
abstraction to stop reasoning about low level constructs, and think at a
higher level.

One might argue that C programming is more declarative than Assembly
programming, and that's true.

HTML is declarative, so if you've been using HTML since 1995, you've
actually been building declarative UIs for 20+ years.

JavaScript can take both an imperative and a declarative programming
approach.

For example, a declarative programming approach is to avoid using loops
and instead use functional programming constructs like map , reduce
and filter , because your programs are more abstract and less focused on
telling the machine each step of processing.

Immutability
In functional programming data never changes. Data is immutable.

A variable can never be changed. To update its value, you create a new
variable.

Instead of changing an array, to add a new item you create a new array by
concatenating the old array, plus the new item.

An object is never updated, but copied before changing it.


const

This is why the ES2015 const is so widely used in modern JavaScript,
which embraces functional programming concepts: to enforce immutability on
variable bindings (note that const prevents reassignment, it does not
freeze the contents of objects).

Object.assign()

ES2015 also gave us Object.assign() , which is key to creating copies of
objects without mutating the original:

const redObj = { color: 'red' }
const yellowObj = Object.assign({}, redObj, {color: 'yellow'})
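
An alternative, where supported, is the object spread syntax, which also
creates a brand new object:

const redObj = { color: 'red' }
const yellowObj = { ...redObj, color: 'yellow' }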

concat()

To append an item to an array in JavaScript we generally use the push()
method on an array, but that method mutates the original array, so it's
not FP-ready.

We instead use the concat() method:

const a = [1, 2]
const b = a.concat(3)
// b = [1, 2, 3]

or we use the spread operator:

const c = [...a, 3]
// c = [1, 2, 3]

filter()

The same goes for removing an item from an array: instead of using pop()
and splice() , which modify the original array, use array.filter() :

const d = a.filter((v, k) => k < 1)
// d = [1]

Purity
A pure function:

never changes any of the parameters that get passed to it by reference
(in JS, objects and arrays): they should be considered immutable. It
can of course change any parameter copied by value
the return value of a pure function is not influenced by anything else
than its input parameters: passing the same parameters always result
in the same output
during its execution, a pure function does not change anything outside
of it
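
A minimal sketch contrasting a pure function with an impure one:

// pure: the output depends only on the input, nothing outside is touched
const double = (x) => x * 2

// impure: it mutates the array it receives by reference
const addItem = (arr, item) => {
  arr.push(item)
  return arr
}

// a pure alternative returns a new array instead
const addItemPure = (arr, item) => [...arr, item]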

Data Transformations
Since immutability is such an important concept and a foundation of
functional programming, you might ask how can data change.

Simple: data is changed by creating copies.

Functions, in particular, change the data by returning new copies of data.

Core functions that do this are map and reduce.

Array.map()

Calling Array.map() on an array will create a new array with the result
of a function executed on every item of the original array:

const a = [1, 2, 3]
const b = a.map((v, k) => v * k)
// b = [0, 2, 6]

Array.reduce()
Calling Array.reduce() on an array allows us to transform that array
into anything else, including a scalar, a function, a boolean, an object.

You pass a function that processes the result, and a starting point:

const a = [1, 2, 3]
const sum = a.reduce((partial, v) => partial + v, 0)
// sum = 6

const o = a.reduce((obj, k) => { obj[k] = k; return obj }, {})
// o = {1: 1, 2: 2, 3: 3}

Recursion
Recursion is a key topic in functional programming. When a function calls
itself, it's called a recursive function.

The classic example of recursion is the Fibonacci sequence calculation
( fib(n) = fib(n-1) + fib(n-2) ), here in its totally inefficient O(2^n)
(but nice to read) solution:

var f = (n) => n <= 1 ? 1 : f(n-1) + f(n-2)

Composition
Composition is another key topic of Functional Programming, a good reason
to put it into the “key topics” list.

Composition is how we generate a higher order function, by combining
simpler functions.

Composing in plain JS

A very common way to compose functions in plain JavaScript is to chain
them:

obj.doSomething()
  .doSomethingElse()

or, also very widely used, by passing a function execution into a
function:

obj.doSomething(doThis())

Composing with the help of lodash

More generally, composing is the act of putting together a list of many
functions to perform a more complicated operation.

lodash/fp comes with an implementation of compose : we execute a list
of functions, starting with an argument; each function receives as its
argument the return value of the preceding one. Notice how we don't need
to store intermediate values anywhere.

import { compose, join, map, split, toLower } from 'lodash/fp'

const slugify = compose(
  encodeURIComponent,
  join('-'),
  map(toLower),
  split(' ')
)

slugify('Hello World') // hello-world


WEB PLATFORM
PROGRESSIVE WEB APPS
A Progressive Web App is an app that can provide additional features
based on the device support, including offline capabilities, push
notifications and almost native app look and speed, and local caching
of resources

Introduction
What is a Progressive Web App
Progressive Web Apps alternatives
Native Mobile Apps
Hybrid Apps
Apps built with React Native
Progressive Web Apps features
Features
Benefits
Core concepts
Service Workers
The App Manifest
Example
The App Shell
Caching

Introduction
Progressive Web Apps (PWA) are the latest trend in mobile application
development using web technologies; at the time of writing (early 2018)
they are only applicable to Android devices.

WebKit, the tech underlying Safari and Mobile Safari, has recently
(Aug 2017) declared they started working on introducing Service
Workers into the browser. This means that soon (sooner or later) they
will land in iOS devices as well, so the Progressive Web Apps concept
could as well be applicable to iPhones and iPads, if Apple decides to
encourage this approach.
Update Feb 2018: PWAs are coming to iOS 11.3 and macOS 10.13.4, very
soon.

It’s not a groundbreaking new technology, but rather a new term that
identifies a bundle of techniques that have the goal of creating a better
experience for web-based apps.

What is a Progressive Web App


A Progressive Web App is an app that can provide additional features based
on the device support, providing offline capability, push notifications
and almost native app look and speed, and local caching of resources.

This technique was originally introduced by Google in 2015, and proves to
bring many advantages to both the developer and the users.

Developers have access to building almost-first-class applications using a
web stack, which is always considerably easier and cheaper than building
native applications, especially when considering the implications of
building and maintaining cross-platform apps.

Devs can benefit from reduced installation friction, at a time when
having an app in the store does not actually bring anything in terms of
discoverability for 99.99% of the apps, and Google search can provide the
same benefits if not more.

A Progressive Web App is a website which is developed with certain
technologies that make the mobile experience much more pleasant than a
normal mobile-optimized website, to a point that it's almost working like
a native app, as it offers the following features:

Offline support
Loads fast
Is secure
Is capable of emitting push notifications
Has an immersive, full-screen user experience without the URL bar

Mobile platforms (Android at the time of writing, but it’s not technically
limited to that) offer an increasing support for Progressive Web Apps to
the point of asking the user to add the app to the home screen when they
detect a site a user is visiting is a PWA.

But first, a little clarification on the name. Progressive Web App can be
a confusing term, and a good definition is web apps that take advantage of
modern browsers features (like web workers and the web app manifest) to
let their mobile devices “upgrade” the app to the role of a first-class
citizen app.

Progressive Web Apps alternatives


How does a PWA compare to the alternatives when it comes to building a
mobile experience?

Let’s focus on the pros and cons of each, and let’s see where PWAs are a
good fit.

Native Mobile Apps

Native mobile apps are the most obvious way to build a mobile app.
Objective-C or Swift on iOS, Java / Kotlin on Android and C# on Windows
Phone.

Each platform has its own UI and UX conventions, and the native widgets
provide the experience that the user expects. They can be deployed and
distributed through the platform App Store.

The main pain point with native apps is that cross-platform development
requires learning, mastering and keeping up to date with many different
methodologies and best practices, so if for example you have a small team
or even you’re a solo developer building an app on 3 platforms, you need
to spend a lot of time learning the technology but also the environment,
manage different libraries, and use different workflows (for example,
iCloud only works on iOS devices, there’s no Android version).

Hybrid Apps

Hybrid applications are built using Web Technologies, but deployed to the
App Store. In the middle sits a framework or some way to package the
application so it’s possible to send it for review to the traditional App
Store.

Most common platforms are Phonegap, Xamarin, Ionic Framework, and many
others, and usually what you see on the page is a WebView that essentially
loads a local website.

The key aspect of Hybrid Apps is the write once, run anywhere concept, the
different platform code is generated at build time, and you’re building
apps using JavaScript, HTML and CSS, which is amazing, and the device
capabilities (microphone, camera, network, gps…) are exposed through
JavaScript APIs.

The bad part of building hybrid apps is that unless you do a great job,
you might settle on providing a common denominator, effectively creating
an app that’s sub-optimal on all platforms because the app is ignoring the
platform-specific human-computer interaction guidelines.

Also, performance for complex views might suffer.

Apps built with React Native

React Native exposes the native controls of the mobile device through a
JavaScript API, but you’re effectively creating a native application, not
embedding a website inside a WebView.
Their motto, to distinguish this approach from hybrid apps, is learn once,
write anywhere, meaning that the approach is the same across platforms,
but you’re going to create completely separate apps in order to provide a
great experience on each platform.

Performance is comparable to native apps, since what you build is
essentially a native app, which is distributed through the App Store.

Progressive Web Apps features


In the last section you saw the main competitors of Progressive Web Apps.
So how do PWAs stand compared to them, and what are their main features?

Remember, currently Progressive Web Apps are Android-only

Features

Progressive Web Apps have one thing that separates them completely from
the above approaches: they are not deployed to the app store.

This is a key advantage, since the app store is beneficial if you have the
reach and luck to be featured, which can make your app go viral, but
unless you're in the 0.001% you're not going to get many benefits from
having your little place on the App Store.

Progressive Web Apps are discoverable using Search Engines, and when a
user gets to your site which has PWAs capabilities, the browser in
combination with the device asks the user if they want to install the app
to the home screen. This is huge because regular SEO can apply to your
PWA, leading to much less reliance on paid acquisition.

Not being in the App Store means you don’t need the Apple or Google
approval to be in the users pockets, and you can release updates when you
want, without having to go through the standard approval process which is
typical of iOS apps.

PWAs are basically HTML5 applications / responsive websites on steroids,
with some key technologies, recently introduced, that make some of the
key features possible. If you remember, the original iPhone came without
the option to develop native apps, and developers were told to develop
HTML5 mobile apps that could be installed to the home screen, but the
tech back then was not ready for this.

Progressive Web Apps run offline.

The use of service workers allow the app to always have fresh content, and
download it in the background, and provide support for push notifications
to provide greater re-engagement opportunities.

Also, sharability makes for a much nicer experience for users that want to
share your app, as they just need a URL.

Benefits

So why should users and developers care about Progressive Web Apps?

1. PWAs are lighter. Native Apps can weigh 200MB or more, while a PWA
could be in the range of the KBs.
2. No native platform code
3. Lower cost of acquisition (it's much harder to convince a user to
install an app than to visit a website to get the first-time
experience)
4. Significantly less effort is needed to build and release updates
5. Much better support for deep links than regular app-store apps

Core concepts
Responsive: the UI adapts to the device screen size
App-like feel: it doesn’t feel like a website, but rather as an app as
much as possible
Offline support: it will use the device storage to provide offline
experience
Installable: the device browser prompts the user to install your app
Re-engaging: push notifications help users re-discover your app once
installed
Discoverable: search engines and SEO optimization can provide a lot
more users than the app store
Fresh: the app updates itself and the content once online
Safe: uses HTTPS
Progressive: it will work on any device, even older ones, even if with
fewer features (e.g. just as a website, not installable)
Linkable: easy to point to it, using URLs

Service Workers
Part of the Progressive Web App definition is that it must work offline.

Since the thing that allows the web app to work offline is the Service
Worker, this implies that Service Workers are a mandatory part of a
Progressive Web App.

WARNING: Service Workers are currently only supported by Chrome


(Desktop and Android), Firefox and Opera. See
http://caniuse.com/#feat=serviceworkers for updated data on the
support.
TIP: Don’t confuse Service Workers with Web Workers. They are a
completely different thing.

A Service Worker is a JavaScript file that acts as a middleman between the
web app and the network. Because of this it can provide cache services,
speed up the app rendering and improve the user experience.

Because of security reasons, only HTTPS sites can make use of Service
Workers, and this is part of the reasons why a Progressive Web App must be
served through HTTPS.
Service Workers are not available on the device the first time the user
visits the app. What happens is that on the first visit the service worker
is installed, and then subsequent visits to separate pages of the site
will use this Service Worker.

Check out the complete guide to Service Workers

The App Manifest


The App Manifest is a JSON file that you can use to provide the device
with information about your Progressive Web App.

You add a link to the manifest in the header of all your web site pages:

<link rel="manifest" href="/manifest.webmanifest">

This file will tell the device how to set:

The name and short name of the app


The icons locations, in various sizes
The starting URL, relative to the domain
The default orientation
The splash screen

Example

{
"name": "The Weather App",
"short_name": "Weather",
"description": "Progressive Web App Example",
"icons": [{
"src": "images/icons/icon-128x128.png",
"sizes": "128x128",
"type": "image/png"
}, {
"src": "images/icons/icon-144x144.png",
"sizes": "144x144",
"type": "image/png"
}, {
"src": "images/icons/icon-152x152.png",
"sizes": "152x152",
"type": "image/png"
}, {
"src": "images/icons/icon-192x192.png",
"sizes": "192x192",
"type": "image/png"
}, {
"src": "images/icons/icon-256x256.png",
"sizes": "256x256",
"type": "image/png"
}],
"start_url": "/index.html?utm_source=app_manifest",
"orientation": "portrait",
"display": "standalone",
"background_color": "#3E4EB8",
"theme_color": "#2F3BA2"
}

The App Manifest is a W3C Working Draft, reachable at
https://www.w3.org/TR/appmanifest/

The App Shell


The App Shell is not a technology but rather a design concept aimed at
loading and rendering the web app container first, and the actual content
shortly after, to give the user a nice app-like impression.

This is the equivalent of the Apple HIG (Human Interface Guidelines)
suggestion to use a splash screen that resembles the user interface, to
give a psychological hint that was found to lower the perception of the
app taking a long time to load.

Caching

The App Shell is cached separately from the contents, and it's set up so
that retrieving the shell building blocks from the cache takes very little
time.

Find out more on the App Shell at
https://developers.google.com/web/updates/2015/11/app-shell
SERVICE WORKERS
Service Workers are a key technology powering Progressive Web
Applications on the mobile web

Introduction to Service Workers


Background Processing
Offline Support
Precache assets during installation
Caching network requests
A Service Worker Lifecycle
Registration
Scope
Installation
Activation
Updating a Service Worker
Fetch Events
Background Sync
Push Events
A note about console logs:

Introduction to Service Workers


Service Workers are at the core of Progressive Web Apps, because they
allow caching of resources and push notifications, two of the main
distinguishing features that up to now set native apps apart.

A Service Worker is a programmable proxy between your web page and the
network, providing the ability to intercept and cache network requests,
effectively giving you the ability to create an offline-first experience
for your app.

It’s a special kind of web worker, a JavaScript file associated with a web
page which runs on a worker context, separate from the main thread, giving
the benefit of being non-blocking - so computations can be done without
sacrificing the UI responsiveness.

Being on a separate thread it has no DOM access, and no access to the
Local Storage API or the XHR API either, and it can only communicate
back to the main thread using the Channel Messaging API.

Service Workers cooperate with other recent Web APIs:

Promises
Fetch API
Cache API

And they are only available on HTTPS protocol pages, except for local
requests, which do not need a secure connection, for easier testing.

Background Processing
Service Workers run independently of the application they are associated
to, and they can receive messages when they are not active.

For example they can work:

when your mobile application is in the background, not active
when your mobile application is closed, so even not running in the
background
when the browser is closed, if the app is running in the browser

The main scenarios where Service Workers are very useful are:

they can be used as a caching layer to handle network requests, and
cache content to be used when offline
to allow push notifications

A Service Worker only runs when needed, and it’s stopped when not used.

Offline Support
Traditionally the offline experience for web apps has been very poor.
Without a network, often web mobile apps simply won’t work, while native
mobile apps have the ability to offer either a working version, or some
kind of nice message.

This is not a nice message, but it's what web pages look like in Chrome
without a network connection: an error page featuring the offline dinosaur.

Possibly the only nice thing about this is that you get to play a free
game by clicking the dinosaur, but it gets boring pretty quickly.

In the recent past the HTML5 AppCache already promised to allow web apps
to cache resources and work offline, but its lack of flexibility and
confusing behavior made it clear that it wasn’t good enough for the job,
failing its promises (and it’s been discontinued
(https://html.spec.whatwg.org/multipage/offline.html#offline) ).

Service Workers are the new standard for offline caching.


Which kind of caching is possible?

Precache assets during installation

Assets that are reused throughout the application, like images, CSS,
JavaScript files, can be installed the first time the app is opened.

This gives the base of what is called the App Shell architecture.

Caching network requests

Using the Fetch API we can edit the response coming from the server,
determining if the server is not reachable and providing a response from
the cache instead.

A Service Worker Lifecycle


A Service Worker goes through 3 steps to be fully working:

Registration
Installation
Activation

Registration

Registration tells the browser where the service worker is, and it starts
the installation in the background.

Example code to register a Service Worker placed in worker.js :

if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/worker.js')
      .then((registration) => {
        console.log('Service Worker registration completed with scope: ',
          registration.scope)
      }, (err) => {
        console.log('Service Worker registration failed', err)
      })
  })
} else {
  console.log('Service Workers not supported')
}

Even if this code is called multiple times, the browser will only perform
the registration if the service worker is new, not previously registered,
or if it has been updated.

Scope

The register() call also accepts a scope parameter, which is a path
that determines which part of your application can be controlled by the
service worker.

It defaults to all files and subfolders contained in the folder that
contains the service worker file, so if you put it in the root folder, it
will have control over the entire app. In a subfolder, it will only
control pages accessible under that route.

The example below registers the worker, by specifying the
/notifications/ folder scope.

navigator.serviceWorker.register('/worker.js', {
scope: '/notifications/'
})

The trailing / is important: in this case, the page /notifications
won't trigger the Service Worker, while if the scope was

{
scope: '/notifications'
}

it would have worked.

NOTE: The service worker cannot control paths above its own folder: if
its file is put under /notifications , it cannot control the / path
or any other path that is not under /notifications .
Installation

If the browser determines that a service worker is outdated or has never
been registered before, it will proceed to install it.

self.addEventListener('install', (event) => {
  //...
});

This is a good event to prepare the Service Worker to be used, by
initializing a cache and caching the App Shell and static assets using the
Cache API.
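
A minimal sketch of that idea (the cache name and the list of files are
hypothetical, pick the assets your app shell actually needs):

const CACHE_NAME = 'app-shell-v1'

self.addEventListener('install', (event) => {
  // keep the worker in the installing phase until the assets are cached
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll([
        '/',
        '/styles.css',
        '/app.js'
      ])
    )
  )
})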

Activation

The activation stage is the third step, once the service worker has been
successfully registered and installed.

At this point, the service worker will be able to work with new page
loads.

It cannot interact with pages already loaded, which means the service
worker is only useful on the second time the user interacts with the app,
or reloads one of the pages already open.

self.addEventListener('activate', (event) => {
  //...
});

A good use case for this event is to clean up old caches and things
associated with the old version, but unused in the new version of the
service worker.
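
A hedged sketch of that cleanup, reusing the CACHE_NAME constant assumed
in the install example above:

self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(
        keys
          .filter((key) => key !== CACHE_NAME) // keep only the current cache
          .map((key) => caches.delete(key))
      )
    )
  )
})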

Updating a Service Worker


To update a Service Worker you just need to change one byte in it, and
when the register code is run, it will be updated.

Once a Service Worker is updated, it won't become available until all
pages that were loaded with the old service worker attached are closed.

This ensures that nothing will break on the apps / pages already working.

Refreshing the page is not enough, as the old worker is still running and
it’s not been removed.

Fetch Events
A fetch event is fired when a resource is requested on the network.

This offers us the ability to look in the cache before making network
requests.

For example the snippet below uses the Cache API to check if the request
URL was already stored in the cached responses, and return the cached
response if this is the case. Otherwise, it executes the fetch request and
returns it.

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request)
      .then((response) => {
        if (response) { //entry found in cache
          return response
        }
        return fetch(event.request)
      })
  )
})

Background Sync
Background sync allows outgoing connections to be deferred until the user
has a working network connection.
This is key to ensure a user can use the app offline, and take actions on
it, and queue server-side updates for when there is a connection open,
instead of showing an endless spinning wheel trying to get a signal.

navigator.serviceWorker.ready.then((swRegistration) => {
return swRegistration.sync.register('event1')
});

This code listens for the event in the Service Worker:

self.addEventListener('sync', (event) => {
  if (event.tag == 'event1') {
    event.waitUntil(doSomething())
  }
})

doSomething() returns a promise. If it fails, another sync event will
be scheduled to retry automatically, until it succeeds.

This also allows an app to update data from the server as soon as there is
a working connection available.

Push Events
Service Workers enable web apps to provide native Push Notifications to
users.

Push and Notifications are actually two different concepts and
technologies, combined to provide what we know as Push Notifications.
Push provides the mechanism that allows a server to send information to a
service worker, and Notifications are the way service workers can show
information to the user.

Since Service Workers run even when the app is not running, they can
listen for push events coming, and either provide user notifications, or
update the state of the app.
Push events are initiated by a backend, through a browser push service,
like the one provided by Firebase.

Here is an example of how the service worker can listen for incoming push
events:

self.addEventListener('push', (event) => {
  console.log('Received a push event', event)

  const title = 'I got a message for you!'
  const options = {
    body: 'Here is the body of the message',
    icon: '/img/icon-192x192.png',
    tag: 'tag-for-this-notification',
  }

  event.waitUntil(
    self.registration.showNotification(title, options)
  )
})

A note about console logs:


If you have any console log statement ( console.log and friends) in the
Service Worker, make sure you turn on the Preserve log feature provided
by the Chrome Devtools, or equivalent.

Otherwise, since the service worker acts before the page is loaded, and
the console is cleared before loading the page, you won’t see any log in
the console.
FETCH API
The Fetch API has been standardized as a modern approach to
asynchronous network requests, and uses Promises as a building block

Introduction to the Fetch API


Using Fetch
Catching errors
Response Object
Metadata
headers
status
statusText
url
Body content
Request Object
Request headers
POST Requests

Introduction to the Fetch API


Since IE5 was released in 1998, we’ve had the option to make asynchronous
network calls in the browser using XMLHttpRequest (XHR).

Quite a few years after this, GMail and other rich apps made heavy use of
it, and made the approach so popular that it had to have a name: AJAX.

Working directly with the XMLHttpRequest has always been a pain, and it
was almost always abstracted by some library; in particular, jQuery has
its own helper functions built around it:

jQuery.ajax()
jQuery.get()
jQuery.post()
and so on.

They had a huge impact on making this more accessible in particular with
regards to making sure all worked on older browsers as well.

The Fetch API has been standardized as a modern approach to asynchronous
network requests, and uses Promises as a building block.

Fetch at the time of writing (Sep 2017) has good support across the
major browsers, except IE.

The polyfill https://github.com/github/fetch released by GitHub allows
us to use fetch on any browser.

Using Fetch
Starting to use Fetch for GET requests is very simple:

fetch('/file.json')

and you’re already using it: fetch is going to make an HTTP request to get
the file.json resource on the same domain.
As you can see, the fetch function is available in the global window
scope.

Now let’s make this a bit more useful, let’s actually see what the content
of the file is:

fetch('./file.json')
  .then((response) => {
    response.json().then((data) => {
      console.log(data)
    })
  })

Calling fetch() returns a promise. We can then wait for the promise to
resolve by passing a handler with the then() method of the promise.

That handler receives the return value of the fetch promise, a Response
object.

We’ll see this object in details in the next section.

Catching errors

Since fetch() returns a promise, we can use the catch method of the
promise to intercept any error occurring during the execution of the
request, and the processing done in the then callbacks:

fetch('./file.json')
  .then((response) => {
    //...
  })
  .catch((err) => {
    console.error(err)
  })

Response Object
The Response Object returned by a fetch() call contains all the
information about the request and the response of the network request.

Metadata

headers

Accessing the headers property on the response object gives you the
ability to look into the HTTP headers returned by the request:

fetch('./file.json')
.then((response) => {
console.log(response.headers.get('Content-Type'))
console.log(response.headers.get('Date'))
})

status

This property is an integer number representing the HTTP response status.

101, 204, 205, or 304 is a null body status


200 to 299, inclusive, is an OK status (success)
301, 302, 303, 307, or 308 is a redirect

fetch('./file.json')
.then((response) => {
console.log(response.status)
})

statusText

statusText is a property representing the status message of the
response. If the request is successful, the status message is OK .

fetch('./file.json')
.then((response) => {
console.log(response.statusText)
})

url

url represents the full URL of the resource that we fetched.


fetch('./file.json')
.then((response) => {
console.log(response.url)
})

Body content

A response has a body, accessible using the text() or json() methods,
which return a promise. Note that the body can be consumed only once; to
read it in two formats you would need to clone() the response first.

fetch('./file.json')
  .then((response) => {
    response.json().then(body => console.log(body))
    // or: response.text().then(body => console.log(body))
  })

Request Object
The Request object represents a resource request, and it’s usually created
using the new Request() API.

Example:

const req = new Request('/api/todos')

The Request object offers several read-only properties to inspect the
resource request details, including

method : the request’s method (GET, POST, etc.)
url : the URL of the request
headers : the associated Headers object of the request
referrer : the referrer of the request
cache : the cache mode of the request (e.g., default, reload, no-cache)

And it exposes several methods, including json() , text() and
formData() , to process the body of the request.
The full API can be found at
https://developer.mozilla.org/docs/Web/API/Request

Request headers
Being able to set the HTTP request header is essential, and fetch gives
us the ability to do this using the Headers object:

const headers = new Headers()
headers.append('Content-Type', 'application/json')

or more simply

const headers = new Headers({
  'Content-Type': 'application/json'
})

To attach the headers to the request, we use the Request object, and pass
it to fetch() instead of simply passing the URL.

Instead of:

fetch('./file.json')

we do

const request = new Request('./file.json', {
  headers: new Headers({
    'Content-Type': 'application/json'
  })
})
fetch(request)

The Headers object is not limited to setting values: we can also query
it:

headers.has('Content-Type');
headers.get('Content-Type');
and we can delete a header that was previously set:

headers.delete('X-My-Custom-Header');

POST Requests
Fetch also allows you to use any other HTTP method in your request: POST,
PUT, DELETE or OPTIONS.

Specify the method in the method property of the request, and pass
additional parameters in the header and in the request body:

Example of a POST request:

const options = {
method: 'post',
headers: {
"Content-type": "application/x-www-form-urlencoded; charset=UTF-8"
},
body: 'foo=bar&test=1'
}

fetch(url, options)
.catch((err) => {
console.error('Request failed', err)
})
THE CHANNEL MESSAGING API
The Channel Messaging API allows iframes and workers to communicate
with the main document thread

Introduction to Channel Messaging API


How it works
An example with an iframe
An example with a Service Worker

Introduction to Channel Messaging API


Given two scripts running in the same document, but in a different
context, the Channel Messaging API allows them to communicate by passing
messages through a channel.

This use case involves communication between

the document and an iframe
two iframes
two documents

How it works

By calling new MessageChannel() a message channel is initialized.

The channel has 2 properties, called

port1
port2

Those properties are MessagePort objects: port1 is the port used by the
part that created the channel, and port2 is the port used by the channel
receiver (by the way, the channel is bidirectional, so the receiver can
send back messages as well).
Sending the message is done through the

otherWindow.postMessage()

method, where otherWindow is the other browsing context.

It accepts a message, an origin and the port.

A message can be a JavaScript value like a string or a number, and some
data structures are supported as well, namely:

File
Blob
FileList
ArrayBuffer

“Origin” is a URI (e.g. https://example.org ). You can use '*' to
allow less strict checking, or specify a domain, or specify '/' to set a
same-domain target, without needing to specify which domain it is.

The other browsing context listens for the message using
MessagePort.onmessage , and it can respond back by using
MessagePort.postMessage .

A channel can be closed by invoking MessagePort.close .
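
As a minimal sketch, the two ports of a single channel can already talk to
each other within the same document:

const channel = new MessageChannel()

// assigning onmessage implicitly starts the port
channel.port2.onmessage = (event) => {
  console.log('Received:', event.data)
}

channel.port1.postMessage('Hello from port1')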

Let’s see a practical example in the next lesson.

An example with an iframe


Here’s an example of a communication happening between a document and an
iframe embedded into it.

The main document defines an iframe and a span where we’ll print a
message that’s sent from the iframe document. As soon as the iframe
document is loaded, we send it a message on the channel we created.
<!DOCTYPE html>
<html>
<body>
<iframe src="iframe.html" width="500" height="500"></iframe>
<span></span>
</body>
<script>
const channel = new MessageChannel()
const display = document.querySelector('span')
const iframe = document.querySelector('iframe')

iframe.addEventListener('load', () => {
iframe.contentWindow.postMessage('Hey', '*', [channel.port2])
}, false)

channel.port1.onmessage = (e) => {
  display.innerHTML = e.data
}
</script>
</html>

The iframe page source is even simpler:

<!DOCTYPE html>
<html>
<script>
window.addEventListener("message", (event) => {
if (event.origin !== "http://example.org:8080") {
return
}

// process

// send a message back


event.ports[0].postMessage('Message back from the iframe');
}, false)
</script>
</html>

As you can see we don’t even need to initialize a channel, because the
window.onmessage handler is automatically run when the message is
received from the container page.

e is the event that’s sent, and is composed of the following properties:

data : the object that’s been sent from the other window
origin : the origin URI of the window that sent the message
source : the window object that sent the message
Always verify the origin of the message sender.

e.ports[0] is the way we reference port2 in the iframe, because
ports is an array, and the port was added as the first element.

An example with a Service Worker


A Service Worker is an event-driven worker, a JavaScript file associated
with web page. Check out the Service Workers guide to know more about
them.

What’s important to know is that Service Workers are isolated from the
main thread, and we must communicate with them using messages.

This is how a script attached to the main document will handle sending
messages to the Service Worker:

// `worker` is the service worker already instantiated

const messageChannel = new MessageChannel()

messageChannel.port1.addEventListener('message', (event) => {
  console.log(event.data)
})
messageChannel.port1.start() // required when using addEventListener instead of onmessage
worker.postMessage(data, [messageChannel.port2])

In the Service Worker code, we add an event listener for the message
event:

self.addEventListener('message', (event) => {


console.log(event.data)
})

And it can send messages back by posting a message to
messageChannel.port2 , with:

self.addEventListener('message', (event) => {


event.ports[0].postMessage(data);
});
More on the inner workings of Service Workers in the Service Workers
guide.
CACHE API
The Cache API is part of the Service Worker specification, and is a
great way to have more control over resource caching.

Introduction
Detect if the Cache API is available
Initialize a cache
Add items to the cache
cache.add()
cache.addAll()
Manually fetch and add
Retrieve an item from the cache
Get all the items in a cache
Get all the available caches
Remove an item from the cache
Delete a cache

Introduction
The Cache API is part of the Service Worker specification, and is a great
way to have more control over resource caching.

It allows you to cache URL-addressable resources, which means assets, web
pages, and HTTP API responses.

It’s not meant to cache individual chunks of data, which is the task of
the IndexedDB API.

It’s currently available in Chrome >= 40, Firefox >=39, and Opera >= 27.

Internet Explorer and Safari still do not support it, while Edge support
is in preview release.
Mobile support is good on Android, supported on the Android Webview and in
Chrome for Android, while on iOS it’s only available to Opera Mobile and
Firefox Mobile users.

Detect if the Cache API is available


The Cache API is exposed through the caches object. To detect if the API
is implemented in the browser, just check for its existence using:

if ('caches' in window) {
//ok
}

Initialize a cache
Use the caches.open API, which returns a promise with a cache object
ready to be used:

caches.open('mycache').then((cache) => {
// you can start using the cache
})

If the cache does not exist yet, caches.open creates it.

Add items to the cache


The cache object exposes two methods to add items to the cache: add
and addAll .

cache.add()

add accepts a single URL, and when called it fetches the resource and
caches it.

caches.open('mycache').then((cache) => {
cache.add('/api/todos')
})
To allow more control on the fetch, instead of a string you can pass a
Request object, part of the Fetch API specification:

caches.open('mycache').then((cache) => {
const options = {
// the options
}
cache.add(new Request('/api/todos', options))
})

cache.addAll()

addAll accepts an array, and returns a promise when all the resources
have been cached.

caches.open('mycache').then((cache) => {
cache.addAll(['/api/todos', '/api/todos/today']).then(() => {
//all requests were cached
})
})

Manually fetch and add

cache.add() automatically fetches a resource, and caches it.

The Cache API offers a more granular control on this via cache.put() .
You are responsible for fetching the resource and then telling the Cache
API to store a response:

const url = '/api/todos'

fetch(url).then((res) => {
  return caches.open('mycache').then((cache) => {
    return cache.put(url, res)
  })
})

Retrieve an item from the cache


cache.match() returns a Response object, which contains all the
information about the request and the response of the network request:

caches.open('mycache').then((cache) => {
cache.match('/api/todos').then((res) => {
//res is the Response Object
})
})

Get all the items in a cache


caches.open('mycache').then((cache) => {
cache.keys().then((cachedItems) => {
//
})
})

cachedItems is an array of Request objects, which contain the URL of the
resource in the url property.

Get all the available caches


The caches.keys() method lists the keys of every cache available.

caches.keys().then((keys) => {
// keys is an array with the list of keys
})

Remove an item from the cache


Given a cache object, its delete() method removes a cached resource
from it.

caches.open('mycache').then((cache) => {
cache.delete('/api/todos')
})

Delete a cache
The caches.delete() method accepts a cache identifier and when
executed it wipes the cache and its cached items from the system.

caches.delete('mycache').then(() => {
// deleted successfully
})
THE PUSH API
The Push API allows a web app to receive messages pushed by a server,
even if the web app is not currently open in the browser or not running
on the device.

What is the Push API


What can you do with it
How it works
Overview
Getting the user’s permission
Check if Service Workers are supported
Check if the Push API is supported
Register a Service Worker
Request permission from the user
Subscribe the user and get the PushSubscription object
Send the PushSubscription object to your server
How the Server side works
Registering a new client subscription
Sending a Push message
In the real world…
Receive a Push event
Displaying a notification

What is the Push API


The Push API is a recent addition to the browser APIs, and it’s currently
supported by Chrome (Desktop and Mobile), Firefox and Opera since 2016.
See more about the current state of browsers support at
https://caniuse.com/#feat=push-api (https://caniuse.com/#feat=push-api)

IE and Edge do not support it yet, and Safari has its own implementation
(https://developer.apple.com/notifications/safari-push-notifications/).
Since Chrome and Firefox support it, approximately 60% of desktop users
have access to it, so it's quite safe to use.

What can you do with it


You can send messages to your users, pushing them from the server to the
client, even when the user is not browsing the site.

This lets you deliver notifications and content updates, giving you the
ability to have a more engaged audience.

This is huge because one of the missing pillars of the mobile web,
compared to native apps, was the ability to receive notifications, along
with offline support.

How it works

Overview

When a user visits your web app, you can trigger a panel asking permission
to send updates. A Service Worker is installed and, operating in the
background, listens for push events.

Push and Notifications are separate concepts and APIs, sometimes
mixed up because of the push notifications term used on iOS. Basically,
the Notifications API is invoked when a push event is received using
the Push API.

Your server sends the notification to the client, and the Service Worker,
if given permission, receives a push event. The Service Worker reacts to
this event by triggering a notification.

Getting the user’s permission


The first step in working with the Push API is getting the user’s
permission to receive data from you.

Many sites implement this panel badly, showing it on the first page
load. The user is not yet convinced your content is good, and they
will deny the permission. Do it wisely.

There are 6 steps:

1. Check if Service Workers are supported
2. Check if the Push API is supported
3. Register a Service Worker
4. Request permission from the user
5. Subscribe the user and get the PushSubscription object
6. Send the PushSubscription object to your server

Check if Service Workers are supported

if (!('serviceWorker' in navigator)) {
// Service Workers are not supported. Return
return
}

Check if the Push API is supported

if (!('PushManager' in window)) {
// The Push API is not supported. Return
return
}

Register a Service Worker

This code registers the Service Worker located in the worker.js file
placed in the domain root:

window.addEventListener('load', () => {
  navigator.serviceWorker.register('/worker.js')
    .then((registration) => {
      console.log('Service Worker registration completed with scope: ',
        registration.scope)
    }, (err) => {
      console.log('Service Worker registration failed', err)
    })
})

To know more about how Service Workers work in details, check out the
Service Workers guide.

Request permission from the user

Now that the Service Worker is registered, you can request the permission.

The API to do this changed over time: it went from accepting a callback
function as a parameter to returning a Promise, breaking backward and
forward compatibility. Since we don't know which approach is implemented
by the user's browser, we need to handle both.

The code is the following, calling Notification.requestPermission() :

const askPermission = () => {
  return new Promise((resolve, reject) => {
    const permissionResult = Notification.requestPermission((result) => {
      resolve(result)
    })
    if (permissionResult) {
      permissionResult.then(resolve, reject)
    }
  })
  .then((permissionResult) => {
    if (permissionResult !== 'granted') {
      throw new Error('Permission denied')
    }
  })
}

The permissionResult value is a string that can have one of these values:

granted
default
denied

This code causes the browser to show the permission dialogue.


If the user clicks Block, you won’t be able to ask for the user’s
permission any more, unless they manually go and unblock the site in an
advanced settings panel in the browser (very unlikely to happen).

Subscribe the user and get the PushSubscription object

If the user gave us permission, we can subscribe them by calling
registration.pushManager.subscribe() :

const APP_SERVER_KEY = 'XXX'

window.addEventListener('load', () => {
  navigator.serviceWorker.register('/worker.js')
    .then((registration) => {
      askPermission().then(() => {
        const options = {
          userVisibleOnly: true,
          applicationServerKey: urlBase64ToUint8Array(APP_SERVER_KEY)
        }
        return registration.pushManager.subscribe(options)
      }).then((pushSubscription) => {
        // we got the pushSubscription object
      })
    }, (err) => {
      console.log('Service Worker registration failed', err)
    })
})

APP_SERVER_KEY is a string - called Application Server Key or VAPID key - that identifies the application's public key, part of a public / private key pair.

It will be used as part of the validation that occurs, for security reasons, to make sure you (and only you, not someone else) can send a push message back to the user.
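
The urlBase64ToUint8Array() helper used in the snippet above is not provided by the browser. Here is a minimal sketch of a commonly used implementation, which decodes the URL-safe base64 key into the Uint8Array format that subscribe() expects:

const urlBase64ToUint8Array = (base64String) => {
  // pad the string so its length is a multiple of 4
  const padding = '='.repeat((4 - base64String.length % 4) % 4)
  // convert from URL-safe base64 to standard base64
  const base64 = (base64String + padding)
    .replace(/\-/g, '+')
    .replace(/_/g, '/')
  const rawData = window.atob(base64)
  const outputArray = new Uint8Array(rawData.length)
  for (let i = 0; i < rawData.length; i++) {
    outputArray[i] = rawData.charCodeAt(i)
  }
  return outputArray
}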

Send the PushSubscription object to your server


In the previous snippet we got the pushSubscription object, which
contains all we need to send a push message to the user. We need to send
this information to our server, so we’re able to send notifications later
on.

We first create a JSON representation of the object

const subscription = JSON.stringify(pushSubscription)

and we can post it to our server using the Fetch API:

const sendToServer = (subscription) => {
  return fetch('/api/subscription', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    // subscription is already a JSON string
    body: subscription
  })
  .then((res) => {
    if (!res.ok) {
      throw new Error('An error occurred')
    }
    return res.json()
  })
  .then((resData) => {
    if (!(resData.data && resData.data.success)) {
      throw new Error('An error occurred')
    }
  })
}

sendToServer(subscription)

Server-side, the /api/subscription endpoint receives the POST request and can store the subscription information into its storage.

How the Server side works


So far we only talked about the client-side part: getting a user’s
permission to be notified in the future.

What about the server? What should it do, and how should it interact with
the client?
These server-side examples use Express.js (http://expressjs.com/) as
the base HTTP framework, but you can write a server-side Push API
handler in any language or framework

Registering a new client subscription

When the client sends a new subscription, it uses the /api/subscription HTTP POST endpoint we saw earlier, sending the PushSubscription object details in JSON format in the body.

We initialize Express.js, enabling the JSON body parser so that req.body is populated:

const express = require('express')

const app = express()
// parse JSON request bodies into req.body (Express >= 4.16)
app.use(express.json())

This utility function makes sure the request is valid, has a body and an
endpoint property, otherwise it returns an error to the client:

const isValidSaveRequest = (req, res) => {
  if (!req.body || !req.body.endpoint) {
    res.status(400)
    res.setHeader('Content-Type', 'application/json')
    res.send(JSON.stringify({
      error: {
        id: 'no-endpoint',
        message: 'Subscription must have an endpoint'
      }
    }))
    return false
  }
  return true
}

The next utility function saves the subscription to the database, returning a promise that resolves when the insertion completes (and rejects if it fails). The insertToDatabase function is a placeholder, we're not going into those details here:

const saveSubscriptionToDatabase = (subscription) => {
  return new Promise((resolve, reject) => {
    insertToDatabase(subscription, (err, id) => {
      if (err) {
        reject(err)
        return
      }

      resolve(id)
    })
  })
}

We use those functions in the POST request handler below. We check if the
request is valid, then we save the subscription and return a
data.success: true response back to the client, or an error:

app.post('/api/subscription', (req, res) => {
  if (!isValidSaveRequest(req, res)) {
    return
  }

  saveSubscriptionToDatabase(req.body)
    .then((subscriptionId) => {
      res.setHeader('Content-Type', 'application/json')
      res.send(JSON.stringify({ data: { success: true } }))
    })
    .catch((err) => {
      res.status(500)
      res.setHeader('Content-Type', 'application/json')
      res.send(JSON.stringify({
        error: {
          id: 'unable-to-save-subscription',
          message: 'Subscription received but failed to save it'
        }
      }))
    })
})

app.listen(3000, () => {
  console.log('App listening on port 3000')
})

Sending a Push message

Now that the server has registered the client in its list, we can send it
Push messages. Let’s see how that works by creating an example code
snippet that fetches all subscriptions and sends a Push message to all of
them at the same time.

We use a library because the Web Push protocol
(https://developers.google.com/web/fundamentals/push-notifications/web-push-protocol)
is complex, and a lib allows us to abstract away a lot of low level code
that makes sure we can work safely and correctly handle any edge case.

This example uses the web-push Node.js library
(https://github.com/web-push-libs/web-push) to handle sending the Push message.

We first initialize the web-push lib, generate a pair of private and public keys (this is done once, and the keys are then stored), and set them as the VAPID details:

const webpush = require('web-push')

// generate the key pair once, then store it:
// const vapidKeys = webpush.generateVAPIDKeys()

const PUBLIC_KEY = 'XXX'
const PRIVATE_KEY = 'YYY'

const vapidKeys = {
  publicKey: PUBLIC_KEY,
  privateKey: PRIVATE_KEY
}

webpush.setVapidDetails(
  'mailto:my@email.com',
  vapidKeys.publicKey,
  vapidKeys.privateKey
)

Then we set up a triggerPush() method, responsible for sending the push event to a client. It just calls webpush.sendNotification() and catches any error. If the returned error HTTP status code is 410, which means gone (https://developer.mozilla.org/docs/Web/HTTP/Status/410), we delete that subscriber from the database, since the subscription is no longer valid.

const triggerPush = (subscription, dataToSend) => {
  return webpush.sendNotification(subscription, dataToSend)
    .catch((err) => {
      if (err.statusCode === 410) {
        // 410 Gone: the subscription expired or was revoked
        console.log('Subscription is no longer valid: ', err)
        return deleteSubscriptionFromDatabase(subscription._id)
      } else {
        console.log('Error sending the push message: ', err)
      }
    })
}

We don’t implement getting the subscriptions from the database, but we


leave it as a stub:

const getSubscriptionsFromDatabase = () => {
  //stub
}

The meat of the code is the callback of the POST request to the
/api/push endpoint:

app.post('/api/push', (req, res) => {
  // dataToSend is the push payload; here, for example, the request body
  const dataToSend = JSON.stringify(req.body)

  return getSubscriptionsFromDatabase()
    .then((subscriptions) => {
      let promiseChain = Promise.resolve()
      for (let i = 0; i < subscriptions.length; i++) {
        const subscription = subscriptions[i]
        promiseChain = promiseChain.then(() => {
          return triggerPush(subscription, dataToSend)
        })
      }
      return promiseChain
    })
    .then(() => {
      res.setHeader('Content-Type', 'application/json')
      res.send(JSON.stringify({ data: { success: true } }))
    })
    .catch((err) => {
      res.status(500)
      res.setHeader('Content-Type', 'application/json')
      res.send(JSON.stringify({
        error: {
          id: 'unable-to-send-messages',
          message: `Failed to send the push ${err.message}`
        }
      }))
    })
})

What the above code does is: it gets all the subscriptions from the
database, then it iterates over them, calling the triggerPush()
function we explained before.

Once the pushes have been sent, we return a successful JSON response,
unless an error occurred, in which case we return a 500 error.
In the real world…

It’s unlikely that you’ll set up your own Push server unless you have a
very special use case, or you just want to learn the tech or you like to
DIY. Instead, you usually want to use platforms such as OneSignal
(https://onesignal.com), which transparently handle Push events for all
kinds of platforms, Safari and iOS included, for free.

Receive a Push event


When a Push event is sent from the server, how does the client get it?

It’s a normal JavaScript event listener, on the push event, which runs
inside a Service Worker:

self.addEventListener('push', (event) => {
  // data is available in event.data
})

event.data contains the PushMessageData
(https://developer.mozilla.org/docs/Web/API/PushMessageData) object, which
exposes methods to retrieve the push data sent by the server in the
format you want:

arrayBuffer() : as an ArrayBuffer object
blob() : as a Blob object
json() : parsed as JSON
text() : plain text

You’ll normally use event.data.json() .

Displaying a notification
Here we intersect a bit with the Notifications API, but for a good reason,
as one of the main use cases of the Push API is to display notifications.

Inside our push event listener in the Service Worker, we need to display
the notification to the user, and to tell the event to wait until the
browser has shown it before the function can terminate. We extend the
event lifetime until the browser has finished displaying the notification
(until the promise has been resolved), otherwise the Service Worker could
be stopped in the middle of the processing:

self.addEventListener('push', (event) => {
  const promiseChain = self.registration.showNotification('Hey!')
  event.waitUntil(promiseChain)
})

More on notifications in the Notifications API Guide.


THE NOTIFICATIONS API
The Notifications API is responsible for showing the user system
notifications.

Introduction to the Notifications API


Permissions
Create a notification
Add a body
Add an image
Close a notification

Introduction to the Notifications API


The Notifications API is the interface that browsers expose to the
developer to allow showing messages to the user, with their permission,
even if the web site / web app is not open in the browser.

Those messages are consistent and native, which means the receiving
person is used to their UI and UX, since they are system-wide and not
specific to your site.

In combination with the Push API this technology can be a successful way
to increase user engagement and to enhance the capabilities of your app.

The Notifications API interacts heavily with Service Workers, as they
are required for Push Notifications. You can use the Notifications
API without Push, but its use cases are limited.

if (window.Notification && Notification.permission !== "denied") {
  Notification.requestPermission((status) => {
    // status is "granted", if accepted by user
    const n = new Notification('Title', {
      body: 'I am the body text!',
      icon: '/path/to/icon.png' // optional
    })
    // ...later you can close the notification
    n.close()
  })
}

Permissions
To show a notification to the user, you must have permission to do so.

The Notification.requestPermission() method call requests this permission.

You can call Notification.requestPermission() in this very simple form,
and it will show a permission granting panel - unless permission was
already granted before.

To do something when the user interacts (allows or denies), you can attach
a processing function to it:

const process = (permission) => {
  if (permission === "granted") {
    // ok, we can show the notification
  }
}

const permissionResult = Notification.requestPermission((permission) => {
  // callback-based implementations invoke this
  process(permission)
})
if (permissionResult) {
  // promise-based implementations return a promise instead
  permissionResult.then(process)
}

See how we pass in a callback and also check for a returned promise. This is
because of the different implementations of
Notification.requestPermission() made in the past, which we now
must support as we don't know in advance which version is running in the
browser. So to keep things in a single location I extracted the permission
processing into the process() function.
In both cases that function is passed a permission string which can
have one of these values:

granted : the user accepted, we can show a notification
denied : the user denied, we can't show any notification

Those values can also be retrieved by checking the
Notification.permission property, which - if the user already
granted or denied permission - evaluates to granted or denied , but if you
haven't called Notification.requestPermission() yet, it will
evaluate to default .
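
For example, a minimal sketch that shows a notification right away if permission was already granted, and asks for it otherwise:

if (Notification.permission === 'granted') {
  new Notification('Hey')
} else if (Notification.permission !== 'denied') {
  Notification.requestPermission().then((permission) => {
    if (permission === 'granted') {
      new Notification('Hey')
    }
  })
}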

Create a notification
The Notification object exposed by the window object in the browser
allows you to create a notification and to customize its appearance.

Here is the simplest example, which works after you asked for permission:

Notification.requestPermission()
new Notification('Hey')

You have a few options to customize the notification.

Add a body

First, you can add a body, which is usually shown as a single line:

new Notification('Hey', {
body: 'You should see this!'
})
Add an image

You can add an icon property:

new Notification('Hey', {
body: 'You should see this!',
icon: '/user/themes/writesoftware/favicon.ico'
})

More customization options, with platform-specific properties, can be
found at https://developer.mozilla.org/docs/Web/API/Notification

Close a notification
You might want to close a notification once you opened it.

To do so, create a reference to the notification you open:

const n = new Notification('Hey')

and then you can close it later, using:

n.close()

or with a timeout:

setTimeout(() => n.close(), 1 * 1000)
INDEXEDDB
Introduction to the Database of the Web

Introduction to IndexedDB
Create an IndexedDB Database
How to create a database
Create an Object Store
How to create an object store or add a new one
Indexes
Check if a store exists
Deleting from IndexedDB
Delete a database
Delete an object store
To delete data in an object store use a transaction
Add an item to the database
Getting items from a store
Getting a specific item from a store using get()
Getting all the items using getAll()
Iterating on all the items using a cursor via openCursor()
Iterating on a subset of the items using bounds and cursors

Introduction to IndexedDB
IndexedDB is one of the storage capabilities introduced into browsers over
the years. It’s a key/value store (a noSQL database) considered to be the
definitive solution for storing data in browsers.

It’s an asynchronous API, which means that performing costly operations


won’t block the UI thread providing a sloppy experience to users. It can
store an indefinite amount of data, although once over a certain threshold
the user is prompted to give the site higher limits.
It’s supported on all modern browsers (http://caniuse.com/#feat=indexeddb)
.

It supports transactions, versioning and gives good performance.

Inside the browser we can also use:

Cookies: can host a very small amount of strings
DOM Storage (or Web Storage), a term that commonly identifies
localStorage and sessionStorage, two key/value stores. sessionStorage
does not retain data, which is cleared when the session ends, while
localStorage keeps the data across sessions

Local/session storage have the disadvantage of being capped at a small
(and inconsistent) size, with browser implementations offering from 2MB to
10MB of space per site.

In the past we also had Web SQL, a wrapper around SQLite, but it is now
deprecated and unsupported on some modern browsers. It's never been a
recognized standard, so it should not be used, although 83% of users
have this technology on their devices according to Can I Use
(http://caniuse.com/#feat=sql-storage).

While you can technically create multiple databases per site, you
generally create one single database, and inside that database you can
create multiple object stores.

A database is private to a domain, so any other site cannot access another website's IndexedDB stores.

Each store usually contains a set of things, which can be

strings
numbers
objects
arrays
dates
For example you might have a store that contains posts, another that
contains comments.

A store contains a number of items which have a unique key, which represents the way by which an object can be identified.

You can alter those stores using transactions, by performing add, edit and
delete operations, and iterating over the items they contain.

Since the advent of Promises in ES2015, and the subsequent move of APIs to
using promises, the IndexedDB API seems a bit old school.

While there’s nothing wrong in it, in all the examples that I’ll explain
I’ll use the IndexedDB Promised Library
(https://github.com/jakearchibald/idb) by Jake Archibald, which is a tiny
layer on top of the IndexedDB API to make it easier to use.

This library is also used in all the examples on the Google Developers website regarding IndexedDB.

Create an IndexedDB Database


Include the idb lib using:

yarn add idb

And then include it in your page, either using Webpack or Browserify or any other build system, or simply:

<script src="./node_modules/idb/lib/idb.js"></script>

And we’re ready to go.

Before using the IndexedDB API, always make sure you check for support in
the browser, even though it’s widely available, you never know which
browser the user is using:

(() => {
'use strict'

if (!('indexedDB' in window)) {
console.warn('IndexedDB not supported')
return
}

//...IndexedDB code
})()

How to create a database

Using idb.open() :

const name = 'mydbname'
const version = 1 //versions start at 1

idb.open(name, version, upgradeDb => {})

The first 2 parameters are self-explanatory. The third param, which is
optional, is a callback called only if the version number is higher than
the currently installed database version. In the callback function body you
can upgrade the structure (stores and indexes) of the db.

We use the name upgradeDB for the callback to identify this is the time
to update the database if needed.

Create an Object Store

How to create an object store or add a new one

An object store is created or updated in this callback, using the
db.createObjectStore('storeName', options) syntax:

const dbPromise = idb.open('mydb', 1, (upgradeDB) => {
  upgradeDB.createObjectStore('store1')
})
.then(db => console.log('success'))

If you installed a previous version, the callback allows you to perform the migration:

const dbPromise = idb.open('keyval-store', 3, (upgradeDB) => {
  switch (upgradeDB.oldVersion) {
    case 0: // no db created before
      // a store introduced in version 1
      upgradeDB.createObjectStore('store1')
    case 1:
      // a new store in version 2
      upgradeDB.createObjectStore('store2', { keyPath: 'name' })
  }
})
.then(db => console.log('success'))

createObjectStore() , as you can see in case 1 , accepts a second
parameter that indicates the index key of the database. This is very
useful when you store objects: put() calls don't need a second
parameter, but can just take the value (an object) and the key will be
mapped to the object property that has that name.

The index gives you a way to retrieve a value later by that specific key,
and it must be unique (every item must have a different key)

A key can be set to auto increment, so you don't need to keep track of it
in the client code. If you don't specify a key, IndexedDB will create it
transparently for us:

upgradeDb.createObjectStore('notes', { autoIncrement: true })

but you can specify a specific field of object value to auto increment as
well:

upgradeDb.createObjectStore('notes', {
keyPath: 'id',
autoIncrement: true
})

As a general rule, use auto increment if your values do not contain a unique key already (for example, an email address for users).
Indexes

An index is a way to retrieve data from the object store. It’s defined
along with the database creation in the idb.open() callback in this
way:

const dbPromise = idb.open('dogsdb', 1, (upgradeDB) => {
  const dogs = upgradeDB.createObjectStore('dogs')
  dogs.createIndex('name', 'name', { unique: false })
})

The unique option determines if the index value should be unique, and no
duplicate values are allowed to be added.
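
Once defined, an index can be used to look up records by the indexed property instead of the primary key. A minimal sketch using the idb wrapper, assuming the dogs store contains records like { name: 'Roger', age: 8 } :

dbPromise.then((db) => {
  const tx = db.transaction('dogs', 'readonly')
  const store = tx.objectStore('dogs')
  // look up the first record whose indexed name is 'Roger'
  return store.index('name').get('Roger')
})
.then((dog) => {
  console.log(dog)
})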

You can access an object store already created using the
upgradeDB.transaction.objectStore() method:

const dbPromise = idb.open('dogsdb', 1, (upgradeDB) => {
  const dogs = upgradeDB.transaction.objectStore('dogs')
  dogs.createIndex('name', 'name', { unique: false })
})

Check if a store exists

You can check if an object store already exists by checking the
objectStoreNames property and its contains() method:

if (!upgradeDb.objectStoreNames.contains('store3')) {
upgradeDb.createObjectStore('store3')
}

Deleting from IndexedDB


Deleting the database, an object store and data

Delete a database
idb.delete('mydb')
.then(() => console.log('done'))

Delete an object store

An object store can only be deleted in the callback when opening a db, and
that callback is only called if you specify a version higher than the one
currently installed:

const dbPromise = idb.open('dogsdb', 2, (upgradeDB) => {
  upgradeDB.deleteObjectStore('old_store')
})

To delete data in an object store use a transaction

const key = 232

dbPromise.then((db) => {
const tx = db.transaction('store', 'readwrite')
const store = tx.objectStore('store')
store.delete(key)
return tx.complete
})
.then(() => {
console.log('Item deleted')
})

Add an item to the database


You can use the put method of the object store, but first we need a
reference to it, which we can get from
upgradeDB.createObjectStore() when we create it.

When using put , the value is the first argument, the key is the second.
This is because if you specify keyPath when creating the object store,
you don’t need to enter the key name on every put() request, you can just
write the value.

This populates store0 as soon as we create it:


idb.open('mydb', 1, (upgradeDB) => {
  const keyValStore = upgradeDB.createObjectStore('store0')
  keyValStore.put('Hello world!', 'Hello')
})

To add items later down the road, you need to create a transaction, that
ensures database integrity (if an operation fails, all the operations in
the transaction are rolled back and the state goes back to a known state).

For that, use a reference to the dbPromise object we got when calling
idb.open() , and run:

dbPromise.then((db) => {
  const val = 'hey!'
  const key = 'Hello again'

  const tx = db.transaction('store1', 'readwrite')
  tx.objectStore('store1').put(val, key)
  return tx.complete
})
.then(() => {
  console.log('Transaction complete')
})
.catch(() => {
  console.log('Transaction failed')
})

The IndexedDB API offers the add() method as well, but since
put() allows us to both add and update, it’s simpler to just use
it.

Getting items from a store

Getting a specific item from a store using get()

dbPromise.then(db => db.transaction('objs')
  .objectStore('objs')
  .get(123456))
  .then(obj => console.log(obj))

Getting all the items using getAll()

dbPromise.then(db => db.transaction('store1')
  .objectStore('store1')
  .getAll())
  .then(objects => console.log(objects))

Iterating on all the items using a cursor via openCursor()

dbPromise.then((db) => {
const tx = db.transaction('store', 'readonly')
const store = tx.objectStore('store')
return store.openCursor()
})
.then(function logItems(cursor) {
if (!cursor) { return }
console.log('cursor is at: ', cursor.key)
for (const field in cursor.value) {
console.log(cursor.value[field])
}
return cursor.continue().then(logItems)
})
.then(() => {
console.log('done!')
})

Iterating on a subset of the items using bounds and cursors

const searchItems = (lower, upper) => {
  if (lower === '' && upper === '') { return }

  let range
  if (lower !== '' && upper !== '') {
    range = IDBKeyRange.bound(lower, upper)
  } else if (lower === '') {
    range = IDBKeyRange.upperBound(upper)
  } else {
    range = IDBKeyRange.lowerBound(lower)
  }

  dbPromise.then((db) => {
    const tx = db.transaction(['dogs'], 'readonly')
    const store = tx.objectStore('dogs')
    const index = store.index('age')
    return index.openCursor(range)
  })
  .then(function showRange(cursor) {
    if (!cursor) { return }
    console.log('cursor is at:', cursor.key)
    for (const field in cursor.value) {
      console.log(cursor.value[field])
    }
    return cursor.continue().then(showRange)
  })
  .then(() => {
    console.log('done!')
  })
}

searchItems(3, 10)
THE SELECTORS API
Access DOM elements using querySelector and querySelectorAll

Introduction
The Selectors API
Basic jQuery to DOM API examples
Select by id
Select by class
Select by tag name
More advanced jQuery to DOM API examples
Select multiple items
Select by HTML attribute value
Select by CSS pseudo class
Select the descendants of an element

Introduction
jQuery and other DOM libraries got a huge popularity boost in the past,
along with the other features they provided, thanks to an easy way to select
elements on a page.

Traditionally browsers provided one single way to select a DOM element,
and that was by its id attribute, with getElementById()
(https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementById),
a method offered by the document object.

The Selectors API


Since 2013, with the Selectors API (https://www.w3.org/TR/selectors-api/),
the DOM allows you to use two more useful methods:

document.querySelector()
document.querySelectorAll()
They can be safely used, as caniuse.com tells us
(https://caniuse.com/#feat=queryselector) , and they are even fully
supported on IE9 in addition to all the other modern browsers, so
there is no reason to avoid them, unless you need to support IE8
(which has partial support) and below.

The way they work is by accepting any CSS selector, so you are no longer
limited by selecting elements by id.

document.querySelector() returns a single element, the first found
document.querySelectorAll() returns all the elements, wrapped in a NodeList object

Those are all valid selectors:

document.querySelector('#test')
document.querySelector('.my-class')
document.querySelector('#test .my-class')
document.querySelector('a:hover')

Basic jQuery to DOM API examples


Here below is a translation of the popular jQuery API into native DOM API
calls.

Select by id

$('#test')
document.querySelector('#test')

We use querySelector since an id is unique in the page

Select by class

$('.test')
document.querySelectorAll('.test')
Select by tag name

$('div')
document.querySelectorAll('div')

More advanced jQuery to DOM API examples

Select multiple items

$('div, span')
document.querySelectorAll('div, span')

Select by HTML attribute value

$('[data-example="test"]')
document.querySelectorAll('[data-example="test"]')

Select by CSS pseudo class

$(':nth-child(4n)')
document.querySelectorAll(':nth-child(4n)')

Select the descendants of an element

For example all li elements under #test :

$('#test li')
document.querySelectorAll('#test li')
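
One caveat when moving from jQuery: querySelectorAll() returns a NodeList, not an Array, so Array methods like map() are not directly available. A minimal sketch of two common ways to work with the results:

const items = document.querySelectorAll('.test')

// NodeList implements forEach in modern browsers
items.forEach((item) => {
  console.log(item)
})

// or convert it to a real Array first
const itemsArray = Array.from(items)
itemsArray.map((item) => item.textContent)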
FRONTEND DEV TOOLS
WEBPACK
Webpack is a tool that has got a lot of attention in the last few
years, and it is now used in almost every project. Learn about it.

What can Webpack do?


Compare Webpack with its competitors
Where does it fit in the grand scheme of things?
Why is Webpack better than its competitors?
Installing Webpack
Webpack configuration
Running Webpack
Is Webpack a task runner?
Webpack is a tool that has got a lot of attention in the last few years,
and it is now used in almost every project.

Given how widely used it is, you cannot really afford not to know
Webpack.

What can Webpack do?


Webpack is a Swiss Army knife that helps you develop and ship your web
application.

It’s a module bundler.

It helps you bundle your resources.

It’s also a CSS, HTML and images task runner.

It watches for changes and re-runs the tasks.

It runs Babel transpilation to ES5, allowing you to use the latest
JavaScript features without worrying about browser support.

It can transpile CoffeeScript to JavaScript.

It can convert inline images to data URIs.

It allows you to use require() for CSS files.

It can run a development webserver.

It can handle hot code reload.

It can split the output files into multiple files, to avoid having a huge
js file to load in the first page hit.

It can perform tree shaking.


Compare Webpack with its competitors

Where does it fit in the grand scheme of things?

Predecessors of Webpack, and still widely used tools, are

Grunt
Broccoli
Gulp

Why is Webpack better than its competitors?

It has a simpler syntax, better support for projects with lots of
files, and better integration with other modern tools.

Installing Webpack
Webpack is installed very easily with Yarn:

yarn global add webpack

or with npm:

npm i -g webpack

once this is done, you should be able to run

$ webpack --help
webpack 3.5.6
Usage: https://webpack.js.org/api/cli/
Usage without config file: webpack <entry> [<entry>] <output>
Usage with config file: webpack
...
Webpack configuration
The Webpack configuration is usually stored in a file named
webpack.config.js in a project root folder.

This simple example compiles the ./index.js file into output.js :

const webpack = require('webpack')

let config = {
  entry: './index.js',
  output: {
    filename: 'output.js'
  }
}

module.exports = config

In the next articles you’ll see some more useful configuration options.

Running Webpack
Webpack can be run from the command line manually, but generally one
writes a script inside the package.json file, which is then run using
npm or yarn .

For example this package.json scripts definition:

"scripts": {
"start": "webpack --watch"
},

allows us to run webpack --watch by running

npm start

or
yarn run start

Is Webpack a task runner?


No! Webpack is a module bundler. It relies on npm or yarn as a task
runner, like you saw in the example above.
BABEL
Babel is an awesome entry in the Web Developer toolset

Introduction to Babel
Installing Babel
An example Babel configuration
Babel presets
es2015 preset
env preset
react preset
More info on presets
Using Babel with Webpack

This article covers Babel 6, the current stable release

Introduction to Babel
Babel is an awesome tool, and it’s been around for quite some time, but
nowadays almost every JavaScript developer relies on it, and this will
continue going on, because Babel is now indispensable and has solved a big
problem for everyone.

Which problem?

The problem that every Web Developer has surely had: a feature of
JavaScript is available in the latest release of a browser, but not in the
older versions. Or maybe Chrome or Firefox implement it, but Safari iOS
and Edge do not.

For example, ES2015 introduced the arrow function:

[1, 2, 3].map((n) => n + 1)


Which is now supported by all modern browsers. IE11 does not support it,
nor does Opera Mini (How do I know? By checking the ES6 Compatibility Table
(http://kangax.github.io/compat-table/es6/#test-arrow_functions) ).

So how should you deal with this problem? Should you move on and leave the
customers with older/incompatible browsers behind, or should you write
older JavaScript code to make all your users happy?

Enter Babel. Babel is a compiler: it takes code written in one standard,
and transpiles it to code written in another standard.

You can configure Babel to transpile modern ES2017 JavaScript into
JavaScript ES5 syntax:

[1, 2, 3].map(function(n) {
return n + 1
})

This must happen at build time, so you must setup a workflow that handles
this for you. Webpack is a common solution.

(P.S. if all this ES thing sounds confusing to you, see more about ES
versions in the ECMAScript guide)

Installing Babel
Babel is easily installed using NPM or Yarn:

npm install --global babel-cli

or

yarn global add babel-cli

This will make the global babel command available in the command line.

Now inside your project install the babel-core and babel-loader packages,
by running:

npm install babel-core babel-loader --save-dev

or

yarn add --dev babel-core babel-loader

By default Babel does not provide anything, it’s just a blank box that you
can fill with plugins to solve your specific needs.

An example Babel configuration


Babel out of the box does not do anything useful; you need to configure
it.

To solve the problem we talked about in the introduction (using arrow
functions in every browser), we can run:

npm install --save-dev \
babel-plugin-transform-es2015-arrow-functions

or (Yarn):

yarn add --dev \
babel-plugin-transform-es2015-arrow-functions

to download the package in the node_modules folder of our app. Then we
need to add:

{
"plugins": ["transform-es2015-arrow-functions"]
}

to the .babelrc file present in the application root folder. If you
don't have that file already, just create a blank file and put that
content into it.

TIP: If you have never seen a dot file (a file starting with a dot)
it might be odd at first because that file might not appear in your
file manager, as it’s a hidden file.

Now if we have a script.js file with this content:

var a = () => {};
var a = (b) => b;

const double = [1,2,3].map((num) => num * 2);
console.log(double); // [2,4,6]

var bob = {
  _name: "Bob",
  _friends: ["Sally", "Tom"],
  printFriends() {
    this._friends.forEach(f =>
      console.log(this._name + " knows " + f));
  }
};
console.log(bob.printFriends());

running babel script.js will output the following code:

var a = function () {};var a = function (b) {
  return b;
};

const double = [1, 2, 3].map(function (num) {
  return num * 2;
});console.log(double); // [2,4,6]

var bob = {
  _name: "Bob",
  _friends: ["Sally", "Tom"],
  printFriends() {
    var _this = this;

    this._friends.forEach(function (f) {
      return console.log(_this._name + " knows " + f);
    });
  }
};
console.log(bob.printFriends());

As you can see, arrow functions have all been converted to JavaScript ES5
functions.

Babel presets
We just saw in the previous article how Babel can be configured to
transpile specific JavaScript features.

You can add many more plugins, but adding configuration features one by
one is not practical.

This is why Babel offers presets.

The most popular presets are es2015 , env and react .

es2015 preset

This preset provides all the ES2015 features. You install it by running

npm install --save-dev babel-preset-es2015

or
yarn add --dev babel-preset-es2015

and by adding

{
"presets": ["es2015"]
}

to your .babelrc file.

env preset

The env preset is very nice: you tell it which environments you want to
support, and it does everything for you, supporting all modern JavaScript
features.

E.g. “support the last 2 versions of every browser, but for Safari let's
support all versions since Safari 7”

{
"presets": [
["env", {
"targets": {
"browsers": ["last 2 versions", "safari >= 7"]
}
}]
]
}

or “I don’t need browsers support, just let me work with Node.js 6.10”

{
"presets": [
["env", {
"targets": {
"node": "6.10"
}
}]
]
}

react preset
The react preset is very convenient when writing React apps: it adds
preset-flow , syntax-jsx , transform-react-jsx and
transform-react-display-name .

By including it, you are all ready to go developing React apps, with JSX
transforms and Flow support.

More info on presets

https://babeljs.io/docs/plugins/ (https://babeljs.io/docs/plugins/)

Using Babel with Webpack


If you want to run modern JavaScript in the browser, Babel on its own is
not enough: you also need to bundle the code. Webpack is the perfect tool
for this.

TIP: read the Webpack guide if you’re not familiar with Webpack

Modern JS needs two different stages: a compile stage, and a runtime
stage. This is because some ES6+ features need a polyfill or a runtime
helper.

To install the Babel polyfill runtime functionality, run

npm install --save babel-polyfill \
babel-runtime \
babel-plugin-transform-runtime

or

yarn add babel-polyfill \
babel-runtime \
babel-plugin-transform-runtime

Now in your webpack.config.js file add:

entry: [
'babel-polyfill',
// your app scripts should be here
],

module: {
loaders: [
// Babel loader compiles ES2015 into ES5 for
// complete cross-browser support
{
loader: 'babel-loader',
test: /\.js$/,
// only include files present in the `src` subdirectory
include: [path.resolve(__dirname, "src")],
// exclude node_modules, equivalent to the above line
exclude: /node_modules/,
query: {
// Use the default ES2015 preset
// to include all ES2015 features
presets: ['es2015'],
plugins: ['transform-runtime']
}
}
]
}

By keeping the presets and plugins information inside the webpack.config.js file, we can avoid having a .babelrc file.
NPM

Introduction to npm
Downloads
Installing all dependencies
Installing a single package
Updating packages
Versioning
Running Tasks

Introduction to npm
npm means node package manager.

In January 2017 over 350000 packages were reported being listed in the npm
registry, making it the biggest single language code repository on Earth,
and you can be sure there is a package for (almost!) everything.

It started as a way to download and manage dependencies of Node.js packages, but it has since become a tool used also in frontend JavaScript.

There are many things that npm does.

Yarn is an alternative to npm. Make sure you check it out as well.

Downloads
npm manages downloads of dependencies of your project.

Installing all dependencies

If a project has a package.json file, by running

npm install

it will install everything the project needs, in the node_modules folder, creating it if it doesn't exist already.

Installing a single package

You can also install a specific package by running

npm install <package-name>

Often you’ll see more flags added to this command:

--save installs and adds the entry to the package.json file


dependencies
--save-dev installs and adds the entry to the package.json file
devDependencies

The difference is mainly that devDependencies are usually development
tools, like a testing library, while dependencies are bundled with the
app in production.

Updating packages

Updating is also made easy, by running

npm update

npm will check all packages for a newer version that satisfies your
versioning constraints.

You can specify a single package to update as well:

npm update <package-name>

Versioning
In addition to plain downloads, npm also manages versioning, so you can
specify any specific version of a package, or require a version higher or
lower than what you need.

Many times you’ll find that a library is only compatible with a major
release of another library.

Or a bug in the latest release of a lib, still unfixed, is causing an issue.

Specifying an explicit version of a library also helps keep everyone on
the same exact version of a package, so that the whole team runs the same
version until the package.json file is updated.

In all those cases, versioning helps a lot, and npm follows the Semantic
Versioning (SEMVER) standard.
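
For example, these are the kinds of version constraints you'll find in the package.json dependencies section - a minimal sketch, with made-up package names:

{
  "dependencies": {
    "a-package": "1.2.3",
    "another-package": "^1.2.3",
    "yet-another-package": "~1.2.3"
  }
}

An exact version locks the package, ^1.2.3 accepts any compatible release up to (but excluding) 2.0.0, and ~1.2.3 accepts only patch releases up to (but excluding) 1.3.0.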

Running Tasks
The package.json file supports a format for specifying command line tasks
that can be run by using

npm run <task-name>

For example:

{
"scripts": {
"start-dev": "node lib/server-development",
"start": "node lib/server-production"
},
}

It’s very common to use this feature to run Webpack:

{
  "scripts": {
    "watch": "webpack --watch --progress --colors --config webpack.conf.js",
    "dev": "webpack --progress --colors --config webpack.conf.js",
    "prod": "NODE_ENV=production webpack -p --config webpack.conf.js"
  }
}

So instead of those long commands, which are easy to forget or mistype, you can run:

$ npm run watch
$ npm run dev
$ npm run prod
YARN
Yarn is a JavaScript Package Manager, a direct competitor of npm, one
of Facebook's most popular Open Source projects

Intro to Yarn
Install Yarn
Managing packages
Initialize a new project
Install the dependencies of an existing project
Install a package locally
Install a package globally
Install a package locally as a development dependency
Remove a package
Inspecting licenses
Inspecting dependencies
Upgrading packages

Intro to Yarn
Yarn is a JavaScript Package Manager, a direct competitor of npm, and it's
one of Facebook's most popular Open Source projects.

It’s compatible with npm packages, so it has the great advantage of being
a drop-in replacement for npm.

The reasons you might want to use Yarn over npm are:

faster download of packages, which are installed in parallel
support for multiple registries
offline installation support

To me offline installation support seems like the killer feature, because
once you have installed a package one time from the network, it gets
cached and you can recreate a project from scratch without being connected
(and without consuming a lot of your data, if you're on a mobile plan).
Since some projects could require a huge amount of dependencies, every
time you run npm install to initialize a project you might download
hundreds of megabytes from the network.
Since some projects could require a huge amount of dependencies, every
time you run npm install to initialize a project you might download
hundreds of megabytes from the network.

With Yarn, this is done just once.

This is not the only feature, many other goodies are provided by Yarn,
which we’ll see in this article.

In particular Yarn devotes a lot of care to security, by performing a checksum on every package it installs.

Tools eventually converge to a set of features that keeps them on the same
level to stay relevant, so we’ll likely see those features in npm in the
future - competition is nice for us users.

Install Yarn
While there is a joke around about installing Yarn with npm ( npm
install -g yarn ), it’s not recommended by the Yarn team.

System-specific installation methods are listed at
https://yarnpkg.com/en/docs/install . On macOS for example you can use
Homebrew and run:

brew install yarn

but every Operating System has its own package manager of choice that will
make the process very smooth.

In the end, you’ll end up with the yarn command available in your shell:
Managing packages
Yarn writes its dependencies to a file named package.json , which sits
in the root folder of your project, and stores the dependencies files into
the node_modules folder, just like npm if you used it in the past.

Initialize a new project


yarn init

starts an interactive prompt that helps you quick-start a project.

Install the dependencies of an existing project

If you already have a package.json file with the list of dependencies but the packages have not been installed yet, run

yarn

or

yarn install

to start the installation process.

Install a package locally

Installing a package into a project is done using

yarn add package-name

This is equivalent to running npm install --save package-name ,
thus avoiding the invisible dependency issue of running
npm install package-name , which does not add the dependency to the
package.json file.

Install a package globally


yarn global add package-name

Install a package locally as a development dependency


yarn add --dev package-name

Equivalent to the --save-dev flag in npm


Remove a package
yarn remove package-name

Inspecting licenses
When installing many dependencies, which in turn might have lots of
dependencies, you install a number of packages of which you don't have
any idea about the licenses they use.

Yarn provides a handy tool that prints the license of any dependency you
have:

yarn licenses ls

and it can also generate a disclaimer automatically, including all the
licenses of the projects you use:

yarn licenses generate-disclaimer

Inspecting dependencies
Do you ever check the node_modules folder and wonder why a specific
package was installed? yarn why tells you:

yarn why package-name

Upgrading packages
If you want to upgrade a single package, run

yarn upgrade package-name

To upgrade all your packages, run

yarn upgrade

But this command can sometimes lead to problems, because you’re blindly
upgrading all the dependencies without worrying about major version
changes.

Yarn has a great tool to selectively update packages in your project, which is a huge help for this scenario:

yarn upgrade-interactive
JEST

Introduction to Jest
Create the first Jest test
Matchers
Mocks

Introduction to Jest
Jest is one of the many JavaScript testing libraries. It has seen a lot of
adoption because it's developed by Facebook and is a perfect companion to
React, another very popular Facebook Open Source project.

Jest didn’t just get popular because of this - software giants regularly
ship projects that fail - but it got used by a lot of people because it’s
great.

Among its key features we can highlight:

fast, parallel tests, distributed across workers to speed up execution
part of create-react-app , so straightforward to start with, with no configuration needed
code coverage reports
support for React Native and TypeScript

Create the first Jest test


Projects created with create-react-app have Jest installed and
preconfigured out of the box, but adding Jest to any project is as easy as
typing

yarn add --dev jest

Add to your package.json this line:


{
"scripts": {
"test": "jest"
}
}

and run your tests by executing yarn test in your shell.

Now, you don’t have any test here, so nothing is going to be executed:

Let’s create the first test. Open a math.js file and type a couple
functions that we’ll later test:

const sum = (a, b) => a + b
const mul = (a, b) => a * b
const sub = (a, b) => a - b
const div = (a, b) => a / b

module.exports = { sum, mul, sub, div }

Now create a math.test.js file, in the same folder, and there we’ll use
Jest to test the functions defined in math.js :

const { sum, mul, sub, div } = require("./math")

test("Adding 1 + 1 equals 2", () => {
  expect(sum(1, 1)).toBe(2)
})
test("Multiplying 1 * 1 equals 1", () => {
  expect(mul(1, 1)).toBe(1)
})
test("Subtracting 1 - 1 equals 0", () => {
  expect(sub(1, 1)).toBe(0)
})
test("Dividing 1 / 1 equals 1", () => {
  expect(div(1, 1)).toBe(1)
})

Running yarn test results in Jest being run on all the test files it
finds, returning the end result.

Matchers
In the previous article I used toBe() as the only matcher:

test("Adding 1 + 1 equals 2", () => {


expect(sum(1, 1)).toBe(2)
})

A matcher is a method that lets you test values.

The most commonly used matchers, comparing the value of the result of
expect() with the value passed in as argument, are:

toBe compares strict equality, using ===
toEqual compares the values of two variables. If it's an object or array, it checks equality of all the properties or elements
toBeNull is true when passing a null value
toBeUndefined is true when passing an undefined value
toBeCloseTo is used to compare floating values, avoiding rounding errors
toBeDefined is true when passing a defined value (opposite as above)
toBeTruthy true if the value is considered true (like an if does)
toBeFalsy true if the value is considered false (like an if does)
toBeGreaterThan true if the result of expect() is higher than the argument
toBeGreaterThanOrEqual true if the result of expect() is equal to the argument, or higher
toBeLessThan true if the result of expect() is lower than the argument
toBeLessThanOrEqual true if the result of expect() is equal to the argument, or lower
toMatch is used to compare strings with regular expression pattern matching
toContain is used in arrays, true if the expected array contains the argument in its elements set
toThrow checks if a function you pass throws an exception (in general) or a specific exception
toHaveBeenCalled checks if a function has been called at least once
toHaveBeenCalledTimes checks that a function has been called a specific number of times

All those matchers can be negated using .not. inside the statement, for
example:

test("Adding 1 + 1 does not equal 3", () => {


expect(sum(1, 1)).not.toBe(3)
})

Those are the most popular ones. See all the matchers you can use at
https://facebook.github.io/jest/docs/en/expect.html
(https://facebook.github.io/jest/docs/en/expect.html)

Mocks
In testing, mocking allows you to test functionality that depends on:

Database
Network requests
access to Files
any External system

so that:

1. your tests run faster, giving a quick turnaround time during development
2. your tests are independent of network conditions, or the state of the database
3. your tests do not pollute any data storage because they do not touch the database
4. any change done in a test does not change the state for subsequent tests, and re-running the test suite should start from a known and reproducible starting point
5. you don't have to worry about rate limiting on API calls and network requests

Even more important, if you are writing a Unit Test, you should test the
functionality of a function in isolation, not with all its baggage of
things it touches.

You can do this in 2 ways:

1. mock functions
2. manual mocks
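
A minimal sketch of the first approach, using jest.fn() to create a mock function that records its calls and returns a canned value (the function names here are made up for illustration):

test("drinkFavorite calls the drink function with 'lemon'", () => {
  // a mock that records its calls and always returns 'ok'
  const drink = jest.fn().mockReturnValue('ok')
  const drinkFavorite = (drinkFn) => drinkFn('lemon')

  drinkFavorite(drink)

  expect(drink).toHaveBeenCalledTimes(1)
  expect(drink).toHaveBeenCalledWith('lemon')
})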


ESLINT
Learn the basics of the most popular JavaScript linter

What is a linter?
ESLint
Install ESLint globally
Install ESLint locally
Use ESLint in your favourite editor
Common ESLint configurations
Airbnb style guide
React
Use a specific version of ECMAScript
Force strict mode
More advanced rules
Disabling rules on specific lines

ESLint is a JavaScript linter.

What is a linter?
Good question! A linter is a tool that identifies issues in your code.

Running a linter against your code can tell you many things:

if the code adheres to a certain set of syntax conventions


if the code contains possible sources of problems
if the code matches a set of standards you or your team define

It will raise warnings that you, or your tools, can analyze and give you
actionable data to improve your code.

ESLint
ESLint is a linter for the JavaScript programming language, written in
Node.js.

It is hugely useful because JavaScript, being an interpreted language,
does not have a compilation step, and many errors are only found at
runtime.

ESLint will help you catch those errors. Which errors in particular you
ask?

avoid infinite loops in the for loop conditions
make sure all getter methods return something
disallow console.log (and similar) statements
check for duplicate cases in a switch
check for unreachable code
check for JSDoc validity

and much more! The full list is available at https://eslint.org/docs/rules/

The growing popularity of Prettier as a code formatter made the
styling part of ESLint kind of obsolete, but ESLint is still very
useful to catch errors and code smells in your code.

ESLint is very flexible and configurable, and you can choose which rules
you want to check for, or which kind of style you want to enforce. Many of
the available rules are disabled and you can turn them on in your
.eslintrc configuration file, which can be global or specific to your
project.
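
For example, a minimal .eslintrc sketch that turns on a couple of rules (the choices here are purely illustrative):

{
  "rules": {
    "no-console": "error",
    "no-unused-vars": "warn",
    "semi": ["error", "never"]
  }
}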

Install ESLint globally


npm install -g eslint

# create a `.eslintrc` configuration file
eslint --init

# run ESLint against any file with
eslint yourfile.js
Install ESLint locally
npm install eslint --save-dev

# create a `.eslintrc` configuration file
./node_modules/.bin/eslint --init

# run ESLint against any file with
./node_modules/.bin/eslint yourfile.js

Use ESLint in your favourite editor


The most common use of ESLint is within your editor of course.

Common ESLint configurations


ESLint can be configured in tons of different ways.

Airbnb style guide

A common setup is to use the Airbnb JavaScript coding style
(https://github.com/airbnb/javascript) to lint your code.

Run

yarn add --dev eslint-config-airbnb

or

npm install --save-dev eslint-config-airbnb

to install the Airbnb configuration package, and add in your .eslintrc
file in the root of your project:

{
  "extends": "airbnb"
}
React

Linting React code is easy with the React plugin:

yarn add --dev eslint-plugin-react

or

npm install --save-dev eslint-plugin-react

In your .eslintrc file add

{
"extends": "airbnb",
"plugins": [
"react"
],
"parserOptions": {
"ecmaFeatures": {
"jsx": true
}
}
}

Use a specific version of ECMAScript

ECMAScript changes version every year now.

The default is currently set to 5, which means pre-2015.

Turn on ES6 (or higher) by setting this property in .eslintrc :

{
"parserOptions": {
"ecmaVersion": 6,
}
}

Force strict mode


{
"parserOptions": {
"ecmaFeatures": {
"impliedStrict": true
}
}
}

More advanced rules

A detailed guide on rules can be found on the official site at https://eslint.org/docs/user-guide/configuring

Disabling rules on specific lines


Sometimes a rule might give a false positive, or you might be explicitly
willing to take a route that ESLint flags.

In this case, you can disable ESLint entirely on a few lines:

/* eslint-disable */
alert('test');
/* eslint-enable */

or on a single line:

alert('test'); // eslint-disable-line

or just disable one or more specific rules for multiple lines:

/* eslint-disable no-alert, no-console */


alert('test');
console.log('test');
/* eslint-enable no-alert, no-console */

or for a single line:

alert('test'); // eslint-disable-line no-alert, quotes, semi


PRETTIER
Prettier is a great way to keep code formatted consistently for you and
your team

Introduction to Prettier
Less options
Difference with ESLint
Installation
Prettier for beginners

Introduction to Prettier
Prettier is an opinionated code formatter.


It supports a lot of different syntaxes out of the box, including:

JavaScript
Flow, TypeScript
CSS, SCSS, Less
JSX
GraphQL
JSON
Markdown

and with plugins (https://prettier.io/docs/en/plugins.html) you can use it for Python, PHP, Swift, Ruby, Java and more.

It integrates with the most popular code editors, including VS Code, Sublime Text, Atom and more.

Prettier is hugely popular: as of February 2018 it had been downloaded over 3.5 million times.

The most important links you need to know more about Prettier are

https://prettier.io/
https://github.com/prettier/prettier
https://www.npmjs.com/package/prettier

Less options
I learned Go recently and one of the best things about Go is gofmt, an
official tool that automatically formats your code according to common
standards.

95% (made up stat) of the Go code around looks exactly the same, because this tool can be easily enforced, and since the style is defined for you by the Go maintainers, you are much more likely to adapt to that standard instead of insisting on your own style. Like tabs vs spaces, or where to put an opening bracket.

This might sound like a limitation, but it’s actually very powerful. All
Go code looks the same.

Prettier is the gofmt for the rest of the world.

It has very few options, and most of the decisions are already taken for
you so you can stop arguing about style and little things, and focus on
your code.

Difference with ESLint


ESLint is a linter: it does not just format, it also highlights some errors thanks to its static analysis of the code.

It is an invaluable tool and it can be used alongside Prettier.


ESLint also highlights formatting issues, but since it's highly configurable, everyone could have a different set of formatting rules. Prettier provides a common ground for all.

Now, there are a few things you can customize, like:

the tab width
the use of single quotes vs double quotes
the line columns number
the use of trailing commas

and some others, but Prettier tries to keep the number of those
customizations under control, to avoid becoming too customizable.
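For example, a minimal .prettierrc touching those options might look like this (the values shown are my own picks, not defaults you must use):

{
  "printWidth": 80,
  "tabWidth": 2,
  "singleQuote": true,
  "trailingComma": "es5"
}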

Installation
Prettier can run from the command line, and you can install it using Yarn
or npm.
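A minimal sketch of a local install and a command-line run, following the same pattern used above for ESLint:

npm install --save-dev prettier

# format a file in place
./node_modules/.bin/prettier --write yourfile.js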

Another great use case for Prettier is to run it on PRs for your Git
repositories, for example on GitHub.

If you use a supported editor the best thing is to use Prettier directly
from the editor, and the Prettier formatting will be run every time you
save.

For example here is the Prettier extension for VS Code:
https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode

Prettier for beginners


If you think Prettier is just for teams, or for pro users, you are missing
a good value proposition of this tool.

A good style enforces good habits.


Formatting is a topic that's mostly overlooked by beginners, but having clean and consistent formatting is key to your success as a new developer.

Also, even if you started using JavaScript 2 weeks ago, with Prettier your code - style wise - will look just like code written by a JavaScript guru who has been writing JS since 1998.

BROWSER DEV TOOLS
The Browser DevTools are a fundamental element in the frontend
developer toolbox.

The Browser DevTools


HTML Structure and CSS
The HTML panel
The CSS styles panel
The Console
Executing custom JavaScript
Error reporting
The emulator
The network panel
JavaScript debugger
Application and Storage
Storage
Application
Security tab
Audits

The Browser DevTools


I don’t think there was ever a time when websites and web applications were easy to build (the same goes for backend technologies), but client-side development was surely easier than it is now, generally speaking.

Once you figured out the differences between Internet Explorer and Netscape Navigator, and avoided the proprietary tags and technology, all you had to use was HTML and later CSS.

JavaScript was a tech for creating dialog boxes and a little bit more, but was definitely not as pervasive as today.

Although lots of web pages are still plain HTML + CSS, like this page, many other websites are real applications that run in the browser.

Just providing the source of the page, like browsers did once upon a time, was not enough.

Browsers had to provide much more information on how they rendered the page, and what the page is currently doing, hence they introduced a feature for developers: their developer tools.

Every browser is different and so their dev tools are slightly different. At the time of writing the best developer tools in my opinion are provided by Chrome, and this is the browser we’ll talk about in the rest of the course, although Firefox and Edge have great tools as well.

HTML Structure and CSS


The most basic form of usage, and a very common one, is inspecting the content of a webpage. When you open the DevTools, the Elements panel is what you see:

The HTML panel

On the left, the HTML that composes the page.

Hovering the elements in the HTML panel highlights the element in the
page, and clicking the first icon in the toolbar allows you to click an
element in the page, and analyze it in the inspector.

You can drag and drop elements in the inspector to live-change their positioning in the page.

The CSS styles panel

On the right, the CSS styles that are applied to the currently selected
element.

In addition to editing and disabling properties, you can add a new CSS
property, with any target you want, by clicking the + icon.

Also you can trigger a state for the selected element, so you can see the
styles applied when it’s active, hovered, on focus.

At the bottom, the box model of the selected element helps you figure out
margins, paddings, border and dimensions at a quick glance:

The Console
The second most important element of the DevTools is the Console.

The Console can be seen on its own panel, or by pressing Esc in the
Elements panel, it will show up in the bottom.

The Console serves mainly two purposes: executing custom JavaScript and
error reporting.

Executing custom JavaScript

At the bottom of the Console there is a blinking cursor. You can type any
JavaScript there, and it will be promptly executed. As an example, try
running:

alert('test')

The special identifier $0 allows you to reference the element currently


selected in the elements inspector. If you want to reference that as a
jQuery selector, use $($0) .

You can write more than one line with shift-enter . Pressing enter at
the end of the script runs it.

Error reporting

Any error, warning or information that happens while rendering the page,
and subsequently executing the JavaScript, is listed here.

For example failing to load a resource from the network, with information
on why, is reported in the console.

In this case, clicking the resource URL brings you to the Network panel,
showing more info which you can use to determine the cause of the problem.

You can filter those messages by level (Error / Warning / Info) and also filter them by content.

Those messages can be user-generated in your own JavaScript by using the Console API:

console.log('Some info message')
console.warn('Some warning message')
console.error('Some error message')

The emulator
The Chrome DevTools embed a very useful device emulator which you can use
to visualize your page in every device size you want.

You can choose from the presets the most popular mobile devices, including
iPhones, iPads, Android devices and much more, or specify the pixel
dimensions yourself, and the screen definition (1x, 2x retina, 3x retina
HD).

In the same panel you can set up network throttling for that specific Chrome tab, to emulate a low speed connection and see how the page loads, and the “show media queries” option shows you how media queries modify the CSS of the page.

The network panel
The Network Panel of the DevTools allows you to see all the connections
that the browser must process while rendering a page.

At a quick glance the page shows:

a toolbar where you can set up some options and filters
a loading graph of the page as a whole
every single request, with HTTP method, response code, size and other details
a footer with the summary of the total requests, the total size of the page and some timing indications

A very useful option in the toolbar is preserve log. By enabling it, you
can move to another page, and the logs will not be cleared.

Another very useful tool to track loading time is disable cache. This can be enabled globally in the DevTools settings as well, to always disable the cache when DevTools is open.

Clicking a specific request in the list brings up the detail panel, with the HTTP Headers report:

And the loading time breakdown:


JavaScript debugger
If you click an error message in the DevTools Console, you’re brought to the Sources tab, where in addition to pointing you to the file and line where the error happened, you have the option to use the JavaScript debugger.
This is a full-featured debugger. You can set breakpoints, watch
variables, and listen to DOM changes or break on specific XHR (AJAX)
network requests, or event listeners.

Application and Storage


The Application tab gives you lots of information about what is stored inside the browser for your website.
Storage

You gain access to detailed reports and tools to interact with the
application storage:

Local storage
Session storage
IndexedDb
Web SQL
Cookies

and you can quickly wipe any information, to start with a clean slate.

Application

This tab also gives you tools to inspect and debug Progressive Web Apps.

Click manifest to get information about the web app manifest, used to
allow mobile users to add the app to their home, and simulate the “add to
homescreen” events.

Service workers let you inspect your application’s service workers. If you don’t know what service workers are, in short they are a fundamental technology that powers modern web apps, providing features like notifications, the capability to run offline and synchronization across devices.

Security tab
The Security tab gives you all the information that the browser has about the security of the connection to the website.

If there is any problem with the HTTPS connection, or if the site is served over SSL, it will provide you with more information about what’s causing it.

Audits
The Audits tab will help you find and solve issues related to performance and, in general, the quality of the experience that users have when accessing your website.

You can perform various kinds of audits depending on the kind of website.

The audit is provided by Lighthouse (https://developers.google.com/web/tools/lighthouse/), an open source automated website quality check tool. It takes a while to run, then it provides you a very nice report with key actions to check.

REACT AND REDUX
REACT
React is a JavaScript library that aims to simplify the development of visual interfaces. Learn why it's so popular and what problems it solves.

Introduction to React
What is React
Why is React so popular?
Less complex than the other alternatives
Perfect timing
Backed by Facebook
Is React really that simple?
JSX
React Components
What is a React Component
Custom components
PropTypes
Which types can we use
Requiring properties
Default values for props
How props are passed
Children
Setting the default state
Accessing the state
Mutating the state
Why you should always use setState()
State is encapsulated
Unidirectional Data Flow
Moving the State Up in the Tree
Events
Event handlers
Bind this in methods
The events reference
Clipboard
Composition
Keyboard
Focus
Form
Mouse
Selection
Touch
UI
Mouse Wheel
Media
Image
Animation
Transition
React’s Declarative approach
React declarative approach
The Virtual DOM
The “real” DOM
The Virtual DOM
Why is the Virtual DOM helpful: batching

Introduction to React
What is React

React is a JavaScript library that aims to simplify the development of visual interfaces.

Developed at Facebook and released to the world in 2013, it drives some of the most widely used code in the world, powering Facebook and Instagram among many, many other software companies.

Its primary goal is to make it easy to reason about an interface and its state at any point in time, by dividing the UI into a collection of components.

React is used to build single-page web applications, along with many other libraries and frameworks that were available before React came to life.

Why is React so popular?


React has taken the frontend web development world by storm. Why?

Less complex than the other alternatives

At the time when React was announced, Ember.js and Angular 1.x were the predominant choices as a framework. Both imposed so many conventions on the code that porting an existing app was not convenient at all. React made a choice to be very easy to integrate into an existing project, because that’s how they had to do it at Facebook in order to introduce it to the existing codebase. Also, those 2 frameworks brought too much to the table, while React only chose to implement the View layer instead of the full MVC stack.

Perfect timing

At the time, Angular 2.x was announced by Google, along with the backwards
incompatibility and major changes it was going to bring. Moving from
Angular 1 to 2 was like moving to a different framework, so this, along
with execution speed improvements that React promised, made it something
developers were eager to try.

Backed by Facebook

Being backed by Facebook is obviously going to benefit a project if it turns out to be successful, but it’s not a guarantee, as you can see from many failed open source projects by both Facebook and Google.

Is React really that simple?

Even though I said that React is simpler than alternative frameworks, diving into React is still complicated, mostly because of the corollary technologies that can be integrated with React, like Redux, Relay or GraphQL.

React in itself has a very small API.

There isn’t much more to React than these concepts:


Components
JSX
State
Props

We’ll see each one of them in the next articles.

JSX
Many developers, including the one writing this article, at first sight thought that JSX was horrible, and quickly dismissed React.

Even though they said JSX was not required, using React without JSX was
painful.

It took me a couple years of occasionally looking at it to start digesting JSX, and now I largely prefer it over the alternative, which is using templates.

The major benefit of using JSX is that you’re only interacting with JavaScript objects, not template strings.

JSX is not embedded HTML

Many tutorials for React beginners like to postpone the introduction of JSX until later, because they assume the reader would be better off without it, but since I am now a JSX fan, I’ll immediately jump into it.

Here is how you define an h1 tag containing a string:

const element = <h1>Hello, world!</h1>

It looks like a strange mix of JavaScript and HTML, but in reality it’s all JavaScript.

What looks like HTML is actually syntactic sugar for defining components and their positioning inside the markup.

Inside a JSX expression, attributes can be inserted very easily:

const myId = 'test'
const element = <h1 id={myId}>Hello, world!</h1>

You just need to pay attention when an attribute has a dash ( - ) which is
converted to camelCase syntax instead, and these 2 special cases:

class becomes className
for becomes htmlFor

because they are reserved words in JavaScript.

Here’s a JSX snippet that wraps two components into a div tag:

<div>
<BlogPostsList />
<Sidebar />
</div>

A tag always needs to be closed, because this is more XML than HTML (if
you remember the XHTML days, this will be familiar, but since then the
HTML5 loose syntax won). In this case a self-closing tag is used.

JSX, while introduced with React, is no longer a React-only technology.

React Components

What is a React Component

A component is one isolated piece of interface. For example in a typical blog homepage you might find the Sidebar component, and the Blog Posts List component. They are in turn composed of components themselves, so you could have a list of Blog post components, one for every blog post, and each with its own peculiar properties.

React makes it very simple: everything is a component.

Even plain HTML tags are components on their own, and they are added by default.

The next 2 lines are equivalent, they do the same thing. One with JSX, one
without, by injecting <h1>Hello World!</h1> into an element with id
app .

import React from 'react'
import ReactDOM from 'react-dom'

ReactDOM.render(
<h1>Hello World!</h1>,
document.getElementById('app')
)

ReactDOM.render(
React.DOM.h1(null, "Hello World!"),
document.getElementById('app')
)

See, React.DOM exposes an h1 component. Which other HTML tags are available? All of them! You can inspect what React.DOM offers by typing it in the Browser Console (the full list is quite long).

The built-in components are nice, but you’ll quickly outgrow them. What
React excels in is letting us compose a UI by composing custom components.

Custom components

There are 2 ways to define a component in React.

A stateless component does not manage internal state, and is just a function:

const BlogPostExcerpt = () => {
  return (
    <div>
      <h1>Title</h1>
      <p>Description</p>
    </div>
  )
}

A stateful component is a class, which manages state in its own properties:

import React, { Component } from 'react'

class BlogPostExcerpt extends Component {
  render() {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
}

As they stand, they are equivalent because there is no state management yet (coming in the next couple articles).

There is a third syntax, which uses the ES5 way of doing things, without classes:

import React from 'react'

React.createClass({
  render() {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
})

You’ll rarely see this in modern, ES6+ codebases.

Props are how Components get their properties. Starting from the top component, every child component gets its props from the parent. In a stateless component, props are all that gets passed, and they are available by adding props as the function argument:

const BlogPostExcerpt = (props) => {
  return (
    <div>
      <h1>{props.title}</h1>
      <p>{props.description}</p>
    </div>
  )
}

In a stateful component, props are passed by default. There is no need to add anything special, and they are accessible as this.props in a Component instance.

import React, { Component } from 'react'

class BlogPostExcerpt extends Component {
  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.description}</p>
      </div>
    )
  }
}

PropTypes

Since JavaScript is a dynamically typed language, we don’t really have a way to enforce the type of a variable at compile time, and if we pass invalid types, they will fail at runtime or give weird results if the types are compatible but not what we expect.

Flow and TypeScript help a lot, but React has a way to directly help with
props types, and even before running the code, our tools (editors,
linters) can detect when we are passing the wrong values:

import PropTypes from 'prop-types'
import React, { Component } from 'react'

class BlogPostExcerpt extends Component {
  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.description}</p>
      </div>
    )
  }
}

BlogPostExcerpt.propTypes = {
  title: PropTypes.string,
  description: PropTypes.string
}

export default BlogPostExcerpt

Which types can we use

These are the fundamental types we can accept:


PropTypes.array
PropTypes.bool
PropTypes.func
PropTypes.number
PropTypes.object
PropTypes.string
PropTypes.symbol

We can accept one of two types:

PropTypes.oneOfType([
PropTypes.string,
PropTypes.number
]),

We can accept one of many values:

PropTypes.oneOf(['Test1', 'Test2']),

We can accept an instance of a class:

PropTypes.instanceOf(Something)

We can accept any React node:

PropTypes.node

or even any type at all:

PropTypes.any

Arrays have a special syntax that we can use to accept an array of a particular type:

PropTypes.arrayOf(PropTypes.string)

For objects, we can specify the shape of the object’s properties by using:

PropTypes.shape({
  color: PropTypes.string,
  fontSize: PropTypes.number
})

Requiring properties

Appending isRequired to any PropTypes option will cause React to return an error if that property is missing:

PropTypes.arrayOf(PropTypes.string).isRequired,
PropTypes.string.isRequired,

Default values for props

If a value is not required, we need to specify a default value for it, to be used if the prop is missing when the Component is initialized.

BlogPostExcerpt.propTypes = {
  title: PropTypes.string,
  description: PropTypes.string
}

BlogPostExcerpt.defaultProps = {
  title: '',
  description: ''
}

Some tooling like ESLint has the ability to enforce defining the defaultProps for a Component with some propTypes not explicitly required.

How props are passed

When initializing a component, pass the props in a way similar to HTML attributes:

const desc = 'A description'
//...
<BlogPostExcerpt title="A blog post" description={desc} />

We passed the title as a plain string (something we can only do with strings!), and description as a variable.

Children

A special prop is children. It contains the value of anything that is passed in the body of the component, for example:

<BlogPostExcerpt title="A blog post" description={desc}>
  Something
</BlogPostExcerpt>

In this case, inside BlogPostExcerpt we could access “Something” by looking up this.props.children.
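A minimal sketch of how the component might render whatever it receives as children (this markup is mine, not part of the original example, and assumes the same imports as above):

class BlogPostExcerpt extends Component {
  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.description}</p>
        {/* whatever was passed in the body of the component */}
        {this.props.children}
      </div>
    )
  }
}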

While Props allow a Component to receive properties from its parent, to be “instructed” to print some data for example, state allows a component to take on a life of its own, and be independent of the surrounding environment.

Remember: only class-based Components can have a state, so if you need to manage state in a stateless (function-based) Component, you first need to “upgrade” it to a Class component:

const BlogPostExcerpt = () => {
  return (
    <div>
      <h1>Title</h1>
      <p>Description</p>
    </div>
  )
}

becomes:

import React, { Component } from 'react'

class BlogPostExcerpt extends Component {
  render() {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
}

Setting the default state

In the Component constructor, initialize this.state. For example the BlogPostExcerpt component might have a clicked state:

class BlogPostExcerpt extends Component {
  constructor(props) {
    super(props)
    this.state = { clicked: false }
  }

  render() {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
}

Accessing the state

The clicked state can be accessed by referencing this.state.clicked :

class BlogPostExcerpt extends Component {
  constructor(props) {
    super(props)
    this.state = { clicked: false }
  }

  render() {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
        <p>Clicked: {this.state.clicked}</p>
      </div>
    )
  }
}

Mutating the state

A state should never be mutated by using

this.state.clicked = true

Instead, you should always use setState(), passing it an object:

this.setState({ clicked: true })

The object can contain a subset, or a superset, of the state. Only the
properties you pass will be mutated, the ones omitted will be left in
their current state.
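For example, assuming the state holds two properties (my own example, not from the original text), updating one leaves the other untouched:

this.state = { clicked: false, selected: false }

// only `clicked` is updated, `selected` keeps its current value
this.setState({ clicked: true })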

Why you should always use setState()

The reason is that using this method, React knows that the state has
changed. It will then start the series of events that will lead to the
Component being re-rendered, along with any DOM update.

State is encapsulated

A parent of a Component cannot tell if the child is stateful or stateless. The same goes for children of a Component.

Being stateful or stateless (functional or class-based) is entirely an implementation detail that other components don’t need to care about.

This leads us to Unidirectional Data Flow

Unidirectional Data Flow

A state is always owned by one Component. Any data that’s affected by this state can only affect Components below it: its children.

Changing a state on a Component will never affect its parent, or its siblings, or any other Component in the application: just its children.

This is the reason many times the state is moved up in the Components
tree.

Moving the State Up in the Tree

Because of the Unidirectional Data Flow rules, if two components need to share a state, the state needs to be moved up to a common ancestor.

Many times the closest ancestor is the best place to manage the state, but
it’s not a mandatory rule.

The state is passed down to the components that need that value via props:

class Converter extends React.Component {
  constructor(props) {
    super(props)
    this.state = { currency: '€' }
  }

  render() {
    return (
      <div>
        <Display currency={this.state.currency} />
        <CurrencySwitcher currency={this.state.currency} />
      </div>
    )
  }
}

The state can be mutated by a child component by passing a mutating function down as a prop:

class Converter extends React.Component {
  constructor(props) {
    super(props)
    this.state = { currency: '€' }
  }

  handleChangeCurrency = (event) => {
    this.setState({ currency: this.state.currency === '€' ? '$' : '€' })
  }

  render() {
    return (
      <div>
        <Display currency={this.state.currency} />
        <CurrencySwitcher
          currency={this.state.currency}
          handleChangeCurrency={this.handleChangeCurrency} />
      </div>
    )
  }
}

const CurrencySwitcher = (props) => {
  return (
    <button onClick={props.handleChangeCurrency}>
      Current currency is {props.currency}. Change it!
    </button>
  )
}

const Display = (props) => {
  return (
    <p>Current currency is {props.currency}.</p>
  )
}

Events
React provides an easy way to manage events. Prepare to say goodbye to
addEventListener :)

In the previous article about the State you saw this example:

const CurrencySwitcher = (props) => {
  return (
    <button onClick={props.handleChangeCurrency}>
      Current currency is {props.currency}. Change it!
    </button>
  )
}

If you’ve been using JavaScript for a while, this is just like plain old
JavaScript event handlers, except that this time you’re defining
everything in JavaScript, not in your HTML, and you’re passing a function,
not a string.

The actual event names are a little bit different because in React you use
camelCase for everything, so onclick becomes onClick , onsubmit
becomes onSubmit .

For reference, this is old school HTML with JavaScript events mixed in:

<button onclick="handleChangeCurrency()">
...
</button>

Event handlers

It’s a convention to have event handlers defined as methods on the Component class:

class Converter extends React.Component {
  handleChangeCurrency = (event) => {
    this.setState({ currency: this.state.currency === '€' ? '$' : '€' })
  }
}

All handlers receive an event object that adheres, cross-browser, to the W3C UI Events spec (https://www.w3.org/TR/DOM-Level-3-Events/).

Bind this in methods

Don’t forget to bind methods. The methods of ES6 classes by default are
not bound. What this means is that this is not defined unless you define
methods as

class Converter extends React.Component {
  handleClick = (e) => { /* ... */ }
  //...
}

when using the property initializer syntax with Babel (enabled by default in create-react-app),

otherwise you need to bind it manually in the constructor:

class Converter extends React.Component {
  constructor(props) {
    super(props);
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick(e) {}
}

The events reference

There are lots of events supported, here’s a summary list.

Clipboard

onCopy
onCut
onPaste

Composition

onCompositionEnd
onCompositionStart
onCompositionUpdate

Keyboard

onKeyDown
onKeyPress
onKeyUp

Focus

onFocus
onBlur

Form

onChange
onInput
onSubmit

Mouse

onClick
onContextMenu
onDoubleClick
onDrag
onDragEnd
onDragEnter
onDragExit
onDragLeave
onDragOver
onDragStart
onDrop
onMouseDown
onMouseEnter
onMouseLeave
onMouseMove
onMouseOut
onMouseOver
onMouseUp

Selection

onSelect

Touch

onTouchCancel
onTouchEnd
onTouchMove
onTouchStart

UI

onScroll

Mouse Wheel

onWheel

Media

onAbort
onCanPlay
onCanPlayThrough
onDurationChange
onEmptied
onEncrypted
onEnded
onError
onLoadedData
onLoadedMetadata
onLoadStart
onPause
onPlay
onPlaying
onProgress
onRateChange
onSeeked
onSeeking
onStalled
onSuspend
onTimeUpdate
onVolumeChange
onWaiting

Image

onLoad
onError

Animation

onAnimationStart
onAnimationEnd
onAnimationIteration

Transition

onTransitionEnd

React’s Declarative approach
You’ll run across articles describing React as a declarative approach to
building UIs.

See declarative programming to read more about declarative programming.

React declarative approach

React made its “declarative approach” quite popular and upfront, so it permeated the frontend world along with React.

It’s really not a new concept, but React made building UIs a lot more declarative than with HTML templates: you can build Web interfaces without even touching the DOM directly, and you can have an event system without having to interact with the actual DOM Events.

For example, looking up elements in the DOM using jQuery or DOM events is an imperative approach.

React’s declarative approach abstracts that for us. We just tell React we
want a component to be rendered in a specific way, and we never have to
interact with the DOM to reference it later.

The Virtual DOM


Many existing frameworks, before React came on the scene, were directly
manipulating the DOM on every change.

The “real” DOM


What is the DOM, first: the DOM (Document Object Model) is a tree representation of the page, starting from the <html> tag, going down into every child, called nodes.

It’s kept in the browser memory, and directly linked to what you see in a
page. The DOM has an API that you can use to traverse it, access every
single node, filter them, modify them.

The API is the familiar syntax you have likely seen many times, if you
were not using the abstract API provided by jQuery and friends:

document.getElementById(id)
document.getElementsByTagName(name)
document.createElement(name)
parentNode.appendChild(node)
element.innerHTML
element.style.left
element.setAttribute()
element.getAttribute()
element.addEventListener()
window.content
window.onload
window.dump()
window.scrollTo()

React keeps a copy of the DOM representation, as far as the React rendering is concerned: the Virtual DOM.

The Virtual DOM

Every time the DOM changes, the browser has to do two intensive
operations: repaint (visual or content changes to an element that do not
affect the layout and positioning relatively to other elements) and reflow
(recalculate the layout of a portion of the page - or the whole page
layout).

React uses a Virtual DOM to help the browser use less resources when
changes need to be done on a page.
When you call setState() on a Component, specifying a state different
than the previous one, React marks that Component as dirty. This is key:
React only updates when a Component changes the state explicitly.

What happens next is:

React updates the Virtual DOM relative to the components marked as dirty (with some additional checks, like triggering shouldComponentUpdate())
Runs the diffing algorithm to reconcile the changes
Updates the real DOM

Why is the Virtual DOM helpful: batching

The key thing is that React batches much of the changes and performs a
unique update to the real DOM, by changing all the elements that need to
be changed at the same time, so the repaint and reflow the browser must
perform to render the changes are executed just once.

JSX

Introduction to JSX
A JSX primer
Transpiling JSX
JS in JSX
HTML in JSX
You need to close all tags
camelCase is the new standard
class becomes className
The style attribute changes its semantics
Forms
CSS in React
Why is this preferred over plain CSS / SASS / LESS?
Is this the go-to solution?
Forms in JSX
value and defaultValue
A more consistent onChange
JSX auto escapes
White space in JSX
Horizontal white space is trimmed to 1
Vertical white space is eliminated
Adding comments in JSX
Spread attributes

Introduction to JSX
JSX is a technology that was introduced by React.

Although React can work completely fine without using JSX, it’s an ideal technology to work with components, so React benefits a lot from JSX.

At first, you might think that using JSX is like mixing HTML and JavaScript (and as you’ll see, CSS).

But this is not true, because what you are really doing when using JSX
syntax is writing a declarative syntax of what a component UI should be.

And you’re describing that UI not using strings, but instead using
JavaScript, which allows you to do many nice things.

A JSX primer
Here is how you define an h1 tag containing a string:

const element = <h1>Hello, world!</h1>

It looks like a strange mix of JavaScript and HTML, but in reality it’s
all JavaScript.

What looks like HTML is actually syntactic sugar for defining components and their positioning inside the markup.

Inside a JSX expression, attributes can be inserted very easily:

const myId = 'test'
const element = <h1 id={myId}>Hello, world!</h1>

You just need to pay attention when an attribute has a dash ( - ) which is
converted to camelCase syntax instead, and these 2 special cases:

class becomes className
for becomes htmlFor

because they are reserved words in JavaScript.

Here’s a JSX snippet that wraps two components into a div tag:

<div>
<BlogPostsList />
<Sidebar />
</div>

A tag always needs to be closed, because this is more XML than HTML (if
you remember the XHTML days, this will be familiar, but since then the
HTML5 loose syntax won). In this case a self-closing tag is used.

Notice how I wrapped the 2 components into a div . Why? Because the
render() function can only return a single node, so in case you want to
return 2 siblings, just add a parent. It can be any tag, not just div .

Transpiling JSX
A browser cannot execute JavaScript files containing JSX code. They must
be first transformed to regular JS.

How? By doing a process called transpiling.

We already said that JSX is optional, because for every JSX line, a corresponding plain JavaScript alternative is available, and that’s what JSX is transpiled to.

For example the following two constructs are equivalent:

Plain JS

ReactDOM.render(
  React.DOM.div(
    {id: "test"},
    React.DOM.h1(null, "A title"),
    React.DOM.p(null, "A paragraph")
  ),
  document.getElementById('myapp')
)

JSX

ReactDOM.render(
  <div id="test">
    <h1>A title</h1>
    <p>A paragraph</p>
  </div>,
  document.getElementById('myapp')
)

This very basic example is just the starting point, but you can already
see how more complicated the plain JS syntax is compared to using JSX.

At the time of writing the most popular way to perform the transpilation is to use Babel, which is the default option when running create-react-app, so if you use it you don’t have to worry, everything happens under the hood for you.

If you don’t use create-react-app you need to set up Babel yourself.
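A minimal sketch of such a setup, assuming the Babel 6 era presets that were current when this was written (babel-preset-env and babel-preset-react):

npm install --save-dev babel-preset-env babel-preset-react

and then enable them in a .babelrc file:

{
  "presets": ["env", "react"]
}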

JS in JSX
JSX accepts any kind of JavaScript mixed into it.

Whenever you need to add some JS, just put it inside curly braces {} . For
example here’s how to use a constant value defined elsewhere:

const paragraph = 'A paragraph'

ReactDOM.render(
  <div id="test">
    <h1>A title</h1>
    <p>{paragraph}</p>
  </div>,
  document.getElementById('myapp')
)

This is a basic example. Curly braces accept any JS code:

const rows = [{ text: 'First row' }, { text: 'Second row' }]

ReactDOM.render(
  <table>
    <tbody>
      {rows.map((row, i) => {
        return <tr key={i}><td>{row.text}</td></tr>
      })}
    </tbody>
  </table>,
  document.getElementById('myapp')
)

As you can see, we nested JavaScript inside JSX, defined inside JavaScript, nested in JSX. You can go as deep as you need.

HTML in JSX
JSX resembles HTML a lot, but it’s actually an XML syntax.

In the end you render HTML, so you need to know a few differences between
how you would define some things in HTML, and how you define them in JSX.

You need to close all tags

Just like in XHTML, if you have ever used it, you need to close all tags: no more <br>, but instead the self-closing tag <br /> (the same goes for other tags).

camelCase is the new standard

In HTML you’ll find attributes without any case (e.g. onchange). In JSX, they are renamed to their camelCase equivalent:

onchange => onChange
onclick => onClick
onsubmit => onSubmit

class becomes className

Due to the fact that JSX is JavaScript, and class is a reserved word,
you can’t write

<p class="description">

but you need to use

<p className="description">

The same applies to for which is translated to htmlFor .

The style attribute changes its semantics

The style attribute in HTML allows you to specify inline styles. In JSX it no longer accepts a string, and in CSS in JSX you’ll see why it’s a very convenient change.

Forms

Form fields definition and events are changed in JSX to provide more
consistency and utility.

Forms in JSX goes into more details on forms.

CSS in React
JSX provides a cool way to define CSS.

If you have a little experience with HTML inline styles, at first glance
you’ll find yourself pushed back 10 or 15 years, to a world where inline
CSS was completely normal (nowadays it’s demonized and usually just a
“quick fix” go-to solution).

JSX style is not the same thing: first of all, instead of accepting a
string containing CSS properties, the JSX style attribute only accepts
an object. This means you define properties in an object:

var divStyle = {
  color: 'white'
};

ReactDOM.render(<div style={divStyle}>Hello World!</div>, mountNode);

or

ReactDOM.render(<div style={{color: 'white'}}>Hello World!</div>, mountNode);

The CSS values you write in JSX are slightly different from plain CSS:

the property names are camelCased
values are just strings
you separate each tuple with a comma

Why is this preferred over plain CSS / SASS / LESS?

CSS is an unsolved problem. Since its inception, dozens of tools around it rose and then fell. The main problem with CSS is that there is no scoping, and it’s easy to write CSS that is not enforced in any way, thus a “quick fix” can impact elements that should not be touched.

JSX allows components (defined in React for example) to completely encapsulate their style.

Is this the go-to solution?

Inline styles in JSX are good until you need to

1. write media queries
2. style animations
3. reference pseudo classes (e.g. :hover)
4. reference pseudo elements (e.g. ::first-letter)

In short, they cover the basics, but they’re not the final solution.

Forms in JSX
JSX adds some changes to how HTML forms work, with the goal of making
things easier for the developer.

value and defaultValue

The value attribute always holds the current value of the field.
The defaultValue attribute holds the default value that was set when the field was created.

This helps solve some weird behavior of regular DOM interaction, where inspecting input.value and input.getAttribute('value') returns one the current value and one the original default value.

This also applies to the textarea field: instead of

<textarea>Some text</textarea>

use

<textarea defaultValue={"Some text"} />

For select fields, instead of using

<select>
<option value="x" selected>...</option>
</select>

use

<select defaultValue="x">
<option value="x">...</option>
</select>

A more consistent onChange

Passing a function to the onChange attribute, you can subscribe to events on form fields.

It works consistently across fields: even radio, select and checkbox input fields fire an onChange event.

onChange also fires when typing a character into an input or textarea field.
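A minimal sketch of a controlled input using onChange (the component and names are mine, just to illustrate):

class NameForm extends React.Component {
  constructor(props) {
    super(props)
    this.state = { value: '' }
  }

  // keep the field value in the component state
  handleChange = (event) => {
    this.setState({ value: event.target.value })
  }

  render() {
    return <input value={this.state.value} onChange={this.handleChange} />
  }
}
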
JSX auto escapes
To mitigate the ever present risk of XSS exploits, JSX forces automatic
escaping in expressions.

This means that you might run into issues when using an HTML entity in a
string expression.

You expect the following to print © 2017:

<p>{"&copy; 2017"}</p>

But it doesn’t: it prints &copy; 2017, because the string is escaped.

To fix this you can either move the entities outside the expression:

<p>&copy; 2017</p>

or by using a constant that prints the Unicode representation corresponding to the HTML entity you need to print:

<p>{"\u00A9 2017"}</p>

White space in JSX


To add white space in JSX there are 2 rules:

Horizontal white space is trimmed to 1

If you have white space between elements in the same line, it’s all trimmed to 1 white space.

<p>Something       becomes       this</p>

becomes

<p>Something becomes this</p>

Vertical white space is eliminated

<p>
Something
becomes
this
</p>

becomes

<p>Somethingbecomesthis</p>

To fix this problem you need to explicitly add white space, by adding a
space expression like this:

<p>
Something
{' '}becomes
{' '}this
</p>

or by embedding the string in a space expression:

<p>
Something
{' becomes '}
this
</p>

Adding comments in JSX


You can add comments to JSX by using the normal JavaScript comments inside
an expression:

<p>
{/* a comment */}
{
//another comment
}
</p>

Spread attributes
In JSX a common operation is assigning values to attributes.

Instead of doing it manually, e.g.

<div>
<BlogPost title={data.title} date={data.date} />
</div>

you can pass

<div>
<BlogPost {...data} />
</div>

and the properties of the data object will be used as attributes automatically, thanks to the ES2015 spread operator.

STYLED COMPONENTS
Styled Components are the new way to use CSS in modern JavaScript

A brief history
Introducing Styled Components
Installation
Your first styled component
Using props to customize components
Extending an existing Styled Component
It’s Regular CSS
Using Vendor Prefixes
Conclusion

A brief history
Once upon a time, the Web was really simple and CSS didn’t even exist. We
laid out pages using tables and frames. Good times.

Then CSS came to life, and after some time it became clear that frameworks could greatly help, especially in building grids and layouts, with Bootstrap and Foundation playing a big part in this.

Preprocessors like SASS and others helped a lot to slow down the adoption of frameworks and to better organize the code; conventions like BEM and SMACSS grew in usage, especially within teams.

Conventions are not a solution to everything, and they are complex to remember, so in the last few years, with the increasing adoption of JavaScript and build processes in every frontend project, CSS got its way into JavaScript (CSS-in-JS).

New tools explored new ways of doing CSS-in-JS and a few succeeded with increasing popularity:

React Style
jsxstyle
Radium

and more.

Introducing Styled Components


One of the most popular of these tools is Styled Components.

It is meant to be a successor of CSS Modules, a way to write CSS that’s scoped to a single component, and does not leak to any other element in the page.

(more on CSS Modules here (https://css-tricks.com/css-modules-part-1-need/) and here (https://glenmaddern.com/articles/css-modules))

Styled Components allow you to write plain CSS in your components without
worrying about class names collisions.

Installation
Simply install styled-components using npm or yarn:

npm install --save styled-components
yarn add styled-components

That’s it! Now all you have to do is to add this import:

import styled from "styled-components";

Your first styled component


With the styled object imported, you can now start creating Styled
Components. Here’s the first one:
const Button = styled.button`
font-size: 1.5em;
background-color: black;
color: white;
`;

Button is now a React Component in all its greatness.

We created it using a function of the styled object, called button in this case, and passed some CSS properties in a template literal.

Now this component can be rendered in our container using the normal React
syntax:

render(
<Button />
)

Styled Components offer other functions you can use to create other
components, not just button , like section , h1 , input and many
others.

The syntax used, with the backtick, might be weird at first, but it’s
called Tagged Templates (https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Template_literals) , it’s plain
JavaScript and it’s a way to pass an argument to the function.

Using props to customize components


When you pass some props to a Styled Component, it will pass them down to
the DOM node mounted.

For example here’s how we pass the placeholder and type props to an
input component:

const Input = styled.input`
  //...
`;

render(
  <div>
    <Input placeholder="@mxstbr" type="text" />
  </div>
);

This will do just what you think, inserting those props as HTML
attributes.

Props instead of just being blindly passed down to the DOM can also be
used to customize a component based on the prop value. Here’s an example:

const Button = styled.button`
  background: ${props => props.primary ? 'black' : 'white'};
  color: ${props => props.primary ? 'white' : 'black'};
`;

render(
  <div>
    <Button>A normal button</Button>
    <Button>A normal button</Button>
    <Button primary>The primary button</Button>
  </div>
);

Setting the primary prop changes the color of the button.

Extending an existing Styled Component


If you have one component and you want to create a similar one, just
styled slightly differently, you can use extend :

const Button = styled.button`
  color: black;
  //...
`;

const WhiteButton = Button.extend`
  color: white;
`;

render(
  <div>
    <Button>A black button, like all buttons</Button>
    <WhiteButton>A white button</WhiteButton>
  </div>
);

It’s Regular CSS
In Styled Components, you can use the CSS you already know and love. It’s
just plain CSS. It is not pseudo CSS nor inline CSS with its limitations.

You can use media queries, nesting (https://tabatkins.github.io/specs/css-nesting/) and everything you might come up with.
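A small sketch of what that looks like (my own example, not from the original text):

const Button = styled.button`
  color: black;

  /* nesting: style the hover state */
  &:hover {
    color: blue;
  }

  /* a media query inside the component's CSS */
  @media (max-width: 700px) {
    font-size: 0.8em;
  }
`;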

Using Vendor Prefixes


Styled Components automatically add all the vendor prefixes needed, so you
don’t need to worry about this problem.

Conclusion
That’s it for this Styled Components introduction! These concepts will help you get an understanding of Styled Components and get up and running with this way of using CSS in JavaScript.

REDUX

Why you need Redux


When should you use Redux?
Immutable State Tree
Actions
Actions types should be constants
Action creators
Reducers
What is a reducer
What a reducer should not do
Multiple reducers
A simulation of a reducer
The state
A list of actions
A reducer for every part of the state
A reducer for the whole state
The Store
Can I initialize the store with server-side data?
Getting the state
Update the state
Listen to state changes
Data Flow

Why you need Redux


Redux is a state manager that’s usually used along with React, but it’s
not tied to that library - it can be used with other technologies as well,
but we’ll stick to React for the sake of the explanation.

React has its own way to manage state, as you can read on the React
Beginner’s Guide, where I introduce how you can manage State in React.
What I didn’t say in that lesson is that that approach does not scale.

Moving the state up in the tree works in simple cases, but in a complex app you might find yourself moving almost all the state up, and then down using props, so a better approach is to use an external global store.

Redux is a way to manage an application state.

There are a few concepts to grasp, but once you do, Redux is a very simple
approach to the problem.

Redux is very popular with React applications, but it’s in no way unique
to React: there are bindings for nearly any popular framework. That said,
I’ll make some examples using React as it is its primary use case.

When should you use Redux?


Redux is ideal for medium to big apps, and you should only use it when you
have trouble managing the state with the default state management of React
or the other library you use.

Simple apps should not need it at all (and there’s nothing wrong with
simple apps)

Immutable State Tree


In Redux, the whole state of the application is represented by one
JavaScript object, called State or State Tree.

We call it the Immutable State Tree because it is read only: it can’t be changed directly.

It can only be changed by dispatching an Action.

Actions
An Action is a JavaScript object that describes a change in a minimalistic
way (just with the information needed):

{
  type: 'CLICKED_SIDEBAR'
}

// e.g. with more data
{
  type: 'SELECTED_USER',
  userId: 232
}

The only requirement of an action object is having a type property, whose value is usually a string.

Actions types should be constants

In a simple app an action type can be defined as a string, as I did in the example in the previous lesson.

When the app grows, it’s best to use constants:

const ADD_ITEM = 'ADD_ITEM'
const action = { type: ADD_ITEM, title: 'Third item' }

and to separate actions in their own files, and import them

import { ADD_ITEM, REMOVE_ITEM } from './actions'

Action creators

Action creators are functions that create actions.

function addItem(t) {
  return {
    type: ADD_ITEM,
    title: t
  }
}

You usually run action creators in combination with triggering the dispatcher:

dispatch(addItem('Milk'))

or by defining an action dispatcher function:

const dispatchAddItem = i => dispatch(addItem(i))
dispatchAddItem('Milk')

Reducers
When an action is fired, something must happen, the state of the
application must change.

This is the job of reducers.

What is a reducer

A reducer is a pure function that calculates the next State Tree based on
the previous State Tree, and the action dispatched.

(currentState, action) => newState

A pure function takes an input and returns an output without changing the
input nor anything else. Thus, a reducer returns a completely new state
tree object that substitutes the previous one.

What a reducer should not do

A reducer should be a pure function, so it should:

never mutate its arguments
never mutate the state, but instead create a new one with Object.assign({}, ...)
never generate side effects (no API calls changing anything)
never call non-pure functions, functions that change their output based on factors other than their input (e.g. Date.now() or Math.random())

There is no enforcement, but you should stick to the rules.
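For example, a sketch of a reducer that updates the state without mutating it, using Object.assign (the SET_TITLE action is made up, just for illustration):

const reducer = (state = { title: '' }, action) => {
  switch (action.type) {
    case 'SET_TITLE':
      // create a brand new object instead of mutating `state`
      return Object.assign({}, state, { title: action.title })
    default:
      return state
  }
}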

Multiple reducers

Since the state of a complex app could be really wide, there is not a single reducer, but many reducers, each handling its own kind of action.

A simulation of a reducer

At its core, Redux can be simplified with this simple model:

The state

{
  list: [
    { title: "First item" },
    { title: "Second item" },
  ],
  title: 'Groceries list'
}

A list of actions

{ type: 'ADD_ITEM', title: 'Third item' }
{ type: 'REMOVE_ITEM', index: 1 }
{ type: 'CHANGE_LIST_TITLE', title: 'Road trip list' }

A reducer for every part of the state

const title = (state = '', action) => {
  if (action.type === 'CHANGE_LIST_TITLE') {
    return action.title
  } else {
    return state
  }
}

const list = (state = [], action) => {
  switch (action.type) {
    case 'ADD_ITEM':
      return state.concat([{ title: action.title }])
    case 'REMOVE_ITEM':
      return state.filter((item, index) => index !== action.index)
    default:
      return state
  }
}

A reducer for the whole state

const listManager = (state = {}, action) => {
  return {
    title: title(state.title, action),
    list: list(state.list, action)
  }
}

The Store
The Store is an object that:

holds the state of the app
exposes the state via getState()
allows to update the state via dispatch()
allows to (un)register as a state change listener using subscribe()

A store is unique in the app.

Here is how a store for the listManager app is created:

import { createStore } from 'redux'
import listManager from './reducers'

let store = createStore(listManager)

Can I initialize the store with server-side data?

Sure, just pass a starting state:

let store = createStore(listManager, preexistingState)

Getting the state

store.getState()

Update the state

store.dispatch(addItem('Something'))

Listen to state changes

const unsubscribe = store.subscribe(() => {
  const newState = store.getState()
  console.log(newState)
})

unsubscribe()

Data Flow
Data flow in Redux is always unidirectional.

You call dispatch() on the Store, passing an Action.

The Store takes care of passing the Action to the Reducer, generating the
next State.

The Store updates the State and alerts all the Listeners.
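Putting it all together, a minimal sketch of that flow using the listManager store created above:

import { createStore } from 'redux'
import listManager from './reducers'

const store = createStore(listManager)

// listen to state changes
const unsubscribe = store.subscribe(() => {
  console.log('New state:', store.getState())
})

// dispatch an action: the reducer runs and listeners are alerted
store.dispatch({ type: 'ADD_ITEM', title: 'First item' })

unsubscribe()
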
REDUX SAGA
Redux Saga is a library used to handle side effects in Redux

When to use Redux Saga


Basic example of using Redux Saga
How it works behind the scenes
Basic Helpers
takeEvery()
takeLatest()
take()
put()
call()
Running effects in parallel
all()
race()

When to use Redux Saga


In an application using Redux, when you fire an action something changes
in the state of the app.

As this happens, you might need to do something that derives from this
state change.

For example you might want to:

make an HTTP call to a server
send a WebSocket event
fetch some data from a GraphQL server
save something to the cache or browser local storage

…you get the idea.


Those are all things that don’t really relate to the app state, or are async, and you need to move them into a place different than your actions or reducers (while you technically could put them there, it’s not a good way to keep a clean codebase).

Enter Redux Saga, a Redux middleware helping you with side effects.

Basic example of using Redux Saga


To avoid diving into too much theory before showing some actual code, I’ll briefly present how I solved a problem I faced when building a sample app.

In a chat room, when a user writes a message I immediately show the message on the screen, to provide prompt feedback. This is done through a Redux Action:

const addMessage = (message, author) => ({
  type: 'ADD_MESSAGE',
  message,
  author
})

and the state is changed through a reducer:

const messages = (state = [], action) => {
  switch (action.type) {
    case 'ADD_MESSAGE':
      return state.concat([{
        message: action.message,
        author: action.author
      }])
    default:
      return state
  }
}

You initialize Redux Saga by first importing it, then by applying a saga
as a middleware to the Redux Store:

//...
import createSagaMiddleware from 'redux-saga'
//...

Then we create the middleware and apply it to our newly created Redux Store:

const sagaMiddleware = createSagaMiddleware()

const store = createStore(
  reducers,
  applyMiddleware(sagaMiddleware)
)

The last step is running the saga. We import it and pass it to the run
method of the middleware:

import handleNewMessage from './sagas'
//...
sagaMiddleware.run(handleNewMessage)

We just need to write the saga, in ./sagas/index.js :

import { takeEvery } from 'redux-saga/effects'

const handleNewMessage = function* handleNewMessage(params) {
  const socket = new WebSocket('ws://localhost:8989')
  yield takeEvery('ADD_MESSAGE', (action) => {
    socket.send(JSON.stringify(action))
  })
}

export default handleNewMessage

What this code means is: every time the ADD_MESSAGE action fires, we send a message to the WebSockets server, which in this case responds on localhost:8989.

Notice the use of function*, which is not a normal function, but a generator.

How it works behind the scenes


Being a Redux Middleware, Redux Saga can intercept Redux Actions, and inject its own functionality.

There are a few concepts to grasp, and here are the main keywords that
you’ll want to stick in your head, altogether: saga, generator,
middleware, promise, pause, resume, effect, dispatch, action, fulfilled,
resolved, yield, yielded.

A saga is some “story” that reacts to an effect that your code is causing. That might be one of the things we talked about before, like an HTTP request or some procedure that saves to the cache.

We create a middleware with a list of sagas to run, which can be one or more, and we connect this middleware to the Redux store.

A saga is a generator function. When a promise is run and yielded, the middleware suspends the saga until the promise is resolved.

Once the promise is resolved the middleware resumes the saga, until the
next yield statement is found, and there it is suspended again until its
promise resolves.

Inside the saga code, you will generate effects using a few special helper
functions provided by the redux-saga package. To start with, we can
list:

takeEvery()
takeLatest()
take()
call()
put()

When an effect is executed, the saga is paused until the effect is fulfilled.

For example:

import { takeEvery } from 'redux-saga/effects'

const handleNewMessage = function* handleNewMessage(params) {
  const socket = new WebSocket('ws://localhost:8989')
  yield takeEvery('ADD_MESSAGE', (action) => {
    socket.send(JSON.stringify(action))
  })
}

export default handleNewMessage

When the middleware executes the handleNewMessage saga, it stops at
the yield takeEvery instruction and waits (asynchronously, of course)
until the ADD_MESSAGE action is dispatched. Then it runs its callback,
and the saga can resume.

Basic Helpers
Helpers are abstractions on top of the low-level saga APIs.

Let’s introduce the most basic helpers you can use to run your effects:

takeEvery()
takeLatest()
take()
put()
call()

takeEvery()

takeEvery() , used in some examples, is one of those helpers.

In the code:

import { takeEvery } from 'redux-saga/effects'

function* watchMessages() {
  yield takeEvery('ADD_MESSAGE', postMessageToServer)
}

The watchMessages generator pauses until an ADD_MESSAGE action
fires, and every time one fires, it’s going to call the
postMessageToServer function, infinitely and concurrently (there is
no need for postMessageToServer to terminate its execution before a
new one can run).
takeLatest()

Another popular helper is takeLatest() , which is very similar to
takeEvery() but only allows one function handler to run at a time,
avoiding concurrency. If another action is fired while the handler is
still running, it will cancel it, and run again with the latest data
available.

As with takeEvery() , the generator never stops and continues to run the
effect when the specified action occurs.
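As a hedged sketch of where this is useful, consider a search box that fires a SEARCH_CHANGED action on every keystroke (the action type and the fetchSearchResults worker saga are hypothetical names, not from the examples above):

import { takeLatest } from 'redux-saga/effects'

function* watchSearch() {
  // Only the handler for the most recent SEARCH_CHANGED action
  // survives; older in-flight handlers are cancelled
  yield takeLatest('SEARCH_CHANGED', fetchSearchResults)
}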

take()

take() is different in that it only waits a single time. When the action
it’s waiting for occurs, the promise resolves and the iterator is resumed,
so it can go on to the next instruction set.
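A classic sketch of take() is a login/logout flow, where each action must be waited for once, in order (LOGIN and LOGOUT are hypothetical action types):

import { take } from 'redux-saga/effects'

function* loginFlow() {
  while (true) {
    yield take('LOGIN')
    // ...perform the login...
    yield take('LOGOUT')
    // ...perform the logout...
  }
}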

put()

Dispatches an action to the Redux store. Instead of passing in the Redux
store or the dispatch action to the saga, you can just use put() :

yield put({ type: 'INCREMENT' })


yield put({ type: "USER_FETCH_SUCCEEDED", data: data })

which returns a plain object that you can easily inspect in your tests
(more on testing later).

call()

When you want to call some function in a saga, you can do so by using a
yielded plain function call that returns a promise:

delay(1000)

but this does not play nicely with tests. Instead, call() allows you to
wrap that function call, returning an object that can be easily
inspected:

call(delay, 1000)

returns

{ CALL: {fn: delay, args: [1000]}}
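Putting a few of these helpers together, here is a minimal sketch of a saga that fetches a user when a USER_FETCH_REQUESTED action fires, and then dispatches the USER_FETCH_SUCCEEDED action we used in the put() example above (the /api/user endpoint and the action shapes are assumptions for illustration):

import { takeEvery, call, put } from 'redux-saga/effects'

// Hypothetical promise-returning API function
const fetchUserFromApi = id =>
  fetch(`/api/user/${id}`).then(response => response.json())

function* fetchUser(action) {
  try {
    // The saga is suspended here until the promise resolves
    const data = yield call(fetchUserFromApi, action.id)
    yield put({ type: 'USER_FETCH_SUCCEEDED', data: data })
  } catch (error) {
    yield put({ type: 'USER_FETCH_FAILED', error })
  }
}

function* watchFetchUser() {
  yield takeEvery('USER_FETCH_REQUESTED', fetchUser)
}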

Running effects in parallel


Running effects in parallel is possible using all() and race() , which
are very different in what they do.

all()

If you write

import { call } from 'redux-saga/effects'

const todos = yield call(fetch, '/api/todos')
const user = yield call(fetch, '/api/user')

the second fetch() call won’t be executed until the first one succeeds.

To execute them in parallel, wrap them into all() :

import { all, call } from 'redux-saga/effects'

const [todos, user] = yield all([
  call(fetch, '/api/todos'),
  call(fetch, '/api/user')
])

all() won’t be resolved until both call() return.

race()

race() differs from all() by not waiting for all of the helper calls
to return. It just waits for one to return, and it’s done.

It’s a race to see which one finishes first, and then we forget about the
other participants.

It’s typically used to cancel a background task that runs forever until
something occurs:

import { race, call, take } from 'redux-saga/effects'

function* someBackgroundTask() {
  while (true) {
    //...
  }
}

yield race({
  bgTask: call(someBackgroundTask),
  cancel: take('CANCEL_TASK')
})

When the CANCEL_TASK action is emitted, we stop the other task, which
would otherwise run forever.
GRAPHQL
GRAPHQL
GraphQL is a query language for your API, and a set of server-side
runtimes (implemented in various backend languages) for executing
queries

What is GraphQL
GraphQL Principles
GraphQL vs REST
REST is a concept
A single endpoint
Tailored to your needs
GraphQL makes it easy to monitor for fields usage
Access nested data resources
Types
Which one is better?
GraphQL Queries
Fields and arguments
Aliases
Fragments
GraphQL Variables
Making variables required
Specifying a default value for a variable
GraphQL Directives
@include(if: Boolean)
@skip(if: Boolean)

What is GraphQL
GraphQL is the new frontier in APIs (Application Programming Interfaces).

It’s a query language for your API, and a set of server-side runtimes
(implemented in various backend languages) for executing queries.
It’s not tied to a specific technology: you can implement it in any
language.

It is a methodology that directly competes with REST (Representational
State Transfer) APIs, much like REST competed with SOAP at first.

GraphQL was developed at Facebook, like many of the technologies that are
shaking the world lately, like React and React Native, and it was publicly
launched in 2015 - although Facebook used it internally for a few years
before.

Many big companies are adopting GraphQL besides Facebook, including GitHub,
Pinterest, Twitter, Sky, The New York Times, Shopify, Yelp and thousands
of others.

GraphQL Principles
GraphQL exposes a single endpoint.

You send a query to that endpoint by using a special Query Language
syntax. That query is just a string.

The server responds to a query by providing a JSON object.

Let’s see a first example of such a query. This query gets the name of a
person with id=1 :

GET /graphql?query={ person(id: "1") { name } }

or simply

{
person(id: "1") {
name
}
}

We’ll get this JSON response back:


{
"name": "Tony"
}

Let’s add a bit more complexity: we get the name of the person, and the
city where the person lives, by extracting it from the address object.
We don’t care about other details of the address, and the server does not
return them back to us.

GET /graphql?query={ person(id: "1") { name, address { city } } }

or

{
person(id: "1") {
name
address {
city
}
}
}

{
"name": "Tony",
"address": {
"city": "York"
}
}

As you can see, the data we get back has basically the same shape as the
request we sent, filled with values.

GraphQL vs REST
Since REST is such a popular (one might even say universal) approach to
building APIs, it’s fair to assume you are familiar with it, so let’s look
at the differences between GraphQL and REST.

REST is a concept
REST is a de-facto architecture standard but it actually has no
specification and tons of unofficial definitions. GraphQL has a
specification (http://facebook.github.io/graphql/) draft, and it’s a Query
Language (http://graphql.org/learn/queries/) instead of an architecture,
with a well defined set of tools built around it (and a flourishing
ecosystem).

While REST is built on top of an existing architecture, which in the most
common scenarios is HTTP, GraphQL builds its own set of conventions. This
can be an advantage or not, since REST benefits for free from caching at
the HTTP layer.

A single endpoint

GraphQL has only one endpoint, where you send all your queries. With a
REST approach, you create multiple endpoints and use HTTP verbs to
distinguish read actions (GET) and write actions (POST, PUT, DELETE).
GraphQL does not use HTTP verbs to determine the request type.

Tailored to your needs

With REST, you generally cannot choose what the server returns back to
you, unless the server implements partial responses using sparse fieldsets
(http://jsonapi.org/format/#fetching-sparse-fieldsets) , and clients use
that feature. The API maintainer cannot enforce such filtering.

The API will usually return much more information than what you need,
unless you control the API server as well and tailor your responses
for each different request.

With GraphQL you explicitly request just the information you need. You
don’t “opt out” of the full response default; rather, it’s mandatory to
pick the fields you want.

This helps save resources on the server, since you most probably need
less processing, and it also saves network resources, since the payload to
transfer is smaller.

GraphQL makes it easy to monitor for fields usage

With REST, unless forcing sparse fieldsets, there is no way to determine
if a field is used by clients, so when it comes to refactoring or
deprecating, it’s impossible to determine actual usage.

GraphQL makes it possible to track which fields are used by clients.

Access nested data resources

GraphQL allows you to make far fewer network calls.

Let’s do an example: you need to access the names of the friends of a
person. If your REST API exposes a /person endpoint, which returns a
person object with a list of friends, you generally first get the person
information by doing GET /person/1 , which contains a list of the IDs of
their friends.

Unless the list of friends of a person already contains the friend name,
with 100 friends you’d need to do 101 HTTP requests to the /person
endpoint, which is a huge time cost, and also a resource intensive
operation.

With GraphQL, you need only one request, which asks for the names of the
friends of a person.
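Assuming the API exposes a friends field on person (a hypothetical schema, matching the example above), such a request could look like:

{
  person(id: "1") {
    name
    friends {
      name
    }
  }
}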

Types

A REST API is based on JSON, which cannot provide type control. GraphQL
has a Type System.

Which one is better?

Organizations around the world are questioning their API technology
choices and they are trying to find out if migrating from REST to GraphQL
is best for their needs.

GraphQL is a perfect fit when you need to expose complex data
representations, and when clients might need only a subset of the data, or
they regularly perform nested queries to get the data they need.

As with programming languages, there is no single winner: it all depends
on your needs.

GraphQL Queries
In this article you’ll learn how a GraphQL query is composed.

The concepts I’ll introduce are:

fields and arguments
aliases
fragments

Fields and arguments

Take this simple GraphQL query:

{
person(id: "1") {
name
}
}

In this query you see 2 fields, and 1 argument.

The field person returns an Object which has another field in it, a
String.

The argument allows us to specify which person we want to reference. We
pass an id , but we could just as well pass a name argument, if the API
we talk to has the option to find a person by name.

Arguments are not limited to any particular field. We could have a
friends field in person that lists the friends of that person, and it
could have a limit argument, to specify how many we want the API to
return:

{
person(id: "1") {
name
friends(limit: 100)
}
}

Aliases

You can ask the API to return a field with a different name, for example:

{
owner: person(id: "1") {
fullname: name
}
}

will return

{
"data": {
"owner": {
"fullname": "Tony"
}
}
}

This feature, besides enabling more ad-hoc naming for your client code, is
the only thing that can make the query work if you need to reference the
same endpoint twice in the same query:

{
owner: person(id: "1") {
fullname: name
}
first_employee: person(id: "2") {
fullname: name
}
}

Fragments

In the above query we replicated the person structure. Fragments allow us
to specify the structure once (especially useful with many fields):

{
owner: person(id: "1") {
...personFields
}
first_employee: person(id: "2") {
...personFields
}
}

fragment personFields on person {
  fullname: name
}

GraphQL Variables
More complex GraphQL queries need to use variables, a way to dynamically
specify a value that is used inside a query.

In this case we added the person id as a string inside the query:

{
owner: person(id: "1") {
fullname: name
}
}

The id will most probably change dynamically in our program, so we need a
way to pass it, and not with string interpolation.

With variables, the same query can be written as


query GetOwner($id: String) {
owner: person(id: $id) {
fullname: name
}
}

{
"id": "1"
}

In this snippet we have assigned the GetOwner name to our query. Think
of it as a named function, whereas previously we had the equivalent of an
anonymous function. Named queries are useful when you have lots of queries
in your application.

The query definition with the variables looks like a function definition,
and it works in an equivalent way.

Making variables required

Appending a ! to the type:

query GetOwner($id: String!)

instead of $id: String will make the $id variable required.

Specifying a default value for a variable

You can specify a default value using this syntax:

query GetOwner($id: String = "1")

GraphQL Directives
Directives let you include or exclude a field if a variable is true or
false.

query GetPerson($id: String) {
  person(id: $id) {
    fullname: name,
    address @include(if: $getAddress) {
      city
      street
      country
    }
  }
}

{
"id": "1",
"getAddress": false
}

In this case, if the getAddress variable we pass is true, we also get
the address field; otherwise not.

We have 2 directives available: include , which we have just seen
(includes if true), and skip , which is the opposite (skips if true).

@include(if: Boolean)

query GetPerson($id: String) {
  person(id: $id) {
    fullname: name,
    address @include(if: $getAddress) {
      city
      street
      country
    }
  }
}

{
"id": "1",
"getAddress": false
}

@skip(if: Boolean)

query GetPerson($id: String) {
  person(id: $id) {
    fullname: name,
    address @skip(if: $excludeAddress) {
      city
      street
      country
    }
  }
}

{
  "id": "1",
  "excludeAddress": false
}
APOLLO
Apollo is a suite of tools to create a GraphQL server, and to consume a
GraphQL API. Let's explore Apollo in detail, looking at both Apollo Client
and Apollo Server.


Introduction to Apollo
Apollo Client
Start a React app
Get started with Apollo Boost
Create an ApolloClient object
Apollo Links
Caching
Use ApolloProvider
The gql template tag
Perform a GraphQL request
Obtain an access token for the API
Use an Apollo Link to authenticate
Render a GraphQL query result set in a component
Apollo Server
Launchpad
The Apollo Server Hello World
Run the GraphQL Server locally
Your first Apollo Server code
Add a GraphiQL endpoint

Introduction to Apollo
In the last few years GraphQL has become hugely popular as an alternative
approach to building an API over REST.

GraphQL is a great way to let the client decide which data they want to be
transmitted over the network, rather than having the server send a fixed
set of data.

Also, it allows you to specify nested resources, greatly reducing the back
and forth sometimes required when dealing with REST APIs.

Apollo is a team and community that builds on top of GraphQL, and provides
different tools that help you build your projects.

The tools provided by Apollo are mainly 3: Client, Server, Engine.

Apollo Client helps you consume a GraphQL API, with support for the most
popular frontend web technologies including React, Vue, Angular, Ember,
Meteor and more, and native development on iOS and Android.

Apollo Server is the server part of GraphQL, which interfaces with your
backend and sends responses back to the client requests.

Apollo Engine is a hosted infrastructure (SaaS) that serves as a middle
man between the client and your server, providing caching, performance
reporting, load measurement, error tracking, schema field usage
statistics, historical stats and many more goodies. It’s currently free up
to 1 million requests per month, and it’s the only part of Apollo that’s
not open source and free; it provides funding for the open source part of
the project.

It’s worth noting that those 3 tools are not linked together in any way,
and you can use just Apollo Client to interface with a 3rd party API, or
serve an API using Apollo Server without having a client at all, for
example.

It’s all compatible with the GraphQL standard specification, so there is
no proprietary or incompatible tech in Apollo.

But it’s very convenient to have all those tools together under a single
roof, a complete suite for all your GraphQL-related needs.
Apollo strives to be easy to use and easy to contribute to.

Apollo Client and Apollo Server are both community projects, built by the
community, for the community. Apollo is backed by the Meteor Development
Group (https://www.meteor.io/) , the company behind Meteor, a very popular
JavaScript framework.

Apollo is focused on keeping things simple. This is key to the
success of a technology that wants to become popular, as some tech or
framework or library might be overkill for 99% of the small or medium
companies out there, and just suited for the big companies with very
complex needs.

Apollo Client
Apollo Client (https://www.apollographql.com/client) is the leading
JavaScript client for GraphQL. Community-driven, it’s designed to let you
build UI components that interface with GraphQL data, either in displaying
data, or in performing mutations when certain actions happen.

You don’t need to change everything in your application to make use of
Apollo Client. You can start with just one tiny layer, one request, and
expand from there.

Most of all, Apollo Client is built to be simple, small and flexible from
the ground up.

In this post I’m going to detail the process of using Apollo Client within
a React application.

I’ll use the GitHub GraphQL API (https://developer.github.com/v4/) as a
server.

Start a React app


I use create-react-app (https://github.com/facebook/create-react-app)
to set up the React app, which is very convenient and just adds the bare
bones of what we need:

npx create-react-app myapp

npx is a command available in the latest npm versions. Update npm
if you do not have this command.

and start the app local server with

yarn start

Open src/index.js :

import React from 'react'
import ReactDOM from 'react-dom'
import './index.css'
import App from './App'
import registerServiceWorker from './registerServiceWorker'

ReactDOM.render(<App />, document.getElementById('root'))
registerServiceWorker()

and remove all this content.

Get started with Apollo Boost

Apollo Boost is the easiest way to start using Apollo Client on a new
project. We’ll install that in addition to react-apollo and graphql .

In the console, run

yarn add apollo-boost react-apollo graphql

or with npm:

npm install apollo-boost react-apollo graphql --save


Create an ApolloClient object

You start by importing ApolloClient from apollo-client in index.js :

import { ApolloClient } from 'apollo-client'

const client = new ApolloClient()

By default Apollo Client uses the /graphql endpoint on the current
host, so let’s use an Apollo Link to specify the details of the connection
to the GraphQL server by setting the GraphQL endpoint URI.

Apollo Links

An Apollo Link is represented by an HttpLink object, which we import
from apollo-link-http .

Apollo Link provides us a way to describe how we want to get the result of
a GraphQL operation, and what we want to do with the response.

In short, you create multiple Apollo Link instances that all act on a
GraphQL request one after another, providing the final result you want.
Some Links can give you the option of retrying a request if not
successful, batching and much more.

We’ll add an Apollo Link to our Apollo Client instance to use the GitHub
GraphQL endpoint URI https://api.github.com/graphql

import { ApolloClient } from 'apollo-client'
import { HttpLink } from 'apollo-link-http'

const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://api.github.com/graphql' })
})

Caching
We’re not done yet. Before having a working example we must also tell
ApolloClient which caching strategy
(https://www.apollographql.com/docs/react/basics/caching.html) to use:
InMemoryCache is the default and it’s a good one to start.

import { ApolloClient } from 'apollo-client'
import { HttpLink } from 'apollo-link-http'
import { InMemoryCache } from 'apollo-cache-inmemory'

const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://api.github.com/graphql' }),
  cache: new InMemoryCache()
})

Use ApolloProvider

Now we need to connect the Apollo Client to our component tree. We do so
using ApolloProvider , by wrapping our application component in the
main React file:

import React from 'react'
import ReactDOM from 'react-dom'
import { ApolloClient } from 'apollo-client'
import { HttpLink } from 'apollo-link-http'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { ApolloProvider } from 'react-apollo'

import App from './App'

const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://api.github.com/graphql' }),
  cache: new InMemoryCache()
})

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
)

This is enough to render the default create-react-app screen, with
Apollo Client initialized:

create-react-app running Apollo Client


The gql template tag

We’re now ready to do something with Apollo Client, and we’re going to
fetch some data from the GitHub API and render it.

To do so we need to import the gql template tag:

import gql from 'graphql-tag'

Any GraphQL query will be built using this template tag, like this:

const query = gql`
  query {
    ...
  }
`

Perform a GraphQL request

gql was the last item we needed in our toolset. We’re now ready to fetch
some data from the GitHub API and render it.

Obtain an access token for the API

The first thing to do is to obtain a personal access token
(https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/)
from GitHub.

GitHub makes it easy by providing an interface from which you select any
permission you might need:

GitHub interface to create a new personal access token

For the sake of this example tutorial you don’t need any of those
permissions; they are meant for access to private user data, but we will
just query the public repositories data.

The token you get is an OAuth 2.0 Bearer token.

You can easily test it by running from the command line:

$ curl -H "Authorization: bearer ***_YOUR_TOKEN_HERE_***" -X POST -d " \
{ \
\"query\": \"query { viewer { login }}\" \
} \
" https://api.github.com/graphql

which should give you the result

{"data":{"viewer":{"login":"***_YOUR_LOGIN_NAME_***"}}}

or

{
"message": "Bad credentials",
"documentation_url": "https://developer.github.com/v4"
}

if something went wrong.

Use an Apollo Link to authenticate

So, we need to send the Authorization header along with our GraphQL
request, just like we did in the curl request above.

How we do this with Apollo Client is by creating an Apollo Link
middleware. Start by installing apollo-link-context
(https://www.npmjs.com/package/apollo-link-context) :

npm install apollo-link-context

This package allows us to add an authentication mechanism by setting the
context of our requests.

We can use it in this code by referencing the setContext function in
this way:

const authLink = setContext((_, { headers }) => {
  const token = '***YOUR_TOKEN***'

  return {
    headers: {
      ...headers,
      authorization: `Bearer ${token}`
    }
  }
})

and once we have this new Apollo Link, we can compose
(https://www.apollographql.com/docs/link/composition.html) it with the
HttpLink we already had, by using the concat() method on a link:

const link = authLink.concat(httpLink)

Here is the full code for the src/index.js file with the code we have
right now:

import React from 'react'
import ReactDOM from 'react-dom'
import { ApolloClient } from 'apollo-client'
import { HttpLink } from 'apollo-link-http'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { ApolloProvider } from 'react-apollo'
import { setContext } from 'apollo-link-context'
import gql from 'graphql-tag'

import App from './App'

const httpLink = new HttpLink({ uri: 'https://api.github.com/graphql' })

const authLink = setContext((_, { headers }) => {
  const token = '***YOUR_TOKEN***'

  return {
    headers: {
      ...headers,
      authorization: `Bearer ${token}`
    }
  }
})

const link = authLink.concat(httpLink)

const client = new ApolloClient({
  link: link,
  cache: new InMemoryCache()
})

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
)

WARNING ⚠ Keep in mind this code is an example for educational
purposes: it exposes your GitHub access token to the world in your
frontend-facing code. Production code needs to keep this token
private.

We can now make the first GraphQL request at the bottom of this file, and
this sample query asks for the names and the owners of the 10 most popular
repositories, with more than 50k stars:

const POPULAR_REPOSITORIES_LIST = gql`
  {
    search(query: "stars:>50000", type: REPOSITORY, first: 10) {
      repositoryCount
      edges {
        node {
          ... on Repository {
            name
            owner {
              login
            }
            stargazers {
              totalCount
            }
          }
        }
      }
    }
  }
`

client.query({ query: POPULAR_REPOSITORIES_LIST }).then(console.log)

Running this code successfully returns the result of our query in the
browser console:

console log of executed query

Render a GraphQL query result set in a component


What we saw up to now is already cool. What’s even cooler is using the
GraphQL result set to render your components.

We leave to Apollo Client the burden (or joy) of fetching the data and
handling all the low-level stuff, and we can focus on showing the data, by
using the graphql component enhancer offered by react-apollo :

import React from 'react'
import { graphql } from 'react-apollo'
import { gql } from 'apollo-boost'

const POPULAR_REPOSITORIES_LIST = gql`
  {
    search(query: "stars:>50000", type: REPOSITORY, first: 10) {
      repositoryCount
      edges {
        node {
          ... on Repository {
            name
            owner {
              login
            }
            stargazers {
              totalCount
            }
          }
        }
      }
    }
  }
`

const App = graphql(POPULAR_REPOSITORIES_LIST)(props =>
  <ul>
    {props.data.loading ? '' : props.data.search.edges.map((row, i) =>
      <li key={row.node.owner.login + '-' + row.node.name}>
        {row.node.owner.login} / {row.node.name}: {' '}
        <strong>
          {row.node.stargazers.totalCount}
        </strong>
      </li>
    )}
  </ul>
)

export default App

Here is the result of our query rendered in the component:

The result of our query rendered in the component


Apollo Server
A GraphQL server has the job of accepting incoming requests on an
endpoint, interpreting the request and looking up any data that’s
necessary to fulfill the client needs.

There are tons of different GraphQL server implementations for every
possible language.

Apollo Server is a GraphQL server implementation for JavaScript, in
particular for the Node.js platform.

It supports many popular Node.js frameworks, including:

Express (https://expressjs.com/)
Hapi (https://hapijs.com/)
Koa (http://koajs.com/)
Restify (http://restify.com/)

Apollo Server basically gives us 3 things:

gives us a way to describe our data with a schema.
provides the framework for resolvers, which are functions we write to
fetch the data needed to fulfill a request.
facilitates handling authentication for our API.

For the sake of learning the basics of Apollo Server, we’re not going to
use any of the supported Node.js frameworks. Instead, we’ll be using
something that was built by the Apollo team, something really great which
will be the base of our learning: Launchpad.

Launchpad

Launchpad (https://launchpad.graphql.com) is a project that’s part of the


Apollo umbrella of products, and it’s a pretty amazing tool that allows us
to write code on the cloud and create a an Apollo Server online, just like
we’d run a snippet of code on Codepen, JSFiddle or JSBin.
Except that instead of building a visual tool that’s going to be isolated
there, and meant just as a showcase or as a learning tool, with Launchpad
we create a GraphQL API and it’s going to be publicly accessible.

Every project on Launchpad is called a pad and has its own GraphQL
endpoint URL, like:

https://1jzxrj129.lp.gql.zone/graphql

Once you build a pad, Launchpad gives you the option to download the full
code of the Node.js app that’s running it, and you just need to run
npm install and npm start to have a local copy of your Apollo
GraphQL Server.

To summarize, it’s a great tool to learn, share, and prototype.

The Apollo Server Hello World

Every time you create a new Launchpad pad, you are presented with the
Hello, World! of Apollo Server. Let’s dive into it.

First you import the makeExecutableSchema function from
graphql-tools .

import { makeExecutableSchema } from 'graphql-tools'

This function is used to create a GraphQLSchema object, by providing
it a schema definition (written in the GraphQL schema language
(http://graphql.org/learn/schema/) ) and a set of resolvers.

A schema definition is a template literal string containing the
description of our query and the types associated with each field:

const typeDefs = `
  type Query {
    hello: String
  }
`

A resolver is an object that maps fields in the schema to resolver
functions, able to look up data to respond to a query.

Here is a simple resolver containing the resolver function for the hello
field, which simply returns the Hello world! string:

const resolvers = {
Query: {
hello: (root, args, context) => {
return 'Hello world!'
}
}
}

Given those 2 elements, the schema definition and the resolver, we use the
makeExecutableSchema function we imported previously to get a
GraphQLSchema object, which we assign to the schema const.

export const schema = makeExecutableSchema({
  typeDefs,
  resolvers
})

This is all you need to serve a simple read-only API. Launchpad takes care
of the tiny details.

Here is the full code for the simple Hello World example:

import { makeExecutableSchema } from 'graphql-tools'

const typeDefs = `
type Query {
hello: String
}
`

const resolvers = {
Query: {
hello: (root, args, context) => {
return 'Hello world!'
}
}
}

export const schema = makeExecutableSchema({
  typeDefs,
  resolvers
})
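Resolver functions also receive the arguments of the field, so a pad can go beyond a static string. Here is a hedged sketch of a person field backed by an in-memory object (the Person type and the data are hypothetical, not part of the Launchpad Hello World):

const typeDefs = `
  type Person {
    name: String
  }
  type Query {
    hello: String
    person(id: String!): Person
  }
`

const resolvers = {
  Query: {
    hello: (root, args, context) => {
      return 'Hello world!'
    },
    // args contains the field arguments, e.g. { id: "1" }
    person: (root, args, context) => {
      const people = { '1': { name: 'Tony' } } // hypothetical data source
      return people[args.id]
    }
  }
}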

Launchpad provides a great built-in tool to consume the API:

The Launchpad API client

and as said previously the API is publicly accessible; you just need
to log in and save your pad.

I made a pad that exposes its endpoint at
https://kqwwkp0pr7.lp.gql.zone/graphql , so let’s try it using
curl from the command line:

$ curl \
-X POST \
-H "Content-Type: application/json" \
--data '{ "query": "{ hello }" }' \
https://kqwwkp0pr7.lp.gql.zone/graphql

which successfully gives us the result we expect:

{
"data": {
"hello": "Hello world!"
}
}

Run the GraphQL Server locally

We mentioned that anything you create on Launchpad is downloadable, so
let’s go on.

The package is composed of 2 files. The first, schema.js , is what we
have above.

The second, server.js , was invisible in Launchpad, and it is what
provides the underlying Apollo Server functionality, powered by Express
(https://expressjs.com) , the popular Node.js framework.

It is not the simplest example of an Apollo Server setup, so for the sake
of explaining I’m going to replace it with a simpler example (but feel
free to study the original after you’ve understood the basics).

Your first Apollo Server code

First, run npm install and npm start on the Launchpad code you
downloaded.

The node server we initialized previously uses nodemon
(https://nodemon.io/) to restart the server when the files change, so when
you change the code, the server is restarted with your changes applied.

Add this code in server.js :

const express = require('express')
const bodyParser = require('body-parser')
const { graphqlExpress } = require('apollo-server-express')
const { schema } = require('./schema')

const server = express()

server.use('/graphql', bodyParser.json(), graphqlExpress({ schema }))

server.listen(3000, () => {
  console.log('GraphQL listening at http://localhost:3000/graphql')
})

With just 11 lines, this is much simpler than the server set up by
Launchpad, because we removed all the things that made that code more
flexible for their needs.

Coding is 50% deciding how much flexibility you need now, versus how
important it is to have clean, easily understandable code that you
can pick up 6 months from now and easily tweak, or pass to other
developers and team members who can be productive with it in as
little time as possible.

Here’s what the code does:

We first import a few libraries we’re going to use:

express , which will power the underlying network functionality to
expose the endpoint
bodyParser , the Node body parsing middleware
graphqlExpress , the Apollo Server object for Express

const express = require('express')
const bodyParser = require('body-parser')
const { graphqlExpress } = require('apollo-server-express')

Next we import the GraphQLSchema object we created in the schema.js
file above, as schema :

const { schema } = require('./schema')

Here is some standard Express setup: we initialize the server object
(we’ll make it listen on port 3000 at the end):

const server = express()

Now we are ready to initialize Apollo Server:

graphqlExpress({ schema })

and we pass that as a callback to our endpoint to HTTP JSON requests:

server.use('/graphql', bodyParser.json(), graphqlExpress({ schema }))

All we need now is to start Express:

server.listen(3000, () => {
console.log('GraphQL listening at http://localhost:3000/graphql')
})

Add a GraphiQL endpoint


If you use GraphiQL, you can easily add a /graphiql endpoint, to
consume with the GraphiQL interactive in-browser IDE
(https://github.com/graphql/graphiql) :

// graphiqlExpress must also be imported from 'apollo-server-express'
server.use('/graphiql', graphiqlExpress({
  endpointURL: '/graphql',
  query: ``
}))

We now just need to start up the Express server:

server.listen(3000, () => {
  console.log('GraphQL listening at http://localhost:3000/graphql')
  console.log('GraphiQL listening at http://localhost:3000/graphiql')
})

You can test it by using curl again:

$ curl \
-X POST \
-H "Content-Type: application/json" \
--data '{ "query": "{ hello }" }' \
http://localhost:3000/graphql

This will give you the same result as above, where you called the
Launchpad servers:

{
"data": {
"hello": "Hello world!"
}
}
GIT AND GITHUB
GIT
Git is a free and Open Source *version control system* (VCS), a
technology used to track older versions of files, providing the ability
to roll back and maintain separate different versions at the same time

What is Git
Distributed VCS
Installing Git
OSX
Windows
Linux
Initializing a repository
Adding files to a repository
Add the file to the staging area
Commit changes
Branches
Push and pull
Add a remote
Push
Pull
Conflicts
Command Line vs Graphical Interface
GitHub Desktop
Tower
GitKraken
A good Git workflow
The feature is a quick one
The feature will take more than one commit to finish
Hotfix
Develop is unstable. Master is the latest stable release
What is Git
Git is a free and Open Source version control system (VCS), a technology
used to track older versions of files, providing the ability to roll back
and maintain separate different versions at the same time.

Git is a successor of SVN and CVS, two very popular version control
systems of the past. First developed by Linus Torvalds (the creator of
Linux), today it is the go-to system which you can’t avoid if you make use
of Open Source software.

Distributed VCS

Git is a distributed system. Many developers can clone a repository from a
central location, work independently on some portion of code, and then
commit the changes back to the central location where everybody updates.

Git makes it very easy for developers to collaborate on a codebase
simultaneously and provides tools they can use to combine all the
independent changes they make.

A very popular service that hosts Git repositories is GitHub, especially
for Open Source software, but we can also mention BitBucket, GitLab and
many others which are widely used by teams all over the world to host
their code publicly and also privately.

Installing Git
Installing Git is quite easy on all platforms:

OSX
Using Homebrew (http://brew.sh/) , run:

brew install git

Windows

Download and install Git for Windows
(https://git-for-windows.github.io/) .

Linux

Use the package manager of your distribution to install Git. E.g.

sudo apt-get install git

or

sudo yum install git

Initializing a repository
Once Git is installed on your system, you are able to access it using the
command line by typing git .

Suppose you have a clean folder. You can initialize a Git repository by
typing:

git init

What does this command do? It creates a .git folder in the folder where
you ran it. If you don’t see it, it’s because it’s a hidden folder, so it
might not be shown everywhere, unless you set your tools to show hidden
folders.

Anything related to Git in your newly created repository will be stored
in this .git directory, all except the .gitignore file, which I’ll
talk about in the next article.

Adding files to a repository


Let’s see how a file can be added to Git. Type:

echo "Test" > README.txt

to create a file. The file is now in the directory, but Git was not told
to add it to its index, as you can see from what git status tells us:
Add the file to the staging area

We need to add the file with

git add README.txt

to make it visible to Git, and be put into the staging area:

Once a file is in the staging area, you can remove it by typing:

git reset README.txt

But usually what you do once you add a file is commit it.
Commit changes
Once you have one or more changes to the staging area, you can commit them
using

git commit -m "Description of the change"

This cleans the status of the staging area:

and permanently stores the edit you made into a record store, which you
can inspect by typing git log :
Branches
When you commit a file to Git, you are committing it into the current
branch.

Git allows you to work simultaneously on multiple, separate branches,
different lines of development which represent forks of the main branch.

Git is very flexible: you can have an indefinite number of branches active
at the same time, and they can be developed independently until you want
to merge one of them into another.

Git by default creates a branch called master . It’s not special in any
way other than it’s the one created initially.

You can create a new branch called develop by typing

git branch develop


As you can see, git branch lists the branches that the repository has.
The asterisk indicates the current branch.

When creating the new branch, that branch points to the latest commit made
on the current branch. If you switch to it (using git checkout develop )
and run git log , you’ll see the same log as the branch you were on
previously.

Push and pull


In Git you always commit locally. This is a very nice benefit over SVN or
CVS, where all commits had to be immediately pushed to a server.

You work offline, do as many commits as you want, and once you’re ready
you push them to the server, so your team members, or the community if you
are pushing to GitHub, can access your latest and greatest code.

Push sends your changes.


Pull downloads remote changes to your working copy.

Before you can play with push and pull, however, you need to add a remote!
Add a remote

A remote is a clone of your repository, positioned on another machine.

I’ll do an example with GitHub. If you have an existing repository, you
can publish it on GitHub. The procedure involves creating a repository on
the platform, through their web interface; then you add that repository as
a remote, and you push your code there.

To add the remote type

git remote add origin https://github.com/YOU/REPONAME.git

An alternative approach is creating a blank repo on GitHub and
cloning it locally, in which case the remote is automatically added
for you

Push

Once you’re done, you can push your code to the remote, using the syntax
git push <remote> <branch> , for example:

git push origin master

You specify origin as the remote, because you can technically have more
than one remote. That is the name of the one we added previously, and it’s
a convention.

Pull

The same syntax applies to pulling:

git pull origin master


tells Git to fetch the master branch from origin , and merge it into
the current local branch.

Conflicts

In both push and pull there is a problem to consider: if the remote
contains changes incompatible with your set of commits, the operation will
fail.

This happens when the remote contains changes subsequent to your latest
pull, which affect lines of code you worked on as well.

In the case of push this is usually solved by pulling changes, analyzing
the conflicts, and then making a new commit that solves them.

In the case of pull, your working copy will automatically be edited with
the conflicting changes, and you need to solve them, and make a new commit
so the codebase now includes the problematic changes that were made on the
remote.

Command Line vs Graphical Interface


Up to now I talked about the command line Git application.

This was key to introducing you to how Git actually works, but in the
day-to-day operations, you are most likely to use an app that exposes
those commands to you via a nice UI, although many developers I know like
to use the CLI.

The CLI (command line) commands will still prove themselves to be
useful if you need to set up Git using SSH on a remote server, for
instance. It’s not useless knowledge at all!
That said, there are many very nice apps made to simplify the life of a
developer, which turn out to be very useful especially when you dive
deeper into the complexity of a Git repository. The easy steps are easy
everywhere, but things can quickly grow to a point where you might find
it hard to use the CLI.

Some of the most popular apps are

GitHub Desktop

https://desktop.github.com (https://desktop.github.com)

Free, at the time of writing only available for Mac and Win

Tower

https://www.git-tower.com (https://www.git-tower.com)

Paid, at the time of writing only available for Mac and Win

GitKraken

https://www.gitkraken.com (https://www.gitkraken.com)

Free / Paid depending on the needs, for Mac, Win and Linux

A good Git workflow


Different developers and teams like to use different strategies to manage
Git effectively. Here is a strategy I used on many teams and on widely
used open source projects, and I saw used by many big and small projects
as well.

The strategy is inspired by the famous A successful Git branching model
(http://nvie.com/posts/a-successful-git-branching-model/) post.

I have only 2 permanent branches: master and develop.

Those are the rules I follow in my daily routine:

When I take on a new issue, or decide to incorporate a feature, there are
2 main roads:

The feature is a quick one

The commits I’ll make won’t break the code (or at least I hope so): I can
commit on develop, or do a quick feature branch, and then merge it to
develop.

The feature will take more than one commit to finish

Maybe it will take days of commits before the feature is finished and it
gets stable again: I do a feature branch, then merge to develop once ready
(it might take weeks).

Hotfix

If something on our production server requires immediate action, like a
bugfix I need to get solved ASAP, I do a short hotfix branch, fix the
thing, test the branch locally and on a test machine, then merge it to
master and develop.

Develop is unstable. Master is the latest stable release

The develop branch will always be in a state of flux, that’s why it should
be put on a ‘freeze’ when preparing a release. The code is tested and
every workflow is checked to verify code quality, and it’s prepared for a
merge into master.
Every time develop or another hotfix branch is merged into master, I tag
it with a version number, and if on GitHub I also create a release, so
it’s easy to move back to a previous state if something goes wrong.
GITHUB
Learn all the most important pieces of GitHub that you should know as a
developer

Introduction to GitHub
Why GitHub?
GitHub issues
Social coding
Follow
Stars
Fork
Popular = better
Pull requests
Project management
Comparing commits
Webhooks and Services
Webhooks
Services
Final words

Introduction to GitHub
GitHub is a website where millions of developers gather every day to
collaborate on open source software. It’s also the place that hosts
billions of lines of code, and also a place where users of software go to
report issues they might have.

In short, it’s a platform for software developers, and it’s built around
Git.

TIP: If you don’t know about Git yet, checkout the Git guide.
As a developer you can’t avoid using GitHub daily, either to host your
code or to make use of other people’s code. This post explains you some
key concepts of GitHub, and how to use some of its features that improve
your workflow, and how to integrate other applications into your process.

Why GitHub?
Now that you know what GitHub is, you might ask why you should use it.

GitHub after all is managed by a private company, which profits from
hosting people’s code. So why should you use it instead of platforms such
as BitBucket or GitLab, which are very similar?

Besides personal preferences and technical reasons, there is one big
reason: everyone uses GitHub, so the network effect is huge.

Major codebases migrated over time to Git from other version control
systems, because of its convenience, and GitHub was historically well
positioned into (and put a lot of effort to “win”) the Open Source
community.

So today any time you look up some library, 99% of the time you will find
it on GitHub.

Apart from Open Source code, many developers also host private
repositories on GitHub because of the convenience of a unique platform.

GitHub issues
GitHub issues are one of the most popular bug trackers in the world.

It provides the owners of a repository the ability to organize issues, tag
them, and assign them to milestones.

If you open an issue on a project managed by someone else, it will stay
open until either you close it (for example if you figure out the problem
you had) or the repo owner closes it.

Sometimes you’ll get a definitive answer, other times the issue will be
left open and tagged with some information that categorizes it, and the
developer could get back to it to fix a problem or improve the codebase
with your feedback.

Most developers are not paid to support their code released on GitHub, so
you can’t expect prompt replies, but other times Open Source repositories
are published by companies that either provide services around that code,
or have commercial offerings for versions with more features, or a plugin-
based architecture, in which case they might be working on the open source
software as paid developers.

Social coding
Some years ago the GitHub logo included the “social coding” tagline.

What did this mean, and is that still relevant? It certainly is.

Follow

With GitHub you can follow developers, by going on their profile and
clicking “follow”.

You can also follow a repository, by clicking the “watch” button on a
repo.

In both cases the activity will show up in your dashboard. You don’t
follow like in Twitter, where you see what people say, but you see what
people do.

Stars
One big feature of GitHub is the ability to star a repository. This action
will include it in your “starred repositories” list, which allows you to
find things you found interesting before, and it’s also one of the most
important rating mechanisms, as the more stars a repo has, the more
important it is, and the more it will show up in search results.

Major projects can have 70,000 stars and more.

GitHub also has a trending page (https://github.com/trending) where it
features the repositories that get the most stars in a given period
of time, e.g. today, this week or this month.

Getting into those trending lists can cause other network effects like
being featured on other sites, just because you have more visibility.

Fork

The last important network indicator of a project is the number of forks.

This is key to how GitHub works, as a fork is the base of a Pull Request
(PR), a change proposal. Starting from your repository, a person forks it,
makes some changes, then creates a PR to ask you to merge those changes.

Sometimes the person that forks never asks you to merge anything, just
because they liked your code and decided to add something on top of it, or
they fixed some bug they were experiencing.

Popular = better

All in all, those are all key indicators of the popularity of a project,
and generally along with the date of the latest commit and the involvement
of the author in the issues tracker, is a useful indication of whether or
not you should rely on a library or software.

Pull requests
Earlier I introduced what a Pull Request (PR) is:

Starting from your repository, a person forks it, makes some changes, then
creates a PR to ask you to merge those changes.

A project might have hundreds of PRs; generally, the more popular a
project, the more PRs, like the React project:

Once a person submits a PR, an easy process using the GitHub interface, it
needs to be reviewed by the core maintainers of the project.

Depending on the scope of your PR (the number of changes, or the number of
things affected by your change, or the complexity of the code touched) the
maintainer might need more or less time to make sure your changes are
compatible with the project.

A project might have a clear timeline of changes they want to introduce.
The maintainer might like to keep things simple while you are introducing
a complex architecture in a PR.

This is to say that a PR does not always get accepted fast, and there is
no guarantee that it will get accepted at all.

In the example I posted above, there is a PR in the repo that dates back
1.5 years. And this happens in all the projects.

Project management
Along with issues, which are the place where developers get feedback from
users, the GitHub interface offers other features aimed at helping project
management.

One of those is Projects. It’s very new in the ecosystem and very rarely
used, but it’s a kanban board that helps organize issues and work that
needs to be done.

The Wiki is intended to be used as documentation for users. One of the
most impressive usages of the Wiki I have seen up to now is the Go
Programming Language GitHub Wiki (https://github.com/golang/go/wiki) .

Another popular project management aid is milestones. Part of the issues
page, you can assign issues to specific milestones, which could be release
targets.

Speaking of releases, GitHub enhances the Git tag functionality by
introducing releases.

A Git tag is a pointer to a specific commit, and if done consistently,
helps you roll back to a previous version of your code without referencing
specific commits.

A GitHub release builds on top of Git tags and represents a complete
release of your code, along with zip files, release notes and binary
assets that might represent a fully working version of your code end
product.
While a Git tag can be created programmatically (e.g. using the Command
Line git program), creating a GitHub release is a manual process that
happens through the GitHub UI. You basically tell GitHub to create a new
release and tell them which tag you want to apply that release to.

Comparing commits
GitHub offers many tools to work with your code.

One of the most important things you might want to do is compare one
branch to another one. Or, compare the latest commit with the version you
are currently using, to see which changes were made over time.

GitHub allows you to do this with the compare view, just add /compare
to the repo name, for example: https://github.com/facebook/react/compare
(https://github.com/facebook/react/compare)

For example here I choose to compare the latest React v15.x to the latest
v16.0.0-rc version available at the time of writing, to check what’s
changed:
The view shows you the commits made between two releases (or tags or
commits references) and the actual diff, if the number of changes is lower
than a reasonable amount.

Webhooks and Services


GitHub offers many features that help the developer workflow. One of them
is webhooks, the other one is services.

Webhooks
Webhooks allow external services to be pinged when certain events happen
in the repository, like when code is pushed, a fork is made, or a tag is
created or deleted.

When an event happens, GitHub sends a POST request to the URL we told it
to use.

A common usage of this feature is to ping a remote server to fetch the
latest code from GitHub when we push an update from our local computer.

We push to GitHub, GitHub tells the server we pushed, the server pulls
from GitHub.
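As a hedged sketch of the server side of that setup, here is a minimal Node.js endpoint using Express (the /webhook path, the port and the repository location are assumptions, and in production you would also verify GitHub’s signature header):

const express = require('express')
const { exec } = require('child_process')

const app = express()

// GitHub sends a POST request to this URL when the event fires
app.post('/webhook', (req, res) => {
  // Pull the latest code; the path is a hypothetical deploy location
  exec('git -C /var/www/myapp pull', err => {
    res.sendStatus(err ? 500 : 200)
  })
})

app.listen(3000)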

Services

GitHub services, and the new GitHub apps, are 3rd party integrations that
improve the developer experience or provide a service to you.

For example you can set up a test runner to run the tests automatically
every time you push some new commits, using TravisCI
(https://travis-ci.org/) .

You can set up Continuous Integration using CircleCI
(https://circleci.com/) .

You might create a Codeclimate (https://codeclimate.com/) integration that
analyzes the code and provides a report of technical debt and test
coverage.

Final words
GitHub is an amazing tool and service to take advantage of, a real gem in
today’s developer toolset. This tutorial will help you start, but the real
experience of working on GitHub on open source (or closed source) projects
is something not to be missed.
A GIT CHEAT SHEET
Some useful Git commands I find handy to know

Squash a series of commits and rewrite the history by writing them as one
Take a commit that lives in a separate branch and apply the same
changes on the current branch
Restore the status of a file to the last commit (revert changes)
Show a pretty graph of the commit history
Get a prettier log
Get a shorter status
Checkout a pull request locally
List the commits that involve a specific file
List the commits that involve a specific file, including the commits
content
List the repository contributors ordering by the number of commits
Undo the last commit you pushed to the remote
Pick every change you haven’t already committed and create a new
branch
Stop tracking a file, but keep it in the file system
Get the name of the branch where a specific commit was made

Squash a series of commits and rewrite the history by writing them as one

git rebase -i

this puts you in the interactive rebasing tool.

Type s to apply squash to a commit with the previous one. Repeat the s
command for as many commits as you need.

Take a commit that lives in a separate branch and apply the same changes
on the current branch
single commit:

git cherry-pick <commit>

for multiple commits:

git cherry-pick <commit1> <commit2> <commit3>

Restore the status of a file to the last commit (revert changes)

git checkout -- <filename>

Show a pretty graph of the commit history

git log --pretty=format:"%h %s" --graph

Get a prettier log

git log --pretty=format:"%h - %an, %ar : %s"

Get a shorter status

git status -s

Checkout a pull request locally

git fetch origin pull/<id>/head:<branch>

git checkout <branch>

List the commits that involve a specific file

git log --follow -- <filename>


List the commits that involve a specific file, including the commits
content

git log --follow -p -- <filename>

List the repository contributors, ordered by the number of commits

git shortlog -s -n

Undo the last commit you pushed to the remote

git revert -n HEAD

Pick every change you haven’t already committed and create a new branch

git checkout -b <branch>

Stop tracking a file, but keep it in the file system

git rm -r --cached <filename>

Get the name of the branch where a specific commit was made

git branch --contains <commit>


CSS
CSS GRID
CSS Grid is the new kid in the CSS town, and while not yet fully
supported by all browsers, it's going to be the future system for
layouts

Introduction to CSS Grid
The basics
grid-template-columns and grid-template-rows
Automatic dimensions
Different columns and rows dimensions
Adding space between the cells
Spanning items on multiple columns and/or rows
Shorthand syntax
More grid configuration
Using fractions
Using percentages and rem
Using repeat()
Specify a minimum width for a row
Positioning elements using grid-template-areas
Adding empty cells
Wrapping up

Introduction to CSS Grid


CSS Grid is a fundamentally new approach to building layouts using CSS.

Keep an eye on the CSS Grid Layout page on caniuse.com
(https://caniuse.com/#feat=css-grid) to find out which browsers currently
support it. At the time of writing, Feb 2018, all major browsers (except
IE, which will never have support for it) already support this
technology, covering 78% of all users.

CSS Grid is not a competitor to Flexbox. They interoperate and collaborate
on complex layouts, because CSS Grid works on 2 dimensions (rows AND
columns) while Flexbox works on a single dimension (rows OR columns).

Building layouts for the web has traditionally been a complicated topic.

I won’t dig into the reasons for this complexity, which is a complex topic
on its own, but you can consider yourself very lucky because nowadays you
have 2 very powerful and well supported tools at your disposal:

CSS Flexbox
CSS Grid

These 2 are the tools to build the Web layouts of the future.

Unless you need to support old browsers like IE8 and IE9, there is no
reason in 2018 to be messing with things like:

Table layouts
Floats
clearfix hacks
display: table hacks

We’ll cover Flexbox in a separate guide.

In this guide there’s all you need to know about going from a zero
knowledge of CSS Grid to being a proficient user.

The basics
The CSS Grid layout is activated on a container element (which can be a
div or any other tag) by setting display: grid .

As with flexbox, you can define some properties on the container, and some
properties on each individual item in the grid.

These properties combined will determine the final look of the grid.
The most basic container properties are grid-template-columns and
grid-template-rows .

grid-template-columns and grid-template-rows

Those properties define the number of columns and rows in the grid, and
they also set the width of each column/row.

The following snippet defines a grid with 4 columns, each 200px wide, and
with 2 rows with a 300px height each.

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
}

Here’s another example of a grid with 2 columns and 3 rows:

.container {
display: grid;
grid-template-columns: 200px 200px;
grid-template-rows: 100px 100px 100px;
}

Automatic dimensions

Many times you might have a fixed header size, a fixed footer size, and
the main content that is flexible in height, depending on its length. In
this case you can use the auto keyword:

.container {
display: grid;
grid-template-rows: 100px auto 100px;
}

Different columns and rows dimensions

In the above examples we made regular grids by using the same values for
rows and the same values for columns.

You can specify any value for each row/column, to create a lot of
different designs:

.container {
display: grid;
grid-template-columns: 100px 200px;
grid-template-rows: 100px 50px;
}
Another example:

.container {
display: grid;
grid-template-columns: 10px 100px;
grid-template-rows: 100px 10px;
}

Adding space between the cells

Unless specified, there is no space between the cells.


You can add spacing by using those properties:

grid-column-gap
grid-row-gap

or the shorthand syntax grid-gap .

Example:

.container {
display: grid;
grid-template-columns: 100px 200px;
grid-template-rows: 100px 50px;
grid-column-gap: 25px;
grid-row-gap: 25px;
}

The same layout using the shorthand:

.container {
display: grid;
grid-template-columns: 100px 200px;
grid-template-rows: 100px 50px;
grid-gap: 25px;
}

Spanning items on multiple columns and/or rows

Every cell item has the option to occupy more than just one box in the
row, and expand horizontally or vertically to get more space, while
respecting the grid proportions set in the container.
Those are the properties we’ll use for that:

grid-column-start
grid-column-end
grid-row-start
grid-row-end

Example:

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
}
.item1 {
grid-column-start: 2;
grid-column-end: 4;
}
.item6 {
grid-column-start: 3;
grid-column-end: 5;
}

The numbers correspond to the vertical line that separates each column,
starting from 1.
The same principle applies to grid-row-start and grid-row-end ,
except this time instead of taking more columns, a cell takes more rows.
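
For example, sticking with the 4×2 grid defined above, a hypothetical
.item2 can be made to span both rows like this:

.item2 {
  grid-row-start: 1;
  grid-row-end: 3;
}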

Shorthand syntax

Those properties have a shorthand syntax provided by:

grid-column
grid-row

The usage is simple, here’s how to replicate the above layout:

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
}
.item1 {
grid-column: 2 / 4;
}
.item6 {
grid-column: 3 / 5;
}
Another approach is to set the starting column/row, and set how many it
should occupy using span :

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
}
.item1 {
grid-column: 2 / span 2;
}
.item6 {
grid-column: 3 / span 2;
}

More grid configuration

Using fractions

Specifying the exact width of each column or row is not ideal in every
case.

A fraction is a unit of space.

The following example divides a grid into 3 columns with the same width,
1/3 of the available space each.

.container {
grid-template-columns: 1fr 1fr 1fr;
}

Using percentages and rem

You can also use percentages, and mix and match fractions, pixels, rem and
percentages:

.container {
grid-template-columns: 3rem 15% 1fr 2fr;
}
Using repeat()

repeat() is a special function that takes a number that indicates the
number of times a row/column will be repeated, and the length of each one.

If every column has the same width you can specify the layout using this
syntax:

.container {
grid-template-columns: repeat(4, 100px);
}

This creates 4 columns with the same width.

Or using fractions:

.container {
grid-template-columns: repeat(4, 1fr);
}

Specify a minimum width for a row

Common use case: Have a sidebar that never collapses more than a certain
amount of pixels when you resize the window.

Here’s an example where the sidebar takes 1/4 of the screen and never
takes less than 200px:

.container {
grid-template-columns: minmax(200px, 3fr) 9fr;
}

You can also set just a maximum value using the auto keyword:

.container {
grid-template-columns: minmax(auto, 50%) 9fr;
}

or just a minimum value:


.container {
grid-template-columns: minmax(100px, auto) 9fr;
}

Positioning elements using grid-template-areas

By default elements are positioned in the grid using their order in the
HTML structure.

Using grid-template-areas you can define template areas to move them
around in the grid, and also to span an item on multiple rows / columns
instead of using grid-column .

Here’s an example:

<div class="container">
<main>
...
</main>
<aside>
...
</aside>
<header>
...
</header>
<footer>
...
</footer>
</div>

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
grid-template-areas:
"header header header header"
"sidebar main main main"
"footer footer footer footer";
}
main {
grid-area: main;
}
aside {
grid-area: sidebar;
}
header {
grid-area: header;
}
footer {
grid-area: footer;
}

Despite their original order, items are placed where grid-template-areas
define, depending on the grid-area property associated with them.

Adding empty cells

You can set an empty cell using the dot . instead of an area name in
grid-template-areas :

.container {
display: grid;
grid-template-columns: 200px 200px 200px 200px;
grid-template-rows: 300px 300px;
grid-template-areas:
". header header ."
"sidebar . main main"
". footer footer .";
}

Wrapping up
These are the basics of CSS Grid. There are many things I didn’t include
in this introduction, but I wanted to make it very simple, so you can start
using this new layout system without it feeling overwhelming.
FLEXBOX
Flexbox, also called Flexible Box Module, is one of the two modern
layout systems, along with CSS Grid

Introduction
Browser support
Enable Flexbox
Container properties
Rows or columns
Vertical and horizontal alignment
Horizontal alignment
Vertical alignment
A note on baseline
Wrap
Single item properties
Moving items before / after another one
Vertical alignment
Grow or shrink an item if necessary
flex-grow
flex-shrink
flex-basis
flex

Introduction
Flexbox, also called Flexible Box Module, is one of the two modern layout
systems, along with CSS Grid.

Compared to CSS Grid (which is bi-dimensional), flexbox is a
one-dimensional layout model. It will control the layout based on a row or
on a column, but not together at the same time.
The main goal of flexbox is to allow items to fill the whole space offered
by their container, depending on some rules you set.

Unless you need to support old browsers like IE8 and IE9, Flexbox is the
tool that lets you forget about using

Table layouts
Floats
clearfix hacks
display: table hacks

Let’s dive into flexbox and become a master of it in a very short time.

Browser support
The best way to find out if you can use flexbox in your designs is to
check the caniuse.com flexbox page (https://caniuse.com/#feat=flexbox) .

At the time of writing (Feb 2018), it’s supported by 97.66% of users;
all the most important browsers have implemented it for years, so even
older browsers (including IE10+) are covered.

While we must wait a few years for users to catch up on CSS Grid, Flexbox
is an older technology and can be used right now.

Enable Flexbox
A flexbox layout is applied to a container, by setting display: flex;
or display: inline-flex; .

The content inside the container will be aligned using flexbox.
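
For example, this is all it takes to turn a container into a flex
container:

.container {
  display: flex;
}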

Container properties
Some flexbox properties apply to the container, which sets the general
rules for its items. They are:

flex-direction
justify-content
align-items
flex-wrap
flex-flow

Rows or columns

The first property we see, flex-direction , determines if the container
should align its items as rows, or as columns:

flex-direction: row places items as a row, in the text direction
(left-to-right for western countries)
flex-direction: row-reverse places items just like row but in
the opposite direction
flex-direction: column places items in a column, ordering top to
bottom
flex-direction: column-reverse places items in a column, just
like column but in the opposite direction

Vertical and horizontal alignment

Items have a default alignment, stacked from the start.


You can change this behavior using justify-content and align-items .

Horizontal alignment

justify-content has 5 possible values:

flex-start : align to the left side of the container.
flex-end : align to the right side of the container.
center : align at the center of the container.
space-between : display with equal spacing between them.
space-around : display with equal spacing around them.
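
For example, a sketch that spaces out the items of a (hypothetical)
.container element:

.container {
  display: flex;
  justify-content: space-between;
}
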
Vertical alignment

align-items has 5 possible values:

flex-start : align to the top of the container.
flex-end : align to the bottom of the container.
center : align at the vertical center of the container.
baseline : display at the baseline of the container.
stretch : items are stretched to fit the container.

A note on baseline
baseline looks similar to flex-start in this example, due to my boxes
being too simple. Check out this Codepen
(https://codepen.io/flaviocopes/pen/oExoJR) to have a more useful example,
which I forked from a Pen originally created by Martin Michálek
(https://twitter.com/machal) . As you can see there, items of different
dimensions are aligned at the baseline.

Wrap

By default items in a flexbox container are kept on a single line,
shrinking them to fit in the container.

To force the items to spread across multiple lines, use flex-wrap: wrap .
This will distribute the items according to the order set in
flex-direction . Use flex-wrap: wrap-reverse to reverse this order.

A shorthand property called flex-flow allows you to specify
flex-direction and flex-wrap in a single line, by adding the
flex-direction value first, followed by the flex-wrap value, for example:
flex-flow: row wrap .

Single item properties


We’ve seen the properties you can apply to the container. Now, items can
have a certain amount of independence and flexibility, and you can alter
their appearance using those properties:

order
align-self
flex-grow
flex-shrink
flex-basis
flex
Let’s see them in detail.

Moving items before / after another one

Items are ordered based on the order value they are assigned. By default
every item has order 0 and the appearance in the HTML determines the final
order.

You can override this property using order on each separate item. This
is a property you set on the item, not the container. You can make an item
appear before all the others by setting a negative value.
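
For example, a sketch that moves a hypothetical .first item before all
the others, regardless of its position in the HTML:

.first {
  order: -1;
}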

Vertical alignment

An item can choose to override the container align-items setting, using
align-self , which has the same 5 possible values as align-items :

flex-start : align to the top of the container.
flex-end : align to the bottom of the container.
center : align at the vertical center of the container.
baseline : display at the baseline of the container.
stretch : items are stretched to fit the container.

Grow or shrink an item if necessary

flex-grow

The default for any item is 0.

If all items are defined as 1 and one is defined as 2, the bigger element
will take the space of two “1” items.

flex-shrink

The default for any item is 1.

If all items are defined as 1 and one is defined as 3, that element
will shrink 3x as much as the other ones. When less space is available, it
will take up 3x less space.

flex-basis

If set to auto , it sizes an item according to its width or height, and
adds extra space based on the flex-grow property.

If set to 0, it does not add any extra space for the item when calculating
the layout.

If you specify a pixel number value, it will use that as the length value
(width or height, depending on whether it’s a row or a column item).

flex
This property combines the above 3 properties:

flex-grow
flex-shrink
flex-basis

and provides a shorthand syntax: flex: 0 1 auto
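
For example, flex: 1 is a common way to let an item grow to fill the
available space; it’s equivalent to writing:

.item {
  flex-grow: 1;
  flex-shrink: 1;
  flex-basis: 0%;
}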


CSS VARIABLES
Discover this powerful new feature of modern CSS

Introduction
The basics of using variables
Create variables inside any element
Variables scope
Interacting with a CSS Variable value using JavaScript
Handling invalid values
Browser support
CSS Variables are case sensitive
Math in CSS Variables
Media queries with CSS Variables
Setting a fallback value for var()

Introduction
In the last few years CSS preprocessors have had a lot of success. It was very
common for greenfield projects to start with Less or Sass. And it’s still
a very popular technology.

The main benefits of those technologies are, in my opinion:

They allow you to nest selectors
They provide an easy imports functionality
They give you variables

Modern CSS has a new powerful feature called CSS Custom Properties, also
commonly known as CSS Variables.

CSS is not a programming language like JavaScript, Python, PHP, Ruby or Go,
where variables are key to doing something useful. CSS is very limited in
what it can do, and it’s mainly a declarative syntax to tell browsers how
they should display an HTML page.
But a variable is a variable: a name that refers to a value, and variables
in CSS help reduce repetition and inconsistencies in your CSS, by
centralizing the value definitions.

And it introduces a unique feature that CSS preprocessors will never
have: you can access and change the value of a CSS Variable
programmatically using JavaScript.

The basics of using variables


A CSS Variable is defined with a special syntax, prepending two dashes to
a name ( --variable-name ), then a colon and a value. Like this:

:root {
--primary-color: yellow;
}

(more on :root later)

You can access the variable value using var() :

p {
color: var(--primary-color)
}

The variable value can be any valid CSS value, for example:

:root {
--default-padding: 30px 30px 20px 20px;
--default-color: red;
--default-background: #fff;
}

Create variables inside any element


CSS Variables can be defined inside any element. Some examples:

:root {
--default-color: red;
}

body {
--default-color: red;
}

main {
--default-color: red;
}

p {
--default-color: red;
}

span {
--default-color: red;
}

a:hover {
--default-color: red;
}

What changes in those different examples is the scope.

Variables scope
Adding variables to a selector makes them available to all the children of
it.

In the example above you saw the use of :root when defining a CSS
variable:

:root {
--primary-color: yellow;
}

:root is a CSS pseudo-class that identifies the document, so adding a
variable to :root makes it available to all the elements in the page.

It’s just like targeting the html element, except that :root has
higher specificity (takes priority).

If you add a variable inside a .container selector, it’s only going to
be available to children of .container :
.container {
--secondary-color: yellow;
}

and using it outside of this element is not going to work.

Variables can be reassigned:

:root {
--primary-color: yellow;
}

.container {
--primary-color: blue;
}

Outside .container , --primary-color will be yellow, but inside it
will be blue.

You can also assign or overwrite a variable inside the HTML using inline
styles:

<main style="--primary-color: orange;">
  <!-- ... -->
</main>

CSS Variables follow the normal CSS cascading rules, with precedence
set according to specificity

Interacting with a CSS Variable value using JavaScript
The coolest thing with CSS Variables is the ability to access and edit
them using JavaScript.

Here’s how you set a variable value using plain JavaScript:

const element = document.getElementById('my-element')
element.style.setProperty('--variable-name', 'a-value')
This code below can be used to access a variable value instead, in case
the variable is defined on :root :

const styles = getComputedStyle(document.documentElement)
const value = String(styles.getPropertyValue('--variable-name')).trim()

Or, to get the style applied to a specific element, in case of variables
set with a different scope:

const element = document.getElementById('my-element')
const styles = getComputedStyle(element)
const value = String(styles.getPropertyValue('--variable-name')).trim()
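
Putting the two together, here’s a small sketch (assuming a
--primary-color variable defined on :root , as in the earlier examples)
that toggles its value with JavaScript:

const toggleColor = () => {
  // read the current value of the variable...
  const styles = getComputedStyle(document.documentElement)
  const current = String(styles.getPropertyValue('--primary-color')).trim()
  // ...and write back the other one
  const next = current === 'yellow' ? 'blue' : 'yellow'
  document.documentElement.style.setProperty('--primary-color', next)
}

document.addEventListener('click', toggleColor)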

Handling invalid values


If a variable is assigned to a property which does not accept the variable
value, it’s considered invalid.

For example you might pass a pixel value to a position property, or a
rem value to a color property.

In this case the line is considered invalid and ignored.

Browser support
Browser support for CSS Variables at the time of writing (Mar 2018) is
very good, according to Can I Use (https://www.caniuse.com/#feat=css-
variables) .

CSS Variables are here to stay, and you can use them today if you don’t
need to support Internet Explorer and old versions of the other browsers.

If you need to support older browsers you can use libraries like PostCSS
or Myth (http://www.myth.io/) , but you’ll lose the ability to interact
with variables via JavaScript or the Browser Developer Tools, as they are
transpiled to good old variable-less CSS (and as such, you lose most of
the power of CSS Variables).

CSS Variables are case sensitive


This variable:

--width: 100px;

is different than:

--Width: 100px;

Math in CSS Variables


To do math in CSS Variables, you need to use calc() , for example:

:root {
--default-left-padding: calc(10px * 2);
}

Media queries with CSS Variables


Nothing special here. CSS Variables work normally inside media queries:

body {
--width: 500px;
}

@media screen and (max-width: 1000px) and (min-width: 700px) {
  body {
    --width: 800px;
  }
}

.container {
width: var(--width);
}

Setting a fallback value for var()


var() accepts a second parameter, which is the default fallback value
when the variable value is not set:

.container {
margin: var(--default-margin, 30px);
}
POSTCSS
Discover PostCSS, a great tool to help you write modern CSS

Introduction
Why it’s so popular
Install the PostCSS CLI
Most popular PostCSS plugins
Autoprefixer
cssnext
CSS Modules
csslint
cssnano
Other useful plugins
How is it different than Sass?

Introduction
PostCSS is a very popular tool that allows developers to write CSS
pre-processors or post-processors, called plugins. There is a huge number of
plugins that provide lots of functionalities, and sometimes the term
“PostCSS” means the tool itself, plus the plugins ecosystem.

PostCSS plugins can be run via the command line, but they are generally
invoked by task runners at build time.

The plugin-based architecture provides a common ground for all the CSS-
related operations you need.

Note: PostCSS, despite the name, is not a CSS post-processor, but it
can be used to build them, as well as other things
Why it’s so popular
PostCSS provides several features that will deeply improve your CSS, and
it integrates really well with any build tool of your choice.

Install the PostCSS CLI


Using Yarn:

yarn global add postcss-cli

or npm:

npm install -g postcss-cli

Once this is done, the postcss command will be available in your command
line.

This command for example runs the autoprefixer plugin on CSS files
contained in the css/ folder, and saves the result in the main.css file:

postcss --use autoprefixer -o main.css css/*.css

More info on the PostCSS CLI here: https://github.com/postcss/postcss-cli .
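
When PostCSS runs as part of a build tool instead of the CLI, plugins are
typically listed in a postcss.config.js file. A minimal sketch, assuming
the autoprefixer and cssnano packages are installed:

// postcss.config.js
module.exports = {
  plugins: [
    require('autoprefixer'),
    require('cssnano')
  ]
}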

Most popular PostCSS plugins


PostCSS provides a common interface to several great tools for your CSS
processing.

Here are some of the most popular plugins, to get an overview of what’s
possible to do with PostCSS.
Autoprefixer

Autoprefixer (https://github.com/postcss/autoprefixer) parses your CSS and
determines if some rules need a vendor prefix.

It does so according to the Can I Use (http://caniuse.com/) data, so you
don’t have to worry if a feature needs a prefix, or if prefixes you use
are now unneeded because they are obsolete.

You get to write cleaner CSS.

Example:

a {
display: flex;
}

gets compiled to

a {
display: -webkit-box;
display: -webkit-flex;
display: -ms-flexbox;
display: flex;
}

cssnext

https://github.com/MoOx/postcss-cssnext

This plugin is the Babel of CSS: it allows you to use modern CSS features
while it takes care of transpiling them to CSS more digestible to older
browsers:

it adds prefixes using Autoprefixer (so if you use this, no need to
use Autoprefixer directly)
it allows you to use CSS Variables
it allows you to use nesting, like in Sass
and a lot more (http://cssnext.io/features/) !

CSS Modules

CSS Modules (https://github.com/css-modules/postcss-modules) let you use
CSS Modules.

CSS Modules are not part of the CSS standard, but they are a build step
process to have scoped selectors.

csslint

Linting helps you write correct CSS and avoid errors or pitfalls. The
stylelint (http://stylelint.io/) plugin allows you to lint CSS at build
time.

cssnano

cssnano (http://cssnano.co/) minifies your CSS and makes code
optimizations to have the least amount of code delivered in production.

Other useful plugins

On the PostCSS GitHub repo there is a full list of the available plugins
(https://github.com/postcss/postcss/blob/master/docs/plugins.md) .

Some of the ones I like include:

LostGrid (https://github.com/peterramsing/lost) is a PostCSS grid system
postcss-sassy (https://github.com/andyjansson/postcss-sassy-mixins)
provides Sass-like mixins
postcss-nested (https://github.com/postcss/postcss-nested) provides
the ability to use Sass nested rules
postcss-nested-ancestors (https://github.com/toomuchdesign/postcss-
nested-ancestors) , reference any ancestor selector in nested CSS
postcss-simple-vars (https://github.com/postcss/postcss-simple-vars) ,
use Sass-like variables
PreCSS (https://github.com/jonathantneal/precss) provides you many
features of Sass, and is the closest thing to a complete Sass
replacement

How is it different than Sass?


Or any other CSS preprocessor?

The main benefit PostCSS provides over CSS preprocessors like Sass or Less
is the ability to choose your own path, and cherry-pick the features you
need, adding new capabilities at the same time. Sass or Less are “fixed”,
you get lots of features which you might or might not use, and you cannot
extend them.

The fact that you “choose your own adventure” means that you can still use
any other tool you like alongside PostCSS. You can still use Sass if this
is what you want, and use PostCSS to perform other things that Sass can’t
do, like autoprefixing or linting.

You can write your own PostCSS plugin to do anything you want.
HOW TO CENTER THINGS WITH CSS
Centering elements with CSS has always been easy for some things, hard
for others. Here is the full list of centering techniques, with modern
CSS techniques as well

Centering things in CSS is a task that is very different if you need to
center horizontally or vertically.

In this post I explain the most common scenarios and how to solve them. If
a new solution is provided by Flexbox I ignore the old techniques because
we need to move forward, and Flexbox has been supported by browsers for
years, IE10 included.

Center horizontally

Text

Text is very simple to center horizontally using the text-align
property set to center :

p {
text-align: center;
}

Blocks

The modern way to center anything that is not text is to use Flexbox:

#mysection {
display: flex;
justify-content: center;
}
Any element inside #mysection will be horizontally centered.


Here is the alternative approach if you don’t want to use Flexbox.

Anything that is not text can be centered by applying an automatic margin
to left and right, and setting the width of the element:

section {
  margin: 0 auto;
  width: 50%;
}

the above margin: 0 auto; is a shorthand for:

section {
margin-top: 0;
margin-bottom: 0;
margin-left: auto;
margin-right: auto;
}

Remember to set the item to display: block if it’s an inline element.
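
For example, a sketch centering a hypothetical inline .logo element:

.logo {
  display: block;
  margin: 0 auto;
  width: 50%;
}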

Center vertically
Traditionally this has always been a difficult task. Flexbox now provides
us a great way to do this in the simplest possible way:

#mysection {
display: flex;
align-items: center;
}

Any element inside #mysection will be vertically centered.

Center both vertically and horizontally
Techniques to center vertically and horizontally can be combined to
completely center an element in the page.

#mysection {
display: flex;
align-items: center;
justify-content: center;
}



THE CSS MARGIN PROPERTY
margin is a simple CSS property that has a shorthand syntax I keep
forgetting about, so I wrote this reference post

Introduction
Specific margin properties
Using margin with different values
1 value
2 values
3 values
4 values
Values accepted
Using auto to center elements

Introduction
The margin CSS property is commonly used in CSS to add space around an
element.

Remember:

margin adds space around an element border
padding adds space inside an element border

Specific margin properties


margin has 4 related properties that alter the margin of a single side
at once:

margin-top
margin-right
margin-bottom
margin-left
The usage of those is very simple and unambiguous, for example:

margin-left: 30px;
margin-right: 3em;

Using margin with different values


margin is a shorthand to specify multiple margins at the same time, and
depending on the number of values entered, it behaves differently.

1 value

Using a single value applies that to all the margins: top, right, bottom,
left.

margin: 20px;

2 values

Using 2 values applies the first to bottom & top, and the second to left &
right.

margin: 20px 10px;

3 values

Using 3 values applies the first to top, the second to left & right, the
third to bottom.

margin: 20px 10px 30px;

4 values
Using 4 values applies the first to top, the second to right, the third to
bottom, the fourth to left.

margin: 20px 10px 5px 0px;

So, the order is top-right-bottom-left.
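
For example, a sketch with all four values commented:

p {
  margin: 10px 20px 30px 40px; /* top right bottom left */
}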

Values accepted
margin accepts values expressed in any kind of length unit, the most
common ones are px, em, rem, but many others exist
(https://developer.mozilla.org/en-US/docs/Web/CSS/length) .

It also accepts percentage values, and the special value auto .

Using auto to center elements


auto can be used to tell the browser to automatically select a margin,
and it’s most commonly used to center an element in this way:

margin: 0 auto;

As said above, using 2 values applies the first to bottom & top, and the
second to left & right.

The modern way to center elements is to use Flexbox, and its
justify-content: center; property.

Older browsers of course do not implement Flexbox, and if you need to
support them, margin: 0 auto; is still a good choice.
DEPLOYMENT
HOST YOUR STATIC SITE ON NETLIFY
Discover Netlify, a great hosting service ideal for static sites

Introduction
Why Netlify
Advanced functionality for Static Sites

Note: this post is not sponsored by Netlify nor has affiliate links

Introduction
I recently switched my blog hosting from Firebase Hosting, operated by
Google, to Netlify (https://www.netlify.com) .

I did so while Firebase was having some issues that made my site
unreachable for a few hours, and while I waited for it to get up online
again, I created a replica of my site on Netlify.

Since this blog runs on Hugo (https://gohugo.io) , which is a Static Site
Generator, I need very little time to move the blog files around. All I
need is something that can serve HTML files, which is pretty much any
hosting on the planet.

I started looking for the best platform for a static site, and a few stood
out but I eventually tried Netlify, and I’m glad I did.


Why Netlify
There are a few things that made a great impression on me before trying
it.

First, the free plan is very generous for free or commercial projects,
with 100GB of free monthly bandwidth, and for a static site with just a
few images here and there, that’s a lot!

They include a global CDN, to make sure speed is not a concern even in
continents far away from the central location servers.

You can point your DNS nameservers to Netlify and they will handle
everything for you with a very nice interface to set up advanced needs.

They of course support having a custom domain and HTTPS.

Coming from Firebase, I expected a very programmer-friendly way to manage
deploys, but I found it even better with regards to handling each Static
Site Generator.

I use Hugo, and locally I run a server by using its built-in tool hugo
server , which handles rebuilding all the HTML every time I make a
change, and it runs an HTTP server on port 1313 by default.

To generate the static site, I have to run hugo , and this creates a
series of files in the public/ folder.

I followed this method on Firebase: I ran hugo to create the files, then
firebase deploy , configured to push my public/ folder content to the
Google servers.

In the case of Netlify however, I linked it to my private GitHub
repository that hosts the site, and every time I push to the master
branch, the one I told Netlify to sync with, Netlify initiates a new
deploy, and the changes are live within seconds.

TIP: if you use Hugo on Netlify, make sure you set HUGO_VERSION in
netlify.toml to the latest Hugo stable release, as the default
version might be old and (at the time of writing) does not support
recent features like post bundles
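
A minimal sketch of such a netlify.toml (the Hugo version here is just an
example; pin whatever the latest stable release is):

[build]
  command = "hugo"
  publish = "public"

[build.environment]
  HUGO_VERSION = "0.37"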

If you think this is nothing new, you’re right, since this is not hard to
implement on your own server (I do so on other sites not hosted on
Netlify), but here’s something new: you can preview any GitHub (or GitLab,
or BitBucket) branch / PR on a separate URL, all while your main site is
live and running with the “stable” content.

Another cool feature is the ability to perform A/B testing on 2 different
Git branches.

Advanced functionality for Static Sites


Static sites have the obvious limitation of not being able to do any
server-side operation, like the ones you’d expect from a traditional CMS
for example.

This is an advantage (fewer security issues to care about) but also a
limitation in the functionality you can implement.

A blog is nothing complex: maybe you want to add comments, and they can be
done using services like Disqus or others.

Or maybe you want to add a form, and you do so by embedding forms generated
on 3rd party applications, like Wufoo or Google Forms.

Netlify provides a suite of tools to handle Forms
(https://www.netlify.com/docs/form-handling/#spam-filtering) ,
authenticate users and even deploy and manage Lambda functions
(https://macarthur.me/posts/building-a-lambda-function-with-netlify/) .

Need to password protect a site before launching it? ✅

Need to handle CORS? ✅

Need to have 301 redirects? ✅

Need prerendering for your SPA? ✅

I just scratched the surface of the things you can do with Netlify without
reaching out to 3rd party services, and I hope I gave you a reason to try
it out.
TUTORIALS
MAKE A CMS-BASED WEBSITE WORK
OFFLINE
Progressively enhance a website when viewed on modern devices

First approach: cache-first
Introducing a service worker
Fix URL, title and back button with the History API
Fix Google Analytics
Second approach: network-first, drop the app shell
Going simpler: no partials

This case study explains how I added the capability of working offline to
the writesoftware.org (https://writesoftware.org) website, which is based
on Grav, a great PHP-based CMS for developers (https://getgrav.org) , by
introducing a set of technologies grouped under the name of Progressive
Web Apps (in particular Service Workers and the Cache API).

When we’re finished, we’ll be able to use our site on a mobile device or
on a desktop, even if offline, like shown here below (notice the “Offline”
option in the network throttling settings)
First approach: cache-first
I first approached the task by using a cache-first approach. In short,
when we intercept a fetch request in the Service Worker, we first check if
we have it cached already. If not, we fetch it from the network. This has
the advantage of making the site blazing fast when loading pages already
cached, even when online - in particular with slow networks and lie-fi
(https://developers.google.com/web/fundamentals/performance/poor-
connectivity/#what_is_lie-fi) - but also introduces some complexity in
managing updating the cache when I ship new content.

This will not be the final solution I adopt, but it’s worth going through
it for demonstration purposes.

I’ll go through a few phases:

1. I introduce a service worker and load it as part of the website JS
scripts
2. when installing the service worker, I cache the site skeleton
3. I intercept requests going to additional links, caching them

Introducing a service worker

I add the service worker in a sw.js file in the site root. This allows
it to work on all the site subfolders, and on the site home as well. The
SW at the moment is pretty basic, it just logs any network request:

self.addEventListener('fetch', (event) => {
  console.log(event.request)
})

I need to register the service worker, and I do this from a script that I
include in every page:

window.addEventListener('load', () => {
if (!navigator.serviceWorker) {
return
}

navigator.serviceWorker.register('/sw.js', {
scope: '/'
}).then(() => {
//...ok
}).catch((err) => {
console.log('registration failed', err)
})
})

If service workers are available, we register the sw.js file and the
next time I refresh the page it should be working fine.
At this point I need to do some heavy lifting on the site. First of all, I
need to come up with a way to serve only the App Shell: a basic set of
HTML + CSS and JS that will be always available and shown to the users,
even when offline.

It’s basically a stripped down version of the website, with an empty
<div class="wrapper row" id="content-wrapper"></div> element,
which we’ll fill with content later, available under the /shell route.
So the first time the user loads the site, the normal version will be
shown (full-HTML version), and the service worker is installed.

Now any other page that is clicked is intercepted by our Service Worker.
Whenever a page is loaded, we load the shell first, and then we load a
stripped-down version of the page, without the shell: just the content.

How?

We listen for the install event, which fires when the Service Worker is
installed or updated, and when this happens we initialize the cache with
the content of our shell: the basic HTML layout, plus some CSS, JS and
some external assets:

const cacheName = 'writesoftware-v1'

self.addEventListener('install', (event) => {
event.waitUntil(caches.open(cacheName).then(cache => cache.addAll([
'/shell',
'user/themes/writesoftware/favicon.ico',
'user/themes/writesoftware/css/style.css',
'user/themes/writesoftware/js/script.js',
'https://fonts.googleapis.com/css?family=Press+Start+2P',
'https://fonts.googleapis.com/css?family=Inconsolata:400,700',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/themes/prism.min.css',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/prism.min.js',
'https://cdn.jsdelivr.net/prism/1.6.0/components/prism-jsx.min.js'
])))
})

Then when we perform a fetch, we intercept requests to our pages, and
fetch the shell from the Cache instead of going to the network.

If the URL belongs to Google Analytics or ConvertKit I avoid using the
local cache, and I fetch them without using CORS, since they deny
accessing them through this method.

Then, if I’m requesting a local partial (just the content of a page, not
the full page) I just issue a fetch request to get it.

If it’s not a partial, we return the shell, which is already cached when
the Service Worker is first installed.

Once the fetch is done, I cache it.

self.addEventListener('fetch', (event) => {
  const requestUrl = new URL(event.request.url)

  if (requestUrl.href.startsWith('https://www.googletagmanager.com') ||
    requestUrl.href.startsWith('https://www.google-analytics.com') ||
    requestUrl.href.startsWith('https://assets.convertkit.com')) {
    // don't cache, and no cors
    event.respondWith(fetch(event.request.url, { mode: 'no-cors' }))
    return
  }

  event.respondWith(caches.match(event.request)
    .then((response) => {
      if (response) { return response }
      if (requestUrl.origin === location.origin) {
        // the query string is not part of pathname, so check the full URL
        if (requestUrl.href.endsWith('?partial=true')) {
          return fetch(requestUrl.href)
        }
        return caches.match('/shell')
      }
      return fetch(event.request.url)
    })
    .then(response => caches.open(cacheName).then((cache) => {
      cache.put(event.request.url, response.clone())
      return response
    }))
    .catch((error) => {
      console.error(error)
    }))
})

Now, I edit the script.js file to introduce an important feature:
whenever a link is clicked on my pages, I intercept it and I issue a
message to a Broadcast Channel.

Since Service Workers are currently only supported in Chrome, Firefox and
Opera, I can safely rely on the BroadcastChannel API
(https://developers.google.com/web/updates/2016/09/broadcastchannel) for
this.
First, I connect to the ws_navigation channel and I attach an
onmessage event handler on it. Whenever I receive an event, it’s a
communication from the Service Worker with new content to show inside the
App Shell, so I just lookup the element with id content-wrapper and I
put the partial page content into it, effectively changing the page the
user is seeing.

As soon as the Service Worker is registered I issue a message to this
channel, with a fetchPartial task and a partial page URL to fetch. This
is the content of the initial page load.

The shell is loaded immediately, since it’s always cached and soon after,
the actual content is looked up, which might be cached as well.

window.addEventListener('load', () => {
if (!navigator.serviceWorker) { return }
const channel = new BroadcastChannel('ws_navigation')

channel.onmessage = (event) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = event.data.content
}
}

navigator.serviceWorker.register('/sw.js', {
scope: '/'
}).then(() => {
channel.postMessage({
task: 'fetchPartial',
url: `${window.location.pathname}?partial=true`
})
}).catch((err) => {
console.log('SW registration failed', err)
})
})

The missing bit is handling a click on the page. When a link is clicked, I
intercept the event, halt it and I send a message to the Service Worker to
fetch the partial with that URL.

When fetching a partial, I attach a ?partial=true query to tell my
backend to only serve the content, not the shell.

window.addEventListener('load', () => {
//...

window.onclick = (e) => {
let node = e.target
while (node !== undefined && node !== null && node.localName !== 'a') {
node = node.parentNode
}
if (node !== undefined && node !== null) {
channel.postMessage({
task: 'fetchPartial',
url: `${node.href}?partial=true`
})
return false
}
return true

}
})

Now we just need to handle this event. On the Service Worker side, I
connect to the ws_navigation channel and listen for an event. I listen
for the fetchPartial message task name, although I could simply avoid
this condition check as this is the only event that’s being sent here
(messages in the Broadcast Channel API are not dispatched to the same page
that’s originating them - only between a page and a web worker).

I check if the url is cached. If so, I just send it as a response message
on the channel, and return.

If it’s not cached, I fetch it, send it back as a message to the page, and
then cache it for the next time it might be visited.

const channel = new BroadcastChannel('ws_navigation')

channel.onmessage = (event) => {
if (event.data.task === 'fetchPartial') {
caches
.match(event.data.url)
.then((response) => {
if (response) {
response.text().then((body) => {
channel.postMessage({ url: event.data.url, content: body })
})
return
}

fetch(event.data.url).then((fetchResponse) => {
const fetchResponseClone = fetchResponse.clone()
fetchResponse.text().then((body) => {
channel.postMessage({ url: event.data.url, content: body })
})
caches.open(cacheName).then((cache) => {
cache.put(event.data.url, fetchResponseClone)
})
})
})
.catch((error) => {
console.error(error)
})
}
}

We’re almost done.

Now the Service Worker is installed on the site as soon as a user visits,
and subsequent page loads are handled dynamically through the Fetch API,
not requiring a full page load. After the first visit, pages are cached
and load incredibly fast, and, more importantly, they even load when
offline!

And - all this is a progressive enhancement. Older browsers, and browsers
that don’t support service workers, simply work as normal.

Now, hijacking the browser navigation poses us a few problems:

1. the URL must change when a new page is shown. The back button should
work normally, and the browser history as well
2. the page title must change to reflect the new page title
3. we need to notify the Google Analytics API that a new page has been
loaded, to avoid missing an important metric such as the page views
per visitor.
4. the code snippets are not highlighted any more when loading new
content dynamically

Let’s solve those challenges.

Fix URL, title and back button with the History API

In the message handler in script.js , in addition to injecting the HTML
of the partial, we trigger the history.pushState() method:

channel.onmessage = (event) => {
  if (document.getElementById('content-wrapper')) {
    document.getElementById('content-wrapper').innerHTML = event.data.content
    const url = event.data.url.replace('?partial=true', '')
    history.pushState(null, null, url)
  }
}

This is working but the page title does not change in the browser UI. We
need to fetch it somehow from the page. I decided to put in the page
content partial a hidden span that keeps the page title, so we can fetch
it from the page using the DOM API, and set the document.title
property:

channel.onmessage = (event) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = event.data.content
const url = event.data.url.replace('?partial=true', '')
if (document.getElementById('browser-page-title')) {
document.title = document.getElementById('browser-page-title').innerHTML
}
history.pushState(null, null, url)
}
}

Fix Google Analytics

Google Analytics works fine out of the box, but when loading a page
dynamically, it can’t do miracles. We must use the API it provides to
inform it of a new page load. Since I’m using the Global Site Tag
( gtag.js ) tracking, I need to call:

gtag('config', 'UA-XXXXXX-XX', {'page_path': '/the-url'})

into the code above that handles changing page:

channel.onmessage = (event) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = event.data.content
const url = event.data.url.replace('?partial=true', '')
if (document.getElementById('browser-page-title')) {
document.title = document.getElementById('browser-page-title').innerHTML
}
history.pushState(null, null, url)
gtag('config', 'UA-XXXXXX-XX', {'page_path': url})
}
}
The last thing I need to fix on my page is the code snippets losing their
highlighting. I use the Prism syntax highlighter and it makes this very
easy: I just need to add a call to Prism.highlightAll() in my onmessage
handler:

channel.onmessage = (event) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = event.data.content
const url = event.data.url.replace('?partial=true', '')
if (document.getElementById('browser-page-title')) {
document.title = document.getElementById('browser-page-title').innerHTML
}
history.pushState(null, null, url)
gtag('config', 'UA-XXXXXX-XX', {'page_path': url})
Prism.highlightAll()
}
}

The full code of script.js is:

window.addEventListener('load', () => {
if (!navigator.serviceWorker) { return }
const channel = new BroadcastChannel('ws_navigation')

channel.onmessage = (event) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = event.data.content
const url = event.data.url.replace('?partial=true', '')
if (document.getElementById('browser-page-title')) {
document.title = document.getElementById('browser-page-title').innerHTML
}
history.pushState(null, null, url)
gtag('config', 'UA-1739509-49', {'page_path': url})
Prism.highlightAll()
}
}

navigator.serviceWorker.register('/sw.js', {
scope: '/'
}).then(() => {
channel.postMessage({
task: 'fetchPartial',
url: `${window.location.pathname}?partial=true`
})
}).catch((err) => {
console.log('SW registration failed', err)
})

window.onclick = (e) => {
let node = e.target
while (node !== undefined && node !== null && node.localName !== 'a') {
node = node.parentNode
}
if (node !== undefined && node !== null) {
channel.postMessage({
task: 'fetchPartial',
url: `${node.href}?partial=true`
})
return false
}
return true
}
})

and sw.js :

const cacheName = 'writesoftware-v1'

self.addEventListener('install', (event) => {
event.waitUntil(caches.open(cacheName).then(cache => cache.addAll([
'/shell',
'user/themes/writesoftware/favicon.ico',
'user/themes/writesoftware/css/style.css',
'user/themes/writesoftware/js/script.js',
'user/themes/writesoftware/img/offline.gif',
'https://fonts.googleapis.com/css?family=Press+Start+2P',
'https://fonts.googleapis.com/css?family=Inconsolata:400,700',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/themes/prism.min.css',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/prism.min.js',
'https://cdn.jsdelivr.net/prism/1.6.0/components/prism-jsx.min.js'
])))
})

self.addEventListener('fetch', (event) => {
const requestUrl = new URL(event.request.url)

if (requestUrl.href.startsWith('https://www.googletagmanager.com') ||
requestUrl.href.startsWith('https://www.google-analytics.com') ||
requestUrl.href.startsWith('https://assets.convertkit.com')) {
// don't cache, and no cors
event.respondWith(fetch(event.request.url, { mode: 'no-cors' }))
return
}

event.respondWith(caches.match(event.request)
.then((response) => {
if (response) { return response }
if (requestUrl.origin === location.origin) {
// the query string is not part of pathname, so check the full URL
if (requestUrl.href.endsWith('?partial=true')) {
return fetch(requestUrl.href)
}
return caches.match('/shell')
}
return fetch(event.request.url)
})

.then(response => caches.open(cacheName).then((cache) => {
if (response) {
cache.put(event.request.url, response.clone())
}
return response
}))
.catch((error) => {
console.error(error)
}))
})

const channel = new BroadcastChannel('ws_navigation')

channel.onmessage = (event) => {
if (event.data.task === 'fetchPartial') {
caches
.match(event.data.url)
.then((response) => {
if (response) {
response.text().then((body) => {
channel.postMessage({ url: event.data.url, content: body })
})
return
}

fetch(event.data.url).then((fetchResponse) => {
const fetchResponseClone = fetchResponse.clone()
fetchResponse.text().then((body) => {
channel.postMessage({ url: event.data.url, content: body })
})

caches.open(cacheName).then((cache) => {
cache.put(event.data.url, fetchResponseClone)
})
})
})
.catch((error) => {
console.error(error)
})
}
}

Second approach: network-first, drop the app shell


While the first approach gave us a fully working app, I was a bit
skeptical and worried about having a copy of a page cached for too long on
the client, so I decided on a network-first approach: when a user loads a
page it is fetched from the network first. If the network call fails for
some reason, I lookup the page in the cache to see if we got it cached,
otherwise I show the user a GIF if it’s totally offline, or another GIF if
the page does not exist (I can reach it but I got a 404 error).
As soon as we get a page we cache it (not checking if we cached it
previously or not, we just store the latest version).

As an experiment I also got rid of the app shell altogether, because in my
case I had no intentions of creating an installable app yet, as without an
up-to-date Android device I could not really test-drive it and I preferred
to avoid throwing out stuff without proper testing.

To do this I just stripped the app shell from the install Service Worker
event and I relied on Service Workers and the Cache API to just deliver
the plain pages of the site, without managing partial updates. I also
dropped the /shell fetch hijacking when loading a full page, so on the
first page load there is no delay, but we still load partials when
navigating to other pages later.

I still use script.js and sw.js to host the code, with script.js
being the file that initializes the Service Worker, and also intercepts
clicks on the client side.

Here’s script.js :

const OFFLINE_GIF = '/user/themes/writesoftware/img/offline.gif'

const fetchPartial = (url) => {
fetch(`${url}?partial=true`)
.then((response) => {
response.text().then((body) => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = body
if (document.getElementById('browser-page-title')) {
document.title = document.getElementById('browser-page-title').innerHTML
}
history.pushState(null, null, url)
gtag('config', 'UA-XXXXXX-XX', { page_path: url })
Prism.highlightAll()
}
})
})
.catch(() => {
if (document.getElementById('content-wrapper')) {
document.getElementById('content-wrapper').innerHTML = `<center><h2>Offline</h2><img src="${OFFLINE_GIF}" /></center>`
}
})
}

window.addEventListener('load', () => {
if (!navigator.serviceWorker) { return }

navigator.serviceWorker.register('/sw.js', {
scope: '/'
}).then(() => {
fetchPartial(window.location.pathname)
}).catch((err) => {
console.log('SW registration failed', err)
})

window.onclick = (e) => {
let node = e.target
while (node !== undefined && node !== null && node.localName !== 'a') {
node = node.parentNode
}
if (node !== undefined && node !== null) {
fetchPartial(node.href)
return false
}
return true
}
})

and here’s sw.js :

const CACHE_NAME = 'writesoftware-v1'
const OFFLINE_GIF = '/user/themes/writesoftware/img/offline.gif'
const PAGENOTFOUND_GIF = '/user/themes/writesoftware/img/pagenotfound.gif'

self.addEventListener('install', (event) => {
event.waitUntil(caches.open(CACHE_NAME).then(cache => cache.addAll([
'/user/themes/writesoftware/favicon.ico',
'/user/themes/writesoftware/css/style.css',
'/user/themes/writesoftware/js/script.js',
'/user/themes/writesoftware/img/offline.gif',
'/user/themes/writesoftware/img/pagenotfound.gif',
'https://fonts.googleapis.com/css?family=Press+Start+2P',
'https://fonts.googleapis.com/css?family=Inconsolata:400,700',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/themes/prism.min.css',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/prism.min.js',
'https://cdn.jsdelivr.net/prism/1.6.0/components/prism-jsx.min.js'
])))
})

self.addEventListener('fetch', (event) => {
if (event.request.method !== 'GET') return
if (event.request.headers.get('accept').indexOf('text/html') === -1) return

const requestUrl = new URL(event.request.url)

let options = {}

if (requestUrl.href.startsWith('https://www.googletagmanager.com') ||
requestUrl.href.startsWith('https://www.google-analytics.com') ||
requestUrl.href.startsWith('https://assets.convertkit.com')) {
// no cors
options = { mode: 'no-cors' }
}

event.respondWith(fetch(event.request, options)
.then((response) => {
if (response.status === 404) {
return fetch(PAGENOTFOUND_GIF)
}
const resClone = response.clone()
return caches.open(CACHE_NAME).then((cache) => {
cache.put(event.request.url, response)
return resClone
})
})
.catch(() => caches.open(CACHE_NAME).then(cache => cache.match(event.request.url)
.then((response) => {
if (response) {
return response
}
return fetch(OFFLINE_GIF)
})
.catch(() => fetch(OFFLINE_GIF)))))
})

Going simpler: no partials


As an experiment I dropped the click interceptor that fetches partials,
and I relied on Service Workers and the Cache API to just deliver the
plain pages of the site, without managing partial updates:

script.js :

window.addEventListener('load', () => {
if (!navigator.serviceWorker) { return }
navigator.serviceWorker.register('/sw.js', {
scope: '/'
}).catch((err) => {
console.log('SW registration failed', err)
})
})

sw.js :

const CACHE_NAME = 'writesoftware-v1'
const OFFLINE_GIF = '/user/themes/writesoftware/img/offline.gif'
const PAGENOTFOUND_GIF = '/user/themes/writesoftware/img/pagenotfound.gif'

self.addEventListener('install', (event) => {
event.waitUntil(caches.open(CACHE_NAME).then(cache => cache.addAll([
'/user/themes/writesoftware/favicon.ico',
'/user/themes/writesoftware/css/style.css',
'/user/themes/writesoftware/js/script.js',
'/user/themes/writesoftware/img/offline.gif',
'/user/themes/writesoftware/img/pagenotfound.gif',
'https://fonts.googleapis.com/css?family=Press+Start+2P',
'https://fonts.googleapis.com/css?family=Inconsolata:400,700',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/themes/prism.min.css',
'https://cdnjs.cloudflare.com/ajax/libs/prism/1.6.0/prism.min.js',
'https://cdn.jsdelivr.net/prism/1.6.0/components/prism-jsx.min.js'
])))
})

self.addEventListener('fetch', (event) => {
if (event.request.method !== 'GET') return
if (event.request.headers.get('accept').indexOf('text/html') === -1) return

const requestUrl = new URL(event.request.url)

let options = {}

if (requestUrl.href.startsWith('https://www.googletagmanager.com') ||
requestUrl.href.startsWith('https://www.google-analytics.com') ||
requestUrl.href.startsWith('https://assets.convertkit.com')) {
// no cors
options = { mode: 'no-cors' }
}

event.respondWith(fetch(event.request, options)
.then((response) => {
if (response.status === 404) {
return fetch(PAGENOTFOUND_GIF)
}
const resClone = response.clone()
return caches.open(CACHE_NAME).then((cache) => {
cache.put(event.request.url, response)
return resClone
})
})
.catch(() => caches.open(CACHE_NAME).then(cache => cache.match(event.request.url)
.then((response) => {
return response || fetch(OFFLINE_GIF)
})
.catch(() => fetch(OFFLINE_GIF)))))
})

I think this is the bare bones example of adding offline capabilities to a
website, still keeping things simple. Any kind of website can add such a
Service Worker without too much complexity.

In the end for me this approach was not enough to be viable, and I ended
up implementing the version with fetch partial updates.
TUTORIAL: CREATE A SPREADSHEET
USING REACT
How to build a simple Google Sheets or Excel clone using React

Related content
First steps
Build a simple spreadsheet
Introducing formulas
Improve performance
Saving the content of the table
Wrapping up

Creating a stripped down version of a spreadsheet like Google Sheets is
really a good example of showing many of the capabilities of React.

At the end of this tutorial you’ll have a working, configurable, reusable
spreadsheet React Component to power all your calculations.

Related content
This tutorial covers the following topics which have dedicated guides on
Write Software:

React
JSX
ES2015

You might want to check them out to get an introduction to these topics if
you’re new to them.

First steps
The code of this tutorial is available on GitHub at
https://github.com/flaviocopes/react-spreadsheet-component
(https://github.com/flaviocopes/react-spreadsheet-component)

First thing, we’re going to detail what we’re going to build. We’ll create
a Table component that will have a fixed number of rows. Each row has the
same number of columns, and into each column we’ll load a Cell component.

We’ll be able to select any cell and type any value into it. In addition,
we’ll be able to execute formulas on those cells, effectively creating a
working spreadsheet that won’t miss anything from Excel or Google Sheets
</sarcasm>.

Here’s a little demo gif:

The tutorial first dives into the basic building blocks of the
spreadsheet, and then goes into adding more advanced functionality such
as:

adding the ability to calculate formulas
optimizing performance
saving the content to local storage

Build a simple spreadsheet


If you don’t have create-react-app installed already, this is a good
time to do that:
yarn global add create-react-app
// or npm install -g create-react-app

Then let’s start with

create-react-app spreadsheet
cd spreadsheet
yarn start // or npm start

and the React app will launch on localhost:3000 :

This procedure creates a number of files in the spreadsheet folder.

The one we should focus on now is App.js. Out of the box, this file
contains the following code:

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <h1 className="App-title">Welcome to React</h1>
        </header>
        <p className="App-intro">
          To get started, edit <code>src/App.js</code> and save to reload.
        </p>
      </div>
    );
  }
}

export default App;

Let’s wipe out the bulk of this code and just replace it with a simple
render of the Table component. We pass it 2 properties: x , the number of
columns, and y , the number of rows.

import React from 'react'
import Table from './components/Table'

const App = () =>
  (<div style={{ width: 'max-content' }}>
    <Table x={4} y={4} />
  </div>)

export default App

Here’s the Table component, which we store in components/Table.js :

import React from 'react'
import PropTypes from 'prop-types'
import Row from './Row'

export default class Table extends React.Component {
  constructor(props) {
    super(props)

    this.state = {
      data: {},
    }
  }

  handleChangedCell = ({ x, y }, value) => {
    const modifiedData = Object.assign({}, this.state.data)
    if (!modifiedData[y]) modifiedData[y] = {}
    modifiedData[y][x] = value
    this.setState({ data: modifiedData })
  }

  updateCells = () => {
    this.forceUpdate()
  }

  render() {
    const rows = []

    for (let y = 0; y < this.props.y + 1; y += 1) {
      const rowData = this.state.data[y] || {}
      rows.push(
        <Row
          handleChangedCell={this.handleChangedCell}
          updateCells={this.updateCells}
          key={y}
          y={y}
          x={this.props.x + 1}
          rowData={rowData}
        />,
      )
    }
    return (
      <div>
        {rows}
      </div>
    )
  }
}

Table.propTypes = {
  x: PropTypes.number.isRequired,
  y: PropTypes.number.isRequired,
}

The Table component manages its own state. Its render() method creates a
list of Row components, and passes to each one the part of the state that
concerns it: the row data. The Row component will in turn pass this data
down to multiple Cell components, which we’ll introduce in a minute.

We use the y row number as the key property, which is mandatory to
distinguish multiple rows.

We pass to each Row component the handleChangedCell method as a prop. When
a row calls this method, it passes an (x, y) tuple identifying the cell,
along with the new value that’s been inserted into it, and we update the
state accordingly.

Let’s examine the Row component, stored in components/Row.js :

import React from 'react'
import PropTypes from 'prop-types'
import Cell from './Cell'

const Row = (props) => {
  const cells = []
  const y = props.y
  for (let x = 0; x < props.x; x += 1) {
    cells.push(
      <Cell
        key={`${x}-${y}`}
        y={y}
        x={x}
        onChangedValue={props.handleChangedCell}
        updateCells={props.updateCells}
        value={props.rowData[x] || ''}
      />,
    )
  }
  return (
    <div>
      {cells}
    </div>
  )
}

Row.propTypes = {
  handleChangedCell: PropTypes.func.isRequired,
  updateCells: PropTypes.func.isRequired,
  x: PropTypes.number.isRequired,
  y: PropTypes.number.isRequired,
  rowData: PropTypes.shape({
    string: PropTypes.string,
  }).isRequired,
}

export default Row

Same as the Table component, here we’re building an array of Cell
components and we put it in the cells variable, which the component
renders.

We pass the x, y coordinates combination as the key, and we pass down as a
prop the current state of that cell value using
value={props.rowData[x] || ''} , defaulting the state to an empty string
if not set.

Let’s now dive into the Cell, the core (and last) component of our
spreadsheet!

import React from 'react'
import PropTypes from 'prop-types'

/**
* Cell represents the atomic element of a table
*/
export default class Cell extends React.Component {
constructor(props) {
super(props)
this.state = {
editing: false,
value: props.value,
}
this.display = this.determineDisplay(
{ x: props.x, y: props.y },
props.value
)
this.timer = 0
this.delay = 200
this.prevent = false
}

/**
* Add listener to the `unselectAll` event used to broadcast the
* unselect all event
*/
componentDidMount() {
window.document.addEventListener('unselectAll',
this.handleUnselectAll)
}
/**
* Before updating, execute the formula on the Cell value to
* calculate the `display` value. Especially useful when a
* redraw is pushed upon this cell when editing another cell
* that this might depend upon
*/
componentWillUpdate() {
this.display = this.determineDisplay(
{ x: this.props.x, y: this.props.y }, this.state.value)
}

/**
* Remove the `unselectAll` event listener added in
* `componentDidMount()`
*/
componentWillUnmount() {
window.document.removeEventListener('unselectAll',
this.handleUnselectAll)
}

/**
* When a Cell value changes, re-determine the display value
* by calling the formula calculation
*/
onChange = (e) => {
this.setState({ value: e.target.value })
this.display = this.determineDisplay(
{ x: this.props.x, y: this.props.y }, e.target.value)
}

/**
* Handle pressing a key when the Cell is an input element
*/
onKeyPressOnInput = (e) => {
if (e.key === 'Enter') {
this.hasNewValue(e.target.value)
}
}

/**
* Handle pressing a key when the Cell is a span element,
* not yet in editing mode
*/
onKeyPressOnSpan = () => {
  if (!this.state.editing) {
    this.setState({ editing: true })
  }
}

/**
* Handle moving away from a cell, stores the new value
*/
onBlur = (e) => {
this.hasNewValue(e.target.value)
}

/**
* Used by `componentDid(Un)Mount`, handles the `unselectAll`
* event response
*/
handleUnselectAll = () => {
if (this.state.selected || this.state.editing) {
this.setState({ selected: false, editing: false })
}
}

/**
* Called by the `onBlur` or `onKeyPressOnInput` event handlers,
* it escalates the value changed event, and restore the editing
* state to `false`.
*/
hasNewValue = (value) => {
this.props.onChangedValue(
{
x: this.props.x,
y: this.props.y,
},
value,
)
this.setState({ editing: false })
}

/**
* Emits the `unselectAll` event, used to tell all the other
* cells to unselect
*/
emitUnselectAllEvent = () => {
const unselectAllEvent = new Event('unselectAll')
window.document.dispatchEvent(unselectAllEvent)
}

/**
* Handle clicking a Cell.
*/
clicked = () => {
// Prevent click and double click to conflict
this.timer = setTimeout(() => {
if (!this.prevent) {
// Unselect all the other cells and set the current
// Cell state to `selected`
this.emitUnselectAllEvent()
this.setState({ selected: true })
}
this.prevent = false
}, this.delay)
}

/**
* Handle doubleclicking a Cell.
*/
doubleClicked = () => {
  // Prevent click and double click to conflict
  clearTimeout(this.timer)
  this.prevent = true

  // Unselect all the other cells and set the current
  // Cell state to `selected` & `editing`
  this.emitUnselectAllEvent()
  this.setState({ editing: true, selected: true })
}

determineDisplay = ({ x, y }, value) => {
return value
}

/**
* Calculates a cell's CSS values
*/
calculateCss = () => {
const css = {
width: '80px',
padding: '4px',
margin: '0',
height: '25px',
boxSizing: 'border-box',
position: 'relative',
display: 'inline-block',
color: 'black',
border: '1px solid #cacaca',
textAlign: 'left',
verticalAlign: 'top',
fontSize: '14px',
lineHeight: '15px',
overflow: 'hidden',
fontFamily: 'Calibri, \'Segoe UI\', Thonburi, Arial, Verdana, sans-serif',
}

if (this.props.x === 0 || this.props.y === 0) {
  css.textAlign = 'center'
  css.backgroundColor = '#f0f0f0'
  css.fontWeight = 'bold'
}

return css
}

render() {
const css = this.calculateCss()

// column 0
if (this.props.x === 0) {
return (
  <span style={css}>
    {this.props.y}
  </span>
)
}

// row 0
if (this.props.y === 0) {
const alpha = ' abcdefghijklmnopqrstuvwxyz'.split('')
return (
<span
onKeyPress={this.onKeyPressOnSpan}
style={css}
role="presentation">
{alpha[this.props.x]}
</span>
)
}

if (this.state.selected) {
css.outlineColor = 'lightblue'
css.outlineStyle = 'dotted'
}

if (this.state.editing) {
return (
<input
style={css}
type="text"
onBlur={this.onBlur}
onKeyPress={this.onKeyPressOnInput}
value={this.state.value}
onChange={this.onChange}
autoFocus
/>
)
}
return (
<span
onClick={e => this.clicked(e)}
onDoubleClick={e => this.doubleClicked(e)}
style={css}
role="presentation"
>
{this.display}
</span>
)
}
}

Cell.propTypes = {
onChangedValue: PropTypes.func.isRequired,
x: PropTypes.number.isRequired,
y: PropTypes.number.isRequired,
value: PropTypes.string.isRequired,
}

Quite a bit to discuss here! But first, you should finally be able to see
something in your browser, and it already seems to work quite well:

It’s not much, but we can already edit the cells’ content.

Let’s examine the code.

In the constructor we set some internal state properties that we’ll need
later, and we also initialize the this.display property, based upon
props.value , which is used in the render() method. Why do we do this?
Because later, when we add the option to store the table data in local
storage, we’ll be able to initialize a cell with a value instead of an
empty value.

At the moment, props.value will always have an empty value, so all the
cells are initialized empty.

When a Cell value changes, I escalate the updateCells event to Table,
which forces an update of the whole component.

When a Cell is selected, I trigger the selected state, which I use to
add some CSS attributes (an outline). This could have been left to CSS,
but I decided to factor it in as a state property so I could optionally
control the selection of multiple cells later.

When a Cell is selected, it emits an unselectAll plain JS event, which
allows sibling cells to communicate. It is also instrumental in clearing
the selection across multiple table instances on the page, which I
considered good behaviour and a natural UX feature.
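
A nice consequence: since every Cell listens for unselectAll on
window.document , anything on the page can clear every selection by
dispatching that same event. A tiny hypothetical example, not part of the
tutorial code:

// Clear the selection in every Table instance on the page
window.document.dispatchEvent(new Event('unselectAll'))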

A Cell can be clicked or double-clicked, and I introduced a timer to
prevent conflicts between these 2 events. Clicking a cell selects it,
while double-clicking allows editing by switching the span normally used
to render the table into an input field, into which you can enter any
value.

So, wrapping up: a Table renders a list of y Row components, which in
turn render x Cell components each.

In the current implementation, Row is not much more than a proxy; it is
responsible for creating the Cells that compose a row, but aside from
that it just passes events up the hierarchy to the Table via props .

Introducing formulas
The spreadsheet at this point is nice and all, but the real power comes
from being able to execute formulas: sum values, reference other cells,
and so on.

I decided to use this pretty nice library that handles Excel formulas:
https://github.com/handsontable/formula-parser, so we can get full
compatibility with the most popular formulas for free, without having to
code them ourselves.

The library seems quite actively developed and has a good test suite, so
we can run the tests ourselves to check if something goes wrong.

We can run yarn add hot-formula-parser and then restart our app with
yarn start .

We did the first app dissection from top to bottom; let’s now start from
the bottom.

In the Cell component, when determining the value of an item, we run the
determineDisplay() method:

determineDisplay = ({ x, y }, value) => {
  return value
}

It’s very simple, because it’s missing the bulk of the functionality.
Determining the value is simple if it’s just a plain value, but it’s more
complicated if we need to calculate the value based on a formula. A
formula (in our little spreadsheet) always starts with the equal sign = ,
so whenever we find it as the first character of a value, we run the
formula computation on it, by calling the executeFormula() method passed
as one of the props of Cell:

export default class Cell extends React.Component {
  //...

  determineDisplay = ({ x, y }, value) => {
    if (value.slice(0, 1) === '=') {
      const res = this.props.executeFormula({ x, y }, value.slice(1))
      if (res.error !== null) {
        return 'INVALID'
      }
      return res.result
    }
    return value
  }

  //...
}

Cell.propTypes = {
  //...
  executeFormula: PropTypes.func.isRequired,
  //...
}

We get executeFormula() from our parent component, so let’s see it in
Row:

const Row = (props) => {
  //...
  cells.push(
    <Cell
      key={`${x}-${y}`}
      y={y}
      x={x}
      onChangedValue={props.handleChangedCell}
      updateCells={props.updateCells}
      value={props.rowData[x] || ''}
      executeFormula={props.executeFormula}
    />,
  )
  //...
}

Row.propTypes = {
  //...
  executeFormula: PropTypes.func.isRequired,
  //...
}

We’re simply passing it down from the component props to its children.
Nothing complicated here. The meat of the functionality is all moved up
to Table, then! This is because, to do anything, we must know the whole
state of the table: we can’t just run a formula on a cell or on a row,
since any formula might reference any other cell. So here is how we’ll
edit Table to fit formulas:

//...
import { Parser as FormulaParser } from 'hot-formula-parser'
//...

export default class Table extends React.Component {
  constructor(props) {
    //...
    this.parser = new FormulaParser()

    // When a formula contains a cell value, this event lets us
    // hook and return an error value if necessary
    this.parser.on('callCellValue', (cellCoord, done) => {
      const x = cellCoord.column.index + 1
      const y = cellCoord.row.index + 1

      // Check if I have that coordinate tuple in the table range
      if (x > this.props.x || y > this.props.y) {
        throw this.parser.Error(this.parser.ERROR_NOT_AVAILABLE)
      }

      // Check that the cell is not self-referencing
      if (this.parser.cell.x === x && this.parser.cell.y === y) {
        throw this.parser.Error(this.parser.ERROR_REF)
      }

      if (!this.state.data[y] || !this.state.data[y][x]) {
        return done('')
      }

      // All fine
      return done(this.state.data[y][x])
    })

    // When a formula contains a range value, this event lets us
    // hook and return an error value if necessary
    this.parser.on('callRangeValue',
      (startCellCoord, endCellCoord, done) => {
        const sx = startCellCoord.column.index + 1
        const sy = startCellCoord.row.index + 1
        const ex = endCellCoord.column.index + 1
        const ey = endCellCoord.row.index + 1
        const fragment = []

        for (let y = sy; y <= ey; y += 1) {
          const row = this.state.data[y]
          if (!row) {
            continue
          }

          const colFragment = []

          for (let x = sx; x <= ex; x += 1) {
            let value = row[x]
            if (!value) {
              value = ''
            }

            if (value.slice(0, 1) === '=') {
              const res = this.executeFormula({ x, y }, value.slice(1))
              if (res.error) {
                throw this.parser.Error(res.error)
              }
              value = res.result
            }

            colFragment.push(value)
          }
          fragment.push(colFragment)
        }

        if (fragment) {
          done(fragment)
        }
      })
  }

  //...

  /**
   * Executes the formula on the `value` using the
   * FormulaParser object
   */
  executeFormula = (cell, value) => {
    this.parser.cell = cell
    let res = this.parser.parse(value)
    if (res.error != null) {
      return res // tip: returning `res.error` shows more details
    }
    if (res.result.toString() === '') {
      return res
    }
    if (res.result.toString().slice(0, 1) === '=') {
      // formula points to formula
      res = this.executeFormula(cell, res.result.slice(1))
    }

    return res
  }

  render() {
    //...
    <Row
      handleChangedCell={this.handleChangedCell}
      executeFormula={this.executeFormula}
      updateCells={this.updateCells}
      key={y}
      y={y}
      x={this.props.x + 1}
      rowData={rowData}
    />,
    //...
  }
}

In the constructor we initialize the formula parser. We pass the
executeFormula() method down to each Row, and when it’s called we invoke
our parser. The parser emits 2 events that we use to hook into the table
state, to determine the value of specific cells ( callCellValue ) and the
values of a range of cells ( callRangeValue ), e.g. =SUM(A1:A5) .

The Table.executeFormula() method builds a recursive call around the
parser: if a cell has an identity function pointing to another identity
function, it resolves the functions until it gets a plain value. In this
way every cell of the table can be linked to any other, but an INVALID
value is generated when a circular reference is detected, because the
library’s callCellValue event lets me hook into the Table state and
raise an error if 1) the formula references a value outside the table or
2) the cell is self-referencing.

The inner workings of each event responder are a bit tricky to grasp, but
don’t worry about the details; focus on how it works overall.
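
If the event flow feels abstract, it can help to play with the parser in
isolation, outside React. A quick sketch using the same hot-formula-parser
package we installed ( parse() returns an { error, result } object, which
is exactly what executeFormula() relies on):

const { Parser } = require('hot-formula-parser')

const parser = new Parser()

console.log(parser.parse('SUM(1, 2, 3)'))
// { error: null, result: 6 }

console.log(parser.parse('UNKNOWNFN()'))
// an error result (e.g. '#NAME?') for a function the parser doesn't know
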
Improve performance
The updateCells prop passed down from Table to Cell is responsible for
rerendering all the cells in the table, and it’s triggered when a Cell
changes its content.

This is because another Cell might reference ours in a formula, and
multiple Cells could need to be updated because of a change in another
Cell.

At the moment we’re blindly updating all cells, which means a lot of
rerendering. Imagine a big table: the amount of computation needed to
rerender it could be bad enough to cause problems.

We need to do something: implement shouldComponentUpdate() in Cell.

Cell.shouldComponentUpdate() is key to avoiding performance penalties
when re-rendering the whole table:

//...

/**
 * Performance lifesaver as the cell not touched by a change can
 * decide to avoid a rerender
 */
shouldComponentUpdate(nextProps, nextState) {
  // Has a formula value? Could be affected by any change. Update
  if (this.state.value !== '' &&
      this.state.value.slice(0, 1) === '=') {
    return true
  }

  // Its own state values changed? Update
  // Its own value prop changed? Update
  if (nextState.value !== this.state.value ||
      nextState.editing !== this.state.editing ||
      nextState.selected !== this.state.selected ||
      nextProps.value !== this.props.value) {
    return true
  }

  return false
}

//...

What this method does is: if there is a value and that value is a
formula, yes, we need to update, as our formula might depend on some
other cell’s value.

Then we check whether the cell’s own state or value prop changed (for
example because we’re editing it), in which case, yes, we need to update
the component.

In all other cases, no, we can leave the component as-is and not rerender
it.

In short, we only update formula cells and the cell being modified.

We could improve this by keeping a graph of formula dependencies, which
would trigger ad-hoc re-rendering of only the cells that depend on the
one modified. That optimization can be a lifesaver with large amounts of
data, but the bookkeeping might cause delays itself, so I ended up with
this basic implementation.
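
To give an idea of what that could look like, here is a minimal sketch of
building such a dependency map. The buildDependents() helper and the
cell-reference regex are names I’m introducing for illustration; they are
not part of the tutorial code:

// Sketch: map each referenced cell (e.g. 'B2') to the coordinates of the
// cells whose formulas mention it, so only those dependents get rerendered
const CELL_REF = /[A-Z]+[0-9]+/g

function buildDependents(data) {
  const dependents = {}
  Object.keys(data).forEach((y) => {
    Object.keys(data[y]).forEach((x) => {
      const value = data[y][x]
      if (typeof value !== 'string' || value.slice(0, 1) !== '=') return
      const refs = value.toUpperCase().match(CELL_REF) || []
      refs.forEach((ref) => {
        if (!dependents[ref]) dependents[ref] = []
        dependents[ref].push({ x: Number(x), y: Number(y) })
      })
    })
  })
  return dependents
}

Note that this naive regex doesn’t expand ranges like A1:A5 into
individual cells, which is part of the bookkeeping that made me skip the
optimization.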

Saving the content of the table


The last thing I want to introduce in this tutorial is how to save the
data we have in the table to localStorage, so that when we reload the
page, the data is still there. We can close the browser, reopen it next
week, and the data will still be there.

How do we do that?

We need to hook into the handleChangedCell() method of Table, and change
it from:

handleChangedCell = ({ x, y }, value) => {
  const modifiedData = Object.assign({}, this.state.data)
  if (!modifiedData[y]) modifiedData[y] = {}
  modifiedData[y][x] = value
  this.setState({ data: modifiedData })
}

to:

handleChangedCell = ({ x, y }, value) => {
  const modifiedData = Object.assign({}, this.state.data)
  if (!modifiedData[y]) modifiedData[y] = {}
  modifiedData[y][x] = value
  this.setState({ data: modifiedData })

  if (window && window.localStorage) {
    window.localStorage.setItem(this.tableIdentifier,
      JSON.stringify(modifiedData))
  }
}

so that whenever a cell is changed, we store the state into localStorage.

We set tableIdentifier in the constructor, using

this.tableIdentifier = `tableData-${props.id}`

We use an id prop so that we can use multiple Table components in the
same app, and they will each save to their own storage, by rendering them
this way:

<Table x={4} y={4} id={'1'} />
<Table x={4} y={4} id={'2'} />

We now just need to load this state when the Table component is
initialized, by adding a componentWillMount() method to Table :

componentWillMount() {
  if (this.props.saveToLocalStorage &&
      window &&
      window.localStorage) {
    const data = window.localStorage.getItem(this.tableIdentifier)
    if (data) {
      this.setState({ data: JSON.parse(data) })
    }
  }
}
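
Note that neither id nor saveToLocalStorage appears in the
Table.propTypes we declared earlier. As an assumption about how you might
wire them up (this is not in the original code), something like this
keeps PropTypes happy and enables persistence by default:

Table.propTypes = {
  x: PropTypes.number.isRequired,
  y: PropTypes.number.isRequired,
  id: PropTypes.string, // used to build `tableIdentifier`
  saveToLocalStorage: PropTypes.bool, // guards the load in componentWillMount()
}

Table.defaultProps = {
  id: 'default', // assumption: fallback for single-table apps
  saveToLocalStorage: true, // assumption: persist unless explicitly disabled
}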

Wrapping up
That’s it for this tutorial!

Don’t miss the in-depth coverage of the topics we talked about (React,
JSX, ES2015) on the Write Software website.
