
Deploying Node.js
Kati Frantz

Version 1.0.0, 2020-05-31


Table of Contents
1. Introduction to remote servers
   1.1. Introduction to remote servers
   1.2. Provisioning your first server
   1.3. Connecting to a remote server
   1.4. Adding public key to the remote server
   1.5. Connecting via SSH keys
   1.6. Understanding users and access
   1.7. Creating a deployment user
   1.8. Installing standard server packages
   1.9. Server network security
   1.10. Cron Task automation
2. Web servers
   2.1. A refresher on how the web works
   2.2. Introduction to Nginx
   2.3. Case Study - Static Sites
   2.4. Case Study - Single page applications
   2.5. Case Study - Cryptoverter
   2.6. Case Study - Eventive
   2.7. Case Study - Midnight Tick Tock
   2.8. Case Study - Realtime Chat app
   2.9. Case Study - Ghost v3 blog
   2.10. Case Study - Database backups
   2.11. Case Study - Node BB Forum
3. Web server security
   3.1. Introduction to SSL / TLS
   3.2. Obtain an SSL certificate
   3.3. Securing nginx
   3.4. Case study - Securing Eventive with SSL
4. Scaling
   4.1. What is scalability
   4.2. Node.js scaling features
   4.3. Provisioning database servers
   4.4. Provisioning application servers
   4.5. Provisioning a load balancer
   4.6. Scaling Eventive
5. Next steps
   5.1. Next steps


Chapter 1. Introduction to remote servers

1.1. Introduction to remote servers
You just developed a shiny new node.js application. To make this application available on the
internet, it needs to be running on a server, with as little downtime as possible. You could purchase
a physical server and maintain it 24/7, but this is very costly, and only large corporations do this.

Rather than purchasing a server, you could rent a server from a cloud provider. A cloud provider is
a company or business that delivers cloud-based infrastructure and services, such as servers,
databases, and load balancers. The cloud provider is responsible for maintaining the servers, while
you focus on running your application on that server.

Behind the scenes, a cloud provider maintains the physical servers and uses a technique called
virtualization to divide these servers into multiple virtual servers of different resource sizes.

Let’s take an example. A cloud provider can purchase a physical server with 1028 CPU cores, 3 TB of
RAM, and 100 TB of Hard disk space. A developer visits their website, and with a few clicks of a
button, the developer can create a virtual, isolated server with 1 CPU core, 2 GB of RAM and 50 GB
Hard disk space. The developer can destroy, provision, or modify the resources of the virtual server
at any time.

So, what does it take to go live with a node.js application? First, we’ll choose a cloud
hosting provider. Next, we’ll provision a virtual server, and finally, we’ll manage this server by
installing all the technologies and resources we need to run the node.js application correctly.

1.1.1. Digital Ocean

Digital Ocean is currently one of the most popular cloud providers. It has a fluent API and a
simple, clear interface, and its easy-to-use products make it very easy to provision, modify, and
destroy virtual servers. It is very developer and beginner-friendly, and we’ll be using it to
provision any servers we need to deploy our applications throughout this book. You can apply the
skills and techniques you’ll learn from this book to almost any cloud provider you decide to use in
the future.

1.1.2. Getting a Digital Ocean account

You’ll provision many servers, load balancers, and other resources during the learning process. Use
this link to sign up for a new Digital Ocean account and get free credits to use as you go through this
book.

1.2. Provisioning your first server


On Digital Ocean, virtual private servers are called droplets. Let’s launch our first droplet.

Visit your Digital Ocean account dashboard. Click the Create button on the top bar and select
Droplets.

1.2.1. Choose an image

To launch a VPS, we need to decide what operating system to install. Most web applications run on
Linux distributions. We’ll be choosing the Ubuntu Linux distribution as our operating system
because of its low learning curve, efficiency, and popularity.

Select version 18.04.3 (LTS). Most packages have been updated and maintained for this version.

An image on Digital Ocean can be a Linux distribution. It can also be a distribution with specific
software already installed. For example, we can select one that has the latest version of node.js
already installed. Let’s do everything from scratch for now.

1.2.2. Choose a plan

The droplet size you choose should depend on the app you are deploying. For example, if it has a
rapidly growing database, we’ll need a lot of Disk Space. If we’re using an in-memory database like
Redis extensively, we’ll need a lot of RAM. At the moment, we’ll create a standard $10/mo droplet
with 2GB of RAM, 1 CPU core, and 50GB of disk space.

1.2.3. Choose a datacenter region

Digital Ocean has multiple data centers all over the world. The data center we choose depends on
the location of our users. If we’re deploying an application for a local restaurant in Malaysia,
we’ll select the closest data center, which is Singapore. I’ll use New York 1 since it’s the
closest to me.

1.2.4. Choose a virtual private cloud

A virtual private cloud is a private network of droplets and resources. Droplets in the same data
center can communicate with one another over a private network. If you have a web application
hosted on multiple droplets and these droplets need to connect, a private network is faster, more
secure, and completely isolated from the internet, except, of course, for the droplets you
intentionally expose. Every region has a default vpc. All servers in this region can connect over
the private network of this vpc. We’ll select the default vpc for the New York 1 region.

1.2.5. Select additional options

IPV6

Computers (servers) on the internet are identified by a public IP address (using a protocol called
IPv4). IPv6 is the newest version of this protocol. We won’t be needing an IPv6 address.

User data

User data is a script that Digital Ocean will run on the droplet after provisioning it. This script could
install a bunch of software needed to run our web application or perform some server tasks. We’ll
ignore this for now and come back to it in the future.

Monitoring

Digital Ocean can provide metrics about our droplet. Some metrics collected are:

• CPU Usage - The percentage of total processing power used.

• Disk Usage - The amount of disk space in use.

• Memory - The amount of physical RAM in use.

When your web application is actively in use, these metrics can guide you on whether to increase
or reduce your droplet size.

1.2.6. Authentication

Once our droplet is provisioned, we’ll need a way to get secure access to it. Select One-time
password. With this option, Digital Ocean sends us a password for the root user.

1.2.7. Choose a hostname

Give your droplet a memorable name.

1.2.8. Create Droplet

Click the Create Droplet button. Provisioning a droplet takes a minute.

1.3. Connecting to a remote server


After your droplet is successfully created, you should receive an email containing the root
password to your droplet.

This password allows us to login as the default user root on the droplet we just created. Keep in
mind that we have this user because we selected Ubuntu 18.04 LTS as the OS for the droplet. Every
new installation of Ubuntu comes with an administrator user called root.

1.3.1. Username/Password based authentication

The most popular and secure way of connecting to a remote server is a protocol called SSH, which
stands for Secure Shell. SSH establishes a secure connection between two computers: the client and
the remote server. It encrypts commands from the client before sending them to the remote server
and encrypts output from the remote server before sending it back to the client.

The first way to use SSH to connect to a remote server is using a Username/Password pair. Open a
terminal and type the following command:

ssh root@159.89.84.65

The SSH command requires the user we want to login as and the public IP address of the server. In
this case, the default user setup on the newly created droplet is root, and the IP address of my
droplet is 159.89.84.65. Replace this with your droplet IP address. You can find the IP address in the
email sent to you.

If you are on Mac or Linux, this should work just fine.

If you are on Windows, you’ll need to perform additional steps to make this work. Please follow this
guide to install the required software.

1. If it’s the first time you are connecting to a remote server, SSH will ask you to confirm that
you want to connect to an unknown host. Once you see the question Are you sure you want to
continue connecting (yes/no)?, type yes and hit Enter.

2. SSH will ask you to provide the root password. Copy the root password sent to your email and
paste it.

3. SSH establishes a successful connection to the remote server since your credentials were
correct.

4. When Digital Ocean provisions droplets, it installs a script on the new droplet that automatically
forces you to reset the default root password after your first login. This is a critical security
measure. To do this, it’ll ask you to provide the (current) UNIX password. Copy and paste from
the email as before and hit Enter. Next, it’ll ask you to Enter new UNIX password. Type in a
memorable one and hit Enter. It’ll ask you to Retype new UNIX password. Do so and hit Enter the
final time.

5. The password has been successfully changed. Now you are securely connected to the newly
created droplet. You should see root@deploying-node.js. In this case, root is the name of the
default user, and deploying-node.js is the hostname we chose when configuring the droplet in
the last section.

root@deploying-node.js:~#

1.3.2. Disconnecting from a remote server

To disconnect from a remote server, type exit and hit Enter. This command instructs SSH to
close the connection, and you should return to your local terminal or shell.

root@deploying-node.js:~# exit
logout
Connection to 159.89.84.65 closed.

1.3.3. SSH key-based authentication

Username/Password-based authentication has one weakness. Anyone from anywhere in the world
can get your public IP address and try logging in as the root user. They won’t know your password,
but they’ll set up a brute-force attack to try out random passwords a million times if that’s what
it takes. Even when such an attack fails to guess the password, it can slow down or even take down
your server. SSH key-based authentication is a better and more secure way of connecting to a
remote server via SSH.

1.3.4. Generating SSH Keys

Before we dive deep into understanding how SSH keys work and how they differ from
username/password authentication, let’s generate the SSH keys we’ll use for authentication. Run
the following command to generate an SSH keypair:

ssh-keygen

The ssh-keygen command takes you through a wizard, asking some questions about how it
should generate the key pair.

1. SSH asks for the file in which to save the key. The default path is /Users/<username>/.ssh/id_rsa.
When using SSH, if a key is not specified, SSH automatically uses this key. You can change this
behavior by specifying a full path to the file. In my case, I specified ~/.ssh/deploying-node.js.

2. To add more security, you can add a passphrase to your SSH keypair. Hit Enter to skip this.

3. SSH generates the key pair and saves it to the specified location.
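As an aside, ssh-keygen can also be run non-interactively. A minimal sketch, assuming the same key
path as above; -f sets the output file, and -N "" sets an empty passphrase:

ssh-keygen -f ~/.ssh/deploying-node.js -N ""

This generates the same kind of key pair without any prompts.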

To view the content of the generated key pair, use the cat command, which prints the content of a
file. Start with cat /Users/<username>/.ssh/deploying-node.js.pub to view the public key. Remember
to update this path if you chose a different name for your key pair. Next, run the same command
for the private key. The public key ends with .pub, and the private key has no extension.

cat /Users/<username>/.ssh/deploying-node.js.pub
cat /Users/<username>/.ssh/deploying-node.js

1.3.5. Understanding how SSH keys work

In cryptography, a keypair is often used for encryption and decryption of data. Let’s say John needs
to send Peter a critical, secret message.

1. Peter generates a keypair. This keypair consists of a private key and a public key.

2. Peter sends his public key to John. Peter never shares his private key with anybody. He can
share the public key with as many people as he pleases.

3. Once John has Peter’s public key, he’ll write his message and use Peter’s public key to encrypt
this message. The encryption process transforms the message into an unreadable random string
of characters. For example, when Hello ! is encrypted, it can become
PWwN4DxlM+jT4wov/4UdJUdW6s=.

4. John then sends the encrypted message to Peter. The only way to get the understandable
original is to decrypt it with the private key. Peter’s private key is the only key that can decrypt a
message encrypted with his public key. This procedure is very secure because even if someone
intercepted it, she wouldn’t be able to make any sense of it without Peter’s private key.

That’s precisely how SSH keys work. Now let’s go through the process for a real-world scenario with
SSH keys.

1. We just generated an SSH keypair. The next step is to give the public key to the remote server
we want to exchange data with (connect to).

2. We’ll log in to the remote server using Username/Password authentication and manually add our
public key. Then we’ll log out.

3. We’ll then try connecting to the remote server via SSH keys. SSH would send our public key to
the remote server. The remote server checks to see if this public key has been added to it
previously. If it hasn’t, it terminates the connection immediately.

4. If the remote server confirms that the public key is available, it’ll generate a secret message,
encrypt it with that public key, and send back to our computer.

5. Our computer has to decrypt the message using our private key and send the decrypted version
back to the remote server. If the remote server confirms that the message decryption was
correct, it means the computer trying to connect has the right private key and can get access.

6. SSH establishes a tunnel, and now we have access to the remote server.

1.4. Adding public key to the remote server


SSH comes installed with a command called ssh-copy-id. This command adds the public key of a
generated SSH key pair to the authorized_keys of a remote server. The authorized_keys is a file that
contains all the public keys authorized to establish connections with the server. Run this command
to copy your public key to the remote server:

ssh-copy-id -i ~/.ssh/deploying-node.js root@159.89.84.65

1. The -i option defines the full path to the keypair we want to add.

2. root@159.89.84.65 defines the user and the public IP address of the remote server.

That’s all it takes. We can manually verify if the key copied successfully. Login to your remote
server using Username/Password authentication. Make sure you use the newly set UNIX password. Run
the following command on your remote server:

cat /root/.ssh/authorized_keys

This command prints the content of the file located at path /root/.ssh/authorized_keys. All
authorized SSH keys for the root user on this server are stored here.

Exit the remote server.

1.5. Connecting via SSH keys


Our server now has two enabled methods of authentication, Username/Password and Key-based. To
connect via key-based authentication, run the following command:

ssh root@159.89.84.65 -i ~/.ssh/deploying-node.js

The -i option specifies the path to the SSH key to use for authentication. SSH automatically uses
key-based authentication if you pass this option. Using this command, we are connected to the
remote server without SSH asking for the root password.
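If you don’t want to pass the -i option on every connection, SSH supports a per-host client
configuration file at ~/.ssh/config. A minimal sketch; the Host alias is just an example name:

# ~/.ssh/config
Host deploying-nodejs
  HostName 159.89.84.65
  User root
  IdentityFile ~/.ssh/deploying-node.js

With this in place, running ssh deploying-nodejs connects with the right user, IP address, and key.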

1.5.1. Disabling password-based authentication

We still have the problem of brute-force attacks. Attackers can attempt logging in using
Username/Password authentication. To fix this, we’ll completely disable Username/Password
authentication. Let’s connect to the remote server via key-based authentication.

ssh root@159.89.84.65 -i ~/.ssh/deploying-node.js

Once we’re in, we need to update SSH configurations to disable password-based authentication.
We’ll use a terminal-based text editor called nano. Run the following command:

nano /etc/ssh/sshd_config

This command opens the /etc/ssh/sshd_config file in the nano text editor.

Use the arrow keys to scroll up and down this file. Scroll down till you see the line
PasswordAuthentication yes. Change yes to no.

To save the file, press Ctrl + X, type Y, and hit Enter.

You can run cat /etc/ssh/sshd_config to make sure the sshd_config file now contains
PasswordAuthentication no.

The final step is to reload SSH.

systemctl reload ssh

Exit the server.

To verify that Username/Password authentication has been disabled, try logging in as root again
without the SSH key. Any attempts should fail.
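For example, forcing SSH not to offer our key should now be rejected. The exact wording can vary,
but the attempt should look something like this:

ssh -o PubkeyAuthentication=no root@159.89.84.65
root@159.89.84.65: Permission denied (publickey).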

1.6. Understanding users and access
We are already familiar with the default root user that comes with every new droplet
(specifically, every new Ubuntu instance). This user can perform all actions on the server
without any additional permissions or authorizations required. It can delete any files, software,
or data from the server. This user is a god user. It is highly recommended
not to use the root user in your day-to-day server management tasks. Here are some reasons why:

1. Let’s say everyone on your team could log in to the production server as the root user. Humans
are prone to making mistakes, and one of them might make an irreversible mistake, like
deleting critical database records. You do not want to learn by experience on this one.

2. To deploy our applications, we’ll be installing packages such as web servers, databases, cache
servers, and monitoring tools. If we install a package that we’ve not vetted for vulnerabilities, it
might contain a script from potential attackers. If you are installing these packages as the root
user, and that malicious script gets executed as the root user, bad things might happen.

I highly recommend creating different users who do not have god powers like the root user. That
way, before any script or piece of software runs a command that requires god abilities, you can
take some time to evaluate its content to make sure it does not contain any malicious commands.

1.7. Creating a deployment user


We’ll add a user called deploy to be in charge of all things related to setting up our web server. Run
the following command:

adduser deploy

The adduser command takes the name of the new user we want to add. In this case, we’ll call the
user deploy.

1. The first step completed by this command is Adding user deploy.

2. Next, Adding a new group deploy. User groups are a great way to manage a collection of users.

Let’s say we had a folder called secret-place where we store classified documents. We also have
500 engineers on our team, and each of them has an associated user account. Of all the engineers,
we want to give access to this folder only to the 50 team leads. Instead of manually giving each
team lead access to the secret-place folder, we can create a user group called secret-place and
grant folder access to that group instead. All team leads that require access to the secret-place
folder are then added to the secret-place group.

By default, all users created by the adduser command have an associated user group created for
them.

3. Next, Creating home directory /home/deploy. The /home/<user> folder is where all the personal
files and configuration files for the user are stored. When the user logs in remotely, they are
also automatically directed here.

4. The adduser command also asks for a UNIX password for this user. Go ahead and provide one, hit
Enter, and confirm the password.

5. The next step asks for more information about the user, such as Full Name, Room Number, Work
Phone, Home Phone, and Other. You can skip each of these by hitting Enter. Finally, type Y to
confirm the information is correct.

1.7.1. Granting sudo access to our user

Even though we do not want all users to have god powers, on some rare occasions, we may need the
deploy user to run root-level commands such as adding a new database. To do this, we’ll make this
user a superuser. A superuser can perform administrative operations just like a root user. These
users are often called sudo users. Sudo stands for superuser do. By default, a user group called sudo
exists, and to give any user superuser permissions, we need to add the deploy user to the sudo user
group. We do that using the following command:

usermod -aG sudo deploy

Now the deploy user is a superuser. We can verify all the groups this user is a member of by
running the following command:

groups deploy

The groups command takes the name of the user whose groups we want to see. If you don’t provide
a user, it displays the groups of the user running the command. In this case, it should
display deploy : deploy sudo, confirming that the deploy user now belongs to the sudo group.

1.7.2. Verifying sudo powers

Let’s verify if the deploy user can execute root-level commands.

First, while logged in, we’ll switch from the root user to the deploy user. We can do that by running:

su - deploy

This command switches the terminal from root@deploying-node.js to deploy@deploying-node.js.

Now, we’ll try to run a command that normally only the root user has permission to run. Run the
command: ls -la /root. The ls command lists all the files in a directory, and /root is the root
user’s home directory. Now we’re trying to access this directory as the deploy user.

deploy@deploying-node.js:~$ ls -la /root


ls: cannot open directory '/root': Permission denied
deploy@deploying-node.js:~$

Now, let’s run the same command, invoking the sudo powers. Run sudo ls -la /root. It’ll ask you for
the sudo password of the deploy user. Type the password created for this user, and hit Enter. It
should successfully print out the list of files in the /root folder.

To go back to your session as the root user, type exit and hit enter.

1.7.3. Adding SSH key for deploy user

We need to be able to log in directly as the deploy user via SSH key authentication. At the moment,
the authorized_keys file we added our key to is only available to the root user. To use this SSH
key to log in as the deploy user, we need to make this key available to the deploy user too.

The first step is to create a .ssh folder for the deploy user. This folder stores all the SSH keys and
configurations for this user.

The last step is to copy over the authorized_keys from the root user’s directory to the deploy user’s
directory. Make sure you’re running the following commands as the root user.

mkdir /home/deploy/.ssh
cp /root/.ssh/authorized_keys /home/deploy/.ssh/authorized_keys
chown -R deploy:deploy /home/deploy

1. The mkdir command creates a folder specified in the argument. In this case, it creates a folder
called .ssh at path /home/deploy, which is the home directory of the deploy user.

2. The cp command copies one file from one location to another location. In this case, it copies the
authorized_keys file from the root user’s .ssh folder to that of the deploy user.

3. You can also run the cat /home/deploy/.ssh/authorized_keys command to make sure the content
is as expected.

4. The chown command changes the owner of all files in the /home/deploy to the deploy user. The -R
option changes all files recursively within this folder. If we are going to log in as the deploy user,
then the deploy user needs the correct permissions to be able to read the /home/deploy/.ssh
folder.

Exit from your server.

Now try connecting to your server as the deploy user via SSH key-based authentication.

ssh deploy@159.89.84.65 -i ~/.ssh/deploying-node.js

Yes, it works!

1.8. Installing standard server packages


We need to install the software our applications need to run successfully on our server. We’ll be
installing the most commonly used software for node.js web applications. Each package has
different installation instructions, so we need to go to each package’s documentation and find
instructions specifically for Ubuntu 18.04. This might sound tedious, but the Digital Ocean
Community Tutorials Page has thousands of tutorials and guides explaining how to install
x-package on Ubuntu 18.04. I highly recommend searching these tutorials to find the guide
you need before installing any server packages.

Before installing any packages, make sure you are logged in to your server as the deploy user.

1.8.1. Installing node.js and Npm

Run the following script to install node.js:

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -

sudo apt-get install -y nodejs

1. curl is a tool for transferring data from one server to another. In this case, we’re downloading
an installation script from https://deb.nodesource.com/setup_12.x to our server.

2. | is an operator taking the output from one command and sending it to the next as input. In this
case, we’re getting the output of the curl command and sending it as input to sudo -E bash -.

3. The bash command executes a bash script.

4. APT (Advanced Package Tool) is a package manager for Ubuntu and related Linux distributions.
It controls the installation and removal of software. The script we ran registered the node.js
package source, and using apt-get, we installed the package.

After running both commands, you can verify the successful installation by running node -v and
npm -v.

node -v
v12.14.1
npm -v
6.13.4

1.8.2. Installing Yarn

Run the following command to install yarn:

curl -o- -L https://yarnpkg.com/install.sh | bash

source ~/.bashrc

Just like the installation of node.js, we’re pulling down the yarn installation script and piping
it to bash.

After installing yarn, it won’t be available until we close our terminal and open a new one,
because the installer registers yarn in our shell configuration. The .bashrc file is a script that
the server runs every time a new shell session starts. In other words, the server executes this
script every time a user logs in. We can log out of our server and log in again so this script is
executed and yarn works, or we can run source ~/.bashrc to execute the script and have yarn
working immediately.

To verify a successful installation, run yarn --version.

1.8.3. Install n node version manager

If you have multiple node.js applications on the same server running on different node.js versions,
n is a great node version manager. We can install it using npm.

sudo npm i -g n

You can install additional node.js versions using:

sudo n 8.17.0

If you run node -v and npm -v now you’ll realize you get v8.17.0 and 6.13.4. You can switch to a
different node.js version by running sudo n <version>.
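For example, to switch back to the version we installed earlier:

sudo n 12.14.1

Running node -v again should now print v12.14.1.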

1.8.4. Installing MongoDB

MongoDB is a NoSQL database used by many web applications today. The MongoDB installation
requires a verification step. It is essential to verify that whatever we install is the actual
MongoDB and not malicious software. apt-get uses public key cryptography to verify package
authenticity. Here’s how it works:

1. A software maintainer publishes a package such as MongoDB. The maintainer generates a key
pair (private and public) and hosts the public key on a server.

2. Next, we try to install MongoDB. To ensure we are downloading the correct package, we’ll
download the public key and attempt installing the package using apt-get.

The maintainer signs the package releases with the private key. When apt-get downloads the
package, it uses the public key we added to verify this signature, confirming that the package
really comes from the maintainer and has not been tampered with.

Run the following commands to install MongoDB community edition:

# Download the public key
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

# Add mongodb to apt packages list
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list

# Run package updates
sudo apt-get update

# Install mongodb
sudo apt-get install -y mongodb-org

1. wget is very similar to curl and can be used as an alternative. Using this command, we
download the public key from the MongoDB server.

2. APT keeps a list of package sources for this server. The second command adds the MongoDB
repository to the list of sources managed by APT. That way, in the future, APT can check if there
are new versions of the package, security updates, and more.

3. apt-get update updates the local list of packages and gets information about their security
updates and latest versions.

4. The apt-get install command performs the actual MongoDB installation.

Verify your installation using: mongod --version

The Mongod service

MongoDB is installed alongside a service called mongod. Services in Linux are background running
processes performing a specific task.

sudo service mongod start

The service command can be used to manage Linux services. In this case, we are starting the
mongod daemon.

To check the status of a service, run:

sudo service mongod status

The output should confirm that mongod is active and running.

Securing MongoDB

The default MongoDB installation allows connections without authentication. To verify that you can
log in and run commands without authentication, run the MongoDB shell using:

mongo

Type db.stats() and hit Enter to get some statistics of the current database:

> db.stats()
{
  "db" : "test",
  "collections" : 0,
  "views" : 0,
  ...
}

In a production environment, this should not be the case. There must be an added layer of security
for customer and business data. Having access to the server should not automatically grant access
to the database. We’ll change this behavior in three steps:

1.8.5. Creating an admin database user

We’ll start by adding an administrator user. This user should have permissions over all databases
and users:

mongo admin --eval 'db.createUser({
  user: "admin",
  pwd: "admin",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "root", db: "admin" },
    "readWriteAnyDatabase"
  ]
})'

Using the mongo command, we can execute commands directly on MongoDB. The first argument we
passed here is the database we are running the command on, which is admin. The next option is the
--eval option, which defines the command we want to execute.

The command db.createUser creates a user with name admin and password (pwd) admin. We assign
the admin user the roles userAdminAnyDatabase, root, and readWriteAnyDatabase.

The database in which we save the user is the authentication database for that user. In this case,
we saved the admin user to the admin database.

Enabling Access control

We need to modify the MongoDB configuration to enable authorization. Run the command:

sudo nano /etc/mongod.conf

Once you’ve provided your sudo password and the nano editor is open, use the arrow keys to scroll
down till you see #security. The # means the line is a comment. Remove the # and update the
section to the following:

security:
  authorization: enabled

To save the file, press Ctrl + X, type Y and hit Enter.

Restart MongoDB with: sudo service mongod restart

Verifying authorization

Now let’s try to get the database statistics one more time. Login to the MongoDB shell using: mongo.
Now run db.stats(). You should see a message saying you’re unauthorized, and that the command
dbStats requires authentication.

> db.stats()
{
  "ok" : 0,
  "errmsg" : "command dbStats requires authentication",
  "code" : 13,
  "codeName" : "Unauthorized"
}

To authenticate, first, we’ll switch to the authentication database with the command use admin.
admin is the name of the database.

Next, we’ll run the command to authenticate as the admin user:

> use admin


> db.auth('admin', 'admin')

The password I set for the admin user is admin.

Now run db.stats() again, and you should get the expected results.
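As a shortcut, you can also authenticate while launching the shell. Assuming the same admin/admin
credentials we created above:

mongo admin -u admin -p

The shell prompts for the password and, once authenticated, commands like db.stats() work
immediately.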

1.8.6. Installing Redis

Redis is an in-memory database popularly used as a cache driver in web applications. Some of the
practice applications we’ll be deploying in this book require Redis to run correctly. To
install Redis, run the following command:

sudo apt install -y redis-server

To verify that Redis is running correctly, run:

sudo service redis status

Redis is active and running.

1.8.7. Installing MySQL

MySQL is the most popular SQL database today, used to develop many web applications. We can
install it in two steps. First, we’ll install the MySQL package, and then we’ll secure it.

sudo apt install -y mysql-server

The MySQL installation comes with a security script called mysql_secure_installation. I highly
recommend running this script after a fresh MySQL installation to change some less secure
default installation options. The script does the following:

1. Install a plugin called validate_password. This plugin validates the strength of the passwords
we choose for all database users we’ll be creating. It’ll ask you to select a password validation
policy.

2. Remove anonymous users. With a default installation, an anonymous user is created for testing
purposes that allows anyone to connect to MySQL without having a user account. We do not
want this in a production environment, so make sure you reply with y when prompted.

3. Disallow root login remotely. By default, anyone can attempt to connect to our MySQL server as
root from anywhere. The best practice in a production environment is to first SSH into the server
and then log in to MySQL locally. Reply with y when prompted to disallow remote root login.

4. Remove test database and access to it. A test database is created with a new installation. We
don’t need this. Reply to the prompt with y.

5. Reload privilege tables now. The answers selected above update user passwords and
privileges. Accepting to reload privileges makes these changes take effect immediately.

Now that you understand what this security script does, run it using:

sudo mysql_secure_installation

Creating a MySQL user and database

To establish a database connection from your application, I recommend using a non-root user. That
way, your application does not have complete access to the MySQL installation, but only access
granted to the non-root user. Run the following command to get into the MySQL shell:

sudo mysql

This command opens up a MySQL shell. First, for any application you’re deploying, you’ll need to
create a new database. Let’s create a database called eventive. Run the following SQL query:

CREATE DATABASE eventive;

Next, let’s create a user that we’ll use to connect to this database from our application.

CREATE USER 'eventive'@'%' IDENTIFIED BY '72br_%^nvk4';

Let’s break down this SQL query.

• The CREATE USER command creates the user. The name of the user is eventive. The @ specifies the
location from where this user can log in. The % indicates this user can log in from anywhere. If
you want to restrict login to only the current server, replace % with localhost.

• The IDENTIFIED BY command sets the password for the user. It should be long, random, and
contain symbols, numbers, and letters. That way, it’s not easy to guess.

Next, we need to grant this user permissions to access the database we created earlier.

GRANT ALL PRIVILEGES ON eventive.* TO 'eventive'@'%';

• eventive is the name of the database we created earlier.

• eventive.* specifies the tables in this database the user can access. * indicates this user can
access all tables.

• TO specifies the user to whom we are granting privileges.

Finally, run the query FLUSH PRIVILEGES; to reload the user permissions. That way, we do not need
to restart the MySQL service for our changes to take effect.
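To confirm the new user works, you can open a MySQL shell as that user; it prompts for the
password we set in the CREATE USER query, and the last argument selects the eventive database:

mysql -u eventive -p eventive

Once connected, running SHOW TABLES; should succeed (and return an empty set, since we haven’t
created any tables yet).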

1.9. Server network security


Our server accepts connections from the outside world through connection endpoints called ports.
For example, it accepts SSH connections by default on port 22. Also, our server accepts web traffic on
port 80. By default, all the ports on our server are accepting connections from anywhere. This
behavior is not secure.

For example, MongoDB runs on port 27017 by default, and now since all ports are open, anyone
from anywhere with our public IP address can try connecting to our database by brute force. Redis
runs on port 6379. Anyone from anywhere can also access this port.

We are going to secure access to our server and its services using a firewall. A firewall is a network
security software that monitors incoming and outgoing network traffic.

Using a firewall, we can restrict all incoming traffic from all sources, and allow only from the
sources we want.

Ubuntu comes installed by default with a service called ufw which stands for Uncomplicated
Firewall. This service helps us manage our firewall rules.

1.9.1. Creating firewall rules

Firewall rules define what kind of internet traffic to our server is allowed or blocked. For example,
we can allow traffic to port 22 if we want to allow access to SSH connections from anywhere.

SSH allows incoming traffic on port 22 by default. Let’s create a rule to allow access to port 22.
Remember, our firewall is still disabled, and if we enable it without adding a rule for SSH
connections, we won’t be able to connect to our servers via SSH anymore.

sudo ufw allow 22

You should receive a message saying the rule has been updated. Now, our server allows access to
port 22 from Anywhere.

Let’s add another rule. We’ll add a rule that permits us to connect to MongoDB. First of all, grab
your IP address by Googling my IP address.

Next, run the following command:

sudo ufw allow from 196.50.6.1 to any port 27017

With this command, I’m allowing access to port 27017, which is the default for MongoDB, only from
my IP address. In this case, my IP address is 196.50.6.1.

1.9.2. Enabling the firewall

We’ve added some rules, but for our firewall rules to work, we need to put up our firewall. Run the
command sudo ufw enable to enable the firewall. You might see a prompt informing you that this
command might disrupt your current SSH connection. If you do, type y and hit Enter.

sudo ufw enable

If you can’t run commands anymore after this, shut down your terminal and login to a new shell.

Now, run sudo ufw status to check the current status of the server firewall.

deploy@deploying-node.js:~$ sudo ufw status


Status: active

To Action From
-- ------ ----
22 ALLOW Anywhere
27017 ALLOW 196.50.6.1
22 (v6) ALLOW Anywhere (v6)

Notice port 22 is allowed access from Anywhere, while port 27017 is granted access only from
196.50.6.1, which is my IP address.

1.9.3. Deleting firewall rules

We suddenly decided we do not want to expose our MongoDB database anymore. Let’s delete the
firewall rule that exposes port 27017. Run the command:

sudo ufw delete allow from 196.50.6.1 to any port 27017

You should see a confirmation: Rule deleted. Now recheck the firewall status. Only port 22 should
be allowed.

deploy@deploying-node.js:~$ sudo ufw status


Status: active

To Action From
-- ------ ----
22 ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)

1.10. Cron Task automation


Some applications we build might need to run automated tasks. These tasks may be maintenance
tasks, metrics aggregation, reporting, or backups. For example, we might need to send automated
emails to all users of our application every day at midnight. To do this, we’ll create a script that
fetches all users from the database and sends them the email. The challenge would be to run this
script every day at midnight.

Cron is a time-based job scheduler that runs as a background process on most Unix-based operating
systems. If we want to run a script at midnight every day, we can pass this script to cron, and
cron executes it at the scheduled time.

1.10.1. Understanding how cron works

Cron jobs can be stored in a file called crontab. You can see a list of existing cron jobs by viewing the
contents of this file. Run the command sudo cat /etc/crontab to view its content. There are already
existing jobs in the crontab. To add our jobs, we need to add a new line to this file, using the
following structure:

minute hour day_of_month month day_of_week user command_or_script_to_run

• The first part of the job is a definition of the time when we want this job to run.

• Next, we have the user we want to use to run this command. The user could be root or deploy in
our case.

• Next, we have the command_or_script_to_run. An example is node /home/deploy/app/script.js.

Let’s add an example cron job that runs a command every minute. We need to edit the crontab file
and add our cron. Run the command sudo nano /etc/crontab. Add * * * * * root ls -la >
/etc/cron-tab-out-ls.log to the last line, save the content and exit. Let’s break down this cron job:

• * * * * * schedules the cron job to run every minute (first *) of every hour (second *), on
every day of the month (third *), in every month (fourth *), and on every day of the week
(fifth *). We’ll deep dive into how to define cron expressions soon.

• The root defines the user cron uses to run this command.

• ls -la > /etc/cron-tab-out-ls.log is the command cron runs every minute. This command lists
the files at the root directory and saves the output to a file located at /etc/cron-tab-out-ls.log.

To ensure this command has run successfully, wait for a minute, then run sudo cat
/etc/cron-tab-out-ls.log. If this file does not exist, it means cron has not run the command yet.
You should wait a full minute until cron runs the command and then check the content of the log
file. It should print out a list of all files in the root folder.

sudo cat /etc/cron-tab-out-ls.log

Cron expressions follow a specific syntax. The schedule is in the format minute hour day_of_month
month day_of_week. Let’s look at some examples:

• * * * * 5: This schedule runs every minute of every day, but only on Fridays. * indicates a
wildcard for all, and 5 is the fifth day_of_week, which is Friday.

• 30 6 * * 2: This schedule runs at 06:30 every Tuesday. The 30 indicates the minute, which is 30, 6
indicates the hour, which is 6 AM, the first * indicates every day of the month, and the second *
indicates every month. The 2 means the second day of the week, which is Tuesday.

• 00 12 14 2 *: This job runs every Valentine’s day at midday. 00 indicates minute zero, 12 means
the 12th hour which is noon, 14 means the 14th day of the month, 2 indicates the second month
of the year which is February, and * means any day of the week.
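As an exercise, the midnight email job described at the start of this section could be scheduled
with the expression 0 0 * * *, which runs at minute zero of hour zero every day. The script path
here is hypothetical:

0 0 * * * deploy node /home/deploy/app/send-emails.js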

I highly recommend writing down your own cron jobs to get an understanding of how these expressions
work. You can use Crontab Guru to learn how to write basic expressions. It’ll be instrumental in
scheduling your jobs or debugging and understanding jobs scheduled by others.

Chapter 2. Web servers

2.1. A refresher on how the web works
What happens when you visit a website? Let’s use nesabox.com in this case. Remember, we said web
applications are hosted on servers with unique IP addresses. How does this relate to the domain
names we use every day?

First, the reason why we have domain names is that IP addresses are hard to remember. So instead
of visiting the Nesabox website via the IP address, we use the domain name. When we type this
domain name into the URL bar of our browser or click on it in some Google search results, the
following things happen:

1. The browser makes a call to a system called the Domain Name System (DNS). The DNS is a large
internet phone book that resolves domain names into IP addresses. So the browser asks the
DNS, "Hey, can you please tell me what the IP address for nesabox.com is?". The DNS
checks its records and returns a valid IP address.

2. The browser then requests the IP address returned from the DNS.

3. The browser then receives a response from the server and displays it to you. This response is
usually valid HTML, or for APIs could be JSON or XML.
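You can watch step one happen yourself with the dig utility, which queries the DNS directly. For
example, using the domain from above:

dig +short nesabox.com

This prints the IP address (or addresses) the DNS returns for the domain, which is exactly what
the browser uses to make its request.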

2.1.1. Purchasing domain names

We’ve successfully set up our server with almost all the software we need to run a web application.
We have the IP address of the server, but this wouldn’t be very useful for the users of our
applications. We need to purchase a domain name. After buying the domain name, we have to tell
the DNS somehow to connect this domain name to our IP address. That way, when users try to visit
our domain name, it’ll resolve to our IP address, and our web application can be served correctly.

You do not have to buy a domain name for practice. I set up a simple application to enable you
to use a practice domain name to deploy your projects.

The practice domain is deployingnodejs.com. On this application, you can add domain records. The
app uses my Digital Ocean access token to add your domain record and IP address to the DNS. In a
real-world scenario, when you own the domain, you’ll be interacting directly with your DNS
manager to add any records.

Now how do you own a domain? Companies called Domain name providers are authorized to sell
domain names.

1. The first step in purchasing a domain name is deciding what you want your domain name to be.
The name you choose should be related to your business, book, course, organization, or reason
why you’re purchasing the domain. For the practical examples in this book, I chose the name
deployingnodejs.com.

2. The next step is to verify that this domain name is still available. Where do we do this? We
need to visit the website of a verified domain name provider and perform a search. I am going
to be using one of the most popular providers, called GoDaddy. The process is going to be very
similar if you use a different provider.

3. On any domain name provider’s website, you should see a search box where you can search for
your domain. That way, you can find out if the domain is still available or has been purchased
by someone.

4. Once you find an available domain, go through the checkout flow, and purchase your domain.

2.1.2. Managing Domain name records

Domain name records are configurations of our domain in the DNS address book. For example, to
tell the DNS that our domain deployingnodejs.com should point to a specific IP address, we need to
add a domain name record.

Now we just purchased our domain from GoDaddy, so we can add, modify, or delete records
directly from our GoDaddy dashboard. In this case, we say GoDaddy is our domain name provider
and is also our nameserver. A nameserver manages domain name records. Most, if not all, domain
name providers also act as domain nameservers.

There are also companies, such as Digital Ocean, that run domain nameservers but are not
domain name providers. That means we cannot purchase a domain on Digital Ocean, but we can
manage the records of a domain we already own.

We could keep managing our domain name records with GoDaddy, but we’ll move this job to Digital
Ocean. When you have a project with many resources such as domains, servers, and databases, it is
easier to manage all of them if they are in one place. Since Digital Ocean already provides
our servers, let’s use it to manage our DNS records too.

2.1.3. Changing domain name servers

To transfer the management of our domain from GoDaddy to Digital Ocean, we need to update the
nameservers of our domain.

Click on DNS, and scroll down to the Nameservers section. Click on Change.

Next, click on Enter my own nameservers (Advanced).

To transfer your domain name management to a different provider, you need to find the
nameservers of that provider. Digital Ocean has three: ns1.digitalocean.com, ns2.digitalocean.com,
and ns3.digitalocean.com. You may have seen GoDaddy’s own nameservers, such as
ns77.domaincontrol.com and ns78.domaincontrol.com.

Now input the name servers and hit Save.

This process takes about 24 to 48 hours, so you have to be very careful if you’re changing the
nameservers of a domain that is already receiving traffic.

2.1.4. Add domain to Digital Ocean

The final step of the transfer is adding the domain name to Digital Ocean. On the Digital Ocean
dashboard, visit Networking from the menu. Here, you can add a newly transferred domain and
begin managing DNS records.

2.2. Introduction to Nginx


We’ve learned about DNS. We’ve also learned that browsers get our server’s IP address from the
DNS and send web traffic to our server. The next step is to set up our server to handle
this traffic. The software required for this is called a web server.

Without web server software, our server is not able to respond correctly to web traffic. For
example, try visiting your IP address in your browser. That should give you a message saying this
site can't be reached.

There are many open-source alternatives out there, such as Nginx, HAProxy, and Apache.

We’ll be learning how to install and configure a web server software called Nginx due to its
simplicity and low learning curve.

Nginx is an open-source web server software. When web traffic comes into our server, we can
configure Nginx to handle this traffic appropriately. That can involve a lot of different tasks,
such as serving our actual website, caching the results of a request to enable much faster
responses in the future, and load balancing (which we’ll learn about later).

2.2.1. Installing Nginx

To install nginx, we’ll run the following command:

sudo apt-get install -y nginx

Now you can run nginx -v to confirm its installation.

That’s it. It’s that easy.

We can check the status of the Nginx service using sudo service nginx status. You should see Nginx
running.

Now, try visiting your IP address in the browser. Notice that we now have a welcome message from
Nginx.

2.2.2. The default nginx site

We see a default site. Let’s break down how this is served.

Nginx can serve multiple sites. For example, if your company’s blog blog.example.com is different
from the main website, www.example.com, you can host the two websites on a single server. That way,
if someone visits any of the sites, we can correctly configure Nginx to serve the correct website.

• The Nginx configuration and the configurations for all sites live in the /etc/nginx directory.
We can run the command ls -a /etc/nginx to view the content of this directory.

.   conf.d        fastcgi_params  ...  nginx.conf       sites-enabled
..  fastcgi.conf  koi-utf              sites-available

• There are two directories we’re concerned with here. The sites-available directory and the
sites-enabled directory.

◦ The sites-available directory contains the configuration files for all the sites this server can
serve.

◦ The sites-enabled directory contains the configuration files for all the available sites on this
server that are currently enabled. That means we can have available sites that are not
enabled.

• In the sites-available folder, there’s just one file, called default. You can confirm this using
ls -a /etc/nginx/sites-available. This file is the configuration for the default site we saw when
we visited our IP address in the browser. Let’s see how it looks using the command:
cat /etc/nginx/sites-available/default. I removed all the comments (all the lines
starting with #), so it’s readable.

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
  server_name _;
  location / {
  try_files $uri $uri/ =404;
  }
}

Let’s break down this site configuration.

• server is a server block. Since we can define multiple sites, also called multiple hosts, each host
or website is going to be defined in a server block.

• listen 80 default_server. The listen directive tells Nginx the hostname and port on which it should listen for HTTP connections or web traffic. Here, we’re listening for connections on port 80, which is the default port for receiving web traffic. Also, default_server is a fallback, instructing Nginx to serve this host or site by default.

• listen [::]:80 default_server. That does the same thing as above, but for IPv6 connectivity.

• root /var/www/html. This directive defines the default root folder for this site or host. If we go to
this folder, we should see the web files for this site or host.

• index index.html index.htm index.nginx-debian.html - The index directive defines the index file
name. The index file is a file that is loaded by default when a user visits a site. In this case, we
have multiple values. First, Nginx will try to find the index.html file. If not found, it goes to the
next and so on.

• server_name _. The server_name directive defines the name of the host or site. For example, if we were hosting our blog at the domain blog.example.com, this value would be server_name blog.example.com. That way, when a user visits blog.example.com, Nginx finds the server block with the corresponding server_name and uses it. In this case, the server_name is the underscore _, a placeholder used when the block is meant to act as a catch-all.

• location /. The location block allows us to route a request to the correct location within the file
system. For example, defining / matches all routes, and setting /api matches only requests that
start with /api.

• try_files $uri $uri/ =404. try_files is a directive that instructs Nginx on how to find the file
for this specific location block. try_files $uri says we should try finding a file that matches the
current $uri. For example, if a user visits /products/marketing.html, it’s going to try finding and
serving a file /products/marketing.html in the root directory. The second part, $uri/ instructs
Nginx to check for a folder if it doesn’t find a file. The section =404 tells Nginx to return a 404 to
the browser, meaning it didn’t find a matching file or folder for this request path.

Let’s view the content of the root folder using ls -a /var/www/html. Notice that we have a file called
index.nginx-debian.html. Run cat /var/www/html/index.nginx-debian.html to view its content.

In summary, the default site is configured in a file /etc/nginx/sites-available/default. In this configuration file, the root directive is /var/www/html, instructing Nginx to point to this directory to serve the website.

2.2.3. Removing the default site

For a new Nginx installation, I recommend having a blank slate. We shouldn’t have sites or hosts
we’re not using, so we’ll clean this up. To remove this site, first, we’ll delete the configuration from
the sites-enabled folder, then from the sites-available folder, and finally, we’ll remove the source
files for this site.

sudo rm /etc/nginx/sites-enabled/default
sudo rm /etc/nginx/sites-available/default

sudo rm -rf /var/www/html

After deleting, editing, or adding configuration files, I recommend running sudo nginx -t to test all
configuration files to make sure everything still works as expected.
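On a healthy setup, the output of sudo nginx -t looks like this:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful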

Visiting our IP address now shows us a 404 page.

2.3. Case Study - Static Sites


In this case study, we’ll be learning how to configure and serve a static HTML site using Nginx. This
project is called More Recipes, the static HTML template for a recipe sharing application. The source
code for our static site is in this git repository.

2.3.1. Fetching the project source code

The first step is getting the site source code to our server. There are many ways to do this. We could
use SSH to send all the files to our server, but this would not be efficient when we have multiple
people working on a project and continuously making file changes.

The most efficient and modern way is to use git. Our source files are in a Git repository, so all we
need to do is clone this repository into our server. We are going to be cloning into the home folder
of the deploy user. Make sure you run cd ~ to go back to the home folder, then run the following
command to clone this repository into a folder on our server:

git clone https://github.com/deploying-nodejs/more-recipes-static-site

Great. We have the project source code. In the future, if you make changes to your repository,
deploying the new changes to this site would be as easy as running a git pull in this folder.

2.3.2. Configuring Nginx for static sites

Let’s create a configuration file for the new site. First, let’s decide what our server name will be. We’ll be deploying this site to more-recipes-static-site.deployingnodejs.com. I’ll already be using this subdomain, so choose something unique for yours. For example, you can use more-recipes-static-site-xtw.deployingnodejs.com.

This is the configuration file we’ll use:

server {
  listen 80;
  root /home/deploy/more-recipes-static-site;
  server_name more-recipes-static-site.deployingnodejs.com;
  location / {
    try_files $uri $uri/ =404;
  }
}

• The root of our project should point to the folder in which we cloned the source. In this case, it’s
/home/deploy/more-recipes-static-site.

• The server_name should match the domain of our site.

• Notice our configuration lacks an index directive. That is because, by default, Nginx checks for
the index.html file, which we have available in the source files.

Run the following command to open a new configuration file in our file editor called nano:

sudo nano /etc/nginx/sites-available/more-recipes-static-site.deployingnodejs.com

In this case, the name of our configuration file is more-recipes-static-site.deployingnodejs.com. It could be anything, but the more descriptive it is of what site this configures, the easier it is to manage in the future.

Copy and paste the above configuration. Make sure to modify it if you’re using a user with a
different name or a separate root folder.

Now save the file and exit. We just added the configuration as an available site. That still does not
make our site live. We need to add this same configuration to the sites-enabled folder to enable it.
We could copy and paste the file, but this won’t be easy to manage because we would have to
manually update both files if we change the configuration in the future. Instead, we’ll create a
symlink of the configuration file. A symlink is a file that references another file. So instead of
creating another configuration in the sites-enabled folder, we’ll create a symlink to the
configuration file in the sites-available folder instead.

sudo ln -s /etc/nginx/sites-available/more-recipes-static-site.deployingnodejs.com /etc/nginx/sites-enabled/

Now, we’ll have a new file /etc/nginx/sites-enabled/more-recipes-static-site.deployingnodejs.com which links to the configuration file in the sites-available folder. That way, any changes we make to the configuration in the sites-available folder automatically reflect in the sites-enabled configuration file.

After updating configuration files, we need to reload nginx, so it is aware of our changes:

sudo service nginx reload

2.3.3. DNS Configuration

One more thing is left to do. Our server is ready to receive web traffic, but we haven’t connected
our domain to our public IP address. Remember, we transferred our name servers over to Digital
Ocean. We need to visit our Digital Ocean Admin Panel to manage DNS records.

Domain name records are entries for our domain in the DNS address book. On the domain page, we can add, modify, or delete records, and this should be possible with any DNS provider you use. There are multiple types of domain name records. Let’s talk about the most common:

• A-records: This record maps a domain to the IP address of the server hosting the domain. In this
case, our domain is more-recipes-static-site.deployingnodejs.com. We’ll be adding an A record
and pointing it to the IP address of our server.

• CNAME records (C stands for Canonical): This record acts as an alias by mapping a hostname to another hostname. For example, we may want all users who visit www.deployingnodejs.com to see the content of google.com. To do this, we’d add a CNAME record mapping www to google.com.

• MX records - These records specify the mail servers responsible for accepting emails on behalf
of your domain.

• TXT records - These records can be used to associate a string of text with a hostname, and are primarily used for verification. For example, if a web service wants to verify that you own a domain, they can give you a string to add as a TXT record to your DNS configuration. Then, they’ll check if this domain has that TXT value. You can only add the TXT record if you control the domain, so this is an excellent verification mechanism.
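To make these record types concrete, here’s what a hypothetical set of records might look like in classic zone-file notation (the names and IP address below are purely illustrative):

example.com.       3600  IN  A      203.0.113.10
blog.example.com.  3600  IN  A      203.0.113.10
www.example.com.   3600  IN  CNAME  example.com.
example.com.       3600  IN  MX     10 mail.example.com.
example.com.       3600  IN  TXT    "verification-token-abc123"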

Let’s proceed to add the A-record we need for the static site now. To add an A record for the static site, visit the domain record management dashboard for your domain. In this case, Digital Ocean is in charge of my domain name records.

Notice there’s a value called TTL (Time To Live), which by default is 3600. Remember, we said the browser queries the DNS to get the IP address that maps to a domain. Now, if the browser always had to make this request, visiting websites would be very slow. Instead, this record is cached, so getting the value is fast. The TTL defines the amount of time (in seconds) the value of this record should be cached before the browser has to request the value directly from the DNS again.
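If you have the dig utility installed locally, you can inspect an A record and its remaining TTL yourself. Here’s an illustrative query and response, assuming the record already points at our server IP:

dig +noall +answer more-recipes-static-site.deployingnodejs.com

more-recipes-static-site.deployingnodejs.com. 3600 IN A 159.89.84.65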

If you want to use a subdomain of deployingnodejs.com to host your site, add an A record on the A-record dashboard here. You would need to provide the subdomain you want to add and the IP address of your server. The A-record panel uses my Digital Ocean personal access token to add the record to the deployingnodejs.com domain.

Visit the browser, and you should see your application live.

2.4. Case Study - Single page applications


In this case study, we’ll be learning how to configure and serve a single page application using Nginx. This project is called Community Blog, a single page blogging application built with Vuejs and Vue Router. Knowledge of Vuejs is not required to follow along. The techniques you learn in this section would work for almost any client-side single page application. The source code for our SPA is in this git repository.

2.4.1. Setting up repository source code

The first step is getting the site source code to our server. Just like the last case study, we’ll use git to
clone the repository. Before cloning, make sure you’re running the clone command from your home
directory. You can make sure by running cd ~, which will change the directory to the home for the
currently logged in user.

git clone https://github.com/deploying-nodejs/spa-community-blog.git

Next, if you have a look at the README.md, there are some instructions to run to fully set up this project. First, we need to install the project dependencies. The project uses yarn, so we need to make sure we use yarn too when deploying the application.

cd /home/deploy/spa-community-blog
yarn

Now we need to build the application. Run the command yarn build. This command will generate a production build in the dist/ folder. This means the actual application we’re deploying is in /home/deploy/spa-community-blog/dist. We need to make sure our nginx configuration reflects this.

2.4.2. Configuring Nginx for single-page applications

Let’s create a configuration file for the new site. First, let’s decide what our server name will be. We’ll be deploying this site to spa-community-blog.deployingnodejs.com. Just like in the last project, choose a unique subdomain to deploy the application to. For example, you can use spa-community-blog-xpoe.deployingnodejs.com.

This is the configuration file we’ll use:

server {
  listen 80;
  root /home/deploy/spa-community-blog/dist;
  server_name spa-community-blog.deployingnodejs.com;

  location / {
    try_files $uri $uri/ /index.html;
  }
}

There are two changes we should take note of:

• The root does not point to the actual source code, but to the folder generated after we run our
build script. For most single-page applications, do not forget to configure nginx to point to the
production build folder.

• try_files $uri $uri/ /index.html. The try_files value is a little different from the one for static HTML sites, which was try_files $uri $uri/ =404;. For a static site, we instructed Nginx to return a 404 if it did not find a file or folder matching $uri. For this single page application, we’re instructing nginx to return the index.html file if no matching $uri was found. That is because the single page application uses client-side routing, so we want to render the index.html file for routes not handled by the server, and the client will know how to handle the route in the browser properly.

Run the following command to open a new configuration file in our file editor called nano:

sudo nano /etc/nginx/sites-available/spa-community-blog.deployingnodejs.com

Next, we’ll run a command to link this configuration to the sites-enabled folder.

sudo ln -s /etc/nginx/sites-available/spa-community-blog.deployingnodejs.com /etc/nginx/sites-enabled/

And finally, let’s reload nginx:

sudo service nginx reload

2.4.3. DNS configuration

The last piece of the puzzle is configuring DNS to route traffic from spa-community-blog.deployingnodejs.com to our IP address. We’ll be adding an A record that points the subdomain spa-community-blog to the server IP address.

Visit the browser, and you should see your application live. Be sure to click around the app to see
how Nginx hands over the routing to the client-side.

2.5. Case Study - Cryptoverter


In this case study, we’ll be learning how to configure and serve a full-stack Nodejs application using Nginx. This project is called Cryptoverter, a full-stack crypto converter application with a complete authentication system built with Nodejs, ExpressJs, MongoDB, Vuejs, and Vue Router. Knowledge of any of these technologies is not required to follow along. The source code for our full-stack application is in this git repository.

2.5.1. Project specifications

This project provides complete authentication functionality. A user can:

• Register a new account

• Log in and out of their account

• Confirm their email with a link sent from the application

• Forget and reset their password

2.5.2. Creating a MongoDB user

Since our project makes use of a MongoDB database, we need to create valid access credentials for
this specific application. We’ll create a user and a password, giving the user read and write
permissions to the database we’ll use for the application. Run the following command to create the
user we need:

mongo cryptoverter --eval 'db.createUser({
  user: "cryptoverter",
  pwd: "cryptoverter",
  roles: [{ "role": "readWrite", "db": "cryptoverter" }]
})' -u admin -p admin --authenticationDatabase admin

Let’s break this down:

• mongo cryptoverter: We’re running this command against a database called cryptoverter. The
user we create would be stored in this database.

• roles: [{ "role": "readWrite", "db": "cryptoverter" }]: We define the roles and access
permissions for our user. This user would be able to read and write from the database called
cryptoverter, which is the database our application will use.

• -u admin -p admin --authenticationDatabase admin: Since we have security enabled for our mongodb installation, we need to provide the credentials of a user who has the correct permissions to execute this command. The user provided, in this case, is the admin user we created in the last chapter. The password for this user was admin, and the authentication database was admin too.

MongoDB shell version v4.2.2
connecting to: mongodb://127.0.0.1:27017/cryptoverter
?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session {
  "id" : UUID("99c2a49f-91a9-42ec-b45f-b4255ccd87b2")
}
MongoDB server version: 4.2.2
Successfully added user: {
  "user" : "cryptoverter",
  "roles" : [
  {
  "role" : "readWrite",
  "db" : "cryptoverter"
  }
  ]
}

For every new application that requires a database, I highly recommend creating a new user and
database.
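As a quick sanity check, you can confirm that the new user can authenticate against the cryptoverter database before wiring it into the application:

mongo cryptoverter -u cryptoverter -p cryptoverter --eval 'db.stats()'

If the credentials are correct, this prints the database statistics instead of an authentication error.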

2.5.3. Setting up repository source code

First, clone the repository into your server with git clone https://github.com/deploying-nodejs/cryptoverter. Next, check the README.md file for more instructions on how to set up this project. This project uses a popular package called DotEnv to manage environment variables. At the root of the project is a file called .env.example, giving us a clear example of all the environment variables needed to run the application smoothly. You can run cat .env.example to view the content of this file:

MAIL_CONNECTION=ethereal
DATABASE_URL=mongodb://localhost:27017/cryptoverter
APP_URL=http://localhost:3000
PORT=3000
NODE_ENV=development

To set up the environment variables for this project, we’ll run the command cp .env.example .env
to copy the contents of the example environment file to a file called .env. The package DotEnv we use
is going to automatically set the values defined in this file as environment variables.

cd cryptoverter
cp .env.example .env
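As a side note, this is roughly what the application does at startup to load these variables. It is a minimal sketch of standard DotEnv usage, not the project’s exact code:

// Load variables from .env into process.env before anything else runs.
require('dotenv').config();

// After the call above, values from the file are available as usual:
const port = process.env.PORT || 3000;
console.log(`Starting on port ${port} in ${process.env.NODE_ENV} mode`);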

Now we need to modify the .env file to add the real environment variables. Run nano .env and change its content to the following:

MAIL_CONNECTION=ethereal
DATABASE_URL=mongodb://cryptoverter:cryptoverter@localhost:27017/cryptoverter
APP_URL=http://cryptoverter.deployingnodejs.com
PORT=3000
NODE_ENV=production

Be sure to modify this file to match your own APP_URL and database credentials. The DATABASE_URL is of the format mongodb://<username>:<password>@<host>:<port>/<database>. For example, if you created the user with a different username and password, make sure you replace those in your DATABASE_URL.

The next step is for us to install project dependencies with yarn. Run yarn. Finally, we’ll build the
application for production using yarn build.

yarn
yarn build

This command builds the client and server-side code for production and puts the result of the build
in the dist/ folder. Now how do we run our server?

2.5.4. Process management with PM2

The next challenge we have is running our Nodejs server in a production environment. We could just use the yarn start command, and this would work, but the process would occupy our shell, and we couldn’t do anything else with it. Worse, if we close our SSH session, the process stops running.

What we need is to run this process as a daemon. A daemon is a process that runs in the background and is not under the direct control of an interactive user.

The next concern we have is that when an unhandled error occurs in a running Nodejs process, the process shuts down. That means if an error occurs, our application automatically goes down. That is the worst thing in production, because the app is unavailable until you can manually restart it.

There are many more things we have to be concerned about, and we’re going to be using a process
manager called PM2 to handle most of these production concerns. PM2 would be in charge of
running our application as a daemon, monitoring it if it goes down, automatically restarting it if
there are any errors, and also saving logs from the running process.

PM2 comes as an npm package. To install it globally, run the command:

sudo npm i -g pm2

Once you install PM2, we can use it to start our application.

pm2 start dist/index.js --name=cryptoverter

• The start command requires the file we need to run, in this case, the dist/index.js file.

• The --name option takes a readable and memorable name we can use later to identify this
specific application.

That’s all it takes to start the production-ready application using the PM2 process manager. This
command should print a table showing all active PM2 processes.

Out of the box, we can see metrics such as the background process ID (pid), CPU (cpu), and memory consumption (mem) of the running process.

• The cpu shows the amount of CPU consumed by the process.

• The mem reveals the amount of RAM consumed by the process.

To view the logs from the background running process, run the command pm2 logs cryptoverter. It
will display the process logs in real-time.
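A few other PM2 commands are worth knowing at this point. This is a short, non-exhaustive selection from the standard PM2 CLI:

pm2 list                  # show all processes managed by PM2
pm2 restart cryptoverter  # restart the app, e.g. after pulling new code
pm2 stop cryptoverter     # stop the app without removing it from the list
pm2 save                  # save the current process list
pm2 startup               # generate a script so PM2 restarts apps on reboot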

2.5.5. Configuring Nginx as a reverse proxy

This application is a little different from the ones we’ve had so far. In this case, we have an actual
Nodejs server running, and we need to instruct Nginx to direct all incoming traffic to this Node
server. Nginx, in this case, will function as a reverse proxy. A proxy server receives a request from
a client, directs the request to the actual server, gets a response from the actual server, and returns
it to the client. To the client, the proxy appears as the main server.

Here’s the Nginx configuration we’ll use for this application:

server {
  listen 80;
  server_name cryptoverter.deployingnodejs.com;

  location / {
    proxy_pass http://localhost:3000;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
  }
}

We have some new directives. Let’s go through them:

1. proxy_pass - This is an essential directive. Remember, a server block defines the configuration
for a site. We are instructing Nginx, in this case, to direct all incoming traffic for the
cryptoverter.deployingnodejs.com site to the Nodejs server running on http://localhost:3000.
When the Nodejs server generates a response, Nginx receives the response and sends that
response back to the client or browser that made the request.

2. proxy_set_header X-Forwarded-For - The proxy_pass directive sends all incoming traffic to our Nodejs application. To the Nodejs application, the HTTP request appears to come from Nginx. That is not very helpful, because the Nodejs application would most likely need to know the actual origin of the request. With the proxy_set_header X-Forwarded-For directive, Nginx sets a header called X-Forwarded-For on the request. The value of this header is $proxy_add_x_forwarded_for, which is an Nginx variable that contains all the clients and proxies that handled the request before it got to this point. That means if we had four proxy servers, this variable would contain the public IP addresses of all four.

3. proxy_set_header X-Real-IP - Nginx sets a header called X-Real-IP, which is the IP address of the
origin of this request. The value of this IP address is $remote_addr.

4. proxy_set_header Host - This sets the Host header to $http_host, the host the request came to. In this case, if the user visited cryptoverter.deployingnodejs.com, this would be the host set.

5. proxy_http_version - This defines the version of the HTTP protocol Nginx uses when talking to the proxied server; 1.1 is the highest version proxy_pass supports. We’ll be using HTTP/2 between the browser and Nginx when we get to securing our servers with SSL.
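On the Nodejs side, frameworks can be configured to trust these forwarded headers. For example, if the application uses Express (as Cryptoverter does), something like the sketch below, which is illustrative rather than the project’s actual code, makes req.ip reflect the forwarded client address instead of Nginx’s:

const express = require('express');
const app = express();

// Trust the first proxy in front of the app (our Nginx instance), so
// req.ip and req.protocol are derived from the X-Forwarded-* headers.
app.set('trust proxy', 1);

app.get('/whoami', (req, res) => {
  // With 'trust proxy' set, this is the real client IP, not 127.0.0.1.
  res.send(`Your IP is ${req.ip}`);
});

app.listen(3000);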

2.5.6. DNS configuration

All we have left is to configure the DNS for our site. We’ll add an A record that points to the IP
address of the server.

Visit your site in the browser to see the Nodejs application running. Try registering an account to make sure the application can correctly connect to the database and write some data. Also, you can log in to make sure it can read from the database too.

2.5.7. Serving static assets using Nginx

At the moment, when our website loads in the browser, it makes requests to fetch static assets from our server, such as client-side JS scripts, CSS stylesheets, and favicons. Our Nodejs Express server serves these static assets because Nginx proxies all requests to it. Nginx is a high-performance web server, much better at handling static assets than Nodejs. Let’s tweak our Nginx configuration so it manages the static assets itself. Modify the current configuration using:

server {
  listen 80;
  server_name cryptoverter.deployingnodejs.com;

  location / {
    proxy_pass http://localhost:3000;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
  }

  location ~ \.(css|js|png) {
    root /home/deploy/cryptoverter/dist/public;
  }
}

We added a new location block. The first location block matches /, which means all requests. The new one we added matches ~ \.(css|js|png), which means all requests that end with .css, .js, or .png. This directive instructs Nginx to match all JS scripts, CSS stylesheets, and PNG images and serve them from the defined root.

Run this command to edit the configuration file:

sudo nano /etc/nginx/sites-available/cryptoverter.deployingnodejs.com

Once saved, reload Nginx with sudo service nginx reload.

Now visit the application in the browser, give it a refresh, and make sure everything still works fine. With the new changes, every time the browser requests a static asset ending in .css, .js, or .png, the request is handled at the Nginx level and never reaches the Nodejs application. Nginx fetches and returns these assets.

2.6. Case Study - Eventive


This case study is a full-stack application called Eventive. It is slightly different from the others we’ve studied so far. It has a separate backend API server and a separate frontend single page application. I built the backend with the AdonisJs framework, and the frontend with React. The backend exposes a REST API. A MySQL database powers the backend for permanent data storage, and a Redis database powers queuing.

Again, you do not need to know any of these technologies to be able to deploy this application.
Here’s the Git repository for the project.

2.6.1. Project specifications

This project provides a simple interface for creating, fetching, and deleting events. It also has a queue worker system that sends a notification to the administrator in the background every time an event is added or removed on the site.

The backend exposes a REST API. To run the backend application successfully, follow these
instructions:

• Clone the code repository using git clone https://github.com/deploying-nodejs/eventive.git.

• The backend is a completely separate npm project, so install the backend npm dependencies by
changing directory to the backend folder (cd eventive/backend) and running yarn install.

• Once the dependencies are installed, the next step is to set up environment variables by
creating a .env file at the root of the backend project folder and adding environment key-value
pairs to it. To see a sample of all variables required to run the backend correctly, copy the
example content from .env.example using the command cp .env.example .env. The example
environment variables are:

HOST=127.0.0.1
PORT=3333
NODE_ENV=production

APP_NAME=Eventive
APP_URL=http://${HOST}:${PORT}

CACHE_VIEWS=false

APP_KEY=5NURy6uYjmkP72brRzL5BTDshnvkutcc
REDIS_CONNECTION_URL=

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_USER=root
DB_PASSWORD=
DB_DATABASE=eventive

HASH_DRIVER=bcrypt

• Mysql is required to run the application correctly. Create a Mysql database and user, and
provide the credentials in the environment variables DB_DATABASE, DB_USER and DB_PASSWORD.

• After setting up environment variables, run the command node ace migrations:run at the root of
the backend project. This command sets up the database tables and fields by running
migrations defined in the backend/database/migrations folder of the backend project.

• To run the backend REST API use the yarn start command. In production, this command should
be run using a process manager such as PM2.

• When events are created or deleted on the application, a mail notification job is queued to redis. This job notifies the administrator that something happened on the platform. Mail notification jobs are processed in the backend/workers/mail.js file. The API pushes the job to a queue on redis, and the mail worker pulls these jobs and processes them. To have these notification jobs processed, you also need to run the worker as a separate mail process. You can do this by running the command yarn start-worker:mail from the root of the backend project.

The frontend of the application is a single page application created using Create React App. To run
the frontend application successfully, follow these instructions:

• The frontend is an entirely separate npm project, so install the frontend npm dependencies by
changing directory to the frontend folder (cd eventive/frontend) and running yarn install.

• Next, create a file called .env and set an environment variable REACT_APP_API_URL=, which should
point to the running backend REST API.

• Finally, create a production build using the yarn build command. This command will generate
the production build in the /build folder. You can then serve this build as a single page
application.
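Combining the techniques from the previous case studies, the Nginx side of this deployment might look roughly like the sketch below: one server block serving the frontend build as a single page application, and one proxying to the backend API. The domains are hypothetical, and the paths assume the repository was cloned into the home folder of the node user; adjust everything to your own setup.

server {
  listen 80;
  root /home/node/eventive/frontend/build;
  server_name eventive.deployingnodejs.com;

  location / {
    try_files $uri $uri/ /index.html;
  }
}

server {
  listen 80;
  server_name api-eventive.deployingnodejs.com;

  location / {
    proxy_pass http://localhost:3333;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
  }
}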

2.6.2. Requirements

The following tasks are required to complete this case study:

• Provision a new Digital Ocean droplet.

• Connect to this new droplet, disable password authentication, and allow only SSH connections
using a sudo user called node.

• Secure the server with a firewall so that only the HTTP port is available for connection.

• Install standard server packages such as Nodejs, npm, yarn, and PM2 on this droplet.

• Install and secure mysql on this droplet.

• Create a mysql database and database user for the application.

• Clone the project repository.

• Setup and run the backend REST API using PM2.

• Run the backend queue worker as a separate PM2 process.

• Setup and build the frontend of the application.

• Setup an nginx configuration to serve the frontend of the application as a single page
application.

• Setup an nginx configuration to proxy traffic to the backend API.

• Setup two DNS records (A records). The first points to the client-side and the second one points
to the API. The client-side should connect to the API using the configured domain. You can set up
a subdomain on your domain, or you can use deployingnodejs.com by adding an A record on the
A record dashboard.

• Submit screenshots to confirm that you have completed each of the above tasks.

2.7. Case Study - Midnight Tick Tock


This case study is a Nodejs application with just one functionality. It saves a list of endpoints in a
Redis database, and every minute (or hour, or midnight), it makes a GET request to all saved URLs.
Here’s a link to the project Git repository.

2.7.1. Project specifications

This project is a server-side rendered Node js application built on Express.

This application provides an interface with a single input. It takes in a valid url and saves this url to
a redis database. The script.js file fetches all the saved URLs from the database and makes a GET
request to all of them. This application can serve as a health checker that checks a given set of URLs
to see if they return a status of 200. To make this application periodically call the provided URLs, we
can set up a recurrent cron job that executes the script.js file.

To correctly run this project, follow these instructions:

• Clone the project repository with the command git clone https://github.com/deploying-
nodejs/midnight-tock-tock.git.

• Install the project dependencies with yarn install.

• Create a .env file to define environment variables. To see a sample of all variables required to
run the project correctly, copy the example content from .env.example using the command cp
.env.example .env. The example environment variables are:

REDIS_URL=

• The project requires redis to run correctly. Configure the correct redis url in the environment
file. If your application and your redis installation are running on the same server, the url can
be REDIS_URL=redis://localhost:6379, connecting to localhost on the default port.

2.7.2. Requirements

The following tasks are required to complete this case study:

• Provision a new Digital Ocean droplet.

• Connect to this new droplet, disable password authentication, and allow only SSH connections
using a sudo user called node.

• Secure the server with a firewall so that only the HTTP port is available for connection.

• Install standard server packages such as Nodejs, npm, yarn, and PM2 on this droplet.

• Install Redis on this droplet.

• Setup this application to run as a PM2 process.

• Create a CRON job that executes the <repo>/script.js file every 5 minutes and saves the output in a log file (see the example crontab entry after this list). You can visit the chapter on CRON jobs to understand how to run the script.js file every five minutes.

• Setup an Nginx site to run as a proxy to the PM2 process.

• Setup DNS records (A record) that point to this server and serve the application. You can set up
a subdomain on your domain, or you can use deployingnodejs.com by adding an A record on the
A record dashboard.

• Submit screenshots to confirm that you have completed each of the above tasks.
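For the CRON requirement above, the crontab entry might look something like the line below. The node binary location and the repository path are illustrative and depend on your own server:

*/5 * * * * /usr/bin/node /home/node/midnight-tock-tock/script.js >> /home/node/cron.log 2>&1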

2.8. Case Study - Realtime Chat app


This case study is a Nodejs application built on Socket.io to enable real-time communication. This
case study is a little different from the usual Node.js deployments because it involves real-time
WebSocket communication. This application would require configuring Nginx to support
WebSocket connections.
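The key difference is that Nginx must pass the HTTP Upgrade handshake through to the Node server. The directives below are the standard ones for WebSocket proxying; the domain and port are illustrative:

server {
  listen 80;
  server_name chat.deployingnodejs.com;

  location / {
    proxy_pass http://localhost:3000;

    proxy_http_version 1.1;
    # Forward the WebSocket upgrade handshake to the Node server.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
  }
}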

2.8.1. Project specifications

This project is a server-side rendered chat application. It has a single page. To have it up and
running correctly, follow these instructions:

• Clone this repository using the command git clone https://github.com/deploying-nodejs/realtime-chat-app.git.

• Install the project dependencies with yarn install.

• Create a .env file to define environment variables. To see a sample of all variables required to
run the project correctly, copy the example content from .env.example using the command cp
.env.example .env. The example environment variables are:

PORT=

• Start the application using yarn start.

2.8.2. Requirements

The following tasks are required to complete this case study:

• Provision a new Digital Ocean droplet.

• Connect to this new droplet, disable password authentication, and allow only SSH connections
using a sudo user called node.

• Secure the server with a firewall so that only the HTTP port is available for connection.

• Install standard server packages such as Nodejs, npm, yarn, and PM2 on this droplet.

• Setup this application to run as a PM2 process.

• Setup an Nginx site to run as a proxy to the PM2 process.

• Setup DNS records (A record) that point to this server and serve the application. You can set up
a subdomain on your domain, or you can use a subdomain of deployingnodejs.com by adding an
A record on the A record dashboard.

• Submit screenshots to confirm that you have completed each of the above tasks.

2.9. Case Study - Ghost v3 blog


This case study requires installing and running a Ghost blog. Ghost is an open-source blogging
platform built with Node.js. A fresh installation of Ghost would give you a full-featured blogging
platform.

2.9.1. Requirements

To complete this case study, you are required to carefully follow the instructions in this Ghost guide
to set up a fully working Ghost blog. After successfully running your blog by following the
instructions in the article, do the following to complete the case study:

• Setup an Nginx site to run as a proxy to the daemonized Ghost process.

• Setup DNS records (A record) that point to this server and serve the application. You can set up
a subdomain on your domain, or you can use a subdomain of deployingnodejs.com by adding an
A record on the A record dashboard.

• Submit screenshots to confirm that you have completed each of the above tasks.

2.10. Case study - Database backups


A crucial part of every application is its data. Databases have to be kept very secure in order not to put the business at risk. This case study requires creating an automated script that backs up the databases for the Ghost blog and Cryptoverter case studies.

2.10.1. Requirements

To complete this case study, follow these instructions:

• Create a nodejs script that backs up the mongodb database from the cryptoverter case study (a sketch follows this list). Set up another script for the mysql database from the Ghost blog case study. I highly recommend writing these scripts free of any dependencies.

• Create a cron job for each of the backup scripts that runs every 6 hours.

• The backups should be saved securely on the server file system.

• Submit screenshots to confirm that you have completed each of the above tasks.
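As a starting point, a dependency-free backup script for the MongoDB side might look like the sketch below. It shells out to the mongodump utility that ships with MongoDB; the credentials, output path, and database name are illustrative and should match your own setup.

// backup-mongo.js - a minimal sketch with no npm dependencies.
const { execFile } = require('child_process');
const path = require('path');

// Timestamped folder so successive backups never overwrite each other.
const stamp = new Date().toISOString().replace(/[:.]/g, '-');
const outDir = path.join('/home/node/backups/mongo', stamp);

execFile('mongodump', [
  '--db', 'cryptoverter',
  '--username', 'cryptoverter',
  '--password', 'cryptoverter',
  '--authenticationDatabase', 'cryptoverter',
  '--out', outDir,
], (error) => {
  if (error) {
    console.error('Backup failed:', error.message);
    process.exit(1);
  }
  console.log(`Backup written to ${outDir}`);
});

A crontab entry such as 0 */6 * * * /usr/bin/node /home/node/backup-mongo.js >> /home/node/backups/backup.log 2>&1 would then satisfy the every-6-hours requirement.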

2.11. Case Study - Node BB Forum


This case study requires installing and running a Node BB forum. Node BB is an open-source forum
software built with Node.js. A fresh installation of Node BB would give you a full-featured forum
community platform.

2.11.1. Requirements

To complete this case study, you are required to carefully follow the instructions in this NodeBB
guide to set up a fully working Node BB forum. After successfully running your forum by following
the instructions in the tutorial, you need to:

• Setup an Nginx site to run as a proxy to the daemonized NodeBB process.

• Setup DNS records (A record) that point to this server and serve the application. You can set up
a subdomain on your domain, or you can use a subdomain of deployingnodejs.com by adding an
A record on the A record dashboard.

• Submit screenshots to confirm that you have completed each of the above tasks.

Chapter 3. Web server security

3.1. Introduction to SSL / TLS
All the applications we’ve deployed so far run over the HTTP protocol. HTTP is a protocol over which servers or computers communicate with each other. This protocol is insecure. While data is being transferred from one server to another over the internet, it could be hijacked by someone listening, because data transferred over the HTTP protocol is in plain text. This hijacking is popularly known as a man-in-the-middle attack. When building modern web applications, you do not want to deploy your applications over HTTP, for several reasons:

1. It is very insecure. If your application includes authentication or payments, transferring users’ personal information over the internet in plain text is the worst option. A hijacker can listen in on the data transfer and grab some of this data, and what comes next is a disaster.

2. It is poor for SEO. Google, the most popular search engine, announced they would be giving a ranking advantage to websites served over HTTPS, the secure version of HTTP.

3. It reduces customer trust. In 2018, Google announced that all sites running on the internet must be secure. Browsers flag insecure websites with a Not Secure message in the URL bar. Customers cannot trust an insecure website, meaning they most likely won’t purchase items on your website or provide their private information.

To securely deploy our applications, we’ll be deploying our applications over the HTTPS protocol.
HTTPS is used to secure communication between servers using a protocol called TLS (Transport
Layer Security).

TLS is a security protocol designed to facilitate privacy and data security for communications over
the internet. HTTPS is an implementation of this protocol.

Most commonly, you’ll hear about SSL (Secure Sockets Layer). SSL is an older implementation of the
TLS security protocol.

3.1.1. Introduction to SSL certificates and how they work

A company called Acme launches a website. To ensure that the data provided by customers on their
websites such as emails and credit card details are secure, this company needs to secure its website
with SSL.

Acme generates a certificate called an SSL certificate. It is a file that contains details of the
organization, such as contact information and organization name. This file also contains a public
key. Acme then hosts this SSL certificate on its server.

When a user visits this website, the browser gets this certificate and verifies its authenticity. If
everything is fine, then the browser adds the Green Lock to the URL bar.

Now, when the browser is about to send back to the server any request that contains data, it uses the certificate’s public key to encrypt the data. Since Acme generated the SSL certificate, their server is the only one that can decrypt this data and handle the request appropriately. (In practice, TLS uses the certificate’s key pair to negotiate a shared session key, which then encrypts the traffic, but this simplified picture captures the idea.)

3.1.2. SSL Certificates and Certificate Authorities

Let’s talk about the process that Acme goes through to obtain an SSL certificate for their domain
name www.acme.com.

1. Acme creates a file called a Certificate Signing Request (CSR). It is a request created when an organization or person wants to request an SSL certificate. This file contains information about the organization, such as contact information, website and domain details, and organization name.

2. Acme sends this CSR to an entity called Certificate Authority (CA). A CA is an entity authorized to
issue digital certificates.

3. Once the CA has confirmed and authenticated that Acme owns the domain, the CA issues the
digital certificate to Acme. Examples of common certificate authorities are DigiCert, Comodo
SSL and Let’s Encrypt.

4. Acme receives the confirmed certificate from the Certificate Authority and installs it on their
server.

5. A user visits the Acme website. The user’s browser detects the SSL certificate from the Acme
server and analyses and checks its validity. If the verification is successful, the browser adds the
Green Lock to the URL, and the user can see that Acme is secure. If the verification fails, the
browser marks the site as not secure and displays a message Unknown Certificate.

3.2. Obtain an SSL certificate


To obtain an SSL certificate, first, we need to choose the Certificate Authority we’ll be using. We’ll be using Let’s Encrypt because it’s a non-profit, has a simple API, is automated, and issues certificates for free. You can only obtain a certificate for a domain you own and manage. If you do not own a domain to follow along, you can download an already issued wildcard certificate for deployingnodejs.com. More on this below.

Let’s Encrypt provides an API for automating certificate issuing. We can use one of the API clients to interact with this API. We’ll be using the default client, called Certbot, which makes it easy to acquire a certificate. Run the following command to install certbot on your server, along with another package called python-certbot-nginx, which is a plugin that lets certbot work with Nginx.

sudo apt-get install -y certbot python-certbot-nginx

Certbot goes through the following steps to generate a certificate:

1. We make a request using Certbot to the Let’s Encrypt server to acquire an SSL certificate

2. Let’s Encrypt creates a domain validation challenge. It is a step to confirm that we have control over the domain we are trying to certify. It is how the Certificate Authority (Let’s Encrypt) makes sure there is no fraud, and people can’t obtain certificates for domains they do not own. The domain validation challenge happens in a few steps:

◦ Let’s Encrypt generates a unique token and sends it back to us (in this case, back to Certbot).

◦ We are required to host this token on a unique path of the domain. Certbot does this for us. Certbot hosts the challenge token at www.example.com/.well-known/acme-challenge/<UNIQUE-TOKEN>. Another way to complete this challenge is by adding a TXT record to the domain.

◦ Once Certbot has completed the challenge, it calls Let’s Encrypt again. Let’s Encrypt verifies the correct completion of the challenge by checking that the unique token is hosted correctly, then returns the certificates to Certbot.

3. We can instruct Certbot to configure Nginx with the new certificate automatically, but we won’t.
We’ll manually install the certificate.

Now, before we run the command to obtain the certificate, we’ll make sure to set up DNS records pointing our domain to the server. Let’s get a certificate for cryptoverter.deployingnodejs.com. We already have an A record pointing this domain to our server.

sudo certbot certonly --nginx -d cryptoverter.deployingnodejs.com

• The certonly option tells certbot not to install the certificate once issued.

• The --nginx flag indicates to Certbot that we configured our web server using Nginx, and it’ll use the python-certbot-nginx package we installed. Since this flag is passed, Certbot finds the configuration file for the domain cryptoverter.deployingnodejs.com. Once found, it updates this configuration to be able to handle the challenge from Let’s Encrypt. Once the challenge succeeds, it resets the configuration file back to its original state.

• The -d option defines the domain. We can also pass multiple domains.

This command generates a folder /etc/letsencrypt/live/cryptoverter.deployingnodejs.com and places some files in it:

• privkey.pem: The private key for your certificate. It is used to decrypt data encrypted with the public key (certificate) by the user’s browser.

• fullchain.pem: It refers to the certificate file sent to clients.

3.2.1. Backup SSL certificates

If you try to access /etc/letsencrypt/live/cryptoverter.deployingnodejs.com, you won’t be able to. That is because this folder requires direct root access. To see its contents, you have to be logged in as the root user. The certificate is very sensitive: if anyone gets access to your certificates, they can decrypt data coming from our clients and get access to passwords, credit cards, social security numbers, and more. Therefore we have to make sure we never expose the certificate, and most especially not the private key.

In the future, we might need to migrate our application to a different provider like Google Cloud or Amazon Web Services. We need to be able to install our certificate on the new server we deploy. That requires us to keep a secure backup of the certificate and private key. I highly recommend copying the content of these files and saving them in a secure, encrypted, online storage facility such as Amazon S3, Dropbox, or Google Cloud Storage.

3.2.2. Automating certificate renewals

Let’s Encrypt certificates expire in 3 months. Certificate renewal is also free. To renew a certificate, we can also use Certbot. When Certbot issues certificates, it automatically creates a cron job to renew them. A cron job is a time-based job scheduler in Unix operating systems. It can execute a script that renews the certificate when it’s about to expire. That means our certificate, once installed, is already set up for renewal and is renewed automatically when it comes within 30 days of expiry. To see the CRON job created for this task, you can run cat /etc/cron.d/certbot.

We can see an example of what renewal looks like by running a dry run. Run the following
command to simulate a certificate renewal:

sudo certbot renew --dry-run

3.2.3. Downloading the wildcard certificate

A wildcard certificate of a domain can secure all subdomains of that domain. To follow along with
installing certificates, I obtained a wildcard certificate for the deployingnodejs.com domain. You can
download the certificate and private key from this certificate dashboard.

Once downloaded, the next step is to copy the certificate files to your server. We’ll be using SCP, which stands for secure copy protocol. This protocol is based on the SSH protocol. With it, we can securely copy the certificate file and private key over SSH to a specified directory on our server. On your server, create a folder for the certificates to be copied to (for example, sudo mkdir -p /etc/nginx/ssl, matching the paths used below), then run these commands locally:

scp <PATH_TO_CERT_FILE>/fullchain.pem root@159.89.84.65:/etc/nginx/ssl/fullchain.pem

scp <PATH_TO_PRIV_KEY_FILE>/privkey.pem root@159.89.84.65:/etc/nginx/ssl/privkey.pem

Replace <PATH_TO_CERT_FILE> and <PATH_TO_PRIV_KEY_FILE> with the full paths to those files. You can then SSH into your server to make sure everything was correctly copied.

3.3. Securing Nginx


We successfully acquired the SSL certificate and saved it in a secure folder. The next step is
configuring Nginx to serve our site using this certificate. Let’s modify our Nginx configuration to
serve this certificate. Run the following to edit the nginx configuration:

sudo nano /etc/nginx/sites-available/cryptoverter.deployingnodejs.com

Now update the configuration to the following:

server {
  # listen 80;
  listen 443 ssl http2;
  server_name cryptoverter.deployingnodejs.com;

  ssl_certificate /etc/letsencrypt/live/cryptoverter.deployingnodejs.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/cryptoverter.deployingnodejs.com/privkey.pem;

  location / {
    proxy_pass http://localhost:3000;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
  }

  location ~ \.(css|js|png) {
    root /home/deploy/cryptoverter/dist/public;
  }
}

We added a few lines:

• listen 443 ssl http2;: Instead of listening on port 80, now we’re listening on 443, which is the port for connections over HTTPS. We also enable HTTP/2 as the connection protocol.

• ssl_certificate: This directive defines a path to the certificate file generated by Certbot.

• ssl_certificate_key: This directive defines a path to the private certificate key we generated.

That’s all we need to install a working certificate. Reload the Nginx configuration using sudo service nginx reload. If we visit the domain at https://cryptoverter.deployingnodejs.com, we get a green lock, showing the certificate is valid and active. You can click on the green lock to get more information about the certificate and its validity.

3.3.1. Forcing redirects to HTTPS

If the user visits http://cryptoverter.deployingnodejs.com, a connection can still be made to our application over insecure HTTP. We do not want this. What we want is to force all connections to use HTTPS. We’ll update the Nginx configuration to handle all redirects to HTTPS automatically. Modify the configuration file to the following:

server {
  listen 80;
  server_name cryptoverter.deployingnodejs.com;

  return 301 https://cryptoverter.deployingnodejs.com$request_uri;
}

server {
  listen 443 ssl http2;
  server_name cryptoverter.deployingnodejs.com;

  ssl_certificate /etc/letsencrypt/live/cryptoverter.deployingnodejs.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/cryptoverter.deployingnodejs.com/privkey.pem;

  location / {
    proxy_pass http://localhost:3000;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
  }

  location ~ \.(css|js|png) {
    root /home/deploy/cryptoverter/dist/public;
  }
}

We added a new server block to the configuration. The main difference is that this one listens on port 80 and defines the same server name. So we have two server blocks for the same name: one listens on port 80 and the other on port 443. For port 80, we defined return 301 https://cryptoverter.deployingnodejs.com$request_uri;, which instructs Nginx to redirect all traffic coming to port 80 to the same $request_uri, but over https. The 301 is a status code that stands for Moved Permanently.

That’s all we need to do to instruct Nginx to redirect all insecure traffic to the secure protocol. Reload the configuration with sudo service nginx reload. Now visit http://cryptoverter.deployingnodejs.com and watch it redirect to HTTPS.
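You can also verify the redirect from the command line. A plain HTTP request should come back with a 301 and a Location header pointing at the HTTPS URL, roughly like this (output abbreviated):

curl -I http://cryptoverter.deployingnodejs.com

HTTP/1.1 301 Moved Permanently
Location: https://cryptoverter.deployingnodejs.com/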

3.4. Case study - Securing Eventive with SSL


This case study requires installing the deployingnodejs.com wildcard SSL certificate for the Eventive application you deployed earlier in the book.

• Visit the wildcard certificate site.

• Download the certificate and private key files.

• Copy the downloaded files to your server using the Secure Copy Protocol (SCP).

• Install this certificate on the frontend and backend servers for Eventive.

• Force redirects from HTTP to HTTPS for both sites.

Chapter 4. Scaling

4.1. What is scalability
Scalability is the potential of an application to handle a growing number of customers, clients, or
users. It also defines how maintainable the application is over time.

We deployed our Nodejs application to a Digital Ocean server with 2GB of RAM, 1 vCPU core, and 50GB HDD space. If we have about 30 to 50 users visiting our application per second, this would be fine. As the number of users increases, more queries are made to our database per second, more requests are made to our server per second, and our application has to do more work to handle all these users, which requires more computing power. As the amount of data saved in our database increases, queries get slower, and eventually we’ll run out of disk space.

We can, therefore, conclude that the system we have is not ready for scale. How do we solve this?
We have two options: Vertical scaling and Horizontal scaling.

4.1.1. Vertical scaling

Vertical scaling means adding more resources to the single server that hosts our application. These
could be resources such as RAM, CPU, and Disk Space. How do we know we need to scale vertically?

We can monitor the server to understand how the server resources are being consumed. Visit your
Digital Ocean dashboard, select your droplet, and you’ll see the Graphs tab. This tab contains
insights about our server resources.

Let’s look at some of the metrics we have about the server:

• CPU Usage: This represents the average percentage of processing power used across all cores on the server. The higher the CPU usage, the slower your application runs. Processes such as Nginx and Nodejs consume CPU, and the more traffic the application gets, the more CPU is consumed. Using this metric, we can know when we need to increase the number of cores on our droplet.

• Memory Usage: This represents the average percentage of RAM in use on the droplet. If you are
using an in-memory database such as Redis that stores data in the RAM, or your application
processes large data sets that need to be in memory for computations, your droplet would have
large spikes of memory usage. This metric would tell us when to increase the amount of RAM on
our droplet.

• Disk Usage: This represents the average percentage of space being used on the droplet. If the
database size increases rapidly, or you have multiple files uploaded by users, the disk space
would run out rapidly.

Now that we have metrics, to vertically scale, we’ll increase the RAM and CPU of our droplet. First,
we need to power off our droplet. This process might take a while, depending on the current size of
your droplet. That is one of the disadvantages of vertical scaling because if anything goes wrong
with your hosting provider, or you have to scale your application, users have to experience
downtime.

Visit the Power tab, and click on Turn Off.

Next, visit the Resize tab and select the new droplet size.

I selected another standard plan, with 2 vCPUs on our droplet. After resizing, turn on your droplet.

Great! We successfully scaled our application vertically. As the number of users using our
application increases, we can repeat this process, and everything should go well. This method of
scaling has its advantages and disadvantages. Let’s look at some of them:

Advantages of Vertical Scaling

• It is easy to scale your application. We were able to increase resources on our server by simply
clicking a few buttons.

• Server monitoring is effortless.

Disadvantages of Vertical Scaling

• Scaling the application requires shutting down the server, causing users to experience
downtime.

• In the event of a data center shutdown, or application crash on the server, the whole
application goes down, and all users experience downtime.

4.1.2. Horizontal scaling

Horizontal scaling tries to solve some of the issues faced with vertical scaling, and also improves the scalability of the whole system.

Our application has several parts: a database layer with MongoDB, a caching layer with Redis, a web layer with Nginx, and the application layer itself with Nodejs. The caching layer consumes a lot of RAM because it’s an in-memory database, the database layer consumes a lot of disk space and CPU power, and the application layer, depending on the nature of your application, can consume a lot of CPU power too.

The whole concept of horizontal scaling is to separate each of these layers onto its own server. We’ll have a dedicated server for our database, one for our web server Nginx, one for our Redis server, and another for our application server. If your application has many more layers, the concept of horizontal scaling still applies, and each layer gets its own dedicated virtual private server.

Since our application is most likely to go down because of a lot of traffic or errors from the
application itself, we can have multiple application servers depending on our needs.

That also means in the future, if we realize that the database server is receiving a lot of traffic,
we can further scale it horizontally by creating a cluster of database servers, meaning we'll have
multiple instances or servers for our database layer. We can do the same with our Redis server, and
with Nginx, if the traffic keeps growing and requires more scale.

Some of the problems solved by horizontal scaling are:

• Running multiple instances of different parts of the application, so there is less likely to be
downtime for our users when one server goes down for maintenance or because of an error.

• Easily handling increased traffic by adding more servers. If the database is doing more work, we
can add more database servers. If the Redis server is doing a lot of work and keeps going down,
we can create a Redis cluster and have multiple Redis server instances.

With our demo application, we horizontally scale by creating a separate server for Nginx, a
separate database server for MongoDB, a separate one for Redis, and two servers that serve the
Node.js application itself.

4.2. Node.js scaling features


In the last section, we talked about horizontal and vertical scaling. Node.js has some built-in
features that enable us to scale an application horizontally across a single server.

Our Node.js process, by default, runs on a single core. That means if we are running the Node.js
application on an 8-core server, we'll be using only one core. But if we use Node.js's built-in
features, we can horizontally scale our application across all 8 cores on the server. The built-in
module in Node.js is called cluster.

4.2.1. Understanding clusters

A Nodejs application by default runs on a single core. To be able to take advantage of all the cores
on the server, we can use the cluster module. A cluster is a network of processes that share a single
server port.

A process is a running instance of an application; the operating system can schedule each process on its own CPU core.

If our server has 8 cores, we can run our application on 8 processes instead of one. That way, each
process takes advantage of a single core. Take a look at this sample of code from the Node.js cluster
documentation:

const cluster = require('cluster');
const http = require('http');
const numberOfCPUCores = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Fork one worker process per CPU core.
  for (let i = 0; i < numberOfCPUCores; i++) {
    cluster.fork();
  }
} else {
  // Worker processes share the same server port.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}

• The first thing we do in this script is require the cluster module, the http module, and the os
module. With the os module, we can find the number of CPU cores on the server running the
application.

• Next, we check if the current process is the main process using cluster.isMaster. If it is, then
for each of the cores we have on the server, we create a new worker process using the
cluster.fork() method.

• The else branch runs in the worker processes. There we run our actual application code; in this
case, starting a server that listens on port 8000 and returns a hello world message on every
HTTP request. In the real world, this might be starting an Express.js application or similar.

I ran this script on my 2.8 GHz Intel Core i7 macOS computer, and here's the output:

Master 31984 is running


Worker 31985 started
Worker 31986 started
Worker 31989 started
Worker 31988 started
Worker 31987 started
Worker 31990 started
Worker 31991 started
Worker 31992 started

Notice we have 8 worker processes because my computer has 8 cores. When an incoming HTTP
request is received, the cluster module handles the distribution of this traffic to any free worker
processes accordingly.
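
One thing the sample doesn't handle is a worker crashing. In production, you would usually have
the master fork a replacement whenever a worker dies. A minimal sketch, added inside the
cluster.isMaster branch:

// Replace any worker that dies so we always have one worker per core.
cluster.on('exit', (worker, code, signal) => {
  console.log(`Worker ${worker.process.pid} died, starting a replacement`);
  cluster.fork();
});

As we'll see next, pm2 performs this kind of automatic restart for us.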

If we run this on our 2 CPU core Digital Ocean instance, we’ll have 2 worker processes running.

That means that in the future, as traffic increases on our server, we can vertically scale by resizing
the droplet to have more cores, and horizontally scale by rerunning our Nodejs application using a
cluster that takes advantage of all the CPU cores.

4.2.2. Running PM2 in cluster mode

Remember, we scaled our droplet to have two CPU cores. At the moment, our Node.js
application is taking advantage of only one of those cores. Let's rerun our application, but this time
in cluster mode.

You’ll need to SSH into your server. Once you’re in, make sure you cd /home/deploy/cryptoverter.
You can run pm2 list to see a list of all PM2 processes.

┌────┬──────────────┬──────┬──────┐
│ id │ name         │ mode │ pid  │
├────┼──────────────┼──────┼──────┤
│ 0  │ cryptoverter │ fork │ 3833 │
└────┴──────────────┴──────┴──────┘

Notice the cryptoverter is running in fork mode. We need it to run in cluster mode. First, run the
command pm2 delete cryptoverter. That deletes the process. Next, run the following command to
start the process in cluster mode:

pm2 start dist/index.js --name=cryptoverter -i max

The -i option takes in the number of worker processes we want to start. The max value tells PM2 to
start as many worker processes as we have CPU cores. In this case, since we’re running on a server
with 2 CPU cores, PM2 detects this and automatically spawns 2 worker processes.

┌────┬──────────────┬─────────┬──────┐
│ id │ name         │ mode    │ pid  │
├────┼──────────────┼─────────┼──────┤
│ 0  │ cryptoverter │ cluster │ 3887 │
│ 1  │ cryptoverter │ cluster │ 3893 │
└────┴──────────────┴─────────┴──────┘

Now we have two processes running the Node.js application. If a runtime error occurs in one
process, the second process keeps handling requests while pm2 automatically restarts the failed one.
This method is very efficient and greatly reduces the probability of downtime, but what
happens when the whole droplet goes down? All running processes would go down with it, and the
application would still be unavailable. To avoid that downtime, we need multiple droplets handling the
requests.
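
Before we move on, an aside: PM2 can also read these cluster settings from an ecosystem file,
which keeps the configuration in version control rather than on the command line. A minimal
sketch, using the same values as the command above:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'cryptoverter',
      script: 'dist/index.js',
      exec_mode: 'cluster', // cluster mode instead of the default fork mode
      instances: 'max',     // one worker per available CPU core
    },
  ],
};

You would then start the application with pm2 start ecosystem.config.js.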

4.3. Provisioning database servers


As an application scales, it makes more database queries, meaning the database receives more
traffic and requires more computing power. To scale the database to handle this much traffic, we’ll
provision a separate droplet just for the database. That way, the database can have all the
computing resources of that droplet to itself. We’ll be scaling the cryptoverter application, and we’ll
deploy a MongoDB database server.

First, we need to provision a droplet on digital ocean on which we’ll install our database software.

4.3.1. Creating an SSH key

When provisioning a droplet, we can automate the process of adding our SSH key to newly
provisioned droplets. That would significantly speed up the whole server setup process. First, let’s
add our SSH key to our Digital Ocean account.

Click the Security tab on the navigation bar. Locate the SSH keys section and click Add SSH Key.

To add your SSH key, you need to provide the contents of your public key and a memorable name.
Now when provisioning new droplets, this key would show up, and we can choose to have it added
directly to our droplet once it is provisioned.
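
If you're not sure where to find the contents of your public key, and assuming your key pair is in
the default location, you can print it with:

cat ~/.ssh/id_rsa.pub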

4.3.2. Provisioning a database server

Visit your Digital Ocean dashboard and provision a droplet. Since we need this droplet for a database,
we'll select a server size with more disk space. Let's start with 50GB. Next, be sure to enable
Private networking. When horizontally scaling an application, we'll have multiple servers, and
these servers need to communicate with each other over a secure private connection. To
achieve this, make sure all the servers provisioned for the application are located in the same
region and all have private networking enabled. Servers in the same region can communicate
securely and directly over a private network.

Provision the database server in New York 1. Also, be sure to select your SSH key, and it will
automatically be added to the new droplet. Finally, give your droplet a memorable name and
provision it.

After the droplet is provisioned, confirm that you can SSH as the root user into this droplet.

4.3.3. Installing MongoDB

You can use these commands to install and secure MongoDB on your new droplet. These commands
are taken directly from the earlier chapters.

wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list

sudo apt-get update

sudo apt-get install -y mongodb-org

sudo service mongod start

mongo admin --eval 'db.createUser({ user: "admin", pwd: "admin", roles: [{ role: "userAdminAnyDatabase", db: "admin" }, { role: "root", db: "admin" }, "readWriteAnyDatabase" ] })'

sed -i "s/#security:/security:\n  authorization: enabled/" /etc/mongod.conf

Be sure to change the database user and password if you need to.

The last command replaces #security and enables authorization on the MongoDB installation.

4.3.4. Allowing external connections

The current configuration allows connections only from 127.0.0.1 (localhost). This means that
applications trying to connect from other servers would not be granted access to the database
server. We need to update this configuration so that other servers can connect over the private
network. First, visit the Digital Ocean dashboard to get the private IP address of the MongoDB
server.

Next, run the following command to change the net.bindAddress configuration variable from
127.0.0.1 to the private IP address of the database server.

sed -i "s/127.0.0.1/10.136.11.78/" /etc/mongod.conf
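
The bind address change only takes effect after the MongoDB service restarts:

sudo service mongod restart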

Make sure to replace 10.136.11.78 with the private IP address of your own database server.

4.3.5. Creating a database

For every new application, it is recommended to create a new database and a new database user
with read and write access to that database. Run the following command to create the user and
database:

mongo cryptoverter --eval 'db.createUser({ user: "cryptoverter", pwd: "cryptoverter", roles: [{ "role": "readWrite", "db": "cryptoverter" }] })' -u admin -p admin --authenticationDatabase admin

That is the same command we used to create the admin user, except we are running it against
the cryptoverter database. MongoDB creates a database automatically the first time data is written to it.

• The name of the new user is cryptoverter, and the user’s password is cryptoverter.

• The user would have a readWrite role in the cryptoverter database.

• -u admin specifies the user we are running this command as, -p admin specifies the
authentication user’s password, and --authenticationDatabase admin specifies the database in
which the authentication user is stored.
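
To verify that the new user can authenticate, you can run a quick connectivity check (shown here
with this chapter's values; substitute your own user, password, and private IP address):

mongo "mongodb://cryptoverter:cryptoverter@10.136.11.78:27017/cryptoverter" --eval 'db.runCommand({ ping: 1 })'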

Our database server is ready to receive traffic. In order for the other servers we'll deploy to be able
to connect to this server, we need to modify firewall rules. We'll come back to this after
provisioning the application servers.

4.4. Provisioning application servers


As the number of requests made to the application increases, the amount of computing power
required by the application increases too. That means the application itself needs to be scaled.
Also, we need at least 2 running application servers, in case one of them goes down at
runtime. These application servers would run identical copies of the application and connect
to the database server over the private network. Let's provision two droplets to use as our
application servers.

When selecting the region, we have to make sure the applications are all in the same region as the
database server and have private networking enabled. That way, the application servers can
connect to the database server privately.

Add two droplets and give them memorable names. As the traffic on your application increases,
you can add more application servers.

4.4.1. Setting up application droplets

We need to clone the application from source control, install its dependencies, and run it
with pm2. We need to do this on both application servers. Connect to the first server over SSH
and run these commands on the server:

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
apt-get install -y nodejs

npm i -g yarn
npm i -g pm2

git clone https://github.com/deploying-nodejs/cryptoverter.git
cd cryptoverter

cat > .env << EOF
MAIL_CONNECTION=ethereal
DATABASE_URL=mongodb://cryptoverter:cryptoverter@10.136.11.78:27017/cryptoverter
APP_URL=http://localhost:3000
PORT=3000
NODE_ENV=production
JWT_SECRET=eYS9PERmH557nSPyghnJHrEM
EOF

yarn
yarn build

• First, we installed nodejs, yarn, pm2, and cloned the application repository.

• Next, we updated the environment variables by writing to the .env file.

• The PORT=3000 defines the port on which the application would run.

• The DATABASE_URL defines the connection URL to our database server. To create the correct
connection string, you first need the private IP address of the database server. Connecting over
the private IP address is faster and more secure. You can get this address from the Digital Ocean
dashboard.

Now that we have the private IP address, we generate the connection URL using the format
mongodb://user:password@server_ip:server_port/db_name. The user and password match the
cryptoverter database user we created on the database server.
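
For illustration, here's a minimal sketch of how an application might consume that DATABASE_URL
using the official mongodb driver. The cryptoverter app handles this internally, so this is only to
show the shape of the connection:

// Minimal connection sketch using the official mongodb driver.
const { MongoClient } = require('mongodb');

async function connect() {
  // DATABASE_URL comes from the .env file we wrote above.
  const client = new MongoClient(process.env.DATABASE_URL);
  await client.connect();
  console.log('Connected to database:', client.db().databaseName);
  return client;
}

connect().catch(console.error);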

4.4.2. Configuring firewall on the database server

Before we run the application using pm2, we need to make sure the firewall on the database server
is correctly configured to accept connections from the application servers. Connect to the database
server using SSH and run the following commands:

ufw allow from 10.136.126.142 to any port 27017


ufw allow from 10.136.123.159 to any port 27017

ufw status

The private IP addresses of my application droplets are 10.136.126.142 and 10.136.123.159. Modify
these to match the private IP addresses of your own servers. These commands allow access to port
27017 from those private IP addresses. Once this is done, our application servers can correctly
connect to the database server.

Now connect to the first application server and run the application with pm2:

cd cryptoverter

pm2 start dist/index.js --name=cryptoverter -i 4

root@cryptoserver-app-1:~/cryptoverter# pm2 start dist/index.js --name=cryptoverter -i 4
[PM2] Starting /root/cryptoverter/dist/index.js in cluster_mode (4 instances)
[PM2] Done.

id  name          mode     status
0   cryptoverter  cluster  online
1   cryptoverter  cluster  online
2   cryptoverter  cluster  online
3   cryptoverter  cluster  online
This starts four instances of the application in cluster mode.
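
Optionally, if you want these processes to come back automatically after a server reboot, you can
persist the process list and register PM2's startup script:

pm2 save
pm2 startup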

Now run the same commands all over again on the second application droplet.

Once you’re done deploying the applications to both application droplets, you can test that they are
correctly deployed by visiting port 3000 of their public addresses. You should see the application
correctly loaded in the browser.

Try registering a new user to confirm that the application can correctly connect and write to the
database.

We’ve successfully provisioned our application droplets. We have not yet configured firewall rules
on these droplets. We’ll come back to this after we have provisioned our load balancer.

4.5. Provisioning a load balancer


Load balancing is the distribution of workload across multiple computing resources. In the case of
web applications, this would mean distributing web traffic across multiple server instances. To
horizontally scale our application, we’ll create a load balancing server, which balances traffic to all
application servers running our application. The load balancer would receive incoming traffic to
our site, and proxy this traffic to all application servers.

4.5.1. Provisioning a load balancing server

We'll need to provision a droplet dedicated only to load balancing. Nginx would be the
web server software used to balance the load between our application servers. Make sure the load
balancer has private networking enabled and is provisioned in the same region as the application
servers. That way, the communication between these servers can be private, fast, and secure.

4.5.2. Installing nginx

All we need to run the load balancing server is nginx. Install it using the following command:

sudo apt-get install -y nginx

Once we have nginx installed, we need to create a site that matches the domain of our site. I'll use
cryptoverter-scaled.deployingnodejs.com as the domain in this case. The following command
creates the nginx configuration for the site.

cat > /etc/nginx/sites-available/cryptoverter-scaled.deployingnodejs.com << EOF
upstream cryptoverter {
  server 10.136.126.142:3000;
  server 10.136.123.159:3000;
}

server {
  listen 80;
  server_name cryptoverter-scaled.deployingnodejs.com;

  location / {
    proxy_pass http://cryptoverter/;

    proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP \$remote_addr;
    proxy_set_header Host \$http_host;

    proxy_http_version 1.1;
  }
}
EOF

ln -s /etc/nginx/sites-available/cryptoverter-scaled.deployingnodejs.com /etc/nginx/sites-enabled/

The first thing the configuration file does is define a pool of servers called an upstream. That
defines all the application servers to which the load balancer would be distributing traffic. The
servers are defined using the server directive, followed by the public or private IP address and the
port to which traffic should be sent. I defined two servers, with the private IP addresses of the two
application servers. Also, I specified port 3000 for both servers since we ran our applications on port
3000.

The final step is to configure a server that proxies the request to the upstream. proxy_pass
http://cryptoverter/ directs traffic to any of the servers in the upstream.
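
With the site enabled, test the configuration for syntax errors and reload nginx so the changes take
effect:

sudo nginx -t
sudo systemctl reload nginx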

4.5.3. Load balancing algorithms

Nginx has to decide which server to proxy an incoming request to. The method used can be
specified in the upstream block. When none is defined, the round-robin method is used, which
distributes traffic evenly across all droplets. That is a good default for most web
applications.

Other methods are:

• Least connections. Nginx sends the request to the server with the fewest active
connections. To use this method, define the least_conn directive in the upstream:

upstream cryptoverter {
  least_conn;
  server 10.136.126.142:3000;
  server 10.136.123.159:3000;
}

This method is suitable when connections can be long-lived, for example with worker servers.

• IP hash. This method determines the server to which the request should be sent using the
client's IP address. It guarantees that requests from the same IP address go to the same
server, provided the server is available, which makes it very suitable for applications using
sessions. To use this method, define the ip_hash directive in the upstream block.

upstream cryptoverter {
  ip_hash;
  server 10.136.126.142:3000;
  server 10.136.123.159:3000;
}

4.5.4. Server weights

In some situations, you might want some application servers to handle more traffic than others.
That might be because they have more computing resources. To achieve this, you can modify the
weight of the server. Have a look at the following upstream:

upstream cryptoverter {
  server 10.136.126.142:3000 weight=5;
  server 10.136.123.159:3000;
}

Specifying a weight on the first server means nginx directs proportionally more traffic to it; with
weight=5, the first server receives roughly five requests for every one sent to the second. The
weight parameter works together with whichever load balancing algorithm is in use. The larger the
weight of a server, the more traffic it receives.
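
nginx can also passively take an unhealthy server out of rotation. With the max_fails and
fail_timeout parameters (the values below are illustrative), a server that fails repeatedly is
skipped for a period of time:

upstream cryptoverter {
  server 10.136.126.142:3000 max_fails=3 fail_timeout=30s;
  server 10.136.123.159:3000 max_fails=3 fail_timeout=30s;
}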

4.5.5. Configuring application server firewalls

Our load balancer would forward requests to port 3000 of our application servers. At the moment,
the firewalls on the application servers are inactive, so the requests should forward correctly. That
is not ideal since the application servers are open to connections from the entire internet. We can
fix this by enabling the firewall and granting access only to the load balancer. Connect to each of
the application servers and run the following commands:

ufw enable

ufw allow from 10.136.211.56 to any port 3000

• 10.136.211.56 is the private IP address of my load balancer. We are exposing port 3000, on which
our Node.js server runs, only to the load balancer. We are also using the private IP address
for a faster, more secure, and private connection.

Be sure to run these commands on all application servers.
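
One caution before enabling the firewall: ufw enable blocks incoming connections that aren't
explicitly allowed, so make sure SSH access is permitted first, or you could lock yourself out of
the server:

ufw allow OpenSSH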

4.5.6. Configuring DNS

We need to configure our domain name to point to the load balancer. The load balancer is the only
server exposed to the internet. It is the doorway to our private network of servers.
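
Once the DNS record has propagated, you can confirm that the domain resolves to the load
balancer's public IP address (shown here with this chapter's example domain):

dig +short cryptoverter-scaled.deployingnodejs.com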

4.5.7. Securing the load balancer with Let's Encrypt

The final step to deploying the load balancer is securing it with SSL. We need to acquire a free SSL
certificate from Let's Encrypt. Just like before, if you are following along with a subdomain of
deployingnodejs.com, you wouldn't be able to acquire a certificate, because there are limits to how
many can be issued for a specific domain. Instead, download the wildcard certificate from the
certificate site and install it on the load balancer server. If you are using a custom domain, you
would need to install certbot and obtain a certificate with the following commands:

sudo apt-get install -y certbot python-certbot-nginx


sudo certbot certonly --nginx -d cryptoverter-scaled.deployingnodejs.com

Be sure to change the command to match your domain.

If you are using the wildcard certificate, use the following commands:

scp

With our load balancer up and secured, we have completed the process of horizontally scaling the
cryptoverter application across multiple servers. These are the servers we have at the end of the
process.

4.6. Scaling Eventive


4.6.1. Requirements

This case study focuses on scaling the Eventive application we deployed earlier. Review the project
specifications from the earlier chapter in detail, and consult the readme file of the project
repository. To complete the scaling case study, complete the following tasks:

• Provision a droplet as a database server, and install and secure MySQL 5.7 on it.

• Create a database and a database user on the MySQL server. Grant this user read and write
permissions on this database.

• Expose the database server over port 3306. Establish all server connections with the private IP
address.

• Provision a droplet you’ll use as a Redis server.

• Expose Redis on port 6379 of this server.

• Provision three Digital Ocean droplets to use as application servers. The first and second run
the backend API, and the third runs the frontend client.

• On the first and second application servers, install node, npm, yarn, and pm2. Clone the project
repository and install the backend project dependencies. Run 4 instances of the backend API,
and one instance of the mail worker script using pm2.

• On the third application server, install node, npm, yarn, and pm2. Clone the project repository,
install the frontend project dependencies, and build it for production. Run the frontend client
using pm2.

• Set up firewalls on the Redis and MySQL servers. The firewalls should accept connections only
from the application servers. All server connections should be via the private network.

• Set up a Digital Ocean droplet to use as a load balancer. On this server, install nginx.
Configure a site on the load balancer to serve the backend API and balance the load across the first
two application servers. Configure another site that proxies requests to the third application
server.

• Set up DNS records for the frontend and the backend applications. If you are using a subdomain
of deployingnodejs.com, you can add A records to the domain from the a-record dashboard.

• Secure both the backend and frontend applications with Let’s Encrypt certificates. If you are
using a subdomain of deployingnodejs.com, you can download a valid wildcard certificate from
here.

• Submit screenshots to confirm that you’ve completed each of the above tasks.

Chapter 5. Next steps

5.1. Next steps
Throughout this book, we've covered the information you need to deploy
production-ready applications to the cloud. We have also learned the skills needed to deploy
applications that can handle a medium to large amount of traffic. If you are building an
application with thousands of daily active users, what we've learned so far is perfect for your
infrastructure. But it doesn't end here. There's always a next step. How do large companies with
millions of requests per second handle traffic? What is their deployment infrastructure? How do
they manage thousands of servers? How do they manage backups, cron jobs, queue workers, and
clusters? Here's a brief introduction to help you continue your development as a cloud expert.

5.1.1. Disadvantages of horizontal scaling

Horizontally scaling an application across multiple servers is a great way to increase the
performance of the application under high traffic, but it comes with certain disadvantages.

• The first drawback of this approach is keeping all application servers in sync. The more servers
you have whose code or state changes regularly, the more tedious it becomes to keep them all in
sync. For example, when we deployed the cryptoverter app, we had two running
instances of the application. For every deployment, we have to make sure we deploy the new
application code to both servers so that all users of our application run the same code. As
traffic increases, we could add ten more; now we need to keep twelve servers in sync. We have
to make sure they all run the same Node version, the same version of pm2, the same port, and
much more. A server management tool can help manage all of these servers, but as we
continue adding more servers, this breaks down fast. What if there was a tool that could handle
the creation and destruction of servers for us, making sure all created servers have the same
state as the others?

• The second drawback is detecting and reacting to server unavailability. It is quite normal for
servers to crash for one reason or another: a failed update, an application error, or physical
downtime at the server provider. In these scenarios, we need to create a replacement server as
soon as possible, which requires an engineer to handle the process manually. What if there was
a tool that could automatically detect when servers go down and automatically spin up new
ones as replacements?

• Thirdly, this becomes more and more expensive over time. If we built an e-commerce
application, for example, there would be days of high and low traffic. On high-traffic
days, we might need 32 application servers, and on low-traffic days, we might need just 12. The
process of tearing down and recreating servers becomes exhausting. What if there was a tool to
automatically detect high- and low-traffic scenarios, and automatically set up and tear down
servers as needed?

These are some of the challenges horizontal scaling would face sooner or later. Let’s try to tackle
each of these problems.

5.1.2. Introduction to Docker

The first problem we encountered with horizontal scaling is keeping all servers in sync: the same
environment and the same application state. What if we could take a snapshot of one entire environment
and replicate it on all the other servers? Let's say we could create a detailed
snapshot of the first server and make an exact copy of it as many times as we want. Well,
we can.

Docker is a tool that can help us achieve that. How does it work?

First, we create a file called a Dockerfile. In this file, we define the exact needs of our
application: the Node.js version, npm version, pm2 configuration, environment variables, and much
more. We have an infinite world of possibilities with the Dockerfile. Once we fully describe our
application, Docker uses this file to create an image, a snapshot of how our application should
be.

Secondly, we can run the image created by Docker on any number of servers we want,
and Docker makes sure the application runs exactly as described in the image we created.

Docker makes things very easy because we can be sure that all the servers running our
application run the same version, state, and environment.

Docker is much more than I have described, but this is one of the most significant advantages we
can get from using it.
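
To give a flavor of what this looks like, here is a minimal, hypothetical Dockerfile for a Node.js
application similar to the ones in this book. The base image, file paths, and port are assumptions
for illustration, not taken from the cryptoverter project:

# Hypothetical Dockerfile for a Node.js application (illustrative values)
FROM node:12-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json yarn.lock ./
RUN yarn install --production

# Copy the built application code (assumes a dist/ build output)
COPY dist ./dist

ENV NODE_ENV=production
EXPOSE 3000

CMD ["node", "dist/index.js"]

Building this image once and running it on every server gives each of them an identical
environment.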

5.1.3. Introduction to Kubernetes

We still have two more problems to solve. We have a way of keeping everything in sync, but
we still need to create servers manually and watch out for downtime. Kubernetes is a tool that can
automate the whole process of horizontal scaling. With Kubernetes, we can define any
number of servers, and Kubernetes automates the process of watching these servers, tearing
down or spinning up new instances, and responding to server errors, traffic changes, and much
more. Kubernetes can be paired with Docker so it knows exactly how to spin up new application
servers. Combining these two technologies, we can scale our applications however we
want. There's so much more these two tools can help us achieve, and they are used by some
of the biggest technology firms in the world to handle deployment and automation flows.

The next step in your journey is learning how to use Docker and Kubernetes to deploy applications.
I sincerely hope this book has given you the skill set you need to further your career. Keep
deploying, keep learning, and explore other cloud providers such as AWS, Google Cloud, and Azure.

By continually practicing the concepts you’ve learned from this book and keeping up with the
advancements in the cloud space, you’ll keep growing as a cloud specialist for years to come.

