
Fast, Scalable And Secure Web Hosting For Entrepreneurs
Learn to set up your server and website

Wim Bervoets

This book is for sale at http://leanpub.com/fastscalableandsecurewebhostingforentrepreneurs

This version was published on 2016-01-14

* * * * *

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do.

* * * * *

© 2015 - 2016 Wim Bervoets

Table of Contents
Preface
Who Is This Book For?
Why Do You Need This Book?
Introduction
Choosing Your Website Hosting
Types Of Hosting
Shared Hosting
VPS Hosting
Dedicated Hosting
Cloud Hosting
Unmanaged vs Managed Hosting
Our Recommendation
Checklist For Choosing Your Optimal Webhost Provider
Performance Based Considerations
Benchmarking web hosts
VPS Virtualization methods considerations
IPv4 / IPv6 Support
Example ordering process
RamNode Ordering

Installing the Operating System


Choosing a Linux Distribution
The Linux kernel used should be as recent as possible
Choosing a Server Distribution
Installing Ubuntu 15.04
Selecting the ISO template in the Ramnode Control Panel and Booting from CDROM
Using Tight VNC Viewer
Start the Ubuntu Server Install
Setting up SSH access to login to your server
Login via SSH
Updating system packages
Securing /tmp and /var/tmp
Installing Miscellaneous Tools
Enabling IPv6 Support
Useful Linux Commands
Upgrading Ubuntu 15.04 to 15.10
Remount the temp partition with noexec false
Remove any unused old kernels

Do the upgrade via VNC

Performance Baseline
Serverbear
Testing Ping Speed
Tuning KVM Virtualization Settings
VPS Control Panel Settings
Configuring the CPU Model Exposed to KVM Instances
Tuning Kernel Parameters
Improving Network speeds (TCP/IP settings)
Disabling TCP/IP Slow start after idle
Other Network and TCP/IP Settings
File Handle Settings
Setup the number of file handles and open files
Improving SSD Speeds
Kernel settings
Scheduler Settings
Reducing writes on your SSD drive
Enable SSD TRIM
Other kernel settings

Installing OpenSSL
Installing OpenSSL 1.0.2d
Upgrading OpenSSL to a Future Release
Check Intel AES Instructions Are Used By OpenSSL
Securing your Server
Installing CSF (ConfigServer Security and Firewall)
Configuring the ports to open
Configuring and Enabling CSF
Login Failure Daemon (Lfd)
Ordering a Domain Name For Your Website
Choosing a Domain Name and a Top Level Domain
Ordering a Domain With EuroDNS
Configuring a Name Server
Ordering the DNSMadeEasy DNS Service
Installing MariaDB 10, a MySQL Database Alternative

Download & Install MariaDB
Securing MariaDB
Starting and Stopping MariaDB
Upgrade MariaDB To a New Version
Tuning the MariaDB configuration
Kernel Parameters
Storage Engines
Using MySQLTuner
Enabling HugePages
System Variables
Enabling Logrotation for MariaDB
Diagnosing MySQL/MariaDB Startup Failures

Installing nginx Webserver


Why nginx ?
Installing nginx
Download nginx
Download the nginx Pagespeed module
Install Perl Compatible Regular expressions
Install NGX Cache Purge
Install set-misc-nginx module
Compile nginx
nginx Compilation Flags
PID files; what are they?
nginx Releases
nginx Configuration
Creating a nginx System User
Configure the nginx system user open files maximum
nginx Startup and Shutdown Scripts
nginx configuration files
Test the Correctness of the nginx Configuration Files
Updating the system user which launches nginx
Updating the PID File Location
Updating worker_processes
Updating worker_priority
Updating the Error Log File Location
Enable PCRE JIT
Configuring the Events Directive
Configuring the HTTP Directive
Configuring the Mime Types
Configuring Your Website Domain in nginx
Updating nginx

Installing PHP

Which PHP Version?
What is an OpCode Cache?
Zend OpCache
Compile PHP From Source?
Install Dependencies for PHP Compilation
Downloading PHP
Compile and Install PHP
Compilation flags
Testing the PHP Install
Tuning php.ini settings
Set the System Timezone
Set maximum execution time
Set duration of realpath information cache
Set maximum size of an uploaded file.
Set maximum amount of memory a script is allowed to allocate
Set maximum size of HTTP POST data allowed
Don’t expose to the world that PHP is installed
Disable functions for security reasons
Disable X-PHP-Originating-Script header in mails sent by PHP
Sets the max nesting depth of HTTP input variables
Sets the max nesting depth of input variables in HTTP GET, POST (eg. $_GET, $_POST.. in PHP)
Enable Zend Opcache

Installing PHP-FPM
Configuring PHP-FPM
Configuring a PHP-FPM Pool
How to start PHP-FPM
nginx FPM Configuration
Viewing the PHP FPM Status Page
Viewing statistics from the Zend OpCode cache
Starting PHP-FPM Automatically At Bootup
Log File Management For PHP-FPM
Installing memcached
Comparison of caches
Downloading memcached And Extensions
Installing Libevent
memcached Server
Installing libmemcached
Installing igbinary

Install pecl/memcache extension for PHP
Install pecl/memcached extension for PHP
Testing The memcached Server Installation
Setup the memcached Service
Installing a Memcached GUI
Testing the scalability of Memcached server with twemperf and mcperf
Updating PHP To a New Version
Download the New PHP Version
Installing ImageMagick for PHP
Install ImageMagick
Installing PHPMyAdmin
Installing phpMyAdmin
Installing Java
Check the installed Java version
Downloading Java SE JDK 8 update 66
Installing Java SE JDK8 Update 66
Making Java SE JDK8 the default
Installing Jetty
Download Jetty 9.3.x
Creating a Jetty user
Jetty startup script
Configuring a Jetty Base directory
Overriding the directory Jetty monitors for your Java webapp
Adding the Jetty port to the CSF Firewall
Autostart Jetty at bootup
Enabling Large Page/HugePage support for Java / Jetty
Adding the jetty user to our hugepage group
Update memlock in /etc/security/limits.conf
Add UseLargePages parameter to the Jetty startup script
Forwarding requests from Nginx to our Jetty server
Improving Jetty Server performance
Installing Visual VM on your PC
Enabling Remote monitoring of the Jetty Server Java VM
Garbage Collection in Java

Optimising Java connections to the MariaDB database
Download and install the MariaDB JDBC4 driver
Download and install the Oracle MySQL JDBC driver
Download and install HikariCP - a database connection pool
Configuring Jetty logging and enabling logfile rotation

Using a CDN
Choosing a CDN
Analysing performance before and after enabling a CDN
Configuring a CDN service
Create a New Zone.
Using a cdn subdomain like cdn.mywebsite.com
Tuning the performance of your CDN

HTTPS everywhere
Do you need a secure website?
Buying a certificate for your site
Standard certificate
Wildcard certificate.
Public and private key length
Extended Validation certificates and the green bar in the browsers
Buying the certificate
Generate a Certificate Signing request
Ordering a certificate
Configuring nginx for SSL
Getting an A+ grade on SSLLabs.com
Enabling SSL on a CDN
Enabling SPDY or HTTP/2 on a CDN

Configure Email For Your Domain


Selfhosted Email Servers vs External Mail Systems
Signup for Zoho email
Verify Domain
Add Users
Add Groups
Configure Email Delivery

Installing Wordpress
Downloading Wordpress
Enabling PHP In Your Nginx Server Configuration
Creating a Database For Your Wordpress Installation
Installing Wordpress
Enabling auto-updates for Wordpress (core, plugins and themes)

Installing Wordpress plugins
Installing Yoast SEO

Optimizing Wordpress performance


EWWW Image Optimizer
Remove CSS Link IDs plugin
Disable Emojis support in Wordpress 4.2 and later
Optimizing Wordpress performance with the W3 Total Cache plugin
Installing W3 Total Cache
Speed up your site with Google PageSpeed nginx plugin
Configuring global Pagespeed settings.
Enabling Pagespeed filters
Optimizing the performance of the Pagespeed plugin
Cache lookups that were expired
Resources not rewritten because domain wasn’t authorized
CDN Integration & HTTPS support
Extending the Memcached cache size

Appendix: Resources
TCP IP
Ubuntu / Linux
KVM
SSD
OpenSSL
Nginx
PHP
MariaDB
Jetty
CDN
HTTPS

Fast, Scalable And Secure Web Hosting For Entrepreneurs

Copyright 2015-2016 Wim Bervoets - Revision 1.0
Published by Wim Bervoets
www.fastwebhostingsecrets.com

License Notes

This book is licensed for your personal enjoyment only. This book may not
be re-sold or given away to other people. If you would like to share this
book with another person, please purchase an additional copy for each
person. Thank you for respecting the hard work of this author.

Disclosure of Material Connection

Some of the links in the book are affiliate links. This means if you click on
the link and purchase the item, I will receive an affiliate commission.
Regardless, I only recommend products or services I use personally and
believe will add value to my readers.

Preface
Thanks for buying the book Fast, Scalable And Secure Web Hosting For
Entrepreneurs.

This book was born out of the need for a comprehensive and up-to-date overview of how to configure your website hosting in the most optimal way, from start to finish. The book is packed with a great deal of practical, real-world knowledge.

Who Is This Book For?


This book is intended for entrepreneurs, startup companies and people who
have some experience in building websites and the surrounding
technologies but have outgrown their current hosting. If you want to create
a lightning fast and scalable website using your own server hardware, an
unmanaged VPS (Virtual Private Server) or a cloud server, then this book is
for you!

If you don’t have a technical background, this book can also serve as a
guide for the technical people in your organisation.

Why Do You Need This Book?


Good website performance is becoming a top priority for many webmasters.
This is because it gives the users of your site a better experience.

It also helps your ranking in search engines, which may bring you more free traffic. If you have an e-commerce site, several studies have shown that a faster website can increase sales conversions, and thus revenue!

Understanding the building blocks of your website will make sure that it
can scale in popularity while keeping it stable and fast!

The first section of the book gives an overview on what to look for in a web
hosting company. Where should you host your site and with whom? We
will give you an overview of the alternatives and show you how you can
choose the best solution for your specific site(s).

After having chosen a web hosting package and server, we will install a Linux-based OS on your server. We'll guide you through choosing and installing the right Linux distribution.

After the base OS installation has been completed, we’ll install and
configure commonly used web software including

Nginx web server


PHP
PHP-FPM
Memcached
Java
Jetty
MariaDB database (a drop-in replacement for MySQL)

We will also focus on tuning the OS for performance and scalability.


Making your server and website secure (via HTTPS) is a hot topic since
Google announced it can positively impact your Search Engine Rankings.
We’ll describe how to enable HTTPS and secure connections to your
website.

We’ll also explain technologies like SPDY, HTTP/2 and CDN and how they
can help you to make your site faster for your visitors from all around the
world.

Introduction
So why is a fast site so important?

In short: your users will always love a fast site. Actually, they'll take it for granted. If your site becomes too slow, they'll start getting annoyed by the slowness. And this can have a lot of consequences:

They may not come back after their initial visit


Your bounce rate could become higher
The number of pages viewed on your site could decrease
Your revenue could plummet (because it has been proven that making
a site faster can increase conversions)

Mobile users are even quicker to abandon your site when it loads slowly on
their tablet or smartphone on a 3G or 4G network. And we all know that
mobile usage is increasing very fast!

If you’re dependent on search traffic from Google or Bing, that’s another big reason to make your site as fast as possible.

These days there is a lot of competition between websites. One of the factors Google uses to rank websites is their site speed. As Google wants the best possible experience for their users, they’ll give an edge to a faster site (all other things being equal). And higher rankings lead to more traffic.

Increased website performance can also reduce your bandwidth costs, as some of the performance optimizations will reduce your bandwidth usage.

So how do you make your site fast?

There are a lot of factors influencing the speed of your site. The first
decision (and an important one) you’ll have to make is which web hosting
company you want to use to host your server / site. Let’s get started!

Choosing Your Website Hosting
Getting great website speeds starts with choosing the right website hosting.
In this chapter we will explain the different types of hosting and their pros and cons.

We will give you a checklist with performance indicators which will allow
you to analyze and compare different hosting packages.

Types Of Hosting
To run a website, you’ll need to install the necessary web server software on a server machine. Web hosts generally offer different types of hosting. We’ll explain the pros and cons of each.

Shared Hosting
Shared hosting is the cheapest kind of hosting. Shared hosting means that the web host has set up a (physical) server and hosts a lot of different sites from different customers on it (often into the hundreds).

The cost of the server is thus spread across all customers with a shared hosting plan on that server.

Most people start with this kind of hosting because it is cheap. There are
some downsides though:

When your site gets busy, your site could get slower (because there are a few hundred sites from other people also waiting to be served)
Your web hosting company could force you to move to a more
expensive server because you’re eating up too many resources from the
physical server - negatively impacting the availability or performance
of other sites which are also hosted on that server.
You could be affected by security issues (eg. due to out of date server
software) created by the other websites you’re sharing the server with.
Your website performance could suffer because other sites misbehave

Most of the time there is almost no possibility to configure and tune the server yourself, which leaves you at the mercy of how well the web host has configured it.

For the above reasons, we will not cover this type of hosting further in this book, as it is not a good choice for getting the best possible stability and repeatable performance out of your server.

VPS Hosting
VPS stands for Virtual Private Server. Virtualization is a technique that
makes it possible to create a number of virtual servers on one physical
server. The big difference with shared hosting is that the virtual servers are
completely separated from each other:

Each virtual server can host its own operating system (and they can be completely different from each other)
Each virtual server gets an agreed-upon slice of the physical server hardware (CPU, disk, memory)

The main advantage is that other virtual servers cannot negatively affect the performance of your machine, and your installation is more secure.

The physical server typically runs a hypervisor which is tasked with creating, releasing, and managing the resources of “guest” operating systems, or virtual machines.

Most web hosts will limit the number of virtual private servers running on one physical server to be able to give you the advertised features of the VPS (eg. 512MB RAM, 1GHz CPU and so on).

Well known VPS web host providers include:

Ramnode
MediaTemple
BlueHost

Dedicated Hosting

A dedicated server is a server which is completely under your control. You
get the full power of the hardware as you are the only one running website(s)
on it.

The extra layer of virtualization (which has some performance impact) is not
present here, unless you decide to virtualize the server yourself into separate
virtual servers. This means you generally get the best possible performance
from the hardware.

Cloud Hosting
Cloud servers are in general VPS servers, but with the following differences:

The size of your cloud server (amount of RAM, CPU and so on) can be enlarged or downsized dynamically. Eg. during peak traffic times you could enlarge the instance to be able to keep up with the load; when traffic diminishes, you can downsize it again.
Bigger instances will always cost more money than small instances. Being able to change this dynamically can reduce your bills.
Cloud-based servers can even be moved to other hardware while the server keeps running. They can also span multiple physical servers. (This is called horizontal scaling)
Cloud-based servers allow your site(s) to grow more easily into really high-traffic websites.

Well-known Cloud-based hosting providers include:

DigitalOcean

Unmanaged vs Managed Hosting


With unmanaged hosting, you’re responsible for managing your server completely. This means installing the operating system on your server, installing the website software and so on.

This may seem daunting if you have never done it before. At one point I was in the same situation, but as you’ll read in this guide, it’ll become clear that it’s all pretty doable if you’re interested in learning a few new things.

Managed hosting costs more, as it includes extra services, such as a system administrator installing the OS, server software, security software and more.

Some packages include auto updates of your server software (eg. latest
security patches). Prices can differ greatly as can the included services. Be
sure to compare them well.

Our Recommendation
To create a stable and performant website, we recommend choosing between VPS or cloud-based servers.

Price-wise, VPS-based servers are generally cheaper than cloud-based servers. For most website servers which don’t have extreme volumes of traffic, a good VPS will be able to handle the load just fine.

If you hope to create the next Twitter or Pinterest, cloud servers will give you the ability to manage the growth of your traffic more easily. For example, a sudden, large traffic increase could instantly overload your server. With a cloud server you’ll be able to ‘scale your server’ and get additional computing power in minutes, versus creating a new instance from scratch, which could take a few hours.

Checklist For Choosing Your Optimal Webhost Provider


In this section we will determine the optimal web host provider for your
specific site via a checklist. We look at performance, stability and the price
point of a particular solution.

Performance Based Considerations


Latency and Bandwidth: Two Key Concepts To Understand

Before we start with our performance-based considerations we need to explain two key concepts related to website performance: latency and bandwidth.

The following diagram shows how bandwidth and latency relate to each other (diagram adapted from Ilya Grigorik’s excellent book High Performance Browser Networking).

Bandwidth and Latency

Latency is the amount of time it takes for a request from a device (such as a
computer, phone or tablet) to arrive at the server. Latency is expressed in
milliseconds.

You can see in the picture that the bits and bytes are routed over different
networks, some of which are yours (eg. your WIFI network), some belong to
the Internet provider, and some belong to the web hosting provider.

All of these different networks have a different maximum throughput or bandwidth. You can compare it with the number of lanes on a highway vs an ordinary street.

In the example above the Cable infrastructure has the least bandwidth
available. This can be due to physical limitations or by arbitrary limits set by
your ISP.

The latency is affected by the distance between the client and the server. If
the two are very far away, the propagation time will be bigger, as light
travels at a constant speed over fiber network links.

The latency is also affected by the available bandwidth. For example, if you put a big file on the wire (eg. 100MB) and the wire can transfer at most 10MB per second, it’ll take 10 seconds to put everything on the wire. There are other kinds of delays, but for our discussion the above represents what we need to know.

Location Of The Site

Because the latency is affected by the distance between the physical location of your visitor and the server, it is very important to take a look at your target audience.

If your target audience is in Europe, then having a VPS on the west coast of the USA isn’t a great idea.

Target Audience Of Your Site

To know the target audience of your site there are two options:

If you already have an established website, chances are high you are already analyzing your traffic via Google Analytics or similar tools. This makes it easy to find out your top visitor locations.
If your site is new, you’ll need to establish who your target customer will be and where he is located (eg. for a Dutch site this could be Belgium and the Netherlands).

In your Google Analytics account, go to the Geo -> Location report.

Geo Location of your visitors

As you can see in the example above, our Top5 visitor locations are:

1. United States
2. Germany
3. India
4. United Kingdom
5. Brazil

The United States leads by a large margin in our example; that’s why we’ll drill down in Google Analytics to the state level:

1. California
2. Texas
3. New York
4. Florida

5. Illinois

Geo Location of your visitors

In the United States report, California wins by a pretty big margin. As we have a lot of visitors from both the West Coast of the US and Europe, in our case the ideal location for our server would be a datacenter on the US East Coast.

This combines the best response times for both Europe and the US.

Bandwidth Considerations

The amount of bandwidth used will depend on the popularity of your site
and also the type of content (hosting audio or video files for download will
result in higher bandwidth needs). Most hosts have a generous allocation of
available bandwidth (100GB+ per month).

The rate at which you might use your bandwidth will be determined by the
network performance and port speed of the host you’ve selected.

If you’re seeing speeds of ~10 MB/s then your host has a 100Mbit port speed; if you see speeds of 100 MB/s then the host has a 1Gbit port.

We recommend choosing a host which uses Gigabit ports from/to the datacenter where your server is located. With 100Mbit/s ports there is a much higher chance of congestion.

Tools For Testing Latency And Bandwidth

Ping latency checks from around the globe can be tested for free at
http://cloudmonitor.ca.com/en/ping.php

To use the tool you’ll need to specify the hostname or IP address of your server (in case you have already bought it). If you’re still researching the perfect host, then you’ll need to find a test ping server from the web host company.

Google ‘<name of the web host> ping test url’ or ‘<name of the web host> looking glass’ to see if there is a public test URL. For example, the RamNode hosting provider has a ping server at IP address 23.226.235.3 for their Atlanta, US based servers. If there is no test URL available, try contacting their pre-sales support.

When executing the ping latency check you can see the minimum, average and maximum round trip times. With round trip time we mean the time it takes for the request to reach the server at your web host plus the time it takes for the response to get back to the client.

For client locations very near your server the round trip time will be very low, eg. 25ms, while clients from India or Australia, which are probably much further away from your server, may be in the 200ms range. For a client in Europe and a server on the east coast of the USA, the round trip time cannot be much lower than 110ms (because of the speed of light: light in fiber travels at roughly two thirds of its vacuum speed, and real network routes are longer than the straight-line distance).
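For example, assuming you want to test against the RamNode Atlanta ping server mentioned above (substitute any hostname or IP you want to measure), you can also run the check yourself from a terminal:

$ ping -c 4 23.226.235.3

The summary printed after the four probes reports the minimum, average and maximum round trip times in milliseconds.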

To test the maximum download speeds of your web host (bandwidth), Google ‘<name of the web host> bandwidth test url’ or ‘<name of the web host> looking glass’. Eg. a RamNode server in Atlanta can be tested by going to http://lg.atl.ramnode.com/

You can also run a website check via Alertra.


CPU Performance

When comparing web hosting plans, it’s important to compare the CPUs
with the following checks:

number of cores available for your VPS: if you have more than one core, work can be parallelized by a modern OS and server software
clock speed of the CPU (512MHz to 3GHz)
actual Intel or AMD CPU model used

Later in this chapter we will show you how to run some CPU benchmarks.
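In the meantime, if you already have shell access to a (trial) server, you can inspect the CPU model, core count and clock speed directly with standard Linux commands:

$ lscpu
$ grep 'model name' /proc/cpuinfo | uniq

lscpu summarizes the core count and clock speed, while /proc/cpuinfo lists the exact CPU model string per core.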
I/O Performance

Writing and reading files from disk can quickly become a bottleneck when
you have a lot of simultaneous visitors to your site.

Here are a few examples where your site needs to read files from disk:

serving static HTML, CSS, JS files
Wordpress blog: reading the framework’s PHP code from disk
Java server: reading the server’s class files
Database: reading database information from disk

Writing to the disk will happen less often, but will also be slower than reading; here are some examples:

Uploading images, videos, … (eg. in a blog post)
A social site with a lot of user-generated content that needs to be stored in a persistent way

For Wordpress blogs with performant disk I/O, you’ll notice that your pages cache and serve much faster.
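If you already have access to a server and want a rough indication of sequential write speed, a common quick-and-dirty test is dd (this writes a 1GB test file, so remove it afterwards and avoid running it on a busy production box; a dedicated tool like fio gives more reliable numbers):

$ dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync
$ rm iotest

The conv=fdatasync flag makes dd flush the data to disk before reporting throughput, so the result isn’t inflated by the OS write cache.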

Webhost servers are generally equipped with either:

hard drive based disks
SSD (Solid State Drive) based disks
a combination of both (SSD cached)

A solid state drive uses memory chips to store the data persistently, while
hard drives contain a disk and a drive motor to spin the disk.

Solid state drives have much better access times because there are no moving parts. They also have much higher read and write speeds (in MB per second).

There are a few downsides though:

At this moment SSD drives are still more expensive than hard drive based disks, so your web host provider could also price these offerings higher.
The second downside is that SSD drive sizes are smaller than the biggest hard disk sizes available (gigabytes vs terabytes). This could be a problem if you want to store terabytes of video.

The third option, SSD Cached, tries to solve the second downside: the amount of available disk space.

SSD Cached VPSs come with more space. The most frequently used data is
served from SSDs; while the less frequently accessed data is stored on
HDDs. The whole process is automated by the web host by using high
performance hardware RAID cards.

Our recommendation is to go with an SSD based web hosting plan.

The following web host providers have plans with SSD disks:

Ramnode
MediaTemple

RAM

You’ll want 256MB or more for a WordPress based blog, preferably 512MB
if you can afford it.

When running a Java based server we recommend having at least 1GB of RAM available.
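To check how much RAM (and swap) a server actually has available, you can run:

$ free -m

The -m flag reports the values in megabytes.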
Bandwidth

You should check how much bandwidth is included in your hosting package per month. Every piece of data that gets transferred to and from your site (eg. SSH connections, HTML, images, videos and so on) is counted towards your monthly bandwidth allowance.

If not enough gigabytes or terabytes are included, you risk having to pay
extra $$ at the end of the month.

Some hosts will advertise unmetered or unlimited bandwidth. This is almost never truly the case: they will have a fair use policy and could terminate your account if you go over the fair use limits.

The rate at which you might use your bandwidth will be determined by the network performance and port speed of the host you’ve selected. If you want a quick indication of speed you can run the following from your Linux server:

$ wget http://cachefly.cachefly.net/100mb.test

If you’re seeing speeds of ~10 MB/s then your host has a 100Mbit port
speed. If you see speeds of 100 MB/s then the host has a 1Gbit port.

We recommend Gigabit links from/to the datacenter.

Obviously, if you can get a host with 1Gbit port speeds then that’s ideal, but you also want an indication of what speeds you’ll get around the globe. This can easily be tested with the network performance benchmark in ServerBear (see the next section).

Benchmarking web hosts


How can you assess the performance of a web host provider’s plans if you aren’t a customer yet?

You could ask for a free trial, so you can run benchmarks on the server itself. Check if the web host company offers a 30-day money-back guarantee; this way you can cancel your order if expectations are not met.

We’ll discuss some benchmarks you can use on a Linux system to assess the performance of the server in a later chapter.

Benchmarks from ServerBear

ServerBear.com is a great site where you can see the benchmark results of a lot of web hosting plans by different companies. As the ServerBear benchmark results are comparable with each other and they are shared by real users of the web hosting plans, it’s possible to get an indication of the performance you’ll receive.

There are a lot of options to filter the search results: eg. on price, type of hosting (shared, VPS, cloud), SSD or HDD storage, and so on.

ServerBear benchmarks CPU performance, IO performance and bandwidth performance (from various parts of the world).

You can also run the ServerBear benchmark on your machine (if you already
have hosting) and optionally share it with the world at ServerBear.com

Once the benchmark is complete ServerBear sends you a report that shows
your current plan in comparison to similar plans. For example if you’re on a
VPS ServerBear might show you similar VPS plans with better performance
and some lower tier dedicated servers.

Running the benchmarking script on your server is a completely secure process. It’s launched on the command line (via a Secure Shell login to the server). The exact command can be generated by going to ServerBear.com and clicking on the green button “Benchmark Your Server”.

VPS Virtualization methods considerations


As a virtualization layer adds complexity around your operating system, you’ll want to take into account the performance and security implications of the virtualization software. Not all hosts offer all types of virtualization, or they offer them at different price points.

We will discuss two well known VPS virtualization methods:

OpenVZ
KVM

OpenVZ

OpenVZ has a low performance overhead as it doesn’t provide full virtualization. Instead OpenVZ runs containers within the host OS, using an OpenVZ enabled OS kernel.

As such your VPSs are not that isolated from others running on the same server. We don’t recommend using OpenVZ for the following reasons:

Not possible to tweak kernel settings (eg. TCP/IP settings)
Not possible to upgrade the Linux kernel
Web host companies overselling resources within OpenVZ environments

You’ll see in later sections that a recent and tuned Linux kernel is necessary to get the best possible performance and scalability.

KVM

Virtualization technology needs to perform many of the same tasks as operating systems, such as managing processes and memory and supporting hardware. Because KVM is built on top of Linux, it can take advantage of the strengths of the Linux kernel to benefit virtualized environments. This approach lets KVM exploit Linux for core functions, taking advantage of built-in Linux performance, scalability, and security. At the same time, systemic feature, performance, and security improvements to Linux benefit virtualized environments, giving KVM a significant feature velocity that other solutions simply cannot match.

Virtual machines are Linux processes that can run either Linux or Microsoft
Windows as a guest operating system. As a result, KVM can effectively and
efficiently run both Microsoft Windows and Linux workloads in virtual
machines, alongside native Linux applications as required. QEMU is
provided for I/O device emulation inside the virtual machine.

Key advantages are:

Full isolation
Very good performance
Upgrading the kernel is possible
Tweaking the kernel is possible

We recommend using a web host where you can order a KVM-based VPS.
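Once you have shell access to a VPS, you can verify which virtualization technology it is actually running under. On systemd-based distributions such as Ubuntu 15.04 the following command should print kvm for a KVM guest (and openvz for an OpenVZ container):

$ systemd-detect-virt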

IPv4 / IPv6 Support
IP stands for Internet Protocol. It is a standard which enables computers and/or servers to communicate with each other over a network.

IP can be compared with the postal system. An IP address allows you to send a packet of information to a destination IP address by dropping it in the system. There is no direct connection between the source and receiver.

An IP packet is a small bit of information; it consists of a header and a payload (the actual information you want to submit). The header contains routing information (eg. the source IP address, the ultimate destination address and so on).

When you want to eg. download a file from a server, the file will be broken up into multiple IP packets, because one packet is small in size and the whole file won’t fit in it.

Although those IP packets share the same source and destination IP address, the route they follow over the internet may differ. It’s also possible that the destination receives the packets in a different order than the order in which the source sent them.

TCP (Transmission Control Protocol) is a higher-level protocol. It establishes a connection between a source and a destination which is maintained until the source and destination applications have finished exchanging messages.

TCP provides flow control (how fast packets can be sent and received before the source or destination is overwhelmed), handles retransmission of dropped packets and sorts IP packets back into the correct order.

TCP is highly tunable for performance, which we will see in later chapters.
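As a small preview, you can already inspect one of those tunables on any Linux box, eg. the congestion control algorithm TCP currently uses:

$ sysctl net.ipv4.tcp_congestion_control

On recent kernels this typically reports cubic, the default congestion control algorithm.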

IPv4 or IP version 4 allows computers to be identified by an IP address such as the following:

74.125.230.120

In total there are around 4 billion IPv4 addresses available for the whole internet. While that may sound like a lot, it is less than the number of people on the planet. As more and more devices and people come online, IPv4 addresses are rapidly becoming scarcer.

IPv6 or version 6 aims to solve this problem. In this addressing scheme 2^128 different IP addresses are possible. Here is an example:

3ffe:6a88:85a3:0000:1319:8a2e:0370:7344

You can see it is divided into 8 groups of 4 hexadecimal digits.

When choosing a web host and ordering a server you will generally get one or two IPv4 addresses included. This means that if you host more than two websites, some of them will share the same IP address. (This is possible via virtual host support in web servers like nginx; see the sketch below.)
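To illustrate, here is a minimal sketch of two virtual hosts sharing one IP address in nginx (the domain names and root paths are placeholders; the real nginx configuration is covered in a later chapter). nginx picks the right server block by matching the Host header of the request against server_name:

server {
    listen 80;
    server_name www.site-one.com;
    root /var/www/site-one;
}

server {
    listen 80;
    server_name www.site-two.com;
    root /var/www/site-two;
}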

We recommend you check how many IPv6 addresses are included. If there are none, we cannot recommend such a hosting provider, as they are not future proof. The best solution today is to make your site available under both IPv4 and IPv6; this way nobody is left in the cold and you can reach the highest number of potential customers/devices. We’ll explain how to do this in later chapters.

Example ordering process


For hosting our own sites we have chosen RamNode. In the following section we’ll show you how to order a VPS with RamNode.

RamNode Ordering
You’ll need to surf to RamNode via your browser and click on View Plans and Pricing. Then click on the KVM Full Virtualization option.

RamNode features

Depending on your needs, you can choose between 3 different types:

Massive: uses SSD Cached, which gives you more storage for a cheaper price
Standard: uses Intel E5 v1/v2 CPUs with a minimum per-core speed of 2.3GHz. It comes with less bandwidth than the Premium KVM SSDs. They are also not available in Europe (NL - Netherlands).
Premium: uses Intel E3 v2/v3 CPUs with a minimum per-core speed of 3.3GHz. They are available in the Netherlands, New York, Atlanta and Seattle.

RamNode plans and pricing

When clicking on the orange order link you’ll be taken to the shopping cart where you can configure a few settings:

billing cycle
the hostname for your server (you should choose a good name identifying the server, eg. server.<mydomainname>.<com/co.uk/…>)
You can order an extra IPv4 address for 1.50 USD
You can order an extra Distributed Denial of Service attack filtered IPv4 address for 5 USD (more information here)

RamNode shopping cart

On the next page you can finish the ordering process by entering personal
and payment details.

After completing the ordering process you should receive a confirmation about the order.

After a few hours you’ll receive an email to notify the VPS has been setup
by RamNode.

At this point you can access a web based control panel at RamNode
(SolusVM) where you can install the operating system on your VPS.

This is the first step to complete to get your website up and running. We’ll
give you some recommendations and a step by step guide on how to install
the OS in the next chapter.

If you decide to use another host like MediaTemple, the process will be
pretty similar.

Installing the Operating System
Most hosting providers allow you to install either a Windows Server OS or a Linux based flavor.

In our guide we focus solely on Linux because of the following reasons:

cheaper hosting due to the absence of Windows licensing costs
top notch performance and web server software available for free
a big community that’ll help you with possible problems & solutions

Choosing a Linux Distribution


There are a lot of Linux distributions available to choose from. So which one should you use for your
server?

We recommend you to make a decision based on the following checklist.

The Linux kernel used should be as recent as possible


Linux kernels are the heart of your Linux OS installation. New versions are released every few
months by the Kernel team & Linus Torvalds at https://www.kernel.org/.

Using a recent kernel is important for the speed of your site as new versions can provide you with
performance and scalability improvements in networking and KVM hypervisor areas.

You can find an overview of notable improvements at http://www.linuxfoundation.org/news-media/lwf.

Below we give a short list of notable improvements:

Support for TCP Fast Open (TFO), a mechanism that aims to reduce the latency penalty imposed on new TCP connections (available since Linux kernel 3.7+; see the quick check after this list)
The TCP initial congestion window setting was increased (with broadband connections the old default limited performance) (available since Linux kernel 2.6.39)
The default algorithm for recovering from packet loss changed to Proportional Rate Reduction for TCP since Linux kernel 3.2
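On a running server you can check whether a feature like TCP Fast Open is available and enabled via sysctl (a value of 1 enables it for outgoing connections, 2 for incoming, 3 for both):

$ sysctl net.ipv4.tcp_fastopen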

Don’t underestimate the impact of a kernel upgrade; here is what Ilya Grigorik from the Web Fast team at Google says about kernel upgrades (http://chimera.labs.oreilly.com/books/1230000000545/ch02.html#OPTIMIZING_TCP):

As a starting point, prior to tuning any specific values for each buffer and timeout variable in TCP, of which there are dozens, you are much better off simply upgrading your hosts to their latest system versions. TCP best practices and underlying algorithms that govern its performance continue to evolve, and most of these changes are only available in the latest kernels.

Our recommendation is to choose a stable version of a Linux distribution which comes with the most
recent kernel possible.
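You can check which kernel version a running system uses via:

$ uname -r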

Choosing a Server Distribution
Some Linux flavors come with both a desktop and a server distribution (eg. Ubuntu). It’s recommended to use a server distribution as it’ll be smaller in size than its desktop variant (when using an SSD, the size of your storage may be limited).

The defaults for the server distribution may also be optimized better.

Some Linux distributions have a long-term support version and a stable mainline version. The latter is usually newer and can come with a newer kernel, but is not supported as long as the long-term support version.

For example, with Ubuntu the current long-term support (LTS) version is 14.04, supported until April 2019. By supported they mean:

more enterprise focused
updated to stay compatible with new hardware
more tested: the development window is shortened and the beta cycle extended to allow for more testing and bug fixing

The latest stable version is now 15.10 which will be supported for the next nine months. By that time
a newer stable version will be out which you can upgrade to.

With Ubuntu a Long term support version is generally released every two years.

Our recommendation is to use the latest stable version. These releases are generally more than stable enough, and will allow you to use newer kernels and functionality more rapidly.

We’ll be installing Ubuntu on our RamNode server. Looking at the current list of Linux flavors available at RamNode, we recommend installing Ubuntu 15.04 as it comes with kernel 3.19.

Note that while we were writing this ebook, the latest version of Ubuntu was 15.04 which we’ll show
you how to install below.

In the next chapter we’ll also explain how to upgrade to Ubuntu 15.10.

Installing Ubuntu 15.04


Because of the recent kernel we choose the Ubuntu 15.04 64-bit Server install.

Selecting the ISO template in the Ramnode Control Panel and Booting from CDROM
To start the install we need to select the Ubuntu 15.04 ISO template from the Ramnode control panel.

Go to https://vpscp.ramnode.com/control.php and login to your Control Panel.

Click on the CDRom tab and mount the latest Ubuntu x86_64 Server ISO file.
Click on the Settings tab and change the boot order to (1) CDROM (2) Hard Disk
Now reboot your server

Mounting the Ubuntu installation CDROM

Changing the boot order

Rebooting the server

Using Tight VNC Viewer


To be able to remotely view the Ubuntu setup we need to connect to it via VNC. Click on the VNC
tab to view the details on how to connect.

You can use the following VNC Viewers via RamNodes Control panel:

VNC Clients

We recommend using the HTML5 SSL VNC Viewer. If it somehow doesn’t work, you can use the Java VNC client, provided you have installed Java on your machine.

All the VNC clients on RamNode control panel will automatically connect to the host address and
port you can see on the VNC tab.

Alternatively you can also install Tight VNC Viewer on Windows/Mac and connect to the
address/port and use the password provided.

Because you have rebooted the VPS and launched the CDROM with the Ubuntu install you should
now see the Ubuntu installer in the VNC window.

Start the Ubuntu Server Install
The first option to choose is the language of the Setup. Choose English or another language to your
liking.

Choosing the setup language

Choose the option Install Ubuntu Server

Choose the language to be used for the installation process. It’ll also be the default language for the
installed system.

Choose the language

Select the location of your server to correctly set the time zone and system locale of your Linux installation. This may be different than the country you’re living in if you are hosting your site in another country.

Select your location

Next up is configuring your keyboard. Choose No on the question ‘Detect keyboard layout’. Then
select the keyboard language layout you are using.

Configure the keyboard

Now it is time to configure the hostname for your server. This name will later be used when setting up your web server. You can choose eg. server.<mydomain>.com or something similar.

Configure the network

Now Ubuntu asks you to set up a user account to log in with. This is an account for non-administrative activities. When you need to do admin work, you will have to elevate your privileges via a superuser command.

Enter your full name, and on the second screen enter your first name (in lowercase).

Setup users and passwords

Setup users and passwords

Now choose a good password that you’re able to remember :-)

Setup users and passwords

To gain a little extra performance we didn’t let Ubuntu encrypt our home directory. If you will allow
multiple users to access your server it’s recommended to enable this setting though.

Encrypt home directory

Now confirm whether the timezone is correct.

Configure the clock

The next step is partitioning the disk you want to install Linux on. Of course you should make sure there is no data on the disk that you want to keep, as the partitioning process will remove all the data.

Choose the Guided - use entire disk and set up LVM option.

Partitioning

On our system we only have one SSD, identified as a Virtio Block Device which we will partition.

Virtio Block Device

Choose Yes to write the changes to disk and configure LVM (Logical Volume Manager).

Configuring the Logical Volume Manager

We use the full capacity of the SSD as our partition size.

Configuring the Logical Volume Manager

The final step in the partitioning process: the installer shows that it’ll create a root partition and a swap partition on Virtual disk 1, where we will install Ubuntu Linux.

Partition disks

Now the package manager “apt” (used to install and update software packages on Ubuntu) needs to be
configured. You can leave the HTTP proxy line empty unless you’re using a proxy server to connect
to the Internet.

Setting internet proxy for APT package manager

We recommend enabling the option to install security updates automatically:

Auto install security updates

In the software selection we unchecked all optional packages, as we want the initial install to be as clean as possible. Later in this guide we will install nginx, PHP, MariaDB, etc. manually.

Software selection

Software selection

Software selection

Choose Yes to install the GRUB boot loader on the system.

Software selection

Finishing the installation

When the installer asks you to reboot the server, you’ll first need to change the boot order again, so
Ubuntu is launched from your hard disk instead of the setup process on the CDROM.

To change the boot order you need to log in to your SolusVM Control Panel and change the boot order as we explained earlier.

Now you can continue the install by choosing Continue.

After the reboot you’ll need to login again with VNC (using the username created during the setup
and your chosen password).

Now you have successfully installed Ubuntu Linux and you’re ready to install other packages to make it a real server!

Setting up SSH access to login to your server


The first thing we need to do is configure remote Secure Shell access (SSH access) on your server, as always needing to use VNC to log in would be very cumbersome.

Secure Shell (SSH) is used to securely get access to a remote computer. We’ll use it to administer our server, eg. for installing new software, configuring/tuning settings or restarting our web server.

SSH is secure because the communication is encrypted.

The first thing we want to install is an SSH daemon. This will allow you to connect via an SSH client
or SFTP client to your server.

We assume the server has now booted and you’re viewing the command line on your server via the
VNC Viewer.

Here is what you should execute to install the SSH daemon:


$ sudo apt-get install openssh-server

The SSH service is now installed and running. If you ever need to restart it you can use the following
command:
$ sudo /etc/init.d/ssh restart

If you want to change the default settings, such as the default port to connect to, edit the
/etc/ssh/sshd_config file:
$ sudo nano /etc/ssh/sshd_config

# What ports, IPs and protocols we listen for


Port 22

The default port is 22; it can be useful to change this to another port number to enhance the security of your server, as people will then have to guess where the SSH service is running.

Login via SSH


Now you can use an SSH client on your PC to connect to your server. We recommend installing and using PuTTY. It can be downloaded from the PuTTY website.

Connect to the IP given when you bought your VPS

Fill in the IP Address of your server


Default SSH port is 22 (or choose the other port you’ve used if you changed the port number)
Connection type: choose SSH

If you want you can let Putty remember the details via Save.

Putty configuration

On a Mac, you can open a terminal and enter:


ssh -l <username> <ipaddress of your server>

The first time, PuTTY will ask you to save a fingerprint of the server. This is useful because if the IP address were hijacked to another server, the fingerprint would change.

PuTTY will then issue a warning so you don’t accidentally connect to a compromised server.

To upload files securely we recommend FileZilla as it supports SFTP, the secure version of File
Transfer Protocol (FTP).

You can download and install FileZilla from https://filezilla-project.org/

Open the FileZilla Site Manager and enter the same server details as for SSH login with PuTTY.

Make sure to use the SFTP protocol and logon type Normal to be able to log in via username and password.

Updating system packages


A command you’ll be running regularly is the one for updating the currently installed OS packages to their latest versions.

Here is how you can update them:


$ sudo apt-get update
$ sudo apt-get upgrade

(type everything after the $ sign)

apt-get update downloads the package lists from the repositories and “updates” them to get
information on the newest versions of packages and their dependencies.

apt-get upgrade will actually install the newest versions of all packages currently installed on the system

sudo allows users to run programs with the security privileges of another user (normally the
superuser, or root)

Securing /tmp and /var/tmp


A typical Linux installation will set the temp directory /tmp to mode 1777, meaning it has:

the sticky bit set (1)
read, write, and execute permissions for all users (777)

For many, that’s as secure as it gets, and this is mostly because the /tmp directory is just that: a directory, not its own filesystem.

By default the /tmp directory lives on the root partition (/ partition) and as such has the same mount
options defined.

In this section we will make the temp directory a bit more hackerproof. Because hackers could try to
place executable programs in the /tmp directory we’ll set some additional restrictions.

We will do that by setting /tmp on its own partition, so that it can be mounted independently of the root (/) partition and have more restrictive options set. More specifically, we will set the following mount options:

nosuid: no setuid programs are permitted on this partition
noexec: nothing can be executed from this partition
nodev: no device files may exist on this partition

You could then remove the /var/tmp directory and create a symlink pointing to /tmp so that the
temporary files in /var/tmp also make use of these restrictive mount options.

Here are the exact commands:


$ sudo rm -rf /tmp
$ sudo mkdir /tmp
$ sudo mount -t tmpfs -o rw,noexec,nosuid tmpfs /tmp
$ sudo chmod 1777 /tmp
$ sudo nano /etc/fstab

Add tmpfs /tmp tmpfs rw,noexec,nosuid 0 0


$ sudo rm -rf /var/tmp
$ sudo ln -s /tmp /var/tmp

Reboot your system via


$ sudo reboot
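After the reboot you can verify that /tmp came back with the restrictive mount options:

$ mount | grep /tmp

The output should show a tmpfs mounted on /tmp with rw, noexec and nosuid among its options.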

Installing Miscellaneous Tools


Install Axel

Axel tries to accelerate downloads by using multiple connections (possibly to multiple servers) for
one download. It might be very useful as a wget replacement.
$ sudo apt-get install axel

Usage is:
$ axel <url to file you want to download>

Alternatively you can use the default available wget:


$ wget <url to file you want to download>

Installing NTP (and syncing time)

NTP is a TCP/IP protocol for synchronising time over a network. Basically a client requests the
current time from a server, and uses it to set its own clock.

To install ntpd, from a terminal prompt enter:


$ sudo apt-get install ntp

ntpdate

Ubuntu comes with ntpdate as standard, and will run it once at boot time to set up your time
according to Ubuntu’s NTP server.
$ ntpdate -s ntp.ubuntu.com

ntpq

To verify it is running:

$ sudo ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
time-b.nist.gov .ACTS. 1 u 50 64 7 64.579 28.644 0.171
nbg1.shellvator 128.59.39.48 2 u 48 64 7 20.213 0.175 0.347
2604:a880:800:1 184.105.182.7 3 u 48 64 7 30.495 -3.165 2.721
pbx.denvercolo. 66.219.116.140 2 u 44 64 7 37.418 1.503 0.320
juniperberry.ca 193.79.237.14 2 u 44 64 7 96.359 -3.864 0.247

Enabling IPv6 Support


As we explained in our IPv4 / IPv6 support section, the use of IPv6 is on the rise.

In this section we’ll make sure that our Ubuntu Linux is correctly configured so that our server is
accessible via an IPv6 address.

By default, after the install of Ubuntu, IPv6 is not enabled yet. You can test if IPv6 support is enabled
by pinging an IPv6 server of Google from your VPS server:
$ ping6 ipv6.google.com

It’ll return unknown host if your server is not yet IPv6 enabled.

When you got your VPS, you were given 1 or more IPv4 addresses and IPv6 addresses. Look them
up, as we’ll need them shortly. With RamNode you can look them up via the SolusVM Control panel
at https://vpscp.ramnode.com/login.php

To have network access our VPS has a (Gigabit) network card. Each network card is available in
Linux under a name like eg. eth0, eth1, …

Let’s run ifconfig to see what we have in our VPS:


$ ifconfig
eth0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:<ipv4address> Bcast:<ipv4address> Mask:255.255.255.0
...
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host

The eth0 is the Linux interface to our Ethernet network card. The inet addr tells us that an IPv4
address is configured on this interface/network card.

We don’t see any ‘inet6 addr’; that’ll be added below for our IPv6 addresses.
$ sudo nano /etc/sysctl.conf

Add:
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.bindv6only = 0

0 means disabled, 1 means enabled.

This will enable IPv6 on all network interfaces (eth0, lo, default, all). Bindv6only is disabled because
we want to make our site available on the same http port (80) via IPv6 and IPv4 addresses. (as
otherwise people surfing the web with an IPv4 address wouldn’t be able to access our site.)

/etc/network/interfaces is the file which contains the configuration details of all our networking
interfaces.
$ sudo nano /etc/network/interfaces

For the loopback network interface we add the last line below (iface lo inet6 loopback) if it is not already present:

# The loopback network interface
auto lo
iface lo inet loopback
iface lo inet6 loopback

The following line auto starts the eth0 network interface at bootup:
# Auto start at boot
auto eth0

Here is our IPv4 configuration, which we will explain next:


iface eth0 inet static
address <ipv4 address>
netmask 255.255.255.0
gateway <gateway server of your host>
up ip addr add <2nd ipv4 address>/24 dev eth0 label eth0:0
down ip addr del <2nd ipv4 address>/24 dev eth0 label eth0:0
dns-nameservers 8.8.8.8 8.8.4.4

For Ramnode the netmask is 255.255.255.0. The gateway server should also be documented by your
VPS host. For Ramnode you can find the information here.

We will use the Google DNS nameservers 8.8.8.8 and 8.8.4.4, so our server can resolve domain names
and hostnames.

We have a second IPv4 address, which we want to bind to the same physical network card (eth0).

The concept of creating or configuring multiple IP addresses on a single network card is called IP
aliasing. The main advantage of IP aliasing is that you don’t need a physical network card
attached to each IP; instead you can create multiple virtual interfaces (aliases) on a single
physical card.

These lines actually configure the IP aliasing:


up ip addr add <2nd ipv4 address>/24 dev eth0 label eth0:0
down ip addr del <2nd ipv4 address>/24 dev eth0 label eth0:0

Make sure you pair each up/add line with a matching down/del line.

In the above example we setup the IPv4 addresses statically (hence the static keyword). There are
other options like dhcp. You can read more about it at https://wiki.debian.org/NetworkConfiguration

Now let’s bind the IPv6 addresses we have to the eth0 interface:



iface eth0 inet6 static
address <your IPv6 address>
netmask 64
gateway <IPv6 gateway of your host>
up ip -6 addr add <additional IPv6 address #1>/64 dev eth0
up ip -6 addr add <additional IPv6 address #n>/64 dev eth0
down ip -6 addr del <additional IPv6 address #1>/64 dev eth0
down ip -6 addr del <additional IPv6 address #n>/64 dev eth0
dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844

For Ramnode the netmask is 64. The gateway server should also be documented by your VPS host.

We will use the Google IPv6 DNS nameservers 2001:4860:4860::8888 and 2001:4860:4860::8844; so
our server can resolve domain names and hostnames.

The up ip -6 / down ip -6 lines set up IPv6 aliasing in a similar manner to what we did for our second
IPv4 address.

After making the changes run resolvconf:


$ sudo resolvconf -u

This makes sure that the DNS nameservers we have specified are added to the file /etc/resolv.conf:
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 2001:4860:4860::8888
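As a quick check that name resolution works through these resolvers, ping a hostname; if the
name translates to an IP address, the nameservers are reachable:

$ ping -c 1 google.com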

Now we’ll reboot your server.


$ sudo reboot

After booting, check the output of ifconfig again; it should now list a ‘Scope:Global’ IPv6 address in
the eth0 section:
$ ifconfig

eth0 Link encap:Ethernet HWaddr 00:16:3c:ca:7c:37


...
inet6 addr: <ipv6 address>/64 Scope:Global

Now try to ping Google via the IPv6-enabled ping command:
$ ping6 ipv6.google.com
PING ipv6.google.com(yk-in-x8a.1e100.net) 56 data bytes
64 bytes from yk-in-x8a.1e100.net: icmp_seq=1 ttl=57 time=23.8 ms
64 bytes from yk-in-x8a.1e100.net: icmp_seq=2 ttl=57 time=15.1 ms

Congratulations, your server is now reachable via IPv4 and IPv6 addresses!

Useful Linux Commands


Below are some useful Linux commands you may use when administering your server.
Checking the available disk space



If you want to check the available disk space then a good command to use is df -h
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/barefeetmedia--vg-root 40G 15G 24G 38% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 2.0G 3.4M 2.0G 1% /tmp
tmpfs 396M 400K 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda1 236M 90M 134M 41% /boot

Checking the Available Free RAM

A good way to check how much RAM is being used on your server is using the command free -m
$ free -m
total used free shared buffers cached
Mem: 3953 1896 2056 0 151 382
-/+ buffers/cache:1363 2590
Swap: 4095 0 4095

Make sure you look at the free RAM on the -/+ buffers/cache line, because the first line includes the
memory used for disk caching, which can make it seem like you have no RAM left.

Upgrading Ubuntu 15.04 to 15.10


At the time when we wrote the Ubuntu install chapter version 15.04 was the latest version. In this
chapter we’ll show you how to upgrade to a new version, in our case the 15.10 release.

If you’re not using an LTS version (Long Term Support version), your Ubuntu install may become
unsupported pretty soon. (eg. after 9 months). You’ll be greeted with the following warning if you’re
running an unsupported version when you login via SSH:
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife

New release '15.10' available.


Run 'do-release-upgrade' to upgrade to it.

To be able to use the newest kernels we recommend non-LTS versions, which means you’ll upgrade
more regularly.

In the following sections we’ll give some important pre-upgrade checks you have to do before starting
the upgrade.

Remount the temp partition with noexec false


You’ll need to remount your tmp partition with noexec on false, so the installer can run executables
on the tmp partition. We removed that possibility to make it harder for trojans/viruses to execute on
our /tmp partition.

Here is how you can disable noexec; make sure to revert the change after the upgrade.
$ sudo nano /etc/fstab



Replace
tmpfs /tmp tmpfs noatime,nodiratime,rw,noexec,nosuid 0 0

by
tmpfs /tmp tmpfs noatime,nodiratime,rw,nosuid 0 0
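After saving the file you can apply the change without a reboot by remounting /tmp (a reboot
works too; mount re-reads the options from /etc/fstab):

$ sudo mount -o remount /tmp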

Remove any unused old kernels


Sometimes it is needed to get more free space available on the boot partition, to be able to start the
installation.

You can remove any unused old kernels from the boot partition via the following command:
$ sudo apt-get autoremove
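To verify how much space this frees up, check the boot partition before and after with df:

$ df -h /boot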

Do the upgrade via VNC


It is not recommended to perform an upgrade over ssh because in case of failure it is harder to
recover.

We will execute the upgrade via a remote VNC session. For RamNode this can be accessed from the
SolusVM Control Panel at https://vpscp.ramnode.com/remote.php

The VNC login looks very similar to an SSH based login. Some keys may not work as expected
though. For example the - character has to be entered as ALT-45 to work.

Here are the commands to execute:


$ sudo apt-get install update-manager-core

Edit the file /etc/update-manager/release-upgrades:


$ sudo nano /etc/update-manager/release-upgrades

Set Prompt=normal to enable the following behaviour:


Check to see if a new release is available. If more than one new release is foun\
d, the release upgrader will attempt to upgrade to the release that immediately \
succeeds the currently-running release.

$ sudo do-release-upgrade

If the upgrade asks to overwrite locally changed files choose the option to keep your version of the
files.

Reboot when asked.



Performance Baseline
Before we start tweaking the basis install, we want to run some benchmarks
to form a performance baseline which we can use to compare before/after
tweaking results.

Serverbear
Go to http://serverbear.com/ and choose Benchmark my server. Choose
your hosting provider and which plan you have bought. Enter your email to
get the results of the benchmarks when they are ready.

At the bottom you can copy paste a command that you can use within your
SSH session.

When the benchmark finishes you will receive an email with a link to your
results.

Here are some example results:


UnixBench score: 3278.6
I/O rate: 1200.0 MB/second
Bandwidth rate: 74.2 MB/second

The ServerBear benchmark tests the CPU (UnixBench score), the
performance of the I/O subsystem, and the download speed from various
locations to your server.

Testing Ping Speed


Testing the ping speed is a great way to test the latency from your location
to the server.

To test the latency from your location to the server you can use the
Windows/Mac builtin ping command:



Open a cmd window and type
ping <your server IP address>

The latency should approach the physical minimum determined by the speed of
light and the distance between your location and the server.



Tuning KVM Virtualization Settings
There are a few settings in the VPS Control panel which will provide optimal
performance when using the KVM virtualization.

VPS Control Panel Settings


In the VPS control panel choose

Network card: Intel PRO/1000. This makes sure we’re not limited to 100Mbit/sec
bandwidth on the network card. (but instead have the full 1000Mbit/sec available).
Disk driver: virtio - this can give better IO performance

Network card settings

After this change we can run the ServerBear benchmark again with the following results:
UnixBench score: 3561.0
I/O rate: 1300.0 MB/second
Bandwidth rate: 110.0 MB/second



Configuring the CPU Model Exposed to KVM Instances
When using KVM, the CPU model used inside the server is not exposed directly to our
Linux installation by default. Instead a QEMU Virtual CPU version will be exposed.

You can view the processor information with the following command:
$ cat /proc/cpuinfo

Here is what is displayed on our server:


processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 13
model name : QEMU Virtual CPU version (cpu64-rhel6)
stepping : 3
microcode : 0x1
cpu MHz : 3400.022
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 4
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 cl\
flush mmx fxsr sse sse2 syscall nx lm nopl pni cx \
16 \
hypervisor lahf_lm
bogomips : 6800.04
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 1
...

From the output you can see we have a multi core CPU (processor: 0, processor: 1). The
model name is as you can see QEMU Virtual CPU version. In the flags section you can
see what kind of special instruction sets the CPU supports. Eg. SSE2

In reality our CPU supports a lot more instruction sets, eg. SSE4. But these are not visible
yet in /proc/cpuinfo, which means Linux apps, and software we compile ourselves, may not
use these instructions either.

To be sure we maximize performance we can enable the CPU Passthrough mode so that
all CPU features are available.



Actually, as we’re running inside a VPS, the KVM installation was done by RamNode,
which means the CPU Passthrough mode can only be enabled by RamNode support.

After the change the /proc/cpuinfo changed to the below information:


processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz
stepping : 9
microcode : 0x1
cpu MHz : 3400.022
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov \
pat pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc arch_perfmon n\
opl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadl\
ine_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep
bogomips : 6800.04
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

Now we can see we’re running on a Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz.
The flags line also has a lot more instruction sets available now.



Tuning Kernel Parameters
In this chapter we will tune performance related Kernel parameters. These parameters can be grouped into
different categories:

Improving SSD speeds
Improving Network speeds (TCP/IP settings)
Improving scalability (file handlers, TCP/IP Settings,… )

Tuning the Linux kernel can be done with the sysctl program or by editing the /etc/sysctl.conf file.

After editing the sysctl.conf file you can run sudo sysctl -p to apply the changes without rebooting your server.

Improving Network speeds (TCP/IP settings)


In this section we will optimize the TCP/IP settings. TCP is a complex protocol and as such is highly tunable.
Academic research continues to study the protocol and find performance optimizations, which
ultimately find their way into newer kernel versions.

We will not try to explain every parameter in detail because this is highly technical. If you’re interested in more
information about TCP/IP we recommend you to read the book High Performance Browser Networking by Ilya
Grigorik

To start editing the settings open the sysctl.conf file in an editor; all settings in the next section can be added to this
file (at the end preferably)
$ sudo nano /etc/sysctl.conf

Disabling TCP/IP Slow start after idle


When a client and a server start communicating with each other they don’t know the available network capacity
between the two. The only way to measure it, is by exchanging data.

This is what the TCP/IP Slow start algorithm is designed to do. The general idea is that the sender of the data starts
to send data slowly, waiting for ACKs (=acknowledgements) by the receiver. If the network is great between the
two, and the sender gets the ACKs back correctly, the sender will increase the amount of data in the next round
(before waiting for ACKs to return). This is called the TCP/IP congestion window.

This phase of the TCP connection is commonly known as the “exponential growth” algorithm, as the client and the
server are trying to quickly communicate at maximum speed on the network path between them.

No matter the available bandwidth, every TCP connection must go through the slow-start phase. This can hinder
performance when downloading eg. an html file with multiple Javascript files and style sheets.

These are all separate HTTP connections which run on the TCP/IP protocol. Because the files are small in size it is
not unusual for the requests to terminate before the maximum TCP window size is reached. Because of this, slow-
start limits the available bandwidth throughput, which has an adverse effect on the performance of small transfers.

To decrease the amount of time it takes to grow the congestion window, we can decrease the roundtrip time
between the client and server so the ACKs will arrive faster at the sender, which can then in turn start sending
more data sooner.

In Linux kernel 2.6.39 and later the initial congestion window size was increased to 10 segments of data. (eg. due
to better networks, TCP/IP connections will now transfer more data before waiting for ACKs). It’s one of the
reasons why using a recent Linux kernel is important.

TCP also implements a Slow start restart mechanism, which again reduces the size of the TCP congestion window
after a connection has been idle for a period of time (because network conditions could have changed in the
meantime).

This can have a significant impact on TCP connections which are alive for a longer time. It is recommended to
disable the Slow start restart mechanism on the server.
# Disable TCP IP Slow start after idle; this affects IPv6, too, despite the name
net.ipv4.tcp_slow_start_after_idle=0
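After running sudo sysctl -p you can verify the active value:

$ sysctl net.ipv4.tcp_slow_start_after_idle
net.ipv4.tcp_slow_start_after_idle = 0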

Other Network and TCP/IP Settings


For servers which are handling large numbers of concurrent sessions there are a few settings that can be tweaked.
$ sudo nano /etc/sysctl.conf

net.ipv4.tcp_keepalive_time=1800

This setting reduces the keepalive timeout of a TCP connection to 30 minutes (1800 seconds). This way we use
less memory.
net.ipv4.tcp_max_syn_backlog = 10240

This setting increases the maximum number of remembered connection requests, which still did not receive an
acknowledgment from the connecting client. You may need to lower this number if you have a memory
constrained VPS. The default is 1024.
net.core.netdev_max_backlog = 2500

The number of packets that can be queued should be increased from the default of 1000 to 2500
net.ipv4.tcp_max_tw_buckets = 1440000

With a web server you will see a lot of TCP connections in the TIME-WAIT state. TIME_WAIT is when the
socket is waiting after close to handle packets still in the network. This setting should be high enough to prevent
simple Denial of Service attacks.
net.ipv4.ip_local_port_range = 1024 65535

This setting defines the local port range that is used by TCP traffic. You will see in the parameters of this file two
numbers: The first number is the first local port allowed for TCP on the server, the second is the last local port
number allowed. For high traffic sites the range can be increased, so that more local ports are available for use
concurrently by the web server.
net.ipv4.tcp_max_orphans = 60000

An orphan socket is a socket that isn’t associated to a file descriptor. For instance, after you close() a socket, you
no longer hold a file descriptor to reference it, but it still exists because the kernel has to keep it around for a bit
more until TCP is done with it. If this number is exceeded, the orphaned connections are reset immediately and
a warning is printed. Each orphan eats up to 64K of unswappable memory.

You can view the number of orphan sockets here:


$ cat /proc/net/sockstat
TCP: inuse 42 orphan 2 tw 12 alloc 77 mem 39

net.ipv4.tcp_mem= 976562 976562 976562



The tcp_mem variable defines how the TCP stack should behave when it comes to memory usage; when the third
number is reached, then packets are dropped. Configuring this option depends on how much memory your server
has.

The number is not in bytes but in number of pages (where most of the time 1 page = 4096 bytes)

You can view your page size with:


$ getconf PAGESIZE
4096

So if we take the total memory (4GB = 4 000 000 000 bytes) of our server we can do the math:
4 000 000 000 / 4096 ≈ 976562 pages

We actually recommend to not set tcp_mem manually as it is already auto-tuned by Linux based on the amount of
RAM.

We don’t recommend to use higher values unless you have more memory available.

You can view the current amount of pages needed by TCP via:
$ cat /proc/net/sockstat
sockets: used 169
TCP: inuse 32 orphan 0 tw 9 alloc 68 mem 34

net.ipv4.tcp_rmem=4096 87380 16777216

The first value in this variable tells the minimum TCP receive buffer space available for a single TCP socket.
Unlike the tcp_mem setting, this one is in bytes. The second value is the default size. The third value is the
maximum size.
net.ipv4.tcp_wmem=4096 65536 16777216

The first value in this variable tells the minimum TCP send buffer space available for a single TCP socket. Unlike
the tcp_mem setting, this one is in bytes. The second value is the default size. The third value is the maximum size.
16MB per socket sounds much, but most sockets won’t use anywhere near this much. (+ it is nice to be able to
expand if necessary)
net.ipv4.tcp_sack = 1

The tcp_sack variable enables Selective Acknowledgements (SACK). It was developed to handle lossy
connections better.
net.ipv4.tcp_timestamps = 1

This is an TCP option that can be used to calculate the Round Trip Measurement in a better way than the
retransmission timeout method can.
net.ipv4.tcp_window_scaling = 1

This specifies how we can scale TCP windows if we are sending them over large bandwidth networks. When
sending TCP packets over these large pipes, we experience heavy bandwidth loss due to the channels not being
fully filled while waiting for ACKs for our previous TCP windows.

Enabling tcp_window_scaling enables a special TCP option which makes it possible to scale these windows to a
larger size, and hence reduces bandwidth losses due to not utilizing the whole connection.
net.ipv4.tcp_ecn = 1



The Explicit Congestion Notification (ECN) mechanism is an end-to-end congestion avoidance mechanism.
According to http://www.ietf.org/rfc/rfc2884.txt it increases network performance by enabling it.
net.ipv4.tcp_early_retrans=2

Linux 3.5 kernel and later implement TCP Early Retransmit with some safeguards for connections that have a
small amount of packet reordering. This allows connections, under certain conditions, to trigger fast retransmit and
bypass the costly Retransmission Timeout (RTO). By default it is enabled in the failsafe mode
tcp_early_retrans=2.
net.core.somaxconn=1024

The maximum number of “backlogged sockets”. The default is 128. This is only needed on a very loaded server.
You’re effectively letting clients wait instead of returning a connection abort.
net.core.wmem_max=16777216

The maximum OS send buffer size in bytes for all types of connections
net.core.rmem_max=16777216

The maximum OS receive buffer size in bytes for all types of connections
net.core.rmem_default=65536

The default OS receive buffer size in bytes for all types of connections.
net.core.wmem_default=65536

The default OS send buffer size for all types of connections

File Handle Settings


$ sudo nano /etc/sysctl.conf
fs.file-max = 5000000

When you’re serving a lot of html, stylesheets, etc; it is usually the case that the web server will open a lot of local
files simultaneously. The kernel limits the number of files a process can open.

Raising these limits is sometimes needed.

Setup the number of file handles and open files


You can view the current maximum number of file handles and open files with ulimit.

To scale your web server (eg. we will install Nginx later), you need to increase this number:
$ ulimit -Hn
4096

$ ulimit -Sn
1024

-Hn shows us the hard limit, while -Sn shows the soft limit.

If you want to increase this limit then edit the limits.conf:


$ sudo nano /etc/security/limits.conf

Add the following two lines to this file:



* soft nofile 65536
* hard nofile 65536

This means that the soft and hard limit on the number of files per user is 65536. The * means that we apply this
limit to all user accounts on your system.

You can increase this further if needed.

Now reboot the machine so the settings will take effect:


$ sudo reboot
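After the reboot, open a new SSH session and verify that the limits took effect; both
commands should now report 65536:

$ ulimit -Hn
$ ulimit -Sn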

Improving SSD Speeds


Kernel settings
$ sudo nano /etc/sysctl.conf

These settings are optimized for SSD disks. Do not apply these if you don’t have an SSD in your server.
vm.swappiness=1

With the setting above, you limit the use of the swap partition (the virtual memory on the SSD). Ubuntu’s
inclination to use the swap is determined by vm.swappiness.

On a scale of 0-100, the default setting is 60. We set it to 1 so that swap is not used (= less I/O traffic) unless the
server gets severe RAM shortage.
vm.vfs_cache_pressure=50

Improves the file system cache management for SSDs.

Scheduler Settings
I/O performance, or the read/write latency of a web server can seriously impact the overall page load times of your
server. Making a simple change to the IO scheduler that’s built into the kernel can decrease your IO latency by
20% or more.

Scheduling algorithms attempt to reorder disk access patterns to mitigate the shortcomings of traditional HDDs.
They work best with I/O devices that have reasonable transfer speeds and slow seek times.

Depending on the Linux distribution and version you have installed, the I/O Scheduler can be different. There are a
few possible Linux schedulers:

Deadline
CFQ
NoOp
None

By default, Ubuntu 14.04 and later use the Deadline I/O scheduler, which works well for both SSDs and
conventional hard disks.

When running Ubuntu 14.04 inside a KVM based VPS, you cannot choose the I/O scheduler as you are accessing
a virtual device. The I/O request is passed straight down the stack to the real disk. The scheduler used will be
“none”. This is due to kernel changes in Kernel 3.13 and later.

By default Ubuntu 12.04 uses the CFQ scheduler which is good for conventional hard disks but not for SSDs.
CFQ, Completely Fair Queuing, tries to balance I/O among all the processes fairly. This isn’t the best option for
web servers.



The NoOp (no operation) scheduler is the recommended choice if your server only has SSDs inside. It
effectively lets the scheduling be done by the SSD hardware controller, assuming it will do a better job.

Here is how to view the current and available schedulers on your Linux system:

You may need to substitute the vda portion of the command with your disk devices, which may be sda, sdb, sdc or
hda, hdb, etc.
$ cat /sys/block/vda/queue/scheduler

Here is the output on Ubuntu 13.x on a KVM VPS:


noop [deadline] cfq

It says the deadline scheduler is the current IO scheduler. cfq and noop are also available.

Our recommendation is to use the noop scheduler, which will allow the host to determine the order of the
read/write requests from your VPS instead of the VPS re-ordering and potentially slowing things down with the
cfq scheduler.

Here is how you can do this on your Ubuntu 13.x install:


$ sudo nano /etc/rc.local

Add
echo 'noop' > /sys/block/vda/queue/scheduler
echo '0' > /sys/block/vda/queue/rotational

The last line tells the kernel whether the device is rotational. For SSD disks this should
be set to 0, because seek times are much lower than on traditional HDDs (where the scheduler would otherwise
spend extra CPU cycles to minimize head movement).

After saving the file execute the rc.local:


$ sudo /etc/rc.local

Setting the correct scheduler on the KVM host is something that your VPS provider has to set. Ideally they will use
the noop scheduler for SSD based VPSs and deadline if the server also has HDDs in the mix.

Here is the output on Ubuntu 14.x on a KVM VPS:


none

In this case no changes are necessary inside the KVM VPS. Just be sure or ask your VPS provider to set deadline
or noop scheduler on the KVM host.

Reducing writes on your SSD drive


Linux has a mount option for file systems called ‘noatime’. If this option is set for a file system in /etc/fstab, then
reading accesses will no longer cause the atime information (last access time) to be updated. If it is not set, each
read of a file will also result in a write operation. Therefore, using noatime is recommended to improve
performance.

Here is how to enable it:


$ sudo nano /etc/fstab

Now add ‘noatime’ in /etc/fstab for all your partitions on the SSD.

There is also a ‘nodiratime’ which is a similar option for directories.



# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/barefeetmedia--vg-root / ext4 noatime,nodiratime,discard,error\
s=remount-ro 0 1
UUID=36cc6fc0-e230-45cb-b59d-44526031f306 /boot ext2 noatime,nodiratime,defau\
lts 0 2
/dev/mapper/barefeetmedia--vg-swap_1 none swap noatime,nodiratime,sw \
0 0
tmpfs /tmp tmpfs noatime,nodiratime,rw,noexec,nosuid \
0 0
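You can apply the new mount options without a reboot by remounting each affected filesystem;
for example for the root partition:

$ sudo mount -o remount /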

Enable SSD TRIM


The TRIM command allows the OS to notify the SSD drive which data can be overwritten, allowing the SSD drive
to manage the erase process. By moving erasing out of the write process, writes can be faster.

To enable this on a KVM VPS, your VPS provider will need to add a discard=’unmap’ attribute to the disk
definition for the domain (https://libvirt.org/formatdomain.html#elementsDisks)

It is then recommended to regularly run fstrim:


$ sudo fstrim /
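One way to automate this is a weekly entry in root’s crontab (a minimal sketch; /sbin/fstrim is
where Ubuntu installs the binary):

$ sudo crontab -e
# run TRIM on the root filesystem once a week
@weekly /sbin/fstrim /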

Other kernel settings


kernel.panic = 10

Reboots the kernel after a panic automatically in 10s



Installing OpenSSL
OpenSSL is a general purpose cryptography library which supports the Transport
Layer Security (TLS) protocol used by browsers and websites. Its main purpose is thus
enabling secure communications over the https protocol between your users and the
server.

We can also use it to eg. connect securely to our database, generate private/public keys,
and manage keystores and certificates.

Last year OpenSSL was all over the news because it had some major security bugs.
These have all been fixed in the current version.

In the guide below we will show you how to compile the latest version from source.
We’ll also describe how you can upgrade to a new version when one appears.

Installing OpenSSL 1.0.2d


By default your Linux distribution will most probably already have a precompiled
version of OpenSSL installed.

You can view the installed version of openssl on your system by typing:
$ openssl
then type
version

OpenSSL> version
OpenSSL 1.0.1j 15 Oct 2014
OpenSSL>

Here we see we’re not running the latest version. 1.0.2d was released in July 2015 and
fixes 12 security bugs. You can find the newest versions at https://www.openssl.org/

Here is the procedure to install OpenSSL 1.0.2d:


$ cd ~
$ axel https://www.openssl.org/source/openssl-1.0.2d.tar.gz
$ tar xvfz openssl-1.0.2d.tar.gz
$ cd openssl-1.0.2d
$ ./config
$ make
$ sudo make install



All OpenSSL files, including binaries and manual pages are installed under the
directory /usr/local/ssl. To ensure users use this version of OpenSSL instead of the
previous version which comes by default on Ubuntu, you must update the paths for the
manual pages (documentation) and the executable binaries.

Edit the file /etc/manpath.config, adding the following line before the first
MANPATH_MAP:
$ sudo nano /etc/manpath.config
MANPATH_MAP /usr/local/ssl/bin /usr/local/ssl/man

Update the man database by executing:


$ sudo mandb

Edit the file /etc/environment and insert the path for the new OpenSSL version
(/usr/local/ssl/bin) before the path for Ubuntu’s version of OpenSSL (/usr/bin).
$ sudo nano /etc/environment
Our environment file looks like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/local/ssl/bin:/usr/sbin:/usr/bin:/sbin\
:/bin:/usr/games"

Now reboot the server:


$ sudo reboot

After reboot check whether executing openssl displays the version you’ve just
upgraded to:
$ openssl
OpenSSL> version
OpenSSL 1.0.2d 9 Jul 2015

Upgrading OpenSSL to a Future Release


Upgrading to a future release is actually the same procedure as described in the
previous section; but you don’t need to update the /etc/manpath.config and
/etc/environment anymore.

Check Intel AES Instructions Are Used By OpenSSL


Intel AES instructions are a new set of instructions available on the 32nm Intel
microarchitecture (formerly codenamed Westmere-EP). These instructions enable fast
and secure data encryption and decryption, using the Advanced Encryption Standard
(AES).



You can check whether it’s used by executing OpenSSL once with special parameters
that disable AES-NI and once without. If the latter executes much faster, it
means AES-NI is enabled.

Run OpenSSL with AES disabled:


$ OPENSSL_ia32cap="~0x200000200000000" openssl speed -elapsed -evp aes-128-cbc

This will produce the following performance numbers:


The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
aes-128-cbc 280842.98k 318053.40k 307241.39k 306882.56k 304111.62k

Run OpenSSL with default settings:


$ openssl speed -elapsed -evp aes-128-cbc
The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
aes-128-cbc 585509.67k 616012.74k 619633.41k 658522.11k 661039.79k

You can see that AES-NI is enabled and used by OpenSSL on our system.

Make sure you’re exposing the AES instructions to your KVM VPS by Configuring
the CPU model exposed to KVM instances, which we explained previously.
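You can also check directly whether the aes instruction set is exposed to your VPS; with
CPU passthrough configured this prints aes once:

$ grep -m 1 -o aes /proc/cpuinfo
aes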



Securing your Server
In this chapter we will make our server more secure by installing CSF (ConfigServer
Security and Firewall). CSF is one of the best firewall/Intrusion detection-prevention tool
out there for Linux. It has the following features:

A firewall
Checks login authentication failures for SSH, FTP, …
Excessive connection blocking
Syn flood protection
Ping of death protection

Installing CSF (ConfigServer Security and Firewall)


Ubuntu comes with Uncomplicated Firewall (ufw) preinstalled by default. Let’s make sure we
disable it if that was not yet the case:
$ sudo ufw disable

Let’s now download and install CSF:


$ cd ~
$ wget http://www.configserver.com/free/csf.tgz
$ tar -xzf csf.tgz
$ cd csf
$ sudo sh install.sh

CSF is autostarted on bootup of your server. It reads its configuration settings from the
file /etc/csf/csf.conf

You can test the CSF installation by running:


$ sudo perl /etc/csf/csftest.pl
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK

RESULT: csf should function on this server



If you receive the result ‘csf should function on this server’ everything is OK!

Configuring the ports to open


Inside the csf.conf there are 8 parameters which let you whitelist inbound and outbound
ports (and the ability to connect on them).
$ sudo nano /etc/csf/csf.conf

1. TCP_IN: Incoming IPv4 TCP Ports
2. TCP_OUT: Outgoing IPv4 TCP Ports
3. UDP_IN: Incoming IPv4 UDP Ports
4. UDP_OUT: Outgoing IPv4 UDP Ports
5. TCP6_IN: Incoming IPv6 TCP Ports
6. TCP6_OUT: Outgoing IPv6 TCP Ports
7. UDP6_IN: Incoming IPv6 UDP Ports
8. UDP6_OUT: Outgoing IPv6 UDP Ports

As you can see the last 4 are used for IPv6 connections. Incoming connections are
connections coming from the outside world to your server. (these could be HTTP
connections, SSH login, …). Outgoing connections are connections created by your
server to the outside world.

You should only add the ports for services you are really using; the ports which are not
whitelisted are closed by default, reducing the possible attack surface for hackers.

Below you can find an example configuration:


TCP_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,26706"
TCP_OUT = "22,25,53,80,110,443"
UDP_IN = "53"
# To allow outgoing traceroute add 33434:33523 to this list
UDP_OUT = "53,123,33434:33523"
TCP6_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,1099,267\
06"
TCP6_OUT = "22,25,53,80,110,113,443"
UDP6_IN = "53"
# To allow outgoing traceroute add 33434:33523 to this list
UDP6_OUT = "53,113,123,33434:33523"

Here is a list of the default services normally running on these ports:

22: SSH (Secure Shell)
25: SMTP (Simple Mail Transfer Protocol)
53: DNS (Domain Name System)
80: HTTP
110: POP3 (Post Office Protocol)
143: IMAP (Internet Message Access Protocol)
443: HTTPS (Secure Sockets Layer (SSL)/HyperText Transfer Protocol)
995: POP3-SSL (POP3 over TLS/SSL)
993: IMAP-SSL (Internet Message Access Protocol (IMAP) over TLS/SSL)
3306: mysql / MariaDB
8080: Jetty Java server
9000: PHP-FPM
11211: Memcached port

Configuring and Enabling CSF


On Ubuntu 15.x you’ll also need to change the location of the systemctl binary to
/bin/systemctl, otherwise csf will not be able to start:
SYSTEMCTL = "/bin/systemctl"

At the top of the file make sure that the Testing mode is enabled for now. This will make
sure we don’t accidentally lock ourselves out of our own server with an incorrect configuration.
TESTING = "1"

You can enable CSF by running the following command manually:


$ sudo csf -e

No errors should be returned. To check if you can still login via SSH, open a second SSH
connection to your server after enabling CSF. You should be able to login correctly. If so
you can disable the testing mode in CSF by setting TESTING = “0” in the
/etc/csf/csf.conf file. Afterwards restart csf by running:
$ sudo csf -r

CSF can also allow or whitelist IP addresses with the following command:
$ sudo csf -a xxx.xxx.xxx.xxx

(where xxx.xxx.xxx.xxx is the IP address)

If you want to ban an IP address, use:


$ sudo csf -d xxx.xxx.xxx.xxx

Login Failure Daemon (Lfd)


CSF also includes a Login Failure Daemon (lfd). This is a daemon process that runs all the
time and periodically (every X seconds) scans the latest log file entries for login attempts
against your server that continually fail within a short period of time.



Such attempts are often called “brute-force attacks”; the daemon process responds
very quickly to such patterns and blocks offending IPs.

When CSF is enabled, the login failure daemon is also automatically running.

You can view the logs of LFD (including which auto bans were performed) via the
following command:
$ sudo cat /var/log/lfd.log

Oct 29 15:50:39 lfd[883]: daemon started on xxx - csf v8.07 (generic)


Oct 29 15:50:39 lfd[883]: CSF Tracking...
Oct 29 15:50:39 lfd[883]: IPv6 Enabled...
Oct 29 15:50:39 lfd[883]: LOAD Tracking...
Oct 29 15:50:39 lfd[883]: Country Code Lookups...
Oct 29 15:50:39 lfd[883]: Exploit Tracking...
Oct 29 15:50:39 lfd[883]: Directory Watching...
Oct 29 15:50:39 lfd[883]: Temp to Perm Block Tracking...
Oct 29 15:50:39 lfd[883]: Port Scan Tracking...
Oct 29 15:50:39 lfd[883]: Account Tracking...
Oct 29 15:50:39 lfd[883]: SSH Tracking...
Oct 29 15:50:39 lfd[883]: SU Tracking...
Oct 29 15:50:39 lfd[883]: Console Tracking...
...



Ordering a Domain Name For Your Website
In this chapter we will guide you through ordering a domain name for your
website.

With the domain name your visitors can easily find your website. Without
Domain Name services your users would have to use the IP address of your
server to access your site. (eg. 72.13.44.2)

Choosing a Domain Name and a Top Level Domain


A domain name generally consists of a few levels, let’s take
www.google.com as an example.

The first level is the .com part. This is either a top level domain (tld)
such as .gov, .com, .edu or a country code top level domain (.be, .fr,…)
The second level is the google part.
The third level is the www part.

To create your fully qualified domain name you’ll generally use something
like:
www.<mydomainname>.<tld>
or
<mydomainname>.<tld>

as the use of the www is optional.

When ordering your domain name you’ll thus have to decide which top level
domain (<tld>) you want to choose and which <mydomainname> you want
to choose.

We recommend using a country-code top level domain if the target
audience of your site is from one country only. Otherwise the .com tld is still
the most used.



Of course you’ll only be able to choose a domain name which has not been
taken yet. To get a quick overview we recommend querying for
domain names at EuroDNS.

Ordering a Domain With EuroDNS


We recommend the domain registrar EuroDNS, because we have been
registering and renewing our domain names with them for years. Why?

Their prices are cheap
The ordering process is easy
The configuration panel is easy to use and conforms to all our needs.
Their email support is quick

Here is the step-by-step process to order a domain at EuroDNS.

1. Go to the EuroDNS website
2. Enter your domain name (without any country or generic top level
domain)
3. Click on the green shopping cart button to add the domain to your cart
4. Then click on Configure your order
5. Create a new EuroDNS account if needed



EuroDNS ordering process

After signup or login you can enable the option Domain Privacy if needed.
Enabling Domain Privacy will hide your name, address, email and phone
number from ‘whois’ lookups via your domain name.

You also need to specify the name server to use. A name server is a
computer server that connects your domain name to any services you may
use (e.g. email, web hosting). We’ll go into more detail on how to configure
this in the next section. For now you can use the EuroDNS default name
server.

Now click on ‘Review and Payment’ to enter the credit card details and
finish the ordering process.

You’ll receive an email from EuroDNS as soon as everything is ready.

Configuring a Name Server


When a user types in a domain name in their browser, one of the first steps
that needs to be done is the translation of the domain name to its
corresponding IP address. This is done via a domain name server (DNS) that
has been assigned the responsibility of your domain name. Eg. it contains all
the configuration needed to be able to host a website on www.<yourdomain>.com
or routing of email to a yourname@<yourdomain>.com
address.

This name server can be the same server as your VPS server you’ll use for
hosting your website. In this case you would have to install name server
software on your VPS server and update the nameserver configuration at
EuroDNS to use your name server.

We don’t recommend using your own server as a domain name server
though, but instead opt for a managed DNS service.

Managed DNS is a service that allows you to outsource DNS to a third party
provider. Here are the reasons why we recommend this option:

Simplicity: You don’t have to worry about setting up and maintaining
your own DNS server. Management of DNS records is also easier
because providers enable this using a simple browser-based GUI or API
Performance: Providers that specialize in DNS have often invested
significant time and capital setting up global networks of servers that
can respond quickly to DNS queries regardless of a user’s location
Availability: Managed DNS providers employ dedicated staff to
monitor and maintain highly available DNS services and are often
better equipped to handle service anomalies like Distributed denial of
service attacks.
Advanced Features: Managed DNS providers often offer features that
are not part of the standard DNS stack such as integrated monitoring
and failover and geographic load balancing

Ordering the DNSMadeEasy DNS Service


We recommend to use DNSMadeEasy as the provider of your DNS service.
You can find an overview of their offerings here

The Small Business plan at 29.95 USD will be fine for most sites, unless you
have massive amounts of traffic to your site. (in this case you’ll need to
check the number of queries included in each plan)



After ordering you’ll be able to login at their control panel at
https://cp.dnsmadeeasy.com/
Adding a Domain in DNSMadeEasy

After login in the control panel, you can add a domain this way:

1. Select the “DNS” menu, then select “Managed DNS”
2. Click “Add Domains” on the right
3. Enter a domain name and click “Ok”

Add a domain at DNSMadeEasy

You can watch a video on YouTube too:
https://www.youtube.com/watch?v=2s-Et_Aqv2w&feature=youtu.be
Configuring a Domain in DNSMadeEasy

Now that we have added a domain, we need to configure it.

First we will configure the DNS records.

There are a few different types of DNS records:



A Records
AAAA Records
CNAME Records
MX Records
TXT Records
System NS Records

For each we will discuss why they are used and how to configure them.
A Records

An A or Address record is used to map a host name prefix to an IPv4
address. Generally you’ll have at least two such records; one for the www
version of your site and one for the non-www version. Eg. www.mysite.com
and mysite.com

An example can make this clear:

Add an A Record at DNSMadeEasy

You can of course add more subdomains here; eg. files.mytestsite.com

To add an A record click on the plus icon and:

fill in the name (leave it blank for one record; for the second A record fill in
www)
the IPv4 address of your server
Dynamic dns: off
TTL: The Time to live in seconds. This means that DNS servers all
over the world can cache the IP address for x amount of seconds before
they have to ask the authorative server again. A commonly used TTL
value is 86400 (= 24h). If you lower this value to eg. 1800 (30
minutes), a change of IP address/server will have less impact for your
visitors. Setting a lower value will increase the number of hits to
DNSMadeEasy (your authoritative DNS) - make sure to find the right
balance so you’re not exceeding the maximum number of included
queries in the small business plan.
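Once the records are live you can inspect what DNS servers return for them, including the
remaining TTL, with dig (assuming the dnsutils package is installed; substitute your own
domain for www.mysite.com):

$ dig +noall +answer www.mysite.com A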
AAAA Records

This is very similar to A Records; except that the AAAA records will map to
IPv6 addresses of your server.
CNAME Records

CNAME Records, or Canonical Name records specify an alias for another
domain, the ‘canonical’ domain.

One such example where you’ll need this is when using a CDN service
(Content Delivery Network) for optimizing latency and download speeds of
your website resources (eg. Images, html, css,… )

We will explain this in more detail in our CDN chapter.


MX Records

MX or a Mail exchange record maps a domain name to a mail transfer agent
(MTA) or mail relay. The mail relay is software which takes care of sending
email messages from one computer to another via SMTP (Simple Mail
transfer Protocol).

More simply explained; if you want to send an email from joe@mysite.com
you’ll need to provide MX records. We’ll discuss this further in our Setting
up mails chapter.
TXT Records

TXT Records are used for multiple purposes, one of which is for email anti-
spam measures. We’ll discuss this further in our Setting up mails chapter.
Using the DNSMadeEasy nameservers at your DNS registrar EuroDNS



The final step you need to take, to get everything working is to specify the
DNSMadeEasy nameservers at your DNS registrar EuroDNS, where you
ordered your domain name.

This will make sure that DNS servers around the world will find all the
configuration you’ve applied in the previous section.

Here are the steps to complete:

1. Go to https://www.eurodns.com and login to your control panel
2. Click on Name Server profiles
3. Click on Add name server profile
4. Add a descriptive name for the profile (eg. DNSMadeEasy)
5. Add all the following DNSMadeEasy nameservers
ns0.dnsmadeeasy.com
ns1.dnsmadeeasy.com
ns2.dnsmadeeasy.com
ns3.dnsmadeeasy.com
ns4.dnsmadeeasy.com

Now save the profile



DNSMadeEasy nameserver configuration at EuroDNS

You can attach this profile to your domain name via the following steps:

1. Go to the registered domains list
2. Click on Manage
3. Click on Update NS
4. Choose ‘my own name servers’ and choose the dnsmadeeasy profile
you just created
Verifying If Everything Is Setup Correctly

There are some handy free website tools which can verify if your DNS
configuration is set up correctly. Some tools also test the performance of the DNS
server or Service you are using.

Slow DNS performance is often overlooked when trying to speed up a
website.
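You can also check the delegation yourself from the command line (assuming dig is
available); once the change has propagated, this should list the five DNSMadeEasy
nameservers:

$ dig +short NS <mydomainname>.<tld>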



At http://bokehman.com/dns_test.php you can perform a DNS latency test
for your domain.

The following sites also analyze your DNS configuration and performance:
https://cloudmonitor.ca.com/en/dnstool.php



Installing MariaDB 10, a MySQL Database Alternative
Most of the popular web software used these days needs a database to store its
information. This includes popular blog platforms like WordPress and Drupal, forums like
phpBB3 and many more.

In this chapter we will install MariaDB, which is a drop-in replacement for the well
known MySQL database.

MariaDB is a community developed fork of the MySQL database which should remain
free. It is led by the original developers of MySQL after the acquisition of MySQL by
Oracle.

It is very easy to switch from MySQL to MariaDB, because MariaDB is a drop-in
replacement. That means that:

All client APIs, protocols and structs are identical.
All filenames, binaries, paths, ports, sockets, etc. should be the same.
All MySQL connectors for other languages (PHP, Perl, Python, Java, .NET,
MyODBC, Ruby, MySQL C connector) work unchanged with MariaDB.

This means that for most cases, you can just use MariaDB where you would use MySQL.
If your website software doesn’t explicitly support MariaDB, but it does support MySQL,
it should work out-of-the-box with MariaDB.

Here is a short list of reasons why we choose MariaDB over MySQL:

Better performance
By default uses the XtraDB storage engine which is a performance enhanced fork of
the MySQL InnoDB storage engine
Better tested and fewer bugs
All code is open source

Download & Install MariaDB


Ubuntu 14.10 and 15.x only include the older MariaDB 5.5 in the official
repositories. In our guide we will use the latest stable release, which is the 10.1
series (10.1.8 at the time of writing).

MariaDB 10.1 adds the following performance features since 10.0:



a lot of performance improvements to make better use of CPUs with more processing
power and more cores.
on the disk IO side there are several improvements like page compression and
defragmentation. Defragmentation is based on the patch developed first by Facebook
and then further by Kakao Corp.

The commands below will add a Maria DB repository to Ubuntu 15.x which will allow us
to install MariaDB and future updates via the apt-get system.
$ sudo apt-get install software-properties-common
$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082\
a1bb943db
$ sudo add-apt-repository 'deb http://lon1.mirrors.digitalocean.com/mariadb/repo/1\
0.1/ubuntu vivid main'

You can find the exact steps for Ubuntu 15.x via the configuration guide at
https://downloads.mariadb.org/mariadb/repositories

The above steps only need to be performed once on a given server. The apt-key
command enables apt to verify the integrity of the packages it downloads.
$ sudo apt-get update
$ sudo apt-get install mariadb-server

During installation MariaDB will ask for a password for the root user.

Choose a good password, because the root MariaDB user is an administrative user that has
access to all databases!

That’s it, MariaDB is now installed on your server!

To auto start it at system startup execute the following command:


$ sudo update-rc.d mysql defaults

Securing MariaDB
By default the MariaDB installation still has a test user, database, and anonymous login
which should be disabled on a production server.

You can run the following script to make your installation fully secure:
$ sudo mysql_secure_installation

It’ll make the following changes to your MariaDB installation:

Set a password for the root user
Remove anonymous users
Disallow root login remotely
Remove the test database and access to it



Starting and Stopping MariaDB
MariaDB can be stopped and started via the following commands:
$ sudo /etc/init.d/mysql stop
$ sudo /etc/init.d/mysql start

Upgrade MariaDB To a New Version


When you upgrade MariaDB to a new version, the procedure described above can be
used.

One important step you should still run after the upgrade is to run mysql_upgrade
$ sudo mysql_upgrade --user=<root mariadb user> --password=<password>

You should run mysql_upgrade after upgrading from one major MySQL/MariaDB release
to another, such as from MySQL 5.0 to MariaDB 5.1 or MariaDB 10.0 to MariaDB 10.1.
It is also recommended that you run mysql_upgrade after upgrading from a minor
version, like MariaDB 5.5.40 to MariaDB 5.5.41, or even after a direct “horizontal”
migration from MySQL 5.5.40 to MariaDB 5.5.40. If calling mysql_upgrade was not
necessary, it does nothing.

Symptoms you may see when you didn’t run mysql_upgrade include:

Errors in the error log that some system tables don’t have all needed columns.
Updates or searches may not find all the records.
Executing CHECKSUM TABLE may report the wrong checksum for MyISAM or
Aria tables.

Tuning the MariaDB configuration


The MariaDB configuration is located in /etc/mysql/my.cnf

When tuning the performance of MariaDB, there are some hardware considerations to take
into account.

Amount of memory available: when MariaDB has more memory available, larger
key and table caches can be stored in memory. This reduces disk access which is of
course much slower.
Disk access: fast disk access is critical, as ultimately the database data is stored on
disks. The key figure is the disk seek time, a measurement of how fast the physical
disk can move to access the data. Because we’re using an SSD, seek times are very
fast :-)

Kernel Parameters



Swapping
$ sudo nano /etc/sysctl.conf
vm.swappiness=1

With the setting above, you limit the use of the swap partition (the virtual memory on the
SSD). Ubuntu’s inclination to use the swap is determined by vm.swappiness.

On a scale of 0-100, the default setting is 60. We set it to 1 so that swap is not used unless
the server gets severe RAM shortage.

This improves MariaDB’s performance because:

MariaDB’s internal algorithms assume that memory is not swap, and are highly
inefficient if it is.
Swap increases IO over just using disk in the first place as pages are actively
swapped in and out of swap.
Database locks are particularly inefficient in swap. They are designed to be obtained
and released often and quickly, and pausing to perform disk IO will have a serious
impact on their usability.

Storage Engines
MariaDB comes with quite a few storage engines. They all store your data and they all
have their pro’s and cons depending on the usage scenario.

The storage engines you’ll come across most frequently are:

MyISAM: this is the oldest storage engine from MySQL; and is not transactional
Aria: a modern improved version of MyISAM
InnoDB: a transactional general purpose storage engine.
XtraDB: a performance improved version of InnoDB. It is meant to be near 100%
compatible with InnoDB. From MariaDB 10.0.15 on it is the default storage
engine.

We recommend to use XtraDB as it is a transactional DB engine with better performance.

Using MySQLTuner
MySQLTuner is a free Perl script that can be run on your database which will give you a
list of recommendations to execute to improve performance.

It uses statistics available in MySQL/MariaDB to give reliable recommendations.

It is advised to run this script when your database has been up for at least a day or longer.
Otherwise the recommendations may be inaccurate.

Here is how to run the MySQLTuner script:



$ cd ~
$ wget http://mysqltuner.pl/ -O mysqltuner.pl
$ perl mysqltuner.pl

MySQLTuner

For our database it advises eg. to defragment our tables and add more indexes where
we are joining tables together. Here is how you can defragment all databases with
MySQL/MariaDB:
$ sudo mysqlcheck -uroot -p<password> -o --all-databases

To lookup slow queries (possibly due to missing indexes), take a look at the queries
logged in:



$ cat /var/log/mysql/slowqueries.log

It’ll enable you or your application developers to analyze what is wrong and how to make
the queries more performant.
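If the slow query log is not enabled yet, you can turn it on in the MariaDB configuration (a
sketch; the log file path matches the one used above, and the 2 second threshold is our
choice):

$ sudo nano /etc/mysql/my.cnf

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slowqueries.log
long_query_time = 2

Restart MariaDB afterwards so the settings take effect.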

Also take a look at the “highest usage of available connections”. If this is much lower
than your max_connections setting, then you’ll be wasting memory which is never used.
For example on our server the max_connections setting is at 300 simultaneous
connections while the highest number of concurrent sessions is only 30. This makes it
possible to reduce the max_connections setting to eg. 150.

Enabling HugePages
What are huge pages or Large Pages?

When a process uses some memory, the CPU is marking the RAM as used by that
process. For efficiency, the CPU allocates RAM by chunks of 4K bytes (it’s the default
value on many platforms). Those chunks are named pages. Those pages can be swapped
to disk, etc.

The CPU and the operating system have to remember which page belongs to which
process, and where it is stored. Obviously, the more pages you have, the more time it
takes to find where the memory is mapped.

A Translation-Lookaside Buffer (TLB) is a page translation cache inside the CPU that
holds the most-recently used virtual-to-physical memory address translations.

The TLB is a scarce system resource. A TLB miss can be costly as the processor must
then read from the hierarchical page table, which may require multiple memory accesses.

By using a bigger page size, a single TLB entry can represent a larger memory range; as
such there will be fewer Translation Lookaside Buffer (TLB) cache misses, improving
performance.

Most current CPU architectures support bigger pages than the default 4K in Linux. Linux
supports these bigger pages via the Huge pages functionality since Linux kernel 2.6. By
default this support is turned off.

Note that enabling Large Pages can reduce performance if your VPS doesn’t have ample
RAM available. This is because when a large amount of memory is reserved by an
application, it may create a shortage of regular memory and cause excessive paging in
other applications and slow down the entire system.

We recommend to enable it only when you have enough RAM in your VPS (2GB or
more).



In this section we will enable MariaDB to use HugePages. In later chapters we will also
enable this for Memcached and Java.
How do I verify that my kernel supports hugepage?

Type the following command:


$ grep -i huge /proc/meminfo

A kernel with Hugepage support should give a similar output like below:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB

You can see a single hugepage is 2048KB (2MB) in size on our system. Support for
Hugepage is also one of the reasons to use a recent kernel. The total number of
HugePages is zero because we have not yet enabled HugePages support.
Enabling HugePages

We can enable HugePages via the sysctl.conf configuration file:


$ sudo nano /etc/sysctl.conf

Then add:
# Enable Large Pages; in our case 1 page = 2048KB
vm.nr_hugepages = 512

We reserve 1GB of RAM (512 pages * 2048KB) for Large Pages use. You should adjust this
setting based on how much memory your VPS has. We have 4GB of RAM available, which
means we reserved 25%. You have to take into account that this RAM is not available to
processes that are not using Huge Pages.
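If your VPS has a different amount of RAM, you can derive the number of pages the same
way: divide the amount of memory you want to reserve (in KB) by the huge page size. For
our 1GB reservation:

$ echo $((1 * 1024 * 1024 / 2048))
512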

We also need to give MariaDB access to these HugePages. The setting
vm.hugetlb_shm_group in /etc/sysctl.conf tells the kernel which Linux group can access
the Large Pages. Effectively this means we have to create a group called for example
‘hugepage’ and make the MySQL/MariaDB user part of that group.
# Set the group number that is allowed to access the HugePages
vm.hugetlb_shm_group=<group number>

Because we want to allow more than one process to access the HugePages we will create
a group ‘hugepage’. Every user who needs access can then add this group to its list of
groups.

For example, for user mysql we have the following groups attached now:

$ id mysql
uid=108(mysql) gid=116(mysql) groups=116(mysql)

We will add the group hugepage to it.

First create the group ‘hugepage’:


$ sudo groupadd hugepage

Now add the user mysql to the group hugepage:


$ sudo usermod -a -G hugepage mysql

Now you can run id mysql again:


uid=108(mysql) gid=116(mysql) groups=116(mysql),1003(hugepage)

In our example you can now set the group number 1003 in the sysctl.conf:
$ sudo nano /etc/sysctl.conf
vm.hugetlb_shm_group=1003

Now we need to update the Linux shared memory kernel parameters SHMMAX and
SHMALL inside /etc/sysctl.conf

Shared Memory is a type of memory management in the Linux kernel. It is a memory
region which can be shared between different processes.

SHMMAX is the maximum size of a single shared memory segment. Its size is expressed
in bytes.
kernel.shmmax = 1073741825

It should be higher than the amount of memory in bytes allocated for Large Pages. As we
reserved 1GB of RAM this means: 1 * 1024 * 1024 * 1024 + 1 = 1073741825 bytes. Now
we want to specify that the max total shared memory (SHMALL) may not exceed 2GB
(50% of the available RAM):

2GB = 2 * 1024 * 1024 * 1024 = 2147483648 bytes

Now SHMALL is not measured in bytes but in pages, so in our example where 1 page is
4096 bytes in size, we need to:

2147483648 / 4096 = 524288


kernel.shmall = 524288
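You can verify both inputs of this calculation on your own server; getconf reports the
regular page size that the SHMALL unit is based on:

$ getconf PAGE_SIZE
4096
$ echo $((2 * 1024 * 1024 * 1024 / 4096))
524288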

There are some other configuration changes needed:

$ sudo nano /etc/security/limits.conf

Append the following lines:


mysql soft nofile 65536
mysql hard nofile 65536
@mysql soft memlock unlimited
@mysql hard memlock unlimited

The Linux OS can enforce limits on the memory a process/user can consume. We adjust the
memlock parameter to set no limit for the mysql user. We also set that the mysql user can
open at most 65536 files at the same time. In the next section we will set the open-files-limit
MariaDB parameter to the same 65536 value. (MariaDB can’t set its open_files_limit to
anything higher than what was specified for user mysql in limits.conf.)
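After the reboot below you can spot-check these limits. Note that this is only a quick
sanity check: whether the limits.conf values apply to a sudo session depends on your
PAM configuration.

$ sudo -u mysql bash -c 'ulimit -n; ulimit -l'
65536
unlimited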

Now reboot the server


$ sudo reboot

After reboot your HugePages should now be enabled:


$ cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: <x>
HugePages_Rsvd: <y>
HugePages_Surp: 0
Hugepagesize: 2048 kB

$ ipcs -lm

shows information about the shared memory: maximum size, max. segment size and
more. You can use it to verify the settings you’ve made in the sysctl.conf file after reboot.

System Variables
Tuning a database server is a vast subject which could span a whole book on its own. We
will only scratch the surface in this and the following sections. We will optimize the
configuration of the MyISAM and InnoDB/XtraDB storage engines.

We will give you some recommendations and tools. Of course your settings could be
different due to having more or less memory or applications which use the database
differently.

The MariaDB/MySQL settings are located at: /etc/mysql/my.cnf.

We have based this configuration file on
https://github.com/Fleshgrinder/mysql-mariadb-configuration/blob/master/my.cnf

It’s targeted at a VPS with 2GB RAM and limited computing power.

The configuration file is split into three parts:

[client]: configuration settings that are read by all client programs (eg. PHP accessing the database)
[mysqld_safe]: configuration settings for the mysqld_safe daemon (see below)
[mysqld]: general MySQL/MariaDB configuration settings

mysqld_safe starts mysqld, the MySQL daemon, and checks its exit code. If mysqld did
not end due to a system shutdown or a normal service mysql stop, mysqld_safe will
attempt to restart it.
$ sudo nano /etc/mysql/my.cnf
# ----------------------------------------------------------------------
# CLIENT CONFIGURATION
# ----------------------------------------------------------------------

[client]
# As always, all charsets default to utf8.
default_character_set = utf8

# The listening socket.


socket = /var/run/mysqld/mysqld.sock

# Port where MariaDB listens on


port = 3306

[mysqld_safe]
# mysqld_safe is the recommended way to start a mysqld server on Unix. mysqld_sa\
fe adds some safety features such as restarting the server when an error occurs \
and logging runtime information to an error log file.
# Write the error log to the given file.
#
# DEFAULT: syslog
general-log-file = /var/log/mysql/mysql.log
log-error = /var/log/mysql/mysqld-safe-error.log

# The process priority (nice). Enter a value between -19 and 20; where
# -19 means highest priority.
#
# SEE: man nice
# DEFAULT: 0
nice = 0

# Do not write error messages to syslog; use error log file


#
# DEFAULT: false
skip_syslog = true

# The Unix socket file that the server should use when listening for
# local connections
socket = /var/run/mysqld/mysqld.sock

# The number of file descriptors available to mysqld. Increase if you are gettin\
g the Too many open files error.
open-files-limit = 65536

# Global server configuration
[mysqld]
log-error = /var/log/mysql/mysqld-error.log
log-warnings=2

# Port
port = 3306

# The number of file descriptors available to mysqld. Increase if you are gettin\
g the Too many open files error.
open-files-limit = 65536

# If you run multiple servers that use the same database directory (not recommen\
ded), each server must have external locking enabled.
skip-external-locking = true

# Instead of skip-networking the default is now to listen only on


# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
# When a new connection to MySQL is made, it can go into the back_log, which eff\
ectively serves as a queue for new connections on the operating system side to a\
llow MySQL to handle spikes.
# This limit is also defined by the operating system. You should not set this to\
 the maximum the system allows, otherwise you might not be able to log in if all\
 handles are in use!
#
# The maximum value on Linux is directed by tcp_max_syn_backlog sysctl parameter
# net.ipv4.tcp_max_syn_backlog = 10240
#
# When the connection is in the back_log, the client will have to wait until the\
server is ready to process the connection
#
# SEE:
# http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_ba\
ck_log
# http://www.mysqlperformanceblog.com/2012/01/06/mysql-high-number-connections-\
per-secon/
#
# DEFAULT: OS value
back_log = 500
# The MySQL installation base directory. Relative path names for other
# variables usually are resolved to the base directory.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_basedir
# DEFAULT:
basedir = /usr

# ALL character set related options are set to UTF-8. We do not support
# any other character set unless explicitly stated by the user who is
# working with our database.
character_set_server = utf8

# ALL collations are set to utf8_general_ci because it's the fastest!


collation_server = utf8_general_ci

# MyISAM specific setting: the storage engine supports concurrent
# inserts to reduce contention between readers and writers for a given
# table.
#
# 0 = Disables concurrent inserts
# 1 = Enables concurrent insert for MyISAM table that do not have holes.
# 2 = Enables concurrent insert for all MyISAM tables.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_concurrent_insert
# SEE: http://dev.mysql.com/doc/refman/5.5/en/concurrent-inserts.html
# DEFAULT: 1
concurrent_insert = 2

# Lower limit than the default on a 64bit OS


myisam_max_sort_file_size = 2048M

# http://major.io/2007/08/03/obscure-mysql-variable-explained-max_seeks_for_key/
max_seeks_for_key = 1000

# The number of seconds that mysqld server waits for a connect packet
# before responding with "Bad Handshake". The default value is 10 sec
# as of MySQL 5.0.52 and 5 seconds before that. Increasing the
# connect_timeout value might help if clients frequently encounter
# errors of the form "Lost connection to MySQL server at 'XXX', system
# error: errno".
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_connect_timeout
# DEFAULT: 10
connect_timeout = 5

# The MySQL data directory.


#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_datadir
# DEFAULT:
datadir = /var/lib/mysql

# The default storage engine to use on server startup. This storage


# engine will be used for all databases and tables unless the user sets
# a different engine in his session or query while creating a new table.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_default_storage_engine
# DEFAULT: InnoDB (since 5.5.5)
default_storage_engine = innodb

# Enable the MariaDB User Feedback Plugin.


#
# SEE: http://kb.askmonty.org/en/user-feedback-plugin/
enable_feedback = true

# The number of days for automatic binary log file removal.


#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_expire_logs_days
# DEFAULT: 0
# expire_logs_days = 10

# The minimum size of the buffer that is used for plain index scans,
# range index scans, and joins that do not use indexes and thus perform
# full table scans. High values do not mean high performance. You should
# not set this to a very large amount globally. Instead stick to a
# small value and increase it only in sessions that are doing large
# joins. Drupal is performing a lot of joins, so we set this to a
# reasonable value.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_join_buffer_size
# http://serverfault.com/questions/399518/join-buffer-size-4-m-is-not-advised
# DEFAULT: ?
join_buffer_size = 128K

# Index blocks for MyISAM tables are buffered and shared by all threads.
# The key_buffer_size is the size of the buffer used for index blocks.
# The key buffer is also known as the key cache.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_key_buffer_size
# http://www.ewhathow.com/2013/09/what-is-the-recommended-value-of\
-key_buffer_size-in-mysql/
# DEFAULT: 8388608
key_buffer_size = 32M

# Number of open tables for all threads. See Optimizing table_open_cache for sug\
gestions on optimizing. Increasing table_open_cache increases the number of file\
descriptors required.
# https://mariadb.com/kb/en/optimizing-table_open_cache/
table_open_cache = 2048

# Whether large page support is enabled. You must ensure that your
# server has large page support and that it is configured properly. This
# can have a huge performance gain, so you might want to take care of
# this.
#
# You MUST have enough hugepages size for all buffers you defined.
# Otherwise you'll see errno 12 or errno 22 in your error logs!
# Hugepages can give you a hell of a headache if numbers aren't calc-
# ulated wisely, but it's totally worth it as you gain a lot of
# performance if you're handling huge data.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/large-page-support.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_large_pages
# DEFAULT: 0
large_pages = true

# The locale to use for error messages.


#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/error-message-language.html
# DEFAULT: ?
lc_messages = en_US

# The directory where error messages are located.


#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/error-message-language.html
# DEFAULT: ?
lc_messages_dir = /usr/share/mysql

# The purpose of the binary log is to allow replication, where data is sent from\
one or more masters to one or more slave servers based on the contents of the b\
inary log,
# as well as assisting in backup operations.
#
# Whether the binary log is enabled. If the --log-bin option is used,
# then the value of this variable is ON; otherwise it is OFF. This
# variable reports only on the status of binary logging (enabled or
# disabled); it does not actually report the value to which --log-bin is
# set.
#
# https://mariadb.com/kb/en/replication-and-binary-log-server-system-variables/#\
log_bin
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_log_bin
# DEFAULT: OFF
# log_bin = /var/log/mysql/mariadb-bin

# The index file for binary log file names. See Section 5.2.4, The
# Binary Log. If you omit the file name, and if you did not specify one
# with --log-bin, MySQL uses host_name-bin.index as the file name.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.htm\
l#option_mysqld_log-bin-index
# DEFAULT: OFF
# log_bin_index = /var/log/mysql/mariadb-bin.index

# Specifies that only a fraction of sessions should be logged. Logging


# is enabled for every nth session. By default, n is 1, so logging is
# enabled for every session. Rate limiting is disabled for the
# replication thread.
#
# SEE: http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_5\
5.html#log_slow_rate_limit
log_slow_rate_limit = 1
# Queries that don't use an index, or that perform a full index scan where the i\
ndex doesn't limit the number of rows, will be logged to the slow query log.
# The slow query log needs to be enabled
# for this to have an effect.
log_queries_not_using_indexes = true

# Specifies how much information to include in your slow log. The value
# is a comma-delimited string, and can contain any combination of the
# following values:
#
# - microtime: Log queries with microsecond precision (mandatory).
# - query_plan: Log information about the query's execution plan (optional).
# - innodb: Log InnoDB statistics (optional).
# - full: Equivalent to all other values OR'ed together.
# - profiling: Enables profiling of all queries in all connections.
# - profiling_use_getrusage: Enables usage of the getrusage function.
#
# Values are OR'ed together.
#
# For example, to enable microsecond query timing and InnoDB statistics,
# set this option to microtime,innodb. To turn all options on, set the
# option to full.
#

# SEE: http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_5\
5.html#log_slow_verbosity
log_slow_verbosity = query_plan
# Print out warnings such as Aborted connection... to the error log.
# Enabling this option is recommended, for example, if you use
# replication (you get more information about what is happening, such as
# messages about network failures and reconnections). This option is
# enabled (1) by default, and the default level value if omitted is 1.
# To disable this option, use --log-warnings=0. If the value is greater
# than 1, aborted connections are written to the error log, and access-
# denied errors for new connection attempts are written.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_\
log-warnings
# SEE: http://dev.mysql.com/doc/refman/5.5/en/communication-errors.html
# DEFAULT: 1
log_warnings = 2

# If a query takes longer than this many seconds, the server increments
# the Slow_queries status variable. If the slow query log is enabled,
# the query is logged to the slow query log file. This value is measured
# in real time, not CPU time, so a query that is under the threshold on
# a lightly loaded system might be above the threshold on a heavily
# loaded one. The minimum and default values of long_query_time are 0
# and 10, respectively. The value can be specified to a resolution of
# microseconds. For logging to a file, times are written including the
# microseconds part. For logging to tables, only integer times
# are written; the microseconds part is ignored.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_long_query_time
# SEE: http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html
# DEFAULT: 10
long_query_time = 1
# The maximum size of one packet or any generated/intermediate string.
# The packet message buffer is initialized to net_buffer_length bytes,
# but can grow up to max_allowed_packet bytes when needed. This value by
# default is small, to catch large (possibly incorrect) packets. You
# must increase this value if you are using large BLOB columns or long
# strings. It should be as big as the largest BLOB you want to use. The
# protocol limit for max_allowed_packet is 1GB. The value should be a
# multiple of 1024; nonmultiples are rounded down to the nearest
# multiple. When you change the message buffer size by changing the
# value of the max_allowed_packet variable, you should also change the
# buffer size on the client side if your client program permits it. On
# the client side, max_allowed_packet has a default of 1GB. Some
# programs such as mysql and mysqldump enable you to change the client-
# side value by setting max_allowed_packet on the command line or in an
# option file. The session value of this variable is read only.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_max_allowed_packet
# DEFAULT: ?
max_allowed_packet = 16M

# If a write to the binary log causes the current log file size to
# exceed the value of this variable, the server rotates the binary logs
# (closes the current file and opens the next one). The minimum value is
# 4096 bytes. The maximum and default value is 1GB. A transaction is

# written in one chunk to the binary log, so it is never split between
# several binary logs. Therefore, if you have big transactions, you
# might see binary log files larger than max_binlog_size. If
# max_relay_log_size is 0, the value of max_binlog_size applies to relay
# logs as well.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.htm\
l#sysvar_max_binlog_size
# DEFAULT: 1073741824
# max_binlog_size = 100M
# The maximum permitted number of simultaneous client connections.
# Increasing this value increases the number of file descriptors that
# mysqld requires.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_max_connections
# DEFAULT: 151
max_connections = 300

# This variable sets the maximum size to which user-created MEMORY


# tables are permitted to grow. The value of the variable is used to
# calculate MEMORY table MAX_ROWS values. Setting this variable has no
# effect on any existing MEMORY table, unless the table is re-created
# with a statement such as CREATE TABLE or altered with ALTER TABLE or
# TRUNCATE TABLE. A server restart also sets the maximum size of
# existing MEMORY tables to the global max_heap_table_size value. This
# variable is also used in conjunction with tmp_table_size to limit the
# size of internal in-memory tables.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_max_heap_table_size
# SEE: http://dev.mysql.com/doc/refman/5.5/en/internal-temporary-tables.html
# SEE: http://www.mysqlperformanceblog.com/2007/01/19/tmp_table_size-and-max_hea\
p_table_size/
# DEFAULT: 16777216
max_heap_table_size = 32M
# Sets the MyISAM storage engine recovery mode.
#
# Before the server automatically repairs a table, it writes a note about the re\
pair to the error log. If you want to be able to recover from most problems with\
out user intervention, you should use the options BACKUP,FORCE. FORCE makes the \
repair run even if some rows would be lost from the data file.
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_\
myisam-recover-options
# DEFAULT: OFF
myisam_recover_options = BACKUP

# The size of the buffer that is allocated when sorting MyISAM indexes
# during a REPAIR TABLE or when creating indexes with CREATE INDEX or
# ALTER TABLE.
#
# The maximum permissible setting for myisam_sort_buffer_size is 4GB.
# Values larger than 4GB are permitted for 64-bit platforms (except
# 64-bit Windows, for which large values are truncated to 4GB with a
# warning).
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_myisam_sort_buffer_size

# DEFAULT: 8388608
myisam_sort_buffer_size = 512M

# The absolute path to the process identifier (PID) file.


pid_file = /var/run/mysqld/mysqld.pid

# Do not cache results that are larger than this number of bytes. The
# default value is 1MB.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_limit
# DEFAULT: 1048576
query_cache_limit = 512K

# The amount of memory allocated for caching query results. The permiss-
# ible values are multiples of 1024; other values are rounded down to
# the nearest multiple. The query cache needs a minimum size of about
# 40KB to allocate its structures.
#
# 256 MB for every 4GB of RAM
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_size
# DEFAULT: 0
query_cache_size = 128M

# Sets the global query cache type. There are three possible enumeration
# values:
# 0 = Off
# 1 = Everything will be cached; except for SELECT SQL_NO_CACHE
# 2 = Only SELECT SQL_CACHE queries will be cached
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_type
# DEFAULT: 1
# http://major.io/2007/08/08/mysqls-query-cache-explained/
query_cache_type = 1

# Each thread that does a sequential scan for a MyISAM table allocates
# a buffer of this size (in bytes) for each table it scans. If you do
# many sequential scans, you might want to increase this value.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_read_buffer_size
# http://www.mysqlperformanceblog.com/2007/09/17/mysql-what-read_buffer_size-val\
ue-is-optimal/
# DEFAULT: 131072
read_buffer_size = 128K
# When reading rows from a MyISAM table in sorted order following a key-
# sorting operation, the rows are read through this buffer to avoid disk
# seeks. Setting the variable to a large value can improve ORDER BY
# performance by a lot. However, this is a buffer allocated for each
# client, so you should not set the global variable to a large value.
# Instead, change the session variable only from within those clients
# that need to run large queries.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/order-by-optimization.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\

r_read_rnd_buffer_size
# DEFAULT: 262144
read_rnd_buffer_size = 256K

# Only use IP numbers and all Host columns values in the grant tables
# must be IP addresses or localhost.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/host-cache.html
# DEFAULT: false
skip_name_resolve = true

# if set to 1, the slow query log is enabled. See log_output to see how log file\
s are written
slow_query_log=1

# location of the slow query log


slow_query_log_file = /var/log/mysql/slowqueries.log

# The absolute path to the Unix socket where MySQL is listening for
# incoming client requests.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_socket
# DEFAULT: /tmp/mysql.sock
socket = /var/run/mysqld/mysqld.sock

# Each session that needs to do a sort allocates a buffer of this size.


# This is not specific to any storage engine and applies in a general
# manner for optimization.
#
# It is my understanding that the sort_buffer is used when no index are availabl\
e to help the sorting
#
# http://www.mysqlperformanceblog.com/2007/08/18/how-fast-can-you-sort-data-with\
-mysql/
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/order-by-optimization.html
# DEFAULT: 2097144
sort_buffer_size = 256K

# Size in bytes of the per-thread cache tree used to speed up bulk inserts into \
MyISAM and Aria tables. A value of 0 disables the cache tree
bulk_insert_buffer_size = 16M

# SEE: http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
# SEE: http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scala\
bility/
# SEE: http://www.mysqlperformanceblog.com/2009/11/26/more-on-table_cache/
# DEFAULT: ?
table_cache = 400
# Should be the same as table_cache
table_definition_cache = 400

# How many threads the server should cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there are
# fewer than thread_cache_size threads there. Requests for threads are
# satisfied by reusing threads taken from the cache if possible, and
# only when the cache is empty is a new thread created. This variable

# can be increased to improve performance if you have a lot of new
# connections. Normally, this does not provide a notable performance
# improvement if you have a good thread implementation. However, if
# your server sees hundreds of connections per second, cached threads help. By
# examining the difference between the connections and threads created
# status variables, you can see how efficient the thread cache is.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-status-variables.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_thread_cache_size
# http://serverfault.com/questions/408845/what-value-of-thread-cache-size-should\
-i-use
# DEFAULT: 0
thread_cache_size = 128

# The maximum size of internal in-memory temporary tables. (The actual


# limit is determined as the minimum of tmp_table_size and
# max_heap_table_size.) If an in-memory temporary table exceeds the
# limit, MySQL automatically converts it to an on-disk MyISAM table.
# Increase the value of tmp_table_size (and max_heap_table_size if
# necessary) if you do many advanced GROUP BY queries and you have lots
# of memory. This variable does not apply to user-created MEMORY tables.
#
# You can compare the number of internal on-disk temporary tables
# created to the total number of internal temporary tables created by
# comparing the values of the Created_tmp_disk_tables and
# Created_tmp_tables variables.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_tmp_table_size
# SEE: http://dev.mysql.com/doc/refman/5.5/en/internal-temporary-tables.html
# SEE: http://www.mysqlperformanceblog.com/2007/01/19/tmp_table_size-and-max_hea\
p_table_size/
# DEFAULT: OS value
tmp_table_size = 32M

# The directory used for temporary files and temporary tables.


#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_tmpdir
# DEFAULT: ?
tmpdir = /tmp

# The number of seconds the server waits for activity on a


# noninteractive connection before closing it.
#
# On thread startup, the session wait_timeout value is initialized from
# the global wait_timeout value or from the global interactive_timeout
# value, depending on the type of client (as defined by the
# CLIENT_INTERACTIVE connect option to mysql_real_connect()).
#
# SEE: interactive_timeout
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_wait_timeout
# DEFAULT: 28800
wait_timeout = 120

# Run the mysqld server as the user having the name user_name or the
# numeric user ID user_id

#
# DEFAULT: root
user = mysql
# ----------------------------------------------------------------------
# INNODB / XTRADB CONFIGURATION
# ----------------------------------------------------------------------

# Enable Facebook's defragmentation code in MariaDB 10.1


innodb-defragment=1

# The size of the memory buffer InnoDB / XtraDB uses to cache data and
# indexes of its tables. The larger this value, the less disk I/O is
# needed to access data in tables. A safe value is 50% of the available
# operating system memory.
#
# total_size_databases + (total_size_databases * 0.1) = innodb_buffer_pool_size
#
# SEE: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_poo\
l_size/
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_buffer_pool_size
# DEFAULT: 128M
innodb_buffer_pool_size = 256M

# If enabled, InnoDB / XtraDB creates each new table using its own .ibd
# file for storing data and indexes, rather than in the system table-
# space. Table compression only works for tables stored in separate
# tablespaces.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_file_per_table
# DEFAULT: FALSE
innodb_file_per_table = 1

# This setting can have a positive or negative effect on performance and


# should always be tested on the current server the DBS is running on!
#
# http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_siz\
e/
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_flush_method
# DEFAULT: fdatasync
innodb_flush_method = O_DIRECT
# "InnoDB pages are organized in blocks of 64 pages. When the check-
# pointing algorithm has picked a dirty page to be written to disk, it
# checks if there are more dirty pages in the block and if yes, writes
# all those pages at once. The rationale is, that with rotating disks
# the most expensive part of a write operation is head movement. Once
# the head is over the right track, it does not make much difference if
# we write 10 or 100 sectors."
# ~~ Axel Schwenke
#
# Use none if you are on an SSD drive!
#
# SEE: https://mariadb.com/blog/how-tune-mariadb-write-performance
# DEFAULT: area
# innodb_flush_neighbor_pages = none
# renamed to innodb_flush_neighbors in MariaDB 10!

innodb_flush_neighbors=0
# An upper limit on the I/O activity performed by the InnoDB background
# tasks, such as flushing pages from the buffer pool and merging data
# from the insert buffer.
#
# Refer to the manual of your drive to find out the IOPS.
#
# You can monitor the IOPS with e.g. iostat (package sysstat on Debian).
#
# SEE: http://blog.mariadb.org/how-to-tune-mariadb-write-performance/
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-performance-thread_io_rate.\
html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-diskio.html
# DEFAULT: 200
#
# Some commonly used SSD drives.
#innodb_io_capacity = 400 # Simple SLC SSD
#innodb_io_capacity = 5000 # Intel X25-E
#innodb_io_capacity = 20000 # G-Skill Phoenix Pro
#innodb_io_capacity = 60000 # OCZ Vertex 3
#innodb_io_capacity = 120000 # OCZ Vertex 4
#innodb_io_capacity = 200000 # OCZ RevoDrive 3 X2
#innodb_io_capacity = 100000 # Samsung 840 Pro
#
# Only really fast SAS drives (15,000 rpm) are capable of reaching 200
# IOPS. You might consider lowering the value if you are using a slower
# drive.
#innodb_io_capacity = 100 # 7,200 rpm
#innodb_io_capacity = 150 # 10,000 rpm
#innodb_io_capacity = 200 # 15,000 rpm default
#
# I have an SAS RAID.
# http://www.mysqlplus.net/2013/01/07/play-innodb_io_capacity/
# the InnoDB write threads will throw more data at the disks every second than \
they can possibly handle, and you'll get I/O queueing.
innodb_io_capacity = 50000

# Default values
innodb_read_io_threads = 4
innodb_write_io_threads = 4

# Number of threads dedicated to XtraDB purge operations. If set to 0, the defau\
lt, purging is done with the master thread. If set to 1, the current maximum, pu\
rging is done on a separate thread, which could reduce contention.
innodb_purge_threads = 1

# If set to 1, the default, to improve fault tolerance InnoDB first stores data \
to a doublewrite buffer before writing it to the data file. Disabling it provide\
s a marginal performance improvement.
innodb_doublewrite = 1

# Individual InnoDB data files, paths and sizes.


# Default values
innodb_data_file_path= ibdata1:10M:autoextend

# The size in bytes of the buffer that InnoDB uses to write to the log
# files on disk. A large log buffer enables large transactions to run
# without a need to write the log to disk before the transaction commit.

# Thus, if you have big transactions, making the log buffer larger saves
# disk I/O.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_log_buffer_size
# DEFAULT: 8388608
innodb_log_buffer_size = 8M

# This variable is relevant only if you use multiple tablespaces in


# InnoDB. It specifies the maximum number of .ibd files that InnoDB can
# keep open at one time. The minimum value is 10. The file descriptors
# use for .ibd files are for InnoDB only. They are independent of those
# specified by the --open-files-limit server option, and do not affect
# the operation of the table cache.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_open_files
# DEFAULT: 300
innodb_open_files = 1000
# http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-\
log-file-size/
innodb_log_file_size = 64M

# If set to 1, the default, the log buffer is written to the log file and a flus\
h to disk performed after each transaction. This is required for full ACID compl\
iance. If set to 0, nothing is done on commit; rather the log buffer is written \
and flushed to disk approximately once a second. If set to 2, the log buffer is \
written after each commit, but flushed to disk only approximately once a second.
innodb_flush_log_at_trx_commit = 2

# Once this number of threads is reached (excluding threads waiting for locks), \
XtraDB/InnoDB will place new threads in a wait state in a first-in, first-out qu\
eue for execution, in order to limit the number of threads running concurrently.
innodb_thread_concurrency = 9

# Maximum length in bytes of the returned result for a GROUP_CONCAT() function


group_concat_max_len = 1024

# If the extra columns used for the modified filesort algorithm would contain mo\
re bytes than this figure, the regular filesort algorithm is used instead. Setti\
ng this too high can lead some sorts to cause higher disk activity and lower per\
formance.
max_length_for_sort_data = 1024

# The starting size, in bytes, for the connection and thread buffers for each cl\
ient thread. The size can grow to max_allowed_packet. This variable's session va\
lue is read-only. Can be set to the expected length of client statements if memo\
ry is limited.
net_buffer_length = 16384

# Limit to the number of successive failed connects from a host before the host \
is blocked from making further connections. The count for a host is reset to ze\
ro if they successfully connect. To unblock, flush the host cache with a FLUSH H\
OSTS statement.
max_connect_errors = 10

# Minimum size in bytes of the blocks allocated for query cache results.
# http://dba.stackexchange.com/questions/42993/mysql-settings-for-query-cache-mi\
n-res-unit
query_cache_min_res_unit = 2K

# Size in bytes of the persistent buffer for query parsing and execution, alloca\
ted on connect and freed on disconnect. Increasing it may be useful if complex q\
ueries are being run, as this will reduce the need for extra memory allocations \
during query execution.

query_prealloc_size = 262144

# Size in bytes of the extra blocks allocated during query parsing and executio\
n (after query_prealloc_size is used up).
query_alloc_block_size = 65536

# Size in bytes to increase the memory pool available to each transaction when t\
he available pool is not large enough
transaction_alloc_block_size = 8192
# Initial size of a memory pool available to each transaction for various memory\
 allocations. If the memory pool is not large enough for an allocation, it is in\
creased by transaction_alloc_block_size bytes, and truncated back to transaction\
_prealloc_size bytes between transactions.
transaction_prealloc_size = 4096

#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

# MariaDB 5.5 Threadpooling


# https://mariadb.com/kb/en/mariadb/threadpool-in-55/

thread_handling=pool-of-threads

# ----------------------------------------------------------------------
# MYSQLDUMP CONFIGURATION
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
# ----------------------------------------------------------------------

[mysqldump]

# This option is useful for dumping large tables. It forces mysqldump


# to retrieve rows for a table from the server a row at a time rather
# than retrieving the entire row set and buffering it in memory before
# writing it out.
quick = true

# Quote identifiers (such as database, table, and column names) within


# '' characters. If the ANSI_QUOTES SQL mode is enabled, identifiers
# are quoted within " characters. This option is enabled by default.
# It can be disabled with --skip-quote-names, but this option should be
# given after any option such as --compatible that may enable
# --quote-names.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_qu\
ote-names
# DEFAULT:
quote_names = true

# The maximum size of the buffer for client/server communication.

max_allowed_packet = 32M

[myisamchk]
# SEE: mysqld.key_buffer
key_buffer = 32M
sort_buffer = 16M
read_buffer = 16M
write_buffer = 16M

[mariadb]
# Default is Latin1, if you need UTF-8 set all this (also in client section)
#
character-set-server = utf8
collation-server = utf8_general_ci
character_set_server = utf8
collation_server = utf8_general_ci

userstat = 0

# https://mariadb.com/kb/en/segmented-key-cache/
# For all practical purposes setting key_cache_segments = 1 should be slower tha\
n any other option and should not be used in production.
key_cache_segments = 0

aria_log_file_size = 32M
aria_log_purge_type = immediate

# The size of the buffer used for index blocks for Aria tables. Increase this to\
get better index handling (for all reads and multiple writes) to as much as you\
can afford.
aria_pagecache_buffer_size = 128M

# The buffer that is allocated when sorting the index when doing a REPAIR or wh\
en creating indexes with CREATE INDEX or ALTER TABLE.
aria_sort_buffer_size = 512M

[mariadb-5.5]
# If set to 1 (0 is default), the server will strip any comments from the query \
before searching to see if it exists in the query cache.
query_cache_strip_comments=1

# http://www.mysqlperformanceblog.com/2010/12/21/mysql-5-5-8-and-percona-server-\
on-fast-flash-card-virident-tachion/
innodb_read_ahead = none
# ----------------------------------------------------------------------
# Location from which mysqld might load additional configuration files.
# Note that these configuration files might override values set in this
# configuration file!
# !includedir /etc/mysql/conf.d/
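After saving my.cnf, restart MariaDB and spot-check that a couple of the values set
above were actually picked up (the variable names are the ones from this file):

$ sudo service mysql restart
$ mysqladmin -uroot -p variables | grep -E 'max_connections|query_cache_size'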

Enabling Logrotation for MariaDB


Log files from a running MariaDB can quickly take up a lot of disk space when your site(s)
get a lot of visitors.

We’ll setup the Linux ‘logrotate’ service to solve this problem automatically.

‘logrotate’ provides us with a lot of options:

renaming (rotating) & compressing log files when they reach a certain size
keeping compressed backups for log files (with limits)

In the end these options make sure that the disk space taken by the log files stays bounded.

In this section we will logrotate the MariaDB log files, so that they’ll not fill the entire
disk. MariaDB has a few different log files:

The error log: it contains information about errors that occur while the MariaDB
server is running
The general query log: it contains general information about the queries. In our
configuration we haven’t enabled this log
The slow query log: it consists of slow queries. This is very useful to find SQL
queries which need to be optimized.

Let’s check which log files we have specified in our my.cnf MariaDB configuration:
$ sudo cat /etc/mysql/my.cnf | grep "\.log"
general-log-file = /var/log/mysql/mysql.log
log-error = /var/log/mysql/mysqld-safe-error.log
log-error = /var/log/mysql/mysqld-error.log
slow_query_log_file = /var/log/mysql/slowqueries.log

Now let’s create the logrotate configuration file for MariaDB:


$ sudo nano /etc/logrotate.d/mysql-server
/var/log/mysql/mysql.log /var/log/mysql/mysqld-safe-error.log /var/log/mysql/mys\
qld-error.log /var/log/mysql/slowqueries.log {
daily
rotate 7
size=100M
missingok
create 640 mysql mysql
compress
sharedscripts
postrotate
test -x /usr/bin/mysqladmin || exit 0

# If this fails, check debian.conf!


MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
# Really no mysqld or rather a missing debian-sys-maint user?
# If this occurs and is not a error please report a bug.
if ps cax | grep -q mysqld; then
exit 1
fi
else
$MYADMIN flush-logs
fi

endscript
}

Here are the options we’ve chosen:

Autorotate the logs of MariaDB:

check daily, rotating a log file once it grows beyond 100MB (the size directive takes precedence over the daily interval)
keep at most 7 rotated backups before the oldest is deleted
compress the log files when rotating them
run the postrotate script after rotating, which tells MariaDB to flush its logs so it starts writing to the fresh files

You can test the new configuration before relying on it, as shown below.
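logrotate's -d flag performs a dry run that only prints what would happen, and -f forces
an immediate rotation so you can verify the whole chain, including the postrotate script:

$ sudo logrotate -d /etc/logrotate.d/mysql-server
$ sudo logrotate -f /etc/logrotate.d/mysql-server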

Diagnosing MySQL/MariaDB Startup Failures


In this section we will list some errors you can receive when starting MySQL with an
incorrect my.cnf configuration file. These errors are logged in the /var/log/mysql/mysqld-
safe-error.log or /var/log/mysql/mysqld-error.log log files.
Warning: Failed to allocate <X> bytes from HugeTLB memory

Warning: Failed to allocate 2097152 bytes from HugeTLB memory. errno 1


Warning: Using conventional memory pool

This means that the HugePages configuration was not set up properly. Please reread the
Enabling HugePages section to see if you missed some details.
InnoDB: Error: log file ./ib_logfile0 is of different size <x> <y> bytes

InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes


InnoDB: than specified in the .cnf file 0 67108864 bytes!

This can happen when you have changed the size of the InnoDB log file
(innodb_log_file_size) in my.cnf. We need to remove the old InnoDB redo log files.
$ cd /var/lib/mysql
$ sudo rm ib_logfile0
$ sudo rm ib_logfile1

For safety reasons, make a backup of these files first instead of deleting them right away.
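Here is a minimal sketch of such a safer sequence, assuming the default data directory
/var/lib/mysql: stop MariaDB, move the old redo log files aside instead of deleting them,
and remove the backups only once the server starts cleanly:

$ sudo service mysql stop
$ cd /var/lib/mysql
$ sudo mv ib_logfile0 ib_logfile0.bak
$ sudo mv ib_logfile1 ib_logfile1.bak
$ sudo service mysql start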

Installing nginx Webserver

Why nginx ?
nginx, pronounced ‘engine X’, has been gaining a lot of users in the Linux web server
market. Although Apache HTTP Server was still the market share leader in December 2014,
nginx is the new cool kid on the block.

So why is nginx so popular these days?

nginx has an event driven design which can make better use of the available hardware
than Apache’s process driven design. nginx can thus serve more concurrent clients with
higher throughput than Apache on the same hardware.

Another benefit is that configuring nginx is easier than Apache, in my opinion.

nginx gets regular releases that fix bugs and add new features (like the SPDY and
HTTP/2 support for enhancing the performance of https websites).

For new servers we thus recommend installing nginx instead of Apache.

Installing nginx
As nginx doesn’t support loading modules dynamically, we will compile nginx and its extra
modules from source.

There may be some precompiled packages available, but most of the time they don’t include
the extra modules we want (eg. the nginx Google Pagespeed plugin) or they come with old
versions of nginx and its plugins.

Because we want to use the newest versions we will compile everything from source.
This is actually pretty simple.

Download nginx
First we need to download the latest nginx version sources. These can be downloaded
from: http://nginx.org/en/download.html

From the commandline on your server:


$ cd ~
$ axel http://nginx.org/download/nginx-1.9.9.tar.gz
$ tar xvfz nginx-1.9.9.tar.gz

nginx 1.9.9 is now downloaded and unpacked.

Because we will add some plugins to Nginx, we will first download those plugin sources
before we start the compilation.

Download the nginx Pagespeed module


The nginx Pagespeed module from Google is a very nice plugin to enhance the
performance of your website. We will configure it in later chapters, for now we will
download it and make sure it’s available for nginx.

Here we follow the install instructions that can be found at
https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source
Install Dependencies
$ sudo apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev

Download ngx_pagespeed

$ cd ~
$ NPS_VERSION=1.10.33.2
$ wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}\
-beta.zip -O release-${NPS_VERSION}-beta.zip
$ unzip release-${NPS_VERSION}-beta.zip
$ cd ngx_pagespeed-release-${NPS_VERSION}-beta/
$ wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
$ tar -xzvf ${NPS_VERSION}.tar.gz # extracts to psol/

Install Perl Compatible Regular expressions


The PCRE library adds support for regular expressions in the nginx configuration files.
This comes in handy when eg. you want to add some redirect rules for parts of your site
(eg. /oldurl/* should go to /newurl/*).

The latest version is v8.37. PCRE2 has also been released recently but is not yet
compatible with nginx due to a changed API.

Here is how to download PCRE:


$ cd ~
$ wget ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.37.zip
$ unzip pcre-8.37.zip
$ cd pcre-8.37
$ ./configure
$ make -j4
$ sudo make install

make -j4 means we use 4 cores for the compilation process (as our VPS has 4 cores
available). Note that this may not always work; drop the flag if the build fails.
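If you are not sure how many cores your VPS has, you can ask the system instead of
hardcoding the number (nproc is part of GNU coreutils):

$ nproc
4
$ make -j"$(nproc)"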

Install NGX Cache Purge
nginx can be used to cache responses coming from a backend (eg. PHP WordPress or a
Java servlet container). It then plays a similar role to, for example, a Varnish cache.

While it can cache responses from a backend, there is no built-in support for
programmatically purging content from the cache. Why would you want this support?

A good use case is eg. a WordPress blog, which is written in PHP and stores its posts in a
database. Caching the content with the nginx FastCGI cache will make sure that most
visitors get a cached response back, for which no PHP or database calls are necessary.

Now if a new article is published on the WordPress blog, the nginx cache of eg. the
homepage should be purged, so that the new content becomes visible.

Purge support is available via the ngx_cache_purge module. We’ll cover the Wordpress
nginx Helper plugin (https://wordpress.org/plugins/nginx-helper/) in the Wordpress
chapter which works in conjunction with the nginx cache purge module.
$ cd ~
$ wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz
$ tar -xzvf ngx_cache_purge-2.3.tar.gz

Install set-misc-nginx module


The set-misc-nginx module adds various set-xxx directives which you can use in the
nginx configuration files. An overview can be found here
http://wiki.nginx.org/NginxHttpSetMiscModule
$ cd ~
$ wget https://github.com/openresty/set-misc-nginx-module/archive/v0.29.tar.gz
$ tar -xzvf v0.29.tar.gz

set-misc-nginx depends on the ngx_devel_kit module which we will download below:


$ cd ~
$ wget https://github.com/simpl/ngx_devel_kit/archive/master.zip
$ unzip master.zip

Compile nginx
$ cd ~
$ cd nginx-1.9.9
$ ./configure --sbin-path=/usr/local/sbin \
    --conf-path=/usr/local/nginx/conf/nginx.conf \
    --with-ipv6 \
    --add-module=../ngx_pagespeed-release-1.10.33.2-beta \
    --with-http_v2_module \
    --with-http_ssl_module \
    --with-http_gzip_static_module \
    --with-http_stub_status_module \
    --with-http_secure_link_module \
    --with-http_flv_module \
    --with-http_realip_module \
    --with-pcre=../pcre-8.37 \
    --with-pcre-jit \
    --add-module=../ngx_cache_purge-2.3 \
    --with-openssl=../openssl-1.0.2d \
    --with-openssl-opt=enable-tlsext \
    --add-module=../ngx_devel_kit-master \
    --add-module=../set-misc-nginx-module-0.29 \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --lock-path=/var/lock/nginx.lock \
    --pid-path=/var/run/nginx.pid
$ make
$ sudo make install

We’ll go over the options in the configure command one by one to explain why they are
used.

nginx Compilation Flags


--with-ipv6

This option is needed to be able to run your server on an IPv6 address.


--add-module=../ngx_pagespeed-release-1.10.33.2-beta

This option adds the Google Pagespeed module for nginx. We’ll configure this plugin in a
later chapter to enhance the performance of your site.
--with-http_spdy_module (deprecated since 1.9.5; see below for the HTTP/2 module)

This option enables the SPDY module. SPDY is a Google specification which
manipulates HTTP traffic, with the goal to reduce web page load latency. It uses
compression and prioritizes and multiplexes the transfer of a web page so that only one
connection per client is required. (eg. Getting the html, images, stylesheets and javascript
files all happens with a connection that is kept open).

SPDY will form the basis of the next generation standardized HTTP v2 protocol.

SPDY requires the use of TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.
--with-http_v2_module

The ngx_http_v2_module module (since nginx 1.9.5) provides support for HTTP/2 and
supersedes the ngx_http_spdy_module module. Note that accepting HTTP/2 connections
over TLS (https) requires the ‘Application-Layer Protocol Negotiation’ (ALPN) TLS
extension support, which is available only since OpenSSL version 1.0.2; which we have
installed in our OpenSSL chapter.
--with-http_ssl_module

Enables SSL / TLS support (eg. to run your website over https)

In 2014 Google launched the HTTPS everywhere initiative, which tries to make secure
communication with your website the default (https everywhere).

This YouTube video explains their reasoning:

Google HTTPS initiative

Reasons:

you want to protect the privacy of the visitors coming to your site
your visitors know that they are talking to your site (and not a malicious server)
your visitors can be sure that the content of your site was not altered in transit. (some
ISPs have added Google Adsense banners in the past)
nobody can eavesdrop on the communication between the visitor and your site. (eg.
think NSA or someone eavesdropping on an unencrypted Wifi channel at your local
coffeeshop)

This only works when your site is fully on https; not just the admin sections or shopping
cart.

To use https, you’ll also need to buy an SSL/TLS certificate from a certificate provider to
prove you’re the owner of your website domain. We’ll explain this whole process in later
chapters.
--with-http_gzip_static_module

Enables nginx to compress the html, css and javascript resources with gzip before sending
it to the browser. This will reduce the amount of data sent to the browser and increase the
speed. This module also allows to send precompressed .gz files if they are available on
the server.
--with-http_stub_status_module

This module enables you to view basic status information of your nginx server on a self
chosen URL.

Here is an example of the status output:


Active connections: 291
server accepts handled requests
16630948 16630948 31070465

Reading: 6 Writing: 179 Waiting: 106

Here you can see there are 291 active connections; for 6 of them nginx is reading the
request, for 179 nginx is writing the response, and 106 are idle keep-alive connections
waiting for a new request.

More info can be found at HTTP Stub Status module

When we create the nginx configuration we will show you how to define the URL where
this information will be visible.
--with-http_secure_link_module

The ngx_http_secure_link_module can be used to protect resources from unauthorized


access and to limit the lifetime of a link. (eg. this could be useful for a download link
which should only be available for a user with the correct link and for 5 days).

--with-http_flv_module

This module is useful when you’re hosting Flash Video files on your site (FLV files).

This module provides pseudo-streaming server-side support for Flash Video (FLV) files.
It handles requests with the start argument in the request URI’s query string specially, by
sending back the contents of a file starting from the requested byte offset and with the
prepended FLV header.
--with-http_realip_module

This module is used to get the real IP address of the client. When nginx is behind a proxy
server or load balancer, the IP address of the client will sometimes be the IP address of
the proxy server or load balancer, not the IP address of your visitor. To make sure the
correct IP is available for use, you can enable this module. This can also be useful when
you are trying to determine the country of the visitor from the IP address (eg. via the Geo
IP module of nginx).

There is a good tutorial at CloudFront which explains how to use this (advanced) module.
--with-pcre=../pcre-8.37

Directory where the PCRE library sources are located. It adds support for regular
expressions in the nginx configuration files (as explained previously)
--with-pcre-jit

Enables the Just In Time compiler for regular expressions. Improves the performance if
you have a lot of regular expressions in your nginx configuration files.
--add-module=../ngx_cache_purge-2.3

Directory where the nginx cache purge module is located.


--with-openssl=../openssl-1.0.2d

Directory where the OpenSSL sources are located


--with-openssl-opt=enable-tlsext

This is an important option which enables the TLS extensions. One such extension is
SNI or Server Name Indication.

This is used in the context of https sites. Normally only one site (https domain) can be
hosted on the same IP address and port number on the server.

SNI allows a server to present multiple https certificates on the same IP address and port
number to the browser. This makes it possible for your VPS with eg. only 1 IPv4 address
to host multiple https websites/domains on the standard https port without having them to
use all the same certificate. (which would give certificate warnings in the browsers as a
https certificate is generally valid for one domain)

SNI needs support in the web browser; luckily all modern browsers have it built in.

The only browser with a little bit of market share that does not support SNI is IE6. Users
will still be able to browse your site, but will receive certificate warnings. We advise you
to check in your Google Analytics tool to know the market share of each browser.
--add-module=../ngx_devel_kit-master and --add-module=../set-misc-nginx-module-0.29

These modules add miscellaneous options for use in the nginx rewrite module (for HTTP
redirects, URL rewriting, …)
--error-log-path=/var/log/nginx/error.log and --http-log-path=/var/log/nginx/access.log

Paths to error logs and HTTP logging files

PID files; what are they?


--lock-path=/var/lock/nginx.lock
--pid-path=/var/run/nginx.pid

The PID path is the path to the nginx PID file. But what is a PID file?

PID files are written by some Unix programs to record their process ID while they are
starting. This has multiple purposes:

It’s a signal to other processes and users of the system that that particular program is
running, or at least started successfully.
You can write a script to check if a certain program is running or not.
It’s a cheap way for a program to see if a previously running instance of the program
did not exit successfully.

The nginx.pid file is thus used to see if nginx is running or not.
Startup/shutdown/restart scripts for nginx, which we will use in later chapters, depend
upon the presence of the nginx.pid file.
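
For example, a minimal check from the shell could look like this (kill -0 only tests whether the process exists; it sends no actual signal):

$ sudo kill -0 $(cat /var/run/nginx.pid) && echo "nginx is running"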

Lock files can be used to serialize access to a shared resource (eg. a file, a memory
location) when two concurrent processes request it at the same time. On most systems
the locks nginx takes are implemented using atomic operations; on some systems nginx
will use the lock-path option instead.

nginx Releases
Each nginx release comes with a changelog file describing the fixes and enhancements in
each version. You can find those on the download page of nginx

Here we list some recent changes that improve performance:

v1.7.8 fixes a 200ms delay when using SPDY
v1.7.6 has some security fixes
v1.7.4 has bugfixes in the SPDY module and improved SNI support
v1.9.5 introduces HTTP/2 support

nginx Configuration
Creating a nginx System User
When we launch nginx, we need to do that as a certain OS user. For security reasons it
is best to create a dedicated user ‘nginx’ to launch the web server. This way you can limit
what the ‘nginx’ user has access to, and what is forbidden.

This is a good way to reduce the attack surface, should hackers manage to
find a security hole in nginx.

Here is how we create the nginx user:


$ sudo mkdir /home/nginx
$ sudo groupadd nginx
$ sudo useradd -g nginx -d /home/nginx -s /usr/sbin/nologin nginx

First we create a directory /home/nginx which we will use as home directory for the user
nginx.

Secondly we create a group ‘nginx’

Thirdly, we create the user ‘nginx’ and specify the home directory (/home/nginx) via the -
d option.

We also specify the login shell via -s. The login shell /usr/sbin/nologin actually makes
sure that we can not remotely login (SSH) via the nginx user. We do this for added
security.

If you now take a look in your home directory you’ll see something like this:
$ cd /home
$ ls -l
drwxr-xr-x 2 nginx nginx 4096 Nov 6 07:47 nginx

Configure the nginx system user open files maximum


For the user ‘nginx’ we will now set the hard and soft limits for the number of files a
process (eg. nginx) may have open at a time.

A soft limit may be changed later by the process, up to the hard limit value. The hard
limit cannot be increased, except for processes running with superuser privileges (eg. as
root).
$ sudo nano /etc/security/limits.conf

after

* soft nofile 65536
* hard nofile 65536

add:
nginx soft nofile 65536
nginx hard nofile 65536

Now reboot your server with:


$ sudo reboot
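
After the reboot, once nginx is running (we create its startup script in the next section), you can check the open-file limit the nginx process actually received (a quick sanity check; the PID file path is the one we configured during compilation):

$ cat /proc/$(cat /var/run/nginx.pid)/limits | grep 'open files'
Max open files            65536                65536                files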

nginx Startup and Shutdown Scripts


To start and stop nginx we will create a startup script at /etc/init.d/nginx (all our startup
scripts will be located in the directory /etc/init.d)
$ sudo nano /etc/init.d/nginx

Copy paste the following file contents:


#!/bin/sh
### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the nginx web server
# Description: starts nginx using start-stop-daemon
### END INIT INFO

PATH=/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/sbin/nginx
DAEMON_OPTS="-c /usr/local/nginx/conf/nginx.conf"
NAME=nginx
DESC=nginx
PID_FILE=/var/run/nginx.pid

test -x $DAEMON || exit 0

# Include nginx defaults if available


if [ -f /etc/default/nginx ] ; then
. /etc/default/nginx
fi

set -e

case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile $PID_FILE \
--exec $DAEMON -- $DAEMON_OPTS
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "

start-stop-daemon --stop --quiet --pidfile $PID_FILE \
--exec $DAEMON
echo "$NAME."
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
$PID_FILE --exec $DAEMON
sleep 1
start-stop-daemon --start --quiet --pidfile \
$PID_FILE --exec $DAEMON -- $DAEMON_OPTS
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
start-stop-daemon --stop --signal HUP --quiet --pidfile $PID_FILE \
--exec $DAEMON
echo "$NAME."
;;
*)
N=/etc/init.d/$NAME
echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
exit 1
;;
esac

exit 0

In the DAEMON_OPTS we specify the main nginx configuration file (nginx.conf). We
also specify the PID_FILE (/var/run/nginx.pid), which should correspond with what we
used during the compilation process.
DAEMON_OPTS="-c /usr/local/nginx/conf/nginx.conf"
NAME=nginx
DESC=nginx
PID_FILE=/var/run/nginx.pid

Now we need to make the script executable with the chmod +x command:
$ sudo chmod +x /etc/init.d/nginx

To register the script to start nginx during boot we need to run update-rc.d with the name
of the startup script.
$ sudo /usr/sbin/update-rc.d -f nginx defaults

To start nginx manually you’ll have to run:


$ sudo /etc/init.d/nginx start

To restart nginx:
$ sudo /etc/init.d/nginx restart

To stop nginx:
$ sudo /etc/init.d/nginx stop

To view the status of nginx:


$ sudo /etc/init.d/nginx status

nginx configuration files


Configuring nginx is done via one or more .conf files which are located by default in
/usr/local/nginx/conf

You’ll notice that there are a few *.conf.default files in this directory. They contain example
configurations for common use cases, such as a website on port 80 and http, PHP support,
https and more.

We advise you to take a look at these examples to get a feeling for how to configure nginx.

The main configuration file is nginx.conf. The file contains general configuration and
includes other configuration files. This way you can eg. group configuration together per
site you want to host on your server.

nginx configuration files generally have the following layout:

settings at the root level
settings inside a named { … } block: the name defines the ‘context’ of the settings
inside the block. An example is eg. http { … }

Some settings only make sense inside a { … } block; eg. in a certain context. If you put
parameters in the wrong location, nginx will refuse to startup, telling you where the
problem is located in your configuration file(s).

At http://nginx.org/en/docs/ngx_core_module.html you can find information about
every setting available in the core nginx functionality. The documentation also describes
where each setting makes sense (eg. in a http { … } block).

Test the Correctness of the nginx Configuration Files


When you make changes to the configuration files of nginx in the next section, you’ll
want to test your changes for correctness before restarting nginx.

This can be done via the following command:


$ sudo nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

If there are errors, you can correct them first; while your currently running instance of
nginx is not impacted. Below we will start with the settings at the root level of the nginx
configuration file.

Updating the system user which launches nginx


We will launch nginx with the system user nginx and group nginx. This can be specified
in the nginx.conf as follows:
$ sudo nano /usr/local/nginx/conf/nginx.conf
user nginx nginx;

Updating the PID File Location


We’ll update the nginx.conf file with the correct nginx PID file location:
$ sudo nano /usr/local/nginx/conf/nginx.conf
pid /var/run/nginx.pid;

Updating worker_processes
The worker_processes directive is responsible for letting nginx know how many processes
it should spawn. It is common practice to run 1 worker process per CPU core.

To view the number of available cores in your system, run the following command:
$ grep processor /proc/cpuinfo | wc -l

We have 4 CPU cores so we set the value to 4:


$ sudo nano /usr/local/nginx/conf/nginx.conf
worker_processes 4;

Updating worker_priority
This defines the scheduling priority for nginx worker processes. A negative number
means higher priority. (acceptable range: -20 to + -20)
$ sudo nano /user/local/nginx/conf/nginx.conf
worker_priority -10;

Updating timer_resolution

$ sudo nano /usr/local/nginx/conf/nginx.conf


timer_resolution 100ms;

Reduces timer resolution in worker processes, thus reducing the number of
gettimeofday() system calls made. By default, gettimeofday() is called each time a kernel
event is received. With reduced resolution, gettimeofday() is only called once per
specified interval.

Updating the Error Log File Location
We will update the error log location to the /var/log/nginx directory. Here are the
commands todo this:
$ sudo mkdir -p /var/log/nginx
$ sudo touch /var/log/nginx/localhost.access.log
$ sudo touch /var/log/nginx/localhost.error.log
$ sudo chmod -R 0666 /var/log/nginx/*

The 0666 permission setting means all users can read and write these log files, but no one
may execute them; again to harden security.
$ sudo nano /usr/local/nginx/conf/nginx.conf
error_log /var/log/nginx/error.log;

Enable PCRE JIT


Remember we added support for the PCRE (Perl Compatible Regular Expressions) JIT in
nginx rewrite rules during the compilation process?

To enable the Just In Time compiler to improve performance we also need to add it to the
nginx.conf:
$ sudo nano /usr/local/nginx/conf/nginx.conf
pcre_jit on;

PCRE JIT can speed up processing of regular expressions in the nginx configuration files
significantly.

Configuring the Events Directive


Here is our events configuration in nginx.conf:
events {
worker_connections 1024;
use epoll;
#multi_accept on;
}

worker_connections sets the maximum number of simultaneous connections that can be
opened per worker process. In our example we have 4 worker processes, so we can
have 4096 simultaneous connections open.

Configuring the HTTP Directive


All settings described in the following section are put into the http { … } section in the
nginx.conf file.

These settings are all documented at


http://nginx.org/en/docs/http/ngx_http_core_module.html

Access Log File Location and Format.

The access log logs all requests that came in for the nginx server. We will use the default
log format provided by nginx. This log format is called ‘combined’ and we will reference
it in the log file location parameter.

The default log format ‘combined’ displays the following information:


$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent \
"$http_referer" "$http_user_agent"

Here is an example which will make everything more clear:


68.180.198.23 - - [09/Jan/2015:18:39:07 -0500] "GET /index.html HTTP/1.1" 200 17\
8 "-" "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysea\
rch/slurp)"

http {
...
access_log /var/log/nginx/access.log combined buffer=32k;
...
}

Logging every request takes CPU and I/O cycles; that’s why we define a buffer
(buffer=32k). This causes nginx to buffer a series of log entries and write them to the file
together, instead of using a separate write operation for each.
open_log_file_cache max=100 inactive=30s min_uses=2;

Defines a cache that stores the file descriptors of frequently used logs whose names
contain variables. The directive has the following parameters:

max: sets the maximum number of descriptors in a cache; if the cache becomes full
the least recently used descriptors are closed
inactive: sets the time after which the cached descriptor is closed if there was no
access during this time; by default, 10 seconds
min_uses: sets the minimum number of file uses during the time defined by the
inactive parameter to let the descriptor stay open in a cache; by default, 1
Enabling HTTP Status output

The following configuration enables nginx HTTP status output on the URL /nginx-info
# status information nginx ngx_http_stub_status_module
location /nginx-info {
stub_status;
}

Here is an example of the status output:

Active connections: 291
server accepts handled requests
16630948 16630948 31070465

Reading: 6 Writing: 179 Waiting: 106

Here you can see there are 291 active connections; for 6 of them nginx is reading the
request, for 179 nginx is writing the response and 106 requests are waiting to be handled.
HTTP Performance Optimizations

sendfile on;

The Unix sendfile call allows transferring data from one file descriptor to another directly in
kernel space. This saves a lot of resources and is very performant. When you have a lot of
static content to be served, sendfile will speed up the serving significantly.

When you have dynamic content (eg. Java, PHP, …) this setting will not be used by
nginx, and you won’t see performance differences.
tcp_nopush on;
tcp_nodelay on;

These two settings only have effect when ‘sendfile on’ is also specified.

tcp_nopush ensures that the TCP packets are full before being sent to the client. This
greatly reduces network overhead and speeds up the way files are sent.

When the last packet is sent (which is probably not full), tcp_nodelay forces the
socket to send the data, saving up to 0.2 seconds per file.
Security Optimizations

server_tokens off;
server_name_in_redirect off;

Disabling server_tokens makes sure that no nginx version information is added to the
response headers of an http request.

Disabling server_name_in_redirect makes sure that the server name (specified in
server_name) is not used in redirects. nginx will instead use the name specified in the Host
request header field.
Timeouts

keepalive_timeout 15s;
send_timeout 10s;

The keepalive_timeout assigns the timeout for keep-alive connections with the client.
Connections that are kept alive, for the duration of the timeout, can be reused by the
client (faster, because the client doesn’t need to set up an entirely new TCP connection to
the server, with all the overhead associated).

Internet Explorer disregards this timeout setting and automatically closes keep-alive
connections after 60 seconds.
keepalive_disable msie6;

Disables keep-alive connections with misbehaving browsers. The value msie6 disables
keep-alive connections with Internet Explorer 6
reset_timedout_connection on;

Allow the server to close the connection after a client stops responding. Frees up socket-
associated memory.
Gzip Compression

Gzip can help reduce the amount of network transfer by reducing the size of html, css,
and javascript.

Here is how to enable it:


gzip on;
# Add a vary header for downstream proxies to avoid sending cached gzipped files\
to IE6
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_static on; # allows sending precompressed files with the '.gz' filename ext\
ension instead of regular files
gzip_min_length 1400;
gzip_http_version 1.0; # Sets the minimum HTTP version of a request required to \
compress a response
gzip_comp_level 5; # Sets a gzip compression level of a response, setting this t\
oo high can waste cpu cycles and not getting better compression
gzip_proxied any; # Enables or disables gzipping of responses for proxied req\
uests
gzip_types text/plain text/css text/xml application/javascript application/x-ja\
vascript application/xml application/xml+rss application/ecmascript application/\
json image/svg+xml;
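
Because gzip_static is enabled above, nginx can serve files you have compressed ahead of time. Here is a sketch of how you might pre-compress your static assets (the path is an example; gzip -k keeps the original file next to the .gz version):

$ find /home/mywebsite/web -type f \( -name '*.css' -o -name '*.js' \) -exec gzip -k -9 {} \;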

Buffers

Buffer sizes should be big enough that nginx doesn’t need to write to temporary files,
which causes disk I/O.
client_body_buffer_size 10k;

The buffer size for HTTP POST actions (eg. Form submissions).
client_header_buffer_size 1k;

The buffer size for HTTP headers.
client_max_body_size 8m;

Sets the maximum allowed size of the client request body, specified in the “Content-
Length” request header field. If the size in a request exceeds the configured value, the 413
(Request Entity Too Large) error is returned to the client
large_client_header_buffers 2 2k;

The maximum number and size of buffers for large client headers
Miscellaneous

proxy_temp_path /tmp/nginx_proxy/;

Defines a directory for storing temporary files with data received from proxied servers.
Caches for Frequently Accessed Files
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;

Configures a cache that can store:

open file descriptors, their sizes and modification times;
information on existence of directories;
file lookup errors, such as “file not found”, “no read permission”, and so on.

Configuring the Mime Types


Mime types or media types identify commonly used files on websites, eg. HTML,
Javascript, CSS, PDF and many more.

They are used by browsers to tell the server which kinds of content they understand/accept.
The server also uses them by specifying the mime type in its responses.

nginx will use the file extension to send back the corresponding mime type. An
example: the extension .html corresponds to the ‘text/html’ mime type.

A default list of such mappings is included in the nginx.conf file via:


http {
...
include mime.types;
}

You can add more if needed. Just define a new type {…} block inside the http block:

types {
application/x-font-ttf ttf;
font/opentype ott;
}

Configuring Your Website Domain in nginx


In the previous chapter ‘Ordering a domain name for your website’ we configured our
DNS settings and nameservers. Now we will cover adding a website domain to nginx and
related setup.

Here is what we’ll do:

Create a system user for each website domain which can login via SFTP (secure file
transfer protocol) - this way you can upload your website to the server
Create a home directory for each website domain where you will put the html files,
… of your website
Create a nginx config file for each website domain (so nginx actually listens for that
domain).

In the example below we’ll use an example domain www.mywebsite.com.


Create a System User for mywebsite.com

We’ll create a user mywebsite, with an associated home directory: /home/mywebsite.

In that home directory we’ll put some subdirectories:

web: where we will place the html, css, … files of our website
log: where specific logs for this website will be logged

When SFTPing to the server with the user mywebsite you’ll not be able to step outside of
the root directory /home/mywebsite.
$ sudo useradd -s /usr/sbin/nologin -d /home/mywebsite/ -g nginx mywebsite

The above command will add a user mywebsite, that is part of the nginx group. The home
directory is /home/mywebsite and we don’t provide any login shell via SSH for this user.

Remark: /usr/sbin/nologin is correct in Ubuntu; in other Linux distributions this can also
be /sbin/nologin !

Now we will generate a good password for the user mywebsite with openssl:
$ openssl rand -base64 8
yDIv39Eycn8=

Apply the password to the user via the following command:

$ sudo passwd mywebsite

Creating a home directory for mywebsite.com

Now we will create the home directory /home/mywebsite:


$ sudo mkdir /home/mywebsite/

Change the ownership to root for this directory (this is needed for the remote SFTP to
work correctly)
$ sudo chown root:root /home/mywebsite

Make sure that /home/mywebsite is only writable by the root user. (read only for the rest
of the world):
$ ls -l /home

should display drwxr-xr-x 4 root root for the mywebsite subdirectory.

We will create two subdirectories web and log which we will later use to put our website
files and log files respectively.

These two directories need to be owned by the mywebsite:nginx user, so it is possible to
write into them via the SFTP user mywebsite.
$ cd /home/mywebsite
$ sudo mkdir web
$ sudo mkdir log
$ sudo chown mywebsite:nginx web
$ sudo chown mywebsite:nginx log
$ sudo chmod g+s web
$ sudo chmod g+s log

Setting directories g+s with chmod makes all new files created in said directory have their
group set to the directory’s group by default. (eg. in this case our group is nginx). This
makes sure that nginx is correctly able to read the html files etc. for our website.
Configuring Remote SFTP access For mywebsite.com

In this section we will configure remote SFTP access by configuring the ssh daemon.
$ sudo nano /etc/ssh/sshd_config

We will change the Subsystem sftp from sftp-server to internal-sftp. We do this because
we want to enforce that the SFTP user can not get outside of its home directory for
security reasons. Only internal-sftp supports this.

Thus change

Subsystem sftp /usr/lib/openssh/sftp-server

with
Subsystem sftp internal-sftp

Now at the end of the sshd_config add:


Match User mywebsite
ChrootDirectory %h
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp

Match User mywebsite indicates that the lines that follow only apply for the user
mywebsite.

The ChrootDirectory is the root directory (/) the user will see after the user is
authenticated.

“%h” is a placeholder that gets replaced at run-time with the home folder path of that
user (eg. /home/mywebsite in our case).

ForceCommand internal-sftp - This forces the execution of the internal-sftp and ignores
any commands that are mentioned in the ~/.ssh/rc file.

You also need to add /usr/sbin/nologin to /etc/shells, or the sftp user will not be able to
log in:
$ sudo nano /etc/shells

add
/usr/sbin/nologin

at the end

Restart ssh with:


$ sudo /etc/init.d/ssh restart

You can now try to log in via SFTP (eg. by using FileZilla on Windows). If the process
fails it is best to check the authentication log of the ssh daemon:
$ cat /var/log/auth.log

In this log file you could also find login attempts of (would-be) hackers trying to break in
into your site.
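
If you prefer the command line over a GUI client, a quick test of the chrooted login could look roughly like this (placeholders as before; note that the remote working directory is /, the chroot root, not /home/mywebsite):

$ sftp mywebsite@<your server IP>
mywebsite@<your server IP>'s password:
Connected to <your server IP>.
sftp> pwd
Remote working directory: /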

Create a nginx config file for each website domain

First we’ll create a separate nginx configuration file for our domain. We’ll then include it
in the main nginx.conf. This way, if we add more sites, we have everything cleanly
separated.
$ sudo mkdir -p /usr/local/nginx/conf/conf.d

We’ll put all these configuration files in the subdirectory conf.d

Now we’ll add an include line in the nginx.conf to load all .conf files in the conf.d
directory.
$ sudo nano /usr/local/nginx/conf/nginx.conf

http {
...
include /usr/local/nginx/conf/conf.d/*.conf;
}

Now create a configuration file for your mywebsite.com domain.


$ sudo nano /usr/local/nginx/conf/conf.d/mywebsite.com.conf

server {
server_name www.mywebsite.com;
listen <your IPv4 address>:80; # Listen on IPv4 address
listen [<your IPv6 address>]:80; # Listen on IPv6 address

root /home/mywebsite/web;
access_log /home/mywebsite/log/access.log combined buffer=32k;
error_log /home/mywebsite/log/error.log;
}

A server block defines a ‘virtual host’ in nginx. It maps a server_name to an IPv4 and/or
an IPv6 address. You can see we’re listening on the default HTTP port 80. The root
directive specifies in which directory the html files are present.

access_log and error_log respectively log the requests that arrive for
www.mywebsite.com and the errors.

You can create multiple server { … } blocks that map different server names to the same
IPv4 address. This allows you to host multiple websites using only 1 IPv4 or IPv6
address. These are also sometimes called ‘virtual hosts’. (Most probably you’ll only have
1 IPv4 address available for your VPS because they are becoming more scarce.)

Now we’ll add the mywebsite.com variant (without the www) in a second server { … }
block.

server {
server_name mywebsite.com;
listen <your IPv4 address>:80; # Listen on IPv4 address
listen [<your IPv6 address>]:80; # Listen on IPv6 address
return 301 http://www.mywebsite.com$request_uri;
}

The last line tells nginx to issue an HTTP 301 Redirect response to the browser. This will
make sure that everyone who types in mywebsite.com will be redirected to
www.mywebsite.com. We do this because we want our website to be available on a single
address for Search Engine Optimization reasons. Google, Bing and other search engines
don’t like content that is not unique, eg. when it is available on more than one website.

The $request_uri is a variable containing whatever the user typed in the browser after
mywebsite.com. For example, a request for mywebsite.com/about?lang=en is redirected to
www.mywebsite.com/about?lang=en.
Configuring a Default Server

When you issue a request to www.mywebsite.com with your browser it’ll normally
include a Host: www.mywebsite.com parameter in the HTTP request.

nginx uses this “Host” parameter to determine which virtual server the request should be
routed to. (as multiple hosts can be available on one IP address)

If the value doesn’t match any server name or if the Host parameter is missing, then nginx
will route the request to the default server (eg. Port 80 for http).

If you haven’t defined a default server, nginx will take the first server { … } block as the
default.

In the config below we will explicitly set which server should be default, with the
default_server parameter.
# Default server
server {
listen 80 default_server; # listen on the IPv4 address
listen [::]:80 default_server; #listen on the IPv6 address
return 444;
}

We return HTTP response 444, a special nginx code; this returns no information to the
client and closes the connection.
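
You can verify this behaviour with curl by sending a Host header that doesn’t match any of your server names (the hostname here is deliberately bogus); curl should report an empty reply:

$ curl -H "Host: unknown-domain.example" http://<your IPv4 address>/
curl: (52) Empty reply from server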
Setting Up Log Rotation For nginx

Logs from nginx can quickly take up a lot of disk space when your site(s) have a lot of
visitors.

We’ll setup the Linux ‘logrotate’ application to solve this problem automatically.

‘logrotate’ provides us with a lot of options:

renaming (rotating) & compressing log files when they reach a certain size
keeping compressed backups for log files (with limits)

In the end the options make sure that the size taken by the log files will be constant.

Here are the options we’ve chosen to autorotate the logs of nginx:

daily rotation
max 10 backups, before log files are deleted
size taken by the log files is max 100MB
when rotating the log files, compress them
delay compression until the next rotation cycle
restart nginx in a postrotate script

We’ll create the following configuration file for the above settings:
$ sudo touch /etc/logrotate.d/nginx
$ sudo nano /etc/logrotate.d/nginx

We use the /etc/logrotate.d directory because logrotate will look there by default (default
settings are in /etc/logrotate.conf)
/var/log/nginx/*.log /usr/local/nginx/logs/*.log /home/*/log/*.log {
daily
missingok
rotate 10
size=100M
compress
delaycompress
notifempty
sharedscripts
postrotate
sudo /etc/init.d/nginx restart
endscript
}

The postrotate script specifies that nginx will be restarted after the rotation of the log files has taken place.

By using sharedscripts we are ensuring that the post rotate script doesn’t run on every
rotated log file but only once.

You can view the status of the logrotation via the command:
$ cat /var/lib/logrotate/status

You can run the logrotation manually with:


$ sudo logrotate --force /etc/logrotate.d/nginx

You’ll see that log files will have been compressed and renamed as <filename of log
file>.1 (1 = first backup)

Disabling nginx Request Logs and Not Found Errors

By default nginx will log all requests for html files, images, … to a log file called the
access log. The information recorded contains the IP address, the date of visit, the page
visited, the HTTP response code and more.

If you don’t need this information, eg. if you have added Google Analytics code to your
site, you can safely disable this logging and lessen the load on your I/O subsystem a bit.

Here is how: inside a http or server block you can specify the access_log and error_log
variables:
http {
...
log_not_found off;
access_log off;
}

server {
....
error_log /home/<site>/log/error.log;
}

In the above example we completely turn off the access log. We still log errors to the
error_log, but we disable the HTTP 404 not found logs. We recommend to use Google
Webmaster tools to keep track of bad links on your site.

Updating nginx
nginx regurarly comes out with new versions. As we have compiled nginx from source,
we will need to recompile the nginx version.

Here is our guide on how to do this (it is very similar to a clean install).


Backup the nginx config files

We start by taking a backup of our nginx config files which are located in
/usr/local/nginx/conf/
$ sudo cp -R /usr/local/nginx/conf/ ~/nginx-conf-backup

Download & Compile the New Version

As previously explained, download the sources of the new nginx version. Also check
whether there are new versions of the modules we’re using (eg. the Google Pagespeed nginx
plugin, OpenSSL).

If there are new versions of the modules; also download these and update the ./configure
command used for building nginx with the new paths.

Start the compilation:

$ ./configure <PARAMETERS>
$ make

Before we can do ‘sudo make install’ we need to stop the currently running server:
$ sudo /etc/init.d/nginx stop
$ sudo make install

Now start nginx again:


$ sudo /etc/init.d/nginx start

You can view the version of nginx via this command:


$ sudo nginx -v
nginx version: nginx/1.9.9

Installing PHP
PHP stands for PHP: Hypertext Preprocessor, a server-side scripting language which is
used by more than 240 million websites. The very well-known blog platform Wordpress is
implemented using PHP.

In this chapter we’ll install and configure PHP. We’ll also explain how to configure nginx
to pass requests to the PHP code interpreter in the most scalable & performant way.

Which PHP Version?


We recommend you to install the latest stable version of PHP. As of the summer of 2014
this is the PHP 5.6.x series.

The single biggest performance improvement for PHP based websites comes with the use
of an OpCode cache.

What is an OpCode Cache?


An OpCode cache (or operation code cache) enhances the performance of PHP by
caching the results of the PHP compilation phase for later reuse. While the cache uses a
little extra memory, it should always be used in production because it provides a great
performance increase.

When an OpCode cache is added, the slow parsing and compiling steps are skipped,
leaving just the execution of the code.

Zend OpCache
Since PHP5.5 a default OpCode cache has been included: Zend OpCache.

We recommend using Zend OpCache instead of the older APC (Alternative PHP
Cache), because APC was not always 100% stable with the latest PHP versions and
Zend OpCache appears to be more performant.

Compile PHP From Source?


We recommend you to install PHP from source. This way you always use the latest
version. The compiled PHP versions in Linux distributions are most of the time a major
release behind (for example Ubuntu 13.x has PHP 5.4.9).

It’s also pretty easy to update PHP to a new version when compiling from
source, as you’ll see below.

Install Dependencies for PHP Compilation


You’ll need to download and install the dependencies below before we start the compile
of PHP. Otherwise the configure command will fail with errors like the following:
Problem: utf8_mime2text() has new signature, but U8T_CANONICAL is missing.
configure: error: mcrypt.h not found. Please reinstall libmcrypt.
configure: error: Please reinstall libedit - I cannot find readline.h

$ sudo apt-get install autoconf


$ sudo apt-get install apt-file (to search for a file: $ apt-file search xml2-co\
nfig)
$ apt-file update
$ sudo apt-get install libxml2-dev
$ sudo apt-get install libbz2-dev
$ sudo apt-get install libjpeg62-dev
$ sudo apt-get install libpng12-dev
$ sudo apt-get install libxpm-dev
$ sudo apt-get install libfreetype6-dev
$ sudo apt-get install courier-imap
$ sudo apt-get install libc-client2007e-dev
$ sudo apt-get install libmcrypt-dev
$ sudo apt-get install libedit-dev

Downloading PHP
We will now download and unzip the latest version of the PHP sources. You can find the
download links at http://php.net/downloads.php
$ cd ~
$ sudo wget http://php.net/distributions/php-5.6.14.tar.gz
$ sudo tar xzvf php-5.6.14.tar.gz

Compile and Install PHP


Now we will compile and install PHP with the following commands:
$ cd php-5.6.14
$ sudo ./configure --enable-opcache --enable-cgi --enable-fpm --with-mcrypt --wi\
th-zlib --with-gettext --enable-exif --enable-zip --with-bz2 --enable-soap --\
enable-sockets --enable-sysvmsg --enable-sysvsem --enable-sysvshm --enable-shm\
op --with-pear --enable-mbstring --with-openssl --with-mysql=mysqlnd --with-my\
sqli=mysqlnd --with-mysql-sock=/var/run/mysqld/mysqld.sock --with-curl --wit\
h-gd --enable-bcmath --enable-calendar --enable-ftp --enable-gd-native-ttf --\
with-freetype-dir=/usr/lib --with-jpeg-dir=/usr/lib --with-png-dir=/usr/lib \
--with-xpm-dir=/usr/lib --enable-pdo --with-pdo-sqlite --with-pdo-mysql=mysqln\
d --enable-inline-optimization --with-imap --with-imap-ssl --with-kerberos \
--with-libedit --with-fpm-user=nginx --with-fpm-group=nginx
$ sudo make install

will compile and install this version of PHP

Let’s take a look at the configure options we have specified. You can find additional
documentation at http://php.net/manual/en/configure.about.php:

Compilation flags
–enable-opcache

The enable-opcache option will make sure the OpCache is available.


–enable-cgi

CGI module will be built with support for FastCGI


–enable-fpm

Enable the FastCGI Process Manager (FPM)

FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with
some additional features (mostly) useful for heavy-loaded sites.

These features include:

advanced process management with graceful stop/start
ability to start workers with different uid/gid/chroot/environment, listening on
different ports and using different php.ini (replaces safe_mode)

We’ll cover PHP FPM in more detail in the next chapter.


–with-mcrypt

This is an interface to the mcrypt library, which supports a wide variety of block
algorithms such as DES, TripleDES, Blowfish (default), 3-WAY, SAFER-SK64, SAFER-
SK128, TWOFISH, TEA, RC2 and GOST in CBC, OFB, CFB and ECB cipher modes

This is used by eg. the Magento shopping cart and other PHP frameworks.
–with-zlib

PHP Zlib module allows you to transparently read and write gzip compressed files. Thus
it is used for serving faster content to the end users by compressing the data stream.

Some applications like Pligg require zlib compression enabled by default in the PHP
engine
–with-gettext

The gettext functions can be used to internationalize your PHP applications.


–enable-exif

With the exif extension you are able to work with image meta data. For example, you
may use exif functions to read meta data of pictures taken from digital cameras by
working with information stored in the headers of the JPEG and TIFF images.
–enable-zip

Enables ZIP support (eg. Joomla PHP framework uses this)


–with-bz2

The bzip2 functions are used to transparently read and write bzip2 (.bz2) compressed
files.
–enable-soap

Enables SOAP Webservices support


–enable-sockets

The socket extension implements a low-level interface to the socket communication
functions based on the popular BSD sockets, providing the possibility to act as a socket
server as well as a client.
–enable-sysvmsg, –enable-sysvsem, –enable-sysvshm

These modules provide wrappers for the System V IPC family of functions. It includes
semaphores, shared memory and inter-process messaging (IPC).
–enable-shmop

Shmop is an easy to use set of functions that allows PHP to read, write, create and delete
UNIX shared memory segments
–with-pear

PEAR is short for “PHP Extension and Application Repository” and is pronounced just
like the fruit. The purpose of PEAR is to provide:

A structured library of open-source code for PHP users
A system for code distribution and package maintenance
A standard style for code written in PHP
The PHP Extension Community Library (PECL)
A web site, mailing lists and download mirrors to support the PHP/PEAR
community
–enable-mbstring

mbstring provides multibyte specific string functions that help programmers deal with
multibyte encodings in PHP. In addition to that, mbstring handles character encoding
conversion between the possible encoding pairs. mbstring is designed to handle Unicode-
based encodings such as UTF-8 and UCS-2 and many single-byte encodings for
convenience.
–with-openssl

This option needs to be enabled if you want to work with certificates and
verify/encrypt/decrypt functions in PHP
–with-mysql=mysqlnd

Enables the use of the MySQL native driver for PHP, which is highly optimized for and
tightly integrated into PHP.

More information at http://dev.mysql.com/downloads/connector/php-mysqlnd/


–with-mysqli=mysqlnd

Enables the use of the MySQL native driver with mysqli, the improved MySQL interface
API which is used by a lot of PHP frameworks.
–with-mysql-sock=/var/run/mysqld/mysqld.sock

Sets the path of the MySQL unix socket pointer (used by all PHP MySQL extensions)
–with-curl

Enables the PHP support for cURL (a tool to transfer data from or to a server)
–with-gd

PHP can not only output HTML to a browser. By enabling the GD extension PHP can
output image streams directly to a browser. (eg. JPG, PNG, WebP, …)
–enable-gd-native-ttf

To enable support for native TrueType string function


–with-freetype-dir=/usr/lib

To enable support for FreeType 2 fonts


–with-jpeg-dir=/usr/lib

To enable support for jpeg images


–with-png-dir=/usr/lib

To enable support for png images


–with-xpm-dir=/usr/lib

To enable support for xpm images


–enable-bcmath

Enables the use of arbitrary precision mathematics in PHP via the Binary Calculator
extension. Supports numbers of any size and precision, represented as strings.
–enable-calendar

The calendar extension presents a series of functions to simplify converting between
different calendar formats.
–enable-ftp

A PHP script can use this extension to access an FTP server providing a wide range of
control to the executing script
–enable-pdo

Enables an Object oriented API interface for accessing MySQL databases.


–with-pdo-sqlite, –with-pdo-mysql=mysqlnd

Enables the use of the MySQL native driver with the PDO API interface
–enable-inline-optimization

Inlining is a way to optimize a program by replacing function calls with the actual body
of the function being called at compile-time.

It reduces some of the overhead associated with function calls and returns.

Enabling this configuration option will result in potentially faster php scripts that have a
larger file size.
–with-imap, –with-imap-ssl, –with-kerberos

Adds support for the IMAP protocol (used by email servers) and related libraries
–with-fpm-user=nginx, –with-fpm-group=nginx

When enabling the FastCGI Process Manager (FPM), we set the user and group to our
nginx web server user.

Testing the PHP Install


PHP has now been installed in /usr/local/bin/php

With the following command you can verify the version of PHP:
$ php -v
PHP 5.6.14 (cli) (built: Dec 29 2014 18:05:44)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2014 Zend Technologies

To verify the PHP installation, let’s create a test php file which we will execute using the
php command line tool:
$ cd ~
$ nano phpinfo.php
<?php
phpinfo();
?>

Save the phpinfo.php file. Now run it via:


$ php phpinfo.php | more

Tuning php.ini settings


We’ll now configure the general PHP settings which will apply for every PHP script.

To edit the php.ini settings execute the following command:


$ sudo nano /usr/local/lib/php.ini

Set the System Timezone


We will set the system timezone to fix the warning
Warning: phpinfo(): It is not safe to rely on the system's timezone settings. Yo\
u are *required* to use the date.timezone setting or the date_default_timezone_s\
et() function. In case you used any of those methods and you are still getting t\
his warning, you most likely misspelled the timezone identifier. We selected the\
timezone 'UTC' for now, but please set date.timezone to select your timezone.

We need to set the following property:


date.timezone=America/New_York

As our server is located in the New York timezone we selected this timezone. You can
find a list of possible timezones at http://www.php.net/manual/en/timezones.php. Choose
the continent/city nearest to your server’s data center.

Set maximum execution time


max_execution_time = 30

This sets the maximum time in seconds a script is allowed to run before it is terminated
by the parser. This helps prevent poorly written scripts from tying up the server. The
default setting is 30. When running PHP from the command line the default setting is 0.

Your web server can have other timeout configurations that may also interrupt PHP
execution.

Set duration of realpath information cache


realpath_cache_ttl = 360

Duration of time (in seconds) for which to cache realpath information for a given file or
directory. For systems with rarely changing files, consider increasing the value.

Set maximum size of an uploaded file.
upload_max_filesize = 15M

Sets the maximum filesize that can be uploaded by a PHP script. Eg. If your Wordpress
blog system complains that you can not upload a big image, this can be caused by this
setting.

Set maximum amount of memory a script is allowed to allocate


memory_limit = 256M

This sets the maximum amount of memory in bytes that a script is allowed to allocate.
This helps prevent poorly written scripts from eating up all available memory on a server.
Note that to have no memory limit, set this directive to -1.

Set maximum size of HTTP POST data allowed


post_max_size = 20M

Sets max size of HTTP POST data allowed. This setting also affects file upload. To
upload large files, this value must be larger than upload_max_filesize. If memory limit is
enabled by your configure script, memory_limit also affects file uploading. Generally
speaking, memory_limit should be larger than post_max_size.

Don’t expose to the world that PHP is installed


expose_php = Off

This setting is a security setting. This means that you don’t want to expose to the world
that PHP is installed on the server.

Disable functions for security reasons


disable_functions=exec,passthru,shell_exec,system,proc_open,popen

This directive allows you to disable certain PHP functions for security reasons. It takes on
a comma-delimited list of function names.

Only internal functions can be disabled using this directive. User-defined functions are
unaffected.

Disable X-PHP-Originating-Script header in mails sent by PHP


mail.add_x_header = Off

Don’t add an X-PHP-Originating-Script mail header that would include the UID (unique
identifier) of the script followed by the filename.

Sets the max nesting depth of HTTP input variables
max_input_nesting_level = 128

Sets the max nesting depth of input variables in HTTP GET, POST (eg.
$_GET, $_POST.. in PHP)
Set the maximum amount of input HTTP variables
max_input_vars = 2000

The maximum number of HTTP input variables that may be accepted (this limit is
applied to $_GET, $_POST and $_COOKIE separately).

Using this directive mitigates the possibility of denial of service attacks which use hash
collisions. If there are more input variables than specified by this directive, further input
variables are truncated from the request.

Enable Zend Opcache


As we have already explained, enabling an OpCode cache is critical for PHP
performance.

By default Zend OpCache is included in PHP 5.5 and later. As benchmarks suggest it is
the fastest OpCode cache as well (http://massivescale.blogspot.com/2013/06/php-55-zend-optimiser-opcache-vs-xcache.html),
we’ll add it to our php.ini configuration to enable it.

Zend Opcache can be configured as follows in php.ini:


; Enable the opcode cache
opcache.enable=1
; Enable the opcode cache for php CLI (command line interface)
opcache.enable_cli=1
; Sets how much memory to use; 128M
opcache.memory_consumption=128
; Sets how much memory should be used by OPcache for storing internal strings
;(e.g. classnames and the files they are contained in)
opcache.interned_strings_buffer=8

; The maximum number of files OPcache will cache
opcache.max_accelerated_files=4000
;How often (in seconds) to check file timestamps for changes to the shared
;memory storage allocation.
opcache.revalidate_freq=60

;If enabled, a fast shutdown sequence is used for the accelerated code
;The fast shutdown sequence doesn't free each allocated block, but lets
;the Zend Engine Memory Manager do the work.
opcache.fast_shutdown=1
; Location of the opcache library.
zend_extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/opcache.so

Double-check whether /usr/local/lib/php/extensions/no-debug-zts-20131226/opcache.so
exists before adding the zend_extension configuration directive.

Now reboot your server and check the boot log to verify everything succeeded (eg. no
PHP error messages).
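
A quick way to verify that the extension was loaded is to list the compiled-in modules from the command line; the output should mention Zend OPcache:

$ php -m | grep -i opcache
Zend OPcache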

Installing PHP-FPM
When you’re using PHP on your site, there has to be a way for your web server nginx to
forward requests to the PHP interpreter and receive the HTML response that has to
be sent back to the user’s browser.

When you have a lot of visitors, it is possible that the PHP interpreter is launched multiple
times concurrently (eg. multiple processes).

PHP-FPM (FastCGI Process Manager) configures/handles the process management of
launching and stopping PHP interpreters. It also allows nginx to forward php requests to
the PHP interpreter, as we’ll see later.

PHP-FPM is included in PHP core as of PHP 5.3.3.

Configuring PHP-FPM
To configure PHP-FPM we’ll create the following php-fpm configuration file.
$ sudo nano /usr/local/etc/php-fpm.conf

First we’ll add the path to the PID file for PHP-FPM. We’ll use this later in our
startup/shutdown script of PHP-FPM.
; process id
pid = /var/run/php-fpm.pid

Specify the location of the error log:


error_log = /var/log/php-fpm-www-error.log

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

The above 3 settings configure that if 10 PHP-FPM child processes exit with an error
within 1 minute then PHP-FPM restarts automatically. This configuration also sets a 10
seconds time limit for child processes to wait for a reaction on signals from the master
PHP-FPM process.

Configuring a PHP-FPM Pool


By using multiple so-called PHP-FPM pools, PHP-FPM can run different PHP websites
under different Linux users. This can be a handy way to help establish security when you
share your VPS server with multiple users.

As we assume our server is completely private, we will configure only one PHP-FPM
Pool, called “www”

For this pool we’ll configure:

the listening port of PHP-FPM
management options
$ sudo nano /usr/local/etc/php-fpm.conf

[www]
user = nginx
group = nginx

The [www] defines a new pool with the name “www”. We’ll launch PHP with our nginx
user which makes sure that our PHP interpreter will only be able to read/write
files/directories that are owned by nginx. (in our case that would be our websites and
nothing else).
listen = 127.0.0.1:9000

The listen configuration is the IP address and port where PHP-FPM will listen for
incoming requests (in our case nginx forwarding requests for PHP files).
listen.allowed_clients = 127.0.0.1

The allowed_clients setting will limit from where the clients can access PHP-FPM. We
specify 127.0.0.1 or the local host; this makes sure that no one is able to access the PHP-
FPM from the outside (evil) world. Only applications also running on the same server can
communicate with PHP-FPM. In our case nginx will be able to communicate with the
PHP-FPM server.
pm = ondemand

You can choose how the PHP-FPM process manager will control the number of child
processes. Possible values are static, ondemand and dynamic.

static - the number of child processes is fixed by pm.max_children
ondemand - the processes spawn on demand (when requested)
dynamic - the number of child processes is set dynamically based on the following
directives: pm.max_children, pm.start_servers, pm.min_spare_servers,
pm.max_spare_servers
pm.max_children = 50

Ondemand spawns the processes on demand; as such we don’t have any PHP-FPM
child processes lingering around (being idle in the worst case). Only when processes are
needed will they be started, until the maximum of max_children is reached. Because the
processes need to be started, this option may not be the most performant. It’s a trade-off
between memory consumption and performance.
pm.process_idle_timeout = 10s

The number of seconds after which an idle process will be killed with the ondemand
process manager.

You could also go for the following dynamic configuration which preforks 15 processes
(pm.start_servers):
pm = dynamic
pm.max_children = 50
; The number of child processes created on startup. Default Value: min_spare_ser\
vers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 15
; The desired minimum number of idle server processes.
pm.min_spare_servers = 2
; The desired maximum number of idle server processes.
pm.max_spare_servers = 35
; The number of requests each child process should execute before respawning. Th\
is can be useful to work around memory leaks in 3rd party libraries.
pm.max_requests = 500

In our testing we had the best results with pm = dynamic.


pm.status_path = /phpstatus

This defines the path where we can view the PHP-FPM status page. The status page is very
useful to see if we need to tweak the FPM settings further.

To view this information on your site via your browser you’ll need to add a location to the
nginx configuration too. We’ll add this later in this chapter when we configure nginx to
pass PHP requests to PHP-FPM.
rlimit_files = 65536

Sets the maximum number of open files.


rlimit_core = 0

This setting disables the creation of core dumps when PHP FPM would crash. This way
we save on disk space.
slowlog = /var/log/php-fpm-www-slow.log

This log file will contain all PHP files which executed very slowly. (this could for
example be due to a slow database call in the PHP script).
ping.path = /phpping

The ping URI to call the monitoring page of FPM. This can be used to test from outside
that FPM is alive and responding (eg. with Pingdom).
ping.response = FPM Alive

This directive may be used to customize the response to a ping request.


security.limit_extensions = .php .php3 .php4 .php5

Limits the extensions that PHP-FPM will process. Eg. only files with a php, php3, php4 or
php5 extension are allowed. This will prevent malicious users from using other extensions
to execute php code.
php_admin_value[error_log] = /var/log/php-fpm-www-php.error.log

Sets the PHP error_log location. Because we use php_admin_value, the log location
cannot be overridden in a user’s php.ini file.

How to start PHP-FPM


Now that we have covered the configuration of PHP-FPM, let’s try to start it. We’ll add a
startup script to our scripts directory /etc/init.d.

You can copy the php-fpm init.d script from the php sources:
$ cd ~
$ sudo cp php-5.6.14/sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
$ sudo chmod 755 /etc/init.d/php-fpm

Now edit the file and check whether the following paths are correct:
$ sudo nano /etc/init.d/php-fpm
prefix=/usr/local
exec_prefix=${prefix}

php_fpm_BIN=${exec_prefix}/sbin/php-fpm
php_fpm_CONF=${prefix}/etc/php-fpm.conf
php_fpm_PID=/var/run/php-fpm.pid

You can now start php-fpm with the following command:


$ sudo /etc/init.d/php-fpm start

Stopping php-fpm:

$ sudo /etc/init.d/php-fpm stop

nginx FPM Configuration


Now we still need to configure nginx to use the PHP FPM process manager when
requests for PHP files are made by the browser.

Let’s create a php.conf in the directory /usr/local/nginx/conf:


$ sudo nano /usr/local/nginx/conf/php.conf

Now add the following contents and save the file.


location ~ \.php$ {
try_files $uri =404; # return an HTTP 404 response if the php file doesn't exist

fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 40k;
fastcgi_buffers 290 40k;
fastcgi_busy_buffers_size 512k;
fastcgi_temp_file_write_size 512k;
fastcgi_intercept_errors on;

fastcgi_param HTTPS $server_https;

fastcgi_param PATH_INFO $fastcgi_path_info;


fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;

fastcgi_param QUERY_STRING $query_string;


fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME $fastcgi_script_name;


fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;


fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

fastcgi_param REMOTE_ADDR $remote_addr;


fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect

fastcgi_param REDIRECT_STATUS 200;

# Sendmail path using msmtp


fastcgi_param PHP_ADMIN_VALUE "sendmail_path=/usr/bin/msmtp -C /etc/.msm\
tp_php --logfile /var/log/msmtp.log -a $server_name -t";
}

The fastcgi_buffers and fastcgi_buffer_size settings can be tweaked once your site is
receiving traffic. Then it is possible to compute the average and maximum response sizes
of the HTML files via the access logs. Here is the command to compute the average
response size for all requests in an access.log file:
$ echo $(( `awk '($9 ~ /200/)' access.log | awk '{print $10}' | awk '{s+=$1} END\
{print s}'` / `awk '($9 ~ /200/)' access.log | wc -l` ))

Set the fastcgi_buffer_size accordingly (eg. a little bit higher than the average response
size; this makes sure that most requests will fit into 1 buffer, possibly optimizing
performance).
fastcgi_buffer_size 40k;

To compute the maximum response size use:


$ awk '($9 ~ /200/)' access.log | awk '{print $10}' | sort -nr | head -n 1

Then divide this number (in bytes) by 1000 (to get kB) and then by 40 (the buffer size in kB) to get the number of buffers:
fastcgi_buffers 290 40k;

So now we have 290 buffers of 40k
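
As a worked example with hypothetical numbers: if the maximum response size command returns 11,600,000 bytes, dividing by 1000 gives 11,600 kB, and 11,600 / 40 = 290; hence the 290 buffers of 40k above. The numbers for your own site will of course differ.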

To enable PHP for a nginx server { … } configuration you should include the php.conf in
the server block:
server {
...
include /usr/local/nginx/conf/php.conf;
...
}

Viewing the PHP FPM Status Page


Remember we added a /phpstatus and /phpping URL in the PHP-FPM configuration.

We’ll now add the necessary nginx configuration for allowing access to it:
location ~ ^/(phpstatus|phpping)$ {
access_log off;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
# fastcgi_params in conf directory of nginx

include fastcgi_params;
}

This location block should be placed in one of your server { … } blocks.

Viewing statistics from the Zend OpCode cache


We will add an OpCode cache GUI so we can see how well the OpCode cache performs,
and if any tweaks are necessary. (eg. If you have a lot of cache misses due to the cache
size being too small.)

We’ll use Amnuts OpCache GUI, available here: https://github.com/amnuts/opcache-gui/releases

Now we will download OpCache GUI


$ cd ~
$ wget https://github.com/amnuts/opcache-gui/archive/v2.2.0.zip
$ unzip v2.2.0.zip
$ cd opcache-gui-2.2.0
$ mv index.php opcache.php
$ sudo mv opcache.php /home/<yourwebsitehome>/web

Now opcache.php is available in the directory where your nginx is looking for its files.

Going to the opcache.php file via your browser should display the OpCache GUI status
page.

Starting PHP-FPM Automatically At Bootup


In case you need to reboot your server, you of course want PHP-FPM to be launched
automatically at boot up.

Here is how to enable this:


$ sudo /usr/sbin/update-rc.d -f php-fpm defaults

Log File Management For PHP-FPM


We will configure logrotate for the PHP-FPM logs via the following command:

$ sudo nano /etc/logrotate.d/php-fpm

/var/log/php-fpm*.log {
daily
missingok
rotate 10
size=100M
compress
delaycompress
notifempty
sharedscripts
postrotate
sudo /etc/init.d/php-fpm restart
endscript
}

Installing memcached
Memcached is a free and high performance memory object caching system. It is intended
to speed up dynamic websites by reducing the load on the database.

Memcached stores small chunks of arbitrary data that come from results of database
calls, API calls or page rendering. It thus reduces calls made to your database, which will
increase the speed of regularly accessed dynamic webpages. It’ll also improve the
scalability of your infrastructure.

Multiple client APIs exist, written in different languages like PHP and Java. In this guide
we’ll focus on the PHP integration, which will make it possible to let Wordpress, phpBB
and other PHP frameworks use the memcached server.

Comparison of caches
At this moment we have already configured an OpCode cache in PHP. Why would we
need another kind of caching?

The OpCode cache in PHP stores the compiled PHP code in memory. Running PHP code
will speed up significantly due to this.

Memcache can be used as a temporary data store for your application to reduce calls to
your database. Applications which support Memcache include PHPBB3 forum software,
Wordpress with the W3 Total Cache plugin and more.

Two PHP APIs for accessing memcached exist:

pecl/memcache
pecl/memcached (newer)

We will install both, because some PHP webapp frameworks may still use the older
API.

Downloading memcached And Extensions


Installing Libevent
Memcached uses the library libevent for scalable sockets, allowing it to easily handle tens
of thousands of connections. Each worker thread on memcached runs its own event loop
and handles its own clients. They share the cache via some centralized locks, and spread
out protocol processing.

We will install it before installing the memcached server to make sure it is available for
use by the memcached server.
$ cd ~
$ wget -cnv --progress=bar --no-check-certificate https://sourceforge.net/projec\
ts/levent/files/libevent/libevent-2.0/libevent-2.0.22-stable.tar.gz --tries=3
$ tar xzf libevent-2.0.22-stable.tar.gz
$ cd libevent-2.0.22-stable
$ ./configure
$ sudo make
$ sudo make install

The libevent library is now installed in /usr/local/lib/libevent.so

memcached Server
You can find the latest release of the memcached server at
http://memcached.org/downloads

Download the release as follows:


$ cd ~
$ wget http://memcached.org/files/memcached-1.4.25.tar.gz
$ tar -zxvf memcached-1.4.25.tar.gz
$ cd memcached-1.4.25
$ ./configure && make && sudo make install

memcached is now installed.

You can view the version information by executing:


$ memcached -h
memcached 1.4.25

Now we’ll make sure that Memcached server is able to find the libevent library:
$ sudo nano /etc/ld.so.conf.d/libevent-i386.conf

Add
/usr/local/lib/

Save the file


$ sudo ldconfig

/usr/local/lib is the directory where libevent.so is located. With sudo ldconfig we make
the configuration change active.

You should now be able to start Memcached manually via:

$ memcached -vv

Press Ctrl-C to stop the memcached server, as we will add a startup script that’ll start
memcached at boot up of our server.

Installing libmemcached
libMemcached is an open source C/C++ client library for the memcached server. It has
been designed to be light on memory usage, thread safe, and provide full access to server
side methods.

It can be used by the pecl/memcached PHP extension as a way to communicate with the
Memcached server. Because it is written in C it is also very performant.

Here is how to install it:


$ cd ~
$ wget https://launchpad.net/libmemcached/1.0/1.0.18/+download/libmemcached-1.0.\
18.tar.gz
$ tar -xzf libmemcached-1.0.18.tar.gz
$ cd libmemcached-1.0.18
$ sudo ./configure
$ sudo make install

After installation the library will be present at /usr/local/lib/libmemcached.so

Installing igbinary
Igbinary is a PHP extension which provides binary serialization for PHP objects and data.
It’s a drop-in replacement for PHP’s built-in serializer.

The default PHP serializer uses a textual representation of data and objects. Igbinary
stores data in a compact binary format which reduces the memory footprint and performs
operations faster.

Why is this important? Because the pecl/memcached PHP extension will use the igbinary
serializer when saving and getting data from the memcached server. The memcached
server cache will then contain the more compact binary data and use less memory.
$ cd ~
$ wget http://pecl.php.net/get/igbinary-1.2.1.tgz
$ tar xvf igbinary-1.2.1.tgz
$ cd igbinary-1.2.1
$ phpize
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
$ sudo ./configure CFLAGS="-O2 -g" --enable-igbinary
$ sudo make
$ sudo make install

The igbinary.so library is now installed in the default extension directory. (in our case
/usr/local/lib/php/extensions/no-debug-zts-20131226/)

Now add igbinary to your php.ini configuration:


$ sudo nano /usr/local/lib/php.ini

Add:
; Load igbinary extension
extension=igbinary.so

; Use igbinary as session serializer


session.serialize_handler=igbinary

; Enable or disable compacting of duplicate strings


; The default is On.
igbinary.compact_strings=On

Now restart PHP-FPM to reload the changed php.ini configuration:


$ sudo /etc/init.d/php-fpm restart

You can view whether the igbinary module is now available in php via:
$ php -m

igbinary should be in the list of modules.

Install pecl/memcache extension for PHP


Let’s now download and unzip the latest version of the original pecl/memcache PHP
extension.
$ cd ~
$ wget -cnv --progress=bar http://pecl.php.net/get/memcache-3.0.8.tgz --tries=3
$ tar xzf memcache-3.0.8.tgz
$ cd memcache-3.0.8
$ phpize
$ ./configure --enable-memcache --with-php-config=/usr/local/bin/php-config
$ sudo make
$ sudo make install

Now add memcache.so library to your php.ini configuration:


$ sudo nano /usr/local/lib/php.ini

Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/memcache.so

Now restart PHP-FPM to reload the changed php.ini configuration:
$ sudo /etc/init.d/php-fpm restart

Install pecl/memcached extension for PHP


The pecl/memcached extension provides a newer PHP API for accessing the Memcached
server. It also works together with the libmemcached C/C++ client library which we installed
previously.

Here is how to install it:


$ cd ~
$ wget -cnv --progress=bar http://pecl.php.net/get/memcached-2.2.0.tgz --tries=\
3
$ tar xzf memcached-2.2.0.tgz
$ cd memcached-2.2.0
$ phpize
$ ./configure --with-php-config=/usr/local/bin/php-config --enable-memcached-igb\
inary --enable-memcached-json --with-libmemcached-dir=/usr
$ sudo make
$ sudo make install

Now add memcached.so library to your php.ini configuration:


$ sudo nano /usr/local/lib/php.ini

Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/memcached.so

Now restart PHP-FPM to reload the changed php.ini configuration:


$ sudo /etc/init.d/php-fpm restart
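Once the memcached server itself is running (we will start it in the next section), you can verify the new extension end-to-end with a PHP one-liner. This is a minimal sketch using a hypothetical test key:

$ php -r '$m = new Memcached(); $m->addServer("127.0.0.1", 11211); $m->set("tes\
tkey", "hello"); var_dump($m->get("testkey"));'
string(5) "hello"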

Testing The memcached Server Installation


You can start the memcached server running with its default settings via the following
command:
$ sudo memcached -d

Now memcached is running as a daemon. On the command line we can now check if it is
listening on the default port 11211:
$ sudo netstat -tap | grep memcached

You should see output like


tcp 0 0 localhost:11211 *:* LISTEN 1486/memcached
tcp 0 0 localhost:11211 localhost:42471 ESTABLISHED 1486/memcached
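You can also run a quick functional test using the memcached text protocol, assuming netcat (nc) is installed. Here we store a hypothetical 5-byte value under the key greeting and read it back:

$ printf 'set greeting 0 0 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc 127.0.0.1 11211
STORED
VALUE greeting 0 5
hello
END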

Now let’s create a Memcached server startup/stop script that is automatically run on boot
up of our server.

In the startup script we’ll also put in some Memcached configuration settings.

Setup the memcached Service


Create the following memcached startup script:
$ sudo nano /etc/init.d/memcached

Copy paste the following info:


#! /bin/bash
### BEGIN INIT INFO
# Provides: memcached
# Required-Start: $syslog
# Required-Stop: $syslog
# Should-Start: $local_fs
# Should-Stop: $local_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: memcached - Memory caching daemon
# Description: memcached - Memory caching daemon
### END INIT INFO

# Usage:
# /etc/init.d/memcached start
# /etc/init.d/memcached stop

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/bin/memcached
DAEMONNAME=memcached
DESC=memcached

# Run as user nobody


USER=nobody

# max simultaneous connections
CON=1024

# nr of threads; It is typically not useful to set this higher than the number o\
f CPU cores on the memcached server
THREADS=4

# minimum space allocated for key+value+flags (bytes)
MINSP=72

# chunk size growth factor
CHUNKF=1.25

# Port to listen on
PORT1=11211

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding th\
is much
# memory
MEMSIZE=64

# Server IP
SERVERIP='127.0.0.1'

# Experimental options
OPTIONS='-o slab_reassign,slab_automove,lru_crawler,lru_maintainer,maxconns_fast\
,hash_algorithm=murmur3'

test -x $DAEMON || exit 0

set -e

case "$1" in
start)
echo -n "Starting $DESC: "
$DAEMON -d -m $MEMSIZE -l $SERVERIP -p $PORT1 -c $CON -t $THREADS -n $MI\
NSP -f $CHUNKF -u $USER $OPTIONS
;;
stop)
echo -n "Stopping $DESC: "
killall $DAEMON 2>/dev/null
;;
*)
N=/etc/init.d/$DESC
echo "Usage: $N {start|stop}" >&2
exit 1
;;
esac

exit 0

Save the file and then make the script executable via:
$ sudo chmod +x /etc/init.d/memcached

Add the script to the boot configuration:


$ sudo update-rc.d memcached defaults

After rebooting memcached should be running. You can check this via the following
command:
$ ps -ef | grep memcached
nobody 2430 1 0 17:17 ? 00:00:00 /usr/local/bin/memcached -d -m 6\
4 -l 127.0.0.1 -p 11211 -c 1024 -t 4 -n 72 -f 1.25 -u nobody -o slab_reassign,s\
lab_automove,lru_crawler,lru_maintainer,maxconns_fast,hash_algorithm=murmur3

Installing a Memcached GUI

PhpMemcachedAdmin is a great way to look into the inner workings of a Memcached
server via your browser. It’ll give you an overview on how well your Memcached server
is running:

cache hit & miss ratio


number of objects that were evicted from the cache. (maybe your cache size is too
small)

You can install it this way:


$ cd ~
$ mkdir memcache-admin
$ cd memcache-admin
$ wget http://phpmemcacheadmin.googlecode.com/files/phpMemcachedAdmin-1.2.2-r262\
.tar.gz
$ tar xvf phpMemcachedAdmin-1.2.2-r262.tar.gz
$ rm phpMemcachedAdmin-1.2.2-r262.tar.gz

Move everything over to a subdirectory phpmemcache in /home/<mysite>/web

Update the owner of the files if needed:


$ sudo chown -fR <mysite>:nginx phpmemcache/

You should now be able to browse to the index.php file via your browser to view the stats.

phpMemcachedAdmin control panel

Testing the scalability of Memcached server with twemperf and mcperf
Twemperf is a small application which allows you to stress test your Memcached server.
It is capable of generating connections and requests at a high rate. You can find more
information at https://github.com/twitter/twemperf

Here is how you can compile & run it:


$ cd ~
$ wget https://github.com/twitter/twemperf/archive/v0.1.1.tar.gz -O mcperf-0.1.1\
.tar.gz
$ tar xzf mcperf-0.1.1.tar.gz
$ cd twemperf-0.1.1
$ autoreconf -fvi
$ ./configure
$ make
$ sudo make install

Now you can run a test like:


mcperf --linger=0 --timeout=5 --conn-rate=1000 --call-rate=1000 --num-calls=10 -\
-num-conns=1000 --sizes=u1,16

The previous example creates 1000 connections to a memcached server running on
localhost:11211. The connections are created at the rate of 1000 conns/sec and on every
connection it sends 10 ‘set’ requests at the rate of 1000 reqs/sec with the item sizes
derived from a uniform distribution in the interval of [1,16) bytes.

Updating PHP To a New Version
When PHP releases a new version, you should be able to update the PHP version of your
server in an easy way. Because we compile PHP from source you’ll have to install the
new version with a few manual actions (as opposed to waiting for your Linux
distribution to make the new version available in its update repositories).

Here is how you should update PHP to a new version

Download the new PHP version


Compile and install the new PHP version
Test the command line version of PHP
(Sometimes necessary) Update the php.ini configuration
Restart PHP-FPM

Download the New PHP Version


In the below example we will update to version 5.6.16:
$ cd ~
$ wget http://php.net/distributions/php-5.6.16.tar.gz
$ tar xvf php-5.6.16.tar.gz
$ cd php-5.6.16
$ sudo ./configure --enable-opcache --enable-cgi --enable-fpm --with-mcrypt --wi\
th-zlib --with-gettext --enable-exif --enable-zip --with-bz2 --enable-soap --\
enable-sockets --enable-sysvmsg --enable-sysvsem --enable-sysvshm --enable-shm\
op --with-pear --enable-mbstring --with-openssl --with-mysql=mysqlnd --with-my\
sqli=mysqlnd --with-mysql-sock=/var/run/mysqld/mysqld.sock --with-curl --wit\
h-gd --enable-bcmath --enable-calendar --enable-ftp --enable-gd-native-ttf --\
with-freetype-dir=/usr/lib --with-jpeg-dir=/usr/lib --with-png-dir=/usr/lib \
--with-xpm-dir=/usr/lib --enable-pdo --with-pdo-sqlite --with-pdo-mysql=mysqln\
d --enable-inline-optimization --with-imap --with-imap-ssl --with-kerberos \
--with-libedit --with-fpm-user=nginx --with-fpm-group=nginx
$ sudo make
$ sudo make install

Now run php on the command line via:


$ php

It could be that, due to the upgrade, PHP will complain about being unable to initialize a module:
PHP Warning: PHP Startup: memcache: Unable to initialize module
Module compiled with module API=20121212
PHP compiled with module API=20131226

or

undefined symbol: basic_globals_id in Unknown on line 0

This means you’ll have to recompile the PHP modules which you have added in php.ini
under extension= lines.

For those modules you’ll have to run phpize, configure, sudo make install (as explained
in previous chapters).

Sometimes when you recompile the modules after installing a new PHP version, the
location where the compiled PHP modules are placed changes: (eg.
/usr/local/lib/php/extensions/no-debug-non-zts-20131226).

If that’s the case you’ll need to update the php.ini file (/usr/local/lib/php.ini) and change
the extensions= locations to match the correct path.

Installing ImageMagick for PHP
ImageMagick is a software program that can create, edit and convert bitmap
images. It can read and write images in a variety of formats. You can use
ImageMagick to resize, flip, mirror, … your bitmaps.

PHP frameworks can use the power of Imagick via the pecl imagick PHP
extension.

We will describe how to install them below:

Install ImageMagick
$ cd ~
$ wget http://www.imagemagick.org/download/ImageMagick.tar.gz

Unpack the distribution with this command:


$ tar xzf ImageMagick.tar.gz
$ cd ImageMagick-<release version>

Next configure and compile ImageMagick:


$ sudo ./configure
$ sudo make

If ImageMagick configured and compiled without complaint, you are ready to
install it on your system. Administrator privileges are required to install. To
install, type
$ sudo make install

You may need to configure the dynamic linker run-time bindings:


$ sudo ldconfig /usr/local/lib
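You can verify the installation by asking ImageMagick for its version information:

$ convert -version

This should print the installed version together with the features and delegate libraries that were detected during the configure step.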

Install Imagick PHP extension

We will now install the PHP extension for letting PHP frameworks use
ImageMagick:
$ cd ~
$ wget http://pecl.php.net/get/imagick-3.3.0.tgz
$ tar xzf imagick-3.3.0.tgz
$ cd imagick-3.3.0
$ phpize
$ sudo ./configure --with-php-config=/usr/local/bin/php-config
$ sudo make
$ sudo make install

imagick.so should now have been created in /usr/local/lib/php/extensions/no-debug-zts-20131226/

We’ll add the extension location to our php.ini file now:


$ sudo nano /usr/local/lib/php.ini

Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/imagick.so
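As with the other PHP extensions, restart PHP-FPM so the change takes effect and check that the module is loaded:

$ sudo /etc/init.d/php-fpm restart
$ php -m | grep imagick
imagick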

Installing PHPMyAdmin
phpMyAdmin is a free software tool written in PHP, intended to handle the
administration of MySQL over the Web. phpMyAdmin supports a wide range of
operations on MySQL and MariaDB.

Frequently used operations (managing databases, tables, columns, relations, indexes,
users, permissions, etc.) can be performed via the user interface, while you still have the
ability to directly execute any SQL statement.

Installing phpMyAdmin
We’ll download the latest version of phpMyAdmin and add it to our Nginx website root:
$ cd ~
$ wget https://files.phpmyadmin.net/phpMyAdmin/4.5.3.1/phpMyAdmin-4.5.3.1-all-la\
nguages.zip
$ unzip phpMyAdmin-4.5.3.1-all-languages.zip

Copy the contents of the phpMyAdmin-4.5.3.1-all-languages directory to your website
document root (eg. /home/<mywebsite>/web/dbadmin)
$ sudo mkdir /home/<mywebsite>/web/dbadmin
$ sudo cp -fr phpMyAdmin-4.5.3.1-all-languages/* /home/<mywebsite>/web/dbadmin

Now we will use the browser-based setup pages to set up phpMyAdmin. We’ll configure
on which host and port, and with which username and password, our database can be accessed.

To setup phpMyAdmin via the browser, you must manually create a folder “config” in
the phpMyAdmin directory. This is a security measure. On a Linux system you can use
the following commands:
$ sudo mkdir config # create directory for saving the configuration
$ sudo chmod 774 config # give the owner and group write permissions

Now make all the files owned by your <mywebsite> user:


$ sudo chown -fR <mywebsite>:nginx dbadmin

Add a new location to your website nginx configuration .conf as follows:


server {
...
# PhpMyAdmin

location /dbAdmin {
index index.php;
try_files $uri $uri/ /dbAdmin/index.php?$args;
}
}

After restart of nginx (sudo /etc/init.d/nginx restart), you can go to the phpMyAdmin setup
screen with your browser at http://<mywebsite>/dbAdmin/setup/index.php and create
a new server.

In our case, our database server runs on the same host as everything else (localhost) and
the MySQL default port (3306). We recommend putting Connection Type on socket (it is a
little bit faster than TCP) and also filling in the Server socket (/var/run/mysqld/mysqld.sock)

PHPMyAdmin: add a server

In the Authentication tab, you can enter the user and password phpMyAdmin should use
to connect to the database. We previously showed you how you can create users to limit
access to databases. When you use the root MySQL user you’ll have access to everything
which could be a security risk.

Now press Apply; we’re back in the main config screen where we can download the
generated configuration file. (config.inc.php).

Upload this file to /home/<mywebsite>/web/dbadmin and delete the config directory.

Now you should be able to browse to http://<yourwebsite.com>/dbAdmin/index.php,
login and see the databases available on this server.

In the https / SSL support chapter we will enable secure access to our phpMyAdmin
installation.

Installing Java
In this chapter we will install the most recent version of the Java virtual machine
software on our server. Just like PHP, it is regularly used for creating dynamic
websites.

In the next section we will install a Java based web application server Jetty, which
scales very well and is very performant.

The latest major release of Java is Java 8. Each release sees new features and
optimized performance (eg. better garbage collection, …). We’ll describe how to
install or update your Java VM to the latest version below.

Check the installed Java version


To check whether your Linux distribution has a Java version installed, you’ll need to
run the Java version command from the SSH terminal:
$ sudo java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

If Java is not installed you’ll get no version information back.

Downloading Java SE JDK 8 update 66


As of this moment the latest Oracle Java version is 8 update 66 which can be
downloaded manually from here:
http://www.oracle.com/technetwork/java/javase/downloads/index.html

Oracle made it a little bit difficult to download the JDK from the SSH command line
via wget (e.g. they want you to accept license agreements, etc.).

At https://ivan-site.com/2012/05/download-oracle-java-jre-jdk-using-a-script/ you can
find working command line download instructions for all released Java versions, with
updated links.

Here is the command to download JDK8 update 66 for Linux 64bit:


$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%\
2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.\
oracle.com/otn-pub/java/jdk/8u66-b17/jdk-8u66-linux-x64.tar.gz"

Installing Java SE JDK8 Update 66


Next we will copy and unzip the jdk-8u66-linux-x64.tar.gz file into
/usr/local/java/jdk1.8.0_66
$ sudo mkdir /usr/local/java   # only needed if it doesn't exist yet
$ sudo cp -r jdk-8u66-linux-x64.tar.gz /usr/local/java/

Run the following commands on the downloaded Oracle Java jdk-8u66-linux-x64.tar.gz
file. We do this as root in order to make it executable for all users on
your system.
$ cd /usr/local/java
$ sudo chmod a+x jdk-8u66-linux-x64.tar.gz

Unpack the compressed Java jdk-8u66-linux-x64.tar.gz


$ sudo tar xvzf jdk-8u66-linux-x64.tar.gz

Now Java8 has been unpacked into /usr/local/java/jdk1.8.0_66.

Making Java SE JDK8 the default


To make sure that all Java applications use our freshly installed Java SE JDK8 we
need to configure a few things:
$ sudo nano /etc/profile

At the end of this file you should add:


JAVA_HOME=/usr/local/java/jdk1.8.0_66
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH

This adds all the java executables to the system path and defines the JAVA_HOME
variable to /usr/local/java/jdk1.8.0_66

Now we need to tell the Linux OS that the Oracle Java version is available for use.
We execute 3 commands, one for the java executable, one for the javac executable
(java compiler) and one for javaws (java webstart):
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk\
1.8.0_66/bin/java" 1
$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/j\
dk1.8.0_66/bin/javac" 1

$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java\
/jdk1.8.0_66/bin/javaws" 1

The following commands tell the OS to use our Java8 version as the default Java:
$ sudo update-alternatives --set java /usr/local/java/jdk1.8.0_66/bin/java
$ sudo update-alternatives --set javac /usr/local/java/jdk1.8.0_66/bin/javac
$ sudo update-alternatives --set javaws /usr/local/java/jdk1.8.0_66/bin/javaws

In the last step we will reload the system wide PATH /etc/profile by typing the
following command:
$ . /etc/profile

Don’t forget the dot!

Now test if you have successfully installed/updated your Java version with:
$ java -version
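If everything went well, the output should look similar to the following (the b17 build number matches the 8u66-b17 archive we downloaded):

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)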

Installing Jetty
Jetty is a very fast, stable and lightweight Java container. We recommend it over the well
known Apache Tomcat because it is faster and has performance features which are not yet
implemented in Apache Tomcat.

We recommend using the latest stable Jetty version (at this moment 9.3). You can
see an overview of all Jetty versions at
http://www.eclipse.org/jetty/documentation/current/what-jetty-version.html

Jetty 9.3 supports:

Java 1.8 and up


HTTP/1.1, Websockets and HTTP/2 support
Servlet 3.1 support
JSP 2.3 support

We recommend Jetty 9 because:

It is compatible with Java 1.8 and above and uses Java 1.7/1.8 features. Jetty will
have better performance when run on recent Java versions.
It supports the latest HTTP/2 protocol which speeds up websites
It supports the latest Servlet programming API (useful for Java developers)

Download Jetty 9.3.x


Jetty 9.x can be downloaded from http://download.eclipse.org/jetty/stable-9/dist/. At the
time of writing the version was 9.3.6. Here is how to download it:
$ cd ~
$ wget http://download.eclipse.org/jetty/stable-9/dist/jetty-distribution-9.3.6.\
v20151106.tar.gz
$ tar xvf jetty-distribution-9.3.6.v20151106.tar.gz

Let’s move the resulting jetty directory to its correct place:


$ mv jetty-distribution-9.3.6.v20151106 jetty-9.3.6
$ sudo mkdir /usr/local/jetty-9.3.6
$ cd jetty-9.3.6
$ sudo mv -f * /usr/local/jetty-9.3.6
$ cd /usr/local/jetty-9.3.6/

Creating a Jetty user

We will create a jetty user which will have access to the /usr/local/jetty-9.3.6 directory only.
$ sudo mkdir /home/jetty
$ sudo groupadd jetty
$ sudo useradd -g jetty -d /home/jetty -s /usr/sbin/nologin jetty

The following commands will make the jetty user the owner of the jetty installation
directory. We will also create a directory /var/log/jetty where Jetty can put its logs (which
is also owned by the jetty user):
$ sudo chown -R jetty:jetty /usr/local/jetty-9.3.6/
$ sudo mkdir -p /var/log/jetty
$ sudo chown -R jetty:jetty /var/log/jetty

Jetty startup script


Like we did for Nginx, MariaDB, and other apps we want to be able to start/stop and
restart Jetty easily from the command line.

Jetty comes with such a script which we will use. Only a few modifications are needed
which we describe below.

The startup script is located in /usr/local/jetty-9.3.6/bin/jetty.sh.

Let’s copy it to the /etc/init.d directory where all our other startup scripts are located.
$ sudo cp /usr/local/jetty-9.3.6/bin/jetty.sh /etc/init.d/jetty

Edit the jetty script and update the following variables:


$ sudo nano /etc/init.d/jetty

JETTY_ARGS=jetty.port=8080
JETTY_HOME=/usr/local/jetty-9.3.6
JETTY_LOGS=/var/log/jetty
JETTY_PID=/var/run/jetty.pid

We’re setting the path where Jetty has been installed, the log files location and the process
id file respectively.

Let’s make the jetty startup script executable:


$ sudo chmod +x /etc/init.d/jetty

Now you can start and stop jetty via:


$ sudo /etc/init.d/jetty stop
$ sudo /etc/init.d/jetty start

If you want more logging when stopping and starting Jetty you can edit /etc/init.d/jetty and
change DEBUG=0 to DEBUG=1 in the file.

You’ll then receive output similar to the below from the start script:
START_INI = /usr/local/jetty-conf/start.ini
JETTY_HOME = /usr/local/jetty-9.3.6
JETTY_BASE = /usr/local/jetty-conf
JETTY_CONF = /usr/local/jetty-9.3.6/etc/jetty.conf
JETTY_PID = /var/run/jetty.pid
JETTY_START = /usr/local/jetty-9.3.6/start.jar
JETTY_ARGS = jetty.port=8080 jetty.state=/usr/local/jetty-conf/jetty.state \
jetty-logging.xml jetty-started.xml
JAVA_OPTIONS = -Djetty.logs=/var/log/jetty -Djetty.home=/usr/local/jetty-9.3.\
6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+UseLargePages -X\
mx512m
JAVA = /usr/bin/java
RUN_CMD = /usr/bin/java

You can also view the status of a Jetty instance be executing:


$ sudo /etc/init.d/jetty status
START_INI = /usr/local/jetty-conf/start.ini
JETTY_HOME = /usr/local/jetty-9.3.6
JETTY_BASE = /usr/local/jetty-conf
JETTY_CONF = /usr/local/jetty-9.3.6/etc/jetty.conf
JETTY_PID = /var/run/jetty.pid
JETTY_START = /usr/local/jetty-9.3.6/start.jar
JETTY_ARGS = jetty.port=8080 jetty.state=/usr/local/jetty-conf/jetty.state \
jetty-logging.xml jetty-started.xml
JAVA_OPTIONS = -Djetty.logs=/var/log/jetty -Djetty.home=/usr/local/jetty-9.3.\
6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+UseLargePages -X\
mx512m
JAVA = /usr/bin/java
RUN_CMD = /usr/bin/java

Checking arguments to Jetty:

START_INI = /usr/local/jetty-conf/start.ini
JETTY_HOME = /usr/local/jetty-9.3.6
JETTY_BASE = /usr/local/jetty-conf
JETTY_CONF = /usr/local/jetty-9.3.6/etc/jetty.conf
JETTY_PID = /var/run/jetty.pid
JETTY_START = /usr/local/jetty-9.3.6/start.jar
JETTY_LOGS = /var/log/jetty
JETTY_STATE = /usr/local/jetty-conf/jetty.state
CLASSPATH =
JAVA = /usr/bin/java
JAVA_OPTIONS = -Djetty.logs=/var/log/jetty -Djetty.home=/usr/local/jetty-9.3.\
6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+UseLargePages -X\
mx512m
JETTY_ARGS = jetty.port=8080 jetty.state=/usr/local/jetty-conf/jetty.state \
jetty-logging.xml jetty-started.xml
RUN_CMD = /usr/bin/java -Djetty.logs=/var/log/jetty -Djetty.home=/usr/lo\
cal/jetty-9.3.6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+Us\
eLargePages -Xmx512m -jar /usr/local/jetty-9.3.6/start.jar jetty.port=8080 jetty\
.state=/usr/local/jetty-conf/jetty.state jetty-logging.xml jetty-started.xml

Configuring a Jetty Base directory
Starting with Jetty 9.1, it is now possible to maintain a separation between the binary
installation path of the standalone Jetty (known as ${jetty.home}), and the customizations
path for your specific sites (known as ${jetty.base}).

This makes it very easy to separate the configuration of your site from the Jetty
installation. When a new version of Jetty is released you can install it without risking
overwriting your configuration files.

We will configure our Jetty Base directory in /usr/local/jetty-conf

Let’s first create the directory:


$ sudo mkdir /usr/local/jetty-conf
$ cd /usr/local/jetty-conf

To create a default set of configuration files you can run the following command from our
jetty-conf directory:
$ sudo java -jar /usr/local/jetty-9.3.6/start.jar --add-to-start=http,deploy,js\
p,logging

Let’s make sure that the configuration files are all owned by our jetty user:
$ sudo chown --recursive jetty:jetty /usr/local/jetty-conf/
$ sudo chown --recursive jetty:jetty /usr/local/jetty-9.3.6/

We need to update our Jetty launch script to specify our Jetty Base directory:
$ sudo nano /etc/init.d/jetty

Add/Update in this file that:


JETTY_BASE=/usr/local/jetty-conf

You should now be able to stop and start the Jetty server via sudo /etc/init.d/jetty
stop/start with the configuration inside the JETTY_BASE directory.

You can also view the configuration details of the Jetty server and Jetty base
configuration via:
$ cd /usr/local/jetty-conf
$ sudo java -jar /usr/local/jetty-9.3.6/start.jar --list-config

Here is some sample output from this command:


Java Environment:
-----------------

java.home = /usr/local/java/jdk1.8.0_45/jre
java.vm.vendor = Oracle Corporation
java.vm.version = 25.31-b07
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
java.vm.info = mixed mode
java.runtime.name = Java(TM) SE Runtime Environment
java.runtime.version = 1.8.0_45-b14
java.io.tmpdir = /tmp
user.dir = /usr/local/jetty-conf
user.language = en
user.country = US

Jetty Environment:
-----------------
jetty.version = 9.3.6.v20151106
jetty.home = /usr/local/jetty-9.3.6
jetty.base = /usr/local/jetty-conf

This command will also list the Jetty Server classpath in case you would come across
some classpath or jar file issues.

You can of course modify the generated configuration. We’ll not cover this in detail but
will give an example we have used in production systems:

A start.ini file has been created which we can modify below to, e.g., change the listening
port, the maximum number of threads, and more:
$ sudo nano /usr/local/jetty-conf/start.ini
--module=http
jetty.http.port=8080
jetty.http.idleTimeout=30000

# If this module is activated, then all jar files found in the lib/ext/ paths wi\
ll be automatically added to the Jetty Server Classpath
--module=ext

--module=server
# minimum number of threads
jetty.threadPool.minThreads=10

# maximum number of threads


jetty.threadPool.maxThreads=200

# thread idle timeout in milliseconds


jetty.threadPool.idleTimeout=60000

# What host to listen on (leave commented to listen on all interfaces)


jetty.http.host=localhost

# Dump the state of the Jetty server, components, and webapps after startup
jetty.server.dumpAfterStart=false

# Dump the state of the Jetty server, before stop


jetty.server.dumpBeforeStop=false

--module=deploy

--module=jsp
--module=logging
jetty.log.retain=90
jetty.logs=/var/log/jetty

# Support jdbc/MyDB jndi names


--module=jndi
# Supports jetty-env.xml configuration in WEB-INF/
--module=plus
--lib=/usr/local/java/jdk1.8.0_66/lib/tools.jar

Overriding the directory Jetty monitors for your Java webapp


Jetty can deploy a Java web application by monitoring a directory where the webapp is
located. It can also optionally scan for changes & deploy those inside the server
automatically.

All this is configured via the jetty-deploy.xml. We’ll add a jetty-deploy.xml file to our
Jetty base directory to configure these settings:
$ cd ~
$ sudo mkdir -p /usr/local/jetty-conf/etc
$ sudo cp jetty-9.3.6/etc/jetty-deploy.xml /usr/local/jetty-conf/etc
$ sudo nano /usr/local/jetty-conf/etc/jetty-deploy.xml

Here we have copied a default jetty-deploy.xml from the unzipped Jetty download and
added it to our jetty-conf directory.

Here is an example configuration:


<Configure id="Server" class="org.eclipse.jetty.server.Server">
<Call name="addBean">
<Arg>
<New id="DeploymentManager" class="org.eclipse.jetty.deploy.DeploymentMana\
ger">
<Set name="contexts">
<Ref refid="Contexts" />
</Set>
<Call name="setContextAttribute">
<Arg>org.eclipse.jetty.server.webapp.ContainerIncludeJarPattern</Arg>
<Arg>.*/servlet-api-[^/]*\.jar$</Arg>
</Call>

<!-- Add a customize step to the deployment lifecycle -->


<!-- uncomment and replace DebugBinding with your extended AppLifeCycle.\
Binding class
<Call name="insertLifeCycleNode">
<Arg>deployed</Arg>
<Arg>starting</Arg>
<Arg>customise</Arg>
</Call>
<Call name="addLifeCycleBinding">
<Arg>
<New class="org.eclipse.jetty.deploy.bindings.DebugBinding">
<Arg>customise</Arg>

</New>
</Arg>
</Call> -->

<Call id="webappprovider" name="addAppProvider">


<Arg>
<New class="org.eclipse.jetty.deploy.providers.WebAppProvider">
<Set name="monitoredDirName">/home/<mywebsite>/web</Set>
<Set name="defaultsDescriptor"><Property name="jetty.home" default\
="." />/etc/webdefault.xml</Set>
<Set name="scanInterval">1</Set>
<Set name="extractWars">true</Set>
<Set name="configurationManager">
<New class="org.eclipse.jetty.deploy.PropertiesConfigurationMana\
\
ger\
">
...
</New>
</Set>
</New>
</Arg>
</Call>
</New>
</Arg>
</Call>
</Configure>

In the jetty-deploy.xml you can set the monitoredDirName:


<Set name="monitoredDirName">/home/<mywebsite>/web</Set>

Jetty can use the monitoredDirName to find the directory where your Java webapp is
located. When starting Jetty, you’ll see Jetty deploying this directory.

Another important parameter is the scanInterval. This setting defines the number of
seconds between scans of the provided monitoredDirName.

A value of 0 disables the continuous hot deployment scan; web applications will be
deployed on startup only.

For production it is recommended to disable the scan for performance reasons and restart
the server when you have made webapp changes. For development you can use a 1 second
interval to see your changes immediately.

Adding the Jetty port to the CSF Firewall


As we have installed the CSF Firewall in a previous chapter, we may need to add the port
Jetty is listening on (port 8080) in the Firewall configuration to accept incoming
connections. This is done by editing the csf.conf and modifying the TCP_IN and
TCP6_IN variables.

$ sudo nano /etc/csf/csf.conf
TCP_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080"
TCP6_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080"

TCP6_IN is applied for TCP connections using IPv6.

Autostart Jetty at bootup


We want to start Jetty at bootup of the server. This can be done as follows:
$ sudo /usr/sbin/update-rc.d -f jetty defaults

Enabling Large Page/HugePage support for Java / Jetty


As we explained in the Enabling HugePages MariaDB chapter, HugePage support can
optimize performance through fewer processor Translation Lookaside Buffer (TLB) cache
misses. In our MariaDB chapter we enabled HugePage support for the database.

We can do the same for every Java app we run on our server. Beginning with Java 5 you
can start any Java app with the parameter -XX:+UseLargePages.

In our case we will configure the Jetty Java server to use large pages.

Adding the jetty user to our hugepage group


In the Enabling HugePages MariaDB chapter, we already created a hugepage group. If
you have not yet done this, please enable HugePages for the MariaDB server first.

Now we will add the jetty user to the hugepage group.


$ sudo usermod -a -G hugepage jetty

You can view the groups the jetty user is a member of via:
$ sudo groups jetty
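The output should list both the primary jetty group and the supplementary hugepage group:

jetty : jetty hugepage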

Update memlock in /etc/security/limits.conf


Just like in the MariaDB section, we need to tell the Linux OS that the jetty user is not
limited in the amount of memory it can lock:
$ sudo nano /etc/security/limits.conf
Add:
@jetty soft memlock unlimited
@jetty hard memlock unlimited

Now reboot your system

Add UseLargePages parameter to the Jetty startup script

Basically, you have to launch java with -XX:+UseLargePages. Also make sure that the
maximum size of the Java heap (-Xmx) fits in your reserved huge pages.
$ sudo nano /etc/init.d/jetty

Update the JAVA_OPTIONS variable to include -XX:+UseLargePages


JAVA_OPTIONS+=("-Djetty.home=$JETTY_HOME" "-Djetty.base=$JETTY_BASE" "-Djava.io.\
tmpdir=$TMPDIR" "-XX:+UseLargePages" )

Now restart Jetty with the command:


$ sudo /etc/init.d/jetty restart

Forwarding requests from Nginx to our Jetty server


In our server configuration, our Jetty server is only accessible from localhost. This means
we can only access the server from our own server; not from the outside world.

This is because we want to put Nginx in front of our Jetty server. Nginx is connected to
the outside world, and we’ll let Nginx forward the incoming requests to Jetty (for
example, this could be all requests ending with .jsp or another URL pattern which should
be served by a Java application server).

Here is how you can forward the requests to Jetty running on port 8080 inside an existing
nginx configuration server {…} block.
server {
...
# Pass .jsp requests to Jetty
location ~ \.jsp$ {
proxy_pass http://localhost:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
}
...
}

We also set a few headers so Jetty

receives the real IP address of the user visiting the site (X-Real-IP)
receives the URL that is in the users webbrowser bar (Host)
knows whether the website was accessed on https or http (X-Forwarded-Proto)
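After reloading nginx you can do a quick smoke test from the server itself. This is a sketch, assuming this server block answers requests for localhost and that a webapp with a hypothetical index.jsp page is deployed in Jetty:

$ curl -I http://localhost/index.jsp

The response headers should now come from the proxied Jetty server instead of from a static nginx location.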

Improving Jetty Server performance


As Jetty is running on the Java VM; there are multiple ways to improve the performance,
scalability and latency of the server.

To start tuning, we need to be able to look inside the running Java VM. Oracle has a
graphical monitoring tool VisualVM which is included in the Java Developers Kit (JDK).

With VisualVM you can see:

the CPU consumption


the total amount of memory allocated (memory in-use, memory free, …)
the number of threads
the Garbage Collection statistics (a very tunable area for Java apps/servers)

Visual VM: looking inside the VM

Installing Visual VM on your PC


As our server doesn’t have a GUI, we will install Visual VM on our PC/Mac and connect
over the internet to our Jetty server.

To launch Visual VM you’ll need to download and install the latest Java JDK
(http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)

After installation you can launch C:\Program Files\Java\jdk1.8.0_66\bin\jvisualvm.exe (or
a similar path on your Mac).

Enabling Remote monitoring of the Jetty Server Java VM


Visual VM can be connected to our Jetty server VM.

For this we need to enable JMX (Java Management Extensions) inside the Jetty server, as
this protocol is used to exchange information. We will also need to specify a JMX port,
where Visual VM can access the information it needs.

We’ll update our Jetty Base directory /usr/local/jetty-conf as follows:


$ cd /usr/local/jetty-conf
$ sudo java -jar ../jetty-9.3.6/start.jar --add-to-start=jmx

Now let’s add the port and host where the JMX information will be made available:
$ sudo nano start.ini

Add the following after the JMX section:


# ---------------------------------------
# Module: jmx
--module=jmx
-Dcom.sun.management.jmxremote

Now save the file.

Now we will add a Jetty JMX host and port variable in the Jetty launch script:
$ sudo nano /etc/init.d/jetty

Search for a JAVA_OPTIONS line like the one below:


JAVA_OPTIONS=(${JAVA_OPTIONS[*]} "-Djetty.home=$JETTY_HOME" "-Djetty.base=$JETTY\
_BASE" "-Djava.io.tmpdir=$TMPDIR" "-XX:+UseLargePages" "-Xmx512m" "-XX:+DisableE\
xplicitGC" "-XX:+PrintGCDateStamps" "-XX:+PrintGCTimeStamps" "-XX:+PrintGCDetail\
s" "-XX:+PrintTenuringDistribution" "-Xloggc:/var/log/jetty/gc.log" "-XX:+UseG1G\
C" "-XX:+UseStringDeduplication"

Add two parameters to the same line:


"-Djetty.jmxrmihost=<yourhostname>" "-Djetty.jmxrmiport=36806"

yourhostname should be the hostname you’ve chosen when you installed the Linux
distribution. In case you forgot you can find it via:
$ hostname

Please choose a jmxrmiport of your liking. In our example we chose 36806. We’ll need to
edit our firewall configuration so that this port is not blocked for incoming connections:
$ sudo nano /etc/csf/csf.conf
TCP_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"
TCP6_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"

(the second line is for IPv6 connections)

Now restart the firewall via:


$ sudo csf -r

We’re not yet done. Let’s create a jetty.conf file in the etc directory of our Jetty base; this
file is automatically read by the /etc/init.d/jetty startup script we’re using.
$ sudo nano /usr/local/jetty-conf/etc/jetty.conf

Enter the following information in this file:


/usr/local/jetty-conf/etc/jetty-logging.xml
/usr/local/jetty-conf/etc/jetty-started.xml
/usr/local/jetty-conf/etc/jetty-jmx.xml
/usr/local/jetty-conf/etc/jetty-jmx-remote.xml

These xml files can be copied from the Jetty home directory into the etc directory of our
Jetty base:
$ cd /usr/local/jetty-conf/etc
$ sudo cp /usr/local/jetty-9.3.6/etc/jetty-logging.xml .
$ sudo cp /usr/local/jetty-9.3.6/etc/jetty-started.xml .
$ sudo cp /usr/local/jetty-9.3.6/etc/jetty-jmx.xml .
$ sudo cp /usr/local/jetty-9.3.6/etc/jetty-jmx-remote.xml .

You’ll need to make one change to jetty-jmx.xml:


$ sudo nano /usr/local/jetty-conf/etc/jetty-jmx.xml

Uncomment the following section and add the IP address of your server here:
<Call class="java.lang.System" name="setProperty">
  <Arg>java.rmi.server.hostname</Arg>
  <Arg><Your Server IP></Arg>
</Call>

That’s it, now you need to restart your Jetty so the settings take effect.
$ sudo /etc/init.d/jetty restart

Now let’s get back to the VisualVM we started on our PC/Mac. Right click on the Remote
option and choose Add Remote host. Enter your server’s IP address here and click OK.

Now right click on the Remote host you’ve just added, and choose Add JMX Connection.

Add the JMX port after the colon in the Connection textfield and press OK. You should
now be able to connect to the remotely running Jetty server and monitor its status.

Garbage Collection in Java


Garbage Collection is an automatic process in a running Java app. It reclaims used
memory for application objects which are not in use anymore. If the Garbage Collector
didn’t run, the app would eventually take more and more memory.

When memory sizes grow large, the Garbage Collector can start to take more time to
reclaim memory. When the Garbage Collector is busy, the rest of the application is paused
temporarily.

For a server it’s very important to minimize these pauses as much as possible, so they are
not visible to the end user. E.g. a GC pause of 1 second is unacceptable.

Garbage Collection is a very tunable subject in Java. We’ll give our starting
recommendation for a Jetty web server.

There are multiple Garbage Collectors available. For a web server we recommend the
new G1 Garbage Collector which is available since Java 7.

The Garbage-First (G1) collector is a server-style garbage collector, targeted for multi-
processor machines with large memories. It has small pause times and achieves a high
throughput.

We will update the Jetty startup script to enable the G1 collector. Find the
JAVA_OPTIONS parameter and add the following options:

“-XX:+UseG1GC” : enables the G1 garbage collector


“-XX:+PrintGCDetails” : prints the Garbage Collection details to a log file
“-XX:+DisableExplicitGC” : disables garbage collection when a (rogue) application
would ask for it. Garbage collection should never be necessary to be called manually.
“-XX:+PrintGCDateStamps” : prints date information when logging GC details
“-XX:+PrintGCTimeStamps” : prints timestamp information when logging GC
details
“-XX:+PrintTenuringDistribution” : prints detailed information about the region and
age of objects in memory.
“-Xloggc:/var/log/jetty/gc.log” : where to write the garbage collection details logs

Our complete JAVA_OPTIONS:


JAVA_OPTIONS=(${JAVA_OPTIONS[*]} "-Djetty.home=$JETTY_HOME" "-Djetty.base=$JETTY\
_BASE" "-Djava.io.tmpdir=$TMPDIR" "-XX:+UseLargePages" "-Xmx512m" "-XX:+DisableE\
xplicitGC" "-XX:+PrintGCDateStamps" "-XX:+PrintGCTimeStamps" "-XX:+PrintGCDetail\
s" "-XX:+PrintTenuringDistribution" "-Xloggc:/var/log/jetty/gc.log" "-XX:+UseG1G\
C" "-XX:+UseStringDeduplication" "-XX:+PrintStringDeduplicationStatistics" "-Dj\
etty.jmxrmihost=<HOST>" "-Djetty.jmxrmiport=<PORT>")

There is one other G1 collector setting which we kept at its default value, and thus
haven’t had to include:
-XX:MaxGCPauseMillis=200

Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make
its best effort to achieve it. Therefore, the pause time goal will sometimes not be met. The
default value is 200 milliseconds.

You can also see that we’ve added “-XX:+UseStringDeduplication” and
“-XX:+PrintStringDeduplicationStatistics”.

Here is why:

String objects generally consume a large amount of memory in an average application. It
is very much possible that there are multiple instances of the same string in memory.

The String deduplication algorithm can be enabled for the G1 GC collector since Java 8
update 20. If it can find two strings with the same content, it’ll make sure there is only one
underlying character array instead of two character arrays; cutting the memory
consumption in half this way.

-XX:+PrintStringDeduplicationStatistics enables string deduplication logging in the
Garbage Collection log.

Optimising Java connections to the MariaDB database


Connecting your Java apps with a SQL database like MySQL or MariaDB is done via a
JDBC driver. (Java Database Connectivity)

In this chapter we will explain how to configure a connection from Jetty to MariaDB
using a JDBC driver. We will also use a connection pool for managing multiple JDBC
Connections and optimizing scalability and speed.

Two JDBC drivers exist for connecting to MariaDB: MariaDB has its own JDBC
driver, and Oracle’s MySQL JDBC driver works as well.

Currently our recommendation is to use the Oracle MySQL JDBC driver. Generally
speaking it still has more features than MariaDB’s JDBC driver and is very performant.

There is also a difference in license terms: Oracle MySQL JDBC driver is GPL licensed,
while MariaDB JDBC driver is LGPL licensed. Here is an interesting thread about the
implications of the license: http://stackoverflow.com/questions/620696/mysql-licensing-
and-gpl

Below we will install & show configuration examples for both the Oracle and MariaDB
JDBC driver.

Download and install the MariaDB JDBC4 driver


The official site for the MariaDB JDBC driver is at
https://mariadb.com/kb/en/mariadb/about-the-mariadb-java-client/

At this moment the latest version is v1.3.3 and can be downloaded as follows:
$ cd /usr/local/jetty-conf/lib/ext
$ sudo wget https://code.mariadb.com/connectors/java/connector-java-1.3.3/mariad\
b-java-client-1.3.3.jar

Download and install the Oracle MySQL JDBC driver


The official home for the MySQL JDBC driver from Oracle (which can be used with
MariaDB) is located at http://dev.mysql.com/downloads/connector/j/

At this moment the latest version is v5.1.38 and can be downloaded as follows:
$ cd /usr/local/jetty-conf/lib/ext
$ sudo wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.\
38.tar.gz
$ sudo tar xvf mysql-connector-java-5.1.38.tar.gz
$ cd mysql-connector-java-5.1.38/
$ sudo cp *.jar ..
$ cd ..
$ sudo rm -fr mysql-connector-java-5.1.38
$ sudo rm -fr mysql-connector-java-5.1.38.tar.gz

Download and install HikariCP - a database connection pool


Using a database connection pool speeds up the communication between your Java code
and the database server by taking readily available connections from the pool.

Multiple Connection Pool implementations exist. We have chosen HikariCP, because tests
show that it is currently the fastest & most stable choice for connecting to MySQL and/or
MariaDB.

The HikariCP project is hosted at https://github.com/brettwooldridge/HikariCP

Here is how we can download it and install it in the library extension directory of our
Jetty configuration:
$ cd /usr/local/jetty-conf/lib/ext
$ sudo wget https://repo1.maven.org/maven2/com/zaxxer/HikariCP/2.4.3/HikariCP-2.\
4.3.jar

Configuring Jetty to use a HikariCP connection pool with the MariaDB JDBC driver

In the following configuration example we will make jdbc/<MyDataSourceName>
available for use in our Java application. We will add a jetty-env.xml file to the WEB-INF
directory of our Java webapp.
$ sudo nano /home/<yourwebsitehomedir>/web/root/WEB-INF/jetty-env.xml

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">

<Configure id="<yourwebsitehost>" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="virtualHosts">
<Array type="java.lang.String">
<Item>www.<yourwebsitehost>.com</Item>
</Array>
</Set>
<New id="HikariConfig" class="com.zaxxer.hikari.HikariConfig">
<Set name="maximumPoolSize">100</Set>
<Set name="dataSourceClassName">org.mariadb.jdbc.MySQLDataSource</Set>
<Call name="addDataSourceProperty">
<Arg>url</Arg>
<Arg>jdbc:mariadb://localhost:3306/<your_database_name></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>user</Arg>
<Arg><your_database_user></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>password</Arg>
<Arg><your_database_password></Arg>
</Call>
</New>

<New id="dataSource" class="org.eclipse.jetty.plus.jndi.Resource">


<Arg><Ref id="<yourwebsitehost>"/></Arg>
<Arg>jdbc/<MyDataSourceName></Arg>
<Arg>
<New class="com.zaxxer.hikari.HikariDataSource">
<Arg><Ref refid="HikariConfig" /></Arg>
</New>
</Arg>
</New>
</Configure>

You can see we’re using a specific MariaDB DataSource class
(org.mariadb.jdbc.MySQLDataSource) and a specific JDBC URL
(jdbc:mariadb://localhost:3306/<your_database_name>), as we are using the MariaDB
JDBC driver.
Configuring Jetty to use a HikariCP connection pool with the Oracle MySQL JDBC driver

Alternatively we can also configure the Oracle MySQL Datasource and JDBC URL:
$ sudo nano /home/<yourwebsitehomedir>/web/root/WEB-INF/jetty-env.xml

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">
<Configure id="wimsbios" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="virtualHosts">
<Array type="java.lang.String">
<Item>www.<yoursite>.com</Item>
</Array>
</Set>
<New id="HikariConfig" class="com.zaxxer.hikari.HikariConfig">
<Set name="maximumPoolSize">100</Set>
<Set name="dataSourceClassName">com.mysql.jdbc.jdbc2.opt\

Telegram Channel @nettrain


ional.MysqlDataSource</Set>

<Call name="addDataSourceProperty">
<Arg>url</Arg>
<Arg>jdbc:mysql://localhost:3306/<your_database_\
name></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>user</Arg>
<Arg><your_database_user></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>password</Arg>
<Arg><your_database_password></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>useServerPrepStmts</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>cachePrepStmts</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>prepStmtCacheSqlLimit</Arg>
<Arg>2048</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>prepStmtCacheSize</Arg>
<Arg>500</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>cacheServerConfiguration</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>useLocalTransactionState</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>rewriteBatchedStatements</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>maintainTimeStats</Arg>
<Arg>false</Arg>
</Call>
</New>

<New id="dataSource" class="org.eclipse.jetty.plus.jndi.Resource\


">
<Arg><Ref id="<yourwebsitehost>"/></Arg>
<Arg>jdbc/<MyDataSourceName></Arg>
<Arg>
<New class="com.zaxxer.hikari.HikariDataSource">
<Arg><Ref refid="HikariConfig" /\
></Arg>
</New>
</Arg>

</New>
</Configure>

We have added some MySQL JDBC driver options to optimize performance/scalability.
With stress tests you can further tune these settings for your environment:

prepStmtCacheSize: 500 - sets the number of prepared statements that the MySQL
JDBC driver will cache per connection
prepStmtCacheSqlLimit: 2048 - This is the maximum length of a prepared SQL
statement that the driver will cache
cachePrepStmts: true - enable the prepared statement cache
useServerPrepStmts: true - enable server-side prepared statements
cacheServerConfiguration: true - caches the MariaDB server configuration in the
JDBC driver
useLocalTransactionState: true
rewriteBatchedStatements: true
maintainTimeStats: false

Configuring Jetty logging and enabling logfile rotation


To enable logging in Jetty we need to enable the logging module in our start.ini file in our
Jetty base configuration:
$ sudo nano /usr/local/jetty-conf/start.ini

Add
--module=logging

Now we can add a jetty-logging.properties inside a resources directory which will be
automatically read by Jetty during startup:
$ cd /usr/local/jetty-conf
$ sudo mkdir resources
$ cd resources
$ sudo nano jetty-logging.properties

Add:
jetty.logs=/var/log/jetty
org.eclipse.jetty.LEVEL=INFO

Now we will configure the logrotate configuration for Jetty


$ sudo nano /etc/logrotate.d/jetty
/var/log/jetty/*.log {
daily
missingok
rotate 10
size=100M
compress
delaycompress
notifempty
sharedscripts
postrotate
sudo /etc/init.d/jetty restart
endscript
}

The postrotate section specifies that Jetty will be restarted after the log files have been rotated.

By using sharedscripts we are ensuring that the post rotate script doesn’t run on every
rotated log file but only once.

You can view the status of the log rotation via the command:
$ cat /var/lib/logrotate/status

You can run the log rotation manually with:


$ sudo logrotate --force /etc/logrotate.d/jetty

Using a CDN
A CDN or Content Delivery Network can be used to speed up the
performance of your website(s).

A CDN is a network of servers which are located around the world. They
cache static resources of your website, like images, stylesheets, javascript
files, downloads etc.

A CDN does not replace your server, but is a service you can buy &
configure separately. It will offload much of the traffic your server receives
for static content to the Content Delivery Network.

A CDN can improve the load time performance of your website because:

it offloads traffic from your server (less load on your own server)
the static resources will be fetched from the CDN server closest to the user, which
will probably be closer than your web hosting server, because a CDN has many
servers located around the world. There will be less latency for these browser
requests for static resources.

Choosing a CDN
There are quite a few CDN providers in the marketplace. A good overview
is listed at http://www.cdnplanet.com/. Pricing can differ quite a lot, so we
advise you to take your time to choose a CDN.

One term you’ll come across is POP, which stands for Point of Presence.
Depending on where your site visitors are coming from (Europe, USA, Asia,
Australia, …) you should investigate whether the CDN provider has one or
more POPs in that region.

For example, if your target market is China, you should use one of the CDNs
which has a POP in China.

There are two different types of CDNs: Push and Pull based. With most
CDN providers you can configure both.

With a Push CDN, the user manually uploads all the resources to the CDN
server. He/she then links to these resources from, e.g., the webpages on their
site/server.

A pull CDN takes another approach. Here the resources (images, javascript,
…) stay on the user’s server. The user links to the resources via the Pull CDN
URL. When the Pull CDN URL is asked for a file, it’ll fetch this file from
the original server (‘pulling’ the file) and serve it to the client.

The Pull CDN will then cache the resource until it expires. It can cause some
extra traffic on the original server if the resources would expire too soon (eg.
before being changed). Also, the first person asking for a certain resource
will have a slower response because the file is not yet cached yet (or if it was
expired).

Which type should you use? The Pull CDN is easier to use as you don’t have to upload new resources manually. If the resources remain static or if your site has a minimal amount of traffic, you could choose a Push CDN, because there the content stays available and is never re-pulled from the original server.

Here is a list of features we think you should take into consideration when
choosing a CDN:

Support for https - if you have a secure website, the static resources should also be loaded over https so that everything is secure. This means the CDN also has to support this
Support for SPDY 3.1 protocol - SPDY is a Google protocol to achieve higher transmission speeds
Support for HTTP/2 protocol - HTTP/2, the next generation HTTP standard, is available since 2015 and supersedes the SPDY protocol.
Support for Gzip compression - compresses javascript and stylesheets before sending them to the browser, reducing the size of the response significantly

Support for using your custom DNS CNAMEs (eg. cdn.
<mywebsite>.com should redirect traffic to the CDN servers)
Support for Push and Pull Zones
Easy to use Dashboard for configuring your CDN
Pricing: check what the bandwidth charges are (eg. x amount of GB costs x USD cents/eurocents)
Performance

Our recommendations for stable, good performing and cheap CDNs are:

KeyCDN
MaxCDN

They both have all of the above features available at a very attractive price point.

Analysing performance before and after enabling a CDN

We recommend running the Google PageSpeed analysis before enabling the CDN integration. This way you have a baseline and can see how much performance improves after enabling the CDN.

We recommend keeping track of the results in a document, so you can update it whenever you update your CDN configuration.

Here is the URL to the Google PageSpeed service:

https://developers.google.com/speed/pagespeed/insights/

Configuring a CDN service


In the section below we have ordered a CDN service at KeyCDN.

We will configure a Pull Zone via the KeyCDN dashboard.

In the dashboard, choose the Zones tab.

Create a New Zone.


Click on Create a new zone.

Choose a zone name; this name will be part of the CDN URL generated by KeyCDN. Choose Pull as the Zone Type.

For now we won’t yet configure the advanced features available under Show
Advanced Features. They include interesting performance related settings
like usage of Gzip, SPDY, HTTP/2 and https usage which we will cover in
our HTTPS chapter.

Pull Zone Settings

The Origin URL is the URL to your server where the CDN will find your
website resources (eg. CSS, Javascript, …)

For example if your website is hosted at www.mywebsite.com then you should fill in http://www.mywebsite.com (or https://www.mywebsite.com if you have configured https - which we will cover in our HTTPS chapter)

For now leave the other settings at their default values. We’ll cover them in the tuning section. When you save the zone a URL will be created of the form <name of zone>-*.kxcdn.com. For example let’s suppose this is mywebsite-93x.kxcdn.com.

Then your website resources will be downloadable via that URL; eg. a request to http://mywebsite-93x.kxcdn.com/images/logo.png will pull the resource from your Origin server at http://www.mywebsite.com/images/logo.png and cache it at the CDN server (mywebsite-93x.kxcdn.com).
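
You can quickly verify the pull zone works with curl. The first request for a resource is typically fetched from your origin server, subsequent requests are served from the CDN cache (KeyCDN normally reports this in an X-Cache response header):
$ curl -I http://mywebsite-93x.kxcdn.com/images/logo.png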

Now you have enabled your Pull CDN zone, you can start using it on your site. You’ll need to replace all URLs to static resources (images, css, javascript) that reference your www.mywebsite.com with new URLs that start with mywebsite-93x.kxcdn.com.

If you’re using a blogging platform like Wordpress, you can use the W3
Total Cache plugin to automate this process.

Using a cdn subdomain like cdn.mywebsite.com

Now if you’re like us, you don’t really like URLs like mywebsite-93x.kxcdn.com because they look ugly. Also, for Search Engine Optimization reasons, Google will not view the kxcdn.com domain as part of your mywebsite.com domain.

Luckily it is possible to use a subdomain on your website which will point to the URL of the Pull Zone we created in the previous section. For example let’s see how we can make http://cdn.mywebsite.com point to http://mywebsite-93x.kxcdn.com without any of your visitors (or Google) ever seeing the mywebsite-93x.kxcdn.com in their browser.

The easiest way to do this is to create a DNS CNAME record at your DNS provider. If you’ve chosen DNSMadeEasy like we explained in the DNS chapter, this is quite easy.

Login to the DNS Made Easy control panel:

Choose the domain <mywebsite.com>
Click on the plus icon in the CNAME Records table

Fill in the following values:

Name: cdn
Alias to: mywebsite-93x.kxcdn.com. (the dot at the end is needed!)
Leave the Time to live (TTL) at its default value. A higher value means that DNS servers will cache this alias longer, which will result in fewer queries to DNSMadeEasy (which you’re paying for). A higher value will also result in changes propagating slower.
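
Once the record has propagated you can verify it with dig; the output should show your pull zone hostname:
$ dig +short cdn.mywebsite.com CNAME
mywebsite-93x.kxcdn.com.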

Tuning the performance of your CDN

Enable Gzip

With KeyCDN it is possible to compress the resources that are hosted on your CDN with Gzip. You can easily enable this setting in the advanced options of your zone.

Login at KeyCDN
Go to Zones
Click on the Managed - Edit button of your zone
Click on Show Advanced Features
Set Gzip to Enabled
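
A quick way to verify Gzip is active is to request a resource from the CDN with curl while announcing Gzip support (the path /css/style.css is just an example; use a real stylesheet of your site). The response headers should include Content-Encoding:
$ curl -sI -H "Accept-Encoding: gzip" http://cdn.mywebsite.com/css/style.css | grep -i content-encoding
Content-Encoding: gzip
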
Set expiration of resources

By default KeyCDN will not modify or set any cache response headers for instructing browsers to expire resources (images, …). This means that whatever cache response headers are set by the Origin server stay intact (in our case this would be what nginx sets).

You can also override this behavior via the KeyCDN management dashboard
in the Show Advanced Features section of your zone.

We recommend setting the value to the maximum value allowed (1 year) if possible for your website. If you have resources which change a lot, you can run into problems where visitors of your site keep seeing the cached version in their browser cache. To circumvent this problem the image(s) should be served from a different URL in those cases. This can be automated via the Google PageSpeed plugin which we will cover later.
Setting the canonical URL Location

When a resource is available via your Pull CDN URL, eg. http://cdn.mywebsite.com/images/logo.png, it is of course also available via the direct URL http://www.mywebsite.com/images/logo.png. This may create confusion for the Google and Bing search engines about which URL they should index, because the resource is duplicated on two different URLs.

By default Google will use the URL it thinks is best. As we are linking to
cdn.mywebsite.com in our HTML code, Google will likely cache the images
with the cdn.mywebsite.com domain. We recommend you to use this
approach.

If you would like Google to use the image available on your origin server (http://www.mywebsite.com/images/logo.png), you can try the following method; but according to SEO experts it gives no guarantee of success.

When KeyCDN responds with the resource it can add a canonical Link
header to the HTTP Response. The canonical URL will be your origin URL
(eg. http://www.mywebsite.com/images/logo.png). This way Google and
Bing know which version of the resource is the ‘original’ one and will only
index these in eg. their image search.
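
For reference, a canonical Link response header follows the standard HTTP syntax; with our example image it would look like this:
Link: <http://www.mywebsite.com/images/logo.png>; rel="canonical"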

You can enable this feature via the KeyCDN management dashboard in the
Show Advanced Features section of your zone:

KeyCDN canonical URL location

Configuring robots.txt

In the same Advanced Features section you can also enable a robots.txt file instructing Google and other search engines not to crawl any content at cdn.mywebsite.com. We don’t recommend enabling this unless you know very well what you’re doing. For example, if you have enabled the Canonical Header option in the previous section, you shouldn’t enable a robots.txt which disallows Google to fetch the images and read the canonical URL.
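
For reference, a robots.txt that disallows all crawling of the CDN hostname would look like this (shown only so you can recognize it; as said, think twice before enabling it):
User-agent: *
Disallow: /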

HTTPS everywhere
In this chapter we’ll first explain the reasons why it is a good idea to secure your site.
Then we will show you the exact steps on how to make your site secure, starting from
ordering a certificate to testing your https website.

Do you need a secure website?


You can make your site secure by running it over HTTPS, which stands for Hypertext
Transfer Protocol Secure. Using https protects the integrity and confidentiality of your
user’s data.

Here are some reasons why this is important:

HTTP is an insecure protocol, which means everything that is sent between the browser and your server is in plain text and readable by anyone tapping the internet connection. This could be a government agency (eg. NSA, …) or someone using the same unencrypted free WIFI hotspot as your user.
HTTPS on the other hand encrypts the communication between the browser and the server. As such nobody can listen in on your users’ “conversations”. A https certificate for a website also proves that users communicate with the intended website and not a fake website run by malicious people.
Since the summer of 2014, Google has publicly said that having a https site can give a small ranking boost in the search engine results.

It is also vital that you secure all parts of your website. This includes all pages, all resources (images, javascript, css, …), all resources hosted on a CDN, …

If you would only use https for eg. a forum login or a credit card details page, your website is still ‘leaking’ sensitive information hackers can use.

More in detail this could be a session identifier or cookies, which are typically set after a login. A hacker could reuse this information to hijack the user’s session and be effectively logged in without knowing any password.

In October 2010 the Firesheep plugin for the Firefox browser was released which
intercepted unencrypted cookies from Twitter and Facebook, forcing them to go https
everywhere.

We also recommend to only offer an https version of your site and redirect any users accessing the http version to the https version. We’ll explain how to do this technically in the next sections.

Buying a certificate for your site


When the user’s browser requests an HTTPS connection to a webpage on your server, the server needs to send back its TLS certificate. This initial exchange to set up a secure connection is called a TLS / SSL handshake.

The browser will do the necessary checks to see if the certificate is still valid, is for the
correct site (eg. https://www.myexamplewebsite.com) and more.

To acquire a certificate, you’ll need to buy one from a certificate authority. A certificate
authority is a company which is trusted by the major browsers to issue valid certificates.
Well known names include Comodo, VeriSign and more.

There are a few things you need to know about the different certificate types that exist
before you can buy one.

Standard certificate
A standard certificate can be used for a single website domain. Eg. if all your content is
hosted on www.mywebsite.com, you could buy a standard certificate which is valid for
www.mywebsite.com. Note this doesn’t include any subdomains which you may also use.

For example cdn.mywebsite.com is not included. Browsers will issue warnings to the user if you try to use a certificate which is not valid. You could buy a second certificate for the subdomain cdn.mywebsite.com to solve this problem.

Wildcard certificate.
A wildcard certificate is still only valid for one top domain (eg. mywebsite.com), but it
also supports all subdomains (*.mywebsite.com); hence the name wildcard certificate.

This kind of certificate is usually a little bit more expensive than a standard certificate.
Depending on the price and on the number of subdomains you’re going to use you’ll need
to decide between a standard and wildcard certificate.

Other types of certificates exist (eg. Multidomain), but they are usually pretty expensive, so we’ll not cover them here.

Public and private key length


Certificates work with public and private keys to encrypt the communications running
over https. Anything encrypted with the public key can only be decrypted by someone
who has the private key. Anything encrypted with the private key can only be decrypted
by the public key.

When a browser and a web server communicate with each other, the private key needs to
remain in a secure location on the web server. The public key is intended to be distributed
to the browser, so it is able to decrypt the information which was encrypted with the
private key.

To counter brute-force attacks that are trying to acquire the private key in use, the key
needs to be big enough. In the past 1024 bit keys were generally created by the certificate
authorities. Nowadays you should use 2048 bit keys, because 1024 bit keys have become too weak. (we’ll guide you through the technical details later)

Extended Validation certificates and the green bar in the browsers


Earlier we said that a https certificate for a website proves that users communicate with
the intended website and not a fake website run by malicious people.

Of course the Certificate Authority plays a vital role in this: when you order a certificate
they should verify you’re the owner of the domain name.

With a normal certificate the validation is quicker and less extensive than when you order an EV (Extended Validation) certificate. An EV certificate is more expensive due to the extended manual verification of the site owner.

Browsers do place a lot more trust in an EV certificate. They will display the name of the
company inside a green bar in the browsers address bar. For example:

EV Certificate shows a green bar in the browser

An EV certificate could be interesting for an ecommerce site because it gives your user a
greater trust in your site which could lead to more sales.

There are some restrictions with EV certificates though: only companies can order an EV certificate, individuals cannot. EV certificates are always for one domain only; there are no wildcard EV certificates at this moment.

Buying the certificate


You can buy a certificate from a Certificate Authority. You should choose a Certificate
Authority which is trusted by both current and older browsers/operating systems for
maximum compatibility. These include, but are not limited to:

GlobalSign
Network Solutions
Symantec
Thawte
Trustwave

Comodo

You can view daily updated reports of the market shares of the leading Certificate
Authorities at http://w3techs.com/technologies/overview/ssl_certificate/all

Because of better pricing we have chosen to buy a certificate from Comodo. They also
support generating 2048 bit certificates for better security.

Many companies resell certificates from the above Certificate Authorities. They are the
exact same certificates, but come with a reduced price tag. We recommend you to shop
around.

One such reseller we recommend and use is the SSLStore which we will use in the
example ordering process below.

Generate a Certificate Signing request


When ordering a Certificate from a Certificate Authority you’ll need to create a Certificate Signing request (CSR).

A Certificate Signing request is a file with encoded text that is generated on the server where the certificate will be used. It contains various details like your organization name, the common name (= domain name), email address, locality and country. It also contains your public key, which the Certificate Authority will put into your certificate.

When we create the Certificate Signing request below we will also generate a private key. The Certificate Signing request will only work with the private key that was generated with it, and the certificate you’ll buy will need that private key to work.

Here is how you can create the Certificate Signing request on your server:
$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout myprivatekey.key -out cer\
tificate-signing-request.csr

Let’s explain the openSSL parameters in detail:

req: activates the part of openssl that deals with certificate signing requests
-nodes: no DES, stores the private key without protecting it with a passphrase. While this is not considered to be best practice, many people do not set a passphrase or later remove it, since services with passphrase protected keys can not be auto-restarted without typing in the passphrase
-newkey: generate a new private key
rsa:2048: 1024 is the default bit length of the private key. We will use 2048 bit keys because our Certificate Authority supports this and it is required for certificates which expire after October 2013

-sha256: used by certificate authorities to generate SHA-2 certificates (which are more secure than SHA-1)
-keyout myprivatekey.key: store the private key in a file called myprivatekey.key (in
PEM format)
-out certificate-signing-request.csr: store the certificate request in a file called
certificate-signing-request.csr

When launching the above command you’ll be asked to enter information that will be
incorporated into your certificate request.

There are quite a few fields but you can leave some blank. For some fields there will be a
default value (displayed in […] brackets). If you enter ‘.’, the field will be left blank.

Country Name (2 letter code) [AU]: <2 letter country code> eg. BE for Belgium
State or Province Name (full name) [Some-State]
Locality Name (eg. city) []
Organization Name (eg. company) [Internet Widgits Pty Ltd]: Wim Bervoets
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: this is an important setting
which we will discuss below.
Email Address []: email address which will be in the certificate and used by the
Certificate Authority to verify your request . Make sure this email is valid & you
have access to it. The email address should also match with the email address in the
DNS contact emails used for the particular domain you’re requesting a certificate for.

The Common Name should be the domain name you’re requesting a certificate for. Eg.
www.mywebsite.com

This should include the www or the subdomain you’re requesting a certificate for.

If you want to order a wildcard certificate which is valid for all subdomains you should
specify this with a star; eg. *.mywebsite.com

OpenSSL will now ask you for a few ‘extra’ attributes to be sent with your certificate request:

a challenge password []: leave empty
an optional company name []: leave empty

Now we can download the freshly generated csr file and use it when ordering our SSL
certificate at the SSLStore.
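
Before uploading the CSR, you can sanity check its subject and the size of the generated private key with openssl:
$ openssl req -in certificate-signing-request.csr -noout -subject
$ openssl rsa -in myprivatekey.key -noout -text | head -1
Private-Key: (2048 bit)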

Ordering a certificate
Let’s suppose we want a Comodo Wildcard certificate. Go to https://www.thesslstore.com/wildcardssl-certificates.aspx?aid=52910623 and click on the Add To cart button next to ‘Comodo EssentialSSL Wildcard certificate’.

Next you’ll be asked for your billing details and credit card information. After completing
these steps an email will be sent with a link to the Configure SSL service of Comodo
(together with a PIN)

Comodo SSL Configuration wizard

Here you’ll also need to provide the Certificate Signing request you have generated in the
previous section.

After completing these steps, your domain will be validated by Comodo. Depending on
the type of certificate this will take a few hours to one week to complete.

As we didn’t choose an Extended Validation certificate, this validation was quick and we
soon received a ‘Domain Control Validation’ email with another validation code for our
certificate we requested.

This email was sent to the DNS contacts listed for our domain.

After entering the validation code on the Comodo website, the certificate was emailed to
our email address.

The certificate zip file actually contained a few different certificates:

A root Certificate Authority certificate (AddTrustExternalCARoot.crt)
2 Intermediate Certificate Authority certificates (COMODORSAAddTrustCA.crt and COMODORSADomainValidationSecureServerCA.crt)
The certificate for your domain.

You may wonder why there are so many different certificates included and what you need
to do with it.

To explain this, we’ll need to cover what SSL Certificate chains are.

Browsers and devices connecting to secure sites have a fixed list of Certificate Authorities
they trust - the so called root CAs. The other kind of CAs are the intermediate Certificate
Authorities.

If the certificate of the intermediate CA is not trusted by the browser or device, the
browser will check if the certificate of the intermediate CA was issued by a trusted CA
(this goes on until a trusted (root) CA is found).

This chain of SSL certificates from the root CA certificate, over the intermediate CA
certificates to the end-user certificate for your domain represents the SSL certificate
chain.

You can view the chain in all popular browsers, for example in Chrome you can click on
the padlock item of a secure site, choose Connection and then Certificate data to view the
chain:

HTTPS Certificate chain

For the Comodo certificate the chain is as follows:

your domain certificate
Comodo RSA Domain Validation Secure Server CA
Comodo RSA Certification authority
AddTrust External CA Root

In the next section we’ll make use of the certificates as we install them in our nginx
configuration.

Configuring nginx for SSL


First we will combine our domain certificate with all the intermediary certificates (except
the root CA certificate).

We do this for the following reasons:

the browser will receive the full certificate chain (except for the root certificate, but the browser already has this one built in).
Some browsers will display warnings when they can not find a trusted CA certificate
in the chain. This can happen if the chain is not complete.
Other browsers will try to download the intermediary CA certificates; this is not
good for the performance of your website because it slows down setting up a secure
connection. If we combine all the certificates and configure nginx properly this will
be much faster.

Note: In general a combined SSL certificate with fewer intermediary CAs will still perform a little better.

You can combine the certificates on your server, after you have uploaded all the
certificate .crt files with the following command:
$ cat <your_domain>.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAdd\
TrustCA.crt > yourdomain.chained.crt

yourdomain.chained.crt can now be configured in nginx:

You’ll need to add the following configuration inside a server {…} block in the nginx
configuration. Please refer to our Configuring your website domain in nginx section.
$ sudo nano /usr/local/nginx/conf/conf.d/mywebsite.com.conf

server {
server_name www.mywebsite.com;
# SSL config
listen <ipv4 address>:443 default_server ssl http2;
listen [ipv6 address]:443 default_server ssl http2;

ssl_certificate /usr/local/nginx/conf/<yourdomain.chained.crt>;
ssl_certificate_key /usr/local/nginx/conf/<yourprivate.key>;

...
}

In this configuration we tell nginx to listen on an IPv4 and IPv6 address on the default HTTPS port 443. We enable ssl and http2.

HTTP/2 is the next generation standardized HTTP v2 protocol. It is based on the SPDY
Google specification which manipulates HTTP traffic, with the goal to reduce web page
load latency. It uses compression and prioritizes and multiplexes the transfer of a web
page so that only one connection per client is required. (eg. Getting the html, images,
stylesheets and javascript files all happens with a connection that is kept open).

You can check an example of what kind of performance improvements are possible with HTTP/2 on the Akamai HTTP2 test page.

HTTP/2 is best used with TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.

Now restart the nginx server. Your site should now be accessible via https.
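
A quick first check from the command line shows which protocol version and cipher were negotiated during the handshake:
$ echo | openssl s_client -connect www.mywebsite.com:443 2>/dev/null | grep -E "Protocol|Cipher"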

We recommend you now run an SSL analyzer such as SSLLabs. You’ll get a security score and a detailed report of your SSL configuration:

HTTPS Certificate chain

To get an A+ score the default nginx SSL configuration shown above is not enough. More
likely you’ll receive one or more of the following warnings:

HTTPS Certificate chain

Let’s tune the https configuration in the next section!

Getting an A+ grade on SSLLabs.com


Disabling http access to your server

To make your users use the https version of your site by default, you’ll need to redirect all
http traffic to the https protocol. Here is an example server nginx configuration which
does this:
server {
server_name www.yourwebsite.com;
listen <ipv4 address>:80; # Listen on the HTTP port
listen [<ipv6 address>]:80; # Listen on the IPv6 address and HTTP port 80

return 301 https://$server_name$request_uri;
}
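
You can verify the redirect with curl; the response should be a 301 pointing at the https URL:
$ curl -sI http://www.yourwebsite.com/ | grep -iE "^HTTP|^Location"
HTTP/1.1 301 Moved Permanently
Location: https://www.yourwebsite.com/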

Fixing ‘Server configuration does not include all intermediate certificates’

Actually you should not be receiving this error, as we previously combined all the
intermediate certificates with our domain certificate. If the SSLLabs test still reports this
error, then you should revisit the previous section.

If you don’t fix this error users may receive strong browser warnings and experience slow
performance.

HTTPS Certificate chain problems

Another tester which analyzes the intermediate certificates is https://www.wormly.com/help/ssl-tests/intermediate-cert-chain
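
You can also dump the certificates your server actually sends from the command line; every intermediate certificate you combined should appear in the output:
$ openssl s_client -connect www.mywebsite.com:443 -showcerts < /dev/null
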
Session can be vulnerable to BEAST / POODLE attacks / SSLv3 is enabled on the server

Sometimes security issues are found in the security protocols or ciphers used for securing
websites. Some of these issues get an official name like BEAST or POODLE attacks.

By using the latest version of OpenSSL and properly configuring the nginx SSL settings
you can mitigate most of these issues.

https://wiki.mozilla.org/Security/Server_Side_TLS has an up-to-date list of configuration settings to be used on your server. Actually there are three sets of configurations: a Modern, an Intermediate and an Old configuration.

We recommend to at least use the Intermediate or the Modern version as they give you
higher levels of security between browsers/clients and your server. The modern version is
the most secure, but doesn’t work well with old browsers.

Here are the minimum versions supported for the Modern & Intermediate configuration.

Modern: Firefox 27, Chrome 22, IE 11, Opera 14, Safari 7, Android 4.4, Java 8
Intermediate: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8,
Android 2.3, Java 7

We’ll use the online tool at https://mozilla.github.io/server-side-tls/ssl-config-generator/
to generate an ‘Intermediate’ configuration.

SSL Config generator

Choose nginx, Intermediate and fill in the nginx version & OpenSSL version. The nginx
configuration generated should be like the one in the screenshot above.

In summary these settings will:

disable the SSL 3.0 protocol (even IE6 on Windows XP supports the successor TLSv1 with Windows update) - clients are forced to use at least TLSv1
order the SSL ciphersuites nginx/OpenSSL supports server side with the most secure at the beginning of the list. This will make sure clients/servers will try to use the most secure options they both support.
specify that server ciphers should be preferred over client ciphers when using the TLS protocols (to fix the BEAST SSL attack)
enable OCSP stapling (explained in the next chapter)
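
For reference, the core of such a generated configuration looks roughly like this (a sketch only; take the exact cipher list from the generator, as it changes over time):
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers '<cipher list from the generator>';
ssl_prefer_server_ciphers on;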

For Diffie-Hellman based ciphersuites an extra parameter is needed:


ssl_dhparam /usr/local/nginx/conf/dhparam.pem;

This file can be created by the following OpenSSL command:

$ sudo openssl dhparam -out /usr/local/nginx/conf/dhparam.pem 2048
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
................................................................................\
................

2048 means the parameter is 2048 bits in size.


OCSP Stapling

OCSP stands for Online Certificate Status Protocol. Let’s explain the context a bit.

Certificates issued by a Certificate Authority can be revoked by the CA. For example because the customer’s private key was lost or stolen, or the domain was transferred to a new owner.

The Online Certificate Status Protocol (OCSP) is one method for obtaining certificate
revocation information. When presented with a certificate, the browser asks the issuing
CA if there are any problems with it. If the certificate is fine, the CA can respond with a
signed assertion that the certificate is still valid. If it has been revoked, however, the CA
can say so by the same mechanism.

OCSP has a few drawbacks:

it slows down new HTTPS connections. When the browser encounters a new certificate, it has to make an additional request to a server operated by the CA.
additionally, if the browser cannot connect to the CA, it must choose between two undesirable options:
- It can terminate the connection on the assumption that something is wrong, which decreases usability.
- It can continue the connection, which defeats the purpose of doing this kind of revocation checking.

OCSP stapling solves these problems by having the site itself periodically ask the CA for
a signed assertion of status and sending that statement in the handshake at the beginning
of new HTTPS connections.

To enable OCSP stapling in nginx, add the following options:


ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /usr/local/nginx/conf/ca.root.crt;

The ssl_trusted_certificate file should only contain the root Certificate Authority
certificate. In our case, we created this file like this:
cat AddTrustExternalCARoot.crt > ca.root.crt

When nginx asks for the revocation status of your certificate, it’ll ask the CA this in a
secure manner using the root CA certificate (ca.root.crt in our case).

To validate OCSP stapling is working, run the following command:
$ openssl s_client -connect www.<yourwebsite>.com:443 -tls1 -tlsextdebug -status\
< /dev/null| grep OCSP

It should give back:


OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response

when it is working.

“OCSP response: no response sent” means it is not active yet.

You may need to rerun this command a few times if you just recently started nginx.

If OCSP is not working correctly nginx will also issue the following warning in its error
log file (/var/log/nginx/error.log)
2015/12/12 04:47:03 [error] 1472#0: OCSP_basic_verify() failed (SSL: error:27069\
065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable\
to get issuer certificate) while requesting certificate status, responder: gv.s\
ymcd.com

Implementing HTTP Strict Transport Security

Suppose a user types the URL of your website into a browser without specifying the https or http protocol. The browser will then likely choose to load the site via http (the insecure version). Even if you have configured your server to redirect all http requests to https, the user will talk to the non-encrypted version of the site before being redirected.

This opens up the potential for a man-in-the-middle attack, where the redirect could be
exploited to direct a user to a malicious site instead of the secure version of the original
page.

The HTTP Strict Transport Security feature lets a website inform the browser that it should never try to load the site using HTTP, and that it should automatically convert all attempts to access the site using HTTP to HTTPS requests.

In your nginx configuration you’ll need to add the following line:


add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

This adds an HTTP Strict-Transport-Security header, specifying that all subdomains should also be run on https and that the browser should not try the http version for one year.
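
After reloading nginx you can confirm the header is being sent:
$ curl -sI https://www.yourwebsite.com/ | grep -i strict-transport-security
Strict-Transport-Security: max-age=31536000; includeSubDomains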

Optimizing SSL

To further optimize the SSL performance of nginx we can enable some caches.
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;

The ssl_session_cache will create a shared cache between all the nginx worker processes. We have reserved 20MB for storing SSL sessions (for 10 minutes). According to the nginx documentation 1MB can store about 4000 sessions, so 20MB gives room for roughly 80,000 sessions. You can reduce or increase the size of the cache based on the traffic you’re expecting.

Enabling SSL on a CDN


When serving your site over https, you need to make sure that all resources used by your HTML are also served via HTTPS (eg. images, javascript, stylesheets).

When you’re using a CDN to host your resources, you’ll need to configure the SSL
settings in your CDN Account.

We’re going to show you how you can enable HTTPS on a KeyCDN server. The process
will be similar for eg. MaxCDN.

For setting up a CDN, take a look at our Chapter Using a CDN.

Go to KeyCDN and login to your account.

Click on Zones and click on the Manage button -> Edit for the zone you want to configure.
Click on Show Advanced features

The settings we need to configure are:

SSL
Custom SSL certificate
Custom SSL Private key
Force SSL

Enabling SSL on a CDN

As we want to configure https://cdn.<yourdomain.com>, we choose the Custom SSL option.

In the Custom SSL Certificate, we need to include our domain certificate and the
intermediate CA certificates.

You should copy the text from our chained certificate file at
/usr/local/nginx/conf/<yourdomain.chained.crt>. Below you can see the exact syntax to
use.

Custom SSL certificate on a CDN

You’ll also need to provide your private key in the Custom SSL Private Key section. This
key is available at /usr/local/nginx/conf/<yourprivate.key>

Custom SSL private key

Lastly enable the setting to redirect cdn.<yourwebsite.com> requests to https:

Redirect CDN URLs to https

Make sure to use a https URL for your Origin URL too (eg.
https://www.yourwebsite.com)

Use a https URL for your Origin URL

Please note that most CDNs that support SSL implement it via Server Name Indication, which means multiple certificates can be presented to the browser on 1 single IP address. This reduces their need for dedicated IP addresses per customer, which lowers the cost significantly. The only (small) downside of SNI is that it isn’t supported by IE6 on Windows XP, meaning those users will see a certificate warning.

Enabling SPDY or HTTP/2 on a CDN


As we have enabled https on our CDN, we can now also enable the Google SPDY
protocol or HTTP/2 which will speed up the https communications significantly.

Enabling SPDY on a CDN

Configure Email For Your Domain
In this chapter we’ll explain how you can configure email for your domain;
eg. being able to send emails from an email account like
‘yourname@yourdomain.com’

Selfhosted Email Servers vs External Mail Systems


To send emails from an @yourdomain.com address you’ll either need to set up email server software yourself or use a third party provider.

We recommend going with a third party provider. Here is why:

Installing your own email server takes extra server resources for the
processing and storage of your emails. It also generates extra server
administration work for you. Eg. you’ll need to make sure that SPAM is
properly blocked while allowing legitimate messages to go through. You’ll
also want a good uptime of your email server.

Another reason to go with a third party provider is that when you change servers, or your website server crashes under load, your email service will continue to work.

Below is a list of the most used third party email providers.

1. Google Apps Business edition ($5/email account/month with 30GB storage)
2. Zoho Mail (free for the first 10 users with 5GB email + 5GB file storage)
3. Outlook365 mail
4. FastMail Personal Enhanced plan ($40/year with 15GB email + 5GB file storage)

Depending on usage there is still a free option available (Zoho Mail), which we’ll configure in the remainder of the chapter. If you choose another provider, the setup details may differ a bit, but the general concepts should remain the same.

Signup for Zoho email


To sign up for Zoho email go to their website at https://www.zoho.com/mail/ and click on the red Get Started button.

If you only need emails for one domain name, you can choose the free
option which gives you 5GB mailbox storage per user. Otherwise take any of
the other options.

Zoho Email pricing

Next, specify your existing company or business domain name for which
you want to setup a youremail@yourdomain email address.

Zoho Email Domain name

Now provide your name, lastname, password and the email address you
would like to create. Also provide a contact email address. (eg. a gmail or
hotmail address you’re already using).

Zoho Mail will now configure your account and within a few seconds come
back to congratulate you:

Zoho Email Congratulations

Now click on the link ‘Setup <your domain> in Zoho’. You’ll need to
complete the following steps:

Zoho Email Steps

Verify Domain
You need to start by verifying that you own the domain you want to use in
your email address. Select your domain DNS Hosting provider from the list.
If you’re using DNSMadeEasy then choose Other.

You can validate your domain via three methods:

1. CNAME Method
2. TXT Method
3. HTML Method

The CNAME and TXT method both need configuration changes inside your
DNS Service Provider Control Panel for your domain. You’ll either need to
add a CNAME record or a TXT record in eg. your DNSMadeEasy control
panel. Please check our ‘Ordering a domain name’ chapter for more
information on how to setup DNSMadeEasy.

Once you have added either record you can click on the green ‘Verify by
TXT’ or ‘Verify by CNAME’ button.

The HTML method lets you upload an HTML file to your website at http://yourdomain.com/zohoverify/verifyforzoho.html.

Add Users
Now you can provide a desired username to create your domain based email
account. You’re also given the possibility to add other email accounts for
your domain.

Zoho Add Users

Add Groups
Groups are shared email accounts that provide a common email address for a team of users in your organization. For example, you can create hr@yourdomain.com as a Group account, a common account with all HR staff as members of the group.

Zoho Groups

Configure Email Delivery


You need to configure the MX records of your domain at the DNS hosting provider (DNS Manager) to start receiving email for the users and groups created (eg. in the DNSMadeEasy Control Panel). You should change the email service provider of the domain only after this critical step.

MX Records (Mail eXchange) are the special entries in DNS that designate
the email-receiving server of your domain. Ensure that you have created the
required user accounts and group accounts, before changing the MX records.

The MX Records of Zoho to be added in the DNSMadeEasy Control Panel are:

Host Name   Address        Priority
@           mx.zoho.com    10
@           mx2.zoho.com   20
@ mx2.zoho.com 20

You must remove (delete) any other MX records other than the above 2
records.
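
After the DNS change has propagated you can verify the MX records with dig; only the two Zoho servers should be returned:
$ dig +short yourdomain.com MX
10 mx.zoho.com.
20 mx2.zoho.com.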

Zoho MX Records

Your Zoho Mail account can be read in any email client that supports the
POP or IMAP protocol. The links contain the configuration instructions.

Zoho POP3 SMTP Access

You’re then redirected to your Zoho Mail Inbox! Try to send and receive a few emails to verify everything is working correctly!

Zoho Inbox


Installing Wordpress
Wordpress is one of the world’s most used blogging platforms. It allows you to easily publish a weblog on the internet.

Wordpress includes the possibility to use themes to customize the look and feel of your
website. Most people will choose one of the free themes available or buy a professionally
designed theme template.

The Wordpress plugin system makes the Wordpress core functionality very extensible.
There are free and paid plugins for almost every feature or functionality you’ll need.

These include but are not limited to:

SEO plugins: optimize your articles for better search engine optimization visibility
Performance plugins: optimize the performance of your Wordpress website
Tracking plugins: integrate Google Analytics tracking
Landingpage, sales pages, membership portal plugins (eg. OptimizePress)

In this chapter we’ll focus on how to install Wordpress, and install the Yoast SEO plugin.

Downloading Wordpress
You can download the latest version of Wordpress with the following commands:
$ cd /home/<yourwebsitehome>/web
$ sudo wget https://wordpress.org/latest.tar.gz
$ sudo tar xvf latest.tar.gz
$ sudo mv wordpress/* .
$ sudo rm latest.tar.gz

In this case you have installed Wordpress in the root directory of your site. (eg. it’ll be
available at http://www.<yourwebsitedomain>.com)

Enabling PHP In Your Nginx Server Configuration


If you have not already done so, you’ll need to enable the php configuration in Nginx for
this domain.

If you have previously followed our Nginx install guide, the nginx config file to edit
should be at /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf
$ sudo nano /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf

Inside the server { … } block add the following include:
# Add .php handler
include /usr/local/nginx/conf/php.conf;

index index.php;

location / {
# $uri/ needed because otherwise the Wordpress Administration panel
# doesn't work well
try_files $uri $uri/ /index.php;
}

‘index index.php’ specifies that when a directory is requested it should serve index.php by default. (this will make sure that http://www.<yourwebsitedomain>.com will execute index.php (thus your Wordpress blog))

We added a location /, because in our example Wordpress is available via http://www.<yourwebsitedomain>.com

The ‘try_files’ directive will try three possibilities: the URI itself (as a file), the URI with a slash appended (as a directory) and, as a final fallback, /index.php. It only moves on to the next possibility when the previous one does not exist on disk.

For example, if a request for http://www.yourwebsitedomain.com/my-newest-blog-post is handled by Nginx, it’ll first look for a file my-newest-blog-post, if not found then for a directory my-newest-blog-post/, and if neither exists the request is handed to /index.php (the Wordpress front controller, which resolves the permalink itself and returns a HTTP 404 page when nothing matches).

We need these 3 different possibilities to make Nginx’s forwarding to PHP-FPM work with the kind of URLs Wordpress creates.

Save the configuration file and then restart nginx:


$ sudo /etc/init.d/nginx restart

Creating a Database For Your Wordpress Installation


To create a database and database user for your Wordpress installation, we’ll use
PHPMyAdmin. If you have not yet installed PHPMyAdmin, please see our Installing
PHPMyAdmin chapter.

Open PHPMyAdmin in your browser and execute the following steps:

Click on the tab Databases
Create a new Database, you can eg. name it ‘wordpress_site’

Now create a new database user via PHPMyAdmin:

Click on Users and then on Add a user.

In this screen you need to choose a username and a password. To generate a good
database password you can use the Generate button.

In the Host field, you should select ‘Local’ from the dropdown, so that the database cannot be accessed from remote locations, but only from your server (which is where your Wordpress installation is also running).

Now you need to assign the necessary rights for this user so the user can insert, update
and delete records on your newly created database (eg. ‘wordpress_site’)

Edit the rights of the user and then click on the rounded subtab Database. Choose the
database you have created previously. (eg. ‘wordpress_site’)

In the next screen, choose “Select all”

Installing Wordpress
The Wordpress installation is web based. Go to http://www.<yourwebsitedomain>.com/
to install Wordpress.

Installing Wordpress

Clicking on the button ‘Let’s go’ takes you to the database details configuration screen:

Configuring the Wordpress database

Here you need to fill in the following fields:

Database name: the name of the database you have created previously (eg.
‘wordpress_site’)
Username / password: the database user and password you have created previously
with PHPMyAdmin
Database host: leave it on localhost
Table prefix: leave it on wp_

Wordpress will now test the database connection. If successful, Wordpress will try to write a wp-config.php file with its configuration.

Due to file permissions it is possible that Wordpress is not yet able to write this file
automatically. In this case you’ll see the following screen where you can copy-paste the
contents in a file you create manually on your server:

Configuring the Wordpress database

Here is how you can do this via the SSH command line:

Make sure you’re in the root directory of your Wordpress installation.


$ cd /home/<yourwebsitehome>/web
$ sudo nano wp-config.php

Copy paste the contents of the file into the empty wp-config.php file

Save the file via Ctrl-O

Now click on the ‘Run the install’ button on the Wordpress installation screen.

Wordpress will now ask you for the following details:

Site title: you can put the name of your blog here
Username: choose a username to login to your Wordpress administration pages (eg. where you create new posts and so on). We recommend you to not choose ‘admin’, as this makes it easier for hackers to break into your site.
Password: Use a strong password
Your email: Use an email address you own
Enable the flag ‘Allow search engines to index this site’

Wordpress is now installed on your host. You should be able to access the administration
screen via http://www.<yourwebsitedomain>.com/wp-admin

Enabling auto-updates for Wordpress (core, plugins and themes)


To enable updating Wordpress via the User Interface administration panel you’ll need to
make a few tweaks to the Wordpress installation.

The first option you need to change is in the wp-config.php file where we’ll set the Filesystem method:
$ sudo nano /home/<yourwebsitedomain>/web/wp-config.php

Add the following setting:


/** Enable autoupdate without FTP */
define('FS_METHOD', 'direct');

This effectively disables the Wordpress screen where it asks for FTP/SFTP details to
upload updated files to your server.

Next you’ll need to make sure that the files are readable and writable by the user and
group (should be nginx) the files are owned by:
$ sudo find /home/<yourwebsitedomain>/web -type f -exec chmod 664 {} \;

The -type f option specifies you’re searching for files, and for every file found the command chmod 664 is executed.

You’ll also make the plugins and themes directory readable, writable and executable for
the user and group:
$ sudo find /home/<yourwebsitedomain>/web/wp-content/plugins -type d -exec chmod\
775 {} \;
$ sudo find /home/<yourwebsitedomain>/web/wp-content/themes -type d -exec chmod \
775 {} \;

The -type d option specifies you’re searching for directories, and for every directory found the command chmod 775 is executed.

You also need to make sure that all Wordpress files are owned by the OS user/nginx group you created for your <yourwebsitedomain> website:
$ cd /home/<yourwebsitedomain>/web
$ sudo chown -fR <yourwebsitedomain>:nginx *

Installing Wordpress plugins

We’ll install some commonly used Wordpress plugins every site should use in the next section.

Installing Yoast SEO


The Yoast SEO plugin is probably one of the best SEO plugins available.

To install it you need to execute the following steps:

Go to your Wordpress administration panel (eg. www.<yourwebsitedomain>.com/wp-admin)
Login
Click on Plugins and then Add new
In the Search Plugins field, search for ‘Yoast SEO’

Installing Yoast SEO

Click on the Install button. The plugin will now be installed.


Configuring Yoast SEO

The first thing we’ll fix are the URLs Wordpress will generate for your pages and blogposts. Each blogpost is uniquely accessible via a URL. In Wordpress this is called a ‘Permalink’.

By default WordPress uses web URLs which have question marks and lots of numbers in
them; however, WordPress offers you the ability to create a custom URL structure for
your permalinks and archives.

Go to Settings -> Permalinks to change the URL structure.

For SEO reasons it is best to use the option ‘Post name’:

Configuring Permalinks Structure

Then click on the Save button.

Now go to Settings -> General

In the General Settings you’ll find the Site title of your blog. You may want to update the
Tagline because by default it’s ‘Just another Wordpress site’

Choosing a Tagline

After installing Yoast SEO plugin, you’ll see a new sidebar tab ‘SEO’ in your
administration panel. Click on the ‘SEO’ -> General item. Then click on the tab Your
info.

Choose whether you’re a person or company.

Choose whether you’re a company or a person

Now click on the Webmaster Tools tab:

Connecting Webmaster Tools

We recommend you to verify your site with at least the Google Search Console and the
Bing Webmaster tools. Click on any of the links and follow the instructions that are
provided.

Now go to SEO -> Social.

On this page you can add all the social profiles for your website. (eg. Twitter, Facebook,
Instagram, Pinterest, Google+ and so on).

If you have not yet created social profiles for your website, we recommend to do so, as all these profiles can direct traffic to your own site.

Connecting Your Site With Social Profiles

Configuring Yoast SEO Sitemap XML File

Sitemap files are XML files that list all the URLS available on a given site. They include
information like last update date, how often it changes and so on.

The URL location of the sitemaps can be submitted to the search engines’ Webmaster Tools, which will use it to crawl your site more thoroughly, increasing the number of visitors.

Google Search Console
Bing Webmaster Tools

The Yoast SEO plugin will automatically generate a sitemap (index) xml file. You can
enable it via the SEO -> XML Sitemaps menu.

Enabling Sitemap files for your Wordpress blog

If you click on the XML Sitemap button you’ll see that the XML Sitemap file is available at the URL http://www.<yourwebsite>.com/sitemap_index.xml

Adding URL Rewrites for Yoast SEO Sitemap XML Files

We need some extra URL rewrite rules in the nginx configuration to make the XML
Sitemap button not return a HTTP 404 though:
$ sudo nano /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf

Add a new location as follows:


location ~ ([^/]*)sitemap(.*)\.x(m|s)l$ {
rewrite ^/sitemap\.xml$ /sitemap_index.xml permanent;
rewrite ^/([a-z]+)?-?sitemap\.xsl$ /index.php?xsl=$1 last;
rewrite ^/sitemap_index\.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?\.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
rewrite ^/news_sitemap\.xml$ /index.php?sitemap=wpseo_news last;
rewrite ^/locations\.kml$ /index.php?sitemap=wpseo_local_kml last;
rewrite ^/geo_sitemap\.xml$ /index.php?sitemap=wpseo_local last;
rewrite ^/video-sitemap\.xsl$ /index.php?xsl=video last;

access_log off; # no logging of rewrites
}

Now the URL http://www.<yourwebsite>.com/sitemap_index.xml will correctly return the sitemap (index) file.
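
You can confirm the rewrite works after restarting nginx; the sitemap URL should now return a HTTP 200:
$ curl -sI http://www.<yourwebsite>.com/sitemap_index.xml | head -1
HTTP/1.1 200 OK
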
Enabling Breadcrumbs

We recommend you to enable Breadcrumbs for your blog via the SEO -> Advanced
section

Enabling Breadcrumbs for your Wordpress blog

Breadcrumbs have an important SEO value because they help the search engine (and your
visitors!) to understand the structure of your site.

For the breadcrumbs to show up on your Wordpress blog, it is possible you may need to
edit your Wordpress theme. You can learn how to do this by reading this Yoast SEO KB
article

Optimizing Wordpress performance
In this chapter we will optimize the performance of your Wordpress blog. We recommend executing some page speed tests before doing any optimizations. This way you can compare the before and after performance when changing one or more settings.

The following websites analyse your site’s speed and give advice on how to
make it faster:

GTMetrix
PageSpeed Insights

The results are sorted in order of impact upon score; thus optimizing rules at
the top of the list can greatly improve your overall score.

Below we will install some plugins which will help to achieve a faster site
and improve your score:

EWWW Image Optimizer


When uploading images to your WordPress media library, it is possible your
images do not have the smallest size possible. Images such as JPGs, PNGs
or WebP formats can sometimes be further compressed without a loss of
picture quality (eg. losslessly) by removing invisible meta data (like EXIF
information).

That’s a good thing, because smaller image sizes means faster page loads.

We recommend installing the EWWW Image Optimizer Wordpress plugin to automatically optimize any image you upload to Wordpress. If you already have an existing Media Library full of images, the plugin can also process your existing images and replace them with the optimized versions.

This plugin needs the PHP exec function to be enabled. For security reasons
we disabled this in our PHP chapter.

Here is how to enable the PHP exec function:


$ sudo nano /usr/local/lib/php.ini

Remove the exec option from the following line:

disable_functions=exec,passthru,shell_exec,system,proc_open,popen
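
For the change to take effect, restart PHP-FPM (we assume the init script is called php-fpm as set up in our PHP chapter; adjust the name if yours differs):
$ sudo /etc/init.d/php-fpm restart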

After enabling the EWWW Image Optimizer plugin, go to the Settings page to optimally configure it:

Enable the option ‘Remove metadata’
Advanced settings: optipng optimization level: Level 4 and pngout optimization level: Level 0

Don’t forget to Bulk Optimize the images already in your media library!

Remove CSS Link IDs plugin


A Wordpress theme generally uses one or more CSS Stylesheet files (.css files). By default they appear in the HTML with an id. This is not really necessary and prevents the Google PageSpeed plugin for nginx from performing optimally when it tries to combine multiple CSS files into one. (we will discuss the Google PageSpeed plugin in the next chapter)

There is a small Wordpress plugin that takes care of fixing this:

Remove CSS Link IDs

Remove CSS Link IDs

Disable Emojis support in Wordpress 4.2 and later


Emoji icons are enabled by default since Wordpress 4.2 - even if you don’t use them at all.

If you don’t use them in your Wordpress blog, you can safely disable them
via the following Wordpress plugin.

Disable Emojis

Make sure to activate the plugin after installing.

Optimizing Wordpress performance with the W3 Total Cache plugin
The W3 Total Cache plugin is a Wordpress plugin aimed at improving the
speed of your site. It has quite a few features including:

Page Caching (via Memcached)
Object Caching (via Memcached)
Database Caching (via Memcached)
Host your Images, Stylesheet and Javascript resources on a CDN
Minifying your Javascript, HTML and CSS.

In the following section we’ll enable the Page, object and Database caching
via Memcached. We will also enable the CDN support.

This configuration will make your Wordpress installation a lot faster.

Installing W3 Total Cache


Go to your Wordpress Admin dashboard and click on Plugins -> Add new.

Search for ‘w3 total cache’

Installing W3 Total Cache Wordpress plugin

Now click on the Install button to continue the installation. Click on the
Activate Plugin link.

Now go to the Plugins overview page and click on the W3 Total Cache Settings
link.

In the General Settings tab we want to enable the Memcached integration for
the following types of caches:

1. Page Cache
2. Database Cache
3. Object Cache
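Before pointing these caches at Memcached, it is worth verifying that the
Memcached server we set up earlier is actually listening on its default
port. A quick check (assuming the netcat tool is installed):

$ echo stats | nc -w 1 localhost 11211 | head -3

If a few STAT lines come back, Memcached is up and running.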



Page Cache Settings

Database Cache Settings

Object cache settings

If you have set up a CDN at KeyCDN as explained in our CDN chapter, you can
enable the CDN settings to let WordPress serve all resource files from the
CDN location.



CDN Settings

Choose the type ‘Generic Mirror’

Now click on the Performance -> CDN link in the sidebar. We can now
configure the details of the CDN integration:

CDN General Settings



Make sure to fill in the setting ‘Replace site’s hostname with’ with the URL
for your CDN Pull zone. (eg. cdn.mysite.com)
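With this setting in place, W3 Total Cache rewrites the URLs of your static
resources in the generated HTML. For example (using the hypothetical
hostnames above), a reference like

http://www.mysite.com/wp-content/uploads/photo.jpg

becomes

http://cdn.mysite.com/wp-content/uploads/photo.jpg

so the browser fetches the file from the CDN pull zone instead of from your
own server.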

CDN Configuration

Lastly check the Performance -> Page cache settings:

Page Cache Settings



Notice that we didn't enable any minification of our JavaScript and
stylesheets via Performance -> Minify. We will use Google's PageSpeed plugin
for nginx to minify our resources (and a lot more) in our Google PageSpeed
plugin chapter.

The Browser Cache settings are also disabled, because we'll use Google
PageSpeed for this as well.



Speed up your site with Google PageSpeed nginx
plugin
The Google PageSpeed module for nginx optimizes your site for better end-user
performance. It directly complements the Google PageSpeed Insights tool, which analyses
your site's speed and gives advice on how to make it faster.

Many of the suggestions from the PageSpeed Insights analysis can be addressed by
installing and configuring the Google PageSpeed plugin for nginx.

Pagespeed Insights

In our nginx chapter we compiled nginx with the Google PageSpeed plugin included, but we
haven't enabled it in our configuration yet. Let's do that now!

Configuring global Pagespeed settings


First let’s create a new nginx configuration file in the nginx configuration directory:
$ sudo nano /usr/local/nginx/conf/pagespeed-global.conf

Copy paste the following contents into the file:


# Location where the PageSpeed cache files are stored.
# PageSpeed must be configured with a path where it can write cache files,
# tuned to limit the amount of disk space consumed. The file cache has a
# built-in Least Recently Used mechanism to remove old files, targeting a
# certain total disk space usage and a certain interval for the cleanup
# process.
pagespeed FileCachePath /var/cache/ngx_pagespeed/;
pagespeed FileCacheSizeKb 1024000; # ~1GB
pagespeed FileCacheCleanIntervalMs 3600000; # 1 hour
pagespeed FileCacheInodeLimit 500000;

# To optimize performance, a small in-memory write-through LRU cache can be
# instantiated in each server process.
pagespeed LRUCacheKbPerProcess 8192;
pagespeed LRUCacheByteLimit 16384;

# As part of its operation, PageSpeed stores summaries of how to apply
# optimizations to web pages in a metadata cache. Metadata entries are small
# and frequently accessed; they should ideally be stored in local memory and
# shared across server processes, as opposed to on disk or on a memcached
# server.
# If this cache is enabled, metadata will no longer be written to the
# filesystem cache, significantly improving metadata cache performance, but
# metadata information will be lost upon server restart. This will require
# resources to be reoptimized after each restart. If a memcached cache is
# available, cache entries will be written through to memcached so that
# multiple PageSpeed servers can share metadata and the metadata cache will
# survive a server restart.
# Identify the current file cache size on disk with:
#   du -s -h --apparent-size /var/cache/ngx_pagespeed/rname/
#   => 21M /var/cache/ngx_pagespeed/rname/
pagespeed CreateSharedMemoryMetadataCache "/var/cache/pagespeed/" 51200; # 50MB cache

# No default cache that is written through to disk, as we created a memory
# cache just above.
pagespeed DefaultSharedMemoryCacheKB 0;

# PageSpeed's memcached integration uses a background thread for
# communicating with the memcached servers. This allows PageSpeed to batch
# multiple Get requests into a single MultiGet request to memcached, which
# improves performance and reduces network round trips.
# http://localhost/ngx_pagespeed_statistics?memcached
pagespeed MemcachedServers "localhost:11211";

# As of version 1.9.32.1, PageSpeed can be configured to also allow purging
# of individual URLs.
pagespeed EnableCachePurge on;

# Often PageSpeed needs to request URLs referenced from other files in order
# to optimize them. To do this it uses a fetcher. By default ngx_pagespeed
# uses the same fetcher mod_pagespeed does (serf), but it also has an
# experimental fetcher that avoids the need for a separate thread by using
# native nginx events. In initial testing this fetcher is about 10% faster.
pagespeed UseNativeFetcher on;
resolver 8.8.8.8;

# Enable the Message History in the admin pages.
pagespeed MessageBufferSize 100000;

# PageSpeed keeps separate statistics for each virtual host.
pagespeed UsePerVhostStatistics on;

# The PageSpeed Console reports various problems in your installation that
# can lead to sub-optimal performance.
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed LogDir /var/log/pagespeed;

# To support variables when disabling filters for HTTP/2 clients.
pagespeed ProcessScriptVariables on;

Let's go over the settings in detail.

First we specify where the PageSpeed plugin should cache its files (FileCachePath
/var/cache/ngx_pagespeed/). We also make sure the cache is big enough to hold the
rewritten resources of our website, such as images, stylesheets and JavaScript files
(FileCacheSizeKb, here roughly 1GB). Secondly, we create a small in-memory cache per
server process (LRUCacheKbPerProcess) and a metadata cache in shared memory
(CreateSharedMemoryMetadataCache).

As we are already using Memcached, we can enable the PageSpeed Memcached support
by specifying on which host and port our Memcached server is running.
(MemcachedServers "localhost:11211")

The Google PageSpeed module features administration pages to tune and check your
installation. We enable them via the following settings:

Statistics on
StatisticsLogging on
LogDir /var/log/pagespeed

The admin pages can also log informative messages, and you can purge individual URLs
from the cache (MessageBufferSize and EnableCachePurge).

PageSpeed internally uses a fetcher to request URLs referenced from JavaScript and
stylesheet files. By specifying "UseNativeFetcher on;" we use a faster experimental
fetcher for nginx.

'ProcessScriptVariables on' allows the PageSpeed plugin to process variables. We need
this setting to be able to exclude some PageSpeed filters which would hurt performance
for clients using HTTP/2.
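PageSpeed needs write access to the file cache and log directories referenced above. If
they don't exist yet, create them and give ownership to the user nginx runs as (here
assumed to be the nginx system user we created in the nginx chapter):

$ sudo mkdir -p /var/cache/ngx_pagespeed /var/log/pagespeed
$ sudo chown -R nginx:nginx /var/cache/ngx_pagespeed /var/log/pagespeed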

The pagespeed-global.conf should be included from your main nginx.conf file:


$ sudo nano /usr/local/nginx/conf/nginx.conf

Before the line


include /usr/local/nginx/conf/conf.d/*.conf;

include the following line:


include /usr/local/nginx/conf/pagespeed-global.conf;
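The relevant part of nginx.conf will then look roughly like this (a sketch; your http
block will contain more settings than shown):

http {
    ...
    include /usr/local/nginx/conf/pagespeed-global.conf;
    include /usr/local/nginx/conf/conf.d/*.conf;
    ...
}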



Enabling Pagespeed filters
We will configure and enable the PageSpeed filters in a separate nginx configuration file:
$ sudo nano /usr/local/nginx/conf/pagespeed.conf

You should include this file in every site configuration where you want to enable the
PageSpeed plugin. Normally all your site configuration files live in the
/usr/local/nginx/conf/conf.d directory.

For example:
$ sudo nano /usr/local/nginx/conf/conf.d/mysite.conf

server {
    ....
    include /usr/local/nginx/conf/pagespeed.conf;
    ....
}

Let's add the following directives to the /usr/local/nginx/conf/pagespeed.conf file:


pagespeed on;

This line turns on the pagespeed module.


pagespeed PreserveUrlRelativity on;

Previous versions of the PageSpeed plugin would rewrite relative URLs into absolute
URLs. This wastes bytes and can cause problems for sites that run over HTTPS. This
setting makes sure this rewriting doesn’t take place.
pagespeed AdminPath /pagespeed-<your-own-url>;

The admin path specifies the URL where you can browse the admin pages, which provide
visibility into the operation of the PageSpeed optimization plugin. Make sure you choose
your own URL that cannot easily be guessed, and/or protect the URL with
authentication.
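One way to add that protection is an allow/deny rule at the nginx level. A minimal
sketch, assuming you administer the site from a single IP address (replace the location
with your own AdminPath and 203.0.113.10 with your IP):

location /pagespeed-<your-own-url> {
    allow 203.0.113.10;
    deny all;
}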
pagespeed RewriteLevel PassThrough;

By default the PageSpeed plugin enables a set of core filters. By setting the
RewriteLevel to PassThrough, no filters are enabled by default; we will manually enable
the filters we need below.
include pagespeed_libraries.conf;
pagespeed EnableFilters canonicalize_javascript_libraries;



The canonicalize_javascript_libraries filter identifies popular JavaScript libraries that can
be replaced with ones hosted for free by a JavaScript library hosting service, by
default the Google Hosted Libraries. This has several benefits:

Most important, first-time site visitors can benefit from browser caching, since they
may have visited other sites making use of the same service to obtain the libraries.

The JavaScript hosting service acts as a content delivery network (CDN) for the
hosted files, reducing load on the server and improving browser load times.

There are no charges for the resulting use of bandwidth by site visitors.

The hosted versions of library code are generally optimized with third-party
minification tools. These optimizations can make use of library-specific annotations
or minification settings that aren't portable to arbitrary JavaScript code, so the
libraries benefit from more aggressive optimization than can be provided by
PageSpeed.

Here is how to generate the pagespeed_libraries.conf file, which should be located in
/usr/local/nginx/conf. For nginx you need to convert pagespeed_libraries.conf from the
Apache format to the nginx format:

$ cd
$ cd ngx_pagespeed-release-1.10.33.2-beta/
$ scripts/pagespeed_libraries_generator.sh -use_experimental_minifier > ~/pagespeed_libraries.conf
$ sudo mv ~/pagespeed_libraries.conf /usr/local/nginx/conf/

pagespeed EnableFilters extend_cache;

The extend_cache filter improves the cacheability of a web page's resources without
compromising the ability of site owners to change the resources and have those changes
propagate to users' browsers. By default this filter will cache the resources for 1 year.
It will also improve the cacheability of images referenced from within CSS files if the
rewrite_css filter is enabled.
pagespeed EnableFilters combine_css;

The Combine CSS filter seeks to reduce the number of HTTP requests made by a
browser during page refresh by replacing multiple distinct CSS files with a single
combined CSS file.
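As an illustration of what this looks like in the rewritten HTML (hypothetical file
names; the hash in the combined URL is generated by PageSpeed), two stylesheet links
such as

<link rel="stylesheet" href="styles/a.css">
<link rel="stylesheet" href="styles/b.css">

are replaced by a single combined resource:

<link rel="stylesheet" href="styles/a.css+b.css.pagespeed.cc.xo4He3_gYf.css">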
pagespeed EnableFilters rewrite_css;

This filter parses linked and inline CSS (in the HTML file), rewrites the images found
and minifies the CSS (stylesheet).



pagespeed EnableFilters rewrite_javascript;
pagespeed UseExperimentalJsMinifier on;

Enabling rewrite_javascript lets us minify the JavaScript files. Enabling the
experimental JS minifier further reduces the size of the JavaScript (before it is
compressed with gzip later in this chapter).
pagespeed EnableFilters insert_image_dimensions;

This flag inserts width= and height= attributes into <img> HTML tags that lack them and
sets them to the image’s width and height. The effect on performance is minimal,
especially on modern browsers.
pagespeed EnableFilters inline_javascript;

The “Inline JavaScript” filter reduces the number of requests made by a web page by
inserting the contents of small external JavaScript resources directly into the HTML
document.
pagespeed EnableFilters defer_javascript;

Deferring the execution of JavaScript code can often dramatically improve the rendering
speed of a site. Use this filter with caution, as it may not work on all sites.
pagespeed EnableFilters prioritize_critical_css;

This filter improves the page render times by identifying CSS rules from your CSS
stylesheet that are needed to render the visible part of the page, inlining those critical
rules and deferring the load of the full CSS resources.
pagespeed EnableFilters collapse_whitespace;

This filter removes unneeded whitespace from your HTML, further reducing the size of
the HTML document.
pagespeed EnableFilters combine_javascript;

The ‘Combine JavaScript’ rule seeks to reduce the number of HTTP requests made by a
browser during a page refresh by replacing multiple distinct JavaScript files with a single
one.
pagespeed EnableFilters rewrite_images;

The rewrite_images filter enables the following image optimizations when the optimized
version is actually smaller than the original:

Inline images: this optimization replaces references to small images with an inline
data: URL, eliminating the need to initiate another connection to fetch the image
data.

Recompress images: this filter attempts to recompress image data and strip
unnecessary metadata such as thumbnails. This is a group filter, equivalent to
enabling: convert_gif_to_png, convert_jpeg_to_progressive, convert_jpeg_to_webp,
jpeg_subsampling, recompress_jpeg, recompress_png, recompress_webp,
strip_image_color_profile and strip_image_meta_data.

Convert PNG images to JPEG: enabling convert_png_to_jpeg allows a GIF or PNG
image to be converted to JPEG if it does not have transparent pixels and if the
PageSpeed plugin considers it not sensitive to JPEG compression noise. The
conversion is lossy, but the resulting JPEG is generally substantially smaller than the
corresponding GIF or PNG.

Resize images: this attempts to resize any image that is larger than the size called
for by the width= and height= attributes on its <img> tag.
pagespeed EnableFilters flatten_css_imports;

The purpose of this filter is to reduce the number of HTTP round trips by combining
multiple CSS resources into one. It parses linked and inlined CSS and flattens it by
replacing all @import rules with the contents of the imported file, repeating the process
recursively for each imported file.
pagespeed EnableFilters inline_css;

Inlining CSS inserts the contents of small external CSS resources directly into the
HTML document. This can reduce the time it takes to display content to the user,
especially in older browsers.
pagespeed EnableFilters fallback_rewrite_css_urls;

The CSS parser cannot parse some CSS3 or proprietary CSS extensions. If
fallback_rewrite_css_urls is not enabled, these CSS files will not be rewritten at all. If
the fallback_rewrite_css_urls filter is enabled, a fallback method will attempt to rewrite
the URLs in the CSS file, even if the CSS cannot be successfully parsed and minified.
pagespeed EnableFilters inline_import_to_link;

The "Inline @import to Link" filter converts a <style> tag consisting of only @import
statements into the corresponding <link> tags. This conversion does not itself result in
any significant optimization; rather, its value lies in enabling optimization of the
linked-to CSS files by later filters, in particular the combine_css, rewrite_css, inline_css
and extend_cache filters.
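For example (hypothetical markup), the filter converts

<style>@import url(assets/site.css);</style>

into the equivalent link tag, which combine_css and the other CSS filters can then
process:

<link rel="stylesheet" href="assets/site.css">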
pagespeed EnableFilters convert_meta_tags;



When a server returns a response to the browser, the HTML can contain meta tags like
the following:
<meta http-equiv="Content-Language" content="fr">

This filter converts these meta tags to HTTP Response headers:


Content-Language: fr

Certain http-equiv meta tags, specifically those that specify content-type, require a
browser to reparse the html document if they do not match the headers. By ensuring that
the headers match the meta tags, these reparsing delays are avoided.
pagespeed EnableFilters rewrite_style_attributes_with_url;

The "Rewrite Style Attributes" filter rewrites the CSS inside elements' style attributes to
enable CSS minification, image rewriting, image recompression and cache extension, if
enabled. It is enabled only for style attributes that contain the text 'url(', as these image
references are generally the source of the greatest improvement.
pagespeed EnableFilters insert_dns_prefetch;

DNS resolution time varies from <1ms for locally cached results, to hundreds of
milliseconds due to the cascading nature of DNS. This can contribute significantly
towards total page load time. This filter reduces DNS lookup time by providing hints to
the browser at the beginning of the HTML, which allows the browser to pre-resolve DNS
for resources on the page.
# Disable filters when an HTTP/2 connection is made
set $disable_filters "";
if ($http2) {
    set $disable_filters "combine_javascript,combine_css,sprite_images";
}
pagespeed DisableFilters "$disable_filters";

Clients that connect over HTTP/2 make some of the Google PageSpeed filters
unnecessary. These include all filters which combine resources like JavaScript files
and CSS files into one file.

With HTTP/1 connections there was a lot of overhead for all these extra requests.
HTTP/2 allows multiple concurrent exchanges for all these resources on the same
connection, so producing one big file would actually hurt performance because the
different resources can no longer be downloaded in parallel.

The code above selectively disables the combine filters for JavaScript and CSS on
HTTP/2 connections. It leaves them enabled for visitors whose browsers don't yet
support HTTP/2.
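After reloading nginx, you can quickly verify that the module is active: PageSpeed adds
an X-Page-Speed response header (containing the module version) to pages it has
processed. A quick sketch, assuming the nginx binary location from our nginx chapter:

$ sudo /usr/local/nginx/sbin/nginx -t
$ sudo /usr/local/nginx/sbin/nginx -s reload
$ curl -s -D - -o /dev/null http://www.yourwebsite.com/ | grep -i x-page-speed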



Optimizing the performance of the Pagespeed plugin
The PageSpeed plugin does a lot of work, as you saw in the previous section, and all
these optimizations cost CPU and disk I/O time.

To make sure the latency of a request does not increase too much after enabling all the
PageSpeed filters, you need to make sure that the PageSpeed caches are fully used.
Requesting different pages triggers optimizations for all those pages; after the cache
has warmed up (i.e. all the optimized versions have been generated), most of the traffic
to your website should be served from the cache.

To see whether that is the case, open the PageSpeed Console graphs at
http://yourwebsite.com/<pagespeed-admin-location>/console

In the following sections we'll go over the different graphs displayed.

Cache lookups that were expired

Cache lookups that were expired

Cache lookups that were expired means that although these resources were found in
the cache, they were not rewritten because they were older than their max-age. max-age
is an HTTP Cache-Control header that is sent by your HTTP server when PageSpeed
fetches these resources.

If you notice a lot of expired cache lookups, you can tell PageSpeed to load the files
straight from disk rather than over HTTP.

Inside /usr/local/nginx/conf/pagespeed.conf add:

pagespeed LoadFromFile "http://www.yourwebsite.com/" "/home/<yourwebsite>/www/";

Resources not rewritten because domain wasn't authorized

When you serve resources like CSS and JavaScript files from a subdomain, the
PageSpeed plugin may not optimize or rewrite them because they are served from a
different domain (eg. subdomain.mywebsite.com instead of www.mywebsite.com).

To authorize the plugin for other domains, add the following configuration line
for every (sub)domain you're using on your site:
pagespeed Domain http://subdomain.mywebsite.com;

CDN Integration & HTTPS support


When you're using a CDN for serving resources, they will not be rewritten by the
PageSpeed plugin by default, as the CDN domain hasn't been authorized. As explained
above, you'll need to add a "pagespeed Domain" line to your configuration.

For CDNs you should also add a LoadFromFile configuration line to specify where the
Pagespeed plugin can find the resources on your web server.

Eg.
pagespeed Domain https://cdn.mywebsite.com;
pagespeed LoadFromFile "https://www.mywebsite.com" "/home/mywebsite/web/root";
pagespeed LoadFromFile "https://cdn.mywebsite.com" "/home/mywebsite/web/root";

Note that we have also enabled HTTPS support in the PageSpeed plugin by using
LoadFromFile with the https protocol and specifying where the resources are located on
your web server.

Extending the Memcached cache size


As we're using the PageSpeed Memcached integration, we probably need to grow the
Memcached cache size beyond its default of 64MB so it can hold most of the PageSpeed
cache. Try to find a value at which the PageSpeed cache misses are minimized (check
the Statistics page in the PageSpeed administration pages).
$ sudo nano /etc/init.d/memcached

Locate the MEMSIZE=64 option and increase it to 128M or 256M depending on the
amount of RAM available.
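For example, to give Memcached 256MB:

MEMSIZE=256

Then restart Memcached so the new size takes effect (assuming the init script we
installed earlier):

$ sudo /etc/init.d/memcached restart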



Appendix: Resources
TCP/IP
http://blog.tsunanet.net/2011/03/out-of-socket-memory.html

https://ticketing.nforce.com/index.php?/Knowledgebase/Article/View/40/11/sysctl-settings-which-can-have-a-negative-affect-on-the-network-speed

http://blog.cloudflare.com/optimizing-the-linux-stack-for-mobile-web-per

Ubuntu / Linux
http://www.debuntu.org/how-to-managing-services-with-update-rc-d/

KVM
http://en.wikipedia.org/wiki/Hypervisor

http://s152758605.onlinehome.us/wp-content/uploads/2012/02/slide33.png

https://software.intel.com/sites/default/files/OVM_KVM_wp_Final7.pdf

SSD
https://sites.google.com/site/easylinuxtipsproject/ssd

https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that

OpenSSL
http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu

Nginx
http://news.netcraft.com/archives/2014/12/18/december-2014-web-server-survey.html

https://wordpress.org/plugins/nginx-helper/

https://rtcamp.com/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/

http://unix.stackexchange.com/questions/86839/nginx-with-ngx-pagespeed-ubuntu

https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration

http://nginx.com/blog/tuning-nginx/

http://linuxers.org/howto/howto-use-logrotate-manage-log-files

PHP
https://support.cloud.engineyard.com/entries/26902267-PHP-Performance-I-Everything-You-Need-to-Know-About-OpCode-Caches

https://www.erianna.com/enable-zend-opcache-in-php-5-5

https://rtcamp.com/tutorials/php/fpm-status-page/

http://nitschinger.at/Benchmarking-Cache-Transcoders-in-PHP

MariaDB
https://wiki.debian.org/Hugepages

http://time.to.pullthepl.ug/blog/2008/11/18/MySQL-Large-Pages-errors/

http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/

http://www.cyberciti.biz/tips/linux-hugetlbfs-and-mysql-performance.html

http://matthiashoys.wordpress.com/tag/nr_hugepages/

http://blog.yannickjaquier.com/linux/linux-hugepages-and-virtual-memory-vm-tuning.html

https://mariadb.com/blog/how-tune-mariadb-write-performance/

https://snipt.net/fevangelou/optimised-mycnf-configuration/

http://www.percona.com/files/presentations/MySQL_Query_Cache.pdf

Jetty
http://www.eclipse.org/jetty/documentation/current/quickstart-running-jetty.html

http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/

http://greenash.net.au/thoughts/2011/02/solr-jetty-and-daemons-debugging-jettysh/

http://java-performance.info/java-string-deduplication/

http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html

http://assets.en.oreilly.com/1/event/21/Connector_J%20Performance%20Gems%20Presentation.pdf

https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration

https://mariadb.com/kb/en/mariadb/about-the-mariadb-java-client/

CDN
https://www.maxcdn.com/blog/manage-seo-with-cdn/

HTTPS
https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/1/19/csr-generation-using-openssl-apache-wmod_ssl-nginx-os-x

https://www.wormly.com/help/ssl-tests/intermediate-cert-chain

https://blog.hasgeek.com/2013/https-everywhere-at-hasgeek

https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx

http://www.nginxtips.com/hardening-nginx-ssl-tsl-configuration/

https://bjornjohansen.no/optimizing-https-nginx

http://security.stackexchange.com/questions/54639/nginx-recommended-ssl-ciphers-for-security-compatibility-with-pfs

https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#choosing-the-right-cipher-suites-perfect-forward-security-pfs

http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/

https://blog.kempkens.io/posts/ocsp-stapling-with-nginx

https://gist.github.com/plentz/6737338