Wim Bervoets
* * * * *
This is a Leanpub book. Leanpub empowers authors and publishers with the
Lean Publishing process. Lean Publishing is the act of publishing an in-
progress ebook using lightweight tools and many iterations to get reader
feedback, pivot until you have the right book and build traction once you
do.
* * * * *
Performance Baseline
Serverbear
Testing Ping Speed
Tuning KVM Virtualization Settings
VPS Control Panel Settings
Configuring the CPU Model Exposed to KVM Instances
Tuning Kernel Parameters
Improving Network speeds (TCP/IP settings)
Disabling TCP/IP Slow start after idle
Other Network and TCP/IP Settings
File Handle Settings
Setup the number of file handles and open files
Improving SSD Speeds
Kernel settings
Scheduler Settings
Reducing writes on your SSD drive
Enable SSD TRIM
Other kernel settings
Installing OpenSSL
Installing OpenSSL 1.0.2d
Upgrading OpenSSL to a Future Release
Check Intel AES Instructions Are Used By OpenSSL
Securing your Server
Installing CSF (ConfigServer Security and Firewall)
Configuring the ports to open
Configuring and Enabling CSF
Login Failure Daemon (Lfd)
Ordering a Domain Name For Your Website
Choosing a Domain Name and a Top Level Domain
Ordering a Domain With EuroDNS
Configuring a Name Server
Ordering the DNSMadeEasy DNS Service
Installing MariaDB 10, a MySQL Database Alternative
Installing PHP
Installing PHP-FPM
Configuring PHP-FPM
Configuring a PHP-FPM Pool
How to start PHP-FPM
nginx FPM Configuration
Viewing the PHP FPM Status Page
Viewing statistics from the Zend OpCode cache
Starting PHP-FPM Automatically At Bootup
Log File Management For PHP-FPM
Installing memcached
Comparison of caches
Downloading memcached And Extensions
Installing Libevent
memcached Server
Installing libmemcached
Installing igbinary
Using a CDN
Choosing a CDN
Analysing performance before and after enabling a CDN
Configuring a CDN service
Create a New Zone
Using a cdn subdomain like cdn.mywebsite.com
Tuning the performance of your CDN
HTTPS everywhere
Do you need a secure website?
Buying a certificate for your site
Standard certificate
Wildcard certificate
Public and private key length
Extended Validation certificates and the green bar in the browsers
Buying the certificate
Generate a Certificate Signing request
Ordering a certificate
Configuring nginx for SSL
Getting an A+ grade on SSLLabs.com
Enabling SSL on a CDN
Enabling SPDY or HTTP/2 on a CDN
Installing WordPress
Downloading WordPress
Enabling PHP In Your Nginx Server Configuration
Creating a Database For Your WordPress Installation
Installing WordPress
Enabling auto-updates for WordPress (core, plugins and themes)
Appendix: Resources
TCP IP
Ubuntu / Linux
KVM
SSD
OpenSSL
Nginx
PHP
MariaDB
Jetty
CDN
HTTPS
License Notes
This book is licensed for your personal enjoyment only. This book may not
be re-sold or given away to other people. If you would like to share this
book with another person, please purchase an additional copy for each
person. Thank you for respecting the hard work of this author.
Some of the links in the book are affiliate links. This means if you click on
the link and purchase the item, I will receive an affiliate commission.
Regardless, I only recommend products or services I use personally and
believe will add value to my readers.
This book was born out of the need to provide a comprehensive and
up-to-date overview of how to configure your website hosting in the most
optimal way, from start to finish. The book is packed with a great deal of
practical, real-world knowledge.
If you don’t have a technical background, this book can also serve as a
guide for the technical people in your organisation.
It also helps your ranking in search engines, which may bring you more free
traffic. If you have an e-commerce site, several studies have shown that a
faster website can increase sales conversions, and thus revenue!
Understanding the building blocks of your website will make sure that it
can scale in popularity while keeping it stable and fast!
After having chosen a web hosting package and server, we will install a
Linux-based OS on your server. We’ll guide you with choosing and
installing the right Linux distribution.
After the base OS installation has been completed, we’ll install and
configure commonly used web software including
We’ll also explain technologies like SPDY, HTTP/2 and CDN and how they
can help you to make your site faster for your visitors from all around the
world.
In short: your users will always love a fast site. Actually, they'll take it
for granted, and if your site becomes too slow they'll quickly get annoyed
by the slowness. And this can have a lot of consequences:
Mobile users are even quicker to abandon your site when it loads slowly on
their tablet or smartphone on a 3G or 4G network. And we all know that
mobile usage is increasing very fast!
These days there is a lot of competition between websites. One of the factors
Google uses to rank websites is their site speed. As Google wants the best
possible experience for their users, they’ll give an edge to a faster site (all
other things being equal). And higher rankings lead to more traffic.
There are a lot of factors influencing the speed of your site. The first
decision (and an important one) you’ll have to make is which web hosting
company you want to use to host your server / site. Let’s get started!
We will give you a checklist with performance indicators which will allow
you to analyze and compare different hosting packages.
Types Of Hosting
To run a website, you’ll need to install the necessary web server software on
a server machine. Web hosts generally offer different types of hosting. We'll
explain the pros and cons of each.
Shared Hosting
Shared hosting is the cheapest kind of hosting. Shared hosting means that the
web host has set up a (physical) server and hosts a lot of different sites
from different customers on that server (often into the hundreds).
The cost of the server is thus spread to all customers with a shared hosting
plan on that server.
Most people start with this kind of hosting because it is cheap. There are
some downsides though:
When your site gets busy, your site could get slower (because there are a
few hundred sites from other people also waiting to be served)
Your web hosting company could force you to move to a more
expensive server because you’re eating up too many resources from the
physical server - negatively impacting the availability or performance
of other sites which are also hosted on that server.
You could be affected by security issues (eg. due to out-of-date server
software) created by the other websites you're sharing the server with.
Your website performance could suffer because other sites misbehave
For the above reasons, we will not further mention this type of hosting in
this book, as it is not a good choice to get the best possible stability and
repeatable performance out of your server.
VPS Hosting
VPS stands for Virtual Private Server. Virtualization is a technique that
makes it possible to create a number of virtual servers on one physical
server. The big difference with shared hosting is that the virtual servers are
completely separated from each other:
Each virtual server runs its own operating system (which can be
completely different from the others)
Each virtual server gets an agreed upon slice of the physical server
hardware (cpu, disk, memory)
The main advantage is that other virtual servers cannot negatively affect
the performance of your machine, and your installation is
more secure.
Most web hosts will limit the number of virtual private servers running on
one physical server to be able to give you the advertised features of the VPS.
(eg. 512MB RAM, 1GHz CPU and so on)
Ramnode
MediaTemple
BlueHost
Dedicated Hosting
The extra layer of virtualization (which has some performance impact) is not
present here, unless you decide to virtualize the server yourself into separate
virtual servers. This means you generally get the best possible performance
from the hardware.
Cloud Hosting
Cloud servers are in general VPS servers, but with the following differences:
The size of your cloud server (amount of RAM, CPU and so on) can be
enlarged or downsized dynamically. Eg. during peak traffic times you
could enlarge the instance to be able to keep up with the load. When
traffic diminishes again, you can downsize the instance again.
Bigger instances will always cost more money than small instances.
Being able to change this dynamically can reduce your bills.
Cloud-based servers can even be moved to other hardware while the
server keeps running. They can also span multiple physical servers.
(This is called horizontal scaling.)
Cloud-based servers allow your site(s) to grow more easily to really
high-traffic websites.
DigitalOcean
This may seem daunting if you have never done it before. At one point I
was in the same situation, but as you'll read in this guide, it'll become clear
that it's all pretty doable if you're interested in learning a few new things.
Some packages include auto updates of your server software (eg. latest
security patches). Prices can differ greatly as can the included services. Be
sure to compare them well.
Our Recommendation
To create a stable and performant website, we recommend choosing
between VPS or cloud-based servers.
If you hope to create the next Twitter or Pinterest, cloud servers will give
you the ability to manage the growth of your traffic more easily. For
example, a sudden, large traffic increase could instantly overload your server.
With a cloud server you'll be able to 'scale your server' and get additional
computing power in minutes, versus creating a new instance from scratch,
which could take a few hours.
The following diagram shows how bandwidth and latency relate to each other
(diagram adapted from Ilya Grigorik's excellent book High Performance
Browser Networking).
Latency is the amount of time it takes for a request from a device (such as a
computer, phone or tablet) to arrive at the server. Latency is expressed in
milliseconds.
You can see in the picture that the bits and bytes are routed over different
networks, some of which are yours (eg. your WIFI network), some belong to
the Internet provider, and some belong to the web hosting provider.
In the example above the Cable infrastructure has the least bandwidth
available. This can be due to physical limitations or by arbitrary limits set by
your ISP.
The latency is affected by the distance between the client and the server. If
the two are very far away, the propagation time will be bigger, as light
travels at a constant speed over fiber network links.
The latency is also affected by the available bandwidth. For example, if you
put a big movie (eg. 100MB) on a wire that can transfer at most 10MB per
second, it'll take 10 seconds to put everything on the wire. There
are other kinds of delays, but for our discussion the above represents what we
need to know.
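The arithmetic in this example can be sketched in a couple of shell lines (the numbers are the ones from the text above):

```shell
# Time to put a file on the wire = file size / bandwidth
FILE_MB=100           # a big movie of 100MB
RATE_MB_PER_S=10      # the wire transfers at most 10MB per second
echo $(( FILE_MB / RATE_MB_PER_S ))   # prints 10 (seconds)
```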
Because latency is affected by the distance between the physical location
of your visitor and the server, it is very important to take a look at your target
audience.
If your target audience is Europe then having a VPS on the west coast of the
USA isn’t a great idea.
Target Audience Of Your Site
To know the target audience of your site there are two options:
If you already have an established website, chances are high that you are
already analyzing your traffic via Google Analytics or similar tools.
This makes it easy to find out your top visitor locations.
If your site is new, you'll need to establish who your target customers
will be and where they are located (eg. for a Dutch-language site this
could be Belgium and the Netherlands).
As you can see in the example above, our Top5 visitor locations are:
1. United States
2. Germany
3. India
4. United Kingdom
5. Brazil
The United States leads with a large margin in our example, that’s why we’ll
drill down in Google Analytics to a State level:
1. California
2. Texas
3. New York
4. Florida
This combines the best response times for both Europe and the US.
Bandwidth Considerations
The amount of bandwidth used will depend on the popularity of your site
and also the type of content (hosting audio or video files for download will
result in higher bandwidth needs). Most hosts have a generous allocation of
available bandwidth (100GB+ per month).
The rate at which you might use your bandwidth will be determined by the
network performance and port speed of the host you’ve selected.
If you're seeing speeds of ~10 MB/s then your host has a 100Mbit port
speed; if you see speeds of 100 MB/s then the host has a 1Gbit port.
Ping latency checks from around the globe can be tested for free at
http://cloudmonitor.ca.com/en/ping.php
To use the tool you’ll need to specify the hostname or IP address of your
server (in the case you already have bought it). If you’re still researching the
perfect host, then you’ll need to find a test ping server from the web host
company.
Google '<name of the web host> ping test url' or '<name of the web host>
looking glass' to see if there is a public test URL. For example, the hosting
provider RamNode has a ping server at IP address 23.226.235.3 for their
Atlanta, US based servers. If there is no test URL available, try contacting
their pre-sales support.
When executing the Ping latency check you can see the minimum round trip
time, the average and the maximum round trip time. With round trip time we
mean the time it takes for the request to reach the server at your web host
and the time it takes to get back the response to the client.
For client locations very near your server the round trip time will be very
low (around 25ms), while clients from eg. India or Australia, which are
probably much further away from your server, may be in the 200ms range.
For a client in Europe and a server on the east coast of the USA, the round
trip time cannot be much lower than 110ms (because of the speed of light).
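That floor can be sanity-checked with a back-of-the-envelope calculation. The distance below is an assumption (roughly Europe to the US east coast); light travels through fibre at about 200km per millisecond (roughly two thirds of its speed in vacuum):

```shell
DISTANCE_KM=6000       # assumed one-way distance, Europe <-> US east coast
SPEED_KM_PER_MS=200    # approximate speed of light in fibre
echo $(( 2 * DISTANCE_KM / SPEED_KM_PER_MS ))   # prints 60 (ms, propagation only)
```

Propagation alone accounts for roughly 60ms; routing hops, queuing and other delays make up the rest of the observed round trip time.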
To test the maximum download speeds of your web host (bandwidth), search
Google for '<name of the web host> bandwidth test url' or '<name of the
web host> looking glass'. Eg. a RamNode server in Atlanta can be tested by
going to http://lg.atl.ramnode.com/
When comparing web hosting plans, it’s important to compare the CPUs
with the following checks:
Later in this chapter we will show you how to run some CPU benchmarks.
I/O Performance
Writing and reading files from disk can quickly become a bottleneck when
you have a lot of simultaneous visitors to your site.
Here are a few examples where your site needs to read files from disk:
Writing to the disk will happen less often, but will also be slower than the
read speeds; here are some examples:
For WordPress blogs with performant disk I/O, you'll notice that your pages
are cached and served much faster.
A solid state drive uses memory chips to store the data persistently, while
hard drives contain a disk and a drive motor to spin the disk.
Solid state drives have much better access times because there are no
moving parts. They also have much higher read and write speeds.
At this moment SSD drives are still more expensive than hard-drive
based disks, so your web host provider could also price these
offerings higher.
The second downside is that SSD drive sizes are smaller than the
biggest hard disk sizes available (gigabytes vs terabytes). This could
be a problem if you want to store terabytes of video.
The third option, SSD Cached, tries to solve the second downside: the
amount of available disk space.
SSD Cached VPSs come with more space. The most frequently used data is
served from SSDs, while the less frequently accessed data is stored on
HDDs. The whole process is automated by the web host using high
performance hardware RAID cards.
The following web host providers have plans with SSD disks:
Ramnode
MediaTemple
RAM
You’ll want 256MB or more for a WordPress based blog, preferably 512MB
if you can afford it.
You should check how much bandwidth is included in your hosting package
per month. Every piece of data transferred to and from your server
(eg. SSH connections, HTML, images, videos and so on) is counted towards
your monthly bandwidth allowance.
The rate at which you might use your bandwidth will be determined by the
network performance and port speed of the host you've selected. If you want
a quick indication of speed you can run:

$ wget http://cachefly.cachefly.net/100mb.test

If you're seeing speeds of ~10 MB/s then your host has a 100Mbit port
speed. If you see speeds of 100 MB/s then the host has a 1Gbit port.
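The rule of thumb behind those numbers: one byte is eight bits, so multiplying the observed MB/s by 8 gives a rough estimate of the port speed in Mbit/s. A minimal sketch:

```shell
OBSERVED_MB_PER_S=10                # download speed seen with wget
echo $(( OBSERVED_MB_PER_S * 8 ))   # prints 80 (Mbit/s, consistent with a 100Mbit port)
```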
Obviously if you can get a host with 1Gbit port speeds then that’s ideal but
you also want an indication of what speeds you’ll get around the globe. This
can easily be tested with the Network performance benchmark in
ServerBear. (see next section)
You could ask for a free trial, so you can run benchmarks on the server itself.
Check if the web host company offers a 30-day money-back guarantee; this
way you can cancel your order if expectations are not met.
We’ll discuss some benchmarks you can use on a Linux system to assess the
performance of the server in a later chapter.
Benchmarks from ServerBear
ServerBear.com is a great site where you can see benchmark results for a
lot of web hosting plans by different companies.
There are a lot of options to filter the search results: eg. on price, type of
hosting (shared, VPS, cloud), SSD or HDD storage, and so on.
You can also run the ServerBear benchmark on your machine (if you already
have hosting) and optionally share it with the world at ServerBear.com
Once the benchmark is complete, ServerBear sends you a report that shows
your current plan in comparison to similar plans. For example, if you're on a
VPS, ServerBear might show you similar VPS plans with better performance
and some lower-tier dedicated servers.
OpenVZ
KVM
OpenVZ
You’ll see in later sections that a recent and tuned Linux kernel is necessary
to get the best possible performance and scalability.
KVM
Virtual machines are Linux processes that can run either Linux or Microsoft
Windows as a guest operating system. As a result, KVM can effectively and
efficiently run both Microsoft Windows and Linux workloads in virtual
machines, alongside native Linux applications as required. QEMU is
provided for I/O device emulation inside the virtual machine.
Full isolation
Very good performance
Upgrading the kernel is possible
Tweaking the kernel is possible
We recommend using a web host where you can order a KVM-based VPS.
When you want to eg. download a file from a server, the file will be broken
up into multiple IP packets, because a single packet is small
and the whole file won't fit in one.
Although those IP packets share the same source and destination IP address,
the route they follow over the internet may differ. It's also possible that the
destination receives the packets in a different order than the one in
which the source sent them.
TCP provides flow control (how fast you can send/receive packets
before the source or destination is overwhelmed), handles retransmission of
dropped packets and sorts IP packets back into the correct order.
TCP is highly tunable for performance, which we will see in later chapters.
IPv6, or IP version 6, aims to solve this problem. In this addressing scheme
there are 2^128 possible IP addresses. Here is an example:
3ffe:6a88:85a3:0000:1319:8a2e:0370:7344
When choosing a web host and ordering a server you will generally get one
or two IPv4 addresses included. This means that if you host more than two
websites, some will share the same IP address. (This is possible via
virtual host support in web servers like nginx.)
We recommend checking how many IPv6 addresses are included. If there
are none, we cannot recommend such a hosting provider, as they are not
future proof. The best solution today is to make your site available both
under IPv4 and IPv6; this way nobody is left out in the cold and you can
reach the highest number of potential customers/devices. We'll explain how
to do this in later chapters.
RamNode Ordering
You'll need to surf to RamNode in your browser and click on View
Plans and Pricing. Then click on the KVM Full Virtualization option.
Massive: uses SSD caching, which gives you more storage at a lower
price
Standard: uses Intel E5 v1/v2 CPUs with a minimum per-core speed of
2.3GHz. It comes with less bandwidth than the Premium KVM SSDs.
They are also not available in Europe (NL - Netherlands).
Premium: uses Intel E3 v2/v3 CPUs with a minimum per core speed of
3.3GHz. They are available in the Netherlands, New York, Atlanta and
Seattle.
When clicking on the orange order link you'll be taken to the shopping cart
where you can configure a few settings:
billing cycle
the hostname for your server (you should choose a good name identifying
the server, eg. server.<mydomainname>.<com/co.uk/…>)
You can order an extra IPv4 address for 1.50 USD
You can order an extra Distributed Denial of Service (DDoS) filtered
IPv4 address for 5 USD (more information here)
On the next page you can finish the ordering process by entering personal
and payment details.
After a few hours you'll receive an email notifying you that the VPS has
been set up by RamNode.
At this point you can access a web based control panel at RamNode
(SolusVM) where you can install the operating system on your VPS.
Completing this is the first step towards getting your website up and running.
We'll give you some recommendations and a step-by-step guide on how to
install the OS in the next chapter.
If you decide to use another host like MediaTemple, the process will be
pretty similar.
Using a recent kernel is important for the speed of your site as new versions can provide you with
performance and scalability improvements in networking and KVM hypervisor areas.
Support for TCP Fast Open (TFO), a mechanism that aims to reduce the latency penalty imposed
on new TCP connections (Available since Linux kernel 3.7+)
the TCP Initial congestion window setting was increased (with broadband connections the
default setting limited performance) (Available since Linux kernel 2.6.39)
Default algorithm for recovering from packet loss changed to Proportional Rate Reduction for
TCP since Linux Kernel 3.2
Don’t underestimate the impact of kernel upgrade; here is what Ilya Grigorik from the Web Fast Team
at Google says about kernel upgrades:
(http://chimera.labs.oreilly.com/books/1230000000545/ch02.html#OPTIMIZING_TCP)
As a starting point, prior to tuning any specific values for each buffer and timeout variable in TCP, of
which there are dozens, you are much better off simply upgrading your hosts to their latest system
versions. TCP best practices and underlying algorithms that govern its performance continue to
evolve, and most of these changes are available only in the latest kernels.
Our recommendation is to choose a stable version of a Linux distribution which comes with the most
recent kernel possible.
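To check which kernel a server is currently running, a single command is enough (the version string shown in the comment is only an illustration; yours will differ):

```shell
uname -r    # prints the running kernel release, eg. something like 3.19.0-33-generic
```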
The defaults for the server distribution may also be optimized better.
Some Linux distributions have a Longterm support version and stable mainline version. The latter is
usually newer, can come with a newer kernel, but is not supported as long as the Longterm support
version.
For example, with Ubuntu the current Long Term Support (LTS) version is 14.04, which is
supported until April 2019. By supported they mean:
The latest stable version is now 15.10 which will be supported for the next nine months. By that time
a newer stable version will be out which you can upgrade to.
With Ubuntu a Long term support version is generally released every two years.
Our recommendation is to use the latest stable version. These releases are generally more than stable
enough, and will allow you to use newer kernels and functionality more rapidly.
We'll be installing Ubuntu on our RamNode server. Looking at the current list of Linux flavors
available at RamNode, we recommend installing Ubuntu 15.04, as it comes with kernel 3.19.
Note that while we were writing this ebook, the latest version of Ubuntu was 15.04 which we’ll show
you how to install below.
In the next chapter we’ll also explain how to upgrade to Ubuntu 15.10.
Selecting the ISO template in the Ramnode Control Panel and Booting from CDROM
To start the install we need to select the Ubuntu 15.04 ISO template from the Ramnode control panel.
Click on the CDRom tab and mount the latest Ubuntu x86_64 Server ISO file.
Click on the Settings tab and change the boot order to (1) CDROM (2) Hard Disk
Now reboot your server
You can use the following VNC Viewers via RamNodes Control panel:
VNC Clients
We recommend using the HTML5 SSL VNC Viewer. If it doesn't work, you can use the Java VNC
client, provided you have installed Java on your machine.
All the VNC clients on RamNode control panel will automatically connect to the host address and
port you can see on the VNC tab.
Alternatively you can also install Tight VNC Viewer on Windows/Mac and connect to the
address/port and use the password provided.
Because you have rebooted the VPS and launched the CDROM with the Ubuntu install you should
now see the Ubuntu installer in the VNC window.
Select the location of your server, to correctly set the time zone and system locale of your Linux
installation. This may differ from the country you're living in if you are hosting your site in
another country.
Now it is time to configure the hostname for your server. This name will later be used when setting up
your web server. You can choose eg. server.<mydomain>.com or something similar.
Enter your full name, and on the second screen enter your first name (in lowercase).
To gain a little extra performance we didn’t let Ubuntu encrypt our home directory. If you will allow
multiple users to access your server it’s recommended to enable this setting though.
The next step is partitioning the disk you want to install Linux on. Of course you should make
sure there is no data on the disk that you want to keep, as the partitioning process will remove all
the data.
Choose the Guided - use entire disk and set up LVM option.
Partitioning
Choose Yes to write the changes to disk and configure the LVM. (Logical Volume Manager)
The final step in the partitioning process: the installer shows that it'll create a root partition and a
swap partition on Virtual disk 1, where we will install Ubuntu Linux.
Partition disks
Now the package manager “apt” (used to install and update software packages on Ubuntu) needs to be
configured. You can leave the HTTP proxy line empty unless you’re using a proxy server to connect
to the Internet.
In the software selection we unchecked all optional packages, as we want the initial install to be as
clean as possible. Later in this guide we will install Nginx, PHP, MariaDB, etc… manually
Software selection
When the installer asks you to reboot the server, you’ll first need to change the boot order again, so
Ubuntu is launched from your hard disk instead of the setup process on the CDROM.
To change the boot order you need to log in to your SolusVM Control Panel and change the boot
order as we explained earlier.
After the reboot you’ll need to login again with VNC (using the username created during the setup
and your chosen password).
Now you have successfully installed Ubuntu Linux and you're ready to install other packages to make
it a real server!
Secure Shell (SSH) is used to securely get access to a remote computer. We'll use it to administer our
server, eg. for installing new software, configuring/tuning settings or restarting our web server.
The first thing we want to install is an SSH daemon. This will allow you to connect via an SSH client
or SFTP client to your server.
We assume the server has now booted and you’re viewing the command line on your server via the
VNC Viewer.
If you want to change the default settings, such as the default port to connect to, edit the
/etc/ssh/sshd_config file:
$ sudo nano /etc/ssh/sshd_config
The default port is 22; it can be useful to change this to another port number to enhance the security
of your server (as people will have to guess where the SSH service is running).
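For example, to move SSH to a different port, change the Port line in the file (2222 below is an arbitrary example; pick your own unused port):

```
# /etc/ssh/sshd_config
Port 2222
```

After saving, restart the SSH daemon (eg. sudo service ssh restart on Ubuntu) and remember to use the new port in your SSH and SFTP clients from then on.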
If you want, you can let PuTTY remember the details via Save.
PuTTY configuration
PuTTY will then issue a warning so you don't accidentally connect to a compromised server.
To upload files securely we recommend FileZilla as it supports SFTP, the secure version of File
Transfer Protocol (FTP).
Open the FileZilla SiteManager and enter the same server details as for SSH login with Putty.
Make sure to use the SFTP protocol and Logon Type 'Normal' to be able to log in via username and
password.
apt-get update downloads the package lists from the repositories and "updates" them to get
information on the newest versions of packages and their dependencies.
apt-get upgrade will actually be used to install the newest versions of all packages currently installed
on the system.
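In practice the two apt-get commands are usually run back to back (they need root privileges, hence sudo):

```shell
sudo apt-get update    # refresh the package lists from the repositories
sudo apt-get upgrade   # install the newest versions of installed packages
```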
sudo allows users to run programs with the security privileges of another user (normally the
superuser, or root)
For many, that’s as secure as it gets, and this is mostly because the /tmp directory is just that: a
directory, not its own filesystem.
By default the /tmp directory lives on the root partition (/ partition) and as such has the same mount
options defined.
In this section we will make the temp directory a bit more hackerproof. Because hackers could try to
place executable programs in the /tmp directory we’ll set some additional restrictions.
We will do that by setting /tmp on its own partition, so that it can be mounted independently of the
root (/) partition and have more restrictive options set. More specifically, we will set the following options:
You could then remove the /var/tmp directory and create a symlink pointing to /tmp so that the
temporary files in /var/tmp also make use of these restrictive mount options.
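A sketch of what such an fstab entry could look like; the tmpfs size and the exact option list here are assumptions, so adapt them to your server and to the options the book recommends:

```
# /etc/fstab -- a separate /tmp with restrictive mount options (example values)
tmpfs   /tmp   tmpfs   defaults,noexec,nosuid,nodev,size=512M   0   0
```

The /var/tmp symlink can then be created with: sudo rm -rf /var/tmp && sudo ln -s /tmp /var/tmp (make sure nothing on the server is using /var/tmp first).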
Axel tries to accelerate downloads by using multiple connections (possibly to multiple servers) for
one download. It might be very useful as a wget replacement.
sudo apt-get install axel
Usage is:
$ axel <url to file you want to download>
NTP is a TCP/IP protocol for synchronising time over a network. Basically a client requests the
current time from a server, and uses it to set its own clock.
ntpdate
Ubuntu comes with ntpdate as standard, and will run it once at boot time to set your clock
using Ubuntu's NTP server.
$ ntpdate -s ntp.ubuntu.com
ntpq
Verify that it is running:
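A common way to check, assuming the full ntp daemon is installed (and not just the one-shot ntpdate), is to list the peers it is synchronising against:

```shell
ntpq -p    # lists the NTP peers the daemon is synchronising against
```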
In this section we’ll make sure that our Ubuntu Linux is correctly configured so that our server is
accessible via an IPv6 address.
By default, after the install of Ubuntu, IPv6 is not enabled yet. You can test whether IPv6 support is
enabled by pinging an IPv6 server of Google from your VPS server:
$ ping6 ipv6.google.com
It’ll return unknown host if your server is not yet IPv6 enabled.
When you got your VPS, you were given 1 or more IPv4 addresses and IPv6 addresses. Look them
up, as we’ll need them shortly. With RamNode you can look them up via the SolusVM Control panel
at https://vpscp.ramnode.com/login.php
To have network access, our VPS has a (Gigabit) network card. Each network card is available in
Linux under a name like eth0, eth1, …
eth0 is the Linux interface to our Ethernet network card. The inet addr entry tells us that an IPv4
address is configured on this interface/network card.
We don't see any 'inet6 addr' entry; that'll be added below for our IPv6 addresses.
$ sudo nano /etc/sysctl.conf
Add:
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.bindv6only = 0
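To apply these settings without rebooting, reload the configuration file:

```shell
sudo sysctl -p    # re-reads /etc/sysctl.conf and applies the values
```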
/etc/network/interfaces is the file which contains the configuration details of all our networking
interfaces.
$ sudo nano /etc/network/interfaces
For the loopback network interface we add the following line in bold if it is not already present:
# The loopback network interface
auto lo
iface lo inet loopback
*iface lo inet6 loopback*
The following line auto starts the eth0 network interface at bootup:
# Auto start at boot
auto eth0
For Ramnode the netmask is 255.255.255.0. The gateway server should also be documented by your
VPS host. For Ramnode you can find the information here.
We will use the Google DNS nameservers 8.8.8.8 and 8.8.4.4, so our server can resolve domain names and
hostnames.
We have a second IPv4 address, which we want to bind to the same physical network card (eth0).
The concept of creating or configuring multiple IP addresses on a single network card is called IP
aliasing. The main advantage of IP aliasing is that you don’t need a physical network card
attached to each IP; instead you can create multiple virtual interfaces (aliases) on a single
physical card.
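As a sketch, an IP alias for a second IPv4 address can be configured in /etc/network/interfaces with an eth0:0 stanza. The address below (203.0.113.20) is a placeholder, not one of your real addresses; substitute the second IPv4 address your VPS host gave you:

```text
# Hypothetical alias stanza: a second IPv4 address bound to eth0
auto eth0:0
iface eth0:0 inet static
    address 203.0.113.20
    netmask 255.255.255.0
```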
In the above example we set up the IPv4 addresses statically (hence the static keyword). There are
other options, like dhcp. You can read more about it at https://wiki.debian.org/NetworkConfiguration
Now let’s bind the IPv6 addresses we have to the eth0 interface:
For Ramnode the netmask is 64. The gateway server should also be documented by your VPS host.
We will use the Google IPv6 DNS nameservers 2001:4860:4860::8888 and 2001:4860:4860::8844, so
our server can resolve domain names and hostnames over IPv6.
The up ip -6 / down ip -6 lines set up IPv6 aliasing in a similar manner to what we did for our second
IPv4 address.
This makes sure that the DNS nameservers we have specified are added to the file /etc/resolv.conf:
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 2001:4860:4860::8888
After booting, check the output of ifconfig again; it should now list a ‘Scope:Global’ IPv6 address in
the eth0 section:
$ ifconfig
Now try to ping Google via the IPv6-enabled ping utility:
$ ping6 ipv6.google.com
PING ipv6.google.com(yk-in-x8a.1e100.net) 56 data bytes
64 bytes from yk-in-x8a.1e100.net: icmp_seq=1 ttl=57 time=23.8 ms
64 bytes from yk-in-x8a.1e100.net: icmp_seq=2 ttl=57 time=15.1 ms
Congratulations, your server is now reachable via IPv4 and IPv6 addresses!
A good way to check how much RAM is being used on your server is using the command free -m
total used free shared buffers cached
Mem: 3953 1896 2056 0 151 382
-/+ buffers/cache:1363 2590
Swap: 4095 0 4095
Make sure you look at the free RAM on the -/+ buffers/cache line, because the first line includes the
memory used for disk caching, which can make it seem like you have no RAM left.
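To illustrate, here is a small sketch (our own, not from the book) that reproduces the “-/+ buffers/cache” figure from the sample output above: the memory really available is the free memory plus whatever is only being used for buffers and the disk cache:

```python
# Sample numbers from the `free -m` output shown above (all in MB).
mem_free = 2056   # "free" column on the Mem: line
buffers = 151     # memory used for buffers
cached = 382      # memory used for the disk cache

# Memory that is actually available: buffers and cache can be reclaimed.
available = mem_free + buffers + cached
print(available)  # within rounding of the 2590 MB on the -/+ buffers/cache line
```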
If you’re not using an LTS (Long Term Support) version, your Ubuntu install may become
unsupported pretty soon (e.g. after 9 months). You’ll be greeted with the following warning if you’re
running an unsupported version when you login via SSH:
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife
To be able to use the newest kernels we recommend non-LTS versions, which means you’ll upgrade
more regularly.
In the following sections we’ll give some important pre-upgrade checks you have to do before starting
the upgrade.
Here is how you can disable noexec; make sure to revert the change after the upgrade.
$ sudo nano /etc/fstab
Replace the tmpfs line for /tmp (the one containing noexec) with:
tmpfs /tmp tmpfs noatime,nodiratime,rw,nosuid 0 0
You can remove any unused old kernels from the boot partition via the following command:
$ sudo apt-get autoremove
We will execute the upgrade via a remote VNC session. For RamNode this can be accessed from the SolusVM CP
panel at https://vpscp.ramnode.com/remote.php
The VNC login looks very similar to an SSH based login. Some keys may not work as expected
though. For example the - character has to be entered as ALT-45 to work.
$ sudo do-release-upgrade
If the upgrade asks to overwrite locally changed files choose the option to keep your version of the
files.
Serverbear
Go to http://serverbear.com/ and choose Benchmark my server. Choose
your hosting provider and which plan you have bought. Enter your email to
get the results of the benchmarks when they are ready.
At the bottom you can copy paste a command that you can use within your
SSH session.
When the benchmark finishes you will receive an email with a link to your
results.
The ServerBear benchmark tests the CPU (UnixBench score), the
performance of the I/O subsystem, and the download speed from various
locations to your server.
To test the latency from your location to the server you can use the
Windows/Mac builtin ping command:
Network card: Intel PRO/1000. This makes sure we’re not limited to 100 Mbit/s
bandwidth on the network card (but instead have the full 1000 Mbit/s available).
Disk driver: virtio - this can give better IO performance
After this change we can run the ServerBear benchmark again with the following results:
UnixBench score: 3561.0
I/O rate: 1300.0 MB/second
Bandwidth rate: 110.0 MB/second
You can view the processor information with the following command:
$ cat /proc/cpuinfo
processor : 1
...
From the output you can see we have a multi-core CPU (processor: 0, processor: 1). The
model name, as you can see, is QEMU Virtual CPU version. In the flags section you can
see what kind of special instruction sets the CPU supports, e.g. SSE2.
In reality our CPU supports a lot more instruction sets, e.g. SSE4, but they are not yet
visible in /proc/cpuinfo, which means Linux applications, or software you compile,
may not use those instructions either.
To be sure we maximize performance we can enable the CPU Passthrough mode so that
all CPU features are available.
Now we can see we’re running on an Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz.
The flags line also has a lot more instruction sets available now.
Tuning the Linux kernel can be done with the sysctl program or by editing the /etc/sysctl.conf file.
After editing the sysctl.conf file you can run sudo sysctl -p to reflect the changes without rebooting your server.
We will not try to explain every parameter in detail because this is highly technical. If you’re interested in more
information about TCP/IP we recommend you to read the book High Performance Browser Networking by Ilya
Grigorik
To start editing the settings open the sysctl.conf file in an editor; all settings in the next section can be added to this
file (at the end preferably)
$ sudo nano /etc/sysctl.conf
This is what the TCP/IP Slow start algorithm is designed to do. The general idea is that the sender of the data starts
to send data slowly, waiting for ACKs (=acknowledgements) by the receiver. If the network is great between the
two, and the sender gets the ACKs back correctly, the sender will increase the amount of data in the next round
(before waiting for ACKs to return). This is called the TCP/IP congestion Window.
This phase of the TCP connection is commonly known as the “exponential growth” algorithm, as the client and the
server are trying to quickly communicate at maximum speed on the network path between them.
No matter the available bandwidth, every TCP connection must go through the slow-start phase. This can hinder
performance when downloading eg. an html file with multiple Javascript files and style sheets.
These are all separate HTTP connections which run on the TCP/IP protocol. Because the files are small in size, it is
not unusual for the requests to terminate before the maximum TCP window size is reached. Because of this,
slow start limits the available bandwidth throughput, which has an adverse effect on the performance of small transfers.
To decrease the amount of time it takes to grow the congestion window, we can decrease the roundtrip time
between the client and server so the ACKs will arrive faster at the sender, which can then in turn start sending
more data sooner.
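To make the effect concrete, here is a rough sketch (our own illustration, not from the book) of slow start’s exponential growth: the congestion window doubles roughly every round trip, so a larger initial window saves round trips on short transfers. The helper name and segment counts are made up for the example:

```python
# Rough model of TCP slow start: the congestion window (in segments)
# doubles each round trip until it reaches the target size.
def rounds_to_reach(target_segments, initial_window=10):
    """Round trips needed before the window reaches target_segments."""
    cwnd, rounds = initial_window, 0
    while cwnd < target_segments:
        cwnd *= 2   # exponential growth phase
        rounds += 1
    return rounds

# With an initial window of 10 segments, a 640-segment window is reached
# in fewer round trips than with an older initial window of 4 segments.
print(rounds_to_reach(640))                    # 6 round trips
print(rounds_to_reach(640, initial_window=4))  # 8 round trips
```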
In Linux kernel 2.6.39 and later, the initial congestion window size was increased to 10 segments of data (e.g. due
to better networks, TCP/IP connections will now transfer more data before waiting for ACKs). It’s one of the
reasons to use a recent kernel.
TCP also implements a slow start restart mechanism, which reduces the size of the TCP congestion window
again after a connection has been idle for a period of time (because network conditions could have changed in the
meantime).
This can have a significant impact on TCP connections which are alive for a longer time. It is recommended to
disable the Slow start restart mechanism on the server.
# Disable TCP IP Slow start after idle; this affects IPv6, too, despite the name
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_keepalive_time=1800
This setting reduces the keepalive timeout of a TCP connection to 30 minutes (1800 seconds). This way we have
less memory usage.
net.ipv4.tcp_max_syn_backlog = 10240
This setting increases the maximum number of remembered connection requests, which still did not receive an
acknowledgment from the connecting client. You may need to lower this number if you have a memory
constrained VPS. The default is 1024.
net.core.netdev_max_backlog = 2500
The number of packets that can be queued should be increased from the default of 1000 to 2500
net.ipv4.tcp_max_tw_buckets = 1440000
With a web server you will see a lot of TCP connections in the TIME-WAIT state. TIME_WAIT is when the
socket is waiting after close to handle packets still in the network. This setting should be high enough to prevent
simple Denial of Service attacks.
net.ipv4.ip_local_port_range = 1024 65535
This setting defines the local port range that is used by TCP traffic. The setting takes two
numbers: the first is the first local port allowed for TCP on the server, the second is the last allowed local port
number. For high-traffic sites the range can be increased, so that more local ports are available for
concurrent use by the web server.
net.ipv4.tcp_max_orphans = 60000
An orphan socket is a socket that isn’t associated to a file descriptor. For instance, after you close() a socket, you
no longer hold a file descriptor to reference it, but it still exists because the kernel has to keep it around for a bit
more until TCP is done with it. If this number is exceeded, the orphaned connections are reset immediately and
a warning is printed. Each orphan eats up to 64 KB of unswappable memory.
The number is not in bytes but in number of pages (where most of the time 1 page = 4096 bytes).
So if we take the total memory of our server (4 GB = 4,000,000,000 bytes) we can do the math:
4,000,000,000 / 4096 ≈ 976,562 pages
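The conversion above can be sketched as a two-liner; the 4 GB figure is the example server’s RAM, and 4096 bytes is the usual (but not universal) page size:

```python
PAGE_SIZE = 4096            # bytes per page, the common default
total_ram = 4_000_000_000   # 4 GB of RAM, as in the example above

# tcp_mem thresholds are expressed in pages, not bytes.
pages = total_ram // PAGE_SIZE
print(pages)                # 976562 pages
```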
We actually recommend to not set tcp_mem manually as it is already auto-tuned by Linux based on the amount of
RAM.
We don’t recommend to use higher values unless you have more memory available.
You can view the current amount of pages needed by TCP via:
$ cat /proc/net/sockstat
sockets: used 169
TCP: inuse 32 orphan 0 tw 9 alloc 68 mem 34
The first value of net.ipv4.tcp_rmem tells the minimum TCP receive buffer space available for a single TCP socket.
Unlike the tcp_mem setting, this one is in bytes. The second value is the default size. The third value is the
maximum size.
net.ipv4.tcp_wmem=4096 65536 16777216
The first value in this variable tells the minimum TCP send buffer space available for a single TCP socket. Unlike
the tcp_mem setting, this one is in bytes. The second value is the default size. The third value is the maximum size.
16 MB per socket sounds like a lot, but most sockets won’t use anywhere near that much (and it is nice to be able to
expand if necessary).
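A quick sanity check for the 16 MB ceiling is the bandwidth-delay product: the amount of data that must be in flight to keep a link busy. The link speed and round-trip time below are illustrative assumptions, not figures from the book:

```python
# Bandwidth-delay product for a hypothetical 1 Gbit/s link with 100 ms RTT.
bandwidth_bits_per_s = 1_000_000_000   # assumed link speed
rtt_ms = 100                           # assumed round-trip time

# Bytes in flight needed to fill the pipe = bandwidth * RTT.
bdp_bytes = bandwidth_bits_per_s // 8 * rtt_ms // 1000
print(bdp_bytes)                       # 12500000 bytes, i.e. 12.5 MB
print(bdp_bytes < 16_777_216)          # True: the 16 MB maximum covers it
```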
net.ipv4.tcp_sack = 1
The tcp_sack variable enables Selective Acknowledgements (SACK). It was developed to handle lossy
connections better.
net.ipv4.tcp_timestamps = 1
This is a TCP option that can be used to calculate the round-trip measurement in a better way than the
retransmission timeout method can.
net.ipv4.tcp_window_scaling = 1
This specifies how we can scale TCP windows if we are sending them over large bandwidth networks. When
sending TCP packets over these large pipes, we experience heavy bandwidth loss due to the channels not being
fully filled while waiting for ACK’s for our previous TCP windows.
Enabling tcp_window_scaling enables a special TCP option which makes it possible to scale these windows to a
larger size, and hence reduces bandwidth losses due to not utilizing the whole connection.
net.ipv4.tcp_ecn = 1
This enables Explicit Congestion Notification (ECN), which lets routers signal congestion instead of dropping packets.
Linux kernel 3.5 and later also implement TCP Early Retransmit (net.ipv4.tcp_early_retrans) with some safeguards for connections that have a
small amount of packet reordering. This allows connections, under certain conditions, to trigger fast retransmit and
bypass the costly Retransmission Timeout (RTO). By default it is enabled in the failsafe mode,
tcp_early_retrans=2.
net.core.somaxconn=1024
The maximum number of “backlogged sockets”. The default is 128. This is only needed on a very loaded server:
you’re effectively letting clients wait instead of returning a connection abort.
net.core.wmem_max=16777216
The maximum OS send buffer size in bytes for all types of connections
net.core.rmem_max=16777216
The maximum OS receive buffer size in bytes for all types of connections
net.core.rmem_default=65536
The default OS receive buffer size in bytes for all types of connections.
net.core.wmem_default=65536
The default OS send buffer size in bytes for all types of connections.
When you’re serving a lot of html, stylesheets, etc; it is usually the case that the web server will open a lot of local
files simultaneously. The kernel limits the number of files a process can open.
To scale your web server (eg. we will install Nginx later), you need to increase this number:
$ ulimit -Hn
4096
$ ulimit -Sn
1024
-Hn shows us the hard limit, while -Sn shows the soft limit. To raise both limits, add the following to
/etc/security/limits.conf:
* soft nofile 65536
* hard nofile 65536
This means that the soft and hard limit on the number of files per user is 65536. The * means that we apply this
limit to all user accounts on your system.
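The same limits can be read programmatically. This sketch uses Python’s standard resource module to show the soft and hard file-descriptor limits of the current process, the same numbers ulimit reports:

```python
import resource

# RLIMIT_NOFILE holds the (soft, hard) limits on open file descriptors,
# matching the output of `ulimit -Sn` and `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```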
These settings are optimized for SSD disks. Do not apply these if you don’t have an SSD in your server.
vm.swappiness=1
With the setting above, you limit the use of the swap partition (the virtual memory on the SSD). Ubuntu’s
inclination to use swap is determined by this setting.
On a scale of 0-100, the default is 60. We set it to 1 so that swap is not used (= less I/O traffic) unless the
server runs into a severe RAM shortage.
vm.vfs_cache_pressure=50
This controls how aggressively the kernel reclaims the memory used for caching directory and inode objects.
Lowering it from the default of 100 to 50 makes the kernel prefer to keep these filesystem caches in memory.
Scheduler Settings
I/O performance, or the read/write latency of a web server can seriously impact the overall page load times of your
server. Making a simple change to the IO scheduler that’s built into the kernel can decrease your IO latency by
20% or more.
Scheduling algorithms attempt to reorder disk access patterns to mitigate the shortcomings of traditional HDDs:
they work best with I/O devices that have reasonable transfer speeds but slow seek times.
Depending on the Linux distribution and version you have installed, the I/O Scheduler can be different. There are a
few possible Linux schedulers:
Deadline
CFQ
NoOp
None
By default, Ubuntu 14.04 and later uses the I/O scheduler Deadline, which is working well for both SSDs and
conventional hard disks.
When running Ubuntu 14.04 inside a KVM based VPS, you cannot choose the I/O scheduler as you are accessing
a virtual device. The I/O request is passed straight down the stack to the real disk. The scheduler used will be
“none”. This is due to kernel changes in Kernel 3.13 and later.
By default Ubuntu 12.04 uses the CFQ scheduler which is good for conventional hard disks but not for SSDs.
CFQ, Completely Fair Queuing, tries to balance I/O among all the processes fairly. This isn’t the best option for
web servers.
Here is how to view the current and available schedulers on your Linux system:
You may need to substitute the vda portion of the command with your disk devices, which may be sda, sdb, sdc or
hda, hdb, etc.
$ cat /sys/block/vda/queue/scheduler
It says the deadline scheduler is the current IO scheduler. cfq and noop are also available.
Our recommendation is to use the noop scheduler, which will allow the host to determine the order of the
read/write requests from your VPS instead of the VPS re-ordering and potentially slowing things down with the
cfq scheduler.
Add the following lines to a boot script such as /etc/rc.local, so they are applied at every boot:
echo 'noop' > /sys/block/vda/queue/scheduler
echo '0' > /sys/block/vda/queue/rotational
The last line sets whether the device is of a rotational or non-rotational type. For SSD disks this should
be set to 0, because seek times are much lower than on traditional HDDs (where the scheduler spends extra CPU
cycles to minimize head movement).
Setting the correct scheduler on the KVM host is something that your VPS provider has to set. Ideally they will use
the noop scheduler for SSD based VPSs and deadline if the server also has HDDs in the mix.
In this case no changes are necessary inside the KVM VPS. Just be sure or ask your VPS provider to set deadline
or noop scheduler on the KVM host.
Now add ‘noatime’ in /etc/fstab for all your partitions on the SSD.
To enable this on a KVM VPS, your VPS provider will need to add a discard='unmap' attribute to the disk
definition for the domain (https://libvirt.org/formatdomain.html#elementsDisks).
We can also use it to, e.g., connect securely to our database or generate private/public keys;
it also offers keystores, certificates, and more.
Last year OpenSSL was all over the news because it had some major security bugs.
These have all been fixed in the current version.
In the guide below we will show you how to compile the latest version from source.
We’ll also describe how you can update to a new version when one appears.
You can view the installed version of openssl on your system by typing:
$ openssl
then type
version
OpenSSL> version
OpenSSL 1.0.1j 15 Oct 2014
OpenSSL>
Here we see we’re not running the latest version. 1.0.2d was released in July 2015 and
fixes 12 security bugs. You can find the newest versions at https://www.openssl.org/
Edit the file /etc/manpath.config, adding the following line before the first
MANPATH_MAP:
$ sudo nano /etc/manpath.config
MANPATH_MAP /usr/local/ssl/bin /usr/local/ssl/man
Edit the file /etc/environment and insert the path for the new OpenSSL version
(/usr/local/ssl/bin) before the path for Ubuntu’s version of OpenSSL (/usr/bin).
$ sudo nano /etc/environment
My environment file looks like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/local/ssl/bin:/usr/sbin:/usr/bin:/sbin\
:/bin:/usr/games"
After reboot check whether executing openssl displays the version you’ve just
upgraded to:
$ openssl
OpenSSL> version
OpenSSL 1.0.2d 9 Jul 2015
You can see that AES is enabled and used by OpenSSL on our system.
Make sure you’re exposing the AES instructions to your KVM VPS by Configuring
the CPU model exposed to KVM instances, which we explained previously.
A firewall
Checks login authentication failures for SSH, FTP, …
Excessive connection blocking
Syn flood protection
Ping of death protection
CSF is autostarted on bootup of your server. It reads its configuration settings from the
file /etc/csf/csf.conf
As you can see the last 4 are used for IPv6 connections. Incoming connections are
connections coming from the outside world to your server. (these could be HTTP
connections, SSH login, …). Outgoing connections are connections created by your
server to the outside world.
You should only add the ports for services you are really using; the ports which are not
whitelisted are closed by default, reducing the possible attack surface for hackers.
At the top of the file, make sure that Testing mode is enabled for now. This makes sure
we don’t accidentally lock ourselves out of our own server by an incorrect configuration.
TESTING = "1"
No errors should be returned. To check if you can still login via SSH, open a second SSH
connection to your server after enabling CSF. You should be able to login correctly. If so
you can disable the testing mode in CSF by setting TESTING = “0” in the
/etc/csf/csf.conf file. Afterwards restart csf by running:
$ sudo csf -r
CSF Firewall can also allow or whitelist ip addresses with the following command:
$ csf -a xxx.xxx.xxx.xxx
When CSF is enabled, the login failure daemon is also automatically running.
You can view the logs of LFD (including which auto bans were performed) via the
following command:
$ sudo cat /var/log/lfd.log
With a domain name your visitors can easily find your website. Without
Domain Name services your users would have to use the IP address of your
server to access your site (e.g. 72.13.44.2).
The first level is the .com part. This is either a top level domain (tld)
such as .gov, .com, .edu or a country code top level domain (.be, .fr,…)
The second level is the google part.
The third level is the www part.
To create your fully qualified domain name you’ll generally use something
like:
www.<mydomainname>.<tld>
or
<mydomainname>.<tld>
When ordering your domain name you’ll thus have to decide which top level
domain (<tld>) you want to choose and which <mydomainname> you want
to choose.
We recommend using a country-code top level domain if the target
audience of your site is from one country only. Otherwise the .com tld is still
the most used.
After signup or login you can enable the option Domain Privacy if needed.
Enabling Domain Privacy will hide your name, address, email and phone
number from ‘whois’ lookups via your domain name.
You also need to specify the name server to use. A name server is a
computer server that connects your domain name to any services you may
use (e.g. email, web hosting). We’ll go into more detail on how to configure
this in the next section. For now you can use the EuroDNS default name
server.
Now click on ‘Review and Payment’ to enter the credit card details and
finish the ordering process.
This name server can be the same server as your VPS server you’ll use for
hosting your website. In this case you would have to install name server
software on your VPS server and update the nameserver configuration at
EuroDNS to use your name server.
Managed DNS is a service that allows you to outsource DNS to a third party
provider. Here are the reasons why we recommend this option:
The Small Business plan at 29.95 USD will be fine for most sites, unless you
have massive amounts of traffic to your site (in that case you’ll need to
check the number of queries included in each plan).
After logging in to the control panel, you can add a domain this way:
1. Select the “DNS” menu and choose “Managed DNS”
2. Click “Add Domains” on the right
3. Enter a domain name and click “Ok”
For each we will discuss why they are used and how to configure them.
A Records
fill in the name (leave it blank for the first A record; for the second A record fill in
www)
the IPv4 address of your server
Dynamic DNS: off
This is very similar to A Records; except that the AAAA records will map to
IPv6 addresses of your server.
CNAME Records
One such example where you’ll need this is when using a CDN service
(Content Delivery Network) for optimizing latency and download speeds of
your website resources (eg. Images, html, css,… )
TXT Records are used for multiple purposes, one of which is email anti-spam
measures. We’ll discuss this further in our Setting up mails chapter.
Using the DNSMadeEasy nameservers at your DNS registrar EuroDNS
This will make sure that DNS servers around the world will find all the
configuration you’ve applied in the previous section.
You can attach this profile to your domain name via the following steps:
There are some handy free website tools which can verify if your DNS
configuration is well setup. Some tools also test the performance of the DNS
server or Service you are using.
The following sites also analyze your DNS configuration and performance:
https://cloudmonitor.ca.com/en/dnstool.php
In this chapter we will install MariaDB, which is a drop-in replacement for the well
known MySQL database.
MariaDB is a community-developed fork of the MySQL database that is intended to remain
free. It is led by the original developers of MySQL, after the acquisition of MySQL by
Oracle.
This means that for most cases, you can just use MariaDB where you would use MySQL.
If your website software doesn’t explicitly support MariaDB, but it does support MySQL,
it should work out-of-the-box with MariaDB.
Better performance
By default uses the XtraDB storage engine which is a performance enhanced fork of
the MySQL InnoDB storage engine
Better tested and fewer bugs
All code is open source
The commands below will add a MariaDB repository to Ubuntu 15.x, which will allow us
to install MariaDB and future updates via the apt-get system.
sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082\
a1bb943db
sudo add-apt-repository 'deb http://lon1.mirrors.digitalocean.com/mariadb/repo/1\
0.1/ubuntu vivid main'
You can find the exact steps for Ubuntu 15.x via the configuration guide at
https://downloads.mariadb.org/mariadb/repositories
The above steps only need to be performed once on a given server. The apt-key
command enables apt to verify the integrity of the packages it downloads.
$ sudo apt-get update
$ sudo apt-get install mariadb-server
During installation MariaDB will ask for a password for the root user.
Choose a good password, because the root MariaDB user is an administrative user that has
access to all databases!
Securing MariaDB
By default the MariaDB installation still has a test user, database, and anonymous login
which should be disabled on a production server.
You can run the following script to make your installation fully secure:
$ sudo mysql_secure_installation
It’ll make the following changes to your MariaDB installation:
Set a password for the root user
Remove anonymous users
Disallow root login remotely
Remove the test database and access to it
One important step you should still perform after the upgrade is to run mysql_upgrade:
$ sudo mysql_upgrade --user=<root mariadb user> --password=<password>
You should run mysql_upgrade after upgrading from one major MySQL/MariaDB release
to another, such as from MySQL 5.0 to MariaDB 5.1 or MariaDB 10.0 to MariaDB 10.1.
It is also recommended that you run mysql_upgrade after upgrading from a minor
version, like MariaDB 5.5.40 to MariaDB 5.5.41, or even after a direct “horizontal”
migration from MySQL 5.5.40 to MariaDB 5.5.40. If calling mysql_upgrade was not
necessary, it does nothing.
Symptoms you may see when you didn’t run mysql_upgrade include:
Errors in the error log that some system tables don’t have all needed columns.
Updates or searches may not find all the records.
Executing CHECKSUM TABLE may report the wrong checksum for MyISAM or
Aria tables.
When tuning the performance of MariaDB, there are some hardware considerations to be
reflected on.
Amount of memory available: when MariaDB has more memory available, larger
key and table caches can be stored in memory. This reduces disk access which is of
course much slower.
Disk access: fast disk access is critical, as ultimately the database data is stored on
disks. The key figure is the disk seek time, a measurement of how fast the physical
disk can move to access the data. Because we’re using an SSD, seek times are very
fast :-)
Kernel Parameters
With the setting above, you limit the use of the swap partition (the virtual memory on the
SSD). Ubuntu’s inclination to use swap is determined by this setting.
On a scale of 0-100, the default setting is 60. We set it to 1 so that swap is not used unless
the server gets severe RAM shortage.
MariaDB’s internal algorithms assume that memory is not swap, and are highly
inefficient if it is.
Swap increases IO over just using disk in the first place as pages are actively
swapped in and out of swap.
Database locks are particularly inefficient in swap. They are designed to be obtained
and released often and quickly, and pausing to perform disk IO will have a serious
impact on their usability.
Storage Engines
MariaDB comes with quite a few storage engines. They all store your data, and they all
have their pros and cons depending on the usage scenario.
The most used storage engines you’ll come across frequently are:
MyISAM: this is the oldest storage engine from MySQL; and is not transactional
Aria: a modern improved version of MyISAM
InnoDB: a transactional general purpose storage engine.
XtraDB: a performance-improved version of InnoDB. It is meant to be near 100%
compatible with InnoDB. From MariaDB 10.0.15 on it is the default storage
engine.
Using MySQLTuner
MySQLTuner is a free Perl script that can be run on your database which will give you a
list of recommendations to execute to improve performance.
It is advised to run this script when your database has been up for at least a day or longer.
Otherwise the recommendations may be inaccurate.
MySQLTuner
For our database it advises, e.g., defragmenting our tables and adding more INDEXes where
we join tables together. Here is how you can defragment all databases with
MySQL/MariaDB:
$ sudo mysqlcheck -uroot -p<password> -o --all-databases
To lookup slow queries (possibly due to missing indexes), take a look at the queries
logged in:
It’ll enable you or your application developers to analyze what is wrong and how to make
the queries more performant.
Also take a look at the “highest usage of available connections”. If this is much lower
than your max_connections setting, then you’re wasting memory which is never used.
For example, on our server the max_connections setting is 300 simultaneous
connections, while the highest number of concurrent sessions is only 30. This makes it
possible to reduce the max_connections setting to e.g. 150.
Enabling HugePages
What are huge pages or Large Pages?
When a process uses some memory, the CPU is marking the RAM as used by that
process. For efficiency, the CPU allocates RAM by chunks of 4K bytes (it’s the default
value on many platforms). Those chunks are named pages. Those pages can be swapped
to disk, etc.
The CPU and the operating system have to remember which page belongs to which
process, and where it is stored. Obviously, the more pages you have, the more time it
takes to find where the memory is mapped.
A Translation-Lookaside Buffer (TLB) is a page translation cache inside the CPU that
holds the most-recently used virtual-to-physical memory address translations.
The TLB is a scarce system resource. A TLB miss can be costly as the processor must
then read from the hierarchical page table, which may require multiple memory accesses.
By using a bigger page size, a single TLB entry can represent a larger memory range and
as such there could be reduced Translation Lookaside Buffer (TLB) cache misses
improving performance.
Most current CPU architectures support bigger pages than the default 4K in Linux. Linux
supports these bigger pages via the huge pages functionality since Linux kernel 2.6. By
default this support is turned off.
Note that enabling Large Pages can reduce performance if your VPS doesn’t have ample
RAM available. This is because when a large amount of memory is reserved by an
application, it may create a shortage of regular memory and cause excessive paging in
other applications and slow down the entire system.
We recommend to enable it only when you have enough RAM in your VPS (2GB or
more).
A kernel with Hugepage support should give a similar output like below:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
You can see a single hugepage is 2048KB (2MB) in size on our system. Support for
Hugepage is also one of the reasons to use a recent kernel. The total number of
HugePages is zero because we have not yet enabled HugePages support.
Enabling HugePages
Then add:
# Enable Large pages in our case 1 page = 2048kb
vm.nr_hugepages = 512
We reserve 1GB of RAM (512 pages × 2048KB) for Huge Pages use. You should adjust this
setting based on how much memory your VPS has. We have 4GB of RAM available, which
means we reserved 25%. Keep in mind that this RAM is not available to processes that
do not use Huge Pages.
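The sizing arithmetic above can be scripted as a quick sanity check. This is just the calculation, with the values used in this chapter (a 1GB reservation and the 2048KB page size reported by /proc/meminfo):

```shell
# How many huge pages are needed to reserve a given amount of RAM?
RESERVE_MB=1024     # we want to reserve 1 GB for Huge Pages
HUGEPAGE_KB=2048    # Hugepagesize reported in /proc/meminfo
PAGES=$(( RESERVE_MB * 1024 / HUGEPAGE_KB ))
echo "vm.nr_hugepages = $PAGES"
```

This prints vm.nr_hugepages = 512, matching the value we set above.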
Because we want to allow more than one process to access the HugePages, we will create
a group ‘hugepage’. Every user who needs access can then be added to this group.
For example, for user mysql we have the following groups attached now:
In our example you can now set the group number 1003 in the sysctl.conf:
$ sudo nano /etc/sysctl.conf
vm.hugetlb_shm_group=1003
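To find the numeric GID to put in vm.hugetlb_shm_group, you can query the group database. Shown here with the always-present root group as a stand-in; on your server, query the ‘hugepage’ group you just created:

```shell
# Print the numeric GID of a group (third field in the group database)
getent group root | cut -d: -f3
```

For the root group this prints 0; for your ‘hugepage’ group it would print a number like 1003.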
Now we need to update the Linux shared memory kernel parameters SHMMAX and
SHMALL inside /etc/sysctl.conf
SHMMAX is the maximum size of a single shared memory segment. Its size is expressed
in bytes.
kernel.shmmax = 1073741825
It should be higher than the amount of memory in bytes allocated for Huge Pages. As we
reserved 1GB of RAM this means: 1 * 1024 * 1024 * 1024 + 1 = 1073741825 bytes. Next
we specify that the max total shared memory (SHMALL) may not exceed 2GB (50% of the
available RAM).
Now SHMALL is not measured in bytes but in pages. In our example, where one page is
4096 bytes in size, that gives 2147483648 / 4096 = 524288, so we set kernel.shmall = 524288.
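The same arithmetic can be done in the shell. A small sketch using this chapter’s values (a 1GB Huge Pages reservation, a 2GB shared memory cap and 4096-byte system pages):

```shell
# SHMMAX: one byte more than the Huge Pages reservation (in bytes)
SHMMAX=$(( 1 * 1024 * 1024 * 1024 + 1 ))
# SHMALL: counted in system pages, not bytes
SHMALL=$(( 2 * 1024 * 1024 * 1024 / 4096 ))
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```

This prints kernel.shmmax = 1073741825 and kernel.shmall = 524288, the values used in sysctl.conf.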
The Linux OS can enforce memory limits that a process/user may consume. We adjust the
memlock parameter to set no limit for the mysql user. We also allow the mysql user to
open at most 65536 files at the same time. In the next section we will set the open-files-limit
MariaDB parameter to the same 65536 value. (MariaDB can’t set its open_files_limit to
anything higher than what was specified for the mysql user in limits.conf.)
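Put together, the limits.conf entries for the mysql user described above could look like the following sketch; adjust the values to your setup:

```
# /etc/security/limits.conf
mysql soft memlock unlimited
mysql hard memlock unlimited
mysql soft nofile 65536
mysql hard nofile 65536
```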
$ ipcs -lm
shows information about the shared memory limits: the maximum number of segments, the
maximum segment size and more. You can use it to verify the settings you’ve made in the
sysctl.conf file after a reboot.
System Variables
Tuning a database server is a vast subject that could span a whole book on its own. We
will only scratch the surface in this and the following sections. We will optimize the
configuration of the MyISAM and InnoDB/XtraDB storage engines.
We will give you some recommendations and tools. Of course your settings could be
different due to having more or less memory or applications which use the database
differently.
The configuration file is split into three parts:
[client]: configuration settings that are read by all client programs (eg. PHP accessing the database)
[mysqld_safe]: configuration settings for the mysqld_safe daemon (see below)
[mysqld]: general MySQL/MariaDB configuration settings.
mysqld_safe starts mysqld, the MySQL daemon, and checks its exit code. If mysqld did
not end due to a system shutdown or a normal service mysql stop, mysqld_safe will
attempt to restart it.
$ sudo nano /etc/mysql/my.cnf
# ----------------------------------------------------------------------
# CLIENT CONFIGURATION
# ----------------------------------------------------------------------
[client]
# As always, all charsets default to utf8.
default_character_set = utf8
[mysqld_safe]
# mysqld_safe is the recommended way to start a mysqld server on Unix. mysqld_sa\
fe adds some safety features such as restarting the server when an error occurs \
and logging runtime information to an error log file.
# Write the error log to the given file.
#
# DEFAULT: syslog
general-log-file = /var/log/mysql/mysql.log
log-error = /var/log/mysql/mysqld-safe-error.log
# The process priority (nice). Enter a value between -19 and 20; where
# -19 means highest priority.
#
# SEE: man nice
# DEFAULT: 0
nice = 0
# The Unix socket file that the server should use when listening for
# local connections
socket = /var/run/mysqld/mysqld.sock
# The number of file descriptors available to mysqld. Increase if you are gettin\
g the Too many open files error.
open-files-limit = 65536
# Port
port = 3306
# The number of file descriptors available to mysqld. Increase if you are gettin\
g the Too many open files error.
open-files-limit = 65536
# If you run multiple servers that use the same database directory (not recommen\
ded), each server must have external locking enabled.
skip-external-locking = true
# ALL character set related options are set to UTF-8. We do not support
# any other character set unless explicitly stated by the user who is
# working with our database.
character_set_server = utf8
# http://major.io/2007/08/03/obscure-mysql-variable-explained-max_seeks_for_key/
max_seeks_for_key = 1000
# The number of seconds that mysqld server waits for a connect packet
# before responding with "Bad Handshake". The default value is 10 sec
# as of MySQL 5.0.52 and 5 seconds before that. Increasing the
# connect_timeout value might help if clients frequently encounter
# errors of the form "Lost connection to MySQL server at 'XXX', system
# error: errno".
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_connect_timeout
# DEFAULT: 10
connect_timeout = 5
# Index blocks for MyISAM tables are buffered and shared by all threads.
# The key_buffer_size is the size of the buffer used for index blocks.
# The key buffer is also known as the key cache.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_key_buffer_size
# http://www.ewhathow.com/2013/09/what-is-the-recommended-value-of\
-key_buffer_size-in-mysql/
# DEFAULT: 8388608
key_buffer_size = 32M
# Number of open tables for all threads. See Optimizing table_open_cache for sug\
gestions on optimizing. Increasing table_open_cache increases the number of file\
descriptors required.
# https://mariadb.com/kb/en/optimizing-table_open_cache/
table_open_cache = 2048
# Whether large page support is enabled. You must ensure that your
# server has large page support and that it is configured properly. This
# can have a huge performance gain, so you might want to take care of
# this.
#
# You MUST have enough hugepages size for all buffers you defined.
# Otherwise you'll see errno 12 or errno 22 in your error logs!
# Hugepages can give you a hell of a headache if numbers aren't
# calculated wisely, but it's totally worth it as you gain a lot of
# performance if you're handling huge data.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/large-page-support.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_large_pages
# DEFAULT: 0
large_pages = true
# The index file for binary log file names. See Section 5.2.4, The
# Binary Log. If you omit the file name, and if you did not specify one
# with --log-bin, MySQL uses host_name-bin.index as the file name.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.htm\
l#option_mysqld_log-bin-index
# DEFAULT: OFF
# log_bin_index = /var/log/mysql/mariadb-bin.index
# Specifies how much information to include in your slow log. The value
# is a comma-delimited string, and can contain any combination of the
# following values:
#
# - microtime: Log queries with microsecond precision (mandatory).
# - query_plan: Log information about the query's execution plan (optional).
# - innodb: Log InnoDB statistics (optional).
# - full: Equivalent to all other values OR'ed together.
# - profiling: Enables profiling of all queries in all connections.
# - profiling_use_getrusage: Enables usage of the getrusage function.
#
# Values are OR'ed together.
#
# For example, to enable microsecond query timing and InnoDB statistics,
# set this option to microtime,innodb. To turn all options on, set the
# option to full.
#
# If a query takes longer than this many seconds, the server increments
# the Slow_queries status variable. If the slow query log is enabled,
# the query is logged to the slow query log file. This value is measured
# in real time, not CPU time, so a query that is under the threshold on
# a lightly loaded system might be above the threshold on a heavily
# loaded one. The minimum and default values of long_query_time are 0
# and 10, respectively. The value can be specified to a resolution of
# microseconds. For logging to a file, times are written including the
# microseconds part. For logging to tables, only integer times
# are written; the microseconds part is ignored.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_long_query_time
# SEE: http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html
# DEFAULT: 10
long_query_time = 1
# The maximum size of one packet or any generated/intermediate string.
# The packet message buffer is initialized to net_buffer_length bytes,
# but can grow up to max_allowed_packet bytes when needed. This value by
# default is small, to catch large (possibly incorrect) packets. You
# must increase this value if you are using large BLOB columns or long
# strings. It should be as big as the largest BLOB you want to use. The
# protocol limit for max_allowed_packet is 1GB. The value should be a
# multiple of 1024; nonmultiples are rounded down to the nearest
# multiple. When you change the message buffer size by changing the
# value of the max_allowed_packet variable, you should also change the
# buffer size on the client side if your client program permits it. On
# the client side, max_allowed_packet has a default of 1GB. Some
# programs such as mysql and mysqldump enable you to change the client-
# side value by setting max_allowed_packet on the command line or in an
# option file. The session value of this variable is read only.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_max_allowed_packet
# DEFAULT: ?
max_allowed_packet = 16M
# If a write to the binary log causes the current log file size to
# exceed the value of this variable, the server rotates the binary logs
# (closes the current file and opens the next one). The minimum value is
# 4096 bytes. The maximum and default value is 1GB.
# The size of the buffer that is allocated when sorting MyISAM indexes
# during a REPAIR TABLE or when creating indexes with CREATE INDEX or
# ALTER TABLE.
#
# The maximum permissible setting for myisam_sort_buffer_size is 4GB.
# Values larger than 4GB are permitted for 64-bit platforms (except
# 64-bit Windows, for which large values are truncated to 4GB with a
# warning).
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_myisam_sort_buffer_size
# Do not cache results that are larger than this number of bytes. The
# default value is 1MB.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_limit
# DEFAULT: 1048576
query_cache_limit = 512K
# The amount of memory allocated for caching query results. The permiss-
# ible values are multiples of 1024; other values are rounded down to
# the nearest multiple. The query cache needs a minimum size of about
# 40KB to allocate its structures.
#
# 256 MB for every 4GB of RAM
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_size
# DEFAULT: 0
query_cache_size = 128M
# Sets the global query cache type. There are three possible enumeration
# values:
# 0 = Off
# 1 = Everything will be cached; except for SELECT SQL_NO_CACHE
# 2 = Only SELECT SQL_CACHE queries will be cached
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_query_cache_type
# DEFAULT: 1
# http://major.io/2007/08/08/mysqls-query-cache-explained/
query_cache_type = 1
# Each thread that does a sequential scan for a MyISAM table allocates
# a buffer of this size (in bytes) for each table it scans. If you do
# many sequential scans, you might want to increase this value.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_read_buffer_size
# http://www.mysqlperformanceblog.com/2007/09/17/mysql-what-read_buffer_size-val\
ue-is-optimal/
# DEFAULT: 131072
read_buffer_size = 128K
# When reading rows from a MyISAM table in sorted order following a key-
# sorting operation, the rows are read through this buffer to avoid disk
# seeks. Setting the variable to a large value can improve ORDER BY
# performance by a lot. However, this is a buffer allocated for each
# client, so you should not set the global variable to a large value.
# Instead, change the session variable only from within those clients
# that need to run large queries.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/order-by-optimization.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# Only use IP numbers and all Host columns values in the grant tables
# must be IP addresses or localhost.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/host-cache.html
# DEFAULT: false
skip_name_resolve = true
# if set to 1, the slow query log is enabled. See log_output to see how log file\
s are written
slow_query_log=1
# The absolute path to the Unix socket where MySQL is listening for
# incoming client requests.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
r_socket
# DEFAULT: /tmp/mysql.sock
socket = /var/run/mysqld/mysqld.sock
# Size in bytes of the per-thread cache tree used to speed up bulk inserts into \
MyISAM and Aria tables. A value of 0 disables the cache tree
bulk_insert_buffer_size = 16M
# SEE: http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
# SEE: http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scala\
bility/
# SEE: http://www.mysqlperformanceblog.com/2009/11/26/more-on-table_cache/
# DEFAULT: ?
table_cache = 400
# Should be the same as table_cache
table_definition_cache = 400
# How many threads the server should cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there are
# fewer than thread_cache_size threads there. Requests for threads are
# satisfied by reusing threads taken from the cache if possible, and
# only when the cache is empty is a new thread created.
# Run the mysqld server as the user having the name user_name or the
# numeric user ID user_id
# The size of the memory buffer InnoDB / XtraDB uses to cache data and
# indexes of its tables. The larger this value, the less disk I/O is
# needed to access data in tables. A safe value is 50% of the available
# operating system memory.
#
# total_size_databases + (total_size_databases * 0.1) = innodb_buffer_pool_size
#
# SEE: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_poo\
l_size/
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_buffer_pool_size
# DEFAULT: 128M
innodb_buffer_pool_size = 256M
# If enabled, InnoDB / XtraDB creates each new table using its own .ibd
# file for storing data and indexes, rather than in the system table-
# space. Table compression only works for tables stored in separate
# tablespaces.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
db_file_per_table
# DEFAULT: FALSE
innodb_file_per_table = 1
# Default values
innodb_read_io_threads = 4
innodb_write_io_threads = 4
# If set to 1, the default, to improve fault tolerance InnoDB first stores data \
to a doublewrite buffer before writing it to the data file. Disabling it provide\
s a marginal performance improvement.
innodb_doublewrite = 1
# The size in bytes of the buffer that InnoDB uses to write to the log
# files on disk. A large log buffer enables large transactions to run
# without a need to write the log to disk before the transaction commit.
# If set to 1, the default, the log buffer is written to the log file and a flus\
h to disk performed after each transaction. This is required for full ACID compl\
iance. If set to 0, nothing is done on commit; rather the log buffer is written \
to the log file once per second. If set to 2, the log buffer is written to the l\
og file on commit, but flushed to disk only about once per second.
innodb_flush_log_at_trx_commit = 2
# Once this number of threads is reached (excluding threads waiting for locks), \
XtraDB/InnoDB will place new threads in a wait state in a first-in, first-out qu\
eue for execution, in order to limit the number of threads running concurrently.
innodb_thread_concurrency = 9
# If the extra columns used for the modified filesort algorithm would contain m\
ore bytes than this figure, the regular filesort algorithm is used instead. Sett\
ing it too high can lead some sorts to result in higher disk activity and lower \
performance.
max_length_for_sort_data = 1024
# The starting size, in bytes, for the connection and thread buffers for each cl\
ient thread. The size can grow to max_allowed_packet. This variable's session va\
lue is read-only. Can be set to the expected length of client statements if memo\
ry is restricted.
net_buffer_length = 16384
# Limit to the number of successive failed connects from a host before the host\
is blocked from making further connections. The count for a host is reset to ze\
ro if they successfully connect. To unblock, flush the host cache with a FLUSH H\
OSTS statement.
max_connect_errors = 10
# Minimum size in bytes of the blocks allocated for query cache results.
# http://dba.stackexchange.com/questions/42993/mysql-settings-for-query-cache-mi\
n-res-unit
query_cache_min_res_unit = 2K
# Size in bytes of the persistent buffer for query parsing and execution, alloca\
ted on connect and freed on disconnect. Increasing it may be useful if complex q\
ueries are being run, as this will reduce the need for memory allocations during\
 execution.
# Size in bytes of the extra blocks allocated during query parsing and executio\
n (after query_prealloc_size is used up).
query_alloc_block_size = 65536
# Size in bytes to increase the memory pool available to each transaction when t\
he available pool is not large enough
transaction_alloc_block_size = 8192
# Initial size of a memory pool available to each transaction for various memory\
allocations. If the memory pool is not large enough for an allocation, it is in\
creased by transaction_alloc_block_size bytes, and truncated back to transaction\
_prealloc_size bytes when the transaction ends.
transaction_prealloc_size = 4096
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
thread_handling=pool-of-threads
# ----------------------------------------------------------------------
# MYSQLDUMP CONFIGURATION
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
# ----------------------------------------------------------------------
[mysqldump]
[myisamchk]
# SEE: mysqld.key_buffer
key_buffer = 32M
sort_buffer = 16M
read_buffer = 16M
write_buffer = 16M
[mariadb]
# Default is latin1; if you need UTF-8, set all of this (also in the client section)
#
character-set-server = utf8
collation-server = utf8_general_ci
character_set_server = utf8
collation_server = utf8_general_ci
userstat = 0
# https://mariadb.com/kb/en/segmented-key-cache/
# For all practical purposes setting key_cache_segments = 1 should be slower tha\
n any other option and should not be used in production. Setting it to 0 (the de\
fault) disables segmentation.
key_cache_segments = 0
aria_log_file_size = 32M
aria_log_purge_type = immediate
# The size of the buffer used for index blocks for Aria tables. Increase this to\
get better index handling (for all reads and multiple writes) to as much as you\
can afford.
aria_pagecache_buffer_size = 128M
# The buffer that is allocated when sorting the index when doing a REPAIR or wh\
en creating indexes with CREATE INDEX or ALTER TABLE.
aria_sort_buffer_size = 512M
[mariadb-5.5]
# If set to 1 (0 is default), the server will strip any comments from the query \
before searching to see if it exists in the query cache.
query_cache_strip_comments=1
# http://www.mysqlperformanceblog.com/2010/12/21/mysql-5-5-8-and-percona-server-\
on-fast-flash-card-virident-tachion/
innodb_read_ahead = none
# ----------------------------------------------------------------------
# Location from which mysqld might load additional configuration files.
# Note that these configuration files might override values set in this
# configuration file!
# !includedir /etc/mysql/conf.d/
We’ll set up the Linux ‘logrotate’ service to solve this problem automatically, by:
renaming (rotating) & compressing log files when they reach a certain size
keeping a limited number of compressed backups of the log files
In the end these options make sure that the size taken by the log files stays constant.
In this section we will logrotate the MariaDB log files, so that they do not fill the
entire disk. MariaDB has a few different log files:
The error log: it contains information about errors that occur while the MariaDB
server is running
The general query log: it contains general information about the queries. In our
configuration we haven’t enabled this log
The slow query log: it consists of slow queries. This is very useful to find SQL
queries which need to be optimized.
Let’s check which log files we have specified in our my.cnf MariaDB configuration:
$ sudo cat /etc/mysql/my.cnf | grep "\.log"
general-log-file = /var/log/mysql/mysql.log
log-error = /var/log/mysql/mysqld-safe-error.log
log-error = /var/log/mysql/mysqld-error.log
slow_query_log_file = /var/log/mysql/slowqueries.log
rotate daily
max. 7 backups before log files are deleted
the size taken by a log file is max. 100MB
when rotating the log files, compress them
compress daily
run a postrotate command in which MariaDB is told to reopen its log files
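A logrotate configuration implementing the points above could look like the following sketch. The file path /etc/logrotate.d/mariadb and the mysqladmin flush-logs postrotate command are assumptions; adapt them to your installation:

```
# /etc/logrotate.d/mariadb (sketch)
/var/log/mysql/*.log {
    daily
    rotate 7
    size 100M
    missingok
    compress
    delaycompress
    postrotate
        # ask MariaDB to reopen its log files after rotation
        /usr/bin/mysqladmin flush-logs 2>/dev/null || true
    endscript
}
```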
This means that the HugePages configuration was not set up properly. Please reread the
Enabling HugePages section to see if you missed some details.
InnoDB: Error: log file ./ib_logfile0 is of different size <x> <y> bytes
This can happen when you have changed the size of the InnoDB logfile in my.cnf. Stop
MariaDB first, then delete the previous InnoDB log files:
$ cd /var/lib/mysql
$ sudo rm ib_logfile0
$ sudo rm ib_logfile1
Why nginx ?
nginx, pronounced ‘engine X’, has been gaining a lot of users in the Linux web server
market. Although Apache HTTP Server was still the market share leader in December 2014,
nginx is the new cool kid on the block.
nginx has an event-driven design which can make better use of the available hardware
than Apache’s process-driven design. nginx can thus serve more concurrent clients with
higher throughput than Apache on the same hardware.
nginx gets very regular releases that fix bugs and add new features (like the SPDY and
HTTP/2 support for enhancing the performance of https websites).
For new servers we thus recommend you install nginx instead of Apache.
Installing nginx
As nginx doesn’t support loading modules dynamically, we will compile nginx and its
extra modules from source.
There may be some precompiled packages available, but most of the time they don’t
include the extra modules we want (eg. the nginx Google PageSpeed plugin) or ship old
versions of nginx and its plugins.
Because we want to use the newest versions we will compile everything from source.
This is actually pretty simple.
Download nginx
First we need to download the latest nginx version sources. These can be downloaded
from: http://nginx.org/en/download.html
Because we will add some plugins to Nginx, we will first download those plugin sources
before we start the compilation.
Download ngx_pagespeed
$ cd ~
$ NPS_VERSION=1.10.33.2
$ wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}\
-beta.zip -O release-${NPS_VERSION}-beta.zip
$ unzip release-${NPS_VERSION}-beta.zip
$ cd ngx_pagespeed-release-${NPS_VERSION}-beta/
$ wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
$ tar -xzvf ${NPS_VERSION}.tar.gz # extracts to psol/
The latest PCRE version is v8.37. PCRE2 has also been released recently, but is not yet
compatible with nginx due to a changed API.
make -j4 means we use 4 cores for the compilation process (as our VPS has 4 cores
available). Note that a parallel build may not always work.
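If you are unsure how many cores your VPS has, you can derive the job count instead of hard-coding 4. This assumes the coreutils nproc command is available:

```shell
# Build with one compile job per available CPU core
JOBS=$(nproc)
echo "make -j${JOBS}"
```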
While nginx can cache responses from a backend, there is no built-in support for
programmatically purging content from the cache. Why would you want this support?
A good use case is eg. a Wordpress blog, which is written in PHP and stores its posts
in a database. Caching the content with the nginx FastCGI cache makes sure that most
visitors get a cached response back, with no PHP or database calls necessary.
Now if a new article is published on the Wordpress blog, the cache inside nginx of eg.
the homepage should be purged, to allow the new content to become visible.
Purge support is available via the ngx_cache_purge module. We’ll cover the Wordpress
nginx Helper plugin (https://wordpress.org/plugins/nginx-helper/) in the Wordpress
chapter which works in conjunction with the nginx cache purge module.
$ cd ~
$ wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz
$ tar -xzvf ngx_cache_purge-2.3.tar.gz
Compile nginx
$ cd ~
$ cd nginx-1.9.9
$ ./configure --sbin-path=/usr/local/sbin --conf-path=/usr/local/nginx/conf/ngin\
x.conf --with-ipv6 --add-module=../ngx_pagespeed-release-1.10.33.2-beta --with-h\
ttp_v2_module --with-http_ssl_module --with-http_gzip_static_module --with-http_\
stub_status_module --with-http_secure_link_module --with-http_flv_module --with-\
http_realip_module --with-pcre=../pcre-8.37 --with-pcre-jit --add-module=../n\
gx_cache_purge-2.3 --with-openssl=../openssl-1.0.2d --with-openssl-opt=enable-tl\
sext --add-module=../ngx_devel_kit-master --add-module=../set-misc-nginx-module-\
0.29 --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/ac\
We’ll go over the options in the configure command one by one to explain why they are
used.
This option adds the Google Pagespeed module for nginx. We’ll configure this plugin in a
later chapter to enhance the performance of your site.
–with-http_spdy_module (deprecated since 1.9.5; see below for the HTTP/2 module)
This option enables the SPDY module. SPDY is a Google specification which
manipulates HTTP traffic, with the goal to reduce web page load latency. It uses
compression and prioritizes and multiplexes the transfer of a web page so that only one
connection per client is required. (eg. Getting the html, images, stylesheets and javascript
files all happens with a connection that is kept open).
SPDY formed the basis of the next-generation standardized HTTP/2 protocol.
SPDY requires the use of TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.
–with-http_v2_module
The ngx_http_v2_module module (since nginx 1.9.5) provides support for HTTP/2 and
supersedes the ngx_http_spdy_module module. Note that accepting HTTP/2 connections
over TLS (https) requires the ‘Application-Layer Protocol Negotiation’ (ALPN) TLS
extension support, which is available only since OpenSSL version 1.0.2; which we have
installed in our OpenSSL chapter.
–with-http_ssl_module
Enables SSL / TLS support (eg. To run your website over https)
In 2014 Google launched the HTTPS Everywhere initiative, which tries to make secure
communication with your website the default.
Reasons:
This only works when your site is fully on https; not just the admin sections or shopping
cart.
To use https, you’ll also need to buy an SSL/TLS certificate from a certificate provider to
prove you’re the owner of your website domain. We’ll explain this whole process in later
chapters.
–with-http_gzip_static_module
Enables nginx to compress the html, css and javascript resources with gzip before
sending them to the browser. This reduces the amount of data sent to the browser and
increases speed. This module also allows nginx to send precompressed .gz files if they
are available on the server.
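In the nginx configuration this could later look like the following sketch. The directive names are standard nginx; the exact MIME types list is a choice you make yourself:

```
# http {} context, sketch
gzip            on;
gzip_types      text/css application/javascript application/json;
# serve a precompressed file.gz instead of compressing on the fly
gzip_static     on;
```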
–with-http_stub_status_module
This module enables you to view basic status information of your nginx server on a self
chosen URL.
Here you can see there are 291 active connections; for 6 of them nginx is reading the
request, for 179 nginx is writing the response and 106 requests are waiting to be handled.
When we create the nginx configuration we will show you how to define the URL where
this information will be visible.
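A typical stub_status location block looks like the following sketch; the URL /nginx_status and the allowed address are choices you make yourself:

```
location /nginx_status {
    stub_status on;
    access_log  off;
    allow       127.0.0.1;  # only allow local access
    deny        all;
}
```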
–with-http_secure_link_module
This module lets you protect URLs from unauthorized access by requiring a hash, and optionally an expiration time, in the link.
–with-http_flv_module
This module is useful when you’re hosting Flash Video files on your site (FLV files).
It provides pseudo-streaming server-side support for Flash Video (FLV) files.
It handles requests with the start argument in the request URI’s query string specially, by
sending back the contents of a file starting from the requested byte offset and with a
prepended FLV header.
–with-http_realip_module
This module is used to obtain the real IP address of the client. When nginx is behind a
proxy server or load balancer, the client IP address will sometimes be that of the proxy
server or load balancer, not that of your visitor. To make sure the correct IP is
available, you can enable this module. This is also useful when you are trying to
determine the visitor’s country from the IP address (eg. via the GeoIP module of nginx).
There is a good tutorial at CloudFront which explains how to use this (advanced) module.
–with-pcre=../pcre-8.37
Directory where the PCRE library sources are located. It adds support for regular
expressions in the nginx configuration files (as explained previously)
–with-pcre-jit
Enables the Just In Time compiler for regular expressions. Improves the performance if
you have a lot of regular expressions in your nginx configuration files.
–add-module=../ngx_cache_purge-2.3
This adds the cache purge module we downloaded earlier, so that content can be removed
from the cache programmatically.
–with-openssl-opt=enable-tlsext
This is an important option which enables the TLS extensions. One such extension is
SNI, or Server Name Indication.
This is used in the context of https sites. Normally only one https site (domain) can be
hosted on a given IP address and port number on the server.
SNI allows a server to present multiple https certificates on the same IP address and
port number to the browser. This makes it possible for your VPS, with eg. only one IPv4
address, to host multiple https websites/domains on the standard https port without
having them all use the same certificate (which would give certificate warnings in the
browsers, as an https certificate is generally valid for one domain).
The only browser with a little bit of market share that does not support SNI is IE6. Its
users will still be able to browse your site, but will receive certificate warnings. We
advise you to check your Google Analytics data to learn the market share of each browser
among your visitors.
–add-module=../ngx_devel_kit-master and –add-module=../set-misc-nginx-module-0.29
These modules add miscellaneous options for use in the nginx rewrite module (for HTTP
redirects, URL rewriting, …)
–error-log-path=/var/log/nginx/error.log and –http-log-path=/var/log/nginx/access.log
The PID path is the path to the nginx PID file. But what is a PID file?
pid files are written by some Unix programs to record their process ID when they start.
This has multiple purposes:
It’s a signal to other processes and users of the system that that particular program is
running, or at least started successfully.
You can write a script to check if a certain program is running or not.
It’s a cheap way for a program to see if a previously running instance of the program
did not exit successfully.
The nginx.pid file is thus used to see if nginx is running or not.
Startup/shutdown/restart scripts for nginx which we will use in later chapters depend
upon the presence of the nginx.pid file.
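A minimal sketch of such a liveness check, using a scratch pid file in the current directory (the real file lives at /var/run/nginx.pid). In this demo we write our own shell’s PID so the check succeeds deterministically:

```shell
# Pretend a daemon wrote its PID; kill -0 only tests that the
# process exists, it does not send a signal.
PID_FILE=./nginx.pid
echo $$ > "$PID_FILE"
if kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  echo "running"
else
  echo "stale pid file"
fi
rm -f "$PID_FILE"
```

This prints "running", since the PID in the file (our shell’s own) is alive.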
Lock files can be used to serialize access to a shared resource (eg. a file or a memory
location) when two concurrent processes ask for it at the same time. On most systems the
locks nginx takes are implemented using atomic operations. On other systems nginx falls
back to lock files, whose location is set with the lock-path option.
nginx Releases
Each nginx release comes with a changelog file describing the fixes and enhancements in
each version. You can find those on the download page of nginx
Here we list some recent changes that improve performance:
v1.7.8 fixes a 200ms delay when using SPDY
v1.7.6 has some security fixes
v1.7.4 has bugfixes in the SPDY module and improved SNI support
v1.9.5 introduces HTTP/2 support
This is a good way to limit the damage attackers can do should they manage to find a
security hole in nginx.
First we create a directory /home/nginx which we will use as home directory for the user
nginx.
Thirdly, we create the user ‘nginx’ and specify the home directory (/home/nginx) via the
-d option.
We also specify the login shell via -s. The login shell /usr/sbin/nologin makes
sure that nobody can remotely log in (via SSH) as the nginx user. We do this for added
security.
If you now take a look in your home directory you’ll see something like this:
$ cd /home
$ ls -l
drwxr-xr-x 2 nginx nginx 4096 Nov 6 07:47 nginx
A soft limit may be changed later by the process, up to the hard limit value. The hard
limit cannot be increased, except by processes running with superuser privileges (e.g. as
root).
$ sudo nano /etc/security/limits.conf
At the end, add:
nginx soft nofile 65536
nginx hard nofile 65536
PATH=/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/sbin/nginx
DAEMON_OPTS="-c /usr/local/nginx/conf/nginx.conf"
NAME=nginx
DESC=nginx
PID_FILE=/var/run/nginx.pid
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile $PID_FILE \
--exec $DAEMON -- $DAEMON_OPTS
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile $PID_FILE \
--exec $DAEMON
echo "$NAME."
;;
esac
exit 0
Now we need to make the script executable with the chmod +x command:
$ sudo chmod +x /etc/init.d/nginx
To register the script to start nginx during boot we need to run update-rc.d with the name
of the startup script.
$ sudo /usr/sbin/update-rc.d -f nginx defaults
To restart nginx:
$ sudo /etc/init.d/nginx restart
You’ll notice that there are a few *.conf.default files in this directory. They contain example
configurations for common use cases, such as a website on port 80 and http, PHP support,
https and more.
We advise you to take a look at these examples to get a feel for how to configure nginx.
The main configuration file is nginx.conf. The file contains general configuration and
includes other configuration files. This way you can eg. group configuration together per
site you want to host on your server.
Some settings only make sense inside a { … } block; eg. in a certain context. If you put
parameters in the wrong location, nginx will refuse to startup, telling you where the
problem is located in your configuration file(s).
Updating worker_processes
The worker-processes directive is responsible for letting nginx know how many processes
it should spawn. It is common practice to run 1 worker process per CPU core.
To view the number of available cores in your system, run the following command:
$ grep processor /proc/cpuinfo | wc -l
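For example, you could cross-check the count with the `nproc` utility, a commonly available alternative (and note that recent nginx versions also accept `worker_processes auto;`, which lets nginx pick the value itself):

```shell
# Both commands print the number of CPU cores on Linux.
grep -c processor /proc/cpuinfo
nproc
```

If the commands print 4, you would set worker_processes 4; in nginx.conf.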
Updating worker_priority
This defines the scheduling priority for nginx worker processes. A negative number
means higher priority. (acceptable range: -20 to +19)
$ sudo nano /usr/local/nginx/conf/nginx.conf
worker_priority -10;
Updating timer_resolution
The 0666 permission setting means all users can read and write files here, but no
files may be executed; again to harden security.
$ sudo nano /usr/local/nginx/conf/nginx.conf
error_log /var/log/nginx/error.log;
To enable the Just In Time compiler to improve performance we also need to add it to the
nginx.conf:
$ sudo nano /usr/local/nginx/conf/nginx.conf
pcre_jit on;
PCRE JIT can speed up processing of regular expressions in the nginx configuration files
significantly.
The access log logs all requests that came in for the nginx server. We will use the default
log format provided by nginx. This log format is called ‘combined’ and we will reference
it in the log file location parameter.
http {
...
access_log /var/log/nginx/access.log combined buffer=32k;
...
}
Logging every request takes CPU and I/O cycles; that’s why we define a buffer
(buffer=32k). This causes nginx to buffer a series of log entries and write them to the file
together, instead of performing a separate write operation for each.
open_log_file_cache max=100 inactive=30s min_uses=2;
Defines a cache that stores the file descriptors of frequently used logs whose names
contain variables. The directive has the following parameters:
max: sets the maximum number of descriptors in the cache; if the cache becomes full,
the least recently used descriptors are closed
inactive: sets the time after which a cached descriptor is closed if there has been no
access during this time; by default, 10 seconds
min_uses: sets the minimum number of file uses during the time defined by the
inactive parameter for the descriptor to stay open in the cache; by default, 1
Enabling HTTP Status output
The following configuration enables nginx HTTP status output on the URL /nginx-info
# status information nginx ngx_http_stub_status_module
location /nginx-info {
stub_status;
}
Here you can see there are 291 active connections; for 6 of them nginx is reading the
request, for 179 nginx is writing the response and 106 requests are waiting to be handled.
HTTP Performance Optimizations
sendfile on;
The Unix sendfile system call transfers data from one file descriptor to another directly in
kernel space. This saves a lot of resources and is very performant. When you have a lot of
static content to serve, sendfile will speed up the serving significantly.
For dynamic content (e.g. Java, PHP, …) this setting is not used by
nginx, and you won’t see any performance difference.
tcp_nopush on;
tcp_nodelay on;
These two settings only have effect when ‘sendfile on’ is also specified.
tcp_nopush ensures that the TCP packets are full before being sent to the client. This
greatly reduces network overhead and speeds up the way files are sent.
When the last packet is sent (which is probably not full), the tcp_nodelay forces the
socket to send the data, saving up to 0.2 seconds per file.
Security Optimizations
server_tokens off;
server_name_in_redirect off;
Disabling server_tokens makes sure that no nginx version information is added to the
response headers of an http request.
keepalive_timeout 15s;
send_timeout 10s;
The keepalive_timeout assigns the timeout for keep-alive connections with the client.
Connections that are kept alive for the duration of the timeout can be reused by the
client for subsequent requests.
Internet Explorer disregards the timeout setting and auto-closes connections by itself
after 60 seconds.
keepalive_disable msie6;
Disables keep-alive connections with misbehaving browsers. The value msie6 disables
keep-alive connections with Internet Explorer 6
reset_timedout_connection on;
Allows the server to close the connection after a client stops responding. This frees up
socket-associated memory.
Gzip Compression
Gzip can help reduce the amount of network transfer by reducing the size of HTML, CSS
and JavaScript.
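A minimal gzip configuration for the http block could look like the following sketch; the directives are standard nginx ones, but the values are illustrative and should be tuned for your content:

```nginx
gzip on;
gzip_comp_level 5;     # trade CPU for compression ratio (1-9)
gzip_min_length 256;   # don't bother compressing tiny responses
gzip_types text/css application/javascript application/json text/plain;
```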
Buffers
Buffer sizes should be big enough that nginx doesn’t need to write to temporary files,
causing disk I/O.
client_body_buffer_size 10k;
The buffer size for HTTP POST actions (e.g. form submissions).
client_header_buffer_size 1k;
The buffer size for reading the client request header.
Note that the maximum allowed size of the client request body (as specified in the “Content-
Length” request header field) is controlled by the separate client_max_body_size directive; if
the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error
is returned to the client.
large_client_header_buffers 2 2k;
The maximum number and size of buffers for large client headers
Miscellaneous
proxy_temp_path /tmp/nginx_proxy/;
Defines a directory for storing temporary files with data received from proxied servers.
Caches for Frequently Accessed Files
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
Mime types are used by browsers to tell the server which kinds of content they understand/accept.
The server also uses them, by specifying the mime type in its responses.
nginx uses the file extension to send back the corresponding mime type. An
example: the extension .html corresponds to the ‘text/html’ mime type.
You can add more if needed. Just define a new types {…} block inside the http block:
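As an illustrative sketch (newer mime.types files may already contain this particular entry):

```nginx
types {
    image/webp webp;   # extension .webp -> mime type image/webp
}
```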
Create a system user for each website domain which can login via SFTP (secure file
transfer protocol) - this way you can upload your website to the server
Create a home directory for each website domain where you will put the html files,
… of your website
Create an nginx config file for each website domain (so nginx actually listens for that
domain).
web: where we will place the html, css, … files of our website
log: where specific logs for this website will be logged
When SFTPing to the server with the user mywebsite you’ll not be able to step outside of
the root directory /home/mywebsite.
$ sudo useradd -s /usr/sbin/nologin -d /home/mywebsite/ -g nginx mywebsite
The above command will add a user mywebsite, that is part of the nginx group. The home
directory is /home/mywebsite and we don’t provide any login shell via SSH for this user.
Remark: /usr/sbin/nologin is correct on Ubuntu; on other Linux distributions this can also
be /sbin/nologin!
Now we will generate a good password for the user mywebsite with openssl:
$ openssl rand -base64 8
yDIv39Eycn8=
Change the ownership to root for this directory (this is needed for the remote SFTP to
work correctly)
$ sudo chown root:root /home/mywebsite
Make sure that /home/mywebsite is only writable by the root user. (read only for the rest
of the world):
$ ls -l /home
We will create two subdirectories web and log which we will later use to put our website
files and log files respectively.
Setting g+s on a directory with chmod makes all new files created in that directory get the
directory’s group by default (in this case our group is nginx). This
makes sure that nginx is able to read the html files etc. for our website.
Configuring Remote SFTP access For mywebsite.com
In this section we will configure remote SFTP access by configuring the ssh daemon.
$ sudo nano /etc/ssh/sshd_config
We will change the Subsystem sftp from sftp-server to internal-sftp. We do this because
we want to enforce that the SFTP user cannot get outside of its home directory, for
security reasons. Only internal-sftp supports this.
Thus change
Subsystem sftp /usr/lib/openssh/sftp-server
to
Subsystem sftp internal-sftp
Match User mywebsite indicates that the lines that follow only apply for the user
mywebsite.
The ChrootDirectory is the root directory (/) the user will see after the user is
authenticated.
“%h” is a placeholder that gets replaced at run-time with the home folder path of that
user (e.g. /home/mywebsite in our case).
ForceCommand internal-sftp - This forces the execution of the internal-sftp and ignores
any commands that are mentioned in the ~/.ssh/rc file.
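Putting the directives just described together, the Match block could look like this; the first three lines follow the text, while the two forwarding options are extra hardening assumptions, not from the book:

```
Match User mywebsite
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```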
You also need to add /usr/sbin/nologin to /etc/shells, or the SFTP user will not be able to
log in:
$ sudo nano /etc/shells
add
/usr/sbin/nologin
at the end
You can now try logging in via SFTP (e.g. by using FileZilla on Windows). If the process
fails, it is best to check the authentication log of the ssh daemon:
$ cat /var/log/auth.log
In this log file you may also find login attempts of (would-be) hackers trying to break
into your server.
First we’ll create a separate nginx configuration file for our domain. We’ll then include it
in the main nginx.conf. This way, if we add more sites, we have everything cleanly
separated.
$ sudo mkdir -p /usr/local/nginx/conf/conf.d
Now we’ll add an include line in the nginx.conf to load all .conf files in the conf.d
directory.
$ sudo nano /usr/local/nginx/conf/nginx.conf
http {
...
include /usr/local/nginx/conf/conf.d/*.conf;
}
server {
server_name www.mywebsite.com;
listen <your IPv4 address>:80; # Listen on IPv4 address
listen [<your IPv6 address>]:80; # Listen on IPv6 address
root /home/mywebsite/web;
access_log /home/mywebsite/log/access.log combined buffer=32k;
error_log /home/mywebsite/log/error.log;
}
A server block defines a ‘virtual host’ in nginx. It maps a server_name to an IPv4 and/or
an IPv6 address. You can see we’re listening on the default HTTP port 80. The root
directive specifies in which directory the html files are located.
access_log and error_log respectively log the requests that arrive for
www.mywebsite.com and the errors.
You can create multiple server { … } blocks that map different server names to the same
IPv4 address. This allows you to host multiple websites using only one IPv4 or IPv6
address. These are also sometimes called ‘virtual hosts’. (Most probably you’ll only have
one IPv4 address available for your VPS, because they are becoming scarce.)
Now we’ll add the mywebsite.com variant (without the www) in a second server { … }
block.
The last line tells nginx to issue an HTTP 301 Redirect response to the browser. This
makes sure that everyone who types in mywebsite.com will be redirected to
www.mywebsite.com. We do this because we want our website to be available at a single
address for Search Engine Optimization reasons. Google, Bing and other search engines
don’t like content that is not unique, e.g. content that is available at more than one address.
The $request_uri is a variable, and contains whatever the user typed in the browser after
mywebsite.com
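The second server block could look like the following sketch, built from the directives already shown for the www variant plus the 301 redirect described above:

```nginx
server {
    server_name mywebsite.com;        # the variant without www
    listen <your IPv4 address>:80;
    listen [<your IPv6 address>]:80;

    # Redirect everything to the canonical www address, keeping the path.
    return 301 http://www.mywebsite.com$request_uri;
}
```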
Configuring a Default Server
When you issue a request to www.mywebsite.com with your browser it’ll normally
include a Host: www.mywebsite.com parameter in the HTTP request.
nginx uses this “Host” parameter to determine which virtual server the request should be
routed to. (as multiple hosts can be available on one IP address)
If the value doesn’t match any server name, or if the Host parameter is missing, nginx
will route the request to the default server for that port (e.g. port 80 for http).
If you haven’t defined a default server, nginx will take the first server { … } block as the
default.
In the config below we will explicitly set which server should be default, with the
default_server parameter.
# Default server
server {
listen 80 default_server; # listen on the IPv4 address
listen [::]:80 default_server; #listen on the IPv6 address
return 444;
}
We return the nginx-specific response code 444, which returns no information to the client and closes the
connection.
Setting Up Log Rotation For nginx
Logs from nginx can quickly take up a lot of disk space when your site(s) have a lot of
visitors.
We’ll setup the Linux ‘logrotate’ application to solve this problem automatically.
renaming (rotating) & compressing log files when they reach a certain size
keeping a limited number of compressed backups of log files
In the end these options make sure that the disk space taken by the log files stays roughly constant.
We will autorotate the logs of nginx as follows:
rotate daily
keep max 10 backups, before log files are deleted
the size taken by the log files is max 100MB
compress the log files when rotating them
delay compression by one rotation
restart nginx in a postrotate script
We’ll create the following configuration file for the above settings:
$ sudo touch /etc/logrotate.d/nginx
$ sudo nano /etc/logrotate.d/nginx
We use the /etc/logrotate.d directory because logrotate will look there by default (default
settings are in /etc/logrotate.conf)
/var/log/nginx/*.log /usr/local/nginx/logs/*.log /home/*/log/*.log {
daily
missingok
rotate 10
size=100M
compress
delaycompress
notifempty
sharedscripts
postrotate
sudo /etc/init.d/nginx restart
endscript
}
The postrotate section specifies that nginx will be restarted after the log files have been rotated.
By using sharedscripts we ensure that the postrotate script doesn’t run for every
rotated log file, but only once.
You can view the status of the log rotation via the command:
$ cat /var/lib/logrotate/status
You’ll see that log files will have been compressed and renamed as <filename of log
file>.1 (1 = first backup)
By default nginx will log all requests for html files, images, … to a log file called the
access log. The information recorded contains the IP address, the date of visit, the page
visited, the HTTP response code and more.
If you don’t need this information, e.g. because you have added Google Analytics code to your
site, you can safely disable this logging and lessen the load on your I/O subsystem a bit.
Here is how: inside a http or server block you can specify the access_log and error_log
variables:
http {
...
log_not_found off;
access_log off;
}
server {
....
error_log /home/<site>/log/error.log;
}
In the above example we completely turn off the access log. We still log errors to the
error_log, but we disable logging of HTTP 404 Not Found errors. We recommend using Google
Webmaster Tools to keep track of bad links on your site.
Updating nginx
nginx regularly comes out with new versions. As we have compiled nginx from source,
we will need to recompile the new version.
We start by taking a backup of our nginx config files which are located in
/usr/local/nginx/conf/
$ sudo cp -R /usr/local/nginx/conf/ ~/nginx-conf-backup
As previously explained, download the sources of the new nginx version. Also check
whether there are new versions of the modules we’re using (e.g. the Google PageSpeed nginx
plugin, OpenSSL).
If there are new versions of the modules, download these too and update the ./configure
command used for building nginx with the new paths.
Before we can do ‘sudo make install’ we need to stop the currently running server:
$ sudo /etc/init.d/nginx stop
$ sudo make install
In this chapter we’ll install and configure PHP. We’ll also explain how to configure nginx
to pass requests to the PHP interpreter in the most scalable & performant way.
The single biggest performance improvement for PHP based websites comes with the use
of an OpCode cache.
When an OpCode cache is added, the slow parsing and compiling steps are skipped,
leaving just the execution of the code.
Zend OpCache
Since PHP 5.5 a default OpCode cache has been included: Zend OpCache.
We recommend using Zend OpCache instead of the older APC (Alternative PHP
Cache), because APC was not always 100% stable with the latest PHP versions and
Zend OpCache appears to be more performant.
Downloading PHP
We will now download and unzip the latest version of the PHP sources. You can find the
download links at http://php.net/downloads.php
$ cd ~
$ sudo wget http://php.net/distributions/php-5.6.14.tar.gz
$ sudo tar xzvf php-5.6.14.tar.gz
Compilation flags
--enable-opcache
--with-mcrypt
This is an interface to the mcrypt library, which supports a wide variety of block
algorithms such as DES, TripleDES, Blowfish (default), 3-WAY, SAFER-SK64, SAFER-
SK128, TWOFISH, TEA, RC2 and GOST in CBC, OFB, CFB and ECB cipher modes.
This is used by e.g. the Magento shopping cart and other PHP frameworks.
--with-zlib
The PHP Zlib module allows you to transparently read and write gzip-compressed files. It
can thus serve content to end users faster by compressing the data stream.
Some applications, like Pligg, require zlib compression to be enabled in the PHP
engine.
--with-gettext
--enable-exif
With the exif extension you are able to work with image metadata. For example, you
may use exif functions to read metadata of pictures taken with digital cameras.
--with-bz2
The bzip2 functions are used to transparently read and write bzip2 (.bz2) compressed
files.
--enable-soap
--enable-sysvsem, --enable-sysvshm, --enable-sysvmsg
These modules provide wrappers for the System V IPC family of functions. This includes
semaphores, shared memory and inter-process messaging (IPC).
--enable-shmop
Shmop is an easy to use set of functions that allows PHP to read, write, create and delete
UNIX shared memory segments
--with-pear
PEAR is short for “PHP Extension and Application Repository” and is pronounced just
like the fruit. The purpose of PEAR is to provide:
mbstring provides multibyte specific string functions that help programmers deal with
multibyte encodings in PHP. In addition to that, mbstring handles character encoding
conversion between the possible encoding pairs. mbstring is designed to handle Unicode-
based encodings such as UTF-8 and UCS-2 and many single-byte encodings for
convenience.
--with-openssl
--enable-mysqlnd
Enables the use of the MySQL native driver for PHP, which is highly optimized for and
tightly integrated into PHP.
--with-mysqli
Enables the use of the MySQL native driver with mysqli, the improved MySQL interface
API which is used by a lot of PHP frameworks.
--with-mysql-sock=/var/run/mysqld/mysqld.sock
Sets the path of the MySQL unix socket (used by all PHP MySQL extensions)
--with-curl
Enables the PHP support for cURL (a tool to transfer data from or to a server)
--with-gd
PHP can not only output HTML to a browser. By enabling the GD extension PHP can
output image streams directly to a browser. (eg. JPG, PNG, WebP, …)
--enable-gd-native-ttf
--enable-bcmath
Enables the use of arbitrary precision mathematics in PHP via the Binary Calculator
extension. It supports numbers of any size and precision, represented as strings.
--enable-calendar
--enable-ftp
A PHP script can use this extension to access an FTP server, providing a wide range of
control to the executing script.
--enable-pdo
Enables the use of the MySQL native driver with the PDO API interface
--enable-inline-optimization
Inlining is a way to optimize a program by replacing function calls with the actual body
of the function being called at compile-time.
It reduces some of the overhead associated with function calls and returns.
Enabling this configuration option will result in a potentially faster PHP binary, at the
cost of a larger file size.
--with-imap, --with-imap-ssl, --with-kerberos
Adds support for the IMAP protocol (used by email servers) and related libraries
--with-fpm-user=nginx, --with-fpm-group=nginx
When enabling the FastCGI Process Manager (FPM), we set the user and group to our
nginx web server user.
With the following command you can verify the version of PHP:
$ php -v
PHP 5.6.14 (cli) (built: Dec 29 2014 18:05:44)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2014 Zend Technologies
To verify the PHP installation, let’s create a test php file which we will execute using the
php command line tool:
$ cd ~
$ nano phpinfo.php
<?php
phpinfo();
As our server is located in the New York timezone we selected this timezone. You can
find a list of possible timezones at http://www.php.net/manual/en/timezones.php. Choose
the continent/city which is nearest your server data center.
This sets the maximum time in seconds a script is allowed to run before it is terminated
by the parser. This helps prevent poorly written scripts from tying up the server. The
default setting is 30. When running PHP from the command line the default setting is 0.
Your web server can have other timeout configurations that may also interrupt PHP
execution.
Duration of time (in seconds) for which to cache realpath information for a given file or
directory. For systems with rarely changing files, consider increasing the value.
Sets the maximum file size that can be uploaded by a PHP script. E.g. if your WordPress
blog complains that you cannot upload a big image, this can be caused by this
setting.
This sets the maximum amount of memory in bytes that a script is allowed to allocate.
It helps prevent poorly written scripts from eating up all the available memory on a server.
Note that to have no memory limit, set this directive to -1.
Sets the maximum size of HTTP POST data allowed. This setting also affects file uploads: to
upload large files, this value must be larger than upload_max_filesize. If memory limiting is
enabled by your configure script, memory_limit also affects file uploading. Generally
speaking, memory_limit should be larger than post_max_size.
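Collected into php.ini form, the settings discussed above might look like this; the values are illustrative examples, not the book's recommendations:

```ini
max_execution_time = 30      ; seconds before a script is terminated
realpath_cache_ttl = 360     ; seconds to cache realpath information
upload_max_filesize = 10M    ; max size of an uploaded file
post_max_size = 12M          ; should be larger than upload_max_filesize
memory_limit = 128M          ; should be larger than post_max_size
```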
This is a security setting: you don’t want to expose to the world
that PHP is installed on the server.
This directive allows you to disable certain PHP functions for security reasons. It takes on
a comma-delimited list of function names.
Only internal functions can be disabled using this directive. User-defined functions are
unaffected.
Don’t add an X-PHP-Originating-Script header to mail sent via PHP; it would include the
UID (unique identifier) of the script followed by the filename.
Sets the max nesting depth of input variables in HTTP GET, POST (eg.
$_GET, $_POST.. in PHP)
Set the maximum amount of input HTTP variables
max_input_vars = 2000
The maximum number of HTTP input variables that may be accepted (this limit is
applied to $_GET, $_POST and $_COOKIE separately).
Using this directive mitigates the possibility of denial of service attacks which use hash
collisions. If there are more input variables than specified by this directive, further input
variables are truncated from the request.
;If enabled, a fast shutdown sequence is used for the accelerated code
;The fast shutdown sequence doesn't free each allocated block, but lets
;the Zend Engine Memory Manager do the work.
opcache.fast_shutdown=1
; Location of the opcache library.
zend_extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/opcache.so
Now reboot your server and check the boot log whether everything succeeded. (eg. No
PHP error messages).
When you have a lot of visitors, it is possible that the PHP interpreter is launched multiple
times concurrently. (eg. Multiple processes).
Configuring PHP-FPM
To configure PHP-FPM we’ll create the following php-fpm configuration file.
$ sudo nano /usr/local/etc/php-fpm.conf
First we’ll add the path to the PID file for PHP-FPM. We’ll use this later in our
startup/shutdown script of PHP-FPM.
; process id
pid = /var/run/php-fpm.pid
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
The above three settings configure PHP-FPM to restart automatically if 10 child processes exit
with an error within 1 minute. The configuration also sets a 10-second time limit
for child processes to wait for a reaction to signals from the master
PHP-FPM process.
As we assume our server is completely private, we will configure only one PHP-FPM
Pool, called “www”
[www]
user = nginx
group = nginx
The [www] defines a new pool with the name “www”. We’ll launch PHP with our nginx
user which makes sure that our PHP interpreter will only be able to read/write
files/directories that are owned by nginx. (in our case that would be our websites and
nothing else).
listen = 127.0.0.1:9000
The listen configuration is the IP address and port where PHP-FPM will listen for
incoming requests (in our case nginx forwarding requests for PHP files).
listen.allowed_clients = 127.0.0.1
The allowed_clients setting will limit from where the clients can access PHP-FPM. We
specify 127.0.0.1 or the local host; this makes sure that no one is able to access the PHP-
FPM from the outside (evil) world. Only applications also running on the same server can
communicate with PHP-FPM. In our case nginx will be able to communicate with the
PHP-FPM server.
pm = ondemand
You can choose how the PHP-FPM process manager will control the number of child
processes. Possible values are static, ondemand and dynamic.
pm.process_idle_timeout sets the number of seconds after which an idle process is killed
when the ondemand process manager is used.
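A minimal ondemand pool sketch with the idle timeout just described (the values are illustrative, not prescribed by the book):

```ini
pm = ondemand
pm.max_children = 10            ; upper bound on concurrent children
pm.process_idle_timeout = 10s   ; kill idle children after 10 seconds
```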
You could also go for the following dynamic configuration which preforks 15 processes
(pm.start_servers):
pm = dynamic
pm.max_children = 50
; The number of child processes created on startup. Default Value:
; min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 15
; The desired minimum number of idle server processes.
pm.min_spare_servers = 2
; The desired maximum number of idle server processes.
pm.max_spare_servers = 35
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries.
pm.max_requests = 500
This defines the path where we can view the PHP-FPM status page. The status page is very
useful to see if we need to tweak the FPM settings further.
To view this information on your site via your browser you’ll need to add a location to the
nginx configuration too. We’ll add this later in this chapter when we configure nginx to
pass PHP requests to PHP-FPM.
rlimit_files = 65536
Sets the open file descriptor limit for the FPM processes.
rlimit_core = 0
This setting disables the creation of core dumps should PHP-FPM crash. This way
we save on disk space.
slowlog = /var/log/php-fpm-www-slow.log
The ping URI calls the monitoring page of FPM. This can be used to test from the outside
that FPM is alive and responding (e.g. with Pingdom).
ping.response = FPM Alive
Limits the extensions that PHP-FPM will process; e.g. only files with a php, php3, php4 or
php5 extension are allowed. This prevents malicious users from using other extensions to
execute php code.
php_admin_value[error_log] = /var/log/php-fpm-www-php.error.log
Sets the PHP error_log location. Because we use php_admin_value, the log location
cannot be overridden in a user’s php.ini file.
You can copy the php-fpm init.d script from the php sources:
$ cd ~
$ sudo cp php-5.6.16/sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
$ sudo chmod 755 /etc/init.d/php-fpm
Now edit the file and check whether the following paths are correct:
$ sudo nano /etc/init.d/php-fpm
prefix=/usr/local
exec_prefix=${prefix}
php_fpm_BIN=${exec_prefix}/sbin/php-fpm
php_fpm_CONF=${prefix}/etc/php-fpm.conf
php_fpm_PID=/var/run/php-fpm.pid
Stopping php-fpm:
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 40k;
fastcgi_buffers 290 40k;
fastcgi_busy_buffers_size 512k;
fastcgi_temp_file_write_size 512k;
fastcgi_intercept_errors on;
The fastcgi_buffers and fastcgi_buffer_size settings can be tweaked after your site is
receiving traffic. Then it is possible to compute the average and maximum response sizes
of the HTML files via the access logs. Here is the command to compute the average
response size for all requests in an access.log file.
$ echo $(( `awk '($9 ~ /200/)' access.log | awk '{print $10}' | awk '{s+=$1} END\
{print s}'` / `awk '($9 ~ /200/)' access.log | wc -l` ))
Set the fastcgi_buffer_size accordingly (e.g. a little higher than the average response
size). This makes sure that most responses fit into one buffer, possibly optimizing
performance.
fastcgi_buffer_size 40k;
Then divide the maximum response size (in bytes) by 1,000 (to get kilobytes) and then by
40 (the buffer size) to determine the number of buffers needed:
fastcgi_buffers 290 40k;
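The average-size arithmetic can be sketched with a single awk pass over a (here synthetic) access.log in combined format, where field $9 is the status code and $10 the response size in bytes:

```shell
# Build a tiny combined-format access.log with three requests.
cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2016:00:00:00 +0000] "GET /a HTTP/1.1" 200 30000 "-" "-"
1.2.3.4 - - [01/Jan/2016:00:00:01 +0000] "GET /b HTTP/1.1" 200 40000 "-" "-"
1.2.3.4 - - [01/Jan/2016:00:00:02 +0000] "GET /c HTTP/1.1" 404 500 "-" "-"
EOF

# Average size of the 200 responses: (30000 + 40000) / 2 = 35000 bytes.
awk '$9 == 200 { s += $10; n++ } END { print int(s / n) }' access.log
```

An average of 35,000 bytes would suggest a fastcgi_buffer_size of around 40k, matching the value used above.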
To enable PHP for a nginx server { … } configuration you should include the php.conf in
the server block:
server {
...
include /usr/local/nginx/conf/php.conf;
...
}
We’ll now add the necessary nginx configuration for allowing access to it:
location ~ ^/(phpstatus|phpping)$ {
access_log off;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
include fastcgi_params; # fastcgi_params is in the conf directory of nginx
}
Now opcache.php is available in the directory where your nginx is looking for its files.
Going to the opcache.php file via your browser should display the following information:
OpCache
/var/php-fpm*.log {
daily
missingok
rotate 10
size=100M
compress
delaycompress
notifempty
sharedscripts
postrotate
sudo /etc/init.d/php-fpm restart
endscript
}
Memcached stores small chunks of arbitrary data that come from the results of database
calls, API calls or page rendering. It thus reduces the calls made to your database, which will
increase the speed of regularly accessed dynamic webpages. It’ll also improve the
scalability of your infrastructure.
Multiple client APIs exist, written in different languages like PHP and Java. In this guide
we’ll focus on the PHP integration, which will make it possible to let Wordpress, phpBB
and other PHP frameworks use the memcached server.
Comparison of caches
At this moment we have already configured an OpCode cache in PHP. Why would we
need another kind of caching?
The OpCode cache in PHP stores the compiled PHP code in memory. Running PHP code
will speed up significantly due to this.
Memcache can be used as a temporary data store for your application to reduce calls to
your database. Applications which support Memcache include PHPBB3 forum software,
Wordpress with the W3 Total Cache plugin and more.
pecl/memcache
pecl/memcached (newer)
We will install both, because some PHP webapp frameworks may still use the older
API.
memcached Server
You can find the latest release of the memcached server at
http://memcached.org/downloads
Now we’ll make sure that Memcached server is able to find the libevent library:
$ sudo nano /etc/ld.so.conf.d/libevent-i386.conf
Add
/usr/local/lib/
/usr/local/lib is the directory where libevent.so is located. Running sudo ldconfig
afterwards makes the configuration change active.
Press Ctrl-C to stop the memcached server, as we will add a startup script that’ll start
memcached at boot up of our server.
Installing libmemcached
libMemcached is an open source C/C++ client library for the memcached server. It has
been designed to be light on memory usage, thread safe, and to provide full access to
server-side methods.
It can be used by the pecl/memcached PHP extension as a way to communicate with the
Memcached server. Because it is written in C it is also very performant.
Installing igbinary
Igbinary is a PHP extension which provides binary serialization for PHP objects and data.
It’s a drop-in replacement for PHP’s built-in serializer.
The default PHP serializer uses a textual representation of data and objects. Igbinary
stores data in a compact binary format which reduces the memory footprint and performs
operations faster.
Why is this important? Because the pecl/memcached PHP extension will use the igbinary
serializer when saving and getting data from the memcached server. The memcached
server cache will then contain the more compact binary data and use less memory.
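To wire this together in php.ini, a few directives are involved. The sketch below assumes both extensions are on PHP’s extension path (the guide loads them with absolute paths later); memcached.serializer and session.serialize_handler are the directives provided by the respective extensions, and igbinary support must have been compiled into pecl/memcached:

```ini
; Load the serializer first, then the memcached extension
extension=igbinary.so
extension=memcached.so
; Let pecl/memcached serialize values with igbinary instead of PHP's default
memcached.serializer=igbinary
; Optionally store PHP session data in igbinary format as well
session.serialize_handler=igbinary
```

You can verify the settings took effect by checking the igbinary and memcached sections of `php -i`.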
$ cd ~
$ wget http://pecl.php.net/get/igbinary-1.2.1.tgz
$ tar xvf igbinary-1.2.1.tgz
$ cd igbinary-1.2.1
$ phpize
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
$ sudo ./configure CFLAGS="-O2 -g" --enable-igbinary
$ sudo make
$ sudo make install
Add:
; Load igbinary extension
extension=igbinary.so
You can view whether the igbinary module is now available in php via:
$ php -m
Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/memcache.so
Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/memcached.so
Now memcached is running as a daemon. On the command line we can now check if it is
listening on the default port 11211:
$ sudo netstat -tap | grep memcached
Now lets create a Memcached server startup/stop script that is automatically run on boot
up of our server.
In the startup script we’ll also put in some Memcached configuration settings.
#!/bin/sh
# Usage:
# /etc/init.d/memcached start
# /etc/init.d/memcached stop
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/bin/memcached
DAEMONNAME=memcached
DESC=memcached
# Max simultaneous connections (-c)
CON=1024
# nr of threads; it is typically not useful to set this higher than the number of CPU cores on the memcached server
THREADS=4
# Minimum space allocated per item for key, value and flags, in bytes (-n)
MINSP=72
# Growth factor for slab chunk sizes (-f)
CHUNKF=1.25
# Port to listen on
PORT1=11211
# Server IP
SERVERIP='127.0.0.1'
# Memory limit in megabytes (-m)
MEMSIZE=64
# Unprivileged user to run memcached as (-u)
USER=nobody
# Experimental options
OPTIONS='-o slab_reassign,slab_automove,lru_crawler,lru_maintainer,maxconns_fast\
,hash_algorithm=murmur3'
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
$DAEMON -d -m $MEMSIZE -l $SERVERIP -p $PORT1 -c $CON -t $THREADS -n $MI\
NSP -f $CHUNKF -u $USER $OPTIONS
;;
stop)
echo -n "Stopping $DESC: "
killall $DAEMON 2>/dev/null
;;
*)
N=/etc/init.d/$DESC
echo "Usage: $N {start|stop}" >&2
exit 1
;;
esac
exit 0
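Note that the script above hard-codes THREADS=4. Since the guideline is not to exceed the number of CPU cores, you could instead derive the value at startup; a small sketch using nproc:

```shell
# Derive the memcached thread count from the number of CPU cores,
# following the guideline in the script's comments (instead of hard-coding 4)
THREADS=$(nproc)
echo "starting memcached with -t $THREADS threads"
```

If you adopt this, replace the `THREADS=4` line in the startup script with the `$(nproc)` assignment.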
Save the file and then make the script executable via:
$ sudo chmod +x /etc/init.d/memcached
After rebooting memcached should be running. You can check this via the following
command:
$ ps -ef | grep memcached
nobody 2430 1 0 17:17 ? 00:00:00 /usr/local/bin/memcached -d -m 6\
4 -l 127.0.0.1 -p 11211 -c 1024 -t 4 -n 72 -f 1.25 -u nobody -o slab_reassign,s\
lab_automove,lru_crawler,lru_maintainer,maxconns_fast,hash_algorithm=murmur3
You should now be able to browse to the index.php file via your browser to view the stats.
It could be that, due to the upgrade, PHP will complain that it is unable to initialize a module:
PHP Warning: PHP Startup: memcache: Unable to initialize module
Module compiled with module API=20121212
PHP compiled with module API=20131226
You may see a similar warning for other modules. This means you’ll have to recompile
the PHP modules which you have added in php.ini under extension=.
For those modules you’ll have to run phpize, ./configure, make and sudo make install
(as explained in previous chapters).
Sometimes when you recompile the modules after installing a new PHP version, the
location where the compiled PHP modules are placed changes: (eg.
/usr/local/lib/php/extensions/no-debug-non-zts-20131226).
If that’s the case you’ll need to update the php.ini file (/usr/local/lib/php.ini) and change
the extension= locations to match the correct path.
PHP frameworks can use the power of Imagick via the pecl imagick PHP
extension.
Install ImageMagick
$ cd ~
$ wget http://www.imagemagick.org/download/ImageMagick.tar.gz
Add
extension=/usr/local/lib/php/extensions/no-debug-zts-20131226/imagick.so
Installing phpMyAdmin
We’ll download the latest version of phpMyAdmin and add it to our Nginx website root:
$ cd ~
$ wget https://files.phpmyadmin.net/phpMyAdmin/4.5.3.1/phpMyAdmin-4.5.3.1-all-la\
nguages.zip
$ unzip phpMyAdmin-4.5.3.1-all-languages.zip
Now we will use the browser based setup pages to set up phpMyAdmin. We’ll configure
the host, port, username and password with which our database can be accessed.
To set up phpMyAdmin via the browser, you must manually create a folder “config” in
the phpMyAdmin directory. This is a security measure. On a Linux system you can use
the following commands:
$ sudo mkdir config # create directory for saving the configuration
$ sudo chmod 774 config # make it writable for the setup script
After restarting nginx (sudo /etc/init.d/nginx restart), you can go to the phpMyAdmin setup
screen with your browser at http://<mywebsite>/dbMyAdmin/setup/index.php and create
a new server.
In our case, our database server runs on the same host as everything else (localhost) and
on the MySQL default port (3306). We recommend setting Connection Type to socket (it is a
little bit faster than tcp) and filling in the Server socket (/var/run/mysqld/mysqld.sock).
In the Authentication tab, you can enter the user and password phpMyAdmin should use
to connect to the database. We previously showed you how you can create users to limit
access to databases. When you use the root MySQL user you’ll have access to everything
which could be a security risk.
Now press Apply; we’re back in the main config screen, where we can download the
generated configuration file (config.inc.php).
In the https / SSL support chapter we will enable secure access to our phpMyAdmin
installation.
In the next section we will install Jetty, a Java based web application server which
scales very well and is very performant.
The latest major release of Java is Java 8. Each release sees new features and
optimized performance (eg. better garbage collection, …). We’ll describe how to
install or update your Java VM to the latest version below.
Oracle made it a little bit difficult to download the JDK from the SSH commandline
via wget. (eg. they want you to accept license agreements etc.).
This adds all the Java executables to the system path and sets the JAVA_HOME
variable to /usr/local/java/jdk1.8.0_66.
Now we need to tell the Linux OS that the Oracle Java version is available for use.
We execute 3 commands, one for the java executable, one for the javac executable
(java compiler) and one for javaws (java webstart):
$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk\
1.8.0_66/bin/java" 1
$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.8.0_66/bin/javac" 1
$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jdk1.8.0_66/bin/javaws" 1
The following commands tell the OS to use our Java8 version as the default Java:
$ sudo update-alternatives --set java /usr/local/java/jdk1.8.0_66/bin/java
$ sudo update-alternatives --set javac /usr/local/java/jdk1.8.0_66/bin/javac
$ sudo update-alternatives --set javaws /usr/local/java/jdk1.8.0_66/bin/javaws
In the last step we will reload the system-wide PATH from /etc/profile by typing the
following command:
$ . /etc/profile
Now test if you have successfully installed/updated your Java version with:
$ java -version
We recommend using the latest stable Jetty version (at this moment 9.3). You can
see an overview of all Jetty versions at
http://www.eclipse.org/jetty/documentation/current/what-jetty-version.html
It is compatible with Java 1.8 and above and uses Java 1.7/1.8 features. Jetty will
have better performance when run on recent Java versions.
It supports the latest HTTP/2 protocol which speeds up websites
It supports the latest Servlet programming API (useful for Java developers)
The following commands will make the jetty user the owner of the jetty installation
directory. We will also create a directory /var/log/jetty where Jetty can put its logs (which
is also owned by the jetty user):
$ sudo chown -R jetty:jetty /usr/local/jetty-9.3.6/
$ sudo mkdir -p /var/log/jetty
$ sudo chown -R jetty:jetty /var/log/jetty
Jetty comes with such a script which we will use. Only a few modifications are needed
which we describe below.
Let’s copy it to the /etc/init.d directory where all our other startup scripts are located.
$ sudo cp /usr/local/jetty-9.3.6/bin/jetty.sh /etc/init.d/jetty
JETTY_ARGS=jetty.port=8080
JETTY_HOME=/usr/local/jetty-9.3.6
JETTY_LOGS=/var/log/jetty
JETTY_PID=/var/run/jetty.pid
We’re setting the path where Jetty has been installed, the log files location and the process
id file respectively.
You’ll then receive similar output like below from the start script:
START_INI = /usr/local/jetty-conf/start.ini
JETTY_HOME = /usr/local/jetty-9.3.6
JETTY_BASE = /usr/local/jetty-conf
JETTY_CONF = /usr/local/jetty-9.3.6/etc/jetty.conf
JETTY_PID = /var/run/jetty.pid
JETTY_START = /usr/local/jetty-9.3.6/start.jar
JETTY_ARGS = jetty.port=8080 jetty.state=/usr/local/jetty-conf/jetty.state \
jetty-logging.xml jetty-started.xml
JAVA_OPTIONS = -Djetty.logs=/var/log/jetty -Djetty.home=/usr/local/jetty-9.3.\
6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+UseLargePages -X\
mx512m
JAVA = /usr/bin/java
RUN_CMD = /usr/bin/java
START_INI = /usr/local/jetty-conf/start.ini
JETTY_HOME = /usr/local/jetty-9.3.6
JETTY_BASE = /usr/local/jetty-conf
JETTY_CONF = /usr/local/jetty-9.3.6/etc/jetty.conf
JETTY_PID = /var/run/jetty.pid
JETTY_START = /usr/local/jetty-9.3.6/start.jar
JETTY_LOGS = /var/log/jetty
JETTY_STATE = /usr/local/jetty-conf/jetty.state
CLASSPATH =
JAVA = /usr/bin/java
JAVA_OPTIONS = -Djetty.logs=/var/log/jetty -Djetty.home=/usr/local/jetty-9.3.\
6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+UseLargePages -X\
mx512m
JETTY_ARGS = jetty.port=8080 jetty.state=/usr/local/jetty-conf/jetty.state \
jetty-logging.xml jetty-started.xml
RUN_CMD = /usr/bin/java -Djetty.logs=/var/log/jetty -Djetty.home=/usr/lo\
cal/jetty-9.3.6 -Djetty.base=/usr/local/jetty-conf -Djava.io.tmpdir=/tmp -XX:+Us\
eLargePages -Xmx512m -jar /usr/local/jetty-9.3.6/start.jar jetty.port=8080 jetty\
.state=/usr/local/jetty-conf/jetty.state jetty-logging.xml jetty-started.xml
This makes it very easy to separate the configuration of your site from the Jetty
installation. When a new version of Jetty is released you can install it without risking
overwriting your configuration files.
To create a default set of configuration files you can run the following command from our
jetty-conf directory:
$ sudo java -jar /usr/local/jetty-9.3.6/start.jar --add-to-start=http,deploy,js\
p,logging
Let’s make sure that the configuration files are all owned by our jetty user:
$ sudo chown --recursive jetty:jetty /usr/local/jetty-conf/
$ sudo chown --recursive jetty:jetty /usr/local/jetty-9.3.6/
We need to update our Jetty launch script to specify our Jetty Base directory:
$ sudo nano /etc/init.d/jetty
You should now be able to stop and start the Jetty server via sudo /etc/init.d/jetty
stop/start with the configuration inside the JETTY_BASE directory.
You can also view the configuration details of the Jetty server and Jetty base
configuration via:
$ cd /usr/local/jetty-conf
$ sudo java -jar /usr/local/jetty-9.3.6/start.jar --list-config
Jetty Environment:
-----------------
jetty.version = 9.3.6.v20151106
jetty.home = /usr/local/jetty-9.3.6
jetty.base = /usr/local/jetty-conf
This command will also list the Jetty server classpath, in case you come across
classpath or jar file issues.
You can of course modify the generated configuration. We’ll not cover this in detail but
will give an example we have used in production systems:
A start.ini file has been created, which we can modify to e.g. change the listening
port, the max. number of threads and more:
$ sudo nano /usr/local/jetty-conf/start.ini
--module=http
jetty.http.port=8080
jetty.http.idleTimeout=30000
# If this module is activated, then all jar files found in the lib/ext/ paths will be automatically added to the Jetty Server Classpath
--module=ext
--module=server
# minimum number of threads
jetty.threadPool.minThreads=10
# Dump the state of the Jetty server, components, and webapps after startup
jetty.server.dumpAfterStart=false
--module=deploy
All this is configured via the jetty-deploy.xml. We’ll add a jetty-deploy.xml file to our
Jetty base directory to configure these settings:
$ cd ~
$ sudo mkdir -p /usr/local/jetty-conf/etc
$ sudo cp jetty-9.3.6/etc/jetty-deploy.xml /usr/local/jetty-conf/etc
$ sudo nano /usr/local/jetty-conf/etc/jetty-deploy.xml
Here we have copied a default jetty-deploy.xml from the unzipped Jetty download and
added it to our jetty-conf directory.
Jetty can use the monitoredDirName to find the directory where your Java webapp is
located. When starting Jetty, you’ll see Jetty deploying this directory.
Another important parameter is the scanInterval. This setting defines the number of
seconds between scans of the provided monitoredDirName.
A value of 0 disables the continuous hot deployment scan; web applications will then be
deployed at startup only.
For production it is recommended to disable the scan for performance reasons and to restart
the server when you have made webapp changes. For development you can use a 1 second
interval to see your changes immediately.
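Inside jetty-deploy.xml the two settings discussed above look roughly like this (an abbreviated sketch; the surrounding XML of the real file is more elaborate):

```xml
<!-- Directory that Jetty scans for webapps to deploy -->
<Set name="monitoredDirName"><Property name="jetty.base" default="." />/webapps</Set>
<!-- 0 = no hot-deploy scanning: deploy at startup only (production) -->
<Set name="scanInterval">0</Set>
```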
We can do the same for every Java app we run on our server. Beginning with Java 5 you
can start any Java app with the parameter -XX:+UseLargePages.
In our case we will configure the Jetty Java server to use large pages.
You can view the groups where user jetty is member of via:
$ sudo groups jetty
This is because we want to put Nginx in front of our Jetty server. Nginx is connected to
the outside world, and we’ll let Nginx forward the incoming requests to Jetty (eg. this
could be all requests ending with .jsp, or another URL pattern which should be served by
a Java application server).
Here is how you can forward the requests to Jetty running on port 8080 inside an existing
nginx configuration server {…} block.
server {
...
# Pass .jsp requests to Jetty
location ~ \.jsp$ {
proxy_pass http://localhost:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
}
...
}
Via these proxy_set_header directives, the Jetty server:
receives the real IP address of the user visiting the site (X-Real-IP)
receives the URL that is in the users webbrowser bar (Host)
knows whether the website was accessed on https or http (X-Forwarded-Proto)
To launch Visual VM you’ll need to download and install the latest Java JDK
(http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-
2133151.html)
Now let’s add the port and host where the JMX information will be made available:
$ sudo nano start.ini
Now we will add a Jetty JMX host and port variable in the Jetty launch script:
$ sudo nano /etc/init.d/jetty
yourhostname should be the hostname you’ve chosen when you installed the Linux
distribution. In case you forgot you can find it via:
$ hostname
Please choose a jmxrmiport of your liking. In our example we chose 36806. We’ll need to
edit our firewall configuration so that this port is not blocked for incoming connections:
$ sudo nano /etc/csf/csf.conf
TCP_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"
TCP6_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"
We’re not yet done. Let’s create a jetty.conf file in the etc directory of our Jetty base; this
file is automatically read by the /etc/init.d/jetty startup script we’re using.
$ sudo nano /usr/local/jetty-conf/etc/jetty.conf
These xml files can be copied from the Jetty home directory:
$ sudo cp ../jetty-9.3.6/etc/jetty-jmx-logging.xml .
$ sudo cp ../jetty-9.3.6/etc/jetty-jmx-started.xml .
$ sudo cp ../jetty-9.3.6/etc/jetty-jmx.xml .
$ sudo cp ../jetty-9.3.6/etc/jetty-jmx-remote.xml .
Uncomment the following section and add the IP address of your server here:
<Call class="java.lang.System" name="setProperty">
<Arg>java.rmi.server.hostname</Arg>
<Arg><Your Server IP></Arg>
</Call>
That’s it, now you need to restart your Jetty so the settings take effect.
$ sudo /etc/init.d/jetty restart
Now let’s get back to the VisualVM we started on our PC/Mac. Right click on the Remote
option and choose Add Remote Host. Enter your server’s IP address here and click OK.
Now right click on the Remote host you’ve just added, and choose Add JMX Connection.
Add the JMX port after the colon in the Connection textfield and press OK. You should
now be able to connect to the remotely running Jetty server and monitor its status.
When memory sizes grow large, the Garbage Collector can start to take more time to
reclaim memory. While the Garbage Collector is busy, the rest of the application is paused
temporarily.
For a server it’s very important to minimize these pauses as much as possible, so they are
not visible to the end user. E.g. a GC pause of 1 second is unacceptable.
Garbage Collection is a very tunable subject in Java. We’ll give our starting
recommendation for a Jetty web server.
There are multiple Garbage Collectors available. For a web server we recommend the
new G1 Garbage Collector which is available since Java 7.
The Garbage-First (G1) collector is a server-style garbage collector, targeted for multi-
processor machines with large memories. It has small pause times and achieves a high
throughput.
We will update the Jetty startup script to enable the G1 collector. Find the
JAVA_OPTIONS parameter and add the following options:
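The option that actually switches the JVM to G1 is the standard flag below; further G1 tuning flags can be added later, but this is the essential addition to JAVA_OPTIONS:

```
-XX:+UseG1GC
```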
There is one other G1 collector setting which we kept at its default value, and thus
didn’t have to include:
-XX:MaxGCPauseMillis=200
Here is why:
String deduplication can be enabled for the G1 collector since Java 8 update 20. If the
JVM finds two strings with the same content, it makes sure there is only one
underlying character array instead of two, roughly halving the memory those
strings consume.
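The flag that turns it on (it only has effect when the G1 collector is enabled) can be appended to JAVA_OPTIONS as well:

```
-XX:+UseStringDeduplication
```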
In this chapter we will explain how to configure a connection from Jetty to MariaDB
using a JDBC driver. We will also use a connection pool for managing multiple JDBC
Connections and optimizing scalability and speed.
Two JDBC drivers exist for connecting to MariaDB: MariaDB’s own JDBC driver, and
Oracle’s MySQL JDBC driver, which works as well.
Currently our recommendation is to use the Oracle MySQL JDBC driver. Generally
speaking it still has more features than MariaDB’s JDBC driver and is very performant.
There is also a difference in license terms: Oracle MySQL JDBC driver is GPL licensed,
while MariaDB JDBC driver is LGPL licensed. Here is an interesting thread about the
implications of the license: http://stackoverflow.com/questions/620696/mysql-licensing-
and-gpl
Below we will install & show configuration examples for both the Oracle and MariaDB
JDBC driver.
At this moment the latest version is v5.1.38 and can be downloaded as follows:
$ cd /usr/local/jetty-conf/lib/ext
$ sudo wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.\
38.tar.gz
$ sudo tar xvf mysql-connector-java-5.1.38.tar.gz
$ cd mysql-connector-java-5.1.38/
$ sudo cp *.jar ..
$ cd ..
$ sudo rm -fr mysql-connector-java-5.1.38
$ sudo rm -fr mysql-connector-java-5.1.38.tar.gz
Multiple Connection Pool implementations exist. We have chosen HikariCP, because tests
show that it is currently the fastest & most stable choice for connecting to MySQL and/or
MariaDB.
Here is how we can download it and install it in the library extension directory of our
Jetty configuration:
$ cd /usr/local/jetty-conf/lib/ext
$ sudo wget https://repo1.maven.org/maven2/com/zaxxer/HikariCP/2.4.3/HikariCP-2.\
4.3.jar
Configuring Jetty to use a HikariCP connection pool with the MariaDB JDBC driver
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">
Alternatively we can also configure the Oracle MySQL Datasource and JDBC URL:
$ sudo nano /home/<yourwebsitehomedir>/web/root/WEB-INF/jetty-env.xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">
<Configure id="wimsbios" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="virtualHosts">
<Array type="java.lang.String">
<Item>www.<yoursite>.com</Item>
</Array>
</Set>
<New id="HikariConfig" class="com.zaxxer.hikari.HikariConfig">
<Set name="maximumPoolSize">100</Set>
<Set name="dataSourceClassName">com.mysql.jdbc.jdbc2.optional.MysqlDataSource</Set>
<Call name="addDataSourceProperty">
<Arg>url</Arg>
<Arg>jdbc:mysql://localhost:3306/<your_database_\
name></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>user</Arg>
<Arg><your_database_user></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>password</Arg>
<Arg><your_database_password></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>useServerPrepStmts</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>cachePrepStmts</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>prepStmtCacheSqlLimit</Arg>
<Arg>2048</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>prepStmtCacheSize</Arg>
<Arg>500</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>cacheServerConfiguration</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>useLocalTransactionState</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>rewriteBatchedStatements</Arg>
<Arg>true</Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg>maintainTimeStats</Arg>
<Arg>false</Arg>
</Call>
</New>
prepStmtCacheSize: 500 - sets the number of prepared statements that the MySQL
JDBC driver will cache per connection
prepStmtCacheSqlLimit: 2048 - This is the maximum length of a prepared SQL
statement that the driver will cache
cachePrepStmts: true - enable the prepared statement cache
useServerPrepStmts: true - enable server-side prepared statements
cacheServerConfiguration: true - caches the MySQL/MariaDB server configuration in the
JDBC driver
useLocalTransactionState: true - lets the driver keep track of the transaction state locally, avoiding unneeded round-trips to the server
rewriteBatchedStatements: true - rewrites batched statements into multi-value statements, speeding up bulk inserts
maintainTimeStats: false - disables the driver’s internal timing statistics, removing a little overhead
Add
--module=logging
Add:
jetty.logs=/var/log/jetty
org.eclipse.jetty.LEVEL=INFO
The postrotate section specifies that Jetty will be restarted after the rotation of the log files has taken place.
By using sharedscripts we ensure that the postrotate script doesn’t run for every
rotated log file but only once.
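A logrotate entry matching this description could look as follows, reusing the /var/log/jetty location configured earlier (frequency and rotation count are example values, mirroring the PHP-FPM entry):

```
/var/log/jetty/*.log {
daily
missingok
rotate 10
compress
delaycompress
notifempty
sharedscripts
postrotate
/etc/init.d/jetty restart
endscript
}
```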
You can view the status of the log rotation via the command:
$ cat /var/lib/logrotate/status
A CDN is a network of servers which are located around the world. They
cache static resources of your website, like images, stylesheets, javascript
files, downloads etc.
A CDN does not replace your server, but is a service you can buy &
configure separately. It will offload much of the traffic your server receives
for static content to the Content Delivery Network.
A CDN can improve the load time performance of your website because it:
offloads traffic from your server (less load on your own server)
serves static resources from the CDN server closest to the
user, which will probably be closer than your web hosting server,
because a CDN has many servers located around the world. There will
be less latency for these browser requests for static resources.
Choosing a CDN
There are quite a few CDN providers in the market place. A good overview
is listed at http://www.cdnplanet.com/. Pricing can differ quite a lot, so we
advise you to take your time to choose a CDN.
One term you’ll come across is POP, which stands for Point of Presence.
Depending on where your site visitors are coming from (Europe, USA, Asia,
Australia, …) you should investigate whether the CDN provider has one or
more POPs in that region.
For example, if your target market is China, you should use one of the CDNs
which has a POP in China.
With a Push CDN, the user manually uploads all the resources to the CDN
server. He/She then links to these resources from eg. the webpages on his
site/server.
A pull CDN takes another approach. Here the resources (images, javascript,
…) stay on the users server. The user links to the resources via the Pull CDN
URL. When the Pull CDN URL is asked for a file, it’ll fetch this file from
the original server (‘pulling’ the file) and serve it to the client.
The Pull CDN will then cache the resource until it expires. It can cause some
extra traffic on the original server if the resources expire too soon (eg.
before being changed). Also, the first person asking for a certain resource
will get a slower response because the file is not cached yet (or its cached
copy has expired).
Which type should you use? The Pull CDN is easier to use, as you don’t have
to upload new resources manually. If the resources remain static or if your
site has a minimal amount of traffic, you could opt for a Push CDN,
because there the content stays available and is never re-pulled from the
original server.
Here is a list of features we think you should take into consideration when
choosing a CDN:
Support for https - if you have a secure website, the static resources
should also be loaded over https so that everything is secure. This
means the CDN also has to support this
Support for SPDY 3.1 protocol - SPDY is a Google protocol to achieve
higher transmission speeds
Support for HTTP/2 protocol - HTTP/2, the next generation HTTP
standard, is available since 2015 and supersedes the SPDY protocol.
Support for Gzip compression - compresses javascript and stylesheets
before sending them to the browser, reducing the size of the response
significantly
Our recommendations for stable, good performing and cheap CDNs are:
KeyCDN
MaxCDN
They both offer all of the above features at a very attractive
price point.
For now we won’t yet configure the advanced features available under Show
Advanced Features. They include interesting performance related settings
like Gzip, SPDY, HTTP/2 and https usage, which we will cover in
our HTTPS chapter.
The Origin URL is the URL to your server where the CDN will find your
website resources (eg. CSS, Javascript, …)
For now leave the other settings at their default values. We’ll cover them in
the tuning section. When you save the zone, a URL will be created of the
form <name of zone>*.kxcdn.com. For example, let’s suppose this is
mywebsite-93x.kxcdn.com.
Then your website resources will be downloadable via that URL; eg. a
request to http://mywebsite-93x.kxcdn.com/images/logo.png will pull the
resource from your Origin server at
http://www.mywebsite.com/images/logo.png and cache it at the CDN server
(mywebsite-93x.kxcdn.com)
Now that you have enabled your Pull CDN zone, you can start using it on your
site. You’ll need to replace all URLs to static resources (images, css,
javascript, …) so they point to the CDN zone URL.
If you’re using a blogging platform like Wordpress, you can use the W3
Total Cache plugin to automate this process.
The easiest way to do this is to create a DNS CNAME record at your DNS
provider. If you’ve chosen DNSMadeEasy, as we explained in the DNS
chapter, this is quite easy.
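In zone-file (BIND) syntax, the record would be this single line, using the example names from this chapter:

```
cdn.mywebsite.com.    IN    CNAME    mywebsite-93x.kxcdn.com.
```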
Login at KeyCDN
Go to Zones
Click on the Managed - Edit button of your zone
Click on Show Advanced Features
Set Gzip to Enabled
Set expiration of resources
By default KeyCDN will not modify or set any cache response headers for
instructing browsers to expire resources (images, …). This means that
whatever cache response headers the Origin server sets stay intact (in our
case, whatever nginx sets).
You can also override this behavior via the KeyCDN management dashboard
in the Show Advanced Features section of your zone.
We recommend setting the value to the maximum allowed (1 year) if possible
for your website. If you have resources which change a lot, you can run into
problems where visitors of your site keep seeing the cached version in their
browser cache. To circumvent this problem the image(s) should be served
from a different URL in those cases. This can be automated via the Google
PageSpeed plugin, which we will cover later.
Setting the canonical URL Location
By default Google will use the URL it thinks is best. As we are linking to
cdn.mywebsite.com in our HTML code, Google will likely index the images
under the cdn.mywebsite.com domain. We recommend using this
approach.
If you would like Google to use the image available on your origin server (
http://www.mywebsite.com/images/logo.png), you can try the following
When KeyCDN responds with the resource it can add a canonical Link
header to the HTTP Response. The canonical URL will be your origin URL
(eg. http://www.mywebsite.com/images/logo.png). This way Google and
Bing know which version of the resource is the ‘original’ one and will only
index these in eg. their image search.
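Concretely, the extra HTTP response header has this form (the origin URL is the example used above):

```
Link: <http://www.mywebsite.com/images/logo.png>; rel="canonical"
```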
You can enable this feature via the KeyCDN management dashboard in the
Show Advanced Features section of your zone:
Configuring robots.txt
In the same Advanced Features section you can also enable a robots.txt file
instructing Google and other search engines not to crawl any content at
cdn.wimsbios.com. We don’t recommend enabling this unless you know
very well what you’re doing. For example, if you have enabled the
Canonical Header option from the previous section, you shouldn’t enable a
robots.txt which disallows Google from fetching the images and reading the
canonical URL.
HTTP is an insecure protocol, which means everything that is sent between the
browser and your server is in plain text and readable by anyone tapping the internet
connection. This could be a government agency (eg. the NSA) or someone using the
same unencrypted free WiFi hotspot as your user.
HTTPS on the other hand encrypts the communication between the browser and the
server. As such nobody can listen in on your users’ “conversations”. An https certificate
for a website also proves that users communicate with the intended website and not a
fake website run by malicious people.
Since the summer of 2014, Google has publicly said that having a https site can give
a small ranking boost in the search engine results.
It is also vital that you secure all parts of your website. This includes all pages, all
resources (images, javascript, css,), all resources hosted on a CDN, …
If you were to use https only for eg. a login page on a forum or a credit card details
page, your website would still be ‘leaking’ sensitive information hackers can use.
More specifically, this could be a session identifier or cookies, which are typically set after a
login. A hacker could reuse this information to hijack the user’s session and effectively
be logged in without knowing any password.
In October 2010 the Firesheep plugin for the Firefox browser was released which
intercepted unencrypted cookies from Twitter and Facebook, forcing them to go https
everywhere.
We also recommend offering only an https version of your site and redirecting any users
accessing the http version to the https version. We’ll explain how to do this technically in
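As a preview, the redirect itself needs only a minimal nginx server block; a sketch (server_name is an example, the full https configuration follows in that chapter):

```nginx
server {
    listen 80;
    server_name www.mywebsite.com;
    # Permanently redirect every http request to its https equivalent
    return 301 https://$host$request_uri;
}
```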
The browser will do the necessary checks to see if the certificate is still valid, is for the
correct site (eg. https://www.myexamplewebsite.com) and more.
To acquire a certificate, you’ll need to buy one from a certificate authority. A certificate
authority is a company which is trusted by the major browsers to issue valid certificates.
Well known names include Comodo, VeriSign and more.
There are a few things you need to know about the different certificate types that exist
before you can buy one.
Standard certificate
A standard certificate can be used for a single website domain. Eg. if all your content is
hosted on www.mywebsite.com, you could buy a standard certificate which is valid for
www.mywebsite.com. Note this doesn’t include any subdomains which you may also use.
For example cdn.mywebsite.com is not included. Browsers will issue warnings to the user
if you try to use a certificate that is not valid for the domain. You could buy a second
certificate for the subdomain cdn.mywebsite.com to solve this problem.
Wildcard certificate
A wildcard certificate is still only valid for one top domain (eg. mywebsite.com), but it
also supports all subdomains (*.mywebsite.com); hence the name wildcard certificate.
This kind of certificate is usually a little bit more expensive than a standard certificate.
Depending on the price and on the number of subdomains you’re going to use you’ll need
to decide between a standard and wildcard certificate.
Other types of certificates exist (eg. multi-domain), but they are usually pretty expensive, so
we’ll not cover them here.
To counter brute-force attacks trying to recover the private key in use, the key
needs to be big enough. In the past 1024 bit keys were generally created by the certificate
authorities. Nowadays you should use 2048 bit keys, because 1024 bit keys have become
too weak. (We’ll guide you through the technical details later.)
Of course the Certificate Authority plays a vital role in this: when you order a certificate
they should verify you’re the owner of the domain name.
With a normal certificate the validation is quicker and less extensive than when you order
an EV (Extended Validation) certificate. An EV certificate is more expensive due to the
extended manual verification of the site owner.
Browsers do place a lot more trust in an EV certificate. They will display the name of the
company inside a green bar in the browser’s address bar. For example:
An EV certificate could be interesting for an ecommerce site because it gives your user a
greater trust in your site which could lead to more sales.
There are some restrictions with EV certificates though: only companies can order an EV
certificate, individuals cannot. EV certificates are always for one domain only; there are
no wildcard EV certificates at this moment.
GlobalSign
Network Solutions
Symantec
Thawte
Trustwave
You can view daily updated reports of the market shares of the leading Certificate
Authorities at http://w3techs.com/technologies/overview/ssl_certificate/all
Because of better pricing we have chosen to buy a certificate from Comodo. They also
support generating 2048 bit certificates for better security.
Many companies resell certificates from the above Certificate Authorities. They are the
exact same certificates, but come with a reduced price tag. We recommend shopping
around.
One such reseller we recommend and use is the SSLStore which we will use in the
example ordering process below.
A Certificate Signing request is a file with encoded text that is generated on the server
where the certificate will be used on. It contains various details like your organization
name, the common name (=domain name), email address, locality and country. It also
contains your public key; which the Certificate Authority will put into your certificate.
When we create the Certificate Signing request below we will also generate a private key.
The Certificate Signing request will only work with the private key that was generated
with it. That private key will also be needed for the certificate you’ll buy to work.
Here is how you can create the Certificate Signing request on your server:
$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout myprivatekey.key -out cer\
tificate-signing-request.csr
req: activates the part of openssl that deals with certificate signing requests
-nodes: no DES, i.e. store the private key without protecting it with a passphrase. While
this is not considered best practice, many people do not set a passphrase or later
remove it, since services with passphrase-protected keys cannot be auto-restarted
without typing in the passphrase
-newkey: generate a new private key
rsa:2048: the bit length of the private key (1024 is the default). We will use 2048 bit
keys because our Certificate Authority supports this and it is required for certificates
which expire after October 2013
When launching the above command you’ll be asked to enter information that will be
incorporated into your certificate request.
There are quite a few fields but you can leave some blank. For some fields there will be a
default value (displayed in […] brackets). If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [AU]: <2 letter country code> eg. BE for Belgium
State or Province Name (full name) [Some-State]
Locality Name (eg. city) []
Organization Name (eg. company) [Internet Widgits Pty Ltd]: Wim Bervoets
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: this is an important setting
which we will discuss below.
Email Address []: the email address that will be in the certificate and used by the
Certificate Authority to verify your request. Make sure this email address is valid and
that you have access to it. It should also match the contact email address in the
DNS records for the domain you’re requesting a certificate for.
The Common Name should be the domain name you’re requesting a certificate for. Eg.
www.mywebsite.com
This should include the www or the subdomain you’re requesting a certificate for.
If you want to order a wildcard certificate which is valid for all subdomains you should
specify this with a star; eg. *.mywebsite.com
OpenSSL will now ask you for a few ‘extra’ attributes to be sent with your certificate
request:
Now we can download the freshly generated csr file and use it when ordering our SSL
certificate at the SSLStore.
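Before you upload the CSR, it is worth inspecting what it contains. The sketch below generates a throwaway key and CSR non-interactively (the -subj flag supplies the same fields as the interactive prompts; the values shown are placeholders for your own details) and then prints the request in readable form:

```shell
# Generate a test private key and CSR without interactive prompts
openssl req -nodes -newkey rsa:2048 -sha256 \
    -subj "/C=BE/ST=Antwerp/L=Antwerp/O=Example/CN=www.mywebsite.com" \
    -keyout myprivatekey.key -out certificate-signing-request.csr

# Print the CSR in human-readable form: check the Subject (CN) and the key size
openssl req -noout -text -in certificate-signing-request.csr
```

The output should show your Common Name in the Subject line and a 2048 bit public key.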
Ordering a certificate
Let’s suppose we want a Comodo Wildcard certificate. Go to
https://www.thesslstore.com/wildcardssl-certificates.aspx?aid=52910623 and click on the
Next you’ll be asked for your billing details and credit card information. After completing
these steps an email will be sent with a link to the Configure SSL service of Comodo
(together with a PIN)
Here you’ll also need to provide the Certificate Signing request you have generated in the
previous section.
After completing these steps, your domain will be validated by Comodo. Depending on
the type of certificate this will take a few hours to one week to complete.
As we didn’t choose an Extended Validation certificate, this validation was quick and we
soon received a ‘Domain Control Validation’ email with another validation code for our
certificate we requested.
This email was sent to the DNS contacts listed for our domain.
After entering the validation code on the Comodo website, the certificate was emailed to
our email address.
You may wonder why there are so many different certificates included and what you need
to do with them.
Browsers and devices connecting to secure sites have a fixed list of Certificate Authorities
they trust - the so called root CAs. The other kind of CAs are the intermediate Certificate
Authorities.
If the certificate of the intermediate CA is not trusted by the browser or device, the
browser will check if the certificate of the intermediate CA was issued by a trusted CA
(this goes on until a trusted (root) CA is found).
This chain of SSL certificates from the root CA certificate, over the intermediate CA
certificates to the end-user certificate for your domain represents the SSL certificate
chain.
You can view the chain in all popular browsers, for example in Chrome you can click on
the padlock item of a secure site, choose Connection and then Certificate data to view the
chain:
In the next section we’ll make use of the certificates as we install them in our nginx
configuration.
The browser will receive the full certificate chain (except for the root certificate, which
the browser already has built in).
Some browsers will display warnings when they can not find a trusted CA certificate
in the chain. This can happen if the chain is not complete.
Other browsers will try to download the intermediary CA certificates; this is not
good for the performance of your website because it slows down setting up a secure
connection. If we combine all the certificates and configure nginx properly this will
be much faster.
Note: in general a combined SSL certificate chain with fewer intermediary CAs will still
perform a little better.
You can combine the certificates on your server, after you have uploaded all the
certificate .crt files with the following command:
$ cat <your_domain>.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAdd\
TrustCA.crt > yourdomain.chained.crt
You’ll need to add the following configuration inside a server {…} block in the nginx
configuration. Please refer to our Configuring your website domain in nginx section.
$ sudo nano /usr/local/nginx/conf/conf.d/mywebsite.com.conf
server {
server_name www.mywebsite.com;
# SSL config
listen <ipv4 address>:443 default_server ssl http2;
listen [ipv6 address]:443 default_server ssl http2;
ssl_certificate /usr/local/nginx/conf/<yourdomain.chained.crt>;
ssl_certificate_key /usr/local/nginx/conf/<yourprivate.key>;
In this configuration we tell nginx to listen on an IPv4 and an IPv6 address on the default
HTTPS port 443. We enable SSL and HTTP/2.
HTTP/2 is the standardized next-generation HTTP protocol. It is based on Google’s SPDY
specification, which manipulates HTTP traffic with the goal of reducing web page
load latency. It uses compression and prioritizes and multiplexes the transfer of a web
page so that only one connection per client is required (eg. getting the HTML, images,
stylesheets and JavaScript files all happens over a single connection that is kept open).
You can check an example of what kind of performance improvements are possible with
HTTP/2 on the Akamai HTTP/2 test page
HTTP/2 is best used with TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.
Now restart the nginx server. Your site should now be accessible via https.
We recommend you to now run an SSL analyzer. You’ll get a security score and a
detailed report of your SSL configuration:
To get an A+ score the default nginx SSL configuration shown above is not enough. More
likely you’ll receive one or more of the following warnings:
To make your users use the https version of your site by default, you’ll need to redirect all
http traffic to the https protocol. Here is an example server nginx configuration which
does this:
server {
server_name www.yourwebsite.com;
listen <ip_address>:80; # Listen on the HTTP port
listen [<ip_address>]:80; # Listen on IPv6 address and HTTP 80 port
return 301 https://www.yourwebsite.com$request_uri; # Redirect all HTTP requests to HTTPS
}
If you don’t fix this error users may receive strong browser warnings and experience slow
performance.
Sometimes security issues are found in the security protocols or ciphers used for securing
websites. Some of these issues get an official name like BEAST or POODLE attacks.
By using the latest version of OpenSSL and properly configuring the nginx SSL settings
you can mitigate most of these issues.
We recommend using at least the Intermediate or the Modern configuration, as they give you
higher levels of security between browsers/clients and your server. The Modern configuration
is the most secure, but doesn’t work with old browsers.
Here are the minimum versions supported for the Modern & Intermediate configuration.
Modern: Firefox 27, Chrome 22, IE 11, Opera 14, Safari 7, Android 4.4, Java 8
Intermediate: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8,
Android 2.3, Java 7
Choose nginx, Intermediate and fill in the nginx version & OpenSSL version. The nginx
configuration generated should be like the one in the screenshot above.
In summary these settings will:
* Disable the SSLv3 protocol (even IE6 on Windows XP supports the successor TLSv1 with Windows Update) - clients are forced to use at least TLSv1.
* Order the SSL cipher suites nginx/OpenSSL supports server side with the most secure at the beginning of the list. This makes sure clients and servers will try to use the most secure options they both support.
* Specify that server ciphers should be preferred over client ciphers when using the TLS protocols (to mitigate the BEAST SSL attack).
* Enable OCSP stapling (explained in the next chapter).
OCSP stands for Online Certificate Status Protocol. Let’s explain the context a bit.
Certificates issued by a Certificate Authority can be revoked by the CA, for example
because the customer’s private key was lost or stolen, or because the domain was
transferred to a new owner.
The Online Certificate Status Protocol (OCSP) is one method for obtaining certificate
revocation information. When presented with a certificate, the browser asks the issuing
CA if there are any problems with it. If the certificate is fine, the CA can respond with a
signed assertion that the certificate is still valid. If it has been revoked, however, the CA
can say so by the same mechanism.
OCSP does have a drawback, however: it slows down new HTTPS connections. When the
browser encounters a new certificate, it has to make an additional request to a server
operated by the CA. Additionally, if the browser cannot connect to the CA, it must choose
between two undesirable options:
* It can terminate the connection on the assumption that something is wrong, which decreases usability.
* It can continue the connection, which defeats the purpose of doing this kind of revocation checking.
OCSP stapling solves these problems by having the site itself periodically ask the CA for
a signed assertion of status and sending that statement in the handshake at the beginning
of new HTTPS connections.
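In nginx, OCSP stapling is switched on with the following directives; this is a sketch using the file locations from this chapter:

```nginx
ssl_stapling on;                 # periodically fetch and staple OCSP responses
ssl_stapling_verify on;          # verify the OCSP response received from the CA
ssl_trusted_certificate /usr/local/nginx/conf/ca.root.crt;
resolver 8.8.8.8;                # DNS resolver nginx uses to reach the OCSP responder
```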
The ssl_trusted_certificate file should only contain the root Certificate Authority
certificate. In our case, we created this file like this:
cat AddTrustExternalCARoot.crt > ca.root.crt
When nginx asks for the revocation status of your certificate, it’ll ask the CA this in a
secure manner using the root CA certificate (ca.root.crt in our case).
You can check whether OCSP stapling is working with openssl:
$ echo QUIT | openssl s_client -connect www.mywebsite.com:443 -status
When it is working, the output contains an ‘OCSP Response Status: successful’ block.
You may need to rerun this command a few times if you just recently started nginx.
If OCSP is not working correctly nginx will also issue the following warning in its error
log file (/var/log/nginx/error.log)
2015/12/12 04:47:03 [error] 1472#0: OCSP_basic_verify() failed (SSL: error:27069\
065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable\
to get issuer certificate) while requesting certificate status, responder: gv.s\
ymcd.com
Suppose a user types the URL of your website into a browser without specifying the
https or http protocol. The browser will then likely load the site via http (the
insecure version). Even if you have configured your server to redirect all http requests to
https, the user may talk to the non-encrypted version of the site before being
redirected.
This opens up the potential for a man-in-the-middle attack, where the redirect could be
exploited to direct a user to a malicious site instead of the secure version of the original
page.
The HTTP Strict Transport Security feature lets a website inform the browser that it should
never try to load the site over HTTP, and that it should automatically convert all attempts
to access the site over HTTP into HTTPS requests.
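In nginx this header can be set with the add_header directive. A minimal sketch (the one-year max-age and includeSubDomains values are common choices, not requirements):

```nginx
# Inside the server { } block that listens on port 443:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```

The always parameter makes nginx add the header to responses with any status code, not only 2xx/3xx.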
To further optimize the SSL performance of nginx we can enable some caches.
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
The ssl_session_cache will create a shared cache between all the nginx worker processes.
We have reserved 20MB for storing SSL sessions (for 10 minutes). According to the
nginx documentation 1MB can store about 4000 sessions, so 20MB holds roughly 80,000
sessions. You can reduce or increase the size of the cache based on the traffic you’re
expecting.
When you’re using a CDN to host your resources, you’ll need to configure the SSL
settings in your CDN Account.
We’re going to show you how you can enable HTTPS on a KeyCDN server. The process
will be similar for eg. MaxCDN.
SSL
Custom SSL Certificate
Custom SSL Private key
Force SSL
In the Custom SSL Certificate, we need to include our domain certificate and the
intermediate CA certificates.
You’ll also need to provide your private key in the Custom SSL Private Key section. This
key is available at /usr/local/nginx/conf/<yourprivate.key>
Make sure to use a https URL for your Origin URL too (eg.
https://www.yourwebsite.com)
Please note that most CDNs that support SSL implement it via Server Name Indication
(SNI), which means multiple certificates can be presented to the browser on a single IP
address. This reduces their need for dedicated IP addresses per customer, which lowers
the cost.
Installing your own email server takes extra server resources for the
processing and storage of your emails. It also generates extra server
administration work for you. Eg. you’ll need to make sure that SPAM is
properly blocked while allowing legitimate messages to go through. You’ll
also want a good uptime of your email server.
Another reason to go with a third party provider could be that when you
change servers or your website server crashes under load; your email service
will continue to work when using a 3rd party provider.
Depending on usage there is still a free option available (Zoho Mail), which
we’ll configure in the remainder of this chapter. If you choose another
provider, the configuration steps will be similar.
If you only need emails for one domain name, you can choose the free
option, which gives you 5GB of mailbox storage per user. Otherwise take any of
the other options.
Next, specify your existing company or business domain name for which
you want to setup a youremail@yourdomain email address.
Zoho Mail will now configure your account and within a few seconds come
back to congratulate you:
Now click on the link ‘Setup <your domain> in Zoho’. You’ll need to
complete the following steps:
Verify Domain
You need to start by verifying that you own the domain you want to use in
your email address. Select your domain DNS Hosting provider from the list.
If you’re using DNSMadeEasy then choose Other.
1. CNAME Method
2. TXT Method
3. HTML Method
The CNAME and TXT method both need configuration changes inside your
DNS Service Provider Control Panel for your domain. You’ll either need to
add a CNAME record or a TXT record in eg. your DNSMadeEasy control
panel. Please check our ‘Ordering a domain name’ chapter for more
information on how to setup DNSMadeEasy.
Once you have added either record you can click on the green ‘Verify by
TXT’ or ‘Verify by CNAME’ button.
Add Users
Now you can provide a desired username to create your domain based email
account. You’re also given the possibility to add other email accounts for
your domain.
Zoho Groups
MX Records (Mail eXchange) are special entries in DNS that designate
the email-receiving server for your domain. Ensure that you have created the
required user accounts and group accounts before changing the MX records.
You must remove (delete) any MX records other than the two Zoho
records.
Zoho MX Records
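In zone-file notation, the two Zoho MX records look like this (priorities 10 and 20; the hostnames were correct at the time of writing, so verify them against the values shown in the Zoho setup wizard):

```
yourdomain.com.    3600    IN    MX    10    mx.zoho.com.
yourdomain.com.    3600    IN    MX    20    mx2.zoho.com.
```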
Your Zoho Mail account can be read in any email client that supports the
POP or IMAP protocol. The links contain the configuration instructions.
You’re then redirected to your Zoho Mail Inbox! Try to send and receive a
few emails to verify that everything is working correctly!
Wordpress includes the possibility to use themes to customize the look and feel of your
website. Most people will choose one of the free themes available or buy a professionally
designed theme template.
The Wordpress plugin system makes the Wordpress core functionality very extensible.
There are free and paid plugins for almost every feature or functionality you’ll need.
SEO plugins: optimize your articles for better search engine visibility
Performance plugins: optimize the performance of your Wordpress website
Tracking plugins: integrate Google Analytics tracking
Landing page, sales page, and membership portal plugins (eg. OptimizePress)
In this chapter we’ll focus on how to install Wordpress, and install the Yoast SEO plugin.
Downloading Wordpress
You can download the latest version of Wordpress with the following commands:
$ cd /home/<yourwebsitehome>/web
$ sudo wget https://wordpress.org/latest.tar.gz
$ sudo tar xvf latest.tar.gz
$ sudo mv wordpress/* .
$ sudo rm latest.tar.gz
In this case you have installed Wordpress in the root directory of your site. (eg. it’ll be
available at http://www.<yourwebsitedomain>.com)
If you have previously followed our Nginx install guide, the nginx config file to edit
should be at /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf
$ sudo nano /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf
index index.php;
location / {
# $uri/ needed because otherwise the Wordpress Administration panel
# doesn't work well
try_files $uri $uri/ /index.php;
}
‘index index.php’ specifies that when a directory is requested, nginx should serve index.php
by default. (This makes sure that http://www.<yourwebsitedomain>.com will execute
index.php, and thus your Wordpress blog.)
The ‘try_files’ directive makes nginx try three options: the requested URI as a file, the
URI with a slash appended (as a directory), and finally the fallback /index.php. nginx
only moves on to the next option when the previous one does not exist on disk.
We need these three options to make nginx’s forwarding to PHP-FPM work with the
kind of URLs Wordpress creates.
In this screen you need to choose a username and a password. To generate a good
database password you can use the Generate button.
In the Host field, you should select ‘local’ from the dropdown; that way the database
cannot be accessed from remote locations, but only from your server (which is where your
Wordpress installation is also running).
Now you need to assign the necessary rights for this user so the user can insert, update
and delete records on your newly created database (eg. ‘wordpress_site’)
Edit the rights of the user and then click on the rounded subtab Database. Choose the
database you have created previously. (eg. ‘wordpress_site’)
Installing Wordpress
The Wordpress installation is web based. Go to http://www.<yourwebsitedomain>.com/
to install Wordpress.
Installing Wordpress
Database name: the name of the database you have created previously (eg.
‘wordpress_site’)
Username / password: the database user and password you have created previously
with PHPMyAdmin
Database host: leave it on localhost
Table prefix: leave it on wp_
Wordpress will now test the database connection. If successful, Wordpress will try to write
a wp-config.php file with its configuration.
Due to file permissions it is possible that Wordpress is not yet able to write this file
automatically. In this case you’ll see the following screen where you can copy-paste the
contents in a file you create manually on your server:
Here is how you can do this via the SSH command line:
$ sudo nano /home/<yourwebsitedomain>/web/wp-config.php
Copy-paste the contents shown on the installation screen into the empty wp-config.php file
Now click on the ‘Run the install’ button on the Wordpress installation screen.
Site title: you can put the name of your blog here
Username: choose a username to login to your Wordpress administration pages (eg.
where you create new posts and so on). We recommend you to not choose ‘admin’,
as this makes it easier for hackers to break in to your site.
Password: Use a strong password
Your email: Use an email address you own
Enable the flag ‘Allow search engines to index this site’
The first option you need to change is in the wp-config.php file where we’ll set the
Filesystem method:
$ sudo nano /home/<yourwebsitedomain>/web/wp-config.php
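The line to add is WordPress’s standard FS_METHOD constant, set to the ‘direct’ filesystem method:

```php
/* Make WordPress write files directly instead of asking for FTP/SFTP credentials */
define('FS_METHOD', 'direct');
```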
This effectively disables the Wordpress screen where it asks for FTP/SFTP details to
upload updated files to your server.
Next you’ll need to make sure that the files are readable and writable by the user and
group (should be nginx) the files are owned by:
$ sudo find /home/<yourwebsitedomain>/web -type f -exec chmod 664 {} \;
The -type f option specifies you’re searching for regular files, and for every file found
find executes the command chmod 664.
You’ll also make the plugins and themes directory readable, writable and executable for
the user and group:
$ sudo find /home/<yourwebsitedomain>/web/wp-content/plugins -type d -exec chmod\
775 {} \;
$ sudo find /home/<yourwebsitedomain>/web/wp-content/themes -type d -exec chmod \
775 {} \;
The -type d option specifies you’re searching for directories, and for every directory found
find executes the command chmod 775.
You also need to make sure that all Wordpress files are owned by your OS user/nginx
group you created for your <yourwebsitedomain>\ website:
$ cd /home/<yourwebsitedomain>/web
$ sudo chown -fR <yourwebsitedomain>:nginx *
The first thing we’ll fix are the URLs Wordpress will generate for your pages and
blogposts. Each blogpost is uniquely accessible by a URL. In Wordpress this is called a
‘Permalink’.
By default WordPress uses web URLs which have question marks and lots of numbers in
them; however, WordPress offers you the ability to create a custom URL structure for
your permalinks and archives.
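A commonly used custom structure keeps only the post name in the URL; %postname% is one of WordPress’s standard permalink structure tags (set it under Settings -> Permalinks -> Custom Structure):

```
/%postname%/
```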
In the General Settings you’ll find the Site title of your blog. You may want to update the
Tagline because by default it’s ‘Just another Wordpress site’
Choosing a Tagline
After installing Yoast SEO plugin, you’ll see a new sidebar tab ‘SEO’ in your
administration panel. Click on the ‘SEO’ -> General item. Then click on the tab Your
info.
We recommend you to verify your site with at least the Google Search Console and the
Bing Webmaster tools. Click on any of the links and follow the instructions that are
provided.
On this page you can add all the social profiles for your website. (eg. Twitter, Facebook,
Instagram, Pinterest, Google+ and so on).
If you have not yet created social profiles for your website, we recommend doing so, as all
these profiles can direct traffic to your own site.
Sitemap files are XML files that list all the URLs available on a given site. They include
information like the last update date, how often a page changes, and so on.
The Yoast SEO plugin will automatically generate a sitemap (index) xml file. You can
enable it via the SEO -> XML Sitemaps menu.
If you click on the XML Sitemap button you’ll see that the XML Sitemap file is available
at the URL http://www.<yourwebsite>.com/sitemap_index.xml
Adding URL Rewrites for Yoast SEO Sitemap XML Files
We need some extra URL rewrite rules in the nginx configuration so that the XML
sitemap URLs do not return a HTTP 404:
$ sudo nano /usr/local/nginx/conf/conf.d/<yourwebsitedomain>.conf
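The rules below are the ones commonly published in Yoast’s knowledge base for nginx; double-check them against the current Yoast documentation before use:

```nginx
# Map the sitemap URLs that Yoast SEO generates onto index.php
rewrite ^/sitemap_index\.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?\.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
```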
We recommend you to enable Breadcrumbs for your blog via the SEO -> Advanced
section
Breadcrumbs have an important SEO value because they help the search engine (and your
visitors!) to understand the structure of your site.
For the breadcrumbs to show up on your Wordpress blog, it is possible you may need to
edit your Wordpress theme. You can learn how to do this by reading this Yoast SEO KB
article
The following websites analyse your site’s speed and give advice on how to
make it faster:
GTMetrix
PageSpeed Insights
The results are sorted in order of impact upon score; thus optimizing rules at
the top of the list can greatly improve your overall score.
Below we will install some plugins which will help to achieve a faster site
and improve your score:
That’s a good thing, because smaller image sizes mean faster page loads.
disable_functions=exec,passthru,shell_exec,system,proc_open,popen
After enabling the EWWW Image Optimizer plugin, go to the Settings
page to configure it optimally:
Don’t forget to Bulk Optimize the images already in your media library!
If you don’t use them in your Wordpress blog, you can safely disable them
via the following Wordpress plugin.
Disable Emojis
In the following section we’ll enable the Page, object and Database caching
via Memcached. We will also enable the CDN support.
Now click on the Install button to continue the installation. Click on the
Activate Plugin link.
Now go the Plugins overview page and click on the W3 Total Cache Settings
link.
In the General Settings tab we want to enable the Memcached integration for
the following types of caches:
1. Page Cache
2. Database Cache
3. Object Cache
If you have setup a CDN at KeyCDN as explained in our CDN chapter, then
you can enable the CDN settings to let Wordpress serve all resource files
from the CDN location.
Now click on the Performance -> CDN link in the sidebar. We can now
configure the details of the CDN integration:
CDN Configuration
The Browser cache settings are also disabled, because we’ll also use Google
Pagespeed for this.
Many of the suggestions given by the Pagespeed Insight analysis can be fixed by using
and configuring the Google Pagespeed plugin for nginx.
Pagespeed Insights
In our nginx chapter, we compiled nginx with the Google Pagespeed plugin included. We
haven’t enabled it in our configuration yet. Let’s do this now!
# Often PageSpeed needs to request URLs referenced from other files in order to \
optimize them. To do this it uses a fetcher. By default ngx_pagespeed uses the s\
ame fetcher mod_pagespeed does, serf, but it also has an experimental fetcher th\
at avoids the need for a separate thread by using native Nginx events. In initia\
l testing this fetcher is about 10% faster
pagespeed UseNativeFetcher on;
resolver 8.8.8.8;
# The PageSpeed Console reports various problems your installation has that can \
First we specify where the Pagespeed plugin should cache its files (eg. FileCachePath
/var/cache/ngx_pagespeed/). We also make sure the cache is big enough to hold rewritten
resources of our website, such as images, stylesheets and JavaScript files
(FileCacheSizeKb 102400, i.e. 100MB). Secondly we create a small in-memory cache per
server process (LRUCacheKbPerProcess) and a metadata cache in memory
(CreateSharedMemoryMetadataCache).
As we are already using Memcached, we can enable the Pagespeed Memcached support
by specifying on which host and port our memcached server is running.
(MemcachedServers “localhost:11211”)
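Put together, these cache settings look like the following sketch in ngx_pagespeed syntax (the sizes are the ones discussed above; tune them to your site):

```nginx
pagespeed FileCachePath /var/cache/ngx_pagespeed/;   # on-disk cache location
pagespeed FileCacheSizeKb 102400;                    # about 100MB of rewritten resources
pagespeed LRUCacheKbPerProcess 1024;                 # small in-memory cache per worker
pagespeed CreateSharedMemoryMetadataCache "/var/cache/ngx_pagespeed/" 51200;
pagespeed MemcachedServers "localhost:11211";        # reuse the existing memcached server
```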
The Google Pagespeed module features administration pages to tune and check your
installation. We enable them via the following settings:
* Statistics on
* StatisticsLogging on
* LogDir /var/log/pagespeed
The admin pages can also log informative messages, and you can purge individual URLs
from the cache. (MessageBufferSize and EnableCachePurge)
Pagespeed internally uses a fetcher to request URLs referenced from JavaScript and
stylesheet files. By specifying “UseNativeFetcher on;” we use a faster experimental fetcher
for nginx.
You should include this file for all sites where you want to enable the Pagespeed plugin.
Normally all your site configuration files have been created in the
/usr/local/nginx/conf/conf.d directory.
For example:
$ sudo nano /usr/local/nginx/conf/conf.d/mysite.conf
server {
....
include /usr/local/nginx/conf/pagespeed.conf;
....
}
Previous versions of the PageSpeed plugin would rewrite relative URLs into absolute
URLs. This wastes bytes and can cause problems for sites that run over HTTPS. This
setting makes sure this rewriting doesn’t take place.
pagespeed AdminPath /pagespeed-<your-own-url>;
The admin path specifies the URL where you can browse the admin pages. The admin
pages provide visibility into the operation of the PageSpeed optimization plugin. Please
make sure that you create your own URL which cannot be easily guessed. (and/or protect
the URL with authentication).
pagespeed RewriteLevel PassThrough;
By default the Pagespeed plugin enables a set of core filters. By setting the RewriteLevel
on PassThrough, no filters are enabled by default. We will manually enable the filters we
need below.
include pagespeed_libraries.conf;
pagespeed EnableFilters canonicalize_javascript_libraries;
* Most important, first-time site visitors can benefit from browser caching, since they
may have visited other sites making use of the same service to obtain the libraries.
* The JavaScript hosting service acts as a content delivery network (CDN) for the
hosted files, reducing load on the server and improving browser load times.
* There are no charges for the resulting use of bandwidth by site visitors.
* The hosted versions of library code are generally optimized with third-party
minification tools. These optimizations can make use of library-specific annotations
or minification settings that aren’t portable to arbitrary JavaScript code, so the
libraries benefit from more aggressive optimization than can be provided by
PageSpeed.
Here is how you should generate the pagespeed_libraries.conf which should be located in
/usr/local/nginx/conf:
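One way to produce the file, assuming you still have the ngx_pagespeed source checkout that was used to build nginx (the checkout path below is an assumption; the generator script ships in the ngx_pagespeed distribution, so verify it exists in your version), is:

```shell
cd ~/ngx_pagespeed   # path to your ngx_pagespeed source checkout (assumption)
scripts/pagespeed_libraries_generator.sh > pagespeed_libraries.conf
sudo mv pagespeed_libraries.conf /usr/local/nginx/conf/
```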
pagespeed EnableFilters extend_cache;
The extend_cache filter improves the cacheability of a web page’s resources without
compromising the ability of site owners to change the resources and have those changes
propagate to users’ browsers. By default this filter will cache the resources for 1 year.
It will also improve the cacheability of image references from within CSS files if the
rewrite_css filter is enabled.
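For illustration, cache extension rewrites a resource URL to embed a content hash (the file name and hash below are made up), so the rewritten URL can be cached for a long time while a change to the file automatically produces a new URL:

```html
<!-- before -->
<link rel="stylesheet" href="styles.css">
<!-- after: ".pagespeed.ce.<hash>." marks a cache-extended resource -->
<link rel="stylesheet" href="styles.css.pagespeed.ce.8CfGhvuMKI.css">
```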
pagespeed EnableFilters combine_css;
The Combine CSS filter seeks to reduce the number of HTTP requests made by a
browser during page refresh by replacing multiple distinct CSS files with a single CSS
file.
pagespeed EnableFilters rewrite_css;
This filter parses linked and inline CSS (in the HTML file), rewrites the images found
and minifies the CSS (stylesheet).
pagespeed EnableFilters insert_image_dimensions;
This filter inserts width= and height= attributes into <img> HTML tags that lack them and
sets them to the image’s width and height. The effect on performance is minimal,
especially on modern browsers.
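As a sketch of the transformation (the file name and dimensions are made up):

```html
<!-- before -->
<img src="logo.png">
<!-- after: dimensions are taken from the actual image -->
<img src="logo.png" width="200" height="100">
```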
pagespeed EnableFilters inline_javascript;
The “Inline JavaScript” filter reduces the number of requests made by a web page by
inserting the contents of small external JavaScript resources directly into the HTML
document.
pagespeed EnableFilters defer_javascript;
Deferring the execution of JavaScript code can often dramatically improve the rendering
speed of a site. Use this filter with caution, as it may not work on all sites.
pagespeed EnableFilters prioritize_critical_css;
This filter improves the page render times by identifying CSS rules from your CSS
stylesheet that are needed to render the visible part of the page, inlining those critical
rules and deferring the load of the full CSS resources.
pagespeed EnableFilters collapse_whitespace;
This filter will remove whitespace from your HTML, further reducing the size of the
HTML document.
pagespeed EnableFilters combine_javascript;
The ‘Combine JavaScript’ rule seeks to reduce the number of HTTP requests made by a
browser during a page refresh by replacing multiple distinct JavaScript files with a single
one.
pagespeed EnableFilters rewrite_images;
The rewrite_images filter enables the following image optimizations if the optimized
version is actually smaller than the original:
pagespeed EnableFilters flatten_css_imports;
The purpose of this filter is to reduce the number of HTTP round-trips by combining
multiple CSS resources into one. It parses linked and inlined CSS and flattens it by
replacing all @import rules with the contents of the imported file, repeating the process
recursively for each imported file.
pagespeed EnableFilters inline_css;
Inlining CSS inserts the contents of small external CSS resources directly into the
HTML document. This can reduce the time it takes to display content to the user,
especially in older browsers.
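A sketch of the transformation (the file name and rule are made up):

```html
<!-- before -->
<link rel="stylesheet" href="small.css">
<!-- after: the contents of small.css are inlined -->
<style>body { margin: 0; }</style>
```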
pagespeed EnableFilters fallback_rewrite_css_urls;
The CSS parser cannot parse some CSS3 or proprietary CSS extensions. If
fallback_rewrite_css_urls is not enabled, these CSS files will not be rewritten at all. If
the fallback_rewrite_css_urls filter is enabled, a fallback method will attempt to rewrite
the URLs in the CSS file, even if the CSS cannot be successfully parsed and minified.
pagespeed EnableFilters inline_import_to_link;
The “Inline @import to Link” filter converts a <style> tag consisting of only @import
statements into the corresponding <link> tags. This conversion does not itself result in
any significant optimization, rather its value lies in that it enables optimization of the
linked-to CSS files by later filters, in particular the combine_css, rewrite_css, inline_css,
and extend_cache filters.
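For example (the stylesheet name is made up), the filter converts:

```html
<!-- before -->
<style>@import url(style.css);</style>
<!-- after -->
<link rel="stylesheet" href="style.css">
```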
pagespeed EnableFilters convert_meta_tags;
Certain http-equiv meta tags, specifically those that specify content-type, require a
browser to reparse the HTML document if they do not match the headers. By ensuring that
the headers match the meta tags, these reparsing delays are avoided.
pagespeed EnableFilters rewrite_style_attributes_with_url;
The “Rewrite Style Attributes” filter rewrites the CSS inside elements’ style attributes to
enable CSS minification, image rewriting, image recompression, and cache extension, if
enabled. It is enabled only for style attributes that contain the text ‘url(‘, as these image
references are generally the source of the greatest improvement.
pagespeed EnableFilters insert_dns_prefetch;
DNS resolution time varies from <1ms for locally cached results, to hundreds of
milliseconds due to the cascading nature of DNS. This can contribute significantly
towards total page load time. This filter reduces DNS lookup time by providing hints to
the browser at the beginning of the HTML, which allows the browser to pre-resolve DNS
for resources on the page.
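The hints take the form of dns-prefetch link tags inserted near the top of the HTML head; for example, for resources hosted on a CDN subdomain (the domain below is made up):

```html
<head>
  <link rel="dns-prefetch" href="//cdn.mywebsite.com">
  ...
</head>
```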
# Disable filters when an HTTP/2 connection is made
set $disable_filters "";
if ($http2) {
    set $disable_filters "combine_javascript,combine_css,sprite_images";
}
pagespeed DisableFilters "$disable_filters";
Clients that make use of an HTTP/2 connection make some of the Google Pagespeed
filters unnecessary. These include all filters which combine resources such as JavaScript
and CSS files into one file.
With HTTP/1 connections there was a lot of overhead for all these extra requests.
HTTP/2 allows multiple concurrent exchanges for all these resources on the same
connection, so combining everything into one big file would actually hurt performance,
because the parallel download of the different resources can then not take place.
The above code selectively disables the combine filters for JavaScript and CSS for
HTTP/2 connections. It leaves them enabled for people accessing your site via
browsers which don’t yet support HTTP/2.
To make sure the latency of a request does not increase too much after enabling all the
Pagespeed filters, you need to make sure that the Pagespeed caches are fully used. E.g.
requesting different pages will trigger optimizations for all those pages. After the cache
has warmed up (= all the optimized versions have been generated), most of the traffic to
your website can be served from the cache.
To see if that’s the case, you should open the Pagespeed Console graphs at
http://yourwebsite.com/<pagespeed-admin-location>/console
“Cache lookups that were expired” means that although these resources were found in the
cache, they were not rewritten, because they were older than their max-age. max-age is an
HTTP Cache-Control header that is sent by your HTTP server when Pagespeed fetches
these resources.
If you notice that you have a lot of cache lookups that were expired, you can tell
Pagespeed to load the files straight from disk rather than through HTTP:
For CDNs you should also add a LoadFromFile configuration line to specify where the
Pagespeed plugin can find the resources on your web server.
E.g.
pagespeed Domain https://cdn.mywebsite.com;
pagespeed LoadFromFile "https://www.mywebsite.com" "/home/mywebsite/web/root";
pagespeed LoadFromFile "https://cdn.mywebsite.com" "/home/mywebsite/web/root";
Note that we have also enabled https support in the Pagespeed plugin by using
LoadFromFile with an https protocol and specifying where the resources are located on
your web server.
Locate the MEMSIZE=64 option and increase it to 128M or 256M depending on the
amount of RAM available.
https://ticketing.nforce.com/index.php?/Knowledgebase/Article/View/40/11/sysctl-settings-which-can-have-a-negative-affect-on-the-network-speed
http://blog.cloudflare.com/optimizing-the-linux-stack-for-mobile-web-per
Ubuntu / Linux
http://www.debuntu.org/how-to-managing-services-with-update-rc-d/
KVM
http://en.wikipedia.org/wiki/Hypervisor
http://s152758605.onlinehome.us/wp-content/uploads/2012/02/slide33.png
https://software.intel.com/sites/default/files/OVM_KVM_wp_Final7.pdf
SSD
https://sites.google.com/site/easylinuxtipsproject/ssd
https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
OpenSSL
http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu
Nginx
https://wordpress.org/plugins/nginx-helper/
https://rtcamp.com/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/
http://unix.stackexchange.com/questions/86839/nginx-with-ngx-pagespeed-ubuntu
https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
http://nginx.com/blog/tuning-nginx/
http://linuxers.org/howto/howto-use-logrotate-manage-log-files
PHP
https://support.cloud.engineyard.com/entries/26902267-PHP-Performance-I-Everything-You-Need-to-Know-About-OpCode-Caches
https://www.erianna.com/enable-zend-opcache-in-php-5-5
https://rtcamp.com/tutorials/php/fpm-status-page/
http://nitschinger.at/Benchmarking-Cache-Transcoders-in-PHP
MariaDB
https://wiki.debian.org/Hugepages
http://time.to.pullthepl.ug/blog/2008/11/18/MySQL-Large-Pages-errors/
http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/
http://www.cyberciti.biz/tips/linux-hugetlbfs-and-mysql-performance.html
http://matthiashoys.wordpress.com/tag/nr_hugepages/
https://mariadb.com/blog/how-tune-mariadb-write-performance/
https://snipt.net/fevangelou/optimised-mycnf-configuration/
http://www.percona.com/files/presentations/MySQL_Query_Cache.pdf
Jetty
http://www.eclipse.org/jetty/documentation/current/quickstart-running-jetty.html
http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/
http://greenash.net.au/thoughts/2011/02/solr-jetty-and-daemons-debugging-jettysh/
http://java-performance.info/java-string-deduplication/
http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
http://assets.en.oreilly.com/1/event/21/Connector_J%20Performance%20Gems%20Presentation.pdf
https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
https://mariadb.com/kb/en/mariadb/about-the-mariadb-java-client/
CDN
https://www.maxcdn.com/blog/manage-seo-with-cdn/
HTTPS
https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/1/19/csr-generation-using-openssl-apache-wmod_ssl-nginx-os-x
https://blog.hasgeek.com/2013/https-everywhere-at-hasgeek
https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
http://www.nginxtips.com/hardening-nginx-ssl-tsl-configuration/
https://bjornjohansen.no/optimizing-https-nginx
http://security.stackexchange.com/questions/54639/nginx-recommended-ssl-ciphers-for-security-compatibility-with-pfs
https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#choosing-the-right-cipher-suites-perfect-forward-security-pfs
http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/
https://blog.kempkens.io/posts/ocsp-stapling-with-nginx
https://gist.github.com/plentz/6737338