
ALPINE LINUX

TUTORIALS AND HOWTOS

http://wiki.alpinelinux.org/

NETWORKING

Welcome to Tutorials and Howtos

The tutorials are hands-on and the reader is expected to try and achieve the goals described in each
step, possibly with the help of a good example. The output in one step is the starting point for the
following step.

Networking
Configure Networking

Setting System Hostname


To set the system hostname, do something like the following:
echo "shortname" > /etc/hostname

Then, to activate the change, do the following:


hostname -F /etc/hostname

If you're using IPv6, you should also add the following special IPv6 addresses to your /etc/hosts file:

::1        localhost ipv6-localhost ipv6-loopback
fe00::0    ipv6-localnet
ff00::0    ipv6-mcastprefix
ff02::1    ipv6-allnodes
ff02::2    ipv6-allrouters
ff02::3    ipv6-allhosts

Tip: If you are going to use automatic IP configuration, such as IPv4 DHCP or IPv6 Stateless
Autoconfiguration, you can skip ahead to Configuring DNS. Otherwise, if you are going to use a
static IPv4 or IPv6 address, continue below.
For a static IP configuration, it's common to also add the machine's hostname you just set (above) to the
/etc/hosts file.
Here's an IPv4 example:

192.168.1.150    shortname.domain.com

And here's an IPv6 example:

2001:470:ffff:ff::2    shortname.domain.com

Configuring DNS
Tip: For users of IPv4 DHCP: Please note that /etc/resolv.conf will be completely overwritten
with any nameservers provided by DHCP. Also, if DHCP does not provide any nameservers, then
/etc/resolv.conf will still be overwritten, but will not contain any nameservers!
For using a static IP and static nameservers, use one of the following examples.
For IPv4 nameservers, edit your /etc/resolv.conf file to look like this. The following example uses Google's Public DNS servers:

nameserver 8.8.8.8
nameserver 8.8.4.4

For IPv6 nameservers, edit your /etc/resolv.conf file likewise. The following example uses Hurricane Electric's public DNS server:

nameserver 2001:470:20::2

You can also use Hurricane Electric's public DNS server via IPv4:
nameserver 74.82.42.42

Tip: If you decide to use Hurricane Electric's nameserver, be aware that it is 'Google-whitelisted'.
What does this mean? It allows you access to many of Google's services via IPv6. (Just don't
ironically add other, non-whitelisted nameservers to /etc/resolv.conf, such as Google's Public
DNS servers.) Read here for more information.

Enabling IPv6 (Optional)


If you use IPv6, do the following to enable IPv6 for now and at each boot:
modprobe ipv6
echo "ipv6" >> /etc/modules

Interface Configuration

Loopback Configuration (Required)


Note: The loopback configuration must appear first in /etc/network/interfaces to prevent
networking issues.

To configure loopback, add the following to a new file /etc/network/interfaces:


auto lo
iface lo inet loopback

The above works to set up the IPv4 loopback address (127.0.0.1), and the IPv6 loopback address (::1)
if you enabled IPv6.
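Once the loopback stanza is in place (and after running ifup lo or restarting networking), a quick read-only sanity check is to inspect the interface; the exact output format varies, but the addresses should be present:

```
ip addr show lo
```

Expect to see "inet 127.0.0.1/8" and, with IPv6 enabled, "inet6 ::1/128" in the output.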

Wireless Configuration
See Connecting to a wireless access point.

Ethernet Configuration
For the following Ethernet configuration examples, we will assume that you are using Ethernet device
eth0.

Initial Configuration
Add the following to the file /etc/network/interfaces, above any IP configuration for eth0:
auto eth0

IPv4 DHCP Configuration


Add the following to the file /etc/network/interfaces, below the auto eth0 definition:
iface eth0 inet dhcp

IPv4 Static Address Configuration


Add the following to the file /etc/network/interfaces, below the auto eth0 definition:
iface eth0 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.1

IPv6 Stateless Autoconfiguration


Add the following to the file /etc/network/interfaces, below the auto eth0 definition:
iface eth0 inet6 manual
pre-up echo 1 > /proc/sys/net/ipv6/conf/eth0/accept_ra

Tip: The "inet6 manual" method is available in busybox 1.17.3-r3 and later.
IPv6 Static Address Configuration
Add the following to the file /etc/network/interfaces, below the auto eth0 definition:
iface eth0 inet6 static
address 2001:470:ffff:ff::2
netmask 64
gateway 2001:470:ffff:ff::1
pre-up echo 0 > /proc/sys/net/ipv6/conf/eth0/accept_ra

Example: Dual-Stack Configuration


This example shows a dual-stack configuration.
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.1

iface eth0 inet6 static


address 2001:470:ffff:ff::2
netmask 64
gateway 2001:470:ffff:ff::1
pre-up echo 0 > /proc/sys/net/ipv6/conf/eth0/accept_ra

Firewalling with iptables and ip6tables


See also: How-To Alpine Wall and the Alpine Wall User's Guide.

Install iptables/ip6tables

To install iptables:
apk add iptables

To install ip6tables:
apk add ip6tables

To install the man pages for iptables and ip6tables:


apk add iptables-doc

Configure iptables/ip6tables
Tip: Good examples of how to write iptables rules can be found at the Linux Home Networking Wiki:
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch14_:_Linux_Firewalls_Using_iptables
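As a starting point, a minimal stateful ruleset might look like the following. This is an illustrative sketch only (a default-deny inbound policy that permits loopback traffic, established connections, and SSH); adapt it to your needs before saving:

```
# default-deny inbound; allow loopback, replies, and SSH
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```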

Save Firewall Rules


For iptables:

1. Set iptables to start on reboot:
   rc-update add iptables
2. Write the firewall rules to disk:
   /etc/init.d/iptables save
3. If you use Alpine Local Backup, save the configuration:
   lbu ci

For ip6tables:

1. Set ip6tables to start on reboot:
   rc-update add ip6tables
2. Write the firewall rules to disk:
   /etc/init.d/ip6tables save
3. If you use Alpine Local Backup, save the configuration:
   lbu ci

Activating Changes and Testing Connectivity


Changes made to /etc/network/interfaces can be activated by running:
/etc/init.d/networking restart

If you did not get any errors, you can now test that networking is configured properly by attempting to
ping out:
ping www.google.com

PING www.l.google.com (74.125.47.103) 56(84) bytes of data.
64 bytes from yw-in-f103.1e100.net (74.125.47.103): icmp_seq=1 ttl=48 time=58.5 ms
64 bytes from yw-in-f103.1e100.net (74.125.47.103): icmp_seq=2 ttl=48 time=56.4 ms
64 bytes from yw-in-f103.1e100.net (74.125.47.103): icmp_seq=3 ttl=48 time=57.0 ms
64 bytes from yw-in-f103.1e100.net (74.125.47.103): icmp_seq=4 ttl=48 time=60.2 ms
^C
--- www.l.google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 56.411/58.069/60.256/1.501 ms

For an IPv6 traceroute (traceroute6), you will first need to install the iputils package:
apk add iputils

Then run traceroute6:


traceroute6 ipv6.google.com

traceroute to ipv6.l.google.com (2001:4860:8009::67) from 2001:470:ffff:ff::2, 30 hops max, 16 byte packets
 1  2001:470:ffff:ff::1 (2001:470:ffff:ff::1)  3.49 ms  0.62 ms  0.607 ms
 2  * * *
 3  * * *
 4  pr61.iad07.net.google.com (2001:504:0:2:0:1:5169:1)  134.313 ms  95.342 ms  88.425 ms
 5  2001:4860::1:0:9ff (2001:4860::1:0:9ff)  100.759 ms  100.537 ms  89.907 ms
 6  2001:4860::1:0:5db (2001:4860::1:0:5db)  115.563 ms  102.946 ms  106.191 ms
 7  2001:4860::2:0:a7 (2001:4860::2:0:a7)  101.754 ms  100.475 ms  100.512 ms
 8  2001:4860:0:1::c3 (2001:4860:0:1::c3)  99.272 ms  111.989 ms  99.835 ms
 9  yw-in-x67.1e100.net (2001:4860:8009::67)  101.545 ms  109.675 ms  99.431 ms

Additional Utilities

iproute2
You may wish to install the 'iproute2' package (note that this will also install iptables, if not yet
installed):
apk add iproute2

This provides the 'ss' command which is IMHO a 'better' version of netstat.
Show listening tcp ports:
ss -tl

Show listening tcp ports and associated processes:


ss -ptl

Show listening and established tcp connections:


ss -ta

Show socket usage summary:


ss -s

Show more options:


ss -h

drill
You may also wish to install 'drill' (it will also install the 'ldns' package) which is a superior (IMHO)
replacement for nslookup and dig etc:
apk add drill

Then use it as you would for dig:

drill alpinelinux.org @8.8.8.8

To perform a reverse lookup (get a name from an IP) use the following syntax:
drill -x 8.8.8.8 @208.67.222.222

Connecting to a wireless access point


First make sure your wireless drivers are loaded properly. (If you are using a Broadcom chipset, see the note
at the end of this section.)
Install wireless-tools and wpa_supplicant.
apk add wireless-tools wpa_supplicant

Bring the link up so we can look for wireless networks. (An error here means you probably need extra
drivers/firmware.)
ip link set wlan0 up

Find a network to connect to. Look for the ESSID. In this example we will use the ESSID "MyNet".
iwlist wlan0 scanning

Lets set the ESSID:


iwconfig wlan0 essid MyNet

We need to create a shared key for wpa_supplicant.


wpa_passphrase MyNet > wpa.conf

It will wait for the passphrase on stdin; type the passphrase and press Enter. You will then have a wpa.conf
file containing the preshared key.
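For reference, the generated wpa.conf should look roughly like this (the psk hash below is an illustrative placeholder and will differ for your network; the commented plaintext line can be deleted):

```
network={
	ssid="MyNet"
	#psk="your-passphrase"
	psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}
```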
Start wpa_supplicant with the generated config:
wpa_supplicant -Dwext -iwlan0 -c ./wpa.conf

From another console, start udhcpc:

udhcpc -i wlan0

You should get an IP address.


You then want to make the connection process automatic on boot-up. Open /etc/network/interfaces and add the following stanza:

auto wlan0
iface wlan0 inet dhcp

You will also need to set wpa_supplicant to start automatically on boot:


rc-update add wpa_supplicant boot

Next, create /etc/wpa_supplicant/ (permissions of 755 with root:root are fine), and move wpa.conf
into that folder, renaming it to wpa_supplicant.conf.
Reboot and check that you are associated with the access point:
iwconfig wlan0

and check that you got a DHCP lease:


ifconfig wlan0 | grep addr

Note: BROADCOM WIFI CHIPSET USERS will need to compile the firmware manually for their
chipset. First, apk add alpine-sdk, then git clone aports from git.alpinelinux.org, switch to the
aports/non-free/b43-firmware folder, then run abuild -r. Install the generated apk file. Run fwcutter,
and you should be good to go.

Bonding
Note: Alpine Linux v2.4 or later is required

Installation
First, install the bonding package. This will give you support for bonding in the /etc/network/interfaces
file.
apk add bonding

Configuration
Edit the /etc/network/interfaces file:
auto bond0
iface bond0 inet static
address 192.168.0.2

netmask 255.255.255.0
gateway 192.168.0.1
# specify the ethernet interfaces that should be bonded
bond-slaves eth0 eth1 eth2 eth3

The bond-slaves keyword makes ifup add the listed slave interfaces to the bond0 interface.
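Once bond0 is up (ifup bond0), the bonding driver exposes its state under /proc, which is handy for checking that all slaves joined the bond:

```
cat /proc/net/bonding/bond0
```

The output lists the bonding mode, the link status, and an entry for each slave interface.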

References: http://www.kernel.org/doc/Documentation/networking/bonding.txt

Vlan

Installation
First, install the vlan package. This will give you support for vlans in the /etc/network/interfaces file.
apk add vlan

Configuration
Edit the /etc/network/interfaces file:
auto eth0.8
iface eth0.8 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1

With the vlan package installed ifup will find the trailing .8 in eth0.8 and will create a vlan interface
with vid 8 over eth0.
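After ifup eth0.8, you can confirm the VLAN ID with iproute2's detailed link view (requires the iproute2 package mentioned earlier):

```
ip -d link show eth0.8
```

The output should include a line along the lines of "vlan protocol 802.1Q id 8".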
Alternative with vlan8 over eth0:
auto vlan8
iface vlan8 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1
vlan-raw-device eth0

In these examples a static IP address was used, but it works with DHCP as well.

Example with vlans over bonding


In this example we will bond eth1 and eth2 interfaces and create vlan trunks with vid 8, 64 and 96. The
gateway (ISP) is on eth0.
auto eth0
iface eth0 inet dhcp
auto bond0
# use manual because the bond0 interface should not have any ip.
iface bond0 inet manual
bond-slaves eth1 eth2
# bring up the vlans with 'ifup bond0'
up ifup bond0.8
up ifup bond0.64

up ifup bond0.96
down ifdown bond0.96
down ifdown bond0.64
down ifdown bond0.8
# no auto since ifup bond0 will bring this up
iface bond0.8 inet static
address 10.65.8.1
netmask 255.255.255.0
iface bond0.64 inet static
address 10.65.64.1
netmask 255.255.255.0
iface bond0.96 inet static
address 10.65.96.1
netmask 255.255.255.0

Bridge

Using brctl
Bridges are manually managed with the brctl command.
Usage: brctl COMMAND [BRIDGE [INTERFACE]]
Manage ethernet bridges
Commands:
	show                        Show a list of bridges
	addbr BRIDGE                Create BRIDGE
	delbr BRIDGE                Delete BRIDGE
	addif BRIDGE IFACE          Add IFACE to BRIDGE
	delif BRIDGE IFACE          Delete IFACE from BRIDGE
	setageing BRIDGE TIME       Set ageing time
	setfd BRIDGE TIME           Set bridge forward delay
	sethello BRIDGE TIME        Set hello time
	setmaxage BRIDGE TIME       Set max message age
	setpathcost BRIDGE COST     Set path cost
	setportprio BRIDGE PRIO     Set port priority
	setbridgeprio BRIDGE PRIO   Set bridge priority
	stp BRIDGE [1|0]            STP on/off

To manually create a bridge interface br0:


brctl addbr br0

To add interface eth0 and eth1 to the bridge br0:


brctl addif br0 eth0
brctl addif br0 eth1

Note that you need to set the link status to up on the added interfaces.
ip link set dev eth0 up

ip link set dev eth1 up
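Running brctl show should then list the bridge with both members; the output will look roughly like the following (the bridge id shown here is a made-up example and will differ on your system):

```
brctl show
# bridge name     bridge id               STP enabled     interfaces
# br0             8000.0013d4a9b2c1       no              eth0
#                                                         eth1
```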

Configuration file
Note: Alpine Linux v2.4 or newer is required for this
Install the scripts that configure the bridge.
apk add bridge

Bridging is then configured in /etc/network/interfaces with the bridge-ports keyword. Note that you
normally don't assign ip addresses to the bridged interfaces (eth0 and eth1 in our example) but to the
bridge itself (br0).
In this example the address 192.168.0.1/24 is used.
auto br0
iface br0 inet static
bridge-ports eth0 eth1
bridge-stp 0
address 192.168.0.1
netmask 255.255.255.0

You can set the various options with those keywords:

bridge-aging        Set ageing time
bridge-fd           Set bridge forward delay
bridge-hello        Set hello time
bridge-maxage       Set bridge max message age
bridge-pathcost     Set path cost
bridge-portprio     Set port priority
bridge-bridgeprio   Set bridge priority
bridge-stp          STP on/off

Using pre-up/post-down
For older versions of Alpine Linux, or if you want be able to control the bridge interfaces individually,
you need to use pre-up/post-down hooks.
Example /etc/network/interfaces:

auto br0
iface br0 inet static
pre-up brctl addbr br0
pre-up echo 0 > /proc/sys/net/bridge/bridge-nf-call-arptables
pre-up echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
pre-up echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
address 192.168.0.253
netmask 255.255.255.0
gateway 192.168.0.254
post-down brctl delbr br0
auto eth0
iface eth0 inet manual
up ip link set $IFACE up
up brctl addif br0 $IFACE
down brctl delif br0 $IFACE || true
down ip link set $IFACE down
auto eth1
iface eth1 inet manual
up ip link set $IFACE up
up brctl addif br0 $IFACE
down brctl delif br0 $IFACE || true
down ip link set $IFACE down

That way you create br0 with: ifup br0, and you can add/remove individual interfaces to the bridge with
ifup eth0, ifdown eth0.

Bridging for a Xen dom0


Bridging in a dom0 is a bit specific, as it consists of bridging a real interface (i.e. ethX) with a virtual
interface (i.e. vifX.Y). At bridge creation time, the virtual interface does not exist; it will be added by
the Xen toolstack when a domU is booting (see the Xen documentation on how to link the virtual interface
to the correct bridge).

Particularities:
- the bridge consists of a single physical interface
- the physical interface does not have an IP and is configured as manual
- the bridge will have the IP and will be auto, resulting in bringing up the physical interface

This translates to this sample config:
Example /etc/network/interfaces:
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
address 192.168.0.253
netmask 255.255.255.0
gateway 192.168.0.254
bridge_ports eth0
bridge_stp 0

After the domU OS is started, the virtual interface will be added, and the working bridge can be checked
with:
brctl show
ifconfig -a

OpenVSwitch

Installing OVS
apk add openvswitch
rc-update add ovs-modules
rc-update add ovsdb-server
rc-update add ovs-vswitchd
rc-service ovs-modules start
rc-service ovsdb-server start
rc-service ovs-vswitchd start

Using ovs-vsctl
Open vSwitch bridges are manually managed with the ovs-vsctl command.
To manually create a switch named "lan":
ovs-vsctl add-br lan

To add interface eth0 to the switch "lan":


ovs-vsctl add-port lan eth0

Note that you need to set the link status to up on the added interfaces.
ip link set dev eth0 up

To see what OVS bridges are defined:


ovs-vsctl list-br

To see what interfaces are linked to the lan OVS:


ovs-vsctl list-ports lan

To enable spanning tree (if needed):


ovs-vsctl set bridge lan stp_enable=true
To set the LACP timer to 'fast' mode:

ovs-vsctl set port bond0 other_config:lacp-time=fast

Using ovs-appctl


ovs-appctl lacp/show bond0

Configuration file
OVS interfaces can also be configured in /etc/network/interfaces:

auto eth0 lan


iface eth0 inet manual
up ifconfig eth0 0.0.0.0 up
down ifconfig eth0 down
iface lan inet dhcp

OVS and qemu


Helper scripts
ovs-ifup:

#!/bin/sh
switch=$(echo $0|/usr/bin/cut -d- -f3)
[ -z ${switch} ] && echo "Please define some symlink with suffix to use." && exit 1
[ $# -lt 1 ] && echo "Too few params. Must be 1 and is $#." && exit 2
/sbin/ifconfig $1 0.0.0.0 up
ovs-vsctl add-port ${switch} $1
logger "qemu: $1 added to ${switch} at startup of VM"

ovs-ifdown:

#!/bin/sh
switch=$(echo $0|/usr/bin/cut -d- -f3)
[ -z ${switch} ] && echo "Please define some symlink with suffix to use." && exit 1
[ $# -lt 1 ] && echo "Too few params. Must be 1 and is $#." && exit 2
/sbin/ifconfig $1 0.0.0.0 down
ovs-vsctl del-port ${switch} $1
logger "qemu: $1 removed from ${switch} at shutdown of VM"
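The helper scripts derive the switch name from their own filename, so you invoke them through symlinks whose suffix names the switch (the symlink name ovs-ifup-lan below is a hypothetical example); qemu is then pointed at the symlinks via its tap script= and downscript= options. The derivation itself is just:

```shell
# field 3 of the '-'-separated script name selects the switch
switch=$(echo ovs-ifup-lan | cut -d- -f3)
echo "$switch"   # prints: lan
```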

OVS and LXC


Helper scripts
lxc-ovs-ifdown:

#!/bin/sh
switch=$(echo $0|/usr/bin/cut -d- -f4)
[ -z ${switch} ] && echo "Please define some symlink with suffix to use." && exit 1
[ $# -lt 5 ] && echo "Too few params. Must be 5 and is $#." && exit 2
nic=$5
/usr/bin/ovs-vsctl del-port ${switch} ${nic}
/usr/bin/logger "lxc: ${nic} removed from ${switch} at shutdown of VM."

Caveats
Make sure the OVS package files are available at the next reboot! This can be a problem when
running from RAM with no cache...

How to configure static routes


First, set the staticroute service to start automatically on boot by entering the following command:
rc-update add staticroute

You can now define your static routes in one of three ways:

Option 1: Edit /etc/conf.d/staticroute


Routes are added as a parameter value - the staticroute file has header information that explains the
syntax - for example:
staticroute="net 10.200.200.0 netmask 255.255.255.0 gw 192.168.202.12"

If you need to add multiple static routes, just add additional routes to the end of the text between the
quotes, with each route separated by a semicolon or a line break (press ENTER).
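For example, two routes separated by a semicolon (the second route below is a hypothetical addition for illustration):

```
staticroute="net 10.200.200.0 netmask 255.255.255.0 gw 192.168.202.12;
net 10.200.201.0 netmask 255.255.255.0 gw 192.168.202.12"
```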

Option 2: Create /etc/route.conf


Use nano, vi, or your favourite installed editor to create the file /etc/route.conf and add each static route
on a separate line - for example:
net 10.200.200.0 netmask 255.255.255.0 gw 192.168.200.12
net 10.200.201.0 netmask 255.255.255.0 gw 192.168.200.13

Option 3: Setup routes in /etc/network/interfaces


If you have static network configuration in /etc/network/interfaces, you can add a static route that gets
activated when an interface gets brought up:
auto eth0
iface eth0 inet static
address 192.168.0.1
netmask 255.255.255.0
up ip route add 10.14.0.0/16 via 192.168.0.2
up ip route add 192.168.100.0/23 via 192.168.0.3

Once editing is complete, remember to save your changes if you are running the OS in RAM (e.g.
booting from a USB stick). If you are not sure how to do this, see this article: How to Boot Alpine
Linux and Save Settings on a USB Stick, or Alpine_local_backup.
You can verify your settings by restarting the server, or by issuing the following commands:
To add your defined static routes manually (for testing):
/etc/init.d/staticroute start

To remove your defined static routes:


/etc/init.d/staticroute stop

There is a 'restart' option to do an automatic stop/start, but it seems less reliable than using the two
commands above.

Alpine Wall
This section is a design and implementation plan for a new firewall management framework. The
new framework addresses the limitations of Shorewall, which is probably the most common
solution used with Alpine.

Proposal
We evaluated several existing open source projects, none of which satisfied our demanding taste. The
existing solutions are either too tied to specific (router) distributions, targeted at home users (with too
many assumptions built in), or dependent on bloated frameworks (usually Perl). Moreover, we would
like to keep management of firewall settings and activation of such settings as two separate workflows,
which would facilitate centralized management of firewall rules.
As no readily available solution was found, the proposal is to implement a new management
framework for iptables, which would integrate with the Alpine Configuration Framework (ACF).
The framework is hereafter referred to as the Alpine Wall (awall).

Design
Awall would consist of three major components: data model, front-end, and back-end. It also
implements a plug-in architecture, which allows extending the data model and functionality, in order to
simplify common organization-specific administrative tasks.
The data model would describe the firewall configuration using concepts and terminology that is
roughly compatible with Shorewall. It would also borrow some useful concepts from other firewall
solutions we evaluated, such as the Service concept as defined in the Turtle Firewall (but generalized a
bit). Awall plug-ins can contain schema extension modules augmenting the basic model provided by
the framework.
The back-end is responsible for translating the model's data into configuration files, most notably the
files that can be read by iptables-restore and ip6tables-restore. Moreover, it can produce files
into e.g. /etc/modprobe.d and /etc/sysctl.d if necessary. When a plug-in extends the data model,
it must also provide a back-end module that interprets model extension and translates the data into
iptables and other rules. The framework includes a module for interpreting the base model. The
framework is responsible for ordering and aggregating the results produced by all modules into actual
configuration files.
The front-end is essentially an ACF module which allows editing the data model and activating the
changes with the help of the back-end. The front-end implements also a fallback mechanism that
prevents the operator from locking himself out by a faulty configuration. The configuration data is
stored in text files which can be directly edited. The front-end provides a command line tool for
validating and activating the configuration after manual changes.

Base Model
The basic data model could roughly look like as follows:

Zone
interface*, subnet*

Service
(protocol, port*)+

Forwarding policy
Zone:in*, Zone:out*, accept/reject/drop, masq_on/masq_off

Filtering rule
Zone:in*, Zone:out*, Service+, accept/reject/drop, conn_limit?, flow_limit?

NAT rule
snat/dnat, Zone:in*, Zone:out*, Service, ip4_range, port_range?

Subnets in zone definitions can be declared using IPv4/IPv6 addresses (CIDR notation), domain names,
or as references to ipsets. A domain name can resolve to one or more IP addresses. The referred ipsets
may be managed manually or by some other tool.
If a packet with source address a arrives on interface i, it is considered to originate from zone Z = (I, S)
(where I is the set of interfaces and S is the set of subnets) if and only if I includes i, and a belongs to
any subnet of S. In zone definitions, I would default to the set of all interfaces and S to {0.0.0.0/0, ::}.
The destination zone would be defined in a similar way based on the packet's destination address and
interface.

Implementation Considerations
The data model should preferably be based on some existing format, such as JSON, XML, or YAML.
In order to allow extensions to the data model, awall must define some kind of schema language. This
language would embed the necessary information the front-end needs to automatically generate a user
interface for the extension. For example, the help texts shown to the user would be placed in the
schema modules.
Ideally, the base model would be described using the very same language as the model extensions, but
it would impose quite demanding requirements on the language, e.g. support for complex data types. If
we select this approach and model the data using XML, we could use XML Schema as the basis. There
is also an (expired) Internet Draft on JSON Schema, but there seems to be no existing validator
implementation in C or Lua.
Even though elegant from architecture point of view, it is unlikely that support for complex data types
would be required by typical extensions. In most cases, a set of global variables of primitive types
would suffice. Therefore, we could just use a very simple language for declaring such variables or
implement support for a limited subset of some well-known schema language. In this alternative, the
base model would not be described using this language but rather hard-coded into the front-end.
The back-end modules are responsible for translating the configuration data into configuration file
fragments. As regards their implementation, we have two alternatives. The first alternative is to
implement them as Lua functions invoked by the framework in a defined way. The framework would
provide a library that allows the said functions to access the data model, and also otherwise assists in
their implementation. The functions would report the results back to the framework, which finally
would translate them into target files.
In the second alternative, the back-end modules would be implemented using a template language
rather than a general-purpose programming language. An example of a firewall-related template
language is ferm, which unfortunately is implemented in Perl. Ferm also lacks certain capabilities
required to implement e.g. the Zone and Service concepts conveniently. However, we could introduce a
new template language that would better suit our purposes. Such a language would eliminate some
redundancy from the back-end modules which necessarily comes with the use of a general-purpose
language. On the other hand, developing and maintaining such a language would take effort and might
make the framework initially more difficult to use.
The back-end will contain functionality for domain name resolution. In the data model, hosts or groups
thereof can be identified by their domain names. The back-end will resolve these to IP addresses, which
will be stored in the target files, so there will be no need to resolve anything when activating the
configuration during boot.

How-To Alpine Wall


Make sure you are running latest version by running the following commands:
apk update

apk add -u awall


apk version awall

Structure
Your AWall firewall configuration file(s) go in /etc/awall/optional.
Each such file is called a Policy.

Note: AWall versions prior to 0.2.12 will only look for Policy files in /usr/share/awall/optional.
From version 0.2.12 and higher, AWall will look for Policy files in both /etc/awall/optional and
/usr/share/awall/optional.
You may have multiple Policy files (it is useful to have separate files for e.g. HTTP, FTP, and other
roles).
Policies can be enabled or disabled by using the "awall [enable|disable]" command.

Note: AWall's Policy files are not equivalent to Shorewall's /etc/shorewall/policy file.
An AWall Policy can contain definitions of:

variables (like /etc/shorewall/params)


zones (like /etc/shorewall/zones)
interfaces (like /etc/shorewall/interfaces)
policies (like /etc/shorewall/policy)
filters and NAT rules (like /etc/shorewall/rules)
services (like /usr/share/shorewall/macro.HTTP)

Prerequisites
After installing AWall, you need to load the following iptables modules:
modprobe ip_tables
modprobe iptable_nat    # if NAT is used

This is needed only the first time, after AWall installation.


Make the firewall autostart at boot and autoload the needed modules:
rc-update add iptables

A Basic Home Firewall


We will give an example of how you can convert a "Basic home firewall" from Shorewall to AWall.

Example firewall using Shorewall


Let's suppose you have the following Shorewall configuration:

/etc/shorewall/zones
inet    ipv4
loc     ipv4

/etc/shorewall/interfaces
inet    eth0
loc     eth1

/etc/shorewall/policy
fw      all     ACCEPT
loc     inet    ACCEPT
all     all     DROP

/etc/shorewall/masq
eth0    0.0.0.0/0

Example firewall using AWall


Now we will configure AWall to do the same thing as we just did with the above Shorewall example.
Create a new file called /etc/awall/optional/test-policy.json and add the following content to
the file.

Tip: You could call it something else, as long as you save it in /etc/awall/optional/ and name it
???.json

{
"description": "Home firewall",
"zone": {
"inet": { "iface": "eth0" },
"loc": { "iface": "eth1" }
},
"policy": [
{ "in": "_fw", "action": "accept" },
{ "in": "loc", "out": "inet", "action": "accept" }
],
"snat": [
{ "out": "inet" }
]
}

The above configuration will:

Create a description of your Policy


Define zones
Define policy
Define snat (to masquerade the outgoing traffic)

Note: snat means "source NAT". It does not mean "static NAT".
Tip: AWall has a built-in zone named "_fw" which is the "firewall itself". This corresponds to the
Shorewall "fw" zone.
Activating/Applying a Policy
After saving the Policy you can run the following commands to activate your firewall settings:

awall list                 # Lists available Policies (this step is optional)

awall enable test-policy   # Enables the Policy

awall activate             # Generates firewall configuration from the Policy files and enables it (starts the firewall)

If you have multiple policies, after enabling or disabling them, you need to always run awall activate in
order to update the iptables rules.

Advanced Firewall settings


Assuming you have your /etc/awall/optional/test-policy.json with your "Basic home firewall"
settings, you could choose to modify that file to test the below examples.

Tip: You could create new files in /etc/awall/optional/ for testing some of the below examples

Logging
AWall will (since v0.2.7) automatically log dropped packets.


You could add the following row to the "policy" section in your Policy file in order to see the dropped
packets.
{ "in": "inet", "out": "loc", "action": "drop" }
Note: If you are using the Alpine 2.4 repository (AWall v0.2.5 or below), you should use
"action": "logdrop" in order to log dropped packets.

Note: If you are adding the above content to an already existing file, then make sure you add ","
signs where they are needed!

Port-Forwarding
Let's suppose you have a local web server (192.168.1.10) that you want to make accessible from the
"inet".
With Shorewall you would have a rule like this in your /etc/shorewall/rules:
#ACTION   SOURCE   DEST               PROTO   DEST      SOURCE    ORIGINAL
#                                             PORT(S)   PORT(S)   DEST
DNAT      inet     loc:192.168.1.10   tcp     80

Let's configure our AWall Policy file likewise by adding the following content.
"variable": {
"APACHE": "192.168.1.10",
"STATIC_IP": "1.2.3.4"
},
"filter": [
{ "in": "inet",
"dest": "$STATIC_IP",
"service": "http",
"action": "accept",
"dnat": "$APACHE"
}
]

As you can see in the above example, we create:

- a "variable" section where we specify some IP addresses
- a "filter" section where we do the actual port-forwarding (using the variables we just created
  and some preexisting "service" definitions)

Note: If you are adding the above content to an already existing file, then make sure you add ","
signs where they are needed!
Tip: AWall already has a "service" definition list for several services like HTTP, FTP, SNMP, etc. (see
/usr/share/awall/mandatory/services.json)
If you need to forward to a different port (e.g. 8080) you can do:
"dnat": [
{"in": "inet", "dest": "$STATIC_IP", "to-addr": "$APACHE", "service": "http",
"to-port": 8080 }
]

Create your own service definitions


You can add your own service definitions into your Policy files:
"service": {
"openvpn": { "proto": "udp", "port": 1194 }
}
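A hypothetical filter rule could then reference that custom service by name, just like the built-in ones:

```
"filter": [
    { "in": "inet", "service": "openvpn", "action": "accept" }
]
```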

Note: You can not override a "service" definition that comes from
/usr/share/awall/mandatory/services.json.

Note: If you are adding the above content to a already existing file, then make sure you add ","
signs where they are needed!

Inherit services or variables


You can import a Policy into other Policy files for inheriting services or variables definitions:
"import": "myfirewall"

Specify load order


By default, policies are loaded in alphabetical order.
You can change the load order with the keywords "before" and "after":
"before": "myfirewall"
"after": "someotherpolicy"

Other

Help and debugging


If you end up in some kind of trouble, you might find these commands useful when debugging:

awall              # (with no parameters) shows some basic help about awall

iptables -L -n     # shows what's in iptables

Alpine Wall User's Guide

Configuration File Processing


Alpine Wall (awall) reads its configuration from multiple JSON-formatted files, called policy files. The
files located in directory /usr/share/awall/mandatory are mandatory policies shipped with APK
packages. In addition, there can be installation-specific mandatory policies in /etc/awall.
The latter directory may also contain symbolic links to policy files located in
/usr/share/awall/optional and /etc/awall/optional. These are optional policies, which can be
enabled on need basis. Such symbolic links are easily created and destroyed using the awall enable
and awall disable commands. awall list shows which optional policies are enabled and disabled.
The command also prints the description of the optional policy if defined in the file using a top-level
attribute named description.
Sometimes a policy file depends on other policy files. In this case, the policy file must have a top-level
attribute import, the value of which is a list of policy names, which correspond to the file names
without the .json suffix. The imported policies may be either optional policies or private policies,
located in /usr/share/awall/private or /etc/awall/private. By default, the policies listed there
are processed before the importing policy.
The order of the generated iptables rules generally reflects the processing order of their corresponding
awall policies. The processing order of policies can be adjusted by defining top-level attributes after
and before in policy files. These attributes are lists of policies, after or before which the declaring
policy shall be processed. Putting a policy name to either of these lists does not by itself import the
policy. The ordering directives are ignored with respect to those policies that are not enabled by the
user or imported by other policies. If not defined, after is assumed to be equal to the relative
complement of the before definition in the import definition of the policy.
As the import directive does not require the path name to be specified, awall expects policies to have
unique names, even if located in different directories. It is allowed to import optional policies that are
not explicitly enabled by the user. Such policies show up with the required status in the output of
awall list.

List Parameters
Several awall parameters are defined as lists of values. In order to facilitate manual editing of policy
files, awall also accepts single values in place of lists. Such values are semantically equivalent to lists
containing one element.
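For example, this rule fragment uses single values where lists are expected:

```json
{ "in": "inet", "service": "http" }
```

It is read exactly as { "in": [ "inet" ], "service": [ "http" ] }.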

Variable Expansion
Awall allows variable definitions in policy files. The top-level attribute variable is a dictionary
containing the definitions. The value of a variable can be of any type (string, integer, list, or
dictionary).
A variable is referenced in policy files by a string which equals the variable name prepended with the $
character. If the value of the variable is a string, the reference can be embedded into a longer string in
order to substitute some part of that string (in shell style). Variable references can be used when
defining other variables, as long as the definitions are not circular.
Policy files can reference variables defined in other policy files. Policy files can also override variables
defined elsewhere by redefining them. In this case, the new definition affects all policy files, also those
processed before the overriding policy. Awall variables are in fact simple macros, since each variable
remains constant throughout a single processing round. If multiple files define the same variable, the
definition in the file processed last takes effect.

If defined as an empty string, all non-embedded references to a variable evaluate as if the attribute in
question was not present in the configuration. This is also the case when a string containing embedded
variable references finally evaluates to an empty string.
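A minimal sketch of both reference styles (interface, variable, and zone names are illustrative, and this assumes the parser delimits the variable name at the "." as in shell): WAN_VLAN embeds $WAN_IFACE inside a longer string, and the zone then references the result.

```json
{
    "variable": {
        "WAN_IFACE": "eth0",
        "WAN_VLAN": "$WAN_IFACE.100"
    },

    "zone": {
        "inet": { "iface": "$WAN_VLAN" }
    }
}
```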

Configuration Objects
Configuration objects can be divided into two main types. Auxiliary objects model high-level concepts
such as services and zones. Rule objects translate into one or more iptables rules, and are often defined
with the help of some auxiliary objects.

Services
A service represents a set of network protocols. A top-level attribute service is a dictionary that maps
service names to service definition objects, or lists thereof in more complex cases.
A service definition object contains an attribute named proto, which corresponds to the --protocol
option of iptables. The protocol can be defined as a numerical value or string as defined in
/etc/protocols. If the protocol is tcp or udp, the scope of the service definition may be constrained
by defining an attribute named port, which is a list of TCP or UDP port numbers or ranges thereof,
separated by the - character. If the protocol is icmp or icmpv6, an analogous type attribute may be
used. The replies to ICMP messages have their own type codes, which may be specified using the
reply-type attribute.
If the protocol is icmp or icmpv6, the scope of the rule is also automatically limited to IPv4 or IPv6,
respectively. There are also other services which are specific to IPv4 or IPv6. To constrain the scope of
the service definition to either protocol version, an optional family attribute can be set to value inet or
inet6, respectively.
Some services require the server or client to open additional connections to dynamically allocated ports
or even different hosts. Connection tracking helpers are used to make the firewall aware of such
additional connections. The ct-helper attribute is used to associate such a helper to a service definition
when required by the service.
All rule objects, except for policies, may have an attribute named service, constraining the rule's scope
to specific services only. This attribute is a list of service names, referring to the keys of the top-level
service dictionary.
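To illustrate (service names and numbers are examples, chosen not to collide with the mandatory definitions): a multi-protocol service defined as a list, a UDP port range, and an ICMPv6 echo service with its reply type.

```json
{
    "service": {
        "my-dns": [
            { "proto": "udp", "port": 53 },
            { "proto": "tcp", "port": 53 }
        ],
        "my-rtp": { "proto": "udp", "port": "10000-20000" },
        "my-ping6": { "proto": "icmpv6", "type": 128, "reply-type": 129 }
    }
}
```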

Zones
A zone represents a set of network hosts. A top-level attribute zone is a dictionary that maps zone
names to zone objects. A zone object has an attribute named iface, addr, or both. iface is a list of
network interfaces and addr is a list of IPv4/IPv6 host and network addresses (CIDR notation). addr
may also contain domain names, which are expanded to IP addresses using DNS resolution. If not
defined, addr defaults to the entire address space and iface to all interfaces. An empty zone can be
defined by setting either addr or iface to an empty list.
Rule objects contain two attributes, in and out, which are lists of zone names. These attributes control
whether a packet matches the rule or not. If a particular zone is referenced by the in attribute, the rule
applies to packets whose ingress interface and source address are covered by the zone definition.
Correspondingly, if a zone is referenced by the out attribute, the rule applies to packets whose egress
interface and destination address are included in the zone. If both in and out are defined, the packet
must fulfill both criteria in order to match the rule.
The firewall host itself can be referred to using the special value _fw as the zone name.
By default, awall does not generate iptables rules with identical ingress and egress interfaces. This
behavior can be changed per zone by setting the optional route-back attribute of the zone to true. Note
that this attribute can have an effect also in the case where in and out attributes of a rule are not equal

but their definitions overlap. In this case, the route-back attribute of the out zone determines the
behavior.
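A sketch of some zone definitions (interface names and addresses are illustrative), plus a filter that uses the special _fw zone to accept SSH from the local network to the firewall host itself:

```json
{
    "zone": {
        "inet": { "iface": "eth0" },
        "loc":  { "iface": "eth1", "addr": "192.168.1.0/24" },
        "vpn":  { "iface": "tun0", "route-back": true }
    },

    "filter": [
        { "in": "loc", "out": "_fw", "service": "ssh", "action": "accept" }
    ]
}
```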

Limits
A limit specifies the maximum rate for a flow of packets or new connections. Unlike the other auxiliary
objects, limits are not named members of a top-level dictionary but are embedded into other objects.
In its simplest form, a limit definition is an integer specifying the maximum number of packets or
connections per second. More complex limits are defined as objects, where the count attribute defines
the maximum during an interval defined by the interval attribute. The unit of the interval attribute is
second, and the default value is 1.
The maximum rate defined by a limit may be absolute or specific to blocks of IP addresses or pairs
thereof. The number of most significant bits taken into account when mapping the source and
destination IP addresses to blocks can be specified with the mask attribute. The mask attribute is an
object with two attributes defining the prefix lengths, named src and dest. Alternatively, the mask
object may have object attributes named inet and inet6 which contain address family-specific prefix
length pairs. If mask is defined as an integer, it is interpreted as the source address prefix length.
The default value for mask depends on the type of the enclosing object. For filters, the default behavior
is to apply the limit for each source address separately. For logging classes, the limit is considered
absolute by default.
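As a sketch, the following accept rule limits each /24 source block (mask given as an integer, i.e. a source prefix length) to 3 new SSH connections per minute; packets exceeding the limit are dropped. The zone names are assumed to be defined elsewhere.

```json
{
    "filter": [
        {
            "in": "inet",
            "out": "_fw",
            "service": "ssh",
            "action": "accept",
            "conn-limit": { "count": 3, "interval": 60, "mask": 24 }
        }
    ]
}
```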

Logging Classes
A logging class specifies how packets matching certain rules are logged. A top-level attribute log is a
dictionary that maps logging class names to setting objects.
A setting object may have an attribute named mode, which specifies which logging facility to use.
Allowed values are log, nflog, and ulog. The default is log, i.e. in-kernel logging.
The following table shows the optional attributes valid for all logging modes:

Attribute    Description

every        Divide successive packets into groups, the size of which is specified by the value
             of this attribute, and log only the first packet of each group

limit        Maximum number of packets to be logged, defined as a limit

prefix       String with which the log entries are prefixed

probability  Probability for logging an individual packet (default: 1)


With the in-kernel log mode log, the level of logging may be specified using the level attribute. Log
modes nflog and ulog are about copying the packets into user space, at least partially. The following
table shows the additional attributes valid with these modes:

Attribute  Description

group      Netlink group to be used

range      Number of bytes to be copied

threshold  Number of packets to queue inside the kernel before copying them
Filter and policy rules can have an attribute named log. If it is a string, it is interpreted as a reference to
a logging class, and logging is performed according to the definitions. If the value of the log attribute is

true (boolean), logging is done using default settings. If the value is false (boolean), logging is
disabled for the rule. If log is not defined, logging is done using the default settings except for accept
rules, for which logging is omitted.
Default logging settings can be set by defining a logging class named _default. Normally, default
logging uses the log mode with packets limited to one per second.
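A sketch of a logging class in use (the class name "suspicious" and the netlink group number are illustrative): the _default class here reproduces the normal defaults, while the policy rule logs its drops via nflog.

```json
{
    "log": {
        "_default": { "mode": "log", "limit": 1 },
        "suspicious": { "mode": "nflog", "group": 1, "prefix": "SUSPECT: " }
    },

    "policy": [
        { "in": "inet", "action": "drop", "log": "suspicious" }
    ]
}
```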

Rules
There are several types of rule objects:

Filter rules
Policy rules
Packet Logging rules
NAT rules
Packet Marking rules
Transparent Proxy rules
MSS Clamping rules
Connection Tracking Bypass rules

All rule objects can have the in and out attributes referring to zones as described in the previous
section. In addition, the scope of the rule can be further constrained with the following attributes:

Attribute  Value format                           Effect

src        Similar to the addr attribute of       Packet's source address matches the value
           zone objects

dest       Similar to the addr attribute of       Packet's destination address matches the value
           zone objects

ipset      Object containing two attributes:      Packet matches the IP set referred to here when
           name, referring to an IP set, and      the match arguments are taken from the source
           args, a list of the strings in and     (in) and destination (out) address or port in the
           out                                    order specified by args

ipsec      in or out                              IPsec decapsulation performed on ingress (in) or
                                                  encapsulation performed on egress (out)

Rule objects are declared in type-specific top-level dictionaries in awall policy files. If a packet
matches multiple rules, the one appearing earlier in the list takes precedence. If the matching rules are
defined in different policy files, the one that was processed earlier takes precedence in the current
implementation, but this may change in future versions.

Filter Rules
Filter objects specify an action for packets fulfilling certain criteria. The top-level attribute filter is a
list of filter objects.
Filter objects must have an attribute named action, the value of which can be one of the following:

Value   Action

accept  Accept the packet (default)

reject  Reject the packet with an ICMP error message

drop    Silently drop the packet

tarpit  Put incoming TCP connections into persist state and ignore attempts to close them.
        Silently drop non-TCP packets. (Connection tracking bypass is automatically enabled
        for the matching packets.)
Filter objects, the action of which is accept, may also contain limits for packet flow or new
connections. These are specified with the flow-limit and conn-limit attributes, respectively. The values
of these attributes are limit objects. The drop action is applied to the packets exceeding the limit.
Optionally, the limit object may have an attribute named log. It defines how the dropped packets should
be logged and is semantically similar to the log attribute of rule objects.
Filter objects may have an attribute named dnat, the value of which is an IPv4 address. If defined, this
enables destination NAT for all IPv4 packets matching the rule, such that the specified address replaces
the original destination address. If port translation is also desired, the attribute may be defined as an
object consisting of attributes addr and port. The format of the port attribute is similar to that of the
to-port attribute of NAT rules. This option has no effect on IPv6 packets.
Filter objects may have a boolean attribute named no-track. If set to true, connection tracking is
bypassed for the matching packets. In addition, if action is set to accept, the corresponding packets
travelling to the reverse direction are also allowed.
If one or more connection tracking helpers are associated with the services referred to by an accept
rule, additional iptables rules are generated for the related connections detected by the helpers. The
related attribute can be used to override the default rules generated by awall. It is a list of basic rule
objects, the packets matching to which are accepted, provided that they are also detected by at least one
of the helpers.
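Reusing the variables from the port-forwarding example earlier, a filter that accepts the traffic and destination-NATs it while also translating the port could look like this (a sketch; $STATIC_IP and $APACHE must be defined in the same or an imported policy):

```json
{
    "filter": [
        {
            "in": "inet",
            "dest": "$STATIC_IP",
            "service": "http",
            "action": "accept",
            "dnat": { "addr": "$APACHE", "port": 8080 }
        }
    ]
}
```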

Policy Rules
Policy objects describe the default action for packets that did not match any filter. The top-level
attribute policy is a list of policy objects.
Policy objects must have the action attribute defined. The possible values and their semantics are the
same as in filter rules.

Packet Logging Rules


Packet logging rules allow packets matching the specified criteria to be logged before any filtering
takes place. Such rules are contained in the top-level list named packet-log.
A logging class may be specified using the log attribute. Otherwise, default logging settings are used.

NAT Rules
NAT rules come in two flavors: source NAT rules and destination NAT rules. These are contained in
two top-level lists named snat and dnat, respectively.
Each NAT rule may have an attribute named to-addr that specifies the IPv4 address range to which the
original source or destination address is mapped. The value can be a single IPv4 address or a range
specified by two addresses, separated with the - character. If not defined, it defaults to the primary
address of the ingress interface in case of destination NAT, or that of the egress interface in case of
source NAT.
Optionally, a NAT rule can specify the TCP and UDP port range to which the original source or
destination port is mapped. The attribute is named to-port, and the value can be a single port number or

a range specified by two numbers, separated with the - character. If to-port is not specified, the original
port number is kept intact.
NAT rules may have an action attribute set to value include or exclude. The latter means that NAT is
not performed on the matching packets (unless they match an include rule processed earlier). The
default value is include.
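A sketch of both flavors (addresses are illustrative): the exclude rule is listed first so that the host 192.168.1.5 escapes the source NAT applied to the rest of the subnet.

```json
{
    "snat": [
        { "out": "inet", "src": "192.168.1.5", "action": "exclude" },
        { "out": "inet", "src": "192.168.1.0/24", "to-addr": "1.2.3.4" }
    ]
}
```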

Packet Marking Rules


Packet marking rules are used to mark incoming packets matching the specified criteria. The mark can
be used as a basis for the routing decision. Each marking rule must specify the mark using the mark
attribute, which is a 32-bit integer.
Normal marking rules are contained by the top-level list attribute named mark.
There is another top-level list attribute, named route-track, which contains route tracking rules. These
are special marking rules which cause all the subsequent packets related to the same connection to be
marked according to the rule.

Transparent Proxy Rules


Transparent proxy rules divert the matching packets to a local proxy process without altering their
headers. Such rules are contained in the top-level list named tproxy.
In addition to the firewall configuration, using a transparent proxy requires a routing configuration
where packets marked for proxying are diverted to a local process. The awall_tproxy_mark variable
can be used to specify the mark for such packets, which defaults to 1.
Proxy rules may also have an attribute named to-port for specifying the TCP or UDP port of the proxy
if it is different from the original destination port.

MSS Clamping Rules


MSS Clamping Rules are used to deal with ISPs that block ICMP Fragmentation Needed or ICMPv6
Packet Too Big packets. An MSS clamping rule overwrites the MSS option with a value specified with
the mss attribute for the matching TCP connections. If mss is not specified, a suitable value is
automatically determined from the path MTU. The MSS clamping rules are located in the top-level
dictionary named clamp-mss.

Connection Tracking Bypass Rules


Connection tracking bypass rules are used to disable connection tracking for packets matching the
specified criteria. The top-level attribute no-track is a list of such rules.
Like NAT rules, connection tracking bypass rules may have an action attribute set to value include or
exclude.

IP Sets
Any IP set referenced by rule objects should be created by awall. Auxiliary IP set objects are used to
define them in awall policy files. The top-level attribute ipset is a dictionary, the keys of which are IP
set names. The values are IP set objects, which have two mandatory attributes. The attribute named
type corresponds to the type argument of the ipset create command. family specifies whether the
set is for IPv4 or IPv6 addresses, and the possible values are inet and inet6, correspondingly.
For bitmap-type IP sets, the range attribute specifies the range of allowed IPv4 addresses. It may be
given as a network address or two addresses separated by the - character. It is not necessary to specify
family for bitmaps, since the kernel supports only IPv4 bitmaps.
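A sketch (the set name is illustrative; hash:ip is a standard ipset type): an IPv4 set declared by awall, and a filter that drops packets whose source address is in the set.

```json
{
    "ipset": {
        "blacklist4": { "type": "hash:ip", "family": "inet" }
    },

    "filter": [
        {
            "in": "inet",
            "ipset": { "name": "blacklist4", "args": "in" },
            "action": "drop"
        }
    ]
}
```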

Command Line Syntax


Translating Policy Files to Firewall Configuration Files
awall translate [-o|--output DIRECTORY] [-V|--verify]

The --verify option makes awall verify the configuration using the test mode of iptables-restore
before overwriting the old files.
Specifying the output directory allows testing awall policies without overwriting the current iptables
and ipset configuration files. By default, awall generates the configuration to /etc/iptables and
/etc/ipset.d, which are read by the init scripts.

Run-Time Configuration of Firewall


awall activate [-f|--force]

This command generates firewall configuration from the policy files and enables it. If the user
confirms the new configuration by hitting the Return key within 10 seconds or the --force option is
used, the configuration is saved to the files. Otherwise, the old configuration is restored.
There is also a command for deleting all firewall rules:
awall flush

This command configures the firewall to drop all packets.

Optional Policies
Optional policies can be enabled or disabled using this command:
awall {enable|disable} POLICY...

Optional policies can be listed using this command:


awall list

The enabled status means that the policy has been enabled by the user. The disabled status means that
the policy is not in use. The required status means that the policy has not been enabled by the user but
is in use because it is required by another policy which is in use.

Debugging Policies
This command can be used to dump variable, zone, and other definitions as well as their source
policies:
awall dump [LEVEL]

The level is an integer in the range 0 to 5 and defaults to 0. More information is displayed at higher levels.

PXE boot
Guide to options
ip
Required for PXE.
Set ip=dhcp to get an IP via DHCP. (Requires af_packet.ko in the initrd, in addition to the
modules needed for your NIC.)

Set ip=client-ip::gateway-ip:netmask::[device]: to specify an IP manually. device is a
device name (e.g. eth0). If one is not specified, one is chosen automatically.
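For example, a static configuration for a host at 192.168.1.50 behind the gateway 192.168.1.1 (addresses and interface are illustrative) would be passed on the kernel command line as:

```
ip=192.168.1.50::192.168.1.1:255.255.255.0::eth0:
```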
apkovl
Valid forms include:

An HTTP, HTTPS or FTP URL to an apkovl.tar.gz file which will be retrieved and
applied.
device_name[:fs_type]:path, where device_name does not include /dev (e.g., sda).
fs_type is optional (e.g. ext4). path expresses the path on the device to the
apkovl.tar.gz file.
A relative path, interpreted relative to the root of the alpine_dev.
If not specified, a file matching *.apkovl.tar.gz is searched for in the root of the
ovl_dev. (If more than one exists in the root of a device, all are ignored.)

alpine_dev
Required.
The alpine_dev specifies a device used for reference data which must reside on a
filesystem; currently, this is only the case for kernel modules.
This is also used to obtain APKs if a repository is not explicitly specified; see below.
Valid forms include:

A device name, not including /dev/.


UUID=filesystem-uuid
LABEL=filesystem-label
nfs:ip-address:path, specifying an NFS export to use as the device. You may need

to add modules to the initrd.


ovl_dev
Valid forms include:

device_name[:fs_type]

If not specified, various devices are searched for a file matching *.apkovl.tar.gz in
the root directory.

This argument can contain the fields {MAC} and {UUID}, which will be substituted with the
MAC address of the NIC used by Alpine and the system's DMI "Product UUID" respectively. If
these substitutions are used, the value passed to ovl_dev must be enclosed in quotes. e.g.
ovl_dev="http://.../{MAC}.apkovl.tar.gz".
alpine_repo
If set, /etc/apk/repositories will be filled with this value. May be a URL. Otherwise, Alpine tries
to find a directory containing the marker file .boot_repository on the alpine_dev.
modloop
If the specified file is an http, ftp or https URL (https requires wget to be installed), the modloop
file will be downloaded to the /lib directory and mounted afterwards, e.g.
modloop=http://192.168.1.1/pxe/alpine/grsec.modloop.squashfs in the append section of
your bootloader.

HOWTO
Alpine can be PXE booted starting with Alpine 2.6-rc2. In order to accomplish this you must complete
the following steps:

Set up a DHCP server and configure it to support PXE boot.


Set up a TFTP server to serve the PXE bootloader.
Set up an HTTP server to serve the rest of the boot files.
Set up an NFS server from which Alpine can load kernel modules.
Configure mkinitfs to generate a PXE-bootable initrd.

This article describes a setup using gpxe as a PXE bootloader, but you could also use PXELINUX.
Standard setup of all involved services is not covered here; advice on setting up basic
DHCP/TFTP/HTTP/NFS/etc. is widely available.

Set up a DHCP server and configure it to support PXE boot


If you use the ISC DHCP server (package "dhcp"), amend your subnet block like so:
next-server 10.0.0.1;
filename "gpxe.kpxe";

Set up a TFTP server to serve the PXE bootloader


Install a TFTP server (package "tftp-hpa"). You will need to place a gPXE image at
/var/tftproot/gpxe.kpxe. You can generate an image online at ROM-o-matic.net. Select the
".kpxe" output format and the "undionly" driver. You will need to specify a custom boot script. Select
"Customize". The following boot script works well:
dhcp net0
chain http://${net0/next-server}/gpxe-script

You can include ${net0/mac} and ${uuid} in the URL for the interface MAC address and machine
UUID respectively.
Note that as of writing, ROM-o-matic appears to produce a buggy image unless it is used with the
"undionly" driver. If you require a different driver, consider building gPXE yourself, especially if you
experience inexplicable connectivity issues. A common symptom is a seemingly correctly configured
but only intermittently functional network connection that appears to suffer from extreme packet loss.

Set up an HTTP server to serve the rest of the PXE boot files
Suppose you have an HTTP server configured to serve from /srv/http. Place an appropriate gPXE
script, such as the following, at /srv/http/prov/gpxe-script:
#!gpxe
kernel http://${net0/next-server}/prov/grsec ip=dhcp alpine_dev=nfs:${net0/next-server}:/srv/nfs/depot alpine_repo=http://nl.alpinelinux.org/alpine/v2.5/main/
initrd http://${net0/next-server}/prov/pxerd
boot
ip=dhcp instructs the initrd to obtain an IP via DHCP. The NFS share specified by alpine_dev will
be mounted. alpine_repo specifies an apk repository to use.

Using lpxelinux instead of gPXE

Since recent versions of syslinux, pxelinux also has support to boot over ftp/http.
The pxelinux.cfg/default file (or specific MAC address file name) should be in the same format as with
regular syslinux.
You will need to use a copy of the lpxelinux.0 found when installing syslinux on alpine:
/usr/share/syslinux/lpxelinux.0 and copy it to your tftp server.
Don't forget to also copy ldlinux.c32, as it's a dependency of syslinux variants (see documentation).
DEFAULT alpine
LINUX http://ipaddr/grsec
INITRD http://ipaddr/grsec.gz
APPEND ip=dhcp modules=loop,squashfs,sd-mod,usb-storage alpine_repo=http://repo-url
modloop=http://ipaddr/grsec.modloop.squashfs
apkovl=http://ipaddr/localhost.apkovl.tar.gz

Using pxelinux instead of gPXE


Since recent versions of syslinux, pxelinux also has support to boot over tftp.
The pxelinux.cfg/default file (or specific MAC address file name) should be in the same format as with
regular syslinux.
You will need to use a copy of the pxelinux.0 found when installing syslinux on alpine:
/usr/share/syslinux/pxelinux.0 and copy it to your tftp server.
Don't forget to also copy ldlinux.c32, as it's a dependency of syslinux variants (see documentation).
PROMPT 0
TIMEOUT 3
default alpine
LABEL alpine
LINUX alpine-vmlinuz-grsec
INITRD alpine-pxerd
APPEND ip=dhcp alpine_dev=nfs:192.168.1.1:/srv/boot/alpine
modloop=http://192.168.1.1/modloop-grsec nomodeset quiet
apkovl=http://192.168.1.1/localhost.apkovl.tar.gz

vmlinuz-grsec is taken from a system running in memory from USB.
pxerd is generated on a system running in memory from USB, with the network, nfs and virtio_net
features added.
/srv/boot/alpine is a copy of /media/usb from a system running in memory from USB.
modules=loop,squashfs,sd-mod,usb-storage is not needed, as loop and squashfs are hard-coded into
the init script and we use neither sd nor usb.
modloop=http://ipaddr/grsec.modloop.squashfs does not seem to work (*), and neither does
apkovl=http://ipaddr/localhost.apkovl.tar.gz.
(*) About the modloop problem: /etc/init.d/modloop tries to load the file from /media/nfs instead of
/media/alpine and starts trying to mount it (unsuccessfully). A proposed fix is described at
http://bugs.alpinelinux.org/issues/4015

Set up an NFS server from which Alpine can load kernel modules

NOTE: now that modloop can be loaded over http, there is no need to serve modules from NFS.
Set up an NFS share at /srv/nfs/depot and export it via /etc/exports:
/srv/nfs/depot

*(ro,no_root_squash,no_subtree_check)

This export does not currently need to contain anything, unless you wish to use it to serve apks, in
which case ensure that a file ".boot_repository" is created in the directory containing architecture
subdirectories and remove alpine_repo from the kernel arguments. The repository will be autodetected
by searching for ".boot_repository". Eventually Alpine will be able to load kernel modules from this
export.

Configure mkinitfs to generate a PXE-bootable initrd


NOTE: There is currently a mkinitfs profile just for networking called: network. Using it will
automatically add pxe support and all ethernet drivers to the initramfs.
You need to add drivers for any Ethernet cards with which you might PXE boot to your initrd. To do
this, create /etc/mkinitfs/features.d/network.modules. List any kernel drivers you require for
your Ethernet card. If you are using an Intel E1000 card (this is used by VMware and VirtualBox, and
so is good for testing), add
kernel/drivers/net/ethernet/intel/e1000/*.ko

You also must create the following files so that the modules and scripts necessary for DHCP and NFS
are inserted into the initrd.
/etc/mkinitfs/features.d/dhcp.files, containing:
/usr/share/udhcpc/default.script
/etc/mkinitfs/features.d/dhcp.modules, containing:
kernel/net/packet/af_packet.ko
/etc/mkinitfs/features.d/nfs.modules, containing:
kernel/fs/nfs/*

Finally edit /etc/mkinitfs/mkinitfs.conf and add features squashfs, network, dhcp and nfs.
Generate a PXE-capable initrd by running
mkinitfs -o /srv/http/prov/pxerd

You should now be able to PXE-boot Alpine Linux. This feature is still in development and non-fatal
post-initrd boot errors (regarding modloop, etc.) are to be expected.
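The mkinitfs feature-file setup described above can be sketched as a short shell session. The staging prefix ROOT is an assumption so the commands can be tried without touching the real /etc (set ROOT to "" on an actual Alpine box), and the exact features line is illustrative since the default feature list varies per release.

```shell
# Staging prefix so this sketch does not touch the real /etc.
ROOT="${ROOT:-/tmp/mkinitfs-demo}"
mkdir -p "$ROOT/etc/mkinitfs/features.d"

# DHCP support: the udhcpc script plus the af_packet module
echo "/usr/share/udhcpc/default.script" > "$ROOT/etc/mkinitfs/features.d/dhcp.files"
echo "kernel/net/packet/af_packet.ko"   > "$ROOT/etc/mkinitfs/features.d/dhcp.modules"

# NFS support: the nfs filesystem modules
echo "kernel/fs/nfs/*" > "$ROOT/etc/mkinitfs/features.d/nfs.modules"

# Enable the features (illustrative; merge with your existing mkinitfs.conf)
echo 'features="base squashfs network dhcp nfs"' > "$ROOT/etc/mkinitfs/mkinitfs.conf"

cat "$ROOT/etc/mkinitfs/mkinitfs.conf"
```

On a real system you would then generate the initrd with mkinitfs -o /srv/http/prov/pxerd as shown above.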

Specifying an apkovl
If you wish to specify an apkovl, simply add
apkovl=http://..../file.apkovl.tar.gz

to the kernel arguments. {MAC} and {UUID} in this parameter will be substituted with the MAC
address of the boot interface and the machine UUID respectively. If you use these parameters, ensure
you place the URL in quotes.
You can also use ovl_dev= if you want to obtain an apkovl from a device. Use either apkovl or
ovl_dev, not both.

Using serial modem

Install packages
Install required packages
apk add ppp

Load modules
Load some needed modules and make sure they get automatically loaded at next reboot

modprobe ppp
echo "ppp" >> /etc/modules

Configfiles
/etc/ppp/peers/serialmodem
(The filename 'serialmodem' can be changed to whatever is appropriate for you, but you will need to
remember it when running the pon/poff commands)
debug
/dev/ttyS0
115200
modem
crtscts
asyncmap 0
defaultroute
lock
noauth
user '{login_id}'
connect '/usr/sbin/chat -v -f /etc/ppp/chat-serialmodem'

/etc/ppp/chat-serialmodem
(The filename 'chat-serialmodem' can be changed to whatever is appropriate for you, but you will
need to modify the above config file to reflect your choice)
ABORT 'BUSY'
ABORT 'ERROR'
ABORT 'NO ANSWER'
ABORT 'NO CARRIER'
ABORT 'NO DIALTONE'
ABORT 'Invalid Login'
ABORT 'Login incorrect'
REPORT 'CONNECT'
TIMEOUT '60'
'' 'ATZ'
OK 'ATDT{phonenumber}'
CONNECT '\d\c'
Username: '{login_id}'
Password: '{your_password}'

/etc/ppp/pap-secrets
When you look at the logs and see pppd report something like this:
daemon.debug pppd[5665]: rcvd [LCP ConfReq id=0xf6 <asyncmap 0xa0000> <auth pap>
<magic 0xa239b2b1> <pcomp> <accomp>]

(Note the "<auth pap>" section)


Then you might need to use pap-secrets file (or chap-secrets depending on what pppd reports in the
logs).
Your file might in this case look something like this:

# client      server   secret             IP addresses
{login_id}    *        {your_password}    *

If you are using 'pap-secrets' (or 'chap-secrets') you should most likely comment out the 'Username:' and
'Password:' lines in your '/etc/ppp/chat-serialmodem' config.

Note for above example configs


Note: Replace the words {login_id}, {your_password} and {phonenumber} above with what you
received from your ISP. The characters { and } should also be removed.
Note: You might need to change the 'Username:' and 'Password:' parts to match the prompts your
ISP uses when asking you to enter your credentials.
Tip: You might need to replace "CONNECT '\d\c'" with "CONNECT 'CLIENT'" in your chat-serialmodem config.
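Filling in the placeholders can be scripted with sed. A small sketch, working on a scratch copy (the substituted login and phone number are made-up values; on the real box the files live under /etc/ppp/):

```shell
# Substitute the {login_id} and {phonenumber} placeholders in a scratch copy
# of a peers/chat file. 'myisplogin' and '5551234' are example values only.
f=$(mktemp)
printf "user '{login_id}'\nOK 'ATDT{phonenumber}'\n" > "$f"
sed -i -e 's/{login_id}/myisplogin/' -e 's/{phonenumber}/5551234/' "$f"
result=$(cat "$f")      # both placeholders (braces included) are now gone
echo "$result"
rm -f "$f"
```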
References:

http://axion.physics.ubc.ca/ppp-linux.html
http://www.yolinux.com/TUTORIALS/LinuxTutorialPPP.html
http://www.linux.org/docs/ldp/howto/PPP-HOWTO/options.html

Start/Stop

Start connection
pon serialmodem

Stop connection
poff serialmodem

If something goes wrong...


Check if process is running
pidof pppd

Logfile might give you a clue on what went wrong


egrep "pppd|chat" /var/log/messages

Check nic information


ifconfig ppp0

pppd has a status information function that could come in handy


pppstats

Using HSDPA modem

Install packages
Install required packages
apk add ppp

Load modules
Now let's load the needed module and prepare it to get automatically loaded at next reboot.
modprobe ppp
echo "ppp" >> /etc/modules

Configfiles
/etc/ppp/peers/E220
(The filename 'E220' can be changed to whatever is appropriate for you, but you will need to
remember it when running the pon/poff command)
debug
/dev/ttyUSB0
460800
crtscts
modem
noauth
usepeerdns
defaultroute
noipdefault
noccp
nobsdcomp
novj
connect '/usr/sbin/chat -v -f /etc/ppp/chat-E220-pin || /usr/sbin/chat -f
/etc/ppp/chat-E220-nopin'

/etc/ppp/chat-E220-pin
(The filename 'chat-E220-pin' can be changed to whatever is appropriate for you, but you will need
to modify the above configfile to reflect your decision)
ABORT "BUSY"
ABORT "ERROR"
ABORT "NO CARRIER"
REPORT "CONNECT"
TIMEOUT "10"
"" "ATZ"
OK "AT+CPIN={PIN}"
OK AT+CGDCONT=1,"ip","{APN}"
OK "ATE1V1&D2&C1S0=0+IFC=2,2"
OK "AT+IPR=115200"
OK "ATE1"
TIMEOUT "60"
"" "ATD*99***1#"
CONNECT \c

/etc/ppp/chat-E220-nopin
(The filename 'chat-E220-nopin' can be changed to whatever is appropriate for you, but you will need
to modify the above configfile to reflect your decision)
ABORT "BUSY"
ABORT "ERROR"
ABORT "NO CARRIER"
REPORT "CONNECT"
TIMEOUT "10"
"" "ATZ"
OK AT+CGDCONT=1,"ip","{APN}"
OK "ATE1V1&D2&C1S0=0+IFC=2,2"
OK "AT+IPR=115200"
OK "ATE1"
TIMEOUT "60"
"" "ATD*99***1#"
CONNECT \c

Note: Replace the word {PIN} above with the PIN of your SIM card (typically a 4 digit code)
Note: Replace the word {APN} above with the "Access Point Name" of the service you use (for instance
mine is "web.omnitel.it"). If you don't know the Internet APN, ask your service provider

Routes
Create a default gw route to your 'ppp0' device.
ip route add default dev ppp0

DNS
Figure out what DNS-servers your provider has.
egrep -i 'pppd.*dns' /var/log/messages

This might give you some useful information.


Search for an IP address that might be your provider's DNS server and add it to
'/etc/resolv.conf'.
echo "nameserver {DNS-server-IP-address}" > /etc/resolv.conf
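Pulling the nameserver address out of the log can also be scripted. A sketch, run against a sample log line here (the line's exact wording is an assumption about what pppd prints with 'usepeerdns'; on the real box you would pipe /var/log/messages instead):

```shell
# Extract the last dotted-quad IP from a pppd log line and build a
# resolv.conf entry from it. The sample line below is an assumed example.
logline='Jan  1 12:00:00 box daemon.info pppd[1234]: primary   DNS address 10.11.12.13'
dns_ip=$(echo "$logline" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | tail -n 1)
echo "nameserver $dns_ip"    # this is the line that would go into /etc/resolv.conf
```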

Start/Stop

Start connection
pon E220

Stop connection

poff E220

If something goes wrong...


Check if process is running
pidof pppd

Logfile might give you a clue on what went wrong


egrep "pppd|chat" /var/log/messages

Check nic information


ifconfig ppp0

pppd has a status information function that could come in handy


pppstats

Huawei E378
Tested on Alpine 2.3.3.
Add usb-modeswitch (currently only in testing repo):
apk add usb-modeswitch

/etc/modules:
usbserial vendor=0x12d1 product=0x1446

/etc/usb_modeswitch.conf:
# Configuration for the usb_modeswitch package, a mode switching tool for
# USB devices providing multiple states or modes
#
# This file is evaluated by the wrapper script "usb_modeswitch_dispatcher"
# in /usr/sbin
#
# To enable an option, set it to "1", "yes" or "true" (case doesn't matter)
# Everything else counts as "disable"

# Disable automatic mode switching globally (e.g. to access the original
# install storage)
DisableSwitching=0

# Enable logging (results in an extensive report file in /var/log, named
# "usb_modeswitch_<interface-name>" (and probably others)
EnableLogging=1

DefaultVendor=0x12d1
DefaultProduct=0x1446
TargetVendor=0x12d1
TargetProduct=0x14ac
MessageContent="55534243000000000000000000000011060000000000000000000000000000"
CheckSuccess=5

/etc/network/interfaces
auto ppp0
iface ppp0 inet ppp
provider E378
pre-up usb_modeswitch -c /etc/usb_modeswitch.conf

/etc/ppp/peers/E378:
debug
/dev/ttyUSB0
460800
crtscts
modem
noauth
usepeerdns
defaultroute
noipdefault
noccp
nobsdcomp
novj
connect '/usr/sbin/chat -v -f /etc/ppp/peers/chat-E378-nopin'

/etc/ppp/peers/chat-E378-nopin:
ABORT "BUSY"
ABORT "ERROR"
ABORT "NO CARRIER"
REPORT "CONNECT"
TIMEOUT "10"
"" "ATZ"
OK AT+CGDCONT=1,"ip","isp.telus.com"
OK AT+CGQREQ=1,2,4,3,6,31
OK AT+CGQMIN=1,2,4,3,6,31
OK AT+CGATT=1
OK ATD*99#
CONNECT \c

Novatel MC679
Tested on Alpine 2.4.5.
Add usb-modeswitch (currently only in testing repo):
apk add usb-modeswitch

/etc/modules:
usbserial vendor=0x1410 product=0x7042

/etc/usb_modeswitch.conf:
# Configuration for the usb_modeswitch package, a mode switching tool for
# USB devices providing multiple states or modes
#
# This file is evaluated by the wrapper script "usb_modeswitch_dispatcher"
# in /usr/sbin
#
# To enable an option, set it to "1", "yes" or "true" (case doesn't matter)
# Everything else counts as "disable"

# Disable automatic mode switching globally (e.g. to access the original
# install storage)
DisableSwitching=0

# Enable logging (results in an extensive report file in /var/log, named
# "usb_modeswitch_<interface-name>" (and probably others)
EnableLogging=1

DefaultVendor=0x1410
DefaultProduct=0x5059
TargetVendor=0x1410
TargetProduct=0x7042
MessageContent="5553424312345678000000000000061b000000020000000000000000000000"
NeedResponse=1

/etc/network/interfaces
auto ppp0
iface ppp0 inet ppp
provider MC679
pre-up usb_modeswitch -c /etc/usb_modeswitch.conf || true

/etc/ppp/peers/MC679:
debug
/dev/ttyUSB0
460800
crtscts
modem
noauth
usepeerdns
defaultroute
noipdefault
noccp
nobsdcomp
novj
connect '/usr/sbin/chat -v -f /etc/ppp/peers/chat-MC679-nopin'

/etc/ppp/peers/chat-MC679-nopin:
ABORT "BUSY"
ABORT "ERROR"
ABORT "NO CARRIER"
REPORT "CONNECT"
TIMEOUT "10"
"" "ATZ"
OK ATD*99#
CONNECT \c

Setting up an ssh-server
If you need to administer an Alpine Linux box remotely, you can install and use OpenSSH. OpenSSH
provides secure, encrypted communication between you and the host where it is running (the
ssh-server is called sshd and the ssh-client is called ssh).

Installation
Install package:
apk add openssh

Note: If you want the ACF-frontend for openssh, you should install 'acf-openssh' instead (assuming
that you have setup-acf)

Make it autostart
The next time you reboot your Linux box, you will probably want sshd to start automatically.
rc-update add sshd

You can check your boot services:


rc-status

Start it up now
The reason we want to manually start sshd at this moment is that we want sshd to create some initial
files that it needs. After they are created, we can permanently save them.
The other reason is... we don't have time to wait for the box to reboot ;-)
/etc/init.d/sshd start

Note: Don't forget to permanently save your settings by using the 'lbu ci' command when you are
done.

Fine tuning
The default config that comes with openssh has pretty good default values.
But sometimes you would like to fine-tune things. We show some examples below on what you might
want to do.

Note: You are _not_ required to follow this #Fine_tuning section. You can skip it if you want to
make things easy!
The fine-tuning is done by editing /etc/ssh/sshd_config.
A "#" marks that the rest of the line should be ignored by sshd; everything to the right of the "#" is
treated as a comment.
UseDNS no                    # Setting this to "no" can speed things up when a client starts to connect to this ssh-server
PasswordAuthentication no    # Instead you could use private/public keys to authenticate to this box (this increases security for the box)

Many other options are found in /etc/ssh/sshd_config. The describing text that comes in the same file
will guide you in your fine-tuning.

Firewalling
By default, sshd communicates on port '22' using protocol 'TCP'.
You need to make sure that the box where sshd is running doesn't block your connection
attempts on 22/TCP.
If you still have trouble accessing your box, make sure that there is no other firewall blocking your
connection.
Sometimes 22/TCP is blocked by some firewall that you cannot control. In those cases you might want
to configure sshd to communicate on some other port.
In that case, change /etc/ssh/sshd_config to reflect your needs.
But before you do so, check that the port you pick isn't already in use. (You can check
this with the command 'netstat -ln' on the box where you plan to run sshd.)
Port 443

# Use whatever port number that fits your needs
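The port-in-use check can be sketched as a one-liner; 443 is the example port from above (on a box without netstat installed the check simply reports the port as free, so treat it as a hint, not a guarantee):

```shell
# Check whether a candidate port is already bound before pointing sshd at it
port=443
if netstat -ln 2>/dev/null | grep -q ":$port "; then
    status="port $port is already in use; pick another"
else
    status="port $port looks free"
fi
echo "$status"
```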

You need to restart sshd after you have made your modifications.


/etc/init.d/sshd restart

Save settings
If you haven't already done so, save all your settings
lbu ci

Alternative: Dropbear
Alternatively you can use Dropbear. Install it through the Alpine setup scripts, or manually with:
apk add dropbear

Start it:

rc-service dropbear start

And if you are happy with it, add it to the default runlevel:
rc-update add dropbear

Use the following command to check all available server options:


dropbear -h

The config file is located at /etc/conf.d/dropbear

dropbear also includes an SSH client which in its simplest form can be used like this:

dbclient x.x.x.x

(where x.x.x.x is the destination server). Use dbclient -h to see all available options.

How to setup a wireless access point

Install needed packages


apk add hostapd wireless-tools wpa_supplicant

Check that card is detected


Run cat /proc/net/dev and see which cards are detected. If no cards are available, check what driver the card
uses and modprobe it. Check that the card is in master mode.

Setup Bridge
Please see Bridge for how to set up network bridges.

Setup Encryption
Edit /etc/hostapd/hostapd.wpa_psk and insert the following, replacing PASSPHRASE with the
WPA_PSK key you would like to use (remove keys that you don't want to use):
00:00:00:00:00:00 PASSPHRASE

Setup hostapd
Edit /etc/hostapd/hostapd.conf and replace entries that need to be such as interface, bridge, driver, ssid,
etc. Example file below:

interface=wlan0
bridge=br0
driver=hostap
logger_syslog=-1
logger_syslog_level=2
logger_stdout=-1
logger_stdout_level=2
debug=0
dump_file=/tmp/hostapd.dump
ctrl_interface=/var/run/hostapd
ctrl_interface_group=0
ssid=SecureSSID
#macaddr_acl=1
#accept_mac_file=/etc/hostapd/accept
auth_algs=3
eapol_key_index_workaround=0
eap_server=0
wpa=3
wpa_psk_file=/etc/hostapd/hostapd.wpa_psk
wpa_key_mgmt=WPA-PSK
wpa_pairwise=CCMP

If you wish to use MAC address filtering, uncomment the lines starting with macaddr_acl and
accept_mac_file, create /etc/hostapd/accept (with 600 permissions) and add the allowed clients' MAC
address to the file.
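Creating the accept file with the right permissions can be sketched as below. A scratch directory stands in for /etc/hostapd here, and the MAC addresses are made-up examples:

```shell
# Create the MAC allow-list with restrictive (600) permissions.
# On the access point the real path would be /etc/hostapd/accept.
dir=$(mktemp -d)
umask 077                                   # new files get no group/other access
printf '%s\n' 00:11:22:33:44:55 00:aa:bb:cc:dd:ee > "$dir/accept"
perms=$(stat -c %a "$dir/accept")
echo "$perms"                               # 600
rm -rf "$dir"
```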
Start hostapd.
/etc/init.d/hostapd start

Associate clients
Associate a few different clients to test.

Setting up a OpenVPN server


This article describes how to set up an OpenVPN server on Alpine Linux. This is an ideal
solution for allowing single users or devices to remotely connect to your network. To establish
connectivity with a Remote Office or site, Racoon/Opennhrp would provide better functionality.
It is recommended that you have a publicly routable static IP address in order for this to work. This
means that your IP address cannot be in the private IP address ranges described here: WikiPedia
If your Internet-connected machine doesn't have a static IP address, DynDNS can be used for resolving
DNS names to IP addresses.

Setup Alpine

Initial Setup
Follow Installing_Alpine to setup Alpine Linux.

Install programs
Install openvpn
apk add openvpn

Prepare autostart of OpenVPN


rc-update add openvpn default
modprobe tun
echo "tun" >>/etc/modules

Certificates
One of the first things that needs to be done is to make sure that you have secure keys to work with.
Alpine makes this easy by having a web interface to manage the certificates. Documentation for it can
be found here: Generating_SSL_certs_with_ACF. It is a best practice not to have your certificate server
be on the same machine as the router being used for remote connectivity.
You will need to create a server (ssl_server_cert) certificate for the server and one client
(ssl_client_cert) for each client. To use the certificates, you should download the .pfx file and extract it.
To extract the three parts of each .pfx file, use the following commands:
To get the ca cert out...
openssl pkcs12 -in PFXFILE -cacerts -nokeys -out ca.pem

To get the cert file out...


openssl pkcs12 -in PFXFILE -nokeys -clcerts -out cert.pem

To get the private key file out. Make sure this stays private.
openssl pkcs12 -in PFXFILE -nocerts -nodes -out key.pem
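The three extraction commands can be exercised end-to-end on a throwaway certificate. A sketch, in which everything (names, subject, password) is a made-up example rather than a value from this document:

```shell
# Round-trip: build a throwaway self-signed cert, pack it into a .pfx,
# then pull the private key back out exactly as in the command above.
dir=$(mktemp -d); cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -days 1 -subj "/CN=test" 2>/dev/null
openssl pkcs12 -export -inkey key.pem -in cert.pem -out test.pfx -passout pass:secret
openssl pkcs12 -in test.pfx -nocerts -nodes -passin pass:secret -out extracted.pem 2>/dev/null
grep -q "PRIVATE KEY" extracted.pem && echo "private key extracted"
```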

On the VPN server, you can also install the acf-openvpn package, which contains a web page to
automatically upload and extract the server certificate. There is also a button to automatically generate
the Diffie Hellman parameters.
If you would prefer to generate your certificates using OpenVPN utilities, see #Alternative Certificate
Method

Configure OpenVPN server


Example configuration file for server. Place the following content in /etc/openvpn/openvpn.conf:
local "Public IP address"
port 1194
proto udp
dev tun
ca openvpn_certs/server-ca.pem
cert openvpn_certs/server-cert.pem
dh openvpn_certs/dh1024.pem  # to generate by hand: openssl dhparam -out dh1024.pem 1024

server 10.0.0.0 255.255.255.0


ifconfig-pool-persist ipp.txt
push "route 10.0.0.0 255.0.0.0"
push "dhcp-option DNS 10.0.0.1"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status /var/log/openvpn-status.log
log-append /var/log/openvpn.log
verb 3

(Instructions are based on openvpn.net/howto.html#server)

Test your configuration


Test configuration and certificates
openvpn --config /etc/openvpn/openvpn.conf

Configure OpenVPN client


Example client.conf:
client
dev tun
proto udp
remote "public IP" 1194
resolv-retry infinite
nobind
ns-cert-type server  # The certificate on the OpenVPN server needs to have this field. Prevents MitM attacks.
persist-key
persist-tun
ca client-ca.pem
cert client-cert.pem
key client-key.pem
comp-lzo
verb 3

(Instructions are based on openvpn.net/howto.html#client)

Save settings
Don't forget to save all your settings if you are running a RAM-based system.
lbu commit

More than one server or client


If you want more than one server or client running on the same Alpine box, use the standard Multiple
Instances of Services process.
For example, to create a config named "AlphaBravo":

Create an appropriate /etc/openvpn/openvpn.conf file, but name it "/etc/openvpn/AlphaBravo.conf"
Create a new symlink of the init.d script:

ln -s /etc/init.d/openvpn /etc/init.d/openvpn.AlphaBravo

Have the new service start automatically

rc-update add openvpn.AlphaBravo

Alternative Certificate Method

Manual Certificate Commands


(Instructions are based on openvpn.net/howto.html#pki)

Initial setup for administrating certificates


The following instructions assume that you want to save your configs, certs and keys in
/etc/openvpn/keys.
Start by installing easy-rsa and moving into its folder to execute the commands:
apk add openvpn-easy-rsa
cd /usr/share/doc/openvpn/easy-rsa

If not already done, create a folder where you will save your certificates and save a copy of the
vars file for later use.
(All files in the easy-rsa folder are overwritten when the computer is restarted)
mkdir /etc/openvpn/keys
cp ./vars /etc/openvpn/keys

If not already done, edit /etc/openvpn/keys/vars


(This file is used for defining paths and other standard settings)
vim /etc/openvpn/keys/vars

Change KEY_DIR= from "$EASY_RSA/keys" to "/etc/openvpn/keys"


Change KEY_SIZE, CA_EXPIRE, KEY_EXPIRE, KEY_COUNTRY, KEY_PROVINCE, KEY_CITY,
KEY_ORG, KEY_EMAIL to match your system.

Source the vars file to set the properties


source /etc/openvpn/keys/vars

touch /etc/openvpn/keys/index.txt
echo 00 > /etc/openvpn/keys/serial

Set up a 'Certificate Authority' (CA)


Clean up the keys folder.
./clean-all

Generate Diffie Hellman parameters


./build-dh

Now let's make the CA certificates and keys


./build-ca

Set up an 'OpenVPN Server'


Create server certificates
./build-key-server <commonname>

Set up an 'OpenVPN Client'


Create client certificates
./build-key <commonname>

Revoke a certificate
To revoke a certificate
./revoke-full <commonname>

The revoke-full script will generate a CRL (certificate revocation list) file called crl.pem in the keys
subdirectory.
The file should be copied to a directory where the OpenVPN server can access it, then CRL verification
should be enabled in the server configuration:
crl-verify crl.pem

OpenVPN and LXC


Let's call this LXC "mylxc"...

On the host
modprobe tun
mkdir /var/lib/lxc/mylxc/rootfs/dev/net
mknod /var/lib/lxc/mylxc/rootfs/dev/net/tun c 10 200
chmod 666 /var/lib/lxc/mylxc/rootfs/dev/net/tun

In /var/lib/lxc/mylxc/config
lxc.cgroup.devices.allow = c 10:200 rwm

In the guest
apk add openvpn

Then config as usual...


This should work both as server and as client.

Generating SSL certs with ACF


You need to create certificates for servers or remote persons. You might need an SSL cert for your web
server running lighttpd or mini_httpd. You might use something like openvpn or racoon for your VPN
services. Wouldn't it be nice to have some way to manage and view all the certs you have given to
everyone? Revoke the certs? Review the certificate before you issue it? Alpine, via ACF, has a nice
web interface to use for this sort of job...

Installation Process
This section will guide you through the process of creating this type of server. It is suggested not to
host this on your VPN gateway, but to use another machine to generate your certificates.

Install Alpine
Link below to the standard document.
Installing_Alpine

Install and Configure ACF


Run the following command: This will install the web front end to Alpine Linux, called ACF.
/sbin/setup-acf

Install acf-openssl
apk add acf-openssl

Browse to your computer https://ipaddr/


Login as root.
Click on the User Management tab and create yourself an account.

Acf-openssl
From the navigation bar on the left, under the Applications section, click the Certificate Authority link.
If you already have a CA that you would like to have the web interface manage you can upload it from
the Status page (as a pfx).
From the Status tab, click Configure (to remove most of the error messages).
If you do not have a CA, generate a new CA certificate: click the Edit Defaults tab, input the items
that will be needed for the CA and any other certs generated from it, then click Save. Click the Status
tab, input values for the input boxes to generate a CA, and click Generate.

Generate a certificate with ACF


Request Form
Provided Fields:

Country Name (2 letter abbreviation)
Locality Name (e.g. city)
Organization Name
Common Name (e.g. the certificate CN)
Email Address
Multiple Organizational Unit Name (e.g. division)
Certificate Type

A box has been set aside for adding additional x509 extensions, formatted the same as if you were
filling out a section directly in openssl.cnf (the section would be [v3_req]).
You could put in here:

subjectAltName ="IP:192.168.1.1"
subjectAltName ="DNS:192.168.1.10"

Here is also where you would specify the CRL / OCSP distribution point, from where clients can query
information:

crlDistributionPoints=URI:http://whatever.com/whatever.crl

Once this form has been filled out and the password entered click submit.

View
Go to the View tab after you have the request form submitted. The view tab will show you pending
requests for certificates. Also available from this tab are already approved requests (generated certs),
revoked certs, and the CRL.
For a pending request, make sure to review the cert before approving it. Once you have verified that all
the information is correct, with no typos or spelling mistakes, approve the request.
The file that will be generated can be downloaded from the ACF. Use the command lines below to
extract the pkcs12 file into its parts to begin using it.

Extract PFX certificate


To get the CA CERT
openssl pkcs12 -in PFXFILE -cacerts -nokeys -out cacert.pem

To get the Private Key


openssl pkcs12 -in PFXFILE -nocerts -nodes -out mykey.pem

Since this file contains the key without password protection, make sure to set restrictive permissions
on this file.
To get the Certificate
openssl pkcs12 -in PFXFILE -nokeys -clcerts -out mycert.pem

To get the Certificate and Private key in a single file (For lighttpd or mini_httpd for instance)
openssl pkcs12 -in PFXFILE -nodes -out server.pem

Since this file contains the key without password protection, make sure to set restrictive permissions
on this file.
To get the CA Chain (For lighttpd for instance)
openssl pkcs12 -in PFXFILE -nokeys -cacerts -chain -out ca-certs.pem

Display the cert or key readable/text format


openssl x509 -in mycert.pem -noout -text

Examples
Replacing the ACF SSL cert
By default, setup-acf uses mini_httpd with a self-signed certificate for serving ACF webpages. We can
replace the self-signed certificate with one signed by our new CA.
Create a certificate of type 'ssl_server_cert' with appropriate settings (i.e. Common Name = server
name)
Download the certificate pfx and upload it to the ACF server (remember, this is generally separate from
the standalone Certificate Authority server)
Replace the mini_httpd server certificate

openssl pkcs12 -in PFXFILE -nodes -out /etc/ssl/mini_httpd/server.pem

Restart mini_httpd
/etc/init.d/mini_httpd restart

Generating server and client certs for OpenVPN


For OpenVPN use, we need a server certificate and one client certificate for each user. ACF can be
used to generate all of them, including allowing users to request their own client certificates.
Generate a certificate of type 'ssl_server_cert' with appropriate settings for the OpenVPN server.
Copy the server certificate pfx to the OpenVPN server and extract the certificate using the commands
above. Configuration of the OpenVPN server is beyond the scope here.
Create an ACF user account on the Certificate Authority server for each OpenVPN user. From the
navigation bar, click on User Management under System. Click on Create. Create a user with
CERT_REQUESTER role for each user. You could set the user Home to /openssl/openssl/read to
default to showing that user's certificates.
Each user can request his own client certificate. Log in as the new user. Create a certificate request for
a certificate of type 'ssl_client_cert' with appropriate settings.
You can view and approve the requested certificates as described above.
The user can then download and install the client certificate pfx on his OpenVPN client. Once again,
this is beyond the scope of this document.

Extras
OpenSSL command line to create your CA
The following command will need a password. Make sure to remember this.
openssl genrsa -des3 -out server.key 2048
openssl req -new -key server.key -out server.csr
openssl rsa -in server.key -out server.pem
openssl x509 -req -days 365 -in server.csr -signkey server.pem -out cacert.pem
mv server.pem /etc/ssl/private; mv cacert.pem /etc/ssl/

Edits to /etc/ssl/openssl-ca-acf.cnf
Via the expert tab on ACF edit the openssl-ca-acf.cnf file. Something like subjectAltName can be
added to be used by the certificates that you generate.
3.subjectAltName = Assigned IP Address
3.subjectAltName_default = 192.168.1.1/32

Setting up unbound DNS server


Unbound is a validating, recursive, and caching DNS resolver that supports DNSSEC.

Install
Install the unbound package:
apk add unbound

Configure
The following configuration is an example of a caching name server (in a production server, it's
recommended to adjust the access-control parameter to limit access to your network). The forward-zone
section(s) will forward all DNS queries to the specified servers. Don't forget to change the
'interface' parameter to one of your local interface IPs (or 0.0.0.0 to listen on all local IPv4 interfaces).
The following is a minimal example with many options commented out.
/etc/unbound/unbound.conf
server:
    verbosity: 1
    ## Specify the interface address to listen on:
    interface: 10.0.0.1
    ## To listen on all interfaces use:
    #interface: 0.0.0.0
    do-ip4: yes
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    do-daemonize: yes
    access-control: 0.0.0.0/0 allow
    ## Other access control examples
    #access-control: 192.168.1.0/24 action
    ## 'action' should be replaced by any one of:
    ##   deny (drop message)
    ##   refuse (sends a DNS rcode REFUSED error message back)
    ##   allow (recursive ok)
    ##   allow_snoop (recursive and nonrecursive ok)
    ## Minimum lifetime of cache entries in seconds. Default is 0.
    #cache-min-ttl: 60
    ## Maximum lifetime of cached entries. Default is 86400 seconds (1 day).
    #cache-max-ttl: 172800
    ## enable to not answer id.server and hostname.bind queries.
    hide-identity: yes
    ## enable to not answer version.server and version.bind queries.
    hide-version: yes
    ## default is to use syslog, which will log to /var/log/messages.
    use-syslog: yes
    ## to log elsewhere, set 'use-syslog' to 'no' and set the log file location below:
    #logfile: /var/log/unbound

python:

remote-control:
    control-enable: no

## Stub zones are like forward zones (see below) but must only contain
## authority servers (no recursive servers)
#stub-zone:
#    name: "my.test.com"
#    stub-addr: 172.16.1.1
#    stub-addr: 172.16.1.2

## Note: for forward zones, the destination servers must be able to handle
## recursion to other DNS servers

## Forward all *.example.com queries to the server at 192.168.1.1
#forward-zone:
#    name: "example.com"
#    forward-addr: 192.168.1.1

## Forward all other queries to the Verizon DNS servers
forward-zone:
    name: "."
    ## Level3 / Verizon
    forward-addr: 4.2.2.1
    forward-addr: 4.2.2.4

root-hints
Instead of forwarding queries to a public DNS server, you may prefer to query the root DNS servers.
To do this, comment out the forwarding entries ("forward-zone" sections) in the config. Then, grab the
latest root hints file using wget:
wget http://www.internic.net/domain/named.cache -O /etc/unbound/root.hints

And finally point unbound to the root hints file by adding the following line to the server section of the
unbound config file:
root-hints: "/etc/unbound/root.hints"

Restart unbound to ensure the changes take effect. You may wish to setup a cron job to update the root
hints file occasionally.
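Such a cron job could look like the following root-crontab entry. The monthly schedule and the restart step are assumptions, not taken from this document:

```
# /etc/crontabs/root (hypothetical entry): refresh root hints monthly at 04:00
0 4 1 * * wget -q http://www.internic.net/domain/named.cache -O /etc/unbound/root.hints && rc-service unbound restart
```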

0x20 bit
Use of the 0x20 bit is considered experimental. It makes use of an otherwise unused bit in a DNS
packet to ask an authoritative server to respond with an answer mimicking the case used in the query.
For example, when using this feature a query for www.google.com could appear in the request as
www.google.com or Www.GoogLe.coM or WWW.GoOGlE.cOm or any other combination of upper
and lower case. The authoritative server should respond with the same case. This helps prevent DNS
spoofing attacks.
In some cases a very small number of old or misconfigured servers may return an error (less than 1% of
servers will respond incorrectly). To turn on this feature, simply add the following line to the 'server'
section of /etc/unbound/unbound.conf and restart the server:
use-caps-for-id: yes

Set auto-start, start and test the daemon


Check the configuration for errors:
unbound-checkconf

and if no errors are reported, set to auto-start then start unbound:


rc-update add unbound
rc-service unbound start

Test, for example:


dig nl.alpinelinux.org @10.0.0.1

or:
nslookup www.google.cz 10.0.0.1

or use drill:
drill www.bbc.co.uk @10.0.0.1

Setting up nsd DNS server


NSD is an authoritative-only DNS server. The following page shows how to setup a single-zone
configuration, with one server being a master where updates are made, and a slave which will have
changes replicated to it automatically. In the examples 10.1.0.1 is used as the master server's IP, while
10.2.0.1 is the slave. The IP addresses used here (along with the domain) should be replaced with the
proper IP addresses of your servers.

Install
Installation is simple (perform this step on both servers):
apk add nsd

Configure
First, set up the main configuration file on the master server, /etc/nsd/nsd.conf, replacing the secret with
a proper one:
server:
    ip-address: 10.1.0.1
    port: 53
    server-count: 1
    ip4-only: yes
    hide-version: yes
    identity: ""
    zonesdir: "/etc/nsd"

key:
    name: "sec_key"
    algorithm: hmac-md5
    secret: "WhateverSecretYouUse"

zone:
    name: alpinelinux.org
    zonefile: alpinelinux.org.zone
    notify: 10.2.0.1 sec_key
    provide-xfr: 10.2.0.1 sec_key

Then, create the zone file for the zone in question (/etc/nsd/alpinelinux.org.zone in this case):

;## alpinelinux.org authoritative zone


$ORIGIN alpinelinux.org.
$TTL 86400
@ IN SOA ns1.alpinelinux.org. webmaster.alpinelinux.org. (
        2011100501      ; serial
        28800           ; refresh
        7200            ; retry
        86400           ; expire
        86400           ; min TTL
        )

                NS      ns1.alpinelinux.org.
                MX 10   mail.alpinelinux.org.
lists           MX 10   mail.alpinelinux.org.
@               IN A    81.175.82.11
mail            IN A    64.56.207.219
www             IN A    81.175.82.11
www-prd         IN A    74.117.189.132
www-qa          IN A    74.117.189.131
wiki            IN A    74.117.189.132
lists           IN A    64.56.207.219
monitor         IN A    213.234.126.133
bugs            IN A    81.175.82.11
nl              IN A    81.175.82.11
dl-2            IN A    208.74.141.33
dl-3            IN A    74.117.189.132
dl-4            IN A    64.56.207.216
rsync           IN A    81.175.82.11
distfiles       IN A    91.220.88.29
build-edge      IN A    91.220.88.23
build64-edge    IN A    204.152.221.26
build-2-2       IN A    91.220.88.34
build64-2-2     IN A    91.220.88.35
build-2-1       IN A    91.220.88.32
build-2-0       IN A    91.220.88.31
build-1-10      IN A    91.220.88.26

Next, on the slave server, setup /etc/nsd/nsd.conf:


server:
    ip-address: 10.2.0.1
    port: 53
    server-count: 1
    ip4-only: yes
    hide-version: yes
    identity: ""
    zonesdir: "/etc/nsd"

key:
    name: "sec_key"
    algorithm: hmac-md5
    secret: "WhateverSecretYouUse"

zone:
    name: alpinelinux.org
    zonefile: alpinelinux.org.zone
    allow-notify: 10.1.0.1 sec_key
    request-xfr: AXFR 10.1.0.1 sec_key

Create the zone file /etc/nsd/alpinelinux.org.zone as well on the slave.

Start Server
First step, make sure you don't have any typos in your configuration (on both boxes):
nsd-checkconf /etc/nsd/nsd.conf

Then each time a change is made to the zone (including when you first start the server), you need to
rebuild the NSD zone databases:
nsdc rebuild
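Remember to bump the zone's serial number before rebuilding, or slaves will not pick up the change. A sketch of automating this, using the common YYYYMMDDnn convention (the convention and the scratch file are assumptions; on the server you would edit /etc/nsd/alpinelinux.org.zone):

```shell
# Replace a 10-digit serial with today's date plus revision "01",
# working on a scratch file that stands in for the real zone file.
f=$(mktemp)
echo '        2011100501      ; serial' > "$f"
new="$(date +%Y%m%d)01"
sed -i "s/[0-9]\{10\}\([[:space:]]*; serial\)/$new\1/" "$f"
serial=$(grep -o '[0-9]\{10\}' "$f")
echo "$serial"
rm -f "$f"
```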

Finally, start the server and set it to auto-start:


/etc/init.d/nsd start
rc-update add nsd

Tip: You've now got a set of DNS servers that will each answer with authoritative data for the given
zone, and whenever updates are made to the master server, they are replicated using a zone
transfer to the slave box.

TinyDNS Format
TinyDNS data file
The official data format for tinydns data is documented at http://cr.yp.to/djbdns/tinydns-data.html. See
the notes at the bottom for more information on certain fields.

SOA Record
`Zfqdn:nameserver:contactinfo:serial:retry:expire:min:ttl:timestamp:lo`
Zmy.example.net:208.210.221.65:abuse.example.net
Question: Zmy.example.net.
my.example.net. +2560 soa 208.210.221.65. abuse@example.net. 1206390017 16384 2048
1048576 2560

Zmy.example.net:208.210.221.65:abuse.example.net:2008032201:1000:2000:3000:4000
my.example.net. +2560 soa 208.210.221.65. abuse@example.net. 2008032201 1000 2000
3000 4000

A, NS record combined
`&fqdn:ip:x:ttl:timestamp:lo`
Creates an A and NS record. Typically used to delegate a subdomain; can be used in combination with
Z to accomplish the same thing as the combo above, but with a different email address.
&my.example.net:208.210.221.65:something:
# Question: Zmy.example.net.
# NS replies:
my.example.net. +259200 ns something.ns.my.example.net.
# AR replies:
#something.ns.my.example.net. +259200 a 208.210.221.65

&my.example.net:208.210.221.65:ns1.somewhere.com:3600

# Question: Zmy.example.net.
# NS replies:
my.example.net. +3600 ns ns1.somewhere.com.
# AR replies:
#ns1.somewhere.com. +3600 a 208.210.221.65

A and PTR record


`=fqdn:ip:ttl:timestamp:lo`
=alpha.my.example.net:192.168.1.1
Question: Zalpha.my.example.net.
alpha.my.example.net. +86400 a 192.168.1.1
# Question: Z1.1.168.192.in-addr.arpa.
1.1.168.192.in-addr.arpa. +86400 ptr alpha.my.example.net

For the PTR record to be returned, you must have the corresponding SOA record defined:
Zmy.example.net:ns1.my.example.net:abuse.example.net
&my.example.net:208.210.221.65:ns1.my.example.net
Z168.192.in-addr.arpa:ns1.my.example.net:abuse.example.net
&168.192.in-addr.arpa:208.210.221.65:ns1.my.example.net
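The in-addr.arpa name tinydns derives from the IP in an `=` line is simply the octets reversed; a minimal sketch (the function name is our own):

```shell
# Derive the reverse (PTR) name for an IPv4 address, as tinydns does
# for '=' lines. The subshell body keeps the IFS change local.
reverse_ptr() (
    IFS=.
    set -- $1
    printf '%s.%s.%s.%s.in-addr.arpa\n' "$4" "$3" "$2" "$1"
)

reverse_ptr 192.168.1.1    # 1.1.168.192.in-addr.arpa
```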

A record
`+fqdn:ip:ttl:timestamp:lo`
+alpha.my.example.net:192.168.1.1
Question: Zalpha.my.example.net.
alpha.my.example.net. +86400 a 192.168.1.1

MX Record
`@fqdn:ip:x:dist:ttl:timestamp:lo`
@my.example.net:208.210.221.77:something
Question: @my.example.net.
my.example.net. +86400 mx 0 something.mx.my.example.net.

@my.example.net:208.210.221.77:mx1.my.example.net:10
@my.example.net:208.210.221.78:mx2.my.example.net:20
Question: @my.example.net.
my.example.net. +86400 mx 10 mx1.my.example.net.
my.example.net. +86400 mx 20 mx2.my.example.net.
# AR replies:
#mx1.my.example.net. +86400 a 208.210.221.77
#mx2.my.example.net. +86400 a 208.210.221.78

CNAME
`Cfqdn:x:ttl:timestamp:lo`
Cmailserver.my.example.net:yourmailserver.somewhere.com
Question: Zmailserver.my.example.net.
mailserver.my.example.net. +86400 cname yourmailserver.somewhere.com.

TXT
`'fqdn:s:ttl:timestamp:lo`
'my.example.net:Please do not bug us we know our DNS is broken
Question: Tmy.example.net
my.example.net. +86400 txt 'Please do not bug us we know our DNS is broken'
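Since the fields of a data line are colon separated, any ':' inside s has to be written as the octal escape \072. A small helper (our own, not part of djbdns) to do the escaping:

```shell
# Escape ':' as \072 so the text can be embedded in a tinydns
# 'fqdn:s:... line without terminating the field early.
escape_colons() {
    printf '%s\n' "$1" | sed 's/:/\\072/g'
}

escape_colons 'v=spf1 include:example.net -all'
# v=spf1 include\072example.net -all
```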

SRV
`Sfqdn:ip:x:port:priority:weight:ttl:timestamp`

Standard rules for ip, x, ttl, and timestamp apply. Port, priority, and weight all range from 0-65535.
Priority and weight are optional; they default to zero if not provided.
Sconsole.zoinks.example.com:1.2.3.4:rack102-con1:2001:7:69:300:
query: 33 console.zoinks.example.com
answer: console.zoinks.example.com 300 SRV 7 69 2001 rack102-con1.example.com

NAPTR
`Nfqdn:order:pref:flags:service:regexp:replacement:ttl:timestamp`
The same standard rules for ttl and timestamp apply. Order and preference (optional) range from 0-65535 and default to zero if not provided. Flags, service and replacement are character-strings.
The replacement is a fqdn that defaults to '.' if not provided.
Nsomedomain.org:100:90:s:SIP+D2U::_sip._udp.somedomain.org
query: 35 somedomain.org
answer: somedomain.org 78320 NAPTR 100 90 "s" "SIP+D2U" "" _sip._udp.somedomain.org

Ncid.urn.arpa:100:10:::!^urn\058cid\058.+@([^\.]+\.)(.*)$!\2!i:

AAAA
`:fqdn:28:location:ttl`
These records are used to resolve IPv6 addresses.
:alpha.my.example.net:28:\050\001\103\000\302\072\000\077\105\052\064\355\256\064\063\124:86400
query: alpha.my.example.net IN ANY
answer: alpha.my.example.net. 86400 IN AAAA 2801:4300:c23a:3f:452a:34ed:ae34:3354
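The escaped bytes in the generic record above can be generated from the address itself. This sketch assumes the address is written out as exactly eight colon groups (no "::" compression); the function name is our own:

```shell
# Convert a fully expanded IPv6 address (eight colon groups, no '::')
# into the \NNN octal escapes used by tinydns generic records.
ipv6_to_octal() (
    IFS=:
    set -- $1
    out=""
    for g in "$@"; do
        while [ ${#g} -lt 4 ]; do g=0$g; done   # left-pad group to 4 hex digits
        out="$out$(printf '\\%03o\\%03o' "$((0x${g%??}))" "$((0x${g#??}))")"
    done
    printf '%s\n' "$out"
)

ipv6_to_octal 2801:4300:c23a:3f:452a:34ed:ae34:3354
# \050\001\103\000\302\072\000\077\105\052\064\355\256\064\063\124
```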

Notes
Each line starts with a character, and continues with colon separated fields. Spaces and tabs at the end
of a line are ignored. Blank lines are also ignored.
timestamp is an optional TAI64 (hex format) timestamp. If the timestamp is given the ttl has special
meaning:

If ttl is nonzero or omitted, then the timestamp is when this record goes "live".

If ttl is zero, then the timestamp is the "time to die": the point after which the record is no longer to be served. Tinydns will dynamically adjust the ttl so that the DNS records are not cached beyond the "time to die".
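An external TAI64 label is 2^62 plus the second count, written as 16 hex digits. A rough sketch that starts from a Unix timestamp and ignores the TAI-UTC leap-second offset, so treat the result as an approximation only:

```shell
# Approximate external TAI64 label for a Unix timestamp:
# 2^62 + seconds, printed as 16 hex digits. Ignores the TAI-UTC
# leap-second offset (assumption: close enough for record scheduling).
tai64_hex() {
    printf '%016x\n' $(( 4611686018427387904 + $1 ))
}

tai64_hex 0               # 4000000000000000
tai64_hex "$(date +%s)"   # label for "now"
```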

lo is an optional location field. The record is ignored if the client is outside that location. lo can be one
or two characters. For example,
%in:192.168
%ex
+www.mydomain.com:192.168.1.1:::in
+www.mydomain.com:200.20.32.1:::ex

specifies www.mydomain.com has address 192.168.1.1 for clients in the 192.168.0.0/16 address range,
and 200.20.32.1 for all other clients.
In lines with "x": if "x" contains a dot, "x" itself is used as the server name; otherwise the name "x".[something].fqdn is generated. You should omit ip if "x" has IP addresses assigned elsewhere in the data file.
Wildcards in the form *.fqdn are allowed and will resolve any name EXCEPT those that have their own records or more specific wildcards.
+www.mydomain.com:200.3.1.1
+*.mydomain.com:127.0.0.1

This will send the user to their local machine for foo.mydomain.com, mx.mydomain.com, and in fact anything under .mydomain.com, except www.mydomain.com.

Fault Tolerant Routing with Alpine Linux


This document explains how to set up a fault-tolerant router using Alpine Linux. It has been tested with Alpine Linux 2.2.3.

Hardware and Network Setup


The network used in this example is as follows:

Will run (at least initially) IPv4


A pre-existing border router NATs from the public IP(s) to the 10.0.0.0/8 network and has the address 10.0.0.1/24 on the transit network
The border router will have a default route to the internal network via 10.0.0.2 (the virtual IP address of the routers being set up in this doc)
A transit network between the border router and the fault-tolerant routers in this document will be on 10.0.0.0/24
The routers will also connect several internal subnets on the network:
o 10.0.1.0/24
o 10.0.2.0/24
o 10.0.3.0/24
It's assumed that 10.0.0.0/24 and 10.0.1.0/24 are connected via dedicated interfaces (eth0 and eth1, respectively), while 10.0.2.0/24 and 10.0.3.0/24 share an interface (eth2) with traffic segregated using 802.1q vlan tagging: 10.0.2.0/24 uses vlan id 2 and 10.0.3.0/24 uses vlan id 3
Finally, all computers in subnets 10.0.1.0/24, 10.0.2.0/24 and 10.0.3.0/24 are set up with a default gateway of 10.0.x.1

Two computers will be needed, with at least three NICs in each (more if you are connecting more network segments together); they will act as routers.

Initial Setup
First, set up Alpine on a USB key or CF card on both computers. Connect both computers initially to 10.0.0.0/24 on eth0, and assign them IP addresses of 10.0.0.3/24 and 10.0.0.4/24 in the setup-alpine script (for router1 and router2, respectively). Do not configure other interfaces initially. Ensure that both machines are pingable.
Next, connect them both to 10.0.1.0/24 on eth1, and assign them IP addresses of 10.0.1.2/24 and 10.0.1.3/24, respectively. Ensure that they can also ping each other. /etc/network/interfaces (for router1) at this point should look like:
auto eth0
iface eth0 inet static
    address 10.0.0.3
    netmask 255.255.255.0
    gateway 10.0.0.1

auto eth1
iface eth1 inet static
    address 10.0.1.2
    netmask 255.255.255.0

Finally, get the last two networks connected (the IP addresses given are for router1; these steps should be performed on router2 as well):
modprobe 8021q
echo "8021q" >> /etc/modules
cat >> /etc/network/interfaces << EOF
auto eth2
iface eth2 inet manual
up ip link set eth2 up
up ifup eth2.2 || true
up ifup eth2.3 || true
down ifdown eth2.3 || true
down ifdown eth2.2 || true
down ip link set dev eth2 down
iface eth2.2 inet static
pre-up vconfig add eth2 2
address 10.0.2.2
netmask 255.255.255.0
post-down vconfig rem $IFACE
iface eth2.3 inet static
pre-up vconfig add eth2 3
address 10.0.3.2
netmask 255.255.255.0
post-down vconfig rem $IFACE
EOF

Test that you can also ping between these interfaces.

Start ip forwarding
Next, turn them into simple routers by enabling ip forwarding (do this on each box):
echo 1 > /proc/sys/net/ipv4/ip_forward

If you follow the ucarp section below, you'll also need to disable rp_filter (RFC 3704 ingress filtering). Since this howto is designed for an internal router, this should usually be acceptable. Available options for this setting are described at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/networking/ip-sysctl.txt;hb=HEAD#l855.
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter

echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter


echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth2.2/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth2.3/rp_filter
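The echo commands above do not survive a reboot. To make the settings persistent, you could add the equivalent keys to /etc/sysctl.conf (a sketch; note that interface names containing dots are usually written with '/' instead of '.', a convention of procps-style sysctl that may not be supported by every sysctl implementation):

net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.eth2/2.rp_filter = 0
net.ipv4.conf.eth2/3.rp_filter = 0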

Install and start ucarp


Ucarp will provide a virtual IP address for each subnet that the routers will share. That way, if either router fails, network connectivity stays up.

Install ucarp:

apk add ucarp

Then copy the init and conf.d scripts for each interface:

ln -s /etc/init.d/ucarp /etc/init.d/ucarp.eth0
ln -s /etc/init.d/ucarp /etc/init.d/ucarp.eth1
ln -s /etc/init.d/ucarp /etc/init.d/ucarp.eth2.2
ln -s /etc/init.d/ucarp /etc/init.d/ucarp.eth2.3
cp /etc/conf.d/ucarp /etc/conf.d/ucarp.eth0
cp /etc/conf.d/ucarp /etc/conf.d/ucarp.eth1
cp /etc/conf.d/ucarp /etc/conf.d/ucarp.eth2.2
cp /etc/conf.d/ucarp /etc/conf.d/ucarp.eth2.3

Edit the /etc/conf.d/ucarp.eth0 file:

REALIP=
VHID=1
VIP=10.0.0.2
PASSWORD=Password

Edit the /etc/conf.d/ucarp.eth1 file:

REALIP=
VHID=2
VIP=10.0.1.1
PASSWORD=Password

Edit the /etc/conf.d/ucarp.eth2.2 file:

REALIP=
VHID=3
VIP=10.0.2.1
PASSWORD=Password

Edit the /etc/conf.d/ucarp.eth2.3 file:

REALIP=

VHID=4
VIP=10.0.3.1
PASSWORD=Password

Create /etc/ucarp/vip-up-eth0.sh (and copy this script for each interface: vip-up-eth1.sh, vip-up-eth2.2.sh, vip-up-eth2.3.sh):

#!/bin/sh
# Add the VIP address
ip addr add $2/24 dev $1
for a in 330 440 550; do beep -f $a -l 100; done

Create /etc/ucarp/vip-down-eth0.sh (and copy this script for each interface: vip-down-eth1.sh, vip-down-eth2.2.sh, vip-down-eth2.3.sh):

#!/bin/sh
# Remove the VIP address
ip addr del $2/24 dev $1
for a in 550 440 330; do beep -f $a -l 100; done

Make the scripts executable

chmod +x /etc/ucarp/*.sh

Start ucarp and save the changes

rc-update add ucarp.eth0


rc-update add ucarp.eth1
rc-update add ucarp.eth2.2
rc-update add ucarp.eth2.3
/etc/init.d/ucarp.eth0 start
/etc/init.d/ucarp.eth1 start
/etc/init.d/ucarp.eth2.2 start
/etc/init.d/ucarp.eth2.3 start
lbu commit

Do the above steps for each router

At this point, you should have connectivity from your border router through to hosts on the internal
subnets, and hosts on your subnets should be able to ping all interfaces on each router.

Install and configure Shorewall iptables frontend


We will name our Shorewall zones by letters to keep the config files simple; they correspond to subnets and interfaces as follows:

A   = eth0   = 10.0.0.0/24
B   = eth1   = 10.0.1.0/24
C_2 = eth2.2 = 10.0.2.0/24
C_3 = eth2.3 = 10.0.3.0/24

Install the needed package


apk add shorewall

Edit /etc/shorewall/params:
#FIRST LINE
A_IF=eth0
B_IF=eth1
C_2_IF=eth2.2
C_3_IF=eth2.3
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Edit /etc/shorewall/interfaces:
#FIRST LINE
A $A_IF detect dhcp
B $B_IF detect dhcp
C_2 $C_2_IF detect dhcp
C_3 $C_3_IF detect dhcp
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Edit /etc/shorewall/zones:
#FIRST LINE
fw  firewall
A   ipv4
B   ipv4
C_2 ipv4
C_3 ipv4
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Edit /etc/shorewall/policy (note: this allows all traffic between internal subnets and all traffic outbound towards the Internet, but blocks all inbound traffic by default; this policy might need to be adjusted in your case):
#FIRST LINE
A   all REJECT INFO
B   A   ACCEPT
B   C_2 ACCEPT
B   C_3 ACCEPT
C_2 A   ACCEPT
C_2 B   ACCEPT
C_2 C_3 ACCEPT
C_3 A   ACCEPT
C_3 B   ACCEPT
C_3 C_2 ACCEPT
all all REJECT INFO
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Edit /etc/shorewall/rules (the following simply allows SSH traffic in to your two routers; more rules will probably be needed to accept or reject traffic based on your needs):
#FIRST LINE
ACCEPT A fw tcp 22
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Edit /etc/shorewall/start:

#FIRST LINE
/bin/echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
/bin/echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
/bin/echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
/bin/echo 0 > /proc/sys/net/ipv4/conf/eth2.2/rp_filter
/bin/echo 0 > /proc/sys/net/ipv4/conf/eth2.3/rp_filter
#LAST LINE - ADD YOUR ENTRIES ABOVE THIS ONE - DO NOT REMOVE

Finally, edit the STARTUP_ENABLED line in /etc/shorewall/shorewall.conf:


STARTUP_ENABLED=yes

Check the config for errors, then start shorewall:


shorewall check
shorewall restart

Tip: This gives you a basic set of redundant policy routers; you can continue with the section(s) below to add features to them.

DHCP Relaying
This will allow DHCP broadcasts from one subnet to be forwarded to a DHCP server in another subnet (so that your DHCP server doesn't need an interface in each subnet).
First, install dhcrelay:
apk add dhcrelay

Next, adjust /etc/conf.d/dhcrelay: (assuming you don't need DHCP on the 10.0.0.0/24 subnet, and that
your DHCP servers are 10.0.1.100 and 10.0.1.101)
IFACE="eth1 eth2.2 eth2.3"
DHCRELAY_SERVERS="10.0.1.100 10.0.1.101"

Finally, setup your scopes on 10.0.1.100 and 10.0.1.101 for the 10.0.1.0/24, 10.0.2.0/24, and
10.0.3.0/24 subnets and test by requesting a lease from a computer in either 10.0.2.0/24 or 10.0.3.0/24.

Freeradius Active Directory Integration


This document explains how to use FreeRADIUS 2 with Microsoft Active Directory as an authentication oracle.
At the time of writing this document, the software used was:

Microsoft Windows Server 2003 R2 SP2


Alpine 2.0.2
freeradius-2.1.10-r7
freeradius-postgresql-2.1.10-r7

Join the domain

Install samba, and kerberos


# apk add samba winbind heimdal

Edit /etc/samba/smb.conf. Replace tags "<...>" with appropriate values for your environment:
[global]
workgroup = <MYWORKGROUP>
#change the netbios name as desired
netbios name = RADIUS
realm = <MYREALM>
server string =
security = ads
encrypt passwords = yes
password server = <DCNAME>.<MYDOMAIN>
log file = /var/log/samba/%m.log
max log size = 0
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
preferred master = False
local master = No
domain master = False
dns proxy = No
# use uids from 10000 to 20000 for domain users
idmap uid = 10000-20000
# use gids from 10000 to 20000 for domain groups
idmap gid = 10000-20000
# allow enumeration of winbind users and groups
winbind enum users = yes
winbind enum groups = yes
winbind use default domain = yes
# If you don't use SMB signing
# change the following setting to "no"
client use spnego = yes

Edit /etc/krb5.conf. Replace tags "<...>" with appropriate values for your environment. Make sure to
respect the letters' case when replacing tags:
[libdefaults]
default_realm = <MYREALM>
[realms]
<MYREALM> = {
kdc = <DCNAME>.<MYDOMAIN>
default_domain = <MYDOMAIN>
}
[domain_realm]
.<mydomain> = <MYREALM>
<mydomain> = <MYREALM>

In /etc/conf.d/samba, set:


daemon_list="winbindd"

Set autostart:
# rc-update add samba default

Join domain:
# net ads join -S <DCNAME>.<MYDOMAIN> -U Administrator

You should get a message that you have joined the domain.
Start winbind:
# /etc/init.d/samba start

Check that AD integration works:


# wbinfo -u

You should get the list of all your domain users.

Configure Freeradius
Install freeradius-postgres
# apk add freeradius-postgres

Edit /etc/raddb/sql.conf to match the settings of your postgresql server:


server = "<fqdn>"
login = "<username>"
password = "<password>"

PostgreSQL can be configured using the scripts found in /etc/raddb/sql/postgres/*.sql.


Besides the scripts above you should run the following statements against the radius database (replace
"<user>" with user of radius DB):
GRANT USAGE ON SEQUENCE radpostauth_id_seq TO <user>;
GRANT USAGE ON SEQUENCE radacct_radacctid_seq TO <user>;

Create/Edit /etc/raddb/modules/ntlm_auth. Replace "MYDOMAIN" with your domain name:


exec ntlm_auth {
wait = yes
program = "/usr/bin/ntlm_auth --request-nt-key --domain=MYDOMAIN --username=%{mschap:User-Name} --password=%{User-Password}"
}

You have to list ntlm_auth in the authenticate section of both the raddb/sites-enabled/default file and the raddb/sites-enabled/inner-tunnel file:
authenticate {
...
ntlm_auth
...
}

Add the following text to the top of the users file:


DEFAULT Auth-Type = ntlm_auth

Find the mschap module in the /etc/raddb/modules/mschap file and look for the line containing ntlm_auth =. It is commented out by default; uncomment it and edit it to be as follows (replace "MYDOMAIN" with your domain name):
ntlm_auth = "/usr/bin/ntlm_auth --request-nt-key --username=%{mschap:User-Name:-None} --domain=%{%{mschap:NT-Domain}:-MYDOMAIN} --challenge=%{mschap:Challenge:-00} --nt-response=%{mschap:NT-Response:-00}"

Configure your clients by editing /etc/raddb/clients.conf.


Start radius in debug mode in order to check that everything works:
# radiusd -X

If everything is OK, press Ctrl+C and set it to autostart:


# rc-update add freeradius default
# /etc/init.d/freeradius start

Accounting into SQL is not enabled by default. In /etc/raddb/sites-enabled/default, remove the comment from "sql" under the accounting section:
accounting {
...
sql
...
}

Multi ISP
This document describes how to configure your Alpine Linux router with multiple ISPs, with or
without failover and load balancing.
Please note: Alpine Linux v2.4.4 or newer is required.
Interface  Name  IP address   Description
eth0       ISP1  1.2.3.4      Internet with static IP address (we will use 1.2.3.4/24 and gw 1.2.3.1 as example)
eth1       DMZ   192.168.0.1  Demilitarized zone
eth2       LAN   192.168.1.1  Local network
eth3       ISP2  DHCP         Internet with dynamic IP address

Network Interfaces
If there are multiple static IP addresses (or, to be exact, multiple default gateways), you must specify a metric value. The gateway with the lowest metric wins.
/etc/network/interfaces:
auto eth0
iface eth0 inet static
address 1.2.3.4
netmask 255.255.255.0
gateway 1.2.3.1
metric 200

The DHCP client will automatically add a metric value to the default gateway. It will pick 200 +
interface index. Therefore we don't need to worry about that for DHCP.
auto eth3
iface eth3 inet dhcp
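The "200 + interface index" rule can be reproduced to predict which metric a DHCP default route will get; a small sketch (both helper names are our own):

```shell
# Metric a DHCP default route gets: 200 plus the interface index.
dhcp_metric() {
    echo $(( 200 + $1 ))
}

# Look up a live interface's index from sysfs and compute its metric.
if_dhcp_metric() {
    dhcp_metric "$(cat /sys/class/net/"$1"/ifindex)"
}

dhcp_metric 4    # 204: what eth3 would get if its interface index is 4
```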

For PPP interfaces there is a defaultroute-metric option that can set the metric value.

Shorewall Zones

In /etc/shorewall/zones, create the zones:


INET ipv4
DMZ  ipv4
LAN  ipv4

Shorewall Interfaces
We need to make shorewall aware of the second ISP interface. In /etc/shorewall/params, add ISP2_IF under ISP1_IF:
ISP1_IF=eth0
ISP2_IF=eth3
DMZ_IF=eth1
LAN_IF=eth2

Add it also to /etc/shorewall/interfaces:


INET $ISP1_IF detect dhcp
INET $ISP2_IF detect dhcp
DMZ  $DMZ_IF  detect dhcp
LAN  $LAN_IF  detect dhcp

The dhcp option is not really needed unless DHCP is actually used, but we recommend leaving it on in case the ISP changes to DHCP in the future.

Shorewall Masquerade
We must masquerade the internet addresses so that every ISP gets a correct source address. This is
done in /etc/shorewall/masq.
$ISP1_IF 0.0.0.0/0
$ISP2_IF 0.0.0.0/0

Shorewall Policy
We disallow routing between the two ISPs by adding a drop policy in /etc/shorewall/policy.
...
INET INET DROP
...

Shorewall Providers
The /etc/shorewall/providers file defines the two ISPs and allows firewall packet marking (the firewall can mark certain packets, which can later be used for the routing decision). Shorewall will also let the firewall mark incoming connections in order to keep track of which gateway the response should go via. This is needed in case you do DNAT of incoming traffic on the secondary ISP. Without the connection tracking, outgoing traffic would always go via the primary ISP.

We do not want shorewall to mess with the routing tables (which we needed it to do with older versions of pingu). We can tell shorewall not to flush the routing tables that pingu will manage by setting the DUPLICATE column to '-', the GATEWAY column to 'none', and leaving the COPY column empty.
In this example eth0 is attached to ISP1 and eth3 is attached to ISP2.

#NAME NUMBER MARK DUPLICATE INTERFACE GATEWAY OPTIONS              COPY
ISP1  1      1    -         eth0      none    track,optional,loose
ISP2  2      2    -         eth3      none    track,optional,loose

After shorewall has restarted it should be possible to see the route rules for the firewall marks using the
command ip rule:
...
10001: from all fwmark 0x1 lookup ISP1
10002: from all fwmark 0x2 lookup ISP2
...

Shorewall Route Rules


We also use /etc/shorewall/route_rules to set up some static route rules. This forces local traffic to ignore the alternate route tables for the ISPs.

#SOURCE        DEST PROVIDER PRIORITY
# prevent local traffic from going via the providers' route tables
192.168.0.1/24 -    main     1000
192.168.1.1/24 -    main     1000

Make Shorewall not touch default gateway


Shorewall stores the default gateway(s) on startup and tries to restore them on stopping. Unfortunately it does this badly, resulting in a default gateway getting lost on stop. Additionally, the default gateway might have changed during runtime due to a DHCP lease renewal. We need to configure shorewall not to touch the default gateways at all and let pingu handle this.
Add the following to /etc/shorewall/shorewall.conf:
RESTORE_DEFAULT_ROUTE=No

Policy Routing with Pingu


We need to make sure that where ISP1's IP address is used as the source address, ISP1's default gateway is used. Otherwise we might send packets with an ISP1 source address via ISP2; the response would come back through ISP2's interface and the firewall would block it.
Since the IP address and default gateway might change (DHCP lease renewal), we need a daemon that monitors the addresses and gateways of the interfaces. This is what pingu does.
apk add pingu

We configure pingu to monitor our eth0 and eth3 interfaces in /etc/pingu/pingu.conf:


interface eth0 {
    # route-table must correspond with the NUMBER column in /etc/shorewall/providers
    route-table 1
    # the rule-priority must be a higher number than the priority in /etc/shorewall/route_rules
    rule-priority 20000
}
interface eth3 {
    # route-table must correspond with the NUMBER column in /etc/shorewall/providers
    route-table 2
    rule-priority 20000
}

When pingu is started you should be able to see the routing rules with

ip rule
...
20000: from 1.2.3.4 lookup ISP1
20000: from 192.168.254.4 lookup ISP2
...

and see the ISP route tables with


ip route show table 1
default via 1.2.3.1 dev eth0 metric 200
1.2.3.0/24 dev eth0 proto kernel scope link

metric 200

Internet Failover with Pingu


To do failover we need a way to detect whether an ISP is still up. We configure some hosts that pingu will ping at regular intervals. When a host stops responding, we consider it offline. We can bind a ping host to an interface in pingu, and when enough hosts stop responding, pingu will remove that gateway from the main route table and the gateway with the next-lowest metric will take over. We normally want to ping more than one host, since a single host might be down without the rest of the internet being down.
We also want to pick hosts that are unlikely to change. DNS servers are good in this respect: they seldom change, since that would require all clients to reconfigure. In this example we will use Google's DNS and OpenDNS hosts.
Add the hosts to /etc/pingu/pingu.conf and bind them to the primary ISP. We also set the ping timeout to 2.5 seconds.
...
# Ping responses that take more than 2.5 seconds are considered lost.
timeout 2.5
# ping google dns via ISP1
host 8.8.8.8 {
interval 60
bind-interface eth0
}
# ping opendns via ISP1
host 208.67.222.222 {
interval 60
bind-interface eth0
}

Now, if both hosts stop responding to pings, ISP1 will be considered down and all gateways via eth0 will be removed from the main route table. Note that the gateway will not be removed from route table '1'; this is so we can keep pinging via eth0 and detect when the ISP is back online. When the ISP starts working again, the gateways are added back to the main route table.

Load Balancing
It is possible to give the impression of load balancing by using multipath routes. This will not let a single HTTP file download use both ISPs for double download speed, but it will let some users surf via ISP1 and others via ISP2, so that bandwidth usage is spread over both ISPs.
Note that the balancing will not be perfect, as it is route based and routes are cached. This means that routes to often-used sites will always go over the same provider. It also means that the source IP might change, so websites that require a login (webmail, banks, etc.) will break if they check that the client's IP stays consistent. Banking sites often do this. It is recommended to avoid this feature if possible.
To enable load balance, simply add the 'load-balance' keyword to all the interfaces that should be
balanced in /etc/pingu/pingu.conf:

interface eth0 {
...
load-balance
}
interface eth3 {
...
load-balance
}
...

OwnCloud
ownCloud is a WebDAV-based solution for storing and sharing your data, files, images, video, music, calendars and contacts online. You can have your own ownCloud instance up and running in 5 minutes with Alpine!

Installation
ownCloud is available from Alpine 2.5 and greater.

Before you start installing anything, make sure you have latest packages available. Make sure you are
using a 'http' repository in your /etc/apk/repositories and then run:
apk update

Tip: Detailed information is found in this doc.

Database
First you have to decide which database to use. Follow one of the below database alternatives.

sqlite
All you need to do is to install the package
apk add owncloud-sqlite

postgresql
Install the package
apk add owncloud-pgsql

Next thing is to configure and start the database


/etc/init.d/postgresql setup
/etc/init.d/postgresql start

Next you need to create a user and temporarily grant the CREATEDB privilege.

psql -U postgres
CREATE USER mycloud WITH PASSWORD 'test123';
ALTER ROLE mycloud CREATEDB;
\q

Note: Replace the above username 'mycloud' and password 'test123' with something secure.
Remember these settings; you will need them later when setting up ownCloud.
mysql
Install the package
apk add owncloud-mysql mysql-client

Now configure and start mysql


/etc/init.d/mysql setup
/etc/init.d/mysql start
/usr/bin/mysql_secure_installation

Follow the wizard to setup passwords etc.

Note: Remember the usernames/passwords that you set using the wizard, you will need them later.
Next you need to create a user, database and set permissions.
mysql -u root -p
CREATE DATABASE owncloud;
GRANT ALL ON owncloud.* TO 'mycloud'@'localhost' IDENTIFIED BY 'test123';
GRANT ALL ON owncloud.* TO 'mycloud'@'localhost.localdomain' IDENTIFIED BY 'test123';
FLUSH PRIVILEGES;
EXIT

Note: Replace the above username 'mycloud' and password 'test123' with something secure.
Remember these settings; you will need them later when setting up ownCloud.
mysql-client is not needed anymore. Let's uninstall it:

apk del mysql-client

Webserver
Next, choose, install and configure a webserver. In this example we will install nginx or lighttpd. Nginx is preferred over lighttpd, since the latter consumes a lot of memory when working with large files (see lighty bug #1283). You are free to install any other webserver of your choice as long as it supports PHP and FastCGI. We're not explaining how to generate an SSL certificate for your webserver.

Nginx
Install the needed packages
apk add nginx php-fpm

Remove/comment any section like this in


Contents of /etc/nginx/nginx.conf

server {
listen ...
}

Include the following directive in


Contents of /etc/nginx/nginx.conf

http {
    ...
    include /etc/nginx/sites-enabled/*;
    ...
}

Create directories for your websites (sites-enabled is needed for the symlink below):

mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled

Create a configuration file for your site in /etc/nginx/sites-available/mysite.mydomain.com


server {
    #listen      [::]:80; #uncomment for IPv6 support
    listen       80;
    server_name  mysite.mydomain.com;
    return 301   https://$host$request_uri;
}

server {
    #listen      [::]:443 ssl; #uncomment for IPv6 support
    listen       443 ssl;
    server_name  mysite.mydomain.com;

    root /var/www/vhosts/mysite.mydomain.com/www;
    index index.php index.html index.htm;
    disable_symlinks off;

    ssl_certificate      /etc/ssl/cert.pem;
    ssl_certificate_key  /etc/ssl/key.pem;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    #Enable Perfect Forward Secrecy and ciphers without known vulnerabilities
    #Beware! It breaks compatibility with older OS and browsers (e.g. Windows XP, Android 2.x, etc.)
    #ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA;
    #ssl_prefer_server_ciphers on;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_pass 127.0.0.1:9000;
        #fastcgi_pass unix:/var/run/php-fpm/socket;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}

If you are running from RAM and dealing with large files, you might need to move the FastCGI temp files from /tmp to /var/tmp or to a directory that is mounted on a hard disk:
fastcgi_temp_path /var/tmp/nginx/fastcgi 1 2;

Large file uploads take some time to be processed by php-fpm, so you need to bump nginx's default read timeout:
fastcgi_read_timeout 300s;

Set user and group for php-fpm in /etc/php/php-fpm.conf


...
user = nginx
group = www-data
...

Note: If you are serving several users, make sure to tune the *children settings in /etc/php/php-fpm.conf
Make nginx user member of www-data group
addgroup nginx www-data

Enable your website

ln -s ../sites-available/mysite.mydomain.com /etc/nginx/sites-enabled/mysite.mydomain.com

Start services
rc-service php-fpm start
rc-service nginx start

Lighttpd
Install the package
apk add lighttpd php-cgi

Make sure you have FastCGI enabled in lighttpd:


Contents of /etc/lighttpd/lighttpd.conf
...
include "mod_fastcgi.conf"
...

Start up the webserver


/etc/init.d/lighttpd start

Tip: You might want to follow the Lighttpd_Https_access doc in order to configure lighttpd to use
https (securing your connections to your owncloud server).
Link owncloud installation to web server directory:
ln -s /usr/share/webapps/owncloud /var/www/localhost/htdocs

Other settings
Hardening
Consider updating the variable url.access-deny in /etc/lighttpd/lighttpd.conf for additional security. Add "config.php" to the variable (that's where the database credentials are stored) so it looks something like this:
Contents of /etc/lighttpd/lighttpd.conf
...
url.access-deny = ("~", ".inc", "config.php")
...

Restart lighttpd to activate the changes


/etc/init.d/lighttpd restart

Additional packages
Some large apps, such as the text editor, documents and video viewer, are in separate packages:
apk add owncloud-texteditor owncloud-documents owncloud-videoviewer

Configure and use ownCloud

Configure
Point your browser at https://mysite.mydomain.com and follow the on-screen instructions to
complete the installation, supplying the database user and password created before.

Hardening postgresql
If you have chosen the PGSQL backend, revoke the CREATEDB privilege from the 'mycloud' user:
psql -U postgres
ALTER ROLE mycloud NOCREATEDB;
\q

Increase upload size


The default PHP configuration limits uploads to a 2 MB file size. You might want to increase that by editing /etc/php/php.ini and changing the following values to something that suits you:
upload_max_filesize = 2M
post_max_size = 8M
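Those two edits can also be scripted. This sketch assumes php.ini uses the plain `key = value` form shown above; the function name and the 512M figure are examples only (keep post_max_size at least as large as upload_max_filesize):

```shell
# Raise PHP's upload limits in a php.ini-style file, in place.
# The size value and target file are examples; adjust to your setup.
bump_upload_limits() {
    ini=$1 size=$2
    sed -i -e "s/^upload_max_filesize *=.*/upload_max_filesize = $size/" \
           -e "s/^post_max_size *=.*/post_max_size = $size/" "$ini"
}

# bump_upload_limits /etc/php/php.ini 512M
```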

Clients
There are clients available for many platforms, Android included:

http://owncloud.org/sync-clients/ (ownCloud Sync clients)


http://owncloud.org/support/android/ (Android client)

Apache with php-fpm


Apache and php
There are 4 different ways to use php with apache

cgi

fcgi
mod_php
php-fpm

Here is a comparison of why fpm is the better way. In short, it works with the apache event MPM, has better security, can have per-vhost pool configuration, better process management, and so on. When used with the apache event MPM, it's the most memory-efficient setup you can achieve with apache + php. Whether it's better than nginx + php-fpm depends on your setup, contents and preference.

Step by step (apache2 event mpm + php-fpm)


install apache2 and php-fpm
apk add apache2-proxy php-fpm

configure apache2
in /etc/apache2/httpd.conf
Comment line:
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so

and uncomment line:


#LoadModule mpm_event_module modules/mod_mpm_event.so

copy the mpm config, and adapt it to suit your needs.


cp /etc/apache2/original/extra/httpd-mpm.conf /etc/apache2/conf.d/

put these lines in your httpd.conf or vhost files; don't forget to change the path
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/localhost/htdocs/$1
DirectoryIndex /index.php
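The regex captures everything from the first *.php path segment onward as $1 and appends it to the document root behind the FastCGI backend. The mapping can be illustrated with sed (a sketch only; Apache performs this rewrite internally):

```shell
# Reproduce the ProxyPassMatch rewrite with sed: the captured group
# is appended to the path handed to the php-fpm backend on port 9000.
map_to_fcgi() {
    echo "$1" | sed -E 's|^/(.*\.php(/.*)?)$|fcgi://127.0.0.1:9000/var/www/localhost/htdocs/\1|'
}
```

A request for /app/index.php/extra therefore reaches php-fpm as /var/www/localhost/htdocs/app/index.php/extra.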

tune php-fpm
edit the file /etc/php/php-fpm.conf to suit your needs
(re)start apache2 and php-fpm
/etc/init.d/apache2 restart && /etc/init.d/php-fpm restart

Seafile: setting up your own private cloud


Seafile
Seafile is like ownCloud, but with client-side encryption. There are clients for Linux, Windows, OS X,
Android, iOS, and the web. Seahub is the web interface, but seafile also needs seahub for some other
functionality (such as setup and management).

Installation
Currently seafile, seahub and some of their dependencies are still in testing, so you will need to add
testing to your repo.
install everything (highly recommended)
apk add seahub

install only seafile-server (not recommended, unless you know what you are doing)
apk add seafile-server

Setup
Using the default instance
cd /var/lib/seafile/default
sudo -u seafile seafile-admin setup
sudo -u seafile seafile-admin create-admin

check the options in


/etc/conf.d/seahub

Using non default instance


export SEA_USER=clouduser
export SEA_INSTANCE=mycloud
sudo -u $SEA_USER mkdir /var/lib/seafile/"$SEA_INSTANCE"
sudo -u $SEA_USER mkdir /var/lib/seafile/"$SEA_INSTANCE"/seafile-server
sudo cp -aR /usr/share/seafile/scripts /var/lib/seafile/"$SEA_INSTANCE"
sudo -u $SEA_USER cp -aR /usr/share/seahub
/var/lib/seafile/"$SEA_INSTANCE"/seafile-server
cd /var/lib/seafile/"$SEA_INSTANCE"
sudo -u $SEA_USER seafile-admin setup
sudo -u $SEA_USER seafile-admin create-admin

make init.d script and conf.d


cd /etc/init.d/
sudo ln -s seafile seafile."$SEA_INSTANCE"
sudo ln -s seahub seahub."$SEA_INSTANCE"
cd /etc/conf.d/
sudo cp seafile seafile."$SEA_INSTANCE"
sudo cp seahub seahub."$SEA_INSTANCE"

change user and group and check options in


/etc/conf.d/seafile
/etc/conf.d/seahub

Starting
with seahub

sudo /etc/init.d/seahub start

seafile server only


sudo /etc/init.d/seafile start

or append ."$SEA_INSTANCE" if you are using a non default instance.

Traffic monitoring
This page shows a few tools that can be used to monitor traffic on an Alpine Linux firewall, either to
track down specific issues or to establish a traffic baseline. This is not a comprehensive guide to each
tool, since that can be found elsewhere on the internet.

iptraf
Available from main repository (v1.9, v1.10, v2.0, edge). No switches are needed. A good tool for a
statistical view of traffic, at either layer 2 or 3.

iftop
Available from main repository (v1.9, v1.10, v2.0, edge). No switches needed. Graphical view of traffic
by connection.

nload
Available from main repository (edge). No switches needed; another graphical per-connection traffic
viewer.
More are available, but that is a start.

Setting up traffic monitoring using rrdtool (and snmp)

Install programs
Install rrdtool
apk add rrdtool

Create a rrd-database
The creation of the database is dependent on how many DataSources (DS) you have, and what type of
DataSources you use.

In this example, we monitor eth0 and eth1 (RX and TX) on this local machine, and fetch (eth0 and
eth1) information from another computer through snmp.
rrdtool create /root/exampledb.rrd \
--step 30 \
DS:pc1eth0rx:COUNTER:120:0:U \
DS:pc1eth0tx:COUNTER:120:0:U \
DS:pc1eth1rx:COUNTER:120:0:U \
DS:pc1eth1tx:COUNTER:120:0:U \
DS:pc2eth0rx:COUNTER:120:0:U \
DS:pc2eth0tx:COUNTER:120:0:U \
DS:pc2eth1rx:COUNTER:120:0:U \
DS:pc2eth1tx:COUNTER:120:0:U \
RRA:AVERAGE:0.5:1:3600 \
RRA:MAX:0.5:1:3600

This creates a Round Robin Database (RRD) called exampledb.rrd.

"--step 30" specifies the base interval in seconds with which data will be fed into the RRD.
The first "DS..." row assigns the name "pc1eth0rx".
"COUNTER" is for continuously incrementing counters, like the ifInOctets counter in a router.
The remaining fields of the DS row are covered later.
"RRA" is a RoundRobinArchive. It stores information from the DataSource (DS) in various
ways.
"RRA:AVERAGE..." presents an average value (average for each second) and keeps values
for 3600 seconds (values older than 3600 seconds are overwritten).
"RRA:MAX" calculates the maximum value.
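What a COUNTER data source actually stores is a per-second rate: rrdtool takes the difference between two successive counter readings and divides it by the elapsed time. Roughly, for the 30-second step above (function name is ours):

```shell
# Approximate rate computation for a COUNTER DS: delta / step.
# (rrdtool also handles counter wraps; this sketch ignores them.)
counter_rate() {
    prev=$1 curr=$2 step=$3
    echo $(( (curr - prev) / step ))
}
```

If ifInOctets goes from 1000 to 4000 over one 30-second step, the stored rate is 100 bytes/s.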

Gather information and put it in the RRD


Gathering data can be done in various ways and from various systems.
In our example we collect data from 2 different computers, each computer has eth0 and eth1 that we
would like to monitor.

Feeding the database with information


Now we create a new script that starts by fetching the local information and then fetches snmp
information from the next computer and then puts it all in the database.
Lets call the script /root/collect_data.sh.
#!/bin/sh
while true; do
sleep 30
ETH0=$(grep eth0 /proc/net/dev)
E0DOWN=$(echo $ETH0|tr \: \ |awk '{print $2}')
E0UP=$(echo $ETH0|tr \: \ |awk '{print $10}')
ETH1=$(grep eth1 /proc/net/dev)
E1DOWN=$(echo $ETH1|tr \: \ |awk '{print $2}')
E1UP=$(echo $ETH1|tr \: \ |awk '{print $10}')
rrdupdate /root/exampledb.rrd N:\
${E0DOWN}:${E0UP}:${E1DOWN}:${E1UP}:\
`/usr/bin/snmpget -v 1 -c general -Oqv 192.168.0.2 IF-MIB::ifInOctets.2`:\
`/usr/bin/snmpget -v 1 -c general -Oqv 192.168.0.2 IF-MIB::ifOutOctets.2`:\
`/usr/bin/snmpget -v 1 -c general -Oqv 192.168.0.2 IF-MIB::ifInOctets.3`:\
`/usr/bin/snmpget -v 1 -c general -Oqv 192.168.0.2 IF-MIB::ifOutOctets.3`
done

For this to work, you need to have a configured snmpd running on the 192.168.0.2 computer. The
community-name 'general' might be different for you, depending on your snmp configuration.
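The local half of the script just splits /proc/net/dev: after the interface name, field 2 is bytes received and field 10 is bytes transmitted. Against a made-up sample line:

```shell
# Extract RX/TX byte counters from a /proc/net/dev line, the same
# way the collection script does (sample line is hypothetical).
SAMPLE="  eth0: 123456  200 0 0 0 0 0 0  654321  150 0 0 0 0 0 0"
RX=$(echo "$SAMPLE" | tr ':' ' ' | awk '{print $2}')   # bytes received
TX=$(echo "$SAMPLE" | tr ':' ' ' | awk '{print $10}')  # bytes transmitted
```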

Present the information in graphs

Now starts the fun part. We will display the gathered information.
The information can be presented in various ways (we will use LINE-graph).
rrdtool graph /root/result.png --start -1800 \
-a PNG -t "Network Interfaces" --vertical-label "bits/s" \
-w 1260 -h 800 -r \
DEF:pc1eth0rx=/root/exampledb.rrd:pc1eth0rx:AVERAGE \
DEF:pc1eth0tx=/root/exampledb.rrd:pc1eth0tx:AVERAGE \
DEF:pc1eth1rx=/root/exampledb.rrd:pc1eth1rx:AVERAGE \
DEF:pc1eth1tx=/root/exampledb.rrd:pc1eth1tx:AVERAGE \
DEF:pc2eth0rx=/root/exampledb.rrd:pc2eth0rx:AVERAGE \
DEF:pc2eth0tx=/root/exampledb.rrd:pc2eth0tx:AVERAGE \
DEF:pc2eth1rx=/root/exampledb.rrd:pc2eth1rx:AVERAGE \
DEF:pc2eth1tx=/root/exampledb.rrd:pc2eth1tx:AVERAGE \
CDEF:pc1eth0rxb=pc1eth0rx,8,\* \
CDEF:pc1eth0txb=pc1eth0tx,-8,\* \
CDEF:pc1eth1rxb=pc1eth1rx,8,\* \
CDEF:pc1eth1txb=pc1eth1tx,-8,\* \
CDEF:pc2eth0rxb=pc2eth0rx,8,\* \
CDEF:pc2eth0txb=pc2eth0tx,-8,\* \
CDEF:pc2eth1rxb=pc2eth1rx,8,\* \
CDEF:pc2eth1txb=pc2eth1tx,-8,\* \
AREA:pc1eth0rxb#D7CC00:PC1_ETH0-RX \
AREA:pc1eth0txb#D7CC00:PC1_ETH0-TX \
LINE2:pc1eth1rxb#D73600:PC1_ETH1-RX \
LINE2:pc1eth1txb#D73600:PC1_ETH1-TX \
LINE2:pc2eth0rxb#0101D6:PC2_ETH0-RX \
LINE2:pc2eth0txb#0101D6:PC2_ETH0-TX \
LINE2:pc2eth1rxb#00D730:PC2_ETH1-RX \
LINE2:pc2eth1txb#00D730:PC2_ETH1-TX

First we define the name of the output file and the time span (we could also have defined "--end").
We output some headers and Y-axis information.
Next we define the size of the PNG.
"DEF..." gets the information from the database.
"CDEF..." recalculates the original information (in our case we want to present bits instead of
bytes).
"AREA..." draws an area graph on the output.
"LINE2..." (or "LINE") draws a line graph.
In the color settings you can append an opacity value: "LINE3...#FF00007F" would display a
3-pixel red line with approximately 50% opacity.
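The CDEF expressions are RPN: "pc1eth0rx,8,*" pushes the rate and 8 and multiplies, converting bytes/s to bits/s, while the "-8" variants flip TX below the X axis so inbound and outbound traffic point in opposite directions. The same arithmetic in shell (function names are ours):

```shell
# byte rate -> bit rate (what the CDEF "...,8,*" computes), and the
# negated version used to draw TX below the zero line.
to_bits()     { echo $(( $1 * 8 ));  }
to_neg_bits() { echo $(( $1 * -8 )); }
```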

Save settings
Don't forget to save all your settings

Setting up monitoring using rrdtool (and rrdcollect)

Install programs
apk add rrdtool rrdcollect

Create rrd-databases
As we will use rrdcollect to collect our data for us, we will create all databases that the default config
for rrdcollect tries to use.

stat.rrd
rrdtool create /var/lib/rrdtool/stat.rrd \
--step 60 \
DS:cpu_user:COUNTER:120:0:U \
DS:cpu_nice:COUNTER:120:0:U \
DS:cpu_system:COUNTER:120:0:U \
DS:cpu_idle:COUNTER:120:0:U \
DS:cpu_iowait:COUNTER:120:0:U \
DS:cpu_irq:COUNTER:120:0:U \
DS:cpu_softirq:COUNTER:120:0:U \
DS:ctxt:COUNTER:120:0:U \
DS:page_in:COUNTER:120:0:U \
DS:page_out:COUNTER:120:0:U \
DS:processes:COUNTER:120:0:U \
DS:swap_in:COUNTER:120:0:U \
DS:swap_out:COUNTER:120:0:U \
RRA:AVERAGE:0.5:1:360 \
RRA:AVERAGE:0.5:10:1008 \
RRA:MAX:0.5:10:1008

memory.rrd
rrdtool create /var/lib/rrdtool/memory.rrd \
--step 60 \
DS:mem_total:GAUGE:120:0:U \
DS:mem_used:GAUGE:120:0:U \
DS:mem_free:GAUGE:120:0:U \
DS:mem_shared:GAUGE:120:0:U \
DS:mem_buffers:GAUGE:120:0:U \
DS:swap_total:GAUGE:120:0:U \
DS:swap_used:GAUGE:120:0:U \
DS:swap_free:GAUGE:120:0:U \
RRA:AVERAGE:0.5:1:360 \
RRA:AVERAGE:0.5:10:1008 \
RRA:MAX:0.5:10:1008

eth0.rrd
rrdtool create /var/lib/rrdtool/eth0.rrd \
--step 60 \
DS:bytes_in:COUNTER:120:0:U \
DS:pkts_in:COUNTER:120:0:U \
DS:bytes_out:COUNTER:120:0:U \
DS:pkts_out:COUNTER:120:0:U \
RRA:AVERAGE:0.5:1:360 \
RRA:AVERAGE:0.5:10:1008 \
RRA:MAX:0.5:10:1008

Note: If you chose to change the "--step 60" (which specifies the base interval in seconds with which
data will be fed into the RRD) then make sure to change the 'step' value in
/etc/rrdcollect/rrdcollect.conf to reflect your changes above.
Tip: In the above examples the first RRA in each .rrd is more precise (1 min interval), but it holds data
for a shorter time (1x360x60 equals 21600s/6h).
The second RRA averages over 10 min intervals and holds data for a longer period (10x1008x60 equals
604800s/168h/7d)
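The retention of an RRA is simply the base step, times the steps aggregated per row, times the number of rows. Checking both figures from the tip (function name is ours):

```shell
# RRA retention in seconds: base step * steps per row * rows.
retention() { echo $(( $1 * $2 * $3 )); }
# RRA:AVERAGE:0.5:1:360   at --step 60 keeps 60*1*360 s   (6 hours)
# RRA:AVERAGE:0.5:10:1008 at --step 60 keeps 60*10*1008 s (7 days)
```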

Gather information and put it in the RRD


rc-service rrdcollect start

Create graphs based on the rrd's


In the below examples you will notice that the .png file is exported to "/var/www/localhost/htdocs/".
You would need to either create /var/www/localhost/htdocs/ or change the path for the images.

Stat
rrdtool graph /var/www/localhost/htdocs/stat.png --start -1800 \
-a PNG -t "Stat" --vertical-label "bits/s" \
-w 1260 -h 400 -r \
DEF:cpu_user=/var/lib/rrdtool/stat.rrd:cpu_user:AVERAGE \
DEF:cpu_nice=/var/lib/rrdtool/stat.rrd:cpu_nice:AVERAGE \
DEF:cpu_system=/var/lib/rrdtool/stat.rrd:cpu_system:AVERAGE \
DEF:cpu_idle=/var/lib/rrdtool/stat.rrd:cpu_idle:AVERAGE \
DEF:cpu_iowait=/var/lib/rrdtool/stat.rrd:cpu_iowait:AVERAGE \
DEF:cpu_irq=/var/lib/rrdtool/stat.rrd:cpu_irq:AVERAGE \
DEF:cpu_softirq=/var/lib/rrdtool/stat.rrd:cpu_softirq:AVERAGE \
DEF:ctxt=/var/lib/rrdtool/stat.rrd:ctxt:AVERAGE \
DEF:page_in=/var/lib/rrdtool/stat.rrd:page_in:AVERAGE \
DEF:page_out=/var/lib/rrdtool/stat.rrd:page_out:AVERAGE \
DEF:processes=/var/lib/rrdtool/stat.rrd:processes:AVERAGE \
DEF:swap_in=/var/lib/rrdtool/stat.rrd:swap_in:AVERAGE \
DEF:swap_out=/var/lib/rrdtool/stat.rrd:swap_out:AVERAGE \
AREA:cpu_user#D7CC00:cpu_user \
AREA:cpu_nice#D7CC00:cpu_nice \
LINE2:cpu_system#D73600:cpu_system \
LINE2:cpu_idle#D73600:cpu_idle \
LINE2:ctxt#0101D6:ctxt \
LINE2:page_in#0101D6:page_in \
LINE2:page_out#D73600:page_out \
LINE2:processes#D73600:processes \
LINE2:swap_in#D73600:swap_in \
LINE2:swap_out#D73600:swap_out

Memory
rrdtool graph /var/www/localhost/htdocs/memory.png --start -1800 \
-a PNG -t "Memory" --vertical-label "" \
-w 1260 -h 400 -r \
DEF:mem_total=/var/lib/rrdtool/memory.rrd:mem_total:AVERAGE \
DEF:mem_used=/var/lib/rrdtool/memory.rrd:mem_used:AVERAGE \
DEF:mem_free=/var/lib/rrdtool/memory.rrd:mem_free:AVERAGE \
DEF:mem_shared=/var/lib/rrdtool/memory.rrd:mem_shared:AVERAGE \
DEF:mem_buffers=/var/lib/rrdtool/memory.rrd:mem_buffers:AVERAGE \
DEF:swap_total=/var/lib/rrdtool/memory.rrd:swap_total:AVERAGE \
DEF:swap_used=/var/lib/rrdtool/memory.rrd:swap_used:AVERAGE \
DEF:swap_free=/var/lib/rrdtool/memory.rrd:swap_free:AVERAGE \
CDEF:mem_total_x=mem_total,1024,\* \
CDEF:mem_used_x=mem_used,1024,\* \
CDEF:mem_free_x=mem_free,1024,\* \
CDEF:mem_shared_x=mem_shared,1024,\* \
CDEF:mem_buffers_x=mem_buffers,1024,\* \
CDEF:swap_total_x=swap_total,1024,\* \
CDEF:swap_used_x=swap_used,1024,\* \
CDEF:swap_free_x=swap_free,1024,\* \
LINE1:mem_total_x#000000:mem_total \
LINE2:mem_used_x#D7CC00:mem_used \
LINE2:mem_free_x#00CC00:mem_free \
LINE2:mem_shared_x#D73600:mem_shared \
LINE2:mem_buffers_x#D73600:mem_buffers \
LINE2:swap_total_x#000000:swap_total \
LINE2:swap_used_x#0101D6:swap_used \
LINE2:swap_free_x#0101D6:swap_free

eth0
rrdtool graph /var/www/localhost/htdocs/eth0.png --start -1h \
-a PNG -t "eth0" --vertical-label "bits/s" \
-w 1260 -h 400 -r \
DEF:bytes_in=/var/lib/rrdtool/eth0.rrd:bytes_in:AVERAGE \
DEF:bytes_out=/var/lib/rrdtool/eth0.rrd:bytes_out:AVERAGE \
CDEF:bits_in=bytes_in,8,\* \
CDEF:bits_out=bytes_out,-8,\* \
AREA:bits_in#339933:bits_in \
AREA:bits_out#aa3333:bits_out \
HRULE:0#000000
rrdtool graph /var/www/localhost/htdocs/eth0pkt.png --start -1800 \
-a PNG -t "eth0" --vertical-label "packets" \
-w 1260 -h 400 -r \
DEF:pkts_in=/var/lib/rrdtool/eth0.rrd:pkts_in:AVERAGE \
DEF:pkts_out=/var/lib/rrdtool/eth0.rrd:pkts_out:AVERAGE \
CDEF:pkts_out_negative=pkts_out,-1,\* \
LINE2:pkts_in#006600:pkts_in \
LINE2:pkts_out_negative#D73600:pkts_out \
HRULE:0#000000

Setting up Cacti
Install needed packages:
apk add lighttpd php cacti net-snmp-tools fcgi

Add php support to lighttpd (uncomment this line in /etc/lighttpd/lighttpd.conf):


include "mod_fastcgi.conf"

Save and exit editor.


Create a softlink for the cacti web files:
ln -s /usr/share/webapps/cacti /var/www/localhost/htdocs/cacti

Assign permissions to the lighttpd user:


chown -R lighttpd:lighttpd /var/www/localhost/htdocs/cacti/

If you are using a web server other than lighttpd, assign permissions to that server's user instead. If it
hasn't already been done, set up MySQL:
apk add mysql-client
mysql_install_db --user=mysql
/etc/init.d/mysql start
mysql_secure_installation

Create the cacti database and populate it


mysql -u root -p
mysql> create database cacti;

Grant the Cacti MySQL user access (give it a more secure password):

mysql> grant all on cacti.* to 'cactiuser'@'localhost' identified by 'MostSecurePassword';
mysql> flush privileges;

Quit the MySQL command prompt:


mysql> \q

Edit the config file and put in the password you used in the above step for the MySQL user:
vi /var/www/localhost/htdocs/cacti/include/config.php

Import the initial Cacti MySQL config:


mysql --user=cacti -p cacti < /usr/share/webapps/cacti/cacti.sql

Set lighttpd to autostart and start the daemon.


rc-update add lighttpd && rc-service lighttpd start

Browse to http://localhost/cacti/
In the web page click:
-> Next
Then select new install in case it is not selected:
-> New install, Next
Then finish
-> Finish
Login using:
user = admin, password = admin

Next you will be prompted to change the password; change it.

Add to crontab:
cd /etc/crontabs
vi root

add the following to the end of the file:


*/5 * * * * lighttpd php /var/www/localhost/htdocs/cacti/poller.php > /dev/null
2>&1

If you are using a different web server, change the "lighttpd" user accordingly:
*/5 * * * * "web server user" php /var/www/localhost/htdocs/cacti/poller.php >
/dev/null 2>&1
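"*/5" in the minute field means every minute divisible by 5, so the poller fires 12 times an hour, matching Cacti's default 5-minute polling interval. The matched minutes, enumerated in shell (function name is ours):

```shell
# List the minutes a "*/5" cron field matches (0, 5, ..., 55).
matching_minutes() {
    m=0
    while [ "$m" -lt 60 ]; do
        printf '%s ' "$m"
        m=$(( m + 5 ))
    done
}
```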

Add your devices and you're ready to start monitoring!

Setting up Zabbix
The purpose of this document is to assist in installing the Zabbix server software and Zabbix agent on
the Alpine Linux operating system. Instructions on how to configure and use Zabbix - as well as many
useful tutorials - can be found at http://www.zabbix.com.

Note: The minimum version of Alpine Linux required to install Zabbix is Alpine 2.2.

Install Lighttpd, and PHP


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Configure PostgreSQL
Install PostgreSQL
apk add postgresql postgresql-client

Now configure PostgreSQL:

/etc/init.d/postgresql setup
/etc/init.d/postgresql start
rc-update add postgresql

Install Zabbix
apk add zabbix zabbix-pgsql zabbix-webif zabbix-setup

Now we need to set up the zabbix database. Substitute '*********' in the example below for a real
password:
psql -U postgres
postgres=# create user zabbix with password '*********';
postgres=# create database zabbix owner zabbix;
postgres=# \q
cd /usr/share/zabbix/create/schema/
cat postgresql.sql | psql -U zabbix zabbix
cd ..
cd data/
cat data.sql | psql -U zabbix zabbix
cat images_pgsql.sql | psql -U zabbix zabbix

Create a softlink for the Zabbix web-frontend files:


rm -R /var/www/localhost/htdocs
ln -s /usr/share/webapps/zabbix /var/www/localhost/htdocs

Edit the PHP configuration to satisfy some Zabbix requirements. Edit /etc/php/php.ini and configure at
least the following values:
max_execution_time = 600
expose_php = off
date.timezone = <insert your timezone here>
post_max_size = 32M
upload_max_filesize = 16M
max_input_time = 600
memory_limit = 256M

Configure the following entries in /etc/zabbix/zabbix_server.conf, where DBPassword is the password
chosen for the database above:

DBName=zabbix
# Database user
DBUser=zabbix
# Database password
# Comment this line if no password used
DBPassword=*********
FpingLocation=/usr/sbin/fping

Start Zabbix server:


rc-update add zabbix-server
/etc/init.d/zabbix-server start

Fix permissions on conf directory.


chown -R lighttpd /usr/share/webapps/zabbix/conf

You should now be able to browse to the Zabbix frontend: http://yourservername/,
or to the Zabbix setup frontend: http://yourserverip/setup.php.
Follow the setup instructions to configure Zabbix, supplying the database information used above.
After setup, login using: Login name: Admin, Password: zabbix (as described at
http://www.zabbix.com/documentation/1.8/manual/installation)
Finally, Zabbix requires special permissions to use the fping binary.
chmod u+s /usr/sbin/fping

Install Zabbix Agent on Monitored Servers


Zabbix can monitor almost any operating system, including Alpine Linux hosts. Complete the
following steps to install the Zabbix agent on Alpine Linux.

Note: Support to allow zabbix-agentd to view running processes on Alpine Linux has been added
since linux-grsec-2.6.35.9-r2. Please ensure you have that kernel installed prior to attempting to run
zabbix-agentd.
Ensure that the readproc group exists (support added since alpine-baselayout-2.0_rc1-r1), by adding the
following line to /etc/group:
readproc:x:30:zabbix
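The line follows the standard /etc/group format name:password:GID:member-list. A quick way to verify that the zabbix user ended up in the member list, shown against a sample line rather than the live file:

```shell
# Check whether "zabbix" appears in the comma-separated member list
# of a group line (sample line stands in for /etc/group).
LINE="readproc:x:30:zabbix"
members=${LINE##*:}          # everything after the last colon
case ",$members," in
    *,zabbix,*) in_group=yes ;;
    *)          in_group=no  ;;
esac
```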

Install the agent package:


apk add zabbix-agent

Edit the /etc/zabbix/zabbix_agentd.conf file and configure at least the following option:
Server=<ip or hostname of zabbix server>
Hostname=<ip or hostname of zabbix agent>
ListenPort=10050

Start the zabbix-agent:


rc-update add zabbix-agentd
/etc/init.d/zabbix-agentd start

If you want to monitor remote machines using an SNMP agent, you have to add these packages on the
Zabbix server:
apk add net-snmp net-snmp-tools

And add these packages on remote machines:


apk add net-snmp

Optional: Crash course in adding hosts, checks, and notifications


Note: This is optional since it's not specific to Alpine Linux, but here are a couple of notes on how to
perform a simple check on a server that doesn't have the agent installed, and be notified on state
changes.
Administration -> Media Types -> Email

Setup server, helo, email from address

Administration -> Users

Setup each user who'll get notified, make sure they have media type "Email" added with
their address

Configuration -> Hosts -> Create host

In Linux Servers hostgroup


Define dns name, ip, connect by IP
If the machine is a simple networking device that will only be monitored using SNMP, add it
to Template_SNMPv2_Device, and you're done.

Configuration -> Templates -> Create template

Give it a name (Template_Alpine_Linux_Infra_HTTP)


In Templates group

Configuration -> Templates -> Template_Alpine_Linux_Infra_HTTP -> Items

Create Item
Host: Template_Alpine_Linux_Infra_HTTP
Description: HTTP Basic Check
Type: Simple_check
Key: http,80

Configuration -> Templates -> Template_Alpine_Linux_Infra_HTTP -> Triggers

Create Trigger
Name: "HTTP Trigger"
Expression: {Template_Alpine_Linux_Infra_HTTP:http,80.last(0)}#1
Severity: High

Configuration -> Actions ->

Create Action
name: Email notifications
Event source: triggers
Default Subject: add "{HOST.DNS}:" to the beginning
Default message: add "{HOST.DNS}:" to the beginning
Conditions: make host have to be from "Linux Servers" hostgroup, and
Template_Alpine_Linux_Infra_HTTP:HTTP trigger" is not 1
Email affected users

Dglog
Dglog is a CGI log analyzer for the DansGuardian web content filter, developed under the GNU
General Public License (GPL). You need to have DansGuardian installed and working.
Dglog2 reads the access.log file from the /var/log/dansguardian directory. You can use dglog to get a
basic analysis of the DansGuardian log. It is very simple and basic, but it is a good start.

Install http server and perl


apk add lighttpd perl

Configure lighttpd
Edit the /etc/lighttpd/lighttpd.conf file
nano +44 /etc/lighttpd/lighttpd.conf

Uncomment the following line:

include "mod_cgi.conf"

Save and exit.

Installing dglog
Make the necessary directory
mkdir -p /var/www/localhost/htdocs/cgi-bin/dglog
cd /var/www/localhost/htdocs/cgi-bin/dglog

Download
wget http://dansguardian.pl/pobierz/dglog2.pl -P /var/www/localhost/htdocs/cgi-bin/dglog

Edit the dglog2.pl file


nano +104 dglog2.pl

Uncomment and change


$cgipath = 'http://your.ip/cgi-bin/dglog/dglog2.pl';

Comment
$cgipath = $ENV{SCRIPT_NAME};

Starting
Starting http server and adding to boot.
/etc/init.d/lighttpd start && rc-update add lighttpd default

Done! Now you can browse to http://your.ip/cgi-bin/dglog/dglog2.pl and check the DansGuardian log
file.

Awstats
AWStats is a powerful tool which generates server statistics. AWStats works as a CGI or from the
command line and shows you all the information your log contains, in a graphical way.

Installing Lighttpd and Awstats


Install the additional packages:

apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Installing Awstats
apk add awstats -U -X http://build.alpinelinux.org:8009/backports/1.10/ --allow-untrusted

In /etc/lighttpd/lighttpd.conf:
Change the base server root folder
#server.document-root = var.basedir + "/htdocs"
server.document-root = "/usr/lib/awstats"

Configuring Awstats
Run awstats_configure.pl
awstats_configure.pl

-----> Running OS detected: Linux, BSD or Unix


Do you want to continue setup from this NON standard directory [yN] ? y
-----> Check for web server install
Config file path ('none' to skip web server setup): none
-----> Need to create a new config file ? y
-----> Define config file name to create
Example: www.mysite.com
Example: demo
Your web site, virtual server or profile name: awstats
-----> Define config file path
>/etc/awstats
Press ENTER to continue...
Press ENTER to finish...

Edit awstats.awstats.conf
nano /etc/awstats/awstats.awstats.conf
Search for the line LogFile="..." and configure your log path and file, e.g.
LogFile="/var/log/messages"

Start http server and add to boot


/etc/init.d/lighttpd start && rc-update add lighttpd default

Browse
http://AWSTATS_IP_NUMBER/cgi-bin/awstats.pl?config=awstats

Note: Awstats is not working yet with dansguardian logs.

Intrusion Detection using Snort


This guide will set up (list subject to change):

Snort
Barnyard (maybe)
BASE

This guide will assume:

You have a knowledge of your network setup (at least know which subnets exist).
You have Alpine 2.0.2 installed and working with networking setup.
You have had at least three cups of coffee this morning. And not decaf.

Get Development Packages


Install Alpine and Pre-packaged components
apk add alpine-sdk mysql-dev php-mysql lighttpd php-xml php-pear libpcap-dev php-gd
pcre-dev wireshark tcpdump tcpflow cvs bison flex

Download Non-Packaged Applications


Download the following packages
For the purpose of this document we will assume you download these files to /usr/src.
Download snort from www.snort.org. We used version 2.8.6.1 in this document.
Download the snort rules from http://www.snort.org/snort-rules/
Download BASE from http://sourceforge.net/projects/secureideas/files/BASE/base-1.4.5/base-1.4.5.tar.gz/download
Download adodb5 from http://sourceforge.net/projects/adodb/files/adodb-php5-only/adodb511-for-php5/adodb511.zip/download

Compile Snort
Uncompress snort with something like:

tar -zxvf snort-2.8.6.1.tar.gz

Then do the following:


cd snort-2.8.6.1
./configure --enable-dynamicplugin --with-mysql
make
make install

Configure Snort and Ruleset


mkdir /etc/snort
cd /etc/snort
cp /usr/src/snort-2.8.6.1/etc/* .
mv /usr/src/snortrules-snapshot-2861.tar.gz /etc/snort/.
tar -zxvf snortrules-snapshot-2861.tar.gz

Now edit the snort.conf file:


vi snort.conf

and change the following:

Change "var HOME_NET any" to "var HOME_NET X.X.X.X/X" (fill in the subnet with your
trusted network)
Change "var EXTERNAL_NET any" to "var EXTERNAL_NET !$HOME_NET" (this is
stating everything except HOME_NET is external)
Change "var RULE_PATH ../rules" to "var RULE_PATH /etc/snort/rules"
Change "var SO_RULE_PATH ../so_rules" to "var SO_RULE_PATH /etc/snort/so_rules"
Change "var PREPROC_RULE_PATH ../preproc_rules" to "var PREPROC_RULE_PATH
/etc/snort/preproc_rules"
Comment out the line that says "dynamicdetection directory /usr/local/lib/snort_dynamicrules"
(by placing a "#" in front of the line)
Scroll down the list to the section with "# output database: log, ..." and remove the "#" from in
front of this line.
Edit this line to look like this:
output database: log, mysql, user=root password=yoursecretpassword dbname=snort
host=localhost

Make note of the username, password, and dbname. You will need this information when we set
up mysql.
Find this line (line 194 in current version)

preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480


decompress_depth 20480
and remove from "compress_depth" to the end of the line. When done, the line will read:
preprocessor http_inspect: global iis_unicode_map unicode.map 1252

Find this line (line 207 in current version)


inspect_gzip \
and remove it.

Save and quit.

Start and Setup MySQL


(Need to add detail here on starting up MySQL for the first time)
/usr/bin/mysql_install_db --user=mysql
rc-update add mysql
/etc/init.d/mysql start
/usr/bin/mysqladmin -u root password 'password'
(set the password to the same password you specified in the snort.conf file)

mysql -u root -p

Once in mysql, type the following commands:


mysql> create database snort;
mysql> exit

Now create the database schema:


mysql -D snort -u root -p < /usr/src/snort-2.8.6.1/schemas/create_mysql

Configure PHP and PEAR


Edit /etc/php/php.ini and add the following under "Dynamic Extensions".
extension=mysql.so
extension=gd.so

Save and exit. From the command line, type the following:
pear install Image_Color
pear install Image_Canvas-alpha
pear install Image_Graph-alpha

pear install mail


pear install mail_mime

Start Apache or lighttpd


Need to decide which of these to use in production.

Setup BASE
mv /usr/src/adodb5 /var/www/localhost/htdocs/.
mv /usr/src/base-1.4.5/* /var/www/localhost/htdocs/.

Now, open your web browser and navigate to http://X.X.X.X/setup (where x.x.x.x is your server's IP
address)
Click continue on the first page.
Step 1 of 5: Enter the path to ADODB.
This is /var/www/localhost/htdocs/adodb5.
Step 2 of 5:
Database type = MySQL, Database name = snort, Database Host = localhost, Database
username = root, Database Password = YOUR_PASSWORD
Step 3 of 5: If you want to use authentication enter a username and password here.
Step 4 of 5: Click on Create BASE AG.
Step 5 of 5: once step 4 is done, at the bottom click on Now continue to step 5.
Copy the text on the screen, and then paste into a new file named
/var/www/localhost/htdocs/base_conf.php. Save that file.

Configure Barnyard
To improve performance.

Webmin

What is Webmin
A web-based interface for system administration for Unix or Linux. Setup user accounts, DNS, file
sharing, etc.
Note: Webmin has frequent security updates and patches. Please watch the webmin security alert page.
Tip: Consider using ACF (Alpine Configuration Framework) to get a web interface for Alpine.

Set up Webmin on Alpine Linux


This document is a quick copy-and-paste guide to set up Webmin and the DansGuardian module on Alpine Linux.
We assume you have dansguardian installed and running. What we will install is the following:

Perl
Webmin 1.560

Dansguardian 0.7.0beta1b module (for webmin)

Installing Perl
Install Perl
apk add perl

Installing and configuring Webmin


Install webmin
Download
cd /etc/
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.560.tar.gz

Unpack
gunzip webmin-1.560.tar.gz
tar xvf webmin-1.560.tar

Folder
mv webmin-1.560 webmin

Setup
cd webmin
./setup.sh

Config file directory [/etc/webmin]: enter


Log file directory [/var/webmin]: /var/log/webmin
Full path to perl: enter
Web server port (default 10000): enter (or other port number)
Login name (default admin): admin
Login password: admin-password
Password again: admin-password
Start Webmin at boot time (y/n): y

Browse
http://IP_NUMBER:10000 (or the ip number you chose)

To add dansguardian module


Download module

cd /tmp
wget http://downloads.sourceforge.net/project/dgwebminmodule/dgwebmindevel/0.7.0beta1b/dgwebmin-0.7.0beta1b.wbm

Browse to http://IP_NUMBER:10000 (or the ip number you chose)

Click on: Webmin / Webmin Configuration / Webmin Modules


In the Install Module box select install from: From local file, click on ... button
Then when the explorer box open, go to /tmp and select the dgwebmin-0.7.0beta1b.wbm file,
click ok
Back in the Webmin Modules page click on the Install Module button, and that's all.
Click on: Servers / DansGuardian Web Content to configure and use dansguardian module.

Note: Under Webmin Modules you can add more modules, either standard modules from
www.webmin.com or third-party modules.

PhpMyAdmin
phpMyAdmin is a free software tool written in PHP intended to handle the administration of MySQL
over the World Wide Web. phpMyAdmin supports a wide range of operations with MySQL. The most
frequently used operations are supported by the user interface (managing databases, tables, fields,
relations, indexes, users, permissions, etc), while you still have the ability to directly execute any SQL
statement.

Install lighttpd, PHP and MySql


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Install extra packages:

apk add mysql mysql-client php-mysql php-mysqli

Configuring MySql
/usr/bin/mysql_install_db --user=mysql
/etc/init.d/mysql start && rc-update add mysql default
/usr/bin/mysqladmin -u root password 'password'

Installing phpMyAdmin
Create a directory named webapps
mkdir -p /usr/share/webapps/

Download the source code


cd /usr/share/webapps
wget http://files.directadmin.com/services/all/phpMyAdmin/phpMyAdmin-4.5.0.2-all-languages.tar.gz

Unpack the archive and remove the archive


tar zxvf phpMyAdmin-4.5.0.2-all-languages.tar.gz
rm phpMyAdmin-4.5.0.2-all-languages.tar.gz

Rename the folder


mv phpMyAdmin-4.5.0.2-all-languages phpmyadmin

Change the folder permissions


chmod -R 777 /usr/share/webapps/

Create a symlink to the phpmyadmin folder


ln -s /usr/share/webapps/phpmyadmin/ /var/www/localhost/htdocs/phpmyadmin

Log on to your phpMyAdmin

Browse to: http://WEBSERVER_IP_ADDRESS/phpmyadmin and log on to phpMyAdmin using your
MySQL user and password.

PhpPgAdmin
phpPgAdmin is a web-based administration tool for PostgreSQL.

Install lighttpd, PHP, and postgresql


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Install extra packages


apk add postgresql postgresql-client php-pgsql

Configuring postgresql
/etc/init.d/postgresql setup

Start the postgresql service and add it to boot.


/etc/init.d/postgresql start && rc-update add postgresql default

Installing phpPgAdmin

Create a directory named webapps


mkdir -p /usr/share/webapps/

Download the latest release of phpPgAdmin.


cd /usr/share/webapps/
wget http://downloads.sourceforge.net/phppgadmin/phpPgAdmin-5.0.4.tar.gz

Unpack and delete the tar file


tar zxvf phpPgAdmin-5.0.4.tar.gz
rm phpPgAdmin-5.0.4.tar.gz

Change the folder name


mv phpPgAdmin-5.0.4 phppgadmin

Copy the sample config file


cp /usr/share/webapps/phppgadmin/conf/config.inc.php-dist \
   /usr/share/webapps/phppgadmin/conf/config.inc.php

Change the folder permissions


chmod -R 777 /usr/share/webapps/

Create a symlink to the phpPgAdmin folder


ln -s /usr/share/webapps/phppgadmin/ /var/www/localhost/htdocs/phppgadmin

Log on your phpPgAdmin


Browse to: http://WEBSERVER_IP_ADDRESS/phppgadmin and log on to phpPgAdmin using your
PostgreSQL user and password.

Note: If you are using the Alpine ACF, or if you changed the port in the lighttpd.conf file, then go to
the port you set. E.g. if you set the port to 8080, browse to:
http://WEBSERVER_IP_ADDRESS:8080/phppgadmin

PhpSysInfo
phpSysInfo is a simple application that displays information about the host it's running on.
The following information is shown:

Uptime
CPU
Memory
SCSI, IDE, PCI
Ethernet
Floppy
Video Information

Install lighttpd and PHP


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Installation of phpSysInfo
Create a directory named webapps
mkdir -p /usr/share/webapps/

Now get the current release of phpSysInfo.


cd /usr/share/webapps/

wget http://downloads.sourceforge.net/project/phpsysinfo/phpsysinfo/3.1.12/phpsysinfo-3.1.12.tar.gz

Unpack the archive in the current location


tar -xzf phpsysinfo-3.1.12.tar.gz && rm phpsysinfo-3.1.12.tar.gz

A config.php is needed to run phpSysInfo. The fastest way is to make a copy of the template file.
cp phpsysinfo/config.php.new phpsysinfo/config.php

Create a symlink that points to the webserver directory.


ln -s /usr/share/webapps/phpsysinfo/ /var/www/localhost/htdocs/phpsysinfo

Change the permission of the directory.


chmod -R 777 /usr/share/webapps/

Restart lighttpd
/etc/init.d/lighttpd restart

Now the page is up and running


http://IP-ADDRESS/phpsysinfo

HTTP
lighttpd is a simple, standards-compliant, secure, and flexible web server.

General information

Configuration file: /etc/lighttpd/lighttpd.conf


Standard directory for files: /var/www/localhost/htdocs/

Installation
lighttpd is available in the Alpine Linux repositories. To install, simply run the command below:

apk add lighttpd

Configuration
If you just want to serve simple HTML pages, lighttpd can be used out of the box. No further
configuration is needed.

Controlling Lighttpd
Start lighttpd
After the installation lighttpd is not running. To start lighttpd, use start.
rc-service lighttpd start

You will get a feedback about the status.


 * Caching service dependencies ...    [ ok ]
 * Starting lighttpd ...               [ ok ]

Stop lighttpd
If you want to stop the web server use stop.
rc-service lighttpd stop

Restart lighttpd
After changing the configuration file lighttpd needs to be restarted.
rc-service lighttpd restart

Runlevel
Normally you want to start the web server when the system is launching. This is done by adding
lighttpd to the needed runlevel.
rc-update add lighttpd default

Testing Lighttpd
This section is assuming that lighttpd is running. If you now launch a web browser from a remote
system and point it to your web server, you will see a page that says "404 - Not Found". Well, at the
moment there is no content available but the server is up and running.
Let's add a simple test page to get rid of the "404 - Not Found" page.

echo "Lighttpd is running..." > /var/www/localhost/htdocs/index.html

Now you will get the new "test page" if you reload.
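The one-liner above can be wrapped into a small reusable script. A minimal sketch, assuming the default Alpine docroot; `write_test_page` is a hypothetical helper (not part of lighttpd), and the demo below writes into a throwaway temp directory rather than the live docroot:

```shell
#!/bin/sh
# write_test_page: drop a minimal HTML test page into a docroot.
# Hypothetical helper; for a real server pass /var/www/localhost/htdocs.
write_test_page() {
    docroot="$1"
    mkdir -p "$docroot" || return 1
    cat > "$docroot/index.html" <<'EOF'
<html><head><title>lighttpd test</title></head>
<body>Lighttpd is running...</body></html>
EOF
}

# demo run against a temp directory so nothing real is touched
demo=$(mktemp -d)
write_test_page "$demo"
```

For the live server you would call `write_test_page /var/www/localhost/htdocs` instead.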

Lighttpd Https access


For higher security Lighttpd can be configured to allow https access.

Generate Certificate and Keys


Either generate the certificate and private key using openssl, or use the ones generated by
installing ACF. You don't need to do both; just do one or the other. The former method, with
OpenSSL, is preferred since it gives greater control.

Generate self-signed certificates with openssl


To generate certificates, openssl is needed.
apk add openssl

Change to the lighttpd configuration directory


cd /etc/lighttpd

With the command below the self-signed certificate and key pair are generated. A 2048 bit key is the
minimum recommended at the time of writing, so we use '-newkey rsa:2048' in the command. Change
to suit your needs. Answer all questions.
openssl req -newkey rsa:2048 -x509 -keyout server.pem -out server.pem -days 365 -nodes

Adjust the permissions


chmod 400 /etc/lighttpd/server.pem
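The same generation can be done non-interactively, which is handy for scripted installs. A sketch, assuming the openssl package is installed: `-subj` supplies the answers to the interactive questions (the subject fields below are placeholders; substitute your own), and the key and certificate are written to a temp directory here rather than /etc/lighttpd:

```shell
#!/bin/sh
# Non-interactive self-signed cert: -subj answers the questions, -nodes
# leaves the key unencrypted so lighttpd can read it at boot.
dir=$(mktemp -d)
openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
    -subj "/C=US/ST=Example/L=Example/O=Example/CN=www.example.com" \
    -keyout "$dir/server.pem" -out "$dir/cert.pem" 2>/dev/null

# sanity-check the certificate, then combine key + cert into the single
# pem file that lighttpd's ssl.pemfile expects
openssl x509 -in "$dir/cert.pem" -noout -subject
cat "$dir/cert.pem" >> "$dir/server.pem"
chmod 400 "$dir/server.pem"
```

For a real deployment, point the output paths at /etc/lighttpd/server.pem as above.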

Generate self-signed certificates with acf


Install the ACF
setup-acf

Move the generated certificate to the lighttpd configuration directory.


mv /etc/ssl/mini_httpd/server.pem /etc/lighttpd/server.pem

Adjust the permissions


chown root:root /etc/lighttpd/server.pem
chmod 400 /etc/lighttpd/server.pem

mini_httpd is no longer needed.


/etc/init.d/mini_httpd stop && rc-update del mini_httpd

Removing the mini_httpd package


apk del mini_httpd

Configure Lighttpd
The configuration of lighttpd needs to be modified.
nano /etc/lighttpd/lighttpd.conf

Uncomment this section and adjust the path so 'ssl.pemfile' points to where our cert/key pair is stored.
Or copy the example below into your configuration file if you saved it to /etc/lighttpd/server.pem.
ssl.engine  = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"

You'll also want to set the server to listen on port 443. Replace this:
server.port = 80

with this:
server.port = 443

Restart lighttpd
rc-service lighttpd restart

Security
BEAST attack, CVE-2011-3389
To help mitigate the BEAST attack add the following to your configuration:
#### Mitigate BEAST attack:
# A stricter base cipher suite. For details see:
#   http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3389
#   or http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3389
ssl.cipher-list = "ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM"
# Make the server prefer the order of the server-side cipher suite instead of
# the client suite. This is necessary to mitigate the BEAST attack (unless you
# disable all non-RC4 algorithms).
# This option is enabled by default, but only used if ssl.cipher-list is set.
ssl.honor-cipher-order = "enable"
# Mitigate CVE-2009-3555 by disabling client-triggered renegotiation.
# This option is enabled by default.
ssl.disable-client-renegotiation = "enable"

Perfect Forward Secrecy (PFS)


Perfect Forward Secrecy isn't perfect, but it does mean that an adversary who gains the private
key of a server cannot decrypt every past SSL/TLS session. Without it, an adversary can simply
obtain the private key of a server and decrypt any and all SSL/TLS sessions using that key. This
is a major security and privacy concern, so using PFS is probably a good idea long term. It means
that every session would have to be decrypted individually, regardless of how it was obtained.
Ultimately, choosing SSL/TLS ciphers is the usual trade-off between security and usability:
increasing one usually decreases the other. Nonetheless, an example to prevent the BEAST attack
and offer PFS is:
ssl.cipher-list = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"

POODLE attack (CVE-2014-3566)


In light of the recent POODLE findings, it's advisable, wherever possible, to turn off support for
SSLv3. This is quite simple: you can just append the following to your cipher list to explicitly
disable SSLv3 support:
:!SSLv3

FREAK attack (CVE-2015-0204)


To prevent the so called FREAK attack, keep your SSL library up to date, and do not offer support for
export grade ciphers.
There are multiple ways to do this, such as turning off export cipher support in the cipher list:
:!EXPORT

Although now might be a good time to review the cipher list in use, and use a stronger, explicit set like
the one from the Perfect Forward Secrecy section, or another example:
ssl.cipher-list = "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK"

Also see https://freakattack.com/
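Before deploying any cipher list, it helps to see exactly which ciphers a given string expands to. A quick check with the openssl command-line tool, assuming the openssl package is installed (the cipher string below is an illustrative base; append exclusions such as :!SSLv3 or :!EXPORT to match your configuration):

```shell
#!/bin/sh
# Expand a cipher-list string into the individual ciphers it enables,
# one per line, so exclusions can be verified by eye.
openssl ciphers 'HIGH:!aNULL:!MD5' | tr ':' '\n'
```

If the command prints an error or an empty list, the string would leave lighttpd with no usable ciphers.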

Other configurations
The following are example configs; they will likely need to be modified to suit your particular setup.
Nonetheless, they should give an indication of how to implement the relevant configuration options.

redirecting HTTP to HTTPS


Any requests to the server via HTTP (TCP port 80 by default) will be redirected to HTTPS (port 443):
server.port          = 80
server.username      = "lighttpd"
server.groupname     = "lighttpd"
server.document-root = "/var/www/localhost/htdocs"
server.errorlog      = "/var/log/lighttpd/error.log"
dir-listing.activate = "enable"
index-file.names     = ( "index.html" )
mimetype.assign      = ( ".html" => "text/html", ".txt" => "text/plain",
                         ".jpg" => "image/jpeg", ".png" => "image/png",
                         "" => "application/octet-stream" )
## Ensure mod_redirect is enabled!
server.modules = (
  "mod_redirect",
)
$SERVER["socket"] == ":80" {
  $HTTP["host"] =~ "(.*)" {
    url.redirect = ( "^/(.*)" => "https://%1/$1" )
  }
}
$SERVER["socket"] == ":443" {
  ssl.engine  = "enable"
  ssl.pemfile = "/etc/lighttpd/certs/www.example.com.pem"
  ## Make sure the line above points to your SSL cert/key pair!
}

Serving both HTTP and HTTPS requests


Simple, just add in the SSL server port, enable the SSL engine and point to the relevant SSL cert/key
pair:
server.port          = 80
server.username      = "lighttpd"
server.groupname     = "lighttpd"
server.document-root = "/var/www/localhost/htdocs"
server.errorlog      = "/var/log/lighttpd/error.log"
dir-listing.activate = "enable"
index-file.names     = ( "index.html" )
mimetype.assign      = ( ".html" => "text/html", ".txt" => "text/plain",
                         ".jpg" => "image/jpeg", ".png" => "image/png",
                         "" => "application/octet-stream" )
## Below is the HTTPS setup. Make sure to point at the relevant cert/key pair
## for HTTPS to work!
$SERVER["socket"] == ":443" {
  ssl.engine  = "enable"
  ssl.pemfile = "/etc/lighttpd/certs/www.example.com.pem"
}

Setting Up Lighttpd with PHP


To set up lighttpd to use PHP, simply follow the instructions on the Setting Up Lighttpd With FastCGI
wiki page.
You will then need a simple test page to prove that PHP is working. Assuming you are using the
default directory of /var/www/localhost/htdocs/ for serving pages, create a test page:
echo "<?php phpinfo(); ?>" > /var/www/localhost/htdocs/index.php

Note the page must have the file extension '.php' or it will not be treated as PHP. This is a simple yet
very common (and infuriating) mistake to make!
Now test the page by opening your browser and requesting the index.php page, you should see an
extensive page featuring a lot of PHP related information. This page should not, of course, be used in
production but merely for testing.

Setting Up Lighttpd With FastCGI


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Apache

Installing Apache
Add the Apache2 package with the command:
#apk add apache2

Testing

Move to the directory where your site will reside:


#cd /var/www/localhost/htdocs

And create an index.html file to test if everything is ok:


#vi index.html

And add the following lines:


<html>
<head>
<title> My first page </title>
</head>
<body>
It works!
</body>
</html>

That done, let us start apache2 web server:


rc-service apache2 start

Now access: http://<ip_address> and if everything is ok you will see the html page.

Ending
Finally let us set up apache2 to start on operating system startup:
rc-update add apache2

Now you can create your html site and host it in this directory.

Setting Up Apache with PHP

Installing Apache + PHP


Add the main packages with the command:

#apk add apache2 php-apache2

Testing
Move to the directory where your site will reside:
#cd /var/www/localhost/htdocs

And create an index.php file to test if everything is ok:


#vi index.php

Add the following lines in the file:


<?php
phpinfo();
?>

That done, let us start apache2 web server:


rc-service apache2 start

Now access: http://<ip_address> and if everything is ok you will see the php info page.

Ending
Finally let us set up apache2 to start on operating system startup:
rc-update add apache2

Now you can create your php site and host it in this directory.

Apache authentication: NTLM Single Signon


NTLM single sign on under Apache
Install needed packages (you will need both the main and testing repositories from edge):
apache2
apache-mod-auth-ntlm-winbind
samba (joined to a Windows Domain) with winbind running

add apache user to winbind group


Note: This howto does not show how to join Samba to a Windows domain, only how to set up the
Apache authentication helper that uses the NTLM protocol while authenticating to such a domain.

Add to httpd.conf (virtual host):
AuthType NTLM
NTLMauth on
NTLMAuthHelper "/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp"
Require user jbilyk

Don't forget to customize the final line with the username(s) that you wish to limit usage to.
Alternatively, make the final line "Require valid-user" and change the helper line to include
something like "--require-membership-of="WORKGROUP\Domain Users"".
Restart apache and test

High Availability High Performance Web Cache

Introduction
This document explains how to use HAProxy and ucarp to provide high-performance and high-availability services. This document has been tested using Alpine Linux 2.2.3.
In this document we will use the Squid web cache as the example service. Squid typically uses only a
single processor, even on a multi-processor machine. To get increased web-caching performance, it is
better to scale the web cache out across multiple (cheap) physical boxes. Although web caching is used
as the example service, this document applies to other services, such as mail, web acceleration, etc.

Network Diagram
In the end, we will have an architecture that looks like this:

The workstations all connect to the HAProxy instance at 192.168.1.10. 192.168.1.10 is a virtual IP
controlled by ucarp; that is, HAProxy runs on one of the web cache servers at any given time, but any
of the web caches can be the HAProxy instance.
HAProxy distributes the web traffic across all live web cache servers, which cache the resources from
the Internet.

Benefits

The HAProxy server in the diagram is 'virtual' - it represents the service running on any of the
web cache servers
Each web cache server is configured as a mirror of the others - this simplifies adding
additional capacity.
HAProxy will ignore servers that have either failed or been taken offline, and notices when
they are returned to service
This configuration allows individual servers to be upgraded or modified in a "rolling
blackout", with no downtime for users.
Ucarp automatically restarts the HAProxy service on another cache if the server running
HAProxy crashes. This is automatic recovery with typically less than 3 seconds of downtime
from the client's perspective.

Initial Services
The first step in getting high-availability is to have more than one server; do the following on each of
cache1-4

install squid

apk add squid

create a minimal /etc/squid/squid.conf

acl all src all
acl localhost src 127.0.0.1/32
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

icp_port 3130
icp_access allow all
cache_peer 192.168.1.11 sibling 3128 3130
cache_peer 192.168.1.12 sibling 3128 3130
cache_peer 192.168.1.13 sibling 3128 3130
cache_peer 192.168.1.14 sibling 3128 3130

http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128
forwarded_for off

Warning: This is a minimal configuration for demonstration purposes only. Likely you will need to
configure more restrictive ACLs

The icp and cache_peer entries allow all four squid servers to share cached information. Once one
server has retrieved an item, the others can get it from that server instead of retrieving it
themselves.

ensure squid starts on boot

rc-update add squid


/etc/init.d/squid start

At this point, you should be able to set your browser to use any of 192.168.1.1[1-4]:3128 as a proxy
address, and get to the Internet. Because this config file does not use any optimizations, browsing will
be slower than normal. This is to be expected. Any optimizations to the squid configuration you make
to one server can be applied to all in the array. The purpose of this example is to show that the service
is uniform across the array.

Ucarp Virtual IP Manager


Ucarp runs on all the servers and makes sure that a virtual IP address is available. In the example
diagram we use the virtual IP of 192.168.1.10.

The AlpineLinux ucarp init script expects to be run on a multi-homed machine, with the ability
to run ucarp processes on each interface. Copy the scripts for the interface:

apk add ucarp


ln -s /etc/init.d/ucarp /etc/init.d/ucarp.eth0
cp /etc/conf.d/ucarp /etc/conf.d/ucarp.eth0

edit the /etc/conf.d/ucarp.eth0 file:

REALIP=
VHID=1
VIP=192.168.1.10
PASSWORD=SecretPassword

Create /etc/ucarp/vip-up-eth0.sh

#!/bin/sh
# Add the VIP address
ip addr add $2/24 dev $1
for a in 330 440 550; do beep -f $a -l 100; done

Create /etc/ucarp/vip-down-eth0.sh

#!/bin/sh
# Remove the VIP address
ip addr del $2/24 dev $1
for a in 550 440 330; do beep -f $a -l 100; done

Make the scripts executable

chmod +x /etc/ucarp/*.sh
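Before starting ucarp, it can help to sanity-check the values in the conf.d file. A minimal sketch: `check_ucarp_conf` is a hypothetical helper (not part of Alpine's init scripts), and the values written below are the example ones from this page, with REALIP filled in for illustration only:

```shell
#!/bin/sh
# check_ucarp_conf: source a ucarp conf file and verify the essentials --
# VIP must be a dotted quad, VHID and PASSWORD must be non-empty.
check_ucarp_conf() {
    . "$1"
    echo "$VIP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
    [ -n "$VHID" ] || return 1
    [ -n "$PASSWORD" ] || return 1
}

# demo run against a throwaway copy of the example values
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
REALIP=192.168.1.11
VHID=1
VIP=192.168.1.10
PASSWORD=SecretPassword
EOF
check_ucarp_conf "$tmp" && echo "ucarp conf looks sane"
```

On a real server you would run it against /etc/conf.d/ucarp.eth0 instead.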

Start ucarp and save the changes

rc-update add ucarp.eth0


/etc/init.d/ucarp.eth0 start
lbu commit

Follow the above steps for each of the Cache servers.

Once it is running on each server, unplug the network cable on each server in turn. After a couple
seconds, the tone should sound on the other boxes as they hold an election to select a new master.
(Note, all boxes will briefly become master, and then the others will quickly demote themselves.) You
should be able to ping 192.168.1.10 no matter which server is elected master.

HA Proxy Load Balancer


The HA Load Balancer:

Automatically distributes requests across the web caches
Automatically detects when a cache changes state, and disables or re-enables sending requests to it.

Install haproxy on each of the web caches.

apk add haproxy

We are using haproxy as a simple tcp load balancer, so we don't need the advanced http
options. The following is a simple `/etc/haproxy.cfg` file

global
    uid haproxy
    gid haproxy
    chroot /var/empty

defaults
    # 30 minutes of waiting for a web request is crazy,
    # but some users do it, and then complain the proxy
    # broke the interwebs.
    timeout client 30m
    timeout server 30m
    # If the server doesn't respond in 4 seconds it's dead
    timeout connect 4s

listen http_proxy 192.168.1.10:8080
    mode tcp
    balance roundrobin
    server cache1 192.168.1.11:3128 check
    server cache2 192.168.1.12:3128 check
    server cache3 192.168.1.13:3128 check
    server cache4 192.168.1.14:3128 check
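The four server lines follow one pattern, so they can be generated rather than typed, which keeps them consistent as caches are added. A sketch using the IPs and ports from this page:

```shell
#!/bin/sh
# Emit one haproxy "server" line per cache IP; extend the list when the
# array grows and paste the output into /etc/haproxy.cfg.
i=0
for ip in 192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14; do
    i=$((i + 1))
    printf '    server cache%d %s:3128 check\n' "$i" "$ip"
done
```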

If your squid caches have public, routeable IP addresses, you may wish to change the balance algorithm
to source. Some web applications get confused when a client's IP address changes between requests.
Using balance source load balances clients across all web proxies, but once a client is assigned to a
specific proxy, it continues to use that proxy.

make sure you have the haproxy user and group

adduser -s /bin/false -D -h /dev/null -H haproxy

Do not start haproxy or add it to the rc scripts; we will do that later.

Do the above steps for cache2-4; Note that `/etc/haproxy.cfg` is exactly the same for all
instances.

On the cache that currently owns the 192.168.1.10 ucarp address

/etc/init.d/haproxy start

Change your browser to use 192.168.1.10:8080 as the proxy address.


o You should be able to browse the Internet
o Your browsing should be distributed across all 4 squid servers

Enabling the HA Service


After following the above instructions, you should have the following in place:

Squid is running on each of the web caches
  o You can connect to each web cache by ip address and browse the internet
ucarp is running on all instances
  o You can ping 192.168.1.10
  o If you unplug the ethernet cable to the box serving 192.168.1.10, another takes its place
haproxy is configured on each cache server, but is not running

Update `/etc/ucarp/vip-up-eth0.sh` on each server to include a statement to start haproxy

#!/bin/sh
# Add the VIP address
ip addr add $2/24 dev $1
/etc/init.d/haproxy start
for a in 330 440 550; do beep -f $a -l 100; done

Update `/etc/ucarp/vip-down-eth0.sh` on each server to include a statement to stop haproxy

#!/bin/sh
/etc/init.d/haproxy stop
# Remove the VIP address
ip addr del $2/24 dev $1
for a in 550 440 330; do beep -f $a -l 100; done

Save the changes

lbu commit

Maintenance
The haproxy process load balances requests across all available web proxies. If a proxy crashes,
haproxy automatically removes it from the pool, and redirects incoming requests to the remaining
available proxies. Once the proxy is returned to service, haproxy automatically notices and starts
sending requests to it again.

The ucarp process ensures that "haproxy" is running on the virtual ip (192.168.1.10) at all times.

Clients do not need to be reconfigured, even if the machine haproxy is running on crashes. Another box
takes on the virtual address and things "just work"

To remove a proxy from service, you can just take it down. A more graceful way is to update
haproxy.cfg and tell haproxy to use the new config. For instance, to delete 192.168.1.11
from the pool:

log in to 192.168.1.10

edit /etc/haproxy.cfg and comment out the 192.168.1.11 line:

# server cache1 192.168.1.11:3128 check

request a soft-restart of haproxy

/usr/bin/haproxy -D -p /var/run/haproxy.pid -f /etc/haproxy.cfg -sf \
    $( cat /var/run/haproxy.pid )

This causes the existing haproxy to finish connections, but not accept new ones. Eventually, the old
haproxy process will die, leaving only the new process. Since web requests can take a long time, the
old haproxy instance may linger for several minutes. Make sure the old process has terminated before
taking down the web proxy server.

Similarly, to add a web cache into the farm, use the above command to have haproxy start using the
new config file.
The "-sf" flag allows rolling maintenance of the web caches with no observable effect on the clients.
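The soft-restart invocation can be wrapped in a small function so the pid handling is easy to review. A sketch: `soft_restart` is a hypothetical wrapper (not shipped with haproxy), and with DRY_RUN=1 it only prints the command it would run, so it can be tested without touching a live proxy:

```shell
#!/bin/sh
# soft_restart: start a new haproxy with -sf so the old process finishes
# its connections and exits. DRY_RUN=1 prints the command instead.
soft_restart() {
    pidfile="$1"
    cfg="$2"
    cmd="/usr/bin/haproxy -D -p $pidfile -f $cfg -sf $(cat "$pidfile")"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# demo with a fake pid file; the real pid file is /var/run/haproxy.pid
pidfile=$(mktemp)
echo 1234 > "$pidfile"
DRY_RUN=1
soft_restart "$pidfile" /etc/haproxy.cfg
```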

Setting up Transparent Squid Proxy


This document covers how to set up squid as a transparent proxy server.
The following is assumed:

You have already installed a server running Alpine Linux as a base, with Alpine 1.10.6 or later.
Your proxy server will reside in a DMZ zone, separated from the network segment your
clients are in. Other implementations are covered here.
o In order to transparently redirect web traffic from your clients to the proxy server in
the DMZ, you will need to configure your intercepting router to DNAT traffic.

Note: If you're looking to setup an explicit squid proxy, or don't know what the differences are, please
see this wiki page

Install Squid
apk add acf-squid

Configure squid with at least the following configuration:

Note: substitute proxy.example.com and the mynet acl for your own desired hostname and
network subnet respectively
## This makes squid transparent in versions before squid 3.1
#http_port 8080 transparent

## For squid 3.1 and later, use this instead
http_port 8080 intercept
## Note that you need Squid 3.4 or above to support IPv6 for intercept mode.
## Requires ip6tables support.
visible_hostname proxy.example.com
cache_mem 8 MB
cache_dir aufs /var/cache/squid 900 16 256
# Even though we only use one proxy, this line is recommended
# More info: http://www.squid-cache.org/Versions/v2/2.7/cfgman/hierarchy_stoplist.html
hierarchy_stoplist cgi-bin ?
# Keep 7 days of logs
logfile_rotate 7
access_log /var/log/squid/access.log squid
cache_store_log none
pid_filename /var/run/squid.pid
# Web auditors want to see the full uri, even with the query terms
strip_query_terms off
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320

coredump_dir /var/cache/squid
#
# Authentication
#
# Optional authentication methods (NTLM, etc) can go here
#
# Access Control Lists (ACL's)
#
# These settings are recommended by squid
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
# Standard ACL settings
acl QUERY urlpath_regex cgi-bin \? asp aspx jsp
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 8004 9000
acl Safe_ports port 21 70 80 81 210 280 443 563 499 591 777 1024 1022 1025-65535
acl purge method PURGE
acl CONNECT method CONNECT
# Require authentication
#acl userlist proxy_auth REQUIRED
acl userlist src 0.0.0.0/0.0.0.0
# Definition of network subnets
acl mynet src 192.168.0.0/24
#
# Access restrictions
#
cache deny QUERY
# Only allow cachemgr access from localhost

http_access allow manager localhost
http_access deny manager
# Only allow purge requests from localhost
http_access allow purge localhost
http_access deny purge
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# Allow hosts in mynet subnet to access the entire Internet without being
# authenticated
http_access allow mynet
# Denying all access not explicitly allowed
http_access deny all
http_reply_access allow all
icp_access allow all

Check the squid configuration with:

squid -k parse

(Once squid is running, squid -k reconfigure tells it to re-read the configuration.)

Start squid:
/etc/init.d/squid start

Add squid to boot-up sequence:


rc-update add squid default

Remember to add port 8080 to the permitted ports clients can connect on, in any firewalls on your
proxy server or in between the proxy and the clients.
If you are running an Alpine Linux firewall between the proxy and the clients, you will need to
redirect all traffic from your client subnet on port 80 to the proxy server on port 8080 to
allow web traffic to be proxied.
If you are running shorewall, add this to your /etc/shorewall/rules file:

Note: Substitute loc and dmz:172.16.1.2:8080 for your client subnet zone and proxy server zone
and IP as defined in /etc/shorewall/zones.
## This forces all web traffic to be redirected to the proxy on port 8080
DNAT    loc    dmz:172.16.1.2:8080    tcp    80
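The rule can also be appended from the shell, which is convenient when scripting the firewall build. A sketch: the rule text is the one above, but RULES points at a temporary file here so nothing real is modified; on the firewall it would be /etc/shorewall/rules:

```shell
#!/bin/sh
# Append the web-redirect DNAT rule to a shorewall rules file.
# Columns: ACTION  SOURCE  DEST  PROTO  DPORT
RULES=$(mktemp)
printf 'DNAT\tloc\tdmz:172.16.1.2:8080\ttcp\t80\n' >> "$RULES"
```

Follow with `shorewall check` before restarting, as below.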

And restart shorewall with:


shorewall check
shorewall restart

Alternatively, you can configure Squid to listen on port 80. With this method, it is usual for either the
proxy to be configured 'in-line' so that due to physical cabling traffic must pass through the proxy (in
this instance Squid will usually run on the same physical box as a router, which will be the clients'
default gateway), or alternatively (as described above) traffic on port 80 is re-directed by a router or
firewall to the proxy (but remains on port 80). WCCP on a Cisco router is one method to redirect the
traffic in this way.

Drupal
Drupal is a free and open source content management system (CMS) written in PHP and distributed
under the GNU General Public License. It is used as a back-end system for at least 1% of all websites
worldwide ranging from personal blogs to larger corporate and political sites including whitehouse.gov
and data.gov.uk. It is also used for knowledge management and business collaboration.

Install lighttpd, PHP and MySql


Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql
php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext
php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Install extra packages:


apk add mysql mysql-client php-mysql php-mysqli php-pdo_mysql

Installing and configuring drupal


Create a folder named webapps
mkdir -p /usr/share/webapps/

Download the source code archive

cd /usr/share/webapps/
wget http://ftp.drupal.org/files/projects/drupal-7.19.tar.gz

Unpack the archive and delete the tarball afterwards


tar zxvf drupal-7.19.tar.gz
rm drupal-7.19.tar.gz

Change the folder name


mv drupal-7.19 drupal

Change the folder permissions


chown -R lighttpd /usr/share/webapps/

Creating settings file


cp /usr/share/webapps/drupal/sites/default/default.settings.php \
   /usr/share/webapps/drupal/sites/default/settings.php

Create a symlink to the drupal folder


ln -s /usr/share/webapps/drupal/ /var/www/localhost/htdocs/drupal


Config MySQL
/usr/bin/mysql_install_db --user=mysql
/etc/init.d/mysql start && rc-update add mysql default
/usr/bin/mysqladmin -u root password 'password'

Create the drupal database

mysql -u root -p
CREATE DATABASE drupal;
GRANT ALL PRIVILEGES ON drupal.* TO "root";
FLUSH PRIVILEGES;
EXIT

Config your drupal


Browse to http://WEBSERVER_IP_ADDRESS/drupal and install Drupal, completing the information as appropriate in the web browser.
Drupal installation steps:
Note: After selecting each option, press the "Save and continue" button.
1 - Select an installation profile

Standard ( Install with commonly used features pre-configured.)

Minimal ( Start with only a few modules enabled.)

2 - Choose language

English (built-in)

3 - Verify requirements

(Nothing to do here if all is ok)

4 - Set up database

Database type: MySQL
Database name: drupal
Database username: root
Database password: your-mysql-password

5 - Configure site

SITE INFORMATION

SITE MAINTENANCE ACCOUNT

Site name
Site e-mail address

Username
E-mail address
Password
Confirm password

SERVER SETTINGS

Default country
Default time zone

UPDATE NOTIFICATIONS

Check for updates automatically


Receive e-mail notifications

After clicking "Save and continue" you will see "Drupal installation complete":
Congratulations, you installed Drupal!
Review the messages above before visiting your new site.
Your Drupal installation is now working; to access it, go to http://WEBSERVER_IP_ADDRESS/drupal and enjoy!

WordPress
WordPress is web software you can use to create a beautiful website or blog. A wide range of plugins is available for WordPress.

Install lighttpd, PHP, and MySQL

Install the additional packages:
apk add lighttpd php-common php-iconv php-json php-gd php-curl php-xml php-pgsql php-imap php-cgi fcgi
apk add php-pdo php-pdo_pgsql php-soap php-xmlrpc php-posix php-mcrypt php-gettext php-ldap php-ctype php-dom

Configure Lighttpd
vi /etc/lighttpd/lighttpd.conf

Uncomment line:
include "mod_fastcgi.conf"

Start lighttpd service and add to needed runlevel


rc-service lighttpd start && rc-update add lighttpd default

Install extra packages:


apk add php-mysql mysql mysql-client php-zlib

Installing and configuring WordPress


Create a directory named webapps

mkdir -p /usr/share/webapps/

Download the latest Wordpress source files


cd /usr/share/webapps/
wget http://wordpress.org/latest.tar.gz

Unpack the archive and delete it afterwards


tar -xzvf latest.tar.gz
rm latest.tar.gz

Change the folder permissions


chown -R lighttpd /usr/share/webapps/

Create a symlink to the wordpress folder


ln -s /usr/share/webapps/wordpress/ /var/www/localhost/htdocs/wordpress

Config and start MySQL


/usr/bin/mysql_install_db --user=mysql
/etc/init.d/mysql start && rc-update add mysql default
/usr/bin/mysqladmin -u root password 'password'

Create the WordPress database


mysql -u root -p
CREATE DATABASE wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY 'wordpress password';
FLUSH PRIVILEGES;
EXIT

Config your WordPress

Browse to http://WEBSERVER_IP_ADDRESS/wordpress/

Click on: "Create a Configuration File"


Click on: "Lets go!"

Database Name: wordpress


User Name: wordpress
Password: <wordpress password>
Database Host: localhost
Table Prefix: wp_

You may need to create wp-config.php manually: copy wp-config-sample.php to wp-config.php, set the 'DB_NAME', 'DB_USER' and 'DB_PASSWORD' defines, and save the file. After you've done that, click "Run the install."
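The manual edit can be scripted with sed. A sketch, assuming the stock placeholder names from wp-config-sample.php (database_name_here, username_here, password_here); it is demonstrated here on a scratch copy rather than the real file:

```shell
# Scratch stand-in for wp-config-sample.php with the stock placeholders
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
EOF

# Substitute the placeholders with the values used when the database was created
sed -i \
    -e "s/database_name_here/wordpress/" \
    -e "s/username_here/wordpress/" \
    -e "s/password_here/wordpress password/" \
    "$cfg"

cat "$cfg"
```

On a real system, copy wp-config-sample.php to wp-config.php first and point sed at that copy.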

Enter Information as needed

Site Title
Username
Password, twice
Your E-mail

After you've done that, click "Install WordPress"


WordPress is now working; to access it, go to http://WEBSERVER_IP_ADDRESS/wordpress and enjoy!

Mail

Hosting services on Alpine


Introduction
Alpine is well suited for hosting email-, web- or other network-related services.
Your biggest task is to figure out what you want your system to do.

Preparing Alpine
First you need to get Alpine up and running.
Follow the Installing_Alpine instructions on how to get your Alpine booted.
If nothing else is mentioned in the instructions below, you should use the latest stable release; at the time of writing this was alpine-3.2.3-x86_64.iso (299 MiB), released 2015-08-13.
VServer or not
VServer itself has nothing to do with the various services.
But if you intend to run multiple services on the same box (e.g. mail and web hosting), it might be wise to run the various services in separate vserver guests.

Setting_up_a_basic_vserver | Basic information on how to set up vserver hosts/guests

Mail
We split the 'Mail' section into various tasks.
One task is to gather and process mail; others are to filter out spam and viruses.
Finally we need to make sure the user can fetch/read his mail.

Receive mail

Setting_up_postfix_with_virtual_domains | Postfix can be configured in multiple ways - here we do it with virtual domains

Processing mail - Virus protection

Setting_up_clamav_for_postfix | Refers to the Setting_up_postfix_with_virtual_domains instructions

Processing mail - Spam protection

Setting_up_gross_for_postfix | Refers to the Setting_up_postfix_with_virtual_domains instructions
Setting_up_clamsmtp | Use ClamSMTP to provide advanced content and virus filtering for spam
http://www.sanesecurity.co.uk/ | Another good way of catching spam is Sanesecurity and MSRBL definitions

Delivering mail to the user

Setting_up_dovecot_with_imap_and_ssl | Secure way to fetch your mail from the mailer daemon
Setting_up_dovecot_with_imap_and_tls | Secure way to fetch your mail from the mailer daemon

Other Mail-related documents

Hosting_Web/Email_services_on_Alpine | Describes multiple services in one document

Protecting_your_email_server_with_Alpine | Describes multiple services in one document

Web

Setting_up_trac_wiki | A ticket/wiki system


Lighttpd | Lighttpd web server
Cherokee | Cherokee web server
Apache | Apache web server
Darkhttpd | Darkhttpd web server

SSH

Setting_up_a_ssh-server | OpenSSH and Dropbear SSH servers

DNS

Setting_up_unbound_DNS_server | A validating, recursive, and caching DNS resolver that supports DNSSEC

Setting_up_nsd_DNS_server | An authoritative-only DNS server

Proxy

Setting up Explicit Squid Proxy | Configuring an explicit Squid proxy server


Setting up Transparent Squid Proxy | Configuring a transparent Squid proxy server

Hosting Web/Email services on Alpine


Introduction
This information was pulled from a few other pages on the Alpine wiki (see links), along with the websites for the particular packages. It is a suggested step-by-step instruction guide.
You might be wondering: why would anyone want to run web and email services off a Linux install that runs in RAM? Good question. With VServers we can run the host in memory and do all sorts of things with the guests: put the guests on DAS in the host machine, or do RAIDed iSCSI for the guests. This way, if your disks start going bad or drop off entirely, you will most likely still be able to get at the data from a running system.
                   Guest OS here or
[Host Alpine Box] --------------------- [DAS]
      |     |
      |     | Guest OS here
      |     |
    iSCSI iSCSI

Vserver
A great install doc can be found here: Setting up a basic vserver
Notes have been added on using a guest OS other than Alpine. Take care to make sure that the /tmp directory is not listed in fstab for the vserver.
Also remember that you will have to do all firewall configuration from the host machine.
If you are running different versions of Alpine, or don't want to mess with getting the vserver to use a package stored on the disk, just point your apks somewhere else.
vi /etc/apk/apk.conf
APK_PATH=http://dev.alpinelinux.org/alpine/v1.7/apks

Web Services
There are many http servers out there. Alpine comes with a few different ones. For this guide we
installed lighttpd.
apk_fetch -u
apk_get install lighttpd openssl php

Almost everything is already taken care of by lighttpd's default configuration. Make sure to uncomment the SSL options:
ssl.engine = "enable"

ssl.pemfile = "/etc/lighttpd/server.pem"
/etc/init.d/lighttpd start

See below for generating the server.pem


Now you can start using lighttpd and making your own website. Alpine comes with phpBB and MediaWiki if you want to use those; you may have to set up an SQL database for them. The place to put your pages is /var/www/localhost/htdocs/

By default lighttpd follows symlinks correctly, so you can simply symlink to the directories where your pages live:
ln -s /home/user/htdocs /var/www/localhost/htdocs/user

Generating the Server.pem


For other services we are also going to be using SSL. An easy way to get started is to generate your own self-signed cert. The script and configuration file below are taken from Alpine's setup-webconf script.
ssl.cnf
[ req ]
default_bits = 1024
encrypt_key = yes
distinguished_name = req_dn
x509_extensions = cert_type
prompt = no
[ req_dn ]
OU=HTTPS server
CN=example.net
emailAddress=postmaster@example.net
[ cert_type ]
nsCertType = server

ssl.sh
#!/bin/sh
openssl genrsa 2048 > keyfile.pem
openssl req -new -x509 -nodes -sha1 -days 3650 -key keyfile.pem \
    -config ssl.cnf > server.pem
cat keyfile.pem >> server.pem
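The same bundle can be produced without a separate cnf file by passing the distinguished name on the command line. A sketch, assuming an openssl build that supports the -subj and -newkey options (the output layout mirrors the script above: certificate followed by key in server.pem):

```shell
# Generate a 2048-bit key and a 10-year self-signed cert in one invocation,
# then bundle them into server.pem the way the script above does.
dir=$(mktemp -d)
openssl req -new -x509 -nodes -sha1 -days 3650 \
    -newkey rsa:2048 -keyout "$dir/keyfile.pem" \
    -subj "/OU=HTTPS server/CN=example.net/emailAddress=postmaster@example.net" \
    -out "$dir/cert.pem"
cat "$dir/cert.pem" "$dir/keyfile.pem" > "$dir/server.pem"

# Confirm the subject that was baked into the cert
openssl x509 -noout -subject -in "$dir/server.pem"
```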

If you use this to generate the ssl certs for other services you may just change the req_dn information.

Mail Services
Some of the information presented here can also be found at Protecting your email server with Alpine, though that page covers an email gateway.

apk_get install postfix dovecot clamav clamsmtpd gross

Postfix
Postfix has a few things that need to be added to its configuration so that it can send email through
clamav and also so it will accept mail for domains and users.

Main.cf
vi /etc/postfix/main.cf
#/etc/postfix/main.cf
myhostname = mx.example.net
mydomain = example.net
relayhost = #blank will do dns lookups for destinations
home_maildir = Maildir/
smtpd_banner = $myhostname ESMTP #The way postfix answers.
transport_maps = hash:/etc/postfix/transport #How to route domains; the example below shows how to host more than one domain.
local_transport = virtual
virtual_mailbox_domains = example.net, bobo.net #list of hosted domains
virtual_mailbox_base = /var/spool/vhosts
virtual_uid_maps = static:1004 # uid of user to be used to read/write mail
virtual_gid_maps = static:1004 # gid of user to be used to read/write mail
virtual_alias_maps = hash:/etc/postfix/valias #aliases for each hosted domain; see below.
virtual_mailbox_maps = hash:/etc/postfix/vmap #where and what mailbox to drop the mail to; see below.
smtpd_helo_required = yes
disable_vrfy_command = yes
content_filter = scan:[127.0.0.1]:10025 # clamscan, to be configured later
smtpd_recipient_restrictions = reject_unauth_pipelining,
    permit_sasl_authenticated, permit_mynetworks, reject_invalid_hostname,
    reject_non_fqdn_hostname, reject_non_fqdn_sender,
    reject_non_fqdn_recipient, reject_unknown_sender_domain,
    reject_unknown_recipient_domain, reject_unauth_destination,
    check_policy_service inet:127.0.0.1:5525, permit
smtpd_data_restrictions = reject_unauth_pipelining, permit
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_tls_cert_file = /etc/ssl/postfix/server.pem
smtpd_tls_key_file = $smtpd_tls_cert_file

Master.cf
Settings in master.cf for virus/spam scanning. Add these to the end of the file. They are similar to those found in Protecting your email server with Alpine.
scan            unix  -       -       n       -       16      smtp
    -o smtp_send_xforward_command=yes
    -o smtp_enforce_tls=no
127.0.0.1:10026 inet  n       -       n       -       16      smtpd
    -o content_filter=
    -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
    -o smtpd_helo_restrictions=
    -o smtpd_client_restrictions=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
    -o mynetworks_style=host
    -o smtpd_authorized_xforward_hosts=127.0.0.1/8

Valias
#/etc/postfix/valias
postmaster@example.net    user1@example.net
hostmaster@example.net    user2@example.net
hostmaster@bobo.net       user1@example.net
postmaster@bobo.net       user2@bobo.net

Vmap
#/etc/postfix/vmap
user1@example.net    example.net/user1
user2@example.net    example.net/user2
@example.net         example.net/catchall #everyone else that doesn't match a rule above

Transport
#/etc/postfix/transport
example.net    virtual:
bobo.net       virtual:
foo.net        smtp:1.2.3.4 #send foo.net through this smtp server
*              :            #everything else goes through the relayhost rule

Once these files are created you will need to make them into .db files:
postmap /etc/postfix/valias
postmap /etc/postfix/transport
postmap /etc/postfix/vmap

Dovecot
Dovecot on Alpine will only do imap and imaps services for now.
Most of dovecot is already configured for imap. You may have to generate the key as shown above; just change the cnf file a little to say something about mail.domainname.
ssl_cert_file = /etc/ssl/dovecot/server.pem
ssl_key_file = /etc/ssl/dovecot/keyfile.pem
mail_location = maildir:/var/spool/vhosts/%d/%n
valid_chroot_dirs = /var/spool/vhosts
passdb passwd-file {
args = /etc/dovecot/passwd
}
userdb passwd-file {
args = /etc/dovecot/users
}
#section for postfix sasl auth
socket listen {
client {
path = /var/spool/postfix/private/auth
user = postfix
group = postfix
mode = 0660
}
}

To generate the passwords you can use the dovecotpw command.


dovecotpw -s MD5-CRYPT
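If dovecotpw is not at hand, a hash of the same MD5-CRYPT scheme can be produced with openssl. A sketch, assuming openssl is installed; the salt tz5sbjAD is simply the one visible in the example hashes below:

```shell
# MD5-crypt ("$1$salt$...") hash of the password test123 with a fixed salt.
# dovecotpw -s MD5-CRYPT emits the same scheme, but with a random salt.
hash=$(openssl passwd -1 -salt tz5sbjAD test123)
echo "user1@example.net:$hash"
```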

The hash below can be used for the password test123


The /etc/dovecot/passwd file should look like this:
user1@example.net:$1$tz5sbjAD$Wq9.NkSyNo/oElzFgI68.0
user2@example.net:$1$tz5sbjAD$Wq9.NkSyNo/oElzFgI68.0

The /etc/dovecot/users file should look like this:


user1@example.net::1004:1004::/var/spool/vhosts/example.net/:/bin/false::
user2@example.net::1004:1004::/var/spool/vhosts/example.net/:/bin/false::
The format is: user@domain::uid:gid (as found in virtual_uid_maps)::location of maildir:shell::
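Lines in this exact format can be generated with a small loop. A sketch, assuming the uid/gid 1004 from virtual_uid_maps above and the example.net maildir layout:

```shell
# Emit one users-file line per virtual user, matching the format shown above
for u in user1 user2; do
    printf '%s@example.net::1004:1004::/var/spool/vhosts/example.net/:/bin/false::\n' "$u"
done > users
cat users
```

On a real system, redirect the output to /etc/dovecot/users (append with >> if the file already has entries).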

Clamsmtpd
Configure according to instructions Protecting your email server with Alpine

Gross
Configure according to instructions Protecting your email server with Alpine

Final Steps
Start the services and make sure to rc_add them
/etc/init.d/postfix start
rc_add -k postfix