Network Security



Michael Sweeney

Network Security Using Linux
by Michael Sweeney. Copyright 2005 Michael Sweeney. All rights reserved. Printed in the United States of America.

Published by PacketPress, 4917 Leeds Ave, Orange, CA 92867.

PacketPress books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles. For more information, contact our sales department at 1-714-637-4235.

Editor: Jeanne Teehan

Technical Editor:

Cover Designer: Amanda Sweeney

Printing History: January 2005 First Edition.

While every precaution has been taken in the preparation of this book, the publisher and the author assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.

"The idea is to try to give all the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another." Richard Feynman

Table of Contents
Network Security using Linux
Credits.......................................................X
Preface.......................................................xii
Who is this book for?.........................................xiii
How the book was written......................................xiii

Chapter 1..........................................................................................1
TCP/IP Fundamentals...........................................1
Layers........................................................2
TCP/IP Addressing.............................................3
    Subnetting with CIDR......................................6
    Subnetting with VLSM......................................7
TCP/IP Version 6..............................................8
    IPv6 and the Kernel.......................................11
Constructing Packets..........................................14
TCP Communication.............................................16
    Any port will do..........................................18
What does a router really do?.................................18
    Open Source Linux Routers.................................20
Is a Linux router secure?.....................................22
    Shutting off the unwanted services........................22

Chapter 2........................................................................................24
Firewalling the Network.......................................24
Isn't a router a firewall?....................................26
IPv6 and IPTables.............................................28
    Patch-O-Matic.............................................29
Firewalling 101...............................................31
Papers Please.................................................34
The Penguin Builds a Wall.....................................34



Bastille Linux................................................36
Free is good..................................................37
    IPCOP.....................................................38
    Firestarter...............................................40
    Shorewall.................................................41
    Web Based Tools...........................................43
Commercial Firewalls..........................................44
    Astaro....................................................44
    Smoothwall................................................46
    Gibraltar.................................................47
    Resources.................................................50

Chapter 3........................................................................................52
IP Tables, Rules and Filters..................................52
    Chain Syntax..............................................53
    Rules.....................................................53
    Building of a Basic Rule..................................54
        Demonstrating rules...................................55
    Advanced Rules............................................56
        Matching Connection States............................56
        Configuring NAT.......................................57
        Defending Against Basic Attacks.......................59
        Examining The Rules...................................60
        Strengthen Your Rules with ROPE.......................60
    Your Basic Firewall.......................................62
    Firewall Testing..........................................63
        Firewall Script.......................................65
    Resources.................................................72

Chapter 4........................................................................................73
Updating Linux................................................73
RPMs..........................................................73
Red Hat Up2date...............................................81



YUM...........................................................84
APT...........................................................86
What is a kernel update?......................................87
How do I tell which kernel I have installed?..................88
How do I update the kernel?...................................88
Alternative Security Kernels..................................90
    Keeping the LID on........................................91
    Resources.................................................92

Chapter 5........................................................................................93
Encryption or protecting your Data............................93
What is encryption?...........................................93
What is this alphabet soup?...................................94
How does encryption work?.....................................95
What are keys all about?......................................96
Why do I need encryption?.....................................98
How do I use GPG?.............................................98
    Managing keys.............................................106
        Revoking a Key........................................106
        Key Signing Parties...................................107
    Additional Notes About GnuPG..............................108
Securing Data with SSH........................................109
What is OpenSSH?..............................................109
The basics of SSH.............................................111
    What else can SSH do?.....................................112
    SSH Port Forwarding.......................................115
What is a X.509 Certificate?..................................118
    Make Your Own Certificates................................118
    Are You Certified?........................................119
    How to use the Certificate................................125
Secure Socket Layer...........................................128
SSL and Apache................................................128




Chapter 6......................................................................................130
Detecting Intruders...........................................130
Deploying an IDS..............................................131
What is Snort.................................................132
Building a Sensor.............................................133
Secure Communications.........................................138
Making the Pig Fly............................................139
    Installing MySQL..........................................139
    Installing Snort..........................................144
    Snort Configuration.......................................146
    Syslog Notes..............................................147
    Configuring Snort's New Database..........................149
    Starting the Pig..........................................152
    Apache....................................................153
    Installing PHP............................................154
    Snort on ACID.............................................156
    Securing the Pig..........................................160
        Multiple NIC cards....................................161
    Rules? What Rules?........................................162
        Updating the Rules....................................166
Deploying Snort...............................................167
    Tapping the network.......................................168
    Where to place Snort......................................171
Managing Snort................................................171
    Webmin....................................................171
    Snort Center..............................................173
    Resources.................................................173

Chapter 7......................................................................................174
Virtual Private Networks......................................174
    IPsec.....................................................174



    L2TP......................................................176
    PPTP......................................................177
VPN Utilities.................................................177
    PPTP Client...............................................177
    OpenSwan..................................................181
    Installing and Configuring Openswan.......................183
        Certificates and Keys.................................184
        Configuration.........................................186
    Resources.................................................188

Chapter 8......................................................................................190
Logging for Fun and Profit....................................190
    NTP for Linux.............................................192
    Monitoring and Analyzing the Logs.........................195
        What to Look for......................................202
        Tuning Syslog.........................................202
        Rotating Logs.........................................204
        Syslog Improved.......................................206
        Securing Syslog Traffic...............................208
        Windows to Syslog Converters..........................208
        Configuration Guides..................................208
        Sawmill...............................................209
        Logwatch..............................................211
        Swatch................................................213
        LogSurfer.............................................216
        Nagios................................................217
    Resources.................................................219

Chapter 9......................................................................................220

Appendix 1....................................................223
INDEX.........................................................226



This book is not the work of a single person; it is the work of a group of people who collectively added to the sum of the book. No one person knows everything there is to know about Linux security, but I have endeavored to blend my own experience and knowledge with new knowledge that I acquired from others who unselfishly shared their own during the writing of this book. I would like to thank a few people and organizations that provided support, knowledge and software for this book. Many times, they also provided the motivation any author needs to get through the arduous task of trying to write about a complex subject and turn it into something that can help the reader grow their own pool of knowledge. So without any further ado, I am pleased to thank these folks and groups publicly and offer my sincerest gratitude.
• Linus Torvalds, for without Linus, there would not be a penguin in our hearts
• The development teams for OpenOffice and StarOffice
• Martin Roesch, who created Snort and then gave it to the world
• The development team who birthed the Apache web server
• Rasmus Lerdorf, creator of PHP
• Astaro, for donating time and software to the project
• Smoothwall, for donating time and software to the project
• Chris Loweth, for his great free site for building firewall rules
• Bruce Timberlake, for the Building a LAMP server document


And thank you to a host of programmers, Linux enthusiasts and other unsung heroes on the Internet answering all manner of Linux-related questions for free in the spirit of Open Source; for you, a simple thank you just does not seem enough. And not least of all, thank you to my long-suffering family. As any writer knows, these projects take on a life of their own and become all-consuming. My family was very supportive and understanding when Dad was locked in the home office for many, many hours at a stretch. For this support, I offer a very humble and heartfelt "Thank you" to Jeanne, Amanda and Sara.

Mike Sweeney "Abu el Banat"


p. X

Technical Details for Creating this Book
Open Source Applications and OS Used
• Red Hat 9
• Fedora Core 1
• Fedora Core 2
• SuSE 9.1
• Slackware 10
• FreeBSD
• OpenOffice
• Opera
• Mozilla Firefox
• Thunderbird Email

Commercial Applications used
• VMware
• Virtual PC
• Snagit
• PaintShop Pro
• UltraEdit-32
• Adobe Acrobat 6
• Pdf995
• F-Secure SSH Client
• X-Win32

• Apple iPod
• Special Extended DVD version of the Lord of the Rings
• Diet Coke and illy Espresso



“Security against defeat implies defensive tactics; ability to defeat the enemy means taking the offensive.” Sun Tzu


Even though these words were written by a general who lived in China around 500 B.C., they still apply today just as much as they did then. Security for our networks and data requires us to take the offensive by installing strong defenses on the network perimeter, using all available tools and by studying security methodology to know where our weaknesses might be. If we have done our due diligence and we keep up with the newest security threats, we have a much better chance of denying the black hats of the world the victory of a successful intrusion. Please note that I said we have a better chance, not that we can absolutely deny victory to the black hats. There is no one hundred percent foolproof security, aside from turning off the system. And even that is not one hundred percent, as someone could physically walk off with the device. All we can hope to accomplish is to force the black hats to go bother someone else who is less prepared than we are.

Security is an ongoing effort, one where we cannot just apply our patches, tighten up the services, firewall off the world and call it done. Security constantly evolves to reflect the constantly changing face of the enemy. As one side gets a leg up on the opposition, the other side will find a way to get back on top, and so it goes on and on. What was an effective defense last year is not up to the task this year, as the threats have again evolved. Part of our job is to constantly stay on top of these threats and adjust our defenses to protect against them. But in all cases there is a certain level of protection that has to be taken regardless of the threat level. That baseline level of defense is what this book strives to give you: the knowledge to get to that first rung of defense. No matter what we do later, we always have to do a few things the same way, such as identifying services that need to be shut off, configuring a basic firewall, using basic intrusion detection and providing some type of logging. These basics are the sum of this book. Once you have read and worked through this book, you will have a solid understanding of the basics of Linux security and how to build on them in the future to adapt to future threats. Part of our threat prevention is to use resources like SANS and their Top 20 List of the Most Critical Internet Security Vulnerabilities. This document is an excellent place to start identifying security threats to your network and systems. It covers both the Windows world and Unix/Linux platforms, as the two are fairly well mixed in today's world. Other excellent resources provide, along with information on current threats, articles and white papers discussing how to defend against the threats and how to use a variety of tools to aid you with the security defense of your network.

Another venerable security site, established in 1988, has been giving security professionals and other interested parties advisories and security advice ever since. The web site has best practices, defense measures, fixes, training and other very useful information. Other sources of information can be found on some of the mailing lists and newsgroups, like bugtraq and others. In the end, security is of our own making and up to us to implement. The fact that you are reading this book says that you are one of the few, the brave, the proud who take an active interest in network security and are willing to learn what it takes to get it right.

Who is this book for?
This book is not written for the newbie Linux administrator or user. While a reasonably competent Linux newbie could follow along, the book is really for the intermediate Linux administrator who has inherited the network security task, or the Windows administrator who has bowed to the inevitability of Linux joining their shop. A certain level of skill and knowledge is assumed, but not too much. Many of the applications and scripts shown in this book are not simply "point and click" packages to install and will require some "fiddling" to get them to work. The style of presentation is to supply the information needed to install the software along with some of the theory behind the application. The detail is generally an overview of the technology, as virtually all of the subjects in this book have whole books dedicated to each application. What I have tried to do is to bring enough knowledge together in a single book to serve as a guide for the initial configuration and operation.

How the book was written
The sample configurations and installations are relatively distribution nonspecific unless I have otherwise noted. The one standout is the BSD family, which is not a "Linux" distribution but a highly respected operating system in the world of network security. I have included a small amount of BSD notes and some resources to draw from; to fully detail the usage of BSD in securing your network would be an entire book in itself. Virtually all of the information contained in this book is available off the Internet, but it can be incomplete, with key assumptions made by the authors, poorly written, outdated or just plain wrong. Each installation I show was actually performed by me on Red Hat, Fedora, SuSE, Slackware or some other distribution in order to get the details correct. There may be slight differences due to where libraries reside or where the package places files, but overall they should be correct. I have tried to supply screen shots and as much code as reasonable to help guide you in installing the various applications yourself. I have supplemented the chapters with links and resources throughout, so you should be able to go as far as you wish with minimal effort in researching the next steps. This book is not intended to be the final word on any of the various software packages presented within these pages but a guide to get you up and running quickly.



Chapter 1
“The pessimist sees difficulty in every opportunity. The optimist sees the opportunity in every difficulty.” Winston Churchill

TCP/IP Fundamentals
To understand how to implement much of what is referred to as network security, we need to understand what Transmission Control Protocol/Internet Protocol (TCP/IP) is and how it works. This knowledge can be crucial to configuring network security so you can protect what you think you are protecting. TCP/IP is a suite of protocols that allows computers to share data by using logical network connections, which are spelled out in a series of Request For Comments documents, or RFCs. TCP/IP is not just a single protocol, although many do use the name generically and refer to TCP/IP in the singular form. Applications like SMTP, FTP, Telnet and others are all part of the TCP/IP suite, and each has its own RFC specifying how the protocol will work. TCP/IP addressing is also part of the RFCs; RFC 1918 gives us the rules for private IP addressing. In Table 1.1 below we have a list of the RFCs that define TCP/IP.

Table 1.1 RFCs that make up TCP/IP


Protocol            RFC
FTP                 RFC 959
SMTP                RFC 821, 822
Telnet              RFC 854
SNMP                RFC 1098, 1157, 1212
TCP                 RFC 793
UDP                 RFC 768
ARP                 RFC 826
RARP                RFC 903
IP                  RFC 791
ICMP                RFC 792
TFTP                RFC 1350
IP over Ethernet    RFC 894

Along with the various RFCs that make up TCP/IP, we also need to learn about the two layer models that are used. The first model, the OSI seven layer model, breaks the network into seven layers, and each layer has a certain set of functions assigned to it. There is a second model, called the DoD four layer model, which is what TCP/IP is based on. These layers do not map directly to each other, and we will see that in a following section. We have to understand the theory of the TCP/IP architectural model so we understand what an IP address is, what a subnet mask is and how a subnet mask functions. And along with the IP addressing, we need to understand what ports are and how ports function in the TCP/IP architecture. We have to know the basic construction of a packet because in today's world of network security threats, specially crafted packets can be sent against our servers and routers which can bring our network down. We have to know what makes a "good" packet and what makes a good packet go bad.

It has been said that it is not important to know the Open Systems Interconnect (OSI) seven layer model or the four layer TCP model. Those that say this really do not understand what they are saying, nor do they understand how TCP/IP works. Knowing how something SHOULD behave is the first step in understanding why something is NOT behaving as expected. Sometimes a clue to what is happening to your network is based not on symptoms that you can see but on understanding how the network rules can be abused. For example, if an intruder launches a poisoned ARP attack against one of your switches, to effectively combat that attack you have to know two things: one, that ARP is layer two, and two, that ARP is part of normal TCP/IP based communications. Since ARP is part of the TCP/IP suite, you need to know why we use ARP and how ARP is used normally. This knowledge will help guide us in our defense against the attack. The OSI model is comprised of seven layers, as we see in Figure 1.1 below.

Figure 1.1 The OSI seven layer model

Layer 7  Application   Telnet, SMTP, NFS, Rlogin, DNS
Layer 6  Presentation  Data Compression, Encryption
Layer 5  Session       RPC, Named Pipes
Layer 4  Transport     TCP, SPX, NetBEUI
Layer 3  Network       RIP, ICMP, OSPF, IPX
Layer 2  Datalink      802.3, 802.5, EC, Flow Control
Layer 1  Physical      IEEE 802, Physical

The TCP or DoD model is comprised of four layers that do not exactly map to the seven layers of the OSI model. We can see the comparison in Figure 1.2.

Figure 1.2 Comparing the OSI layers to the DoD layers

OSI Layer 7  Application   ->  DoD Layer 4  Application
OSI Layer 6  Presentation  ->  DoD Layer 4  Application
OSI Layer 5  Session       ->  DoD Layer 4  Application
OSI Layer 4  Transport     ->  DoD Layer 3  Transport
OSI Layer 3  Network       ->  DoD Layer 2  Internet
OSI Layer 2  Datalink      ->  DoD Layer 1  Network Interface
OSI Layer 1  Physical      ->  DoD Layer 1  Network Interface

We can see that the OSI layers 7 to 5 map to the DoD layer 4. OSI layer 4 maps to DoD layer 3, OSI layer 3 maps to DoD layer 2, and OSI layers 2 and 1 map to DoD layer 1. This is all just a bit confusing, but if you remember the OSI model, things will work out fine.

TCP/IP Addressing
TCP/IP is the standard that most systems nowadays use to communicate with other systems. TCP/IP addressing is used on Local Area Networks (LANs), Wide Area Networks (WANs) and all kinds of private networks. IP addressing is based on 32 bits that represent the IP address network ID and the host ID. The address can be in binary, hex or, more commonly, dotted quad notation, which is what most of us know and use on a daily basis. Dotted quad uses four groups of numbers with a dot or period separating them. A second group of bits, up to 32 bits in length, is used as a mask over the first set to provide the network ID and host separation. There are 3 major classes of TCP/IP addresses, called Class A, Class B and Class C. There are two more classes, called D and E, but for our discussion they do not apply. In RFC 1166, each class is assigned a specific range of IP addresses, which we see in Figure 1.3.

Figure 1.3 Listing of the 3 main IP address classes

Class A - 1.0.0.0 to 126.255.255.255
Class B - 128.0.0.0 to 191.255.255.255
Class C - 192.0.0.0 to 223.255.255.255

Within each Class A network there are a bit over 16 million addresses available. For a Class B network, there are 65,534 addresses available, and in a Class C network, we have 254 addresses available. The number of addresses available is based on the subnet mask. There are some exceptions that are reserved for what are referred to as private networks, and one class A network that is dedicated for loopbacks. The reserved IP network for loopbacks is the 127.0.0.0 network and, traditionally, 127.0.0.1 is the loopback address. The actual loopback address range is from 127.0.0.1 to 127.255.255.254. For private or non-Internet-routed networks, there is a range of IP networks within each class that is shown in Figure 1.4.
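Python's standard `ipaddress` module knows these reserved ranges, so they are easy to check for yourself. A minimal sketch (the addresses are arbitrary examples, not taken from this book):

```python
import ipaddress

# 127.0.0.0/8 is the network reserved for loopback traffic
print(ipaddress.ip_address("127.0.0.1").is_loopback)   # True

# RFC 1918 private ranges are flagged by is_private
print(ipaddress.ip_address("10.1.2.3").is_private)     # True
print(ipaddress.ip_address("8.8.8.8").is_private)      # False

# A classful Class C sized network holds 254 usable host addresses
net = ipaddress.ip_network("192.0.2.0/24")
print(net.num_addresses - 2)                           # 254
```

The subtraction of 2 accounts for the network and broadcast addresses, which cannot be assigned to hosts.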


Figure 1.4 Private IP network ranges

Class A private address range: 10.0.0.0 to 10.255.255.255
Class B private address range: 172.16.0.0 to 172.31.255.255
Class C private address range: 192.168.0.0 to 192.168.255.255

Up to this point, we have been talking about something called Classful networks. Actually, there are two types of subnets, Classful and Classless. Classful or Classless refers to how the subnet mask is used. Traditional or Classful subnet masks are shown in Figure 1.5.

Figure 1.5 Classful Subnet Masks

Class A 255.0.0.0
Class B 255.255.0.0
Class C 255.255.255.0

We can see that the subnet mask boundaries fall on the natural end of an octet. The Classless subnet mask does not follow the natural boundaries like the Classful masks do. These subnets can borrow bits from what normally would be host IDs to further break down the networks into smaller units. This allows us not to waste IP addresses when we might need more network IDs than host IDs. In Figure 1.6 we see a sample of what a Classless subnet mask might look like in dotted notation.

Figure 1.6 Sample of Classless subnet masking

255.0.0.0
255.224.0.0

We can easily see that the first line is a class A subnet mask; in the second line it has been extended with the addition of the 224 in the second octet. This is where we borrow bits to break the single class A network into multiple networks with smaller host ranges. You do not get something for nothing, and working with IP addressing is no different. If you borrow bits to make more network IDs, you give up some number of host IDs you can have on each subnet. There is a shorthand way to state the subnet mask called slash notation. Instead of writing out the long 255.255.255.0 for a class C mask, we would just say /24. The 24 is derived from the fact that we have 3 groups of 8 bits for the mask, and 3 x 8 is 24. If the mask was 255.255.255.252, our slash notation would be /30. The slash 30 comes from taking the normal class C mask of 24 bits and adding the next 6 bits used for the mask.
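The arithmetic behind slash notation can be verified with the `ipaddress` module as well. A short sketch (the network addresses are illustrative examples only):

```python
import ipaddress

# A /24 carries the familiar class C mask and 254 usable hosts
net24 = ipaddress.ip_network("192.168.50.0/24")
print(net24.netmask)             # 255.255.255.0
print(net24.num_addresses - 2)   # 254

# A /30 borrows 6 more mask bits, leaving only 2 usable hosts
net30 = ipaddress.ip_network("192.168.50.4/30")
print(net30.netmask)             # 255.255.255.252
print(net30.num_addresses - 2)   # 2
```

This illustrates the trade-off in the text: every bit borrowed for the network ID halves the number of host addresses available on each subnet.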
When we work with IP addressing, we are using both powers of 2, since we are dealing with binary numbers, and a boolean function called "ANDing", where we take a single bit and "AND" it with a second bit. If both of the bits are set to 1, then the result is 1. If either of the bits is 0, then the result is 0. So to


work out our network and host IDs, we take the IP address and AND it with the subnet mask, as we see here in Figure 1.7.

Figure 1.7 Sample IP address in binary

Class C IP address in bits:    11000000.10101000.00110010.11110000
                               XXX     .XXX     .XXX     .240
Classful Class C mask in bits: 11111111.11111111.11111111.00000000
                               255     .255     .255     .0

There are two ways to tell that we are working with a class C address. The first way is to examine the three leftmost bits of the IP address. In this case, they are 110, which says this is a class C network. If the leftmost bits were 10, then it would be a class B address, and if the leftmost bit was a 0, then it would be a class A network. In Figure 1.8 we see a chart that shows the various classes and the associated leading bits that define which class the network is.

Figure 1.8 Class in binary

0   Class A
10  Class B
110 Class C

The alternative way is to look at the subnet mask. In a classful network, the mask tells us that we are working with a Class C address since the mask will mask out the first three octets. The fourth octet is our actual host address, which is 11110000 in binary. But most humans need something other than binary to work with, so we need to convert the binary to decimal. Binary is base 2, unlike the normal human decimal system, which is base 10. In binary we only have ones and zeros to work with, and to convert our IP address to a more easily understood base 10 number, we just need to perform some simple addition. Each bit stands for a value in base 2. Counting from right to left, we start with the number one and double it, which is two. Then we take the two and double it to four, and so on, until we count eight bits. But we have to remember that when we convert the IP address, the 1 goes to the far right, or Least Significant Bit (LSB), and the 128 goes to the left, as shown in Figure 1.9.

Figure 1.9 Setting up the base 2 values for eight bits of an IP address

128  64  32  16  8  4  2  1

When the bit in a column is set to 1, we count that column's value. If the column is 0, we skip that number. So in our example shown below, we can see the first 4 bits from left to right are set to 1:

1  1  1  1  0  0  0  0
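The binary-to-decimal conversion and the AND operation can both be checked with a few lines of Python, using the binary octets from Figure 1.7:

```python
# Convert the binary octets from Figure 1.7 to decimal, then AND the
# address with the classful class C mask to recover the network ID.
address_bits = "11000000.10101000.00110010.11110000"
mask_bits    = "11111111.11111111.11111111.00000000"

address = [int(octet, 2) for octet in address_bits.split(".")]
mask    = [int(octet, 2) for octet in mask_bits.split(".")]

print(address)                                 # [192, 168, 50, 240]
print([a & m for a, m in zip(address, mask)])  # [192, 168, 50, 0]
```

The ANDed result is the network ID: every host bit is masked off to zero.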


This means we will add 128, 64, 32 and 16 for a total of 240.

Subnetting with CIDR

The slash notation /24 is a shorthand way to say 255.255.255.0 is our mask. Since we know the IP address is a class C and the mask is classful, our network ID falls within the class C range. This is a classful address, since the subnet mask uses 24 bits and falls on the natural octet boundary. But what if the subnet mask had used 25 bits, would it still have been a class C address? The answer is yes, but it is now a CIDR, or Classless Interdomain Routing, mask. CIDR will break the single class C address into smaller segments with fewer hosts per network segment. CIDR is based on splitting the octet that would normally hold the host bits into a few more networks with fewer hosts per network. This allows us to tailor our number of subnets and number of hosts to our requirements and not waste addresses. A very common application of this is on WANs, where you want only two IP addresses available, one for each end of a link, but you need to give several links IP addresses. Instead of using several class C subnets and wasting 252 addresses per link, we can split a single class C network into several subnets by using a classless subnet mask. To do this, you take the single class C IP network and apply a /30, or 255.255.255.252, mask. When we do the math, we see that we now have 64 subnets available and each subnet has 4 addresses: one for the wire, one for broadcast and 2 for hosts, or in this case, interfaces. In a second example, if we had a class C network with 254 hosts available but we needed two networks with only 100 hosts per network, a possible solution would be to use a /25, or 255.255.255.128, mask. This gives us two subnets and 126 hosts per subnet. In Table 1.2, we see a chart of the number of subnets and hosts per subnet for a Class C network.

Table 1.2 Subnetting Chart for Class C Network
Subnet Bits   Host ID Bits   Subnets   Hosts per Subnet   Slash Notation
0             8              1         254                /24
1             7              2         126                /25
2             6              4         62                 /26
3             5              8         30                 /27
4             4              16        14                 /28
5             3              32        6                  /29
6             2              64        2                  /30

This chart allows us to easily see how the number of available hosts will diminish as the number of subnets increases.
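The rows of the chart follow directly from powers of two: borrowing n host bits gives 2^n subnets, each with 2^(8-n) - 2 usable hosts (subtracting the wire and broadcast addresses). A quick Python sketch reproduces the chart:

```python
# Reproduce the class C subnetting chart: for each number of borrowed
# host bits, compute the subnet count and usable hosts per subnet.
for subnet_bits in range(7):
    subnets = 2 ** subnet_bits
    hosts = 2 ** (8 - subnet_bits) - 2   # minus wire and broadcast
    print(f"/{24 + subnet_bits}: {subnets} subnets, {hosts} hosts each")
```

The first line printed is the unsubnetted /24 (1 subnet, 254 hosts) and the last is the /30 used for WAN links (64 subnets, 2 hosts each).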


Subnetting with VLSM

Variable Length Subnet Masking, or VLSM, is an interesting twist on subnetting a network. It starts out almost the same as CIDR but gets more complicated, as we shall see. With VLSM, we take an IP address, subnet it, and then take one of the resulting subnets and subnet it again. As you can imagine, if you do not do your homework on the architecture of your subnets first, it is very easy for VLSM to get out of hand, and you will then have a real mess. About the only real difference is that CIDR was aimed at the ISPs and such, while VLSM is used on private networks. The idea is the same: subnetting a subnet to get more granular control over the IP networks and address ranges.

Using a single class C network as our example, we will start with the default /24 mask. This gives us 1 subnet and 254 hosts (-1 for broadcast and -1 for the wire address of 0). But we need two subnets and we only have one network, so what to do? We will borrow one of the eight bits that make up the host address and use a /25 mask instead:

/25
 |-- (subnet A)
 |-- (subnet B)

Now we have two subnets to work with. We still need to split one of the two subnets because we only need 28 hosts per subnet there. So we can take subnet A and borrow a few more bits, two to be precise, and apply a /27 mask to subnet A:

(subnet A)
 |-- /27
 |-- /27
 |-- /27
 |-- /27

Subnet A now yields four /27 subnets with thirty hosts each, while subnet B is left untouched. For VLSM to work, we have to use routers and other network equipment that understand VLSM, and make sure we get our routing tables correct by using a VLSM-aware routing protocol like RIPv2, OSPF, BGP4 or EIGRP. Each of these protocols will send the classless subnet information along with the IP addressing.
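The subnet-of-a-subnet idea can be sketched with Python's ipaddress module. The text uses a generic class C network, so 192.168.1.0/24 here is our own stand-in, chosen from the private range:

```python
import ipaddress

# Stand-in network for the example (the text uses a generic class C).
net = ipaddress.ip_network("192.168.1.0/24")

# First split: borrow one host bit to get two /25 subnets, A and B.
subnet_a, subnet_b = net.subnets(new_prefix=25)

# Second split: borrow two more bits from subnet A to get four /27s.
small = list(subnet_a.subnets(new_prefix=27))
print(len(small))                    # 4
print(small[0].num_addresses - 2)    # 30 usable hosts per /27
```

Subnet B keeps its /25 mask and 126 usable hosts, which is exactly the "different masks on different subnets" situation that VLSM-aware routing protocols must carry in their updates.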


TCP/IP Version 6
There is a new version of TCP/IP addressing being used, called IP Version 6 or IPv6. Version 6 originated several years ago due to the apparent upcoming scarcity of TCP/IP addresses in Version 4. But, as Version 6 was being developed, new technologies such as Network Address Translation (NAT) and Port Address Translation (PAT) caused CIDR to become much more common. NAT allowed us to have a network use one IP numbering scheme internally but present a different IP scheme to the outside world. NAT and PAT also gave us the ability to take a single IP address and "share" it among a group of private IP addresses, which is a great way to conserve address space. This is also referred to as "overloading", and it works by assigning a port to each connection made using the shared IP address. In Figure 1.10 we see a Cisco router using NAT and PAT to overload a single IP address for the LAN.

Figure 1.10 Cisco Router using NAT and PAT
nemesis#show ip nat trans
Pro   Inside global      Inside local       Outside local      Outside global
tcp
tcp
tcp
udp
tcp
tcp
tcp
tcp
nemesis#
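The overloading behavior shown in Figure 1.10 can be sketched as a toy translation table in Python. This is only a model of the idea, not Cisco's implementation; the outside address (from the documentation range 203.0.113.0/24), the inside addresses and the starting port are made up for illustration:

```python
import itertools

class PatTable:
    """Toy port-address-translation table: many inside hosts share one
    outside IP address, distinguished by the port the router assigns."""

    def __init__(self, outside_ip: str, first_port: int = 8200):
        self.outside_ip = outside_ip
        self.ports = itertools.count(first_port)  # next port to hand out
        self.table = {}  # (inside_ip, inside_port) -> (outside_ip, port)

    def translate(self, inside_ip: str, inside_port: int):
        key = (inside_ip, inside_port)
        if key not in self.table:  # new connection: assign the next port
            self.table[key] = (self.outside_ip, next(self.ports))
        return self.table[key]

pat = PatTable("203.0.113.5")                 # hypothetical outside address
print(pat.translate("192.168.1.10", 3004))    # ('203.0.113.5', 8200)
print(pat.translate("192.168.1.11", 3004))    # ('203.0.113.5', 8201)
```

Note that both inside hosts can use the same source port; the router keeps them apart by the outside port it assigned to each connection.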

We can see where the router assigns a port such as 3004 to an internal IP address and translates the internal address and port to an outside connection with a port number such as 8200. The outside world will never see the internal IP address; it will only see the outside assigned IP address. This was one technology that stretched the lifespan of TCP/IPv4 so that Version 6 could be developed. IPv6 is said to provide enough IP addresses for the next 30 years; time will tell how optimistic this projection is. But IPv6 is the wave of the future and we have to

know it and understand it in order to use it effectively and protect the networks using it. The full details of IPv6 can be found in RFC 2373, which is the "IP Version 6 Addressing Architecture", and RFC 2460, the "Internet Protocol, Version 6 Specification". There are other RFCs for IPv6 that detail items like how DHCP and DNS work under IPv6. We need to remember that IPv6 is not simply a change of IP addressing but a complete redesign of how TCP/IP works. A partial list of the designers' goals for IPv6 is:
• Larger address space
• Better management of address space
• Eliminates stopgap measures such as NAT
• Easy or easier administration of TCP/IP
• Efficient routing
• Better support for multicasting
• Better support for security
• Ensure compliance with the Mobile IP standard

The designers of IPv6 came very close to, or hit, the mark on almost all of these goals. Some of the new features and changes are listed below.
• IPv6 addresses are 128 bits in length instead of the old 32 bits
• Address space is now hierarchical
• Support for multicast is improved
• New support for "anycast" addressing
• We have an auto-configuration ability with IPv6
• The datagram has been redesigned and given new capabilities
• We have support for Quality of Service (QoS) for multimedia and other apps
• Fragmentation and reassembly have been significantly improved over IPv4
• Modern routing support for easier expansion
• Transition support to help ease the path from IPv4 to IPv6
• Streamlined header format
• The CRC checksum has been eliminated


• The IPv4 field "Internet Header Length" is removed since headers are a fixed 40 bytes
• The default MTU has been increased from 576 bytes to 1280 bytes
• Only the source node can fragment a packet
• IPv4 can be "tunneled" over IPv6 links
• The mandatory use of IPsec in IPv6

There are many important aspects of IPv6 that have not changed from IPv4. IPv6 still resides in the Network Layer, or Layer 3, as does IPv4. IPv6 addresses are still assigned to an interface. A normal node like a PC would have a single IP address, and routers would normally have more than one address based on their requirements. There are still public and private IP addresses; however, in IPv6 they are used differently than in IPv4. And the core function of IP addressing is still the same: the identification of an interface and the routing of packets. Since the thrust of IPv6 was to have more addresses because IPv4 was fast running out, let's take a look at the new addressing design of IPv6. In IPv4 we used 32 bits, but in IPv6 we use 128 bits. These are eight groups of 16 bits each instead of the four 8-bit octets. The math is pretty interesting. Since we are still using base 2, our total address count is 2^128. If we were to do the math (I used Google to get the exact number), it works out to be 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. Needless to say, it will be a while before we use all these addresses up. IPv6 addresses would be difficult and awkward to use at best if we insisted on the same dotted notation that we use with 32-bit IPv4. The IPv6 address, consisting of eight 16-bit groups, is represented in hexadecimal instead of decimal, as we can see in the sample address below:

B223:BB34:0000:0000:0000:0099:BB78:5079

This long number would be pretty tough to remember or write down, so we have some shortcuts that can help. Leading zeros in a group are optional, and since IPv6 addresses will often have successive groups of zeros, we can use a double colon (::) to represent the zeros. We can use the :: at the beginning, the middle or the end of an address, but only once.
Our sample IP address can be rewritten like this:

B223:BB34::99:BB78:5079

Please note that not only can we drop the three groups of zeros, but we can also drop the two leading zeros that preface the sixth group, 0099. The leading bits, or prefix, of an IPv6 address determine the specific type of address it is. Just like in IPv4, there are specific addresses for specific tasks. We have a chart of the common types in Table 1.3:

Table 1.3 IPv6 Address Architecture

Allocation            Binary Prefix
Reserved              0000 0000
Unassigned            0000 0001
NSAP Allocation       0000 001
IPX Allocation        0000 010
Unassigned            0000 011
Unassigned            0000 1
Unassigned            0001
Unassigned            001
Provider Unicast      010
Unassigned            011
Geographic Unicast    100
Unassigned            101
Unassigned            110
Unassigned            1110
Unassigned            1111 0
Unassigned            1111 10
Unassigned            1111 110
Unassigned            1111 1110 0
Link Local Use        1111 1110 10
Site Local Use        1111 1110 11
Multicast             1111 1111
We still have a loopback address in IPv6, and a new address called "unspecified". IPv6 drops the broadcast packet and relies on the multicasting abilities that have been built into IPv6. The specific addresses for loopback and unspecified are listed below:

0:0:0:0:0:0:0:1   Loopback Address
0:0:0:0:0:0:0:0   Unspecified Address
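The compression shortcuts, and the sheer size of the address space, can be verified with Python's ipaddress module, using the sample address from the text (note that the module normalizes hex digits to lowercase):

```python
import ipaddress

# The sample address from the text, written out in full.
full = ipaddress.IPv6Address("B223:BB34:0000:0000:0000:0099:BB78:5079")
print(full.compressed)   # b223:bb34::99:bb78:5079

# The loopback and unspecified addresses compress to ::1 and ::.
print(ipaddress.IPv6Address("0:0:0:0:0:0:0:1").compressed)  # ::1
print(ipaddress.IPv6Address("0:0:0:0:0:0:0:0").compressed)  # ::

# And the 128-bit address space really is astronomically large.
print(2 ** 128)
```

The module applies exactly the rules above: leading zeros in each group are dropped, and the longest run of zero groups collapses into a single ::.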

IPv6 and the Kernel
To get IPv6 on most distributions of Linux and BSD, we need to either enable it on the initial installation or recompile the kernel. Recompiling the kernel can be done from either X Windows, using the make xconfig command, or from the CLI, using the make menuconfig command.


First we need to go to the directory where our kernel sources are located. In the sample here, the system is Red Hat 9 with a kernel that has been updated to 2.4.20-31.9, so our path is /usr/src/linux-2.4.20-31.9. Our next task involves some housekeeping, which means we should run the command:
make mrproper

This gets our files ready for the compiling process. Next we decide to run either the CLI version or the X Windows version. In Figure 1.11 we are running the X Windows version.

Figure 1.11 Configuring the kernel with Xconfig

To use IPv6 with our new kernel, we need to choose Networking Options by clicking on the correct button. This will give us the submenu where we pick IPv6. Figure 1.12 shows the submenu from selecting Networking Options.

Figure 1.12 Choosing IPv6


Once we have selected IPv6, we will go back to the main menu and then click on "save and exit". This is where the fun really begins for us. There are several steps that we need to complete in order to compile the new kernel. Note that running make mrproper will delete the current .config file, so you may want to make a copy of the current .config file before you start the recompiling process. Our steps for compiling the new kernel are:

• make mrproper
• make menuconfig or make xconfig
• make clean
• make bzImage
• make modules
• make modules_install

Once we have run through these steps, we need to copy the new kernel, which is in the /usr/src/linux-2.4.20-31.9/arch/i386/boot directory in the compressed bzImage format, to the /boot directory.
# cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-31.9custom

The extension custom is added by Red Hat automatically when recompiling the kernel unless you change it in the Makefile. You would change the EXTRAVERSION parameter, where it says EXTRAVERSION = -31.9custom, to a new name that you choose.

Check to see that all the files were copied to the /boot directory. We need to see the vmlinuz kernel file, the System.map file and the initrd*.img file. If they are not all there, then run:
make install

That should put all the files where they belong. There are two boot loaders used for Linux: LILO and GRUB. GRUB has largely replaced LILO, so this is what we will use, and we need to edit its configuration file to reflect our new kernel. You will find the configuration file in the /boot/grub directory; it is called grub.conf. Now we need to add a few lines like this:
title Red Hat Linux (2.4.20-31.9custom)
	root (hd0,0)
	kernel /vmlinuz-2.4.20-31.9custom ro root=LABEL=/ hdb=ide-scsi
	initrd /initrd-2.4.20-31.9custom.img

The title is what we want it to say on the boot screen, and then we need to tell GRUB which kernel and which initrd to use. At this point you should be able to reboot your Linux installation and have the IPv6 stack loaded. The easy way to verify this is to use the ifconfig command, as we see here in Figure 1.13:

Figure 1.13 Using ifconfig to view the new IPv6 address information

[root@RedRum /]# ifconfig eth0
eth0    Link encap:Ethernet  HWaddr 00:10:B5:8E:71:AD
        inet addr:   Bcast:   Mask:
        inet6 addr: fe80::210:b5ff:fe8e:71ad/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:173028 errors:0 dropped:0 overruns:0 frame:0
        TX packets:238395 errors:1 dropped:0 overruns:0 carrier:2
        collisions:5173 txqueuelen:100
We can see a new line in the output called inet6 addr, and this system's IPv6 address is fe80::210:b5ff:fe8e:71ad/64. At this point IPv6 is alive and well on this system, and we can start working with it. We can also see that while IPv6 is running, so are our old IPv4 stack and IP address. We can do a quick test by using ping6, which is the IPv6 ping utility, as seen here in Figure 1.14:

Figure 1.14 Using ping6 to test the IPv6 loopback
[root@RedRum root]# ping6 -c 1 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.071 ms

--- ::1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
[root@RedRum root]#

IPv6 is ushering in a whole new way of architecting IP networks and subnets. It will require that the designer follow a tighter set of rules and understand more of how the network will function at a lower level than with IPv4. From a security perspective, IPv6 is troublesome, since many of the tools and devices, such as IDS sensors, are not IPv6 aware and would not be able to prevent or detect an attack carried over IPv6. This will change, but it will change slowly. For an excellent resource on IPv6 for Linux, go to:

Constructing Packets
Now that we have an idea of how TCP/IP handles the addressing of packets, let's take a look at the construction of the packet. Packets are built according to rules laid down by RFCs, and the TCP/IP stacks on operating systems and hosts are designed with these rules in mind. But a clever attacker can either bend the rules or break them outright to crash the attacked system, or cause it to fault into a condition that the attacker can use to gain access. A classic case of a specially crafted packet bringing down a host was the "Ping of Death", where the ping packet, based on ICMP, was built longer than the ICMP specification called for. When sent to certain types of hosts, it would crash the IP stack and in turn the entire system, hence the term "Ping of Death". Newer systems and IP stacks are more robustly designed against these types of attacks, but there are plenty of older systems that it would still be effective against. Other specially crafted packets are packets that have the SYN or FIN flags set without the actual connection being set up. Many types of scanners use this type of packet to get around a firewall, since a SYN or FIN packet is a legal packet according to many firewalls. A flood of SYN packets can also be a way to scan or crash a host: the SYNs use up all the memory by causing the target to hold each connection open, waiting for a completion of the handshake that will never come. Some firewalls will guard against this sort of attack, but it can be quite effective. These firewalls use what is called "stateful inspection" to watch for connections that have been held open for too long, or for too many connections from a single source. With this example, you can start to see that understanding how TCP/IP works can be one way to help protect your data and strengthen your network security. Packets follow standard rules for their construction. Each part of the packet, such as the header, source address space, destination address space and so on, is predetermined by an RFC. The packet is in turn carried within a frame, together with its data payload. In Figure 1.15 below, we see a graphical representation of what a packet looks like for IPv4:

Figure 1.15 Construction of an IPv4 Packet
Bits 0-31, one 32-bit word per row:

Version | IHL | Service Type | Total Length
Identification | Flags | Fragment Offset
Time to Live | Protocol | Header Checksum
Source Address (32 bits)
Destination Address (32 bits)
Options and Padding

As we discussed in the prior section, TCP/IP Version 6, the construction of an IPv6 packet is very different from that of IPv4. In Figure 1.16 we see a typical IPv6 packet:


Figure 1.16 Construction of an IPv6 Packet

Bits 0-31, one 32-bit word per row:

Version | Service Type | Flow Label
Payload Length | Next Header | Hop Limit
Source Address (128 bits)
Destination Address (128 bits)

We can see right away that the header of IPv6 is simpler than that of IPv4. Also, the address space is larger, but the overall header length is not much larger. As we learned earlier, the checksum has been removed from IPv6. There are some new fields, like the Service Type and the Flow Label. Service Type represents the priority of the packet. The Flow Label field can mark a series of packets, such as VoIP traffic, as a "flow" and request a certain service for this designated flow. It is relatively easy to make your own packets, either by using a sniffer or a tool such as Hping. Be aware that building your own packets is an advanced skill and that you should NEVER do this on a production network, or risk the wrath of bosses, customers or family members trying to read their email.
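To make the Figure 1.15 layout concrete, here is a short sketch in Python that packs the IPv4 header fields into 20 bytes and fills in the header checksum. The addresses and field values (identification, TTL, protocol) are our own illustrative choices, not values from the text:

```python
import struct

def checksum(data: bytes) -> int:
    # RFC 791 ones'-complement sum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    ver_ihl = (4 << 4) | 5           # version 4, IHL 5 words (20 bytes)
    service_type = 0
    total_len = 20 + payload_len
    ident, flags_frag = 0x1234, 0    # illustrative identification, no frags
    ttl, proto = 64, 6               # TTL 64, protocol 6 = TCP
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, service_type, total_len,
                      ident, flags_frag, ttl, proto, 0, src_b, dst_b)
    csum = checksum(hdr)             # computed with the checksum field at 0
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

header = build_ipv4_header("192.168.50.240", "192.168.50.1", 0)
print(header.hex())
```

A handy property of the ones'-complement checksum is that recomputing it over the finished header yields zero, which is how a receiving stack validates the header.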

TCP Communication
We cannot have security without understanding what exactly we are trying to secure and how our tools work. There are two types of TCP/IP protocols, or communication methods, used. The first is Transmission Control Protocol, or TCP, which is based on having an acknowledged connection between two hosts. This connection is based on a three-way handshake that takes place at the start of the connection. Host A will send a SYN packet, a request to open a connection, to Host B, with whom we wish to exchange data. Host B will receive the SYN request and will send back a SYN plus an ACK, which says "I heard the SYN request and I am sending back an ACK to acknowledge that I got the SYN request." Host A gets the SYN-ACK and replies with its own ACK, and then the exchange of data begins. In Figure 1.17 we see this diagrammed:


Figure 1.17 Diagram of TCP/IP Three-way handshake

Host A  --- SYN packet ------------------->  Host B
Host A  <-- SYN and ACK packet -----------   Host B
Host A  --- ACK packet ------------------->  Host B
Host A  <========= Data Transfer ========>   Host B

In Figure 1.18 we see an Ethereal packet trace file of the start of an SMTP exchange between a server and a client. The three-way handshake starts the process before the actual SMTP connection is opened.

Figure 1.18 Packet trace of the three-way TCP handshake

The first highlighted line is the first SYN to start the connection. Then we follow with the SYN/ACK packet and lastly the final ACK. At the end of the data transmission, a FIN packet is sent to tell the host that the data exchange is completed. There are other flags, or bits, that can be set, such as RST, which will reset the connection. Some of the common connection-based applications are HTTP and SMTP. User Datagram Protocol, or UDP, is not quite as polite on the network. UDP just throws the packet out onto the wire and hopes that it makes it to the target. UDP relies on the application to handle the checking of packet arrival, along with any retransmission that might need to take place. UDP does not make the effort that TCP does to build up the connection; it relies on the higher layers of the stack to make this effort of checking and error correction. This makes for a low-overhead protocol, unlike TCP, which handles all the checking in the TCP stack. Common protocols that are UDP based are the Domain Name System (DNS), the Network File System (NFS) and many of the streaming applications such as Real Audio.
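The handshake and data exchange described above can be exercised with a small, self-contained Python sketch. The echo server, loopback address and OS-assigned port are our own choices for demonstration; the kernel performs the actual SYN, SYN/ACK, ACK exchange when connect() and accept() meet:

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    conn, _ = server.accept()        # completes the three-way handshake
    conn.sendall(conn.recv(1024))    # echo the data straight back
    conn.close()                     # triggers the FIN exchange

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))  # SYN, SYN/ACK, ACK
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
print(reply)
```

Running a packet capture against this script shows exactly the SYN, SYN/ACK and ACK sequence of Figure 1.17, followed by the data and the closing FIN packets.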


Any port will do

A key concept in understanding network security is the concept of TCP ports and what ports can do for us with respect to TCP/IP. An easy way to understand ports is this: imagine how the postal service works. You address a letter (a data packet with an IP address) to a friend (the destination) who has a PO box, and drop it into the mail box (the data packet being sent to the network via a router). The envelope (data packet) goes to your friend's post office (the TCP/IP address) and then is placed into the correct PO box number (the TCP/IP port). TCP/IP uses ports in the same manner. If your friend's PO box was really SMTP, the port that TCP/IP would send the packet to would be port 25. There are well-known ports that are specifically spelled out by various RFCs; they run from 0 to 1023. These are port numbers like 23 for telnet, 22 for SSH, 25 for SMTP, 80 for web traffic, 110 for POP email, 443 for HTTPS and others. Ports above 1023 are something akin to entering the wild west. Some ports are known by custom, and others are just random and cross over onto the customary numbering. Knowing which ports your applications use or need to access is critical in configuring your network security. There is an official list of ports maintained by the IANA. One of the easiest ways to see exactly what each application is using is a network analysis tool such as EtherPeek from WildPackets or an open source tool such as Ethereal.
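The well-known port assignments listed above can be looked up programmatically; a short sketch, assuming a standard /etc/services database as found on typical Linux systems:

```python
import socket

# Look up well-known port numbers from the system services database
# (assumes a standard /etc/services, as on typical Linux systems).
for name in ("ssh", "telnet", "smtp", "http", "pop3", "https"):
    print(f"{name:7s} -> {socket.getservbyname(name, 'tcp')}")
```

This prints the same mapping given in the text: ssh on 22, telnet on 23, smtp on 25, http on 80, pop3 on 110, https on 443.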

What does a router really do?
Technically speaking, a router is a device that connects two different subnets together at layer three of the OSI model. Routers deal with "packets", unlike a switch, which is a layer two device and deals with "frames". These are NOT the same thing, and when troubleshooting a network, it is in your best interest to remember the differences. The switch, as a layer two device, is only concerned with MAC addresses; it couldn't care less what IP address is being used by the packet. The router, on the other hand, does care a lot about the IP address being used. A router makes choices of where to route packets by using routing tables that hold information about all the subnets the router knows how to reach, and the tables can be static or dynamic. Routers can use different protocols to dynamically build these tables, such as Open Shortest Path First or OSPF, Routing Information Protocol or RIP, Border Gateway Protocol or BGP, and others. Each of the routing protocols was designed for a particular reason at the time and therefore is suited to a particular network design. There is not a single routing protocol that works for everything. When a host wants to exchange packets with a local member of its network, this is easy to accomplish using Address Resolution Protocol, or ARP. Host A will send an ARP request out onto the network looking for Host B's MAC address. Host A normally knows the IP address of Host B but not the MAC address. When Host B receives the ARP request, it will send back a packet with its MAC address to Host A, and now both can send data back and forth. In Figures 1.19 and 1.20, we see the request packet and the reply packet for ARP.

Figure 1.19 Structure of ARP Request Packet

Ethernet Header
  Destination:   FF:FF:FF:FF:FF:FF
  Source:        00:00:0E:D5:C7:E7
  Protocol Type: 0x0806 IP ARP

ARP - Address Resolution Protocol
  Hardware:                1 Ethernet (10Mb)
  Protocol:                IP
  Hardware Address Length: 6
  Protocol Address Length: 4
  Operation:               1 ARP Request
  Sender Hardware Address: 00:00:0E:D5:C7:E6 (Requester MAC address)
  Sender Internet Address:
  Target Hardware Address: 00:00:00:00:00:00
  Target Internet Address:

Figure 1.20 Structure of ARP Reply Packet

Ethernet Header
  Destination:   00:00:0E:D5:C7:E7
  Source:        00:E0:B1:23:00:E3
  Protocol Type: 0x0806 IP ARP

ARP - Address Resolution Protocol
  Hardware:                1 Ethernet (10Mb)
  Protocol:                IP
  Hardware Address Length: 6
  Protocol Address Length: 4
  Operation:               2 ARP Response
  Sender Hardware Address: 00:E0:B1:23:00:E3
  Sender Internet Address:
  Target Hardware Address: 00:00:0E:D5:C7:E6 (Requester MAC address)
  Target Internet Address:

We can see in the first packet that the Target Hardware address is all zeros. This is why we need ARP, to get this MAC address so we can send the rest of our data. In the ARP reply, the host has put its MAC address in the Sender Hardware Address, so now both ends have a MAC address and an IP address and data flow can start.


When Host A wants to send a packet to Host C on a different network, it becomes trickier. The request is received by the router, which looks at the packet and tries to look up which interface Host C can be found on. This lookup takes place by using the routing tables. If the router finds the network on one of its local interfaces, the router will forward the packet to Host C after rewriting the header to put its own MAC address as the source of the packet. The forwarded request is received by Host C, and the normal response is sent back to what it thinks is Host A, but is really the router. The router gets the response and forwards it to Host A. If the router does not know which interface Host C is on, then the router will look to see where the default gateway interface is, and it will forward the packet to the gateway, or next hop. The default gateway is really the interface of last resort. The router looks at the packet and says, "I have no idea where this host is, so I will give it to the next hop, who will know where to send it."

Open Source Linux Routers

There are a few open source projects for building Linux-based routers. Most are not pure routers, but they will build in some basic firewalling capabilities such as NAT. One of the most flexible is called "Freesco", which is a contraction of the words "Free" and "Cisco". This Linux-based router will fit entirely on a single 3.5-inch high density floppy disk. There is a very comprehensive support structure in the forums for this project, which is a testament to the flexibility of, and the need for, a cheap but effective router. In Figure 1.21 we see the advanced configuration screen.

Figure 1.21 Advanced Settings for the Freesco Router

We can see from this screen that the Freesco router has choices of interface types that range from dial-up to DSL to ISDN. Freesco supports up to ten network cards, five printers and ten modems. It also supports PPPoE and dynamic DNS. All of this power just needs a 386sx (remember those?) and 16MB of RAM. This means that the old, old PC gathering dust in the back of the garage could be recycled to a useful purpose. There is another floppy-based router with firewalling called "floppyfw". This router is based on Debian and is not quite as compact as Freesco, but it is still very usable. It does run on a more advanced kernel, version 2.4.27, so the firewalling capabilities are more advanced. For a more powerful router, complete with OSPF and BGP protocol support, there is Zebra. Zebra is a true router that supports OSPF, RIP 1 and 2, and BGP. Each protocol gets its own process, so the router is very efficient. Unlike many of the floppy-based routers, Zebra will support IPv6 if you use any of the BSD distributions, such as FreeBSD or OpenBSD. When you start working with Zebra, one of the first things that is apparent is the similarity to the Cisco router interface. If you have worked on Cisco routers in the past, you will feel right at home with Zebra. In Figure 1.22 we see the familiar command line of Zebra running on a FreeBSD box.

Figure 1.22 Zebra Router running on FreeBSD

Installing and configuring Zebra is very straightforward. First, download the file and decompress it into a folder. Change to the folder and verify that the files were extracted there. Now run ./configure. This may take a while, but when it is completed, run make. When the make is finished, run make install and wait for it to finish. Once Zebra is installed, there are a few ways to start it, depending on your Linux or BSD distribution, but first we need to configure the zebra.conf file. We opened a shell and, since in this example Zebra was installed on FreeBSD, we changed to the /etc directory and found the zebra.conf.sample file. Just copy this


file to make a new file called zebra.conf. This gives us the shell of a configuration and our two passwords, login and enable, both set to zebra. Now we change to the /usr/local/sbin/ directory and start zebra by typing ./zebra start &. Open a new shell and type telnet localhost 2601. You should get a login prompt from Zebra. Use the password zebra, and then go into enable mode by typing enable and the password zebra again. The router is now ready to be configured and used. Zebra is also the core of a newer router project called the “Quagga Routing Suite,” which is an enhancement of Zebra. Quagga can be built just as Zebra was, or you can download precompiled binaries for the Red Hat, Fedora, Debian and Gentoo distributions. An excellent resource with a sample build of a Zebra router is also available online.
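The build-and-start sequence just described can be sketched as a shell session. The tarball name and version number here are assumptions for illustration; the paths, file names and telnet port are from the text:

```shell
# build and install Zebra from source (tarball name/version assumed)
tar xzf zebra-0.94.tar.gz
cd zebra-0.94
./configure            # may take a while
make                   # compile
make install           # install the binaries

# seed a configuration from the sample shipped with the package
cp /etc/zebra.conf.sample /etc/zebra.conf

# start the daemon, then connect to its command line from another shell
cd /usr/local/sbin
./zebra start &
telnet localhost 2601  # log in with password 'zebra', then type 'enable'
```

From the telnet session you get the Cisco-like command line shown in Figure 1.22.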

Is a Linux router secure?
There are many things that can be done to make your Linux based router more secure, and the concepts are much the same across the board. The following text is aimed at the router crowd, but many of the suggestions and best practices apply to any Linux server, not just Linux routers. For those who would like to experiment with Linux as a router, there are sites with all the details needed to get you started. Those directions will show you how to build a small but flexible Linux based router suitable for home use on cable or DSL, a budget home office, a remote site or a small company. They also assume that you know your way around Linux and do not mind having to change direction due to software upgrades over time.

Shutting off the unwanted services
It is amazing how many times Linux is installed and deployed in the default mode. Many times the administrator will install Linux as a server with everything installed “just in case” and leave everything at the default. This is not an optimal way to deploy and secure your Linux router. The first thing we need to do is find out just which services are running on our Linux server/router and, more importantly, at which run levels they are running. This matters because Linux has two primary run levels: run level 3, which is text mode, and run level 5, which is X Windows. Technically speaking, your server or router should be at the text level, but many administrators nowadays prefer the GUI; the pretty front end is less secure, so make your own choice. To find out which services are enabled and at which run levels, we will use the chkconfig command, as we see in Figure 1.23: Figure 1.23 Using chkconfig to list services running at each run level
# what run level are we at?
[root@RedRum root]# cat /etc/inittab |grep ^id:
id:5:initdefault:


[root@RedRum root]# # show me which services are running and when
[root@RedRum root]# chkconfig --list|grep 5:on|sort
acpid           0:off  1:off  2:off  3:on   4:on   5:on   6:off
anacron         0:off  1:off  2:on   3:on   4:on   5:on   6:off
apmd            0:off  1:off  2:on   3:on   4:on   5:on   6:off
atd             0:off  1:off  2:off  3:on   4:on   5:on   6:off
autofs          0:off  1:off  2:off  3:on   4:on   5:on   6:off
crond           0:off  1:off  2:on   3:on   4:on   5:on   6:off
cups            0:off  1:off  2:on   3:on   4:on   5:on   6:off
gpm             0:off  1:off  2:on   3:on   4:on   5:on   6:off
iptables        0:off  1:off  2:on   3:on   4:on   5:on   6:off
irqbalance      0:off  1:off  2:off  3:on   4:on   5:on   6:off
isdn            0:off  1:off  2:on   3:on   4:on   5:on   6:off
kudzu           0:off  1:off  2:off  3:on   4:on   5:on   6:off
messagebus      0:off  1:off  2:off  3:on   4:on   5:on   6:off
microcode_ctl   0:off  1:off  2:on   3:on   4:on   5:on   6:off
netfs           0:off  1:off  2:off  3:on   4:on   5:on   6:off
network         0:off  1:off  2:on   3:on   4:on   5:on   6:off
nfs             0:off  1:off  2:off  3:off  4:off  5:on   6:off
nfslock         0:off  1:off  2:off  3:on   4:on   5:on   6:off
ntpd            0:off  1:off  2:off  3:on   4:off  5:on   6:off
pcmcia          0:off  1:off  2:on   3:on   4:on   5:on   6:off
portmap         0:off  1:off  2:off  3:on   4:on   5:on   6:off
random          0:off  1:off  2:on   3:on   4:on   5:on   6:off
rawdevices      0:off  1:off  2:off  3:on   4:on   5:on   6:off
rhnsd           0:off  1:off  2:off  3:on   4:on   5:on   6:off
sendmail        0:off  1:off  2:on   3:on   4:on   5:on   6:off
smartd          0:off  1:off  2:on   3:on   4:on   5:on   6:off
sshd            0:off  1:off  2:on   3:on   4:on   5:on   6:off
syslog          0:off  1:off  2:on   3:on   4:on   5:on   6:off
usermin         0:off  1:off  2:on   3:on   4:off  5:on   6:off
webmin          0:off  1:off  2:on   3:on   4:off  5:on   6:off
xfs             0:off  1:off  2:on   3:on   4:on   5:on   6:off
xinetd          0:off  1:off  2:off  3:on   4:on   5:on   6:off

This sample was from a Fedora Core 1 default server installation with most of the common services plus a few tools like Webmin installed. If this server/router were to be placed in production, services like kudzu should be turned off.
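The filter used in the figure can be tried against a canned sample of output; this sketch uses a here-string instead of the live chkconfig output, and the service names and states are illustrative only:

```shell
# canned lines in the style of `chkconfig --list` output (illustrative)
sample='kudzu     0:off 1:off 2:off 3:on 4:on 5:on 6:off
sshd      0:off 1:off 2:on 3:on 4:on 5:on 6:off
saslauthd 0:off 1:off 2:off 3:off 4:off 5:off 6:off'

# print the names of services that start at run level 5, sorted
printf '%s\n' "$sample" | grep '5:on' | awk '{print $1}' | sort
```

On a real system you would pipe `chkconfig --list` into the same `grep`/`awk`/`sort` chain instead of the here-string.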


Chapter 2
Firewalling the Network
“So in war, the way is to avoid what is strong and to strike at what is weak.” Sun Tzu


Network security and firewalls are words that evoke the image of hacker wars and a person bathed in the blue-white light of a monitor in a dark room littered with empty cola cans and pizza boxes. At times that image very well might be true, but most of the time network security is about keeping your data safe from harm.

Network security is a multifaceted and often thankless task, since it involves restricting people’s access. It is a combination of hardware, software and social engineering. The firewall is one part of the overall design of a well-protected network. The firewall is much like the castle of old, where a wall surrounded the city and gates controlled access to it. The gate would be lowered or raised depending on who was knocking at the door. It is no different for the firewall today. The firewall is the gatekeeper charged with the first level of protection of the data inside the perimeter. Much like the castle drawbridge, the firewall is not the ultimate protection, since spies can sneak past the door, but it will keep out the obvious riffraff and troublemakers. Firewalls used to have a very simple task: block certain IP addresses and let the rest in. Things have changed in the passing years; now firewalls examine which ports are being used, whether that IP handshake is really a legitimate connection, whether that user is allowed access to that website, whether the payload really matches what the packet claims to be, and so on. In fact, with the newest generation of firewall, antivirus, content filtering and monitoring appliances, the line between what used to be discrete devices has become very blurred. All current distributions of Linux, meaning those with 2.4 and 2.6 kernels, include firewalling capabilities using Netfilter, the kernel's software hooks, and a module called iptables, which allows us to build a table of rules for the firewall. These features allow us to configure a firewall on our Linux box to protect itself or our network with various levels of sophistication. Before iptables there was ipchains, the mechanism used in the 2.2 kernels, but that technology is obsolete and will not be covered in this book. Linux offers very compelling reasons why it should be used as a firewall.
Linux is cheap, so cheap that you can get it for free, unlike many other operating systems. Linux can be locked down very tight and can have a very small software footprint when optimized for a certain function like being a firewall. You can use an “obsolete” Pentium II class personal computer with a small hard drive to get a firewall plus extra features like Snort for IDS and syslogging for reports. Take the hardware to a reasonably fast Pentium III class personal computer and now we can add in antivirus, content filtering, proxy services and even VPNs for a small shop. Or, as an option, you can buy one of the small two or three interface WRAP, WISP or other embedded Linux motherboards and use LEAF, M0N0WALL or other custom Linux firewall software to build your own mini-appliance. Linux is very modular, so we can load only the modules we need to run our firewall and not have to worry about locking down something that should not exist in our firewall, unlike some other operating system based firewalls. Being modular allows Linux to be very flexible, and since it is so flexible, it can be tuned to meet virtually any networking requirement. Any firewall has certain features, and some firewalls have more than others. A list of basic firewall features is shown here:
• Static packet filters
• NAT and PAT
• Stateful inspection of packets
• Proxy

Each feature has strong and weak points, and not all of these features are found on all firewalls. But in order to evaluate a certain firewall's feature set and decide whether to use it, you need to know about all of them. Let's take a look at these features, what they bring to our firewall, and why they might be important for our firewall to have.

Static packet filters in their simplest form are access control lists, or ACLs. These lists can filter packets based on IP address, MAC address, packet type, protocol, and whether the packet comes from a certain source or is bound for a certain destination. ACLs can be simple to implement but, depending on the firewall, not very flexible. Depending on the device and how the filter list is ordered, they can add a significant amount of overhead to the CPU. ACLs should be used sparingly, to perform cursory filtering. An obvious disadvantage of static packet filters is that since they are static, the door they open is always left open.

NAT and PAT are universally supported on most if not all firewalls. Strictly speaking, this technology is not a firewall technology, but it has become ubiquitous on firewalls, from the sixty-dollar cheap units to the very high end firewalls costing thousands of dollars. NAT and PAT have become the way to get around the limitations of IPv4 addressing, reuse IP ranges, merge two networks using the same subnet, and other tricks. For many, the most important trick of NAT and PAT is the ability to “overload” a single IP address in front of a private network. In Linux this is called “masquerading” and is supported in kernels 2.2.x through 2.6.x. Linux also refers to Destination NAT (DNAT) and Source NAT (SNAT). Source NAT is when the firewall alters the source address of the first packet to reflect a different source. If this sounds like masquerading, it is very close, since masquerading allows the overloading of a single IP by changing the source address. Source NAT takes place after any routing and just before the packet is placed onto the wire. Destination NAT is when the destination address is changed; you have heard of this in the terms “port forwarding” and “transparent proxying.” Destination NAT takes place before the packet is routed.

Stateful inspection is where the firewall builds a table of the state of outbound connections and then compares inbound connections to this table to verify that the state of each connection is correct. If the states do not match, the firewall can drop the packet or send a reset to clear the connection. This becomes very important during a SYN port scan, where an attacker will scan the firewall with only SYN packets, leaving half-open connections waiting for an ACK that will never arrive. These half-open connections can cause the firewall or other network device to run out of memory if it does not have a way to clear them out or time them out quickly. It is the details of how the firewall handles this type of probing that help separate the cheap firewalls from the better ones. A firewall that crashes when attacked is not much good to you or to your users.

Proxying is not really a firewalling technology, much as NAT is not, but in today's world the firewall and the proxy server have merged to the point where proxying is considered a type of firewalling. The idea behind a proxy server is to have the client send its request to the proxy server, which in turn generates a request on the client's behalf to whatever service was requested. This can be HTTP requests to outside resources, SMTP requests, POP requests, FTP requests and more. Using a proxy has the advantage of hiding the client while also giving administrators the ability to verify that clients have permission to access the resources being asked for.
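As a concrete sketch of how SNAT (masquerading) and DNAT look as iptables rules: the interface name and addresses below are assumptions for illustration, not values from the text.

```shell
# SNAT: masquerade everything leaving the assumed external interface eth0
# (happens in the POSTROUTING chain, after the routing decision)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# DNAT: port-forward inbound web traffic to an assumed internal host
# (happens in the PREROUTING chain, before the routing decision)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.10:80
```

The chain names make the ordering concrete: DNAT must run before routing so the kernel can route to the rewritten destination, while SNAT runs after routing, just before the packet hits the wire.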

Isn’t a router a firewall?
A lot of folks will talk about firewalls, and some of these self-proclaimed experts claim that with a firewall in place there is no need for anything else. Other experts claim that a router is just as good as a firewall. Still others buy a sixty-dollar throwaway router/firewall and claim that it will protect their company's information, or even their own personal information at home. These claims could not be further from the truth if one just sits down and thinks a bit about what a firewall is and how it works. There are different types of firewalls to choose from. Just like a mechanic's tools, there is a right tool for the job; there are some tasks certain firewalls are good at and other tasks they are not. There are several types or classes of firewalls. We can see some of them in Table 2.1:

Table 2.1 Firewall Classes

Type                 Examples
Personal Firewall    Kerio, Tiny Firewall, Zone Alarm, Norton
Router Firewall      Cisco with FW IOS code loaded
Low End Hardware     Linksys, Dlink, NetGear
High End Hardware    Cisco PIX, Fortinet, Netscreen, CheckPoint
Server Firewalls     Astaro, Smoothwall, MS ISA Server, Linux Netfilter/IP Tables

Each one of these devices is a firewall, but there are huge differences in how well they protect your network and data. Each has its place, and it is up to you not to mismatch the device or software to a task it was not designed for. You do not want to protect your enterprise of 4,000 users with a couple of sixty-dollar firewalls. This is not a career-enhancing move, trust me on this. Having to stand in front of your boss and your boss's boss to explain how the company just got hacked because of a cheap firewall is not a good thing. Or, worse, how about having to explain to the police and the Feds that it really wasn’t you who just maxed out several credit cards in your name, opened with personal identification lifted off your home PC? The personal firewall is good for the road warrior or the consultant who will be on strange and unknown networks all the time. Personal firewalls will even work on the desktop for the home user or the ultra paranoid. They have limits, though. They can be a headache to manage on more than just a few systems, and they can interfere with software applications that are not well behaved; this tends to show up in custom or in-house applications. The router firewall is a good way to leverage existing hardware or even redeploy some older hardware that may not be up to the task of protecting the perimeter. A Cisco router loaded with the IOS firewall feature set is a good example of this type of firewall. Do not confuse this class of firewall with the small consumer units sold by the pallet load in warehouse stores. They may “route packets,” but they are very limited in what they offer as a router and even more limited in what they offer as a firewall. The router firewall offers a compromise between features and performance: having a router work also as a firewall will impact routing performance and add latency to the data path.
Since routers are generally easy to manage, most router firewalls can be deployed and managed in the enterprise without too much trouble. The low-end hardware firewalls generally fall into the four-hundred to six-hundred-dollar range. These are dedicated boxes and normally are not upgradeable or expandable, though this is changing with the very newest generations of designs. Like many other items in the high tech world, a given device gets smaller, cheaper and more powerful as time goes on. These devices are normally marketed to the small home office, remote branches and small business offices of 100 users or less. Many times the new designs offer ways to provide additional security for the LAN when connecting to the remote office. The newer designs also will give up some user capacity in exchange for adding VPN connections as a feature. They have to give up something, since adding the VPN feature drives up the CPU processing requirements due to the encrypting and decrypting of data on the fly. Many of these midrange firewalls are now offering extra features like intrusion detection and content filtering in a package smaller than 1U. At this pricing level, we will normally see a decent logging function, and some offer automated alerting of alarms. High-end hardware firewalls tend to be feature rich and have a very high capacity, with user counts measured in the thousands. Now we are talking about firewalls that can run into several thousand dollars. They can have very complex configurations and are designed to be managed across the enterprise. These firewalls also offer anti-virus scanning, content filtering, intrusion detection and various proxy services, plus very robust logging and alerting features. Most in this class offer some method of redundancy, either as a hot standby or by load balancing between two or more firewalls. The server class of firewalls is a very mixed bag. These can run from a recycled 486 with a stripped-down version of Linux to a 4U rack mount with RAID drives and a feature-rich OS installed. The OS can be Linux, a very custom form of Linux, or even Windows if you want to consider Microsoft's ISA Server software. Since this book is about Linux, we will concentrate on the Linux versions. A real advantage of server-class firewalls is that both the hardware and the software tend to be modular, which means upgrades are easily accomplished. Features can be readily added or removed, and capacity can be added as needed. But with this flexibility comes the risk of failure, as a server-class firewall will generally have more moving parts, or parts not built to the same level of quality as dedicated hardware firewalls. Let's be honest: you cannot expect a ten-dollar imported IDE controller to be of as high quality as the branded controller that costs thirty dollars. The vendor had to cut a corner somewhere, and quality assurance is one of the easiest things to trim. This is an important aspect to keep in mind when building up or purchasing the hardware for a server-class firewall.
Building a mission-critical security device for your network is not the time to be cheap. Server-class firewalls, when built correctly, can offer everything the high-end dedicated hardware solutions offer. You can have relatively easy management, robust logging, high capacity, anti-virus scanning, VPNs, intrusion detection, content filtering and more, and you can have all of this in two different ways: commercial products, or open source and free applications. The commercial products are generally more polished and sometimes more stable than the free applications, but in some cases the free application is very good. The open source applications also expect a higher level of technical expertise from the user, whereas the commercial products, appealing to a wider market, have made things easier to install and use.

IPv6 and IPTables
When discussing firewalls and what they can and cannot do, we need to discuss the support or non-support of IPv6 on today's networks. If you are considering a firewall for the enterprise, you should investigate whether the vendor has a path to IPv6 or already supports it. In the server class firewalls, this is easily accomplished by recompiling the kernel, as we saw in Chapter 1. In the smaller units, this is not so easily accomplished, and with the SOHO firewalls it very well may take a long while before the vendors offer IPv6 support, if ever. However, for those who care to, you can take a Linksys 54G wireless router/firewall, which is based on Linux, and install a hacked kernel that does support IPv6 and other cool features like ssh. Perhaps more SOHO firewall vendors will take note of the success of the Linux-based Linksys and offer more flexible products. For those who wish to experiment or to build a real firewall with IPv6 support, there are several sources to draw from. For those who prefer BSD, there is a paper using OpenBSD that details how to get a basic IPv6 firewall up and running. For those using Red Hat 9 or Fedora who have configured the kernel for IPv6 support, you can easily add IPv6 firewall support by using our old friend, chkconfig. This assumes you have the ip6tables file; if you do a which command for ip6tables and come up empty-handed, then get the rpm and install it. For our Red Hat 9 server this is what we had to do, even though we had recompiled the kernel to support IPv6, as you may remember from Chapter 1. The only thing to remember is that the ip6tables module is exclusive and requires that ipchains and iptables be turned off. The steps are shown here in Figure 2.1: Figure 2.1 Using the service command to control iptables
[root@RedRum utilities]# service iptables stop Flushing all chains: [ OK ]

Removing user defined chains: [ OK ]

Resetting built-in chains to the default ACCEPT policy: [ OK ]

[root@RedRum utilities]# chkconfig iptables off [root@RedRum utilities]# service ip6tables start [root@RedRum utilities]#

We stop iptables for IPv4 and then start the new ip6tables. If we are using ipchains, it needs to be stopped also. We use the chkconfig command to turn off iptables and ipchains, and then we use the chkconfig --level 345 ip6tables on command to have ip6tables start each time Linux starts.

Patch-O-Matic
To help keep iptables patched and current, the netfilter project offers a tool called “patch-o-matic.” You can download the current patch-o-matic from the project site, and in Figure 2.2 we see wget being used to download the current version.


Figure 2.2 Using wget to download current Patch-O-Matic
# wget

Then use tar to unpack the file as we see here:
# tar jxvf patch-o-matic-20031219.tar.bz2

Once you have unpacked the patch-o-matic you can run it as shown below in Figure 2.3 Figure 2.3 Running the Patch-O-Matic for patching iptables
[root@RedRum patch-o-matic]# KERNEL_DIR=/usr/src/linux-2.4 ./runme pending Examining kernel in /usr/src/linux-2.4 --------------------------------------------------------------

Welcome to Rusty's Patch-o-matic!



Userspace: /root/utilities

Testing.. Each patch is a new feature: many have minimal impact, some do not. Almost every one has bugs, so I don't recommend applying them all! ------------------------------------------------------Already applied: submitted/01_2.4.19 submitted/02_2.4.20 submitted/03_2.4.21

Testing... 04_2.4.22.patch NOT APPLIED (4 missing files) The submitted/04_2.4.22 patch: Authors: Various (see below) Status: Included in stock 2.4.22 kernel

This big patch contains all netfilter/iptables changes between stock kernel versions 2.4.21 and 2.4.22.

submitted/04_ip6tables-proc.patch.ipv6 (Patrick McHardy) + Add list of ip6tables matches and targets to /proc. submitted/05_iptables-proc.patch (Patrick McHardy) + Ditto for iptables.


::: trimmed for clarity :::
------------------------------------------------------
Do you want to apply this patch [N/y/t/f/a/r/b/w/q/

At this point we have several choices. We can test the patches, apply them, apply them even if the test failed, and more. In Figure 2.4 we see the complete list of choices available to us: Figure 2.4 Patch-O-Matic Choices
-----------------------------------------------------Do you want to apply this patch [N/y/t/f/a/r/b/w/q/?] ? Answer one of the following: T to test that patch will apply cleanly Y to apply patch N to skip this patch F to apply patch even if test fails A to restart patch-o-matic in apply mode R to restart patch-o-matic in REVERSE mode B to walk Back one patch in the list W to Walk forward one patch in the list Q to quit immediately ? for help ------------------------------------------------------

When we have applied our patches, we then have to recompile our kernel to pick up all the changes. Patch-o-matic will also let us get bleeding-edge features and some third-party features, but be warned that they may or may not work as advertised. Patch-o-matic ships with a readme file that covers more features than we have room to discuss here, so read the readme file FIRST.

Firewalling 101
When we start deploying our firewalls, we need to follow some best practices and understand what a firewall can and cannot do. There are some typical rules for deploying a firewall, listed here:
• Deny all traffic unless specifically allowed
• Block all packets inbound that claim to have an IP address from the interior network or perimeter network
• Block all packets outbound that claim to have an IP address from the external network
• Allow outbound SMTP packets for email
• Allow inbound SMTP packets to a specific host for email


• Allow all proxy traffic outbound
• Allow responses to proxy traffic inbound
• Allow DNS UDP queries and answers from DNS server to internet
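The rules above can be sketched as an iptables policy. The interface name and addresses here are assumptions for illustration, and a production ruleset needs considerably more care (state tracking, logging, ICMP handling):

```shell
# assumed layout: eth0 = external interface, interior net 192.168.1.0/24,
# mail host 192.168.1.25, DNS server 192.168.1.53
iptables -P INPUT DROP                 # deny all unless specifically allowed
iptables -P FORWARD DROP

# block inbound packets claiming an interior source address (spoofing)
iptables -A FORWARD -i eth0 -s 192.168.1.0/24 -j DROP

# block outbound packets that do not carry an interior source address
iptables -A FORWARD -o eth0 ! -s 192.168.1.0/24 -j DROP

# allow outbound SMTP, and inbound SMTP only to the mail host
iptables -A FORWARD -o eth0 -p tcp --dport 25 -j ACCEPT
iptables -A FORWARD -i eth0 -p tcp -d 192.168.1.25 --dport 25 -j ACCEPT

# allow DNS UDP queries and answers between the DNS server and the internet
iptables -A FORWARD -o eth0 -p udp -s 192.168.1.53 --dport 53 -j ACCEPT
iptables -A FORWARD -i eth0 -p udp -d 192.168.1.53 --sport 53 -j ACCEPT
```

The proxy rules are omitted because they depend on where the proxy sits; the pattern is the same as the SMTP pair above.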

These rules are just a guideline to best practices for deploying a firewall on the perimeter. If the firewall is deployed internally to help protect certain resources, then your settings will be different. We can get around some of the limits by being smart about how we deploy our resources. The classic firewall deployment is a single firewall with a router in front of it, which may or may not be owned by you, and a router behind it to handle all the routing on the LAN. Very few firewalls will also route, as we discussed earlier in this chapter. In Figure 2.5 we see a diagram of the basic firewall deployment. Figure 2.5 Basic Single Firewall Deployment

[Diagram labels: internet, perimeter router, firewall, DMZ, switch, LAN router, user on LAN]

In this case we have a firewall that has a DMZ port built into it. This is not always the case on the cheaper firewalls, but we can still have a reasonably secure DMZ by using two firewalls in tandem, as we see in Figure 2.6.


Figure 2.6 Basic Dual Firewall Deployment

[Diagram labels: internet, perimeter router, Firewall-1, DMZ, Firewall-2, user on LAN]

In this diagram we have two firewalls, each with only two interfaces, facing one another. The DMZ is the connection between the two firewalls; a simple hub or switch will give us the port count we need for the DMZ. Firewalls are not limited to the borders and perimeters of your network. A current trend in security architecture is to move away from the “pond” model, where once you are past the firewall you have access to the entire pond, toward a model of islands. Critical resources such as financial servers, HR servers and the like are identified, and once identified, a firewall is installed to block unrestricted access to each group of resources. In Figure 2.7 we see a basic deployment of multiple firewalls to isolate resources. Figure 2.7 Multiple firewalls installed to protect local resources
[Diagram labels: internet, router, Firewall-1, internal DNS, public folders, user on LAN, HR SQL database]
As we mentioned earlier, this is a good way to recycle routers or firewalls that may not be up to the demands of perimeter duty but are still an asset to the company.

Papers Please
A firewall inspects the traffic that traverses it and decides whether it should pass or be denied. There are different mechanisms for deciding, or filtering, depending on the level of sophistication of the firewall. The inspection can be as simple as a source or destination IP address match, or as deep as inspecting the contents of the packet payload to verify that what the packet claims to be is really what it is carrying. This range of inspection technologies is a clue that a firewall can examine more than a single layer of the OSI model: a firewall can operate from layer two, examining MAC addresses, all the way up to the application layer. Linux uses iptables as the core inspection technology, and the flow map shown in Figure 2.8 shows how iptables works in the kernel. Figure 2.8 Flow map of datagram through the kernel with iptables
[Figure 2.8 shows the path of a datagram through the kernel's Netfilter hooks: PREROUTING, the routing decision, then INPUT (local delivery) or FORWARD (transit), with locally generated traffic leaving via OUTPUT, and everything exiting passing POSTROUTING]
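One way to watch a packet walk the standard Netfilter chains is to add LOG rules to each one; this is a sketch, requires root and the LOG target module, and the log prefixes are arbitrary:

```shell
# log transiting packets at each major point in the iptables flow
iptables -t nat -A PREROUTING  -j LOG --log-prefix 'PREROUTING: '
iptables        -A INPUT       -j LOG --log-prefix 'INPUT: '
iptables        -A FORWARD     -j LOG --log-prefix 'FORWARD: '
iptables        -A OUTPUT      -j LOG --log-prefix 'OUTPUT: '
iptables -t nat -A POSTROUTING -j LOG --log-prefix 'POSTROUTING: '

# watch the order in which the prefixes appear in the kernel log
tail -f /var/log/messages
```

The order of the prefixes in the log for any one packet traces the flow the figure depicts. Remove the LOG rules afterward; they are noisy.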
The Penguin Builds a Wall
One of the great things about Linux is that every newer kernel includes a method to build a firewall using Netfilter and iptables. We can build the firewall ourselves or use one of the many open source projects, which normally offer extra bells and whistles over the basic configuration. We can even go with a commercial solution such as Smoothwall or Astaro for a Linux-based and officially supported solution. Our firewall can boot off a business-card-size CD-R, run on a small embedded PC form factor, or run on an older personal computer we happen to have lying around. You can even recycle an older Macintosh with the PowerPC chip by using Yellow Dog, which is based on Fedora. You can even recycle old vendor equipment like I did, using an old IDS sensor from Cisco which was nothing more than a dual Pentium II PC in a fancy box. When building a firewall using Linux, we have to shut off certain services or remove the modules completely from the kernel to avoid security holes. A firewall should be a dedicated box, so there is no need for OpenOffice, developer tools, games or, in some cases, X Windows to be installed. The chkconfig command is now your new best friend for finding out what services are running and for turning them off. Of course, recompiling your kernel is always an option. We see in Figure 2.9 that we used the command chkconfig --list to get a list of which processes are running at which run levels. Figure 2.9 chkconfig --list command showing list of processes
[root@RedRum root]# chkconfig --list
kudzu           0:off  1:off  2:off  3:on   4:on   5:on   6:off
syslog          0:off  1:off  2:on   3:on   4:on   5:on   6:off
netfs           0:off  1:off  2:off  3:on   4:on   5:on   6:off
network         0:off  1:off  2:on   3:on   4:on   5:on   6:off
random          0:off  1:off  2:on   3:on   4:on   5:on   6:off
rawdevices      0:off  1:off  2:off  3:on   4:on   5:on   6:off
pcmcia          0:off  1:off  2:on   3:on   4:on   5:on   6:off
saslauthd       0:off  1:off  2:off  3:off  4:off  5:off  6:off
keytable        0:off  1:on   2:on   3:on   4:on   5:on   6:off
apmd            0:off  1:off  2:on   3:on   4:on   5:on   6:off
atd             0:off  1:off  2:off  3:on   4:on   5:on   6:off

:::truncated for clarity:::

We can see that kudzu is running at run levels 3, 4 and 5, and syslog is running at levels 2, 3, 4 and 5. Kudzu is a process that detects new hardware and is exactly the type of service we need to remove, disable or otherwise stop from running on our firewall. In the following list, we see more services that might be shut down.
• nfs
• nfslock
• kudzu
• pcmcia
• ftp
• httpd
• lpd
• telnet
• isdn (unless connecting via ISDN)
• rsh
• rlogin
• smb
• sendmail
• named


• autofs
• gpm
• rhnsd

This is not an exhaustive list of services; more are suggested in Chapter 6, where we configure a Snort intrusion detection sensor that shares the same need for a tight Linux installation. To shut down a given service, we can use the chkconfig command, specifying the run levels and the service we want shut down. To help you secure your Linux distribution, Table 2.2 lists distributions with published security guides:

Table 2.2 Security Guides for Linux and Unix Distributions

Distribution
Debian
Red Hat
Mandrake
SuSE
FreeBSD
Unix
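The chkconfig shutdown step can be scripted; this is a sketch that loops over names echoing the list above (trim the list to match your own install before running it):

```shell
# disable unneeded services at the multi-user run levels and stop them now
for svc in nfs nfslock kudzu pcmcia isdn sendmail autofs gpm rhnsd; do
    chkconfig --level 2345 "$svc" off   # keep it from starting at boot
    service "$svc" stop                 # and stop it immediately
done
```

Run `chkconfig --list` again afterward to confirm nothing unexpected is still set to start.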


There are other guides available via Google or your favorite search engine, but these will get you started. Once you have read through the guide for your distribution and configured your Linux server as suggested to lock it down, we still need to do some minor housekeeping. We need to make sure there are no extra users configured, that remote root access is not enabled, that all current patches are in place and that you have a backup of the clean system all dialed in. These tips are covered in many of the security guides, but it does not hurt to go over them yet again. Also, we do not want to forget about the physical security of the firewall. This is not a device that you want anyone to have ready access to, so the firewall itself needs to be secured. You should also configure the firewall to not boot off CD, floppy, USB or any other port.
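Two of these housekeeping checks can be sketched as commands (paths assume a typical Linux layout; the sshd_config check only applies if SSH is installed):

```shell
# List any accounts with UID 0 -- only root should appear
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Check whether remote root logins over SSH are disabled (if sshd is present)
grep -i '^PermitRootLogin' /etc/ssh/sshd_config 2>/dev/null || true
```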

Bastille Linux
There is another option in the effort of building a secure Linux server to be a firewall, and that is called Bastille Linux. This is the brainchild of Jon Lasser, Jay Beale and a host of unsung heroes of Linux who put in the time and effort to develop this set of scripts. Bastille attempts to lock down Linux using interactive scripts that ask you questions as you work your way through the process. Bastille is packaged for Red Hat 9, RHEL3, Fedora, SuSE, Mandrake and Debian. You always have the choice to compile Bastille from the source files. In Figure 2.10 we see the start of the Bastille Linux process, which takes you through a series of questions and answers.


Figure 2.10 Bastille Linux locking down a SuSE box

There are some requirements for Bastille to install and run. You need to have the perl-Curses and perl-Tk packages installed. You can get these files from several places, such as Dag Wieers' repository, but the Bastille site is the starting place; use the chart there to match your version of Linux to where to get the needed files.

Free is good
There are several ways that we can build our own firewall using netfilter or firewall software. And let's get one thing out of the way first: building rules for a firewall by hand is not a fun task. It's fine to know how they are built, but when there are so many good tools available, why do it the hard way when you are just getting started? One of the easiest ways is to use the excellent Firewall Builder (fwbuilder) tool to build your own rules and tables. In Figure 2.11 we see the basic interface for fwbuilder.

Figure 2.11 Firewall Builder Interface


Firewall Builder gives you a way to use wizards to help build your policies and to compile them for the firewall to use. Firewall Builder comes with over 100 predefined objects for popular protocols and services.

IPCOP

IPCOP is an open source, dedicated firewall project. IPCOP follows the "cop" metaphor in several ways. The interfaces are called RED, GREEN, BLUE and ORANGE depending on their function. The functions are listed below:
• RED Interface - The untrusted network
• GREEN Interface - The trusted network or LOCAL network
• BLUE Interface - A home for wireless devices which are allowed access to the GREEN network by way of "pin holes" or VPNs
• ORANGE Interface - The DMZ of IPCOP. Tightly controlled.

The RED and GREEN networks are mandatory, while BLUE and ORANGE are optional interfaces. Each interface must be a separate physical interface. The security model of IPCOP is similar to Smoothwall's, where anything on the GREEN network is trusted and allowed to egress the network, whether it is legitimate traffic or some piece of spyware/trojan phoning home. RED is the least trusted interface, then comes ORANGE and finally BLUE. To start the installation of IPCOP, we download the ISO image and then burn a bootable CDR. The installation routine is pretty standard without any surprises, and it will erase the hard disk drive, so make sure you have backed up whatever is on the disk if you want it back later. A plus of IPCOP is that it has good documentation for building a flash-card-based firewall. Once IPCOP is installed, we can access it by using https (note the s at the end of the http) and logging into IPCOP. Along with the web interface, we can configure an SSH administrative console. When we start IPCOP, we see the screen shown in Figure 2.12.

Figure 2.12 IPCop Startup Screen


We have a tabbed interface that we access by using a web browser of our choice. The tabs are very easy to follow and the menuing is relatively straightforward. The status screen that we see in Figure 2.13 gives us the complete status of the services listed, as well as memory and disk usage.

Figure 2.13 IPCop Status Screen

Several links just below the tabs at the top of the screen also indicate uptime, loaded modules, users, routes, ARP tables and more. In Figure 2.14, we see where we will configure our rules for IPCOP.

Figure 2.14 IPCop Rules Configuration

The act of creating rules using this interface is very easy. Creating VPNs is just as easy with simple screens and choices. In Figure 2.15, we see our traffic graphs that will tell us the amount of traffic inbound and outbound on our firewall.


Figure 2.15 IPCop Traffic Report

While this is not an in-depth review of IPCOP, my own thoughts are that the price is right (it's free) and the interface is elegant and well designed. IPCOP has some excellent features such as proxy services, a DHCP server, dynamic DNS management and traffic shaping. And along with the solid feature set, IPCOP is well documented, which is a great help to the security administrator or engineer configuring it.

Firestarter

Firestarter is an open source tool with a GNOME GUI front end that helps you easily configure your firewall. Firestarter is in the style of a wizard that you start as we see in Figure 2.16.

Figure 2.16 Starting of Firestarter Firewall Configuration Tool


Once we start Firestarter, we then configure our interface and the ports we want the rest of the world to access, as we see in Figure 2.17.

Figure 2.17 Configuring Access to Ports

Once we have the ports configured, we then save our configuration and start the firewall. Once the firewall is started, we have a real-time view of what is being blocked by the firewall, and we can adjust our ports by simply clicking a few times with the mouse. In Figure 2.18 we see the real-time monitoring after I ran a port scan against the newly protected Fedora host. Figure 2.18 Results of a port scan against newly protected host

Shorewall

Shorewall is a tool for configuring netfilter. Shorewall can be downloaded from the project site, along with current news and some excellent documentation.


Shorewall requires that your Linux distribution have the iproute/iproute2 package installed. To verify that you have it, use the which ip command and check for a response like /sbin/ip. When Shorewall is first installed, startup is disabled until you finish configuring it, as we see here:

[root@RedRum utilities]# shorewall start Loading /usr/share/shorewall/functions... Processing /etc/shorewall/params ... Processing /etc/shorewall/shorewall.conf... Loading Modules... Shorewall Startup is disabled -- to enable startup after you have completed Shorewall configuration, change the setting of STARTUP_ENABLED to Yes in /etc/shorewall/shorewall.conf [root@RedRum utilities]#

When you look in the /etc/shorewall/policy file that Shorewall uses, you will find five policies defined:
• ACCEPT
• REJECT
• DROP
• CONTINUE
• NONE

Along with the five policies, there are five columns as we see here:
#########################################################
#SOURCE   DEST   POLICY   LOG LEVEL   LIMIT:BURST

Our rules and policies will be built using these five policies and five columns. For a single interface, we see below how the policy might look:
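As a sketch of such a single-interface policy (the zone names fw and net are Shorewall's conventional names for the firewall itself and the internet zone, assumed here rather than taken from the original listing), /etc/shorewall/policy might contain:

```
#SOURCE   DEST   POLICY   LOG LEVEL
fw        net    ACCEPT
net       all    DROP     info
all       all    REJECT   info
```

The first line lets the firewall itself reach the internet, the second silently drops unsolicited traffic from the internet, and the final catch-all REJECTs anything else.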

This short configuration allows all connections to the internet from the firewall and then drops any connection requests from the internet to the firewall. Shorewall then REJECTs all other connection requests. This is pretty straightforward and easier than trying to write the iptables rules directly. Shorewall keeps the interface information in /etc/shorewall/interfaces and assumes that eth0 is the default interface. You can edit this file to reflect your


own interface ID if the interface is something other than eth0, such as ppp0 or ippp0. Along with the basic rules, Shorewall has something called "ACTIONs" which allow you to configure things like ALLOWing webserver access or SSH access. A sample ACTION would be allowing webserver and SSH access to your server. In the following code sample we see how this is accomplished:
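A sketch of such entries in /etc/shorewall/rules (the AllowWeb and AllowSSH action names come from Shorewall's standard actions file; the net and fw zone names are assumptions following Shorewall convention):

```
#ACTION    SOURCE    DEST
AllowWeb   net       fw
AllowSSH   net       fw
```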

A complete listing of the available ACTIONs in Shorewall can be found in /usr/share/shorewall/actions.std. Shorewall uses a concept called "Zones" to help manage the security. In the single-interface example above, the zones were not a big deal, but with three interfaces in use, outside, inside and DMZ, zones become important. In this sample, we see a basic zone configuration for three interfaces. This file, called "zones", can be found in /etc/shorewall/zones. Below is what the zones file contains:
#ZONE   DISPLAY   COMMENTS
net     Net       Internet
loc     Local     Local Networks
dmz     DMZ       Demilitarized Zone

We can see the three zones configured for the internet, the local network and the DMZ.

Web Based Tools

There is a subgroup of rule-building tools that are not available for downloading but are intended to be used on the net. Chris Lowth has put together a site called LinWiz that offers a very nice and free way to build some iptables rules for your soon-to-be firewall. He has broken it out into two wizards: one for personal firewalls and one for server firewalls. We see a screen shot of his interface in Figure 2.19.

Figure 2.19 Chris Lowth's LinWiz Wizards


Chris has some good descriptions on the site of what each wizard can and cannot do. The one caveat is that these rules are for a workstation or server with a single interface installed.

Commercial Firewalls
This is not by any stretch a comprehensive list of commercial firewall providers. What I am doing is showing a couple of the bigger names to give you an idea of what is available. The first commercial product we will start with is called "Astaro" and uses iptables, Snort and Webmin in a very customized package.

Astaro

With Astaro, we get all the expected features of a commercial product: HTTP proxy, DNS proxy, SMTP proxy, AV, IDS, VPN, NAT and much more. Astaro is available for downloading, and you can get a limited but free license for 10 home users from Astaro. In my research for this book, I used an old Cisco 4230 IDS sensor as my "server" to test. It is a dual PIII 600 and, while tremendous overkill for the typical firewall, it did show how robust the installation of Astaro is. When the installer for Astaro started, it easily picked up the dual processors, the hard drives and both NICs, and Astaro was up and running in about 15 minutes from the time I placed the CD into the drive. The configuration of Astaro is handled by a web interface using SSL and is pretty straightforward, as we can see in Figure 2.20.

Figure 2.20 Introduction to Astaro and menus.

Astaro starts out with the premise that nothing is allowed in or out in either direction. Pretty secure way to start, huh? Of course, if you don't know that default rule, you can get pretty frustrated, because this rule is implied, not spelled out. All the configuration is driven from the various menus. In Figure 2.21, we see where we build our rules and filters to manage traffic access.

Figure 2.21 Astaro Rules and Filters Screen

Astaro has built-in reporting that is basic but effective as we can see in Figure 2.22. Figure 2.22 Basic Executive Summary Report from Astaro

The reports can be viewed on the screen, printed or emailed depending on your configuration. Other reporting features include excellent logging and a choice of local logs or sending the information to a remote syslog server. Updates from Astaro can be handled automatically or you can apply them manually. The content filter and AV scanning work well. The AV has a feature where, when a user downloads a file, it is cached on the firewall and scanned; the user is then presented with a link to download the scanned file from the firewall.

Smoothwall

Smoothwall is another product with feet in both camps of commercial and free. There is Smoothwall Express and there is the Smoothwall Corporate version, and both are available from Smoothwall. Smoothwall Corporate is modular, with various plugins that offer more functionality or features depending on your need and budget. In Figure 2.23, we see the startup screen for Smoothwall.

Figure 2.23 Smoothwall Start Up Screen

One of the first big differences you will notice between Smoothwall and Astaro is that Smoothwall will allow traffic out of the network by default, unlike Astaro which defaults to a “block everything” rule. The philosophy behind Smoothwall is that you do not have to be a Linux geek in order to configure and use Smoothwall. This is unlike Astaro where it helps to be a bit of a geek since there is not very much hand holding within the menus. We can see this in action on the networking screen in Figure 2.24: Figure 2.24 Smoothwall Networking Screen


The interface for adding, blocking or passing traffic is very straightforward. Smoothwall's hardware requirements are pretty minimal, even when using the VPN module with proxy services enabled. A Pentium III 500 with 256MB of RAM will get the job done in style.

Gibraltar

This is a commercial product with its roots in open source, so there is a free personal version, which is limited to 5 users and has no web interface. One of the more interesting features of Gibraltar is that it can run from the CD and not use a hard drive. This concept is good from a security perspective, but it means you have to get the configuration absolutely right, and changes afterwards can be a pain. Once you have downloaded the file and burned the ISO image, all you need to do is boot your soon-to-be firewall off the CD. At the prompt, just type the command defaultconfig to get the firewall up and running with a default rule set. To log in to the firewall at the command line, just use the name "root" and a blank password. To use the web administrator interface, browse to the firewall's default IP address (unless you have changed it using the ifconfig command through the console). You must use https since this is a secured connection. The first step is to upload the license for Gibraltar. In Figure 2.25 we see the startup screen.

Figure 2.25 Gibraltar Startup Screen using https

There is a 30-day license for testing or a free home-use license for 5 concurrent connections. Once the license is installed, we can move on to configuring the firewall. In Figure 2.26 we see how we can set the various services on the firewall. We can see in this screen shot that Gibraltar has a very simple and elegant interface design.


Figure 2.26 – Gibraltar Setting Services

Sometimes setting up NAT can be confusing but Gibraltar has done a very clean job of their menuing, as we see in Figure 2.27: Figure 2.27 Setting NAT with Gibraltar

Gibraltar offers HTTP, FTP and POP3 proxy services, which are a real help for security. Like almost all firewalls now, VPNs are offered using IPsec. And what is a firewall without rules? Again, Gibraltar presents cleanly designed menus, as we see in Figure 2.28.

Figure 2.28 Setting Rules with Gibraltar


And finally, without a way to save all of this configuration, it would be a waste of effort and here is where Gibraltar is different from most firewalls. In Figure 2.29, we see that we can save our configuration not only to disk but to USB, floppy or source. Figure 2.29 Save Configuration Options

This is a great idea for the paranoid among us. You can save the configuration to source and burn a new CD. Now the firewall cannot be hacked, compromised and remain compromised after a restart. Why? Because a simple reboot off the custom CDR will put everything right back to where it was, and most of the system is on read-only media. We can also save to a USB device or to a floppy. This is really a great feature and I'm surprised more firewalls do not offer it as an option.

Resources

As we can see from this chapter, there is a wealth of choices regarding firewalling with Linux. We can roll our own using various Open Source projects or we can buy some very high-quality products based on Linux. And even then, some of the products, since they were based on open source projects, can be had for free in a personal edition. Each product has its own way of presenting the interface to make the firewall work and of managing the firewall, but in the end, they work virtually the same way underneath the pretty GUI front ends. You are strongly encouraged to play around with these applications and see which one you prefer for whatever reason. They all work well, although some claim to work better than others. For the command line junkies among us, you can go nuts playing with iptables in the next chapter, but for the folks who just want to get a firewall up and running, these applications and others not mentioned are an easy way to get started. And yes, if I did not mention your favorite firewall, do not despair; it was not that I did not like it, I just did not have the pages to write about every single firewall for Linux there is. So as amends, here is a partial list of other firewalls for Linux:


• Guard Dog - Firewall configuration tool
• LEAF - Embedded Linux Appliance project
• Sentry Firewall - CDR-based firewall
• XFWall - GUI firewall
• M0N0WALL - Embedded firewall based on BSD


Chapter 3
“There is nothing so likely to produce peace as to be well prepared to meet the enemy.” George Washington

IP Tables, Rules and Filters


We just learned about several tools and commercial firewalls for Linux and for configuring iptables. So why learn how to write our own rules? Because, as good as the tools are, there will come a time when a tool will not be able to write the rule that you need. So you will need to know how to write rules manually using the interface to netfilter called iptables.

To refresh our memory, in Chapter 2 when we discussed firewalls, you may remember that a packet filter is a device that examines the header information of a packet and then uses rules to accept, deny or drop the packets. We also learned that packet filtering is part of netfilter, which is part of the Linux kernel. What we call iptables is the user interface to netfilter, and it gives us a method to build custom rules for our packet filter. The iptables feature of Linux is not nearly as well known as it should be, and we hope to help lift some of the shadow that seems to cover the concept of iptables and rules on Linux. To answer the question of why use iptables, let's take a quick look at some of its key features:
• Stateless packet filtering for both IPv4 and IPv6
• Stateful packet filtering for IPv4
• Very flexible implementation of NAT
• Extensible architecture
• Many plugins and modules available

With this flexibility built into iptables, we can build stateless or stateful firewalls, use NAT to share a single IP internet connection, mangle packets by rewriting certain fields, and more. As we learned in Chapter 2, iptables is part of netfilter, and writing the rules manually can be quite complex. But there are times when the manual touch is needed, so we need to learn what iptables is, how to build the rules and how the rules actually work. Before we get too far into it, we need to go over some basic definitions that we will use when working with iptables.
• Table - There are three tables: FILTER, NAT and MANGLE. The default is FILTER
• Chain - A chain is the path a packet will travel through netfilter

IP Tables

p. 51


• Rule - Rules are what we write and place into the chain to achieve a result

In this chapter, we will also assume that eth0 is the OUTSIDE or WAN side of the firewall and that eth1 is the INSIDE or LAN side of the firewall. The whole process of iptables and firewalls can be summed up in a few steps, shown in Figure 3.1. We will assume we have just requested a web page, and now netfilter and iptables take over.

Figure 3.1 Steps of packet flow through iptables
• Outbound packets go through the OUTPUT chain
• The packet is examined by the kernel to see if there is a match in the rules
• If there is a match, the rule is executed
• If there is not a match, then the chain policy is executed

That is not too hard to understand, is it? Now that we know what happens, just how do we get it to happen? This is pretty easy also, but we can make it very complicated. I like things to be simple, so we will stay with simple for now. The syntax of iptables can be broken into some basic building blocks: chain syntax, rules and tables. We can see each block in detail below.

Chain Syntax
• iptables -N <name of chain> - This is how we create a new chain
• iptables -X <name of chain> - This is how we delete a custom chain
• iptables -P <name of chain> <policy> - The policy is DROP or ACCEPT
• iptables -L <name of chain> - This is how we list the rules in a chain
• iptables -F <name of chain> - This is how we flush the rules in a chain
• iptables -Z <name of chain> - This will zero the packet and byte counts
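Putting the chain commands together, a short session might look like this (illustrative only; it requires root and a netfilter-enabled kernel, and the chain name is made up):

```shell
# Create, inspect and remove a user-defined chain
iptables -N TESTCHAIN     # create the new chain
iptables -L TESTCHAIN     # list its (currently empty) rules
iptables -Z TESTCHAIN     # zero the packet and byte counters
iptables -F TESTCHAIN     # flush any rules from the chain
iptables -X TESTCHAIN     # delete the custom chain
```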

Rules

Now that we have seen the iptables syntax, let's take a look at the syntax for rules. We will not cover all the various options here, but we will cover the more common and, therefore, more immediately useful options. You can read about all the options in the iptables man pages. A rule can perform certain actions on a chain, and these actions are:
• Append (-A)
• Delete (-D)
• Replace (-R)
• Insert (-I)

The filter table has three built-in chains: INPUT, OUTPUT and FORWARD. The NAT table has two built-in chains: PREROUTING and POSTROUTING. The mangle table has two built-in chains: PREROUTING and OUTPUT. The full options list that describes what to match in a rule is too extensive to list here. A partial list is shown in Table 3.1:

Table 3.1 Partial What to Match Options List

Option        Matches
-p            Protocol: TCP, UDP, ICMP, etc.
-s            Source Address
-d            Destination Address
-i            Incoming Interface
-o            Outgoing Interface
--dport       Destination Port
--sport       Source Port
--tcp-flags   Allows TCP flag matching: SYN, RST, ACK
--syn         SYN
--state       NEW, ESTABLISHED, RELATED or INVALID
--mac-source  Source MAC address

We have options for where to send the packet if there is a match. The list of common options is in Table 3.2:

Table 3.2 Where to Send Options on Match

Target    Result
ACCEPT    The packet is accepted
DROP      The packet is silently discarded
REJECT    The packet is discarded and the sender notified
LOG       Packet is logged via syslog and not modified
MIRROR    Swap the Source/Destination IP address and resend
RETURN    Return to the calling chain

Building a Basic Rule

The basic iptables syntax is very simple, as we will see. The command is this:

iptables [-t <tablename>] <command> <chain-name> <parameter-1> <option-1>


We will start by building a rule that blocks all protocols from a single IP address. This will give us the basic concepts of putting a rule together from the building blocks that make up the rule.

Demonstrating rules

In Figure 3.2 we are using ICMP to send a single ping to our loopback interface.

Figure 3.2 Sending ICMP packet to the Loopback Interface
[root@FedoraC1 root]# ping -c 1
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=0 ttl=64 time=0.111 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms, pipe 2
[root@FedoraC1 root]#

Now we will make a rule to block ICMP by using DROP and resend ICMP to the loopback interface. We see the process in Figure 3.3: Figure 3.3 Making DROP rule and resending ICMP test
[root@FedoraC1 root]# iptables -A INPUT -s -p ICMP -j DROP
[root@FedoraC1 root]# ping -c 1
PING ( 56(84) bytes of data.

--- ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

We see that the rule blocked the ICMP packet, and no message was sent back saying that the packet was rejected or otherwise. The packet just disappeared into the ether of the network. Now let's see how we can delete the rule and get our ICMP working again. We will use the same command as we did when setting up the rule, but type -D instead of -A. If you have a lot of rules, or if you are not sure, this is the safe way. An easy but dangerous way to delete rules is to use the -F (FLUSH) command, which will flush ALL rules in the given chain. In Figure 3.4 we see the deletion of the rule and the results when we send our ping to the loopback interface.

Figure 3.4 Deleting ICMP rule and the retest
[root@FedoraC1 root]# iptables -D INPUT -s -p ICMP -j DROP
[root@FedoraC1 root]# ping -c 1
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=0 ttl=64 time=0.539 ms

--- ping statistics ---


1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms, pipe 2
[root@FedoraC1 root]#

We can see that the rule was deleted and our ping test is working again. In this simple demo we have applied several concepts. We Appended (-A) a rule to a chain, in this case the INPUT chain. We have shown how to delete (-D) the rule from the chain. We used -s to specify the source address, which in this case was the loopback address. Let's talk about source and destination addressing in a rule. We have four ways that we can specify the source and destination addresses in our rule. Figure 3.5 lists the four ways:

Figure 3.5 Ways to specify Source and Destination Addressing
1. Use a qualified domain name or localhost
2. Specify a specific IP address
3. Specify an IP address range using the slash notation, such as /24
4. Specify an IP address range using the full octets
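The four addressing forms can be sketched as rules like these (the addresses and hostnames are examples of mine, not from the original text; running them requires root and netfilter):

```shell
# The four source-addressing forms as illustrative DROP/ACCEPT rules
iptables -A INPUT -s localhost -j ACCEPT          # 1: qualified name
iptables -A INPUT -s -j DROP           # 2: specific IP
iptables -A INPUT -s -j DROP        # 3: slash notation
iptables -A INPUT -s -j DROP  # 4: full octets
```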

We can specify our protocol by name, like we did in the demo with ICMP, or we can specify it by protocol number, such as 23 (telnet). We can even invert the rule by using a ! in front of the protocol, like -p ! 23, which would translate into anything NOT telnet. When we specify our interfaces, we need to keep the packet flow in mind. For example, if we have a rule that will apply to the INPUT chain, it does not serve a purpose to specify an output interface, as there is not one on the INPUT chain. And conversely, packets using the OUTPUT chain do not have an input interface. If a rule is applied with the wrong interface type specified, there will never be a match to it. We can specify the + sign after an interface name to say "match ALL interfaces of this type." One of the most common uses of this is with PPP on dialup. We would specify the interface as ppp+, which would apply the rule to any PPP interface.

Advanced Rules

This is where the rubber meets the road for making and using rules. While this guide will not give you every way to make a rule, we will give enough hints to send you off on your own. There are many detailed resources available on the internet where someone has focused on a particular problem and designed a rule or a set of rules to solve it.

Matching Connection States

As we read earlier, iptables supports stateful and stateless packet filtering. To use these filtering techniques, we have four match criteria that we can use. These four match criteria are:
1. NEW - The packet creates a new connection

2. ESTABLISHED - The packet belongs to a current connection
3. RELATED - The packet is related to, but not belonging to, a connection
4. INVALID - The packet could not be identified for some reason

A sample rule for using these matches is:
# iptables -A FORWARD -i eth0 -m state ! --state NEW -j DROP

In this sample rule, we are Appending a rule to the FORWARD chain for interface eth0. We then say that if the state is not NEW, the rule is to DROP the packet.

Configuring NAT

NAT is one of those basic but advanced configurations that most people have to deal with at one time or another. It can be very useful when you only have a single IP address for your home network, you want to maximize the usage of your pool of IP addresses at work, you need to merge two networks that use the same subnet scheme, or you want to isolate a section of your LAN. And these are just a few examples; with some imagination, the list could easily go on for a few pages. We can set up a one-to-one NAT or a one-to-many NAT. The sample configuration will be using NAT in a home office or a SOHO with a single IP address and a small LAN behind it. In the world of Linux there is something called "Masquerade", which is the equivalent of one-to-many NAT. Before kernel 2.4, there was a requirement to use iproute2 to have real NAT, but that requirement is no more. So with kernel 2.4 and newer, Masquerade is considered true NAT. Let's verify that you have a kernel compiled to support NAT. We run a simple command, as we see here:
ls /proc/sys/net/ipv4

We are looking to see if two files, "ip_dynaddr" and "ip_forward", are listed in the output. If you see these two files, you are ready to go, but if they are not there, the kernel needs to be recompiled. Whether the kernel built IPMASQ statically or dynamically, these files will be present. Next, we need to check for a list of files like these:
• ip_masquerade
• ip_conntrack
• ip_tables_names
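The first check, for ip_dynaddr and ip_forward, can be sketched as a small loop (the file names are from the text; the output wording is my own):

```shell
# Report whether the kernel exposes the IPv4 NAT/forwarding control files
for f in ip_dynaddr ip_forward; do
    if [ -e "/proc/sys/net/ipv4/$f" ]; then
        echo "$f: present"
    else
        echo "$f: missing - kernel may need recompiling"
    fi
done
```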

We start by looking to see if these modules were compiled statically by using the /sbin/lsmod command. If we do not see them, then use ls /proc/net and see if they are there. If you see the files using the ls command but no modules show as loaded when you use the lsmod command, then you are statically compiled and


ready to go at this point. If you did not see the files yet, we have one more place to look. We will use the command:
ls /lib/modules/`uname -r`/kernel/net/ipv4/netfilter/

A sample output would look like Figure 3.6: Figure 3.6 Listing Modules for iptables and IPMASQ
[root@RedRum root]# ls /lib/modules/`uname -r`/kernel/net/ipv4/netfilter/
arptable_filter.o      arp_tables.o        ipchains.o           ip_conntrack.o
ip_conntrack_amanda.o  ip_conntrack_ftp.o  ip_conntrack_irc.o   ip_conntrack_tftp.o
ipfwadm.o              ip_nat_amanda.o     ip_nat_ftp.o         ip_nat_irc.o
ip_nat_snmp_basic.o    ip_nat_tftp.o       ip_queue.o           iptable_filter.o
iptable_mangle.o       iptable_nat.o       ip_tables.o          ipt_ah.o
ipt_conntrack.o        ipt_dscp.o          ipt_DSCP.o           ipt_ecn.o
ipt_ECN.o              ipt_esp.o           ipt_helper.o         ipt_length.o
ipt_limit.o            ipt_LOG.o           ipt_mac.o            ipt_mark.o
ipt_MARK.o             ipt_MASQUERADE.o    ipt_MIRROR.o         ipt_multiport.o
ipt_owner.o            ipt_pkttype.o       ipt_REDIRECT.o       ipt_REJECT.o
ipt_state.o            ipt_tcpmss.o        ipt_TCPMSS.o         ipt_tos.o
ipt_TOS.o              ipt_ttl.o           ipt_unclean.o

A listing like this tells us that iptables was compiled with modules and is ready for setting up IPMASQ. If you still do not see the files, then you need to recompile the kernel for iptables and masquerade support. We recompiled the kernel in Chapter 1 to enable IPv6 support; you can refer to Chapter 1 for instructions on recompiling your kernel.
Just a word here about this firewall script. The original script was written by David Ranch. It is very hard to improve on a good thing, and David has written an excellent starter firewall script, part of which I have reproduced here.

When we start writing our basic NAT script, you will see some differences depending on whether you compiled iptables yourself or iptables was included by default. The difference is where iptables is located: if you compiled iptables yourself, you will find it in /usr/local/sbin, and if iptables was included by default, you will find it in /sbin. You can find out where iptables is located by using the whereis iptables command. One of the first things we need to do is to enable IP forwarding, since it is disabled by default on most if not all distros. This is easily accomplished with this command:
$ echo "1" > /proc/sys/net/ipv4/ip_forward

IP Tables

p. 57

The "1" replaces the "0" that disables IP forwarding. On some versions of Red Hat, you can instead edit /etc/sysconfig/network to enable forwarding there.

Now we need to write a set of rules that allows packets out and only allows existing connections in through the firewall like we discussed earlier. Remember, eth0 is our external interface and eth1 is our internal interface.
$IPTABLES -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A FORWARD -i eth1 -o eth0 -j ACCEPT

$IPTABLES -A FORWARD -j LOG

And now we enable NAT, or IPMASQ, depending on how you want to phrase it.
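The NAT rule itself is not shown above; as a sketch, the standard form from the 2.4-kernel IP Masquerade HOWTO (assuming eth0 is still the external interface, and not necessarily the author's exact line) is:

```shell
# Hedged sketch: enable SNAT/MASQUERADE for traffic leaving the
# external interface. This is the standard IP Masquerade HOWTO rule.
$IPTABLES -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```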

Now we have NAT enabled for our firewall with some basic connection protection in place. As you can see, making up the rules is pretty easy with some forethought and an understanding of how the rules are formatted and how they work.

Defending Against Basic Attacks

Since iptables performs stateful inspection, at least for IPv4, we can configure rules to block or limit some types of traffic based on state. For SYN flooding, we can configure iptables to limit the number of SYNs to one per second by using the limit match.
#syn-flood protection iptables -A FORWARD -p tcp --syn -m limit --limit 1/s -j ACCEPT
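For reference, `--syn` is shorthand; per the iptables manual the same match can be written with explicit flag testing (a sketch, not from the original text):

```shell
# --syn is equivalent to "SYN set, ACK and RST clear":
iptables -A FORWARD -p tcp --tcp-flags SYN,RST,ACK SYN -m limit --limit 1/s -j ACCEPT
```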

The port scanner defense checks the TCP flags and applies a limit to the connection.
#furtive port scanner
iptables -A FORWARD -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j ACCEPT

The "ping of death" rule is based on a malformed ICMP packet that is an echo request. The packet was larger than the "legal" size per RFC and many TCP/IP stacks would crash when one of these arrived. It is not such a problem any more but there is no sense in letting them in. You would do well to also add a rule for ICMP flooding along the same lines as the SYN flood protection rule.
#ping of death
iptables -A FORWARD -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT


If you want some protection against DoS attacks, you can add a connection limit to your rules: -m limit --limit 5/s --limit-burst 2. This sets a limit of roughly five new connections per second. You will need to adjust the --limit and --limit-burst numbers to reflect your needs.
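To build intuition for how `--limit` and `--limit-burst` interact, here is a small shell simulation of the token bucket that the limit match implements; the packet arrival times are invented for illustration:

```shell
#!/bin/sh
# Token-bucket sketch of "-m limit --limit 5/s --limit-burst 2".
# The bucket starts full at "burst" tokens, refills at "rate" tokens
# per second, and a packet matches only while a token is available.
rate=5
burst=2
tokens=$burst
last_ms=0
accepted=0
dropped=0
# Ten packets arriving at these (invented) millisecond timestamps:
for now_ms in 0 10 20 30 40 50 600 610 620 630; do
    # Refill: "rate" tokens per 1000 ms since the last refill, capped.
    add=$(( (now_ms - last_ms) * rate / 1000 ))
    if [ "$add" -gt 0 ]; then
        tokens=$(( tokens + add ))
        if [ "$tokens" -gt "$burst" ]; then tokens=$burst; fi
        last_ms=$now_ms
    fi
    if [ "$tokens" -gt 0 ]; then
        tokens=$(( tokens - 1 ))
        accepted=$(( accepted + 1 ))
    else
        dropped=$(( dropped + 1 ))
    fi
done
echo "accepted=$accepted dropped=$dropped"
```

A tight burst is absorbed up to the bucket size; everything past that is not matched until the bucket refills.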

These few rules just scratch the surface of what you can write to defend your network. Writing iptables rules can be very complex, and I would suggest that you find a good script and modify it to suit your own needs.

Examining The Rules

Now that we have all these rules, how do we view them? Or suppose you walk up to an unknown system and need to view the rules that are currently in place? The good news is that it is easy to see what rules are defined and in place. In Figure 3.7 we see the use of the -L option.
Figure 3.7 Using -L to see the current rule set for iptables
[root@RedRum log]# iptables -L
Chain INPUT (policy DROP)
target     prot opt  source      destination
ACCEPT     tcp  --   anywhere    anywhere     tcp flags:!SYN,RST,ACK/SYN
ACCEPT     udp  --   anywhere    anywhere
ACCEPT     tcp  --   anywhere    anywhere     tcp flags:!SYN,RST,ACK/SYN
ACCEPT     udp  --   anywhere    anywhere
ACCEPT     tcp  --   anywhere    anywhere     tcp flags:!SYN,RST,ACK/SYN
ACCEPT     udp  --   anywhere    anywhere
ACCEPT     all  --   anywhere    anywhere
ACCEPT     all  --   anywhere    anywhere
ACCEPT     all  --   anywhere    anywhere
ACCEPT     all  --   anywhere    anywhere

These are the iptables rules configured on RedRum. RedRum is accepting TCP from and ALL from 10, 14, 18 and 101. RedRum is also accepting TCP and UDP from If you use -v (small v) with -L, you can also get traffic statistics of packet counts and byte counts.

Strengthen Your Rules with ROPE

Using ROPE sounds pretty funny when talking about iptables, but in this case ROPE is a match module with a scripting language for iptables. You can download ROPE and read a nice tutorial on


how to use ROPE. What does ROPE do for us? ROPE gives us a way to enhance our iptables rules so we can block and drop packets from applications such as BitTorrent, Gnutella and LimeWire. In Figure 3.8 we see a sample of a script that will block Gnutella. The scripting language is very straightforward and easy to pick up.
Figure 3.8 Configuring a ROPE script to block Gnutella
expect_str( "GNUTELLA CONNECT/" )
expect_while( {isdigit} )
expect_str( "." )
expect_while( {isdigit} )
expect_str( chr(13) chr(10) )
yes
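For intuition, the handshake prefix that the ROPE script matches can be sketched as a plain shell pattern test; the payload string here is an invented example, not captured traffic:

```shell
# Invented sample payload resembling a Gnutella handshake line:
payload='GNUTELLA CONNECT/0.6'
# Same idea as the expect_str/expect_while sequence: literal prefix,
# digits, a dot, digits.
case "$payload" in
    'GNUTELLA CONNECT/'[0-9]*.[0-9]*) verdict=match ;;
    *) verdict=no-match ;;
esac
echo "$verdict"
```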

Once we have a script, we need to compile it using Perl so we can insert it into an iptables rule. This is a good time to point out that ROPE offers two modes of operation. You can run the script in "userland," which lets the ROPE interpreter debug and test your script as a user process. The second mode is called "kernelland," and this runs as an iptables module in the kernel. So we have tested our new ROPE script and compiled it, and we are ready to put it into production. In Figure 3.9 we see how to insert the script into an iptables rule.
Figure 3.9 Inserting a ROPE script into an iptables rule
iptables -A FORWARD -m rope --script gnutella -j DROP

So you are thinking that this ROPE stuff is pretty cool and you want it? Not a problem: you need to download the source and recompile your kernel or a replacement kernel. ROPE has been developed and tested on the 2.4 kernel, and there are promises of 2.6 kernel support shortly. A set of detailed instructions is available for recompiling the kernel and enabling ROPE. To use the debug, or userland, mode of ROPE, we need a sample file for the script to run against. While this is not difficult to get, it will take a few steps, as we shall see. Our first step is to get a trace file containing the packets we want to filter on. To accomplish this, we can use tcpdump, as we see in Figure 3.10. I have set up tcpdump to catch some traffic as a test.
Figure 3.10 Catching packets with tcpdump
# tcpdump -w mikestest.dat -s 80 tcpdump: listening on eth0

16 packets received by filter 0 packets dropped by kernel [ root@orion test]#

Now that we have a sample trace file, we need to convert it to ASCII as seen here in Figure 3.11:


Figure 3.11 Converting the tcpdump file to ASCII
[root@orion test]# tcpdump -r mikestest.dat -X > test.txt
[root@orion test]# cat test.txt
11:44:56.605528 > P 3131951956:3131952024(68) ack 3602554727 win 10960 (DF) [tos 0x10]
0x0000  4510 006c d2fc 4000 4006 8212 c0a8 3212   E..l..@.@......2
0x0010  c0a8 320a 0016 0831 baad cb54 d6ba 9f67   ..2....1...T...g
0x0020  5018 2ad0 851b 0000 6786 23f6 516c 8e11   P.*.....g.#.Ql..
0x0030  6466 10fa 0bac d7a4 62ab 8428 8a0b 6581   df......b..(..e.
0x0040  aa6c                                      .l
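As a sanity check on reading the dump, the very first byte (0x45) encodes the IP version and header length, which a couple of lines of shell arithmetic can decode:

```shell
# First byte of the packet above. High nibble = IP version, low
# nibble = header length in 32-bit words.
first_byte=0x45
version=$(( first_byte >> 4 ))
hdr_bytes=$(( (first_byte & 0x0f) * 4 ))
echo "IPv$version, ${hdr_bytes}-byte header"
```

So 0x45 means an IPv4 packet with a standard 20-byte header and no IP options.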

Now that we have our text file, we need to edit it down to just the test packet we want to test against. Once we have done that, we need to convert the text file to a binary packet dump file using the command line tool rddump, which is part of the ROPE package. In Figure 3.12 we see the rddump utility performing the conversion.
Figure 3.12 Converting the test.txt file to a binary packet dump file
[root@orion test]# rddump test-edited.txt packet.bin

Now you write the script that you want to use with the test packet and compile it, as shown in Figure 3.13:
Figure 3.13 Compiling the new ROPE script
# perl < mytest.rope > mytest.rp

Now that we have a test packet and a compiled script, we can run our test in user mode, as shown in Figure 3.14:
Figure 3.14 Running the test script in userland
# rope myscript.rp packet.bin

When the new script works to your satisfaction, you can put the script in /etc/rope.d/script and then call the script in an iptables rule as we saw earlier in this section.

Your Basic Firewall

The firewall we have worked on so far will go away on a reboot of Linux, so we need a way to make our firewall permanent. There is an experimental command called iptables-save, but the safest method is to write all of our firewall commands to a shell script and put it into your startup scripts. In our example we will call our firewall script rc.firewall-basic, and it will be copied to /etc/rc.d. We then edit our /etc/rc.d/rc.local script and add the path and name in order to run our firewall script each time Linux starts. If you decide to try iptables-save, you will use it like this:
iptables-save > /etc/sysconfig/iptables


The results you get for the configuration file will look something like Figure 3.15:
Figure 3.15 Results from using iptables-save
# iptables-save > /etc/sysconfig/iptables-config
# less iptables-config
# Generated by iptables-save v1.2.9 on Sat Dec  4 04:39:50 2004
*filter
:INPUT ACCEPT [4237:589725]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1459:203216]
-A INPUT -s -p ICMP -j DROP
COMMIT
# Completed on Sat Dec  4 04:39:50 2004

We can see the three chains, INPUT, FORWARD and OUTPUT, and our command to drop ICMP packets on the loopback is in place, ending with the word 'COMMIT'.

Firewall Testing

The last two chapters on firewalls and iptables are nice, but how do we know that they are really doing their job of protecting our network? Enter an open source project called "ftester," a set of Perl scripts that will test a firewall or IDS sensor. The features of ftester are listed in Figure 3.16:
Figure 3.16 Features of the firewall testing software ftester
• firewall testing
• IDS testing
• simulation of real tcp connections for stateful inspection firewalls (Netfilter, IPfilter...) and IDS (snort)
• connection spoofing
• IP fragmentation / TCP segmentation
• IDS evasion techniques

The required Perl modules are listed here:
• Net::RawIP
• Net::PcapUtils
• NetPacket

The easy way to get the Perl packages is to use the CPAN module (-MCPAN) with the following commands:


# perl -MCPAN -e "install Net::RawIP" # perl -MCPAN -e "install Net::PcapUtils" # perl -MCPAN -e "install NetPacket"

In my case, testing on Fedora Core 2 and Red Hat 9 boxes, Net::PcapUtils failed to install because it needed Net::Pcap, but Net::Pcap would also fail: the version that CPAN was trying to install was an older one. So I used wget to download an RPM built for Fedora Core 2, and life was good, as we see below in Figure 3.17:
Figure 3.17 Installing Pcap using wget
# wget
[root@Fedora-Core2 root]# rpm -i perl-Net-Pcap*

[root@Fedora-Core2 root]# rpm -q perl-Net-Pcap perl-Net-Pcap-0.05-1.1.fc2.rf

[root@Fedora-Core2 root]# perl -MCPAN -e "install Net::PcapUtils"
:::output trimmed for clarity:::
/usr/bin/make install -- OK

[root@Fedora-Core2 root]#

This shows you that, in spite of everything, you may still have to get creative to install and compile some applications. Once we had Pcap out of the way, the installation and compiling of NetPacket finished up in fine shape. Now we can install the actual scripts to test our firewall, downloading them with wget again as we see here:
# wget

The installation is just a matter of unpacking the scripts and then creating the configuration file for ftest to use. The ftestd process is started first, and then we start the test. We specify -i and give the interface to use, as we see in Figure 3.18:
Figure 3.18 Starting ftestd with a specified interface
[root@Fedora-Core2 ftester-0.9]# ./ftestd -i eth0 Firewall Tester sniffer v.0.7 Copyright (C) 2001,2002 Andrea Barisani <>

default system TTL =
replies TTL = 200
listening on eth0

Then we start our test by telling the ftest script where the ftest.conf configuration file is located, as we see here:

[root@Fedora-Core2 ftester-0.9]# ./ftest -f ftest.conf

When ftest hits the stop_signal in the script, the test is over and the ftestd process will stop with this message:
received disconnection signal at ./ftestd line 248. [root@Fedora-Core2 ftester-0.9]#

So, what have we accomplished so far? Within the ftest.conf file, we can set up what IP address we want the packets to come from, where to send the packets, what protocol to use, what flags to set, and more.
An easy way to change all of the IP addressing to meet your needs is to use the global search and replace of vi with this command: :%s/old_address/new_address/g

The file is well documented, and it can be configured to meet whatever needs you have. In our test, I just ran some basic queries against a Red Hat host using iptables as the firewall. We can see the results from the log file using dmesg in Figure 3.19:
Figure 3.19 Resulting log file in Firestarter from using ftest

We can see the IP address I configured in ftest.conf as my source IP address. The ports tested were 666, 1022, 1025, 21 and 80. The basic structure of a rule in the configuration file is:
Source Address:Source Port:Destination Address:Destination Port:Flags:Protocol:Type of Service
# typical entry will be something like this

In this rule, our source address is; our port is 1025; our destination address is; our flags are AS; our protocol is TCP; and our type of service is 0. This makes for an easy to write configuration file for testing


your own firewalls or IDS sensors. An excellent resource is available online to help you with using ftester.

Firewall Script
<rc.firewall-2.4 START>
#!/bin/sh
#
# rc.firewall-2.4
FWVER=0.75
#
# Initial SIMPLE IP Masquerade test for 2.4.x kernels using IPTABLES.
#
# Once IP Masquerading has been tested, with this simple ruleset, it is
# highly recommended to use a stronger IPTABLES ruleset either given
# later in this HOWTO or from another reputable resource.
#

echo -e "\n\nLoading simple rc.firewall version $FWVER..\n"

# The location of the iptables and kernel module programs
#
#   If your Linux distribution came with a copy of iptables, most
#   likely all the programs will be located in /sbin.  If you
#   manually compiled iptables, the default location will be in
#   /usr/local/sbin
#
# ** Please use the "whereis iptables" command to figure out
# ** where your copy is and change the path below to reflect
# ** your setup
#
#IPTABLES=/sbin/iptables
IPTABLES=/usr/local/sbin/iptables
DEPMOD=/sbin/depmod
MODPROBE=/sbin/modprobe

#Setting the EXTERNAL and INTERNAL interfaces for the network
#
#  Each IP Masquerade network needs to have at least one external and
#  one internal network.  The external network is where the natting
#  will occur and the internal network should preferably be addressed
#  with a RFC1918 private address scheme.
#
#  For this example, "eth0" is external and "eth1" is internal
#
#  If this doesn't EXACTLY fit your configuration, you must change
#  the EXTIF or INTIF variables above.  For example:
#
#  If you are a PPPoE or analog modem user:
#
EXTIF="eth0"
INTIF="eth1"
echo "   External Interface:  $EXTIF"
echo "   Internal Interface:  $INTIF"

#====================================================================== #== No editing beyond this line is required for initial MASQ testing ==

echo -en "

loading modules: "

# Need to verify that all modules have all required dependencies
#
echo "  - Verifying that all kernel modules are ok"


# With the new IPTABLES code, the core MASQ functionality is now either
# modular or compiled into the kernel.  This HOWTO shows ALL IPTABLES
# options as MODULES.  If your kernel is compiled correctly, there is
# NO need to load the kernel modules manually.
#
#  NOTE: The following items are listed ONLY for informational reasons.
#        There is no reason to manually load these modules unless your
#        kernel is either mis-configured or you intentionally disabled
#        the kernel module autoloader.


# Upon the commands of starting up IP Masq on the server, the
# following kernel modules will be automatically loaded:
#
# NOTE:  Only load the IP MASQ modules you need.  All current IP MASQ
#        modules are shown below but are commented out from loading.

# ===============================================================

echo "----------------------------------------------------------------------"

#Load the main body of the IPTABLES module - "iptable"
#  - Loaded automatically when the "iptables" command is invoked
#  - Loaded manually to clean up kernel auto-loading timing issues
#
echo -en "ip_tables, "
$MODPROBE ip_tables

#Load the IPTABLES filtering module - "iptable_filter"
#  - Loaded automatically when filter policies are activated

#Load the stateful connection tracking framework - "ip_conntrack"
#
#  The conntrack module in itself does nothing without other specific
#  conntrack modules being loaded afterwards such as the
#  "ip_conntrack_ftp" module
#
#  - This module is loaded automatically when MASQ functionality is
#    enabled
#  - Loaded manually to clean up kernel auto-loading timing issues
#
echo -en "ip_conntrack, "
$MODPROBE ip_conntrack

#Load the FTP tracking mechanism for full FTP tracking
#
# Enabled by default -- insert a "#" on the next line to deactivate
#


echo -en "ip_conntrack_ftp, " $MODPROBE ip_conntrack_ftp

#Load the IRC tracking mechanism for full IRC tracking
#
# Enabled by default -- insert a "#" on the next line to deactivate
#
echo -en "ip_conntrack_irc, "
$MODPROBE ip_conntrack_irc

#Load the general IPTABLES NAT code - "iptable_nat"
#  - Loaded automatically when MASQ functionality is turned on
#  - Loaded manually to clean up kernel auto-loading timing issues
#
echo -en "iptable_nat, "
$MODPROBE iptable_nat

#Loads the FTP NAT functionality into the core IPTABLES code
# Required to support non-PASV FTP.
#
# Enabled by default -- insert a "#" on the next line to deactivate
#
echo -en "ip_nat_ftp, "
$MODPROBE ip_nat_ftp

#Loads the IRC NAT functionality into the core IPTABLES code
# Required to support NAT of IRC DCC requests
#
# Disabled by default -- remove the "#" on the next line to activate
#
#echo -e "ip_nat_irc"
#$MODPROBE ip_nat_irc

echo "----------------------------------------------------------------------"

# Just to be complete, here is a partial list of some of the other
# IPTABLES kernel modules and their function.  Please note that most


# of these modules (the ipt ones) are automatically loaded by the
# master kernel module for proper operation and don't need to be
# manually loaded.
# --------------------------------------------------------------------
#
#    ipt_REJECT        - this target DROPs the packet and returns a
#                        configurable ICMP packet back to the sender.
#
#    ipt_LOG           - this target allows for packets to be logged
#
#    iptable_filter    - this module allows for packets to be DROPped,
#                        REJECTed, or LOGged.  This module automatically
#                        loads the following modules:
#
#    ipt_unclean       - this match allows to catch packets that have
#                        invalid IP/TCP flags set
#
#    ipt_state         - this match allows to catch packets with
#                        various IP and TCP flags set/unset
#
#    ipt_multiport     - this match allows for targets within a range
#                        of port numbers vs. listing each port
#                        individually
#
#    ipt_limit         - this match allows for packets to be limited
#                        to many hits per sec/min/hr
#
#    ipt_tcpmss        - this target allows to manipulate the TCP MSS
#                        option for braindead remote firewalls.  This
#                        automatically loads the ipt_TCPMSS module
#
#    ipt_mark          - this target marks a given packet for future
#                        action.  This automatically loads the ipt_MARK
#                        module
#
#    iptable_mangle    - this target allows for packets to be
#                        manipulated for things like the TCPMSS
#                        option, etc.
#
#    ip_nat_snmp_basic - this module allows for proper NATing of some
#                        SNMP traffic


# echo -e "

sender. Done loading modules.\n"

#CRITICAL:  Enable IP forwarding since it is disabled by default
#
#           Redhat Users:  you may try changing the options in
#                          /etc/sysconfig/network from:
#
echo "   Enabling forwarding.."
echo "1" > /proc/sys/net/ipv4/ip_forward

# Dynamic IP users:
#
#   If you get your IP address dynamically from SLIP, PPP, or DHCP,
#   enable this following option.  This enables dynamic-address hacking
#   which makes the life with Diald and similar programs much easier.
#
echo "   Enabling DynamicAddr.."
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

# Enable simple IP forwarding and Masquerading
#
#  NOTE:  In IPTABLES speak, IP Masquerading is a form of SourceNAT or
#         SNAT.
#
#  NOTE #2:  The following is an example for an internal LAN address in
#            the 192.168.0.x network with a "24" bit subnet mask
#            connecting to the Internet on external interface "eth0".
#            This example will MASQ internal traffic out to the
#            Internet but not allow non-initiated traffic into your
#            internal network.
#
#  ** Please change the above network numbers, subnet mask, and
#  ** your Internet connection interface name to match your setup
#

#Clearing any previous configuration
#
#  Unless specified, the defaults for INPUT and OUTPUT is ACCEPT


#  The default for FORWARD is DROP (REJECT is not a valid policy)
#
echo "   Clearing any existing rules and setting default policy.."


echo " FWD: Allow all connections OUT and only existing and related ones IN" $IPTABLES -A FORWARD -i $EXTIF -o $INTIF -m state --state ESTABLISHED,RELATED -j ACCEPT $IPTABLES -A FORWARD -i $INTIF -o $EXTIF -j ACCEPT $IPTABLES -A FORWARD -j LOG

echo "

Enabling SNAT (MASQUERADE) functionality on $EXTIF"


echo -e "\nrc.firewall-2.4 v$FWVER done.\n" <rc.firewall-2.4 STOP>

This script is by David Ranch.

Resources

There are many resources available to help us build firewalls using iptables. In Chapter 2 we touched on a few utilities, but these resources will help you write your own rules for custom applications.
Excellent firewall script
NARC (Netfilter Automatic Rules)
IP Masquerade

Iptables match module for P2P


Chapter 4
“All that is necessary for evil to succeed is that good men do nothing.” Edmund Burke

Updating Linux
In the world of Windows we have the Windows Update utility to help manage patches and hot fixes. In the world of Linux, no conversation would be complete without a discussion of the various ways we can update the system. The updates and patches range from kernel updates to application updates. In the old days, updates were always a real joy; the old timers lovingly refer to something called "RPM hell," where you tried to install a package only to find there were three dependencies, so you went to install those, but one of them required yet another package, and on it went. The good news is that RPM hell has been pretty much vanquished by some new tools that Red Hat, Yellow Dog and Debian have developed, and these have been ported across the various platforms. An example of this porting is the application known today as YUM, which started life as the Yellowdog Updater, or YUP. Yellow Dog was a port of Linux to run on the Macintosh and used RPM packaging. YUP was good but slow, and over time it was updated into what is today called the Yellowdog Updater, Modified, or YUM.

The Advanced Package Tool, or APT, was developed for Debian Linux. APT is a bit different from YUM in that it is not a single "program" but a set of C++ functions that are used by command line utilities like apt-get. APT was developed for .deb packages but has been ported to other distributions such as Red Hat and Fedora, where it works with RPMs. The native Red Hat and Fedora updater is called Up2date, but later versions of Fedora include yum as well. Up2date has both a command line interface and a GUI front end, resulting in easy updating.

Each of the updaters works with packages, or precompiled binaries, where someone has taken the source files and compiled the application or patch into a binary file specific to a distribution of Linux, and sometimes specific to a certain version of the distribution.
Packages can be found in several different flavors with RPM being one of the most common formats that crosses from Red Hat to SuSE and several distributions in between.
A good habit to get into in today's world is to always use md5sum to verify the integrity of files before you compile or install them. Most reputable sites will have the signature posted or available for download so you can verify the file.
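The check itself is a one-liner. Here is a sketch using a stand-in file; with a real download you would compare the printed digest against the checksum published on the site:

```shell
# Create a stand-in "download" and compute its MD5 digest:
printf 'hello\n' > /tmp/download.tar.gz
digest=$(md5sum /tmp/download.tar.gz | awk '{print $1}')
echo "$digest"
# Compare the printed digest with the published checksum before
# compiling or installing the file.
```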


The granddaddy of package managers is called Red Hat Package Manager, or RPM. RPMs are one of the original ways to package all the required files for an

application, check for dependencies, and check for versions. RPM keeps a database of all this information on the local system. However, RPM cannot download any needed dependencies; it will just tell you what is missing, and it is up to you to download the needed files. This inability to resolve dependencies is what leads to the RPM hell of intermingled dependencies. Resolvers have been developed over time to fill in this oversight, such as Up2date, which will take an RPM and resolve any dependencies required. Other resolvers include SuSE's YaST and YUM. The resolvers work something like this: each RPM has a header, and this header holds the complete file list, the features, what the RPM requires, and what it conflicts with. Virtually all of the package managers use this header information to build a database or an index.

So the design of RPM packages means that if you wanted to install application ABC, you would find and download the RPM package of ABC and then use the RPM package manager to install it. The RPM manager would read the header information and make sure that all the required dependencies are accounted for and installed. If they are not, a well designed package manager will flag the file and ask you if you want to download the required but missing parts. The same process that works for installing a package works to update a package, and you can also remove a package using the package manager.

We normally use the RPM package manager from a command line interface. In Figure 4.1 we see a few of the command options available to us via the command line.
Figure 4.1 Command Line RPM Package Management
# rpm -i <filename>   ; installs <filename>
# rpm -e <filename>   ; erases <filename>; can be used with --nodeps to erase a stubborn file with dependencies
# rpm -q <filename>   ; queries the rpm database about <filename>
# rpm -U <filename>   ; updates a given <filename>
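The header-driven dependency walk that resolvers perform can be sketched as a toy in shell; the package names and dependency lists below are invented for illustration, not real RPM metadata:

```shell
#!/bin/sh
# Toy resolver: each "package" declares what it requires, and we walk
# the requirements transitively.  Names and dependencies are invented.
deps_for() {
    case "$1" in
        abc)    echo "libfoo libbar" ;;
        libfoo) echo "libbaz" ;;
        *)      echo "" ;;
    esac
}
# Depth-first walk of the requirements of a package:
resolve() {
    for dep in $(deps_for "$1"); do
        echo "$dep"
        resolve "$dep"
    done
}
order="abc"
for d in $(resolve abc); do
    order="$order $d"
done
echo "install order: $order"
```

A real resolver like Up2date or YUM does the same walk, but over the Requires/Provides fields in the RPM headers of a repository index.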

We can also mix options on the command line. Figure 4.2 shows the mixing of the -q (query) and the -l (list) options: Figure 4.2 Mixing RPM options on the command line
# rpm -ql telnet /usr/bin/telnet /usr/share/man/man1/telnet.1.gz

You can see that not only do we query the RPM database, we also see where the files matching the query are located. Another example is upgrading a package, as shown here:
# rpm -Uvh <package name>

This upgrades (-U) the package, and the "v" and "h" tell RPM to be verbose and to print hash marks to show progress. If we want information about an installed package such as httpd, perhaps because we need to verify exactly what is installed and when it was installed, we can easily


accomplish this with the query parameter. In Figure 4.3 we see how combining parameters with RPM gets us all this information.
Figure 4.3 Running an RPM query to get detailed information
[root@FedoraC1 root]# rpm -qi httpd
Name        : httpd                         Relocations: (not relocateable)
Version     : 2.0.50                        Vendor: Red Hat, Inc.
Release     : 1.0                           Build Date: Thu 01 Jul 2004 05:51:24 AM PDT
Install Date: Wed 17 Nov 2004 03:05:50 PM PST    Build Host:
Group       : System Environment/Daemons    Source RPM: httpd-2.0.50-1.0.src.rpm
Size        : 2608371                       License: Apache Software
Signature   : DSA/SHA1, Mon 19 Jul 2004 10:56:07 AM PDT, Key ID b44269d04f2a6fd2
Packager    : Red Hat, Inc. <>
URL         :
Summary     : The httpd Web server
Description :
This package contains a powerful, full-featured, efficient, and
freely-available Web server based on work done by the Apache Software
Foundation. It is also the most popular Web server on the Internet.
[root@FedoraC1 root]#

You can see that we get a lot of useful information by combining -q with -i. One of the problems we face in this book is that when we install from source files, RPM is clueless as to what we just did. But from a security standpoint, we do not want to install packages that we did not create or cannot verify, so we are in a bit of a bind. Never fear, there is always hope, and in this case hope comes from a script called rpm-orphan-find. This script by Paul Heinlein and Peter Samuelson will look for orphaned libraries and then build a virtual RPM package that will be "installed," adding our source-installed applications to the RPM database. This helps prevent erroneous dependencies from being reported when you try to install a package using the RPM package manager. In Figure 4.4 we see the script in action.
Figure 4.4 The rpm-orphan-find script in action
[root@FedoraC1 root]# ./
Scanning system libraries
linux-base-libs version 1.0-1...


Orphans found: 3/1850...

Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.81585
+ umask 022
+ cd /usr/src/redhat/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.81585
+ umask 022
+ cd /usr/src/redhat/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.81585
+ umask 022
+ cd /usr/src/redhat/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ /usr/lib/rpm/redhat/brp-compress
+ /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
+ /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip
+ /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
Processing files: linux-base-libs-1.0-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files %{buildroot}
Wrote: /usr/src/redhat/SRPMS/linux-base-libs-1.0-1.src.rpm
Wrote: /usr/src/redhat/RPMS/i386/linux-base-libs-1.0-1.i386.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.93261
+ umask 022
+ cd /usr/src/redhat/BUILD
+ exit 0
[root@FedoraC1 root]#

The weakness of the original RPM package manager was that it did not fetch the various dependencies that an application or update needed before it would install. This has been addressed to a point in Red Hat and Fedora using the


GUI package management tool found under System Settings – Add/Remove Applications, as seen in Figure 4.5 below.
Figure 4.5 Red Hat and Fedora Package Manager

Other vendors who use RPMs have developed or borrowed package managers that also check the various dependencies and make sure all the correct files are downloaded from the various repositories. Debian's APT package manager, which natively works with .deb packages, has been ported to other distributions such as Red Hat and Fedora, where it works with RPMs. As we mentioned earlier, APT is really a set of library functions behind command line utilities such as apt-get. In Figure 4.6 we see the various APT commands we can use.
Figure 4.6 Commands for the APT package manager
update        Retrieve new lists of packages
upgrade       Perform an upgrade
install       Install new packages (pkg is libc6 not libc6.rpm)
remove        Remove packages
source        Download source archives
build-dep     Configure build-dependencies for source packages
dist-upgrade  Distribution upgrade, see apt-get(8)
clean         Erase downloaded archive files
autoclean     Erase old downloaded archive files
check         Verify that there are no broken dependencies

To use APT, we should update the database periodically to keep a current listing of available files. We can add or delete repositories for APT by editing the /etc/apt/sources.list file. In Figure 4.7 we see how the update command works.


Figure 4.7 Updating apt's database
[root@RedRum root]# apt-get update
Get:1 redhat/9/i386 release [1171B]
Fetched 1171B in 0s (1954B/s)
Hit redhat/9/i386/os pkglist
Hit redhat/9/i386/os release
Hit redhat/9/i386/updates pkglist
Hit redhat/9/i386/updates release
Hit redhat/9/i386/freshrpms pkglist
Hit redhat/9/i386/freshrpms release
Reading Package Lists... Done
Building Dependency Tree... Done
You have new mail in /var/spool/mail/root
[root@RedRum root]#
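The repositories being hit above come from /etc/apt/sources.list. A hypothetical apt-rpm style entry for a setup like this might look as follows; the server name is a placeholder, and the components match the ones shown in the update output:

```shell
# /etc/apt/sources.list (apt-rpm style); "" is invented.
rpm redhat/9/i386 os updates freshrpms
```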

Installing a package with APT is very easy: we just use the command apt-get with the keyword install. In Figure 4.8 we see the install command installing the ghostscript-devel package.
Figure 4.8 Installing a package using apt-get install
[root@RedRum root]# apt-get install ghostscript-devel
Reading Package Lists... Done
Building Dependency Tree... Done
The following NEW packages will be installed:
  ghostscript-devel
0 packages upgraded, 1 newly installed, 0 removed and 0 not upgraded.
Need to get 30.2kB of archives.
After unpacking 52.3kB of additional disk space will be used.
Get:1 redhat/9/i386/updates ghostscript-devel 7.05-32.1 [30.2kB]
Fetched 30.2kB in 1s (22.2kB/s)
Executing RPM (-Uvh)...
Preparing...          ######################################## [100%]
1:ghostscript-devel   ######################################## [100%]
[root@RedRum root]#

While APT is a very cool application to use, there is one issue that you need to remember: APT keeps a copy of each downloaded package in its cache under /var/cache/apt, which can fill up the disk if you never do any housekeeping. The housekeeping is very easy; we use the apt-get clean command, which cleans out the package cache.

There is a GUI interface available for APT called Synaptic. In Figure 4.9 we can see that Synaptic gives us the “look and feel” of many of the other GUI package managers. Figure 4.9 Synaptic running on Fedora Core 1

Synaptic gives us a complete package management tool where we can install, upgrade or remove packages, all with a GUI toolset. You can use APT to get Synaptic, or you can download the files yourself. SuSE also uses the RPM format for packaging their binaries, and they have built a package manager into the SuSE YaST Control Center. In Figure 4.10 below we see the YaST Control Center at the package management option. Figure 4.10 SuSE Control Center Package Management Option


Here we can perform our system updates from the internet or from local files. We can also install or remove software applications. In the world of package management, there are still some distributions that default to the command line. BSD and Slackware both use a command line interface for their package management. The BSD updates and applications are called “ports”; you download the port and “make” the package. For the BSD user, one of the first things to do is to build the README files, as we see in Figure 4.11: Figure 4.11 Making BSD README files for ports
% cd /usr/ports % make readmes

This will take a long while, but when it is done you will have a collection of resources at /usr/ports/README.html for working with ports (and BSD) that will be invaluable. When working with RPMs, there is an occasional bug that causes the RPM database to “lock up”. The bug shows itself with an error message that says:
rpmdb: Program version 4.2 doesn’t match environment version

If you find you are having this problem with any of the updaters that use RPMs, never fear; there are a couple of easy fixes. The following is an informal fix and works most of the time:

# rm -f /var/lib/rpm/__db*

This clears the files that hold the lock state information. With these files gone, RPM is a happy camper once again. If this easy fix does not repair the RPM database, it is time for more drastic measures. Before taking them, it is HIGHLY recommended to make a backup of the database by following these directions:
# cd /var/lib # mkdir rpm-backup # rsync -av ./rpm/. ./rpm-backup/.

We can see the exact sequence in Figure 4.12 below:


Figure 4.12 Backing up the RPM database prior to rebuilding

Once we have the backup, we can run a rebuild against the database like this:
# rpm -vv --rebuilddb

The -vv option gives us quite a bit of information about what exactly is happening as the rebuild runs for a few minutes. Occasionally we will see the following error message during or after rebuilding our RPM database. It is harmless, just messy to see.
error: db4 error(16) from dbenv->remove: Device or resource busy

Red Hat Up2date
Red Hat developed an application called up2date to help ease the pain of the RPM madness and included it in Red Hat 7 and all current versions, including Fedora and the commercial versions of Red Hat. Up2date uses what are called “channels” to get the various packages required. For the official Red Hat ES and WS packages, the updater will have you set up an account with Red Hat and buy entitlements, which specify what level of support you will receive from Red Hat. Up2date then uses the official Red Hat channels to get the required updates. Fedora uses the same tool but does not require a user name and will not use the official corporate Red Hat channels. In Figure 4.13 we see the Red Hat ES up2date application at work, retrieving information about which package updates are available.


Figure 4.13 Red Hat Up2date application retrieving updates

The downside for the Fedora user community is that the official Fedora update channels sometimes slow down due to heavy use. But we are able to use mirror channel sites and find which ones work best for us. Up2date can use both apt and yum repositories, so we have quite a few options available. All we need to do is edit the /etc/sysconfig/rhn/sources file, as we see in Figure 4.14, and add the channels we want: Figure 4.14 Source site file for Fedora updates
[root@RedRum rhn]# cat sources ### this describes the various package repos up2date will look into ### for packages. It currently supports apt-rpm repos, yum repos, ### and "dir" repos

### format is one repo entry per line, # starts comments, the ### first word on each line is the type of repo. ### the default rhn (using "default" as the url means ### use the one in the up2date config file #up2date default

### When a channel-label is required for the non up2date repo's, ### the label is solely used as an internal identifier and is not ### based on the url or any other info from the repo.

### an apt style repo, this time arjanv's 2.6 kernel repo ### format is: ### type channel-label service:server path repo name


#apt arjan-2.6-kernel-i386 ~arjanv/2.5/ kernel

### Note that for apt repos, there can be multiple repo names specified ### space separated.

### an yum style repo ### format: ### type channel-label url

yum fedora-core-1 yum updates-released yum security ### an local directory full of packages ### format #dir my-favorite-rpms-i386-9 /var/spool/RPMS/

# multiple versions of all repos except "up2date" can be used. Dependencies # can be resolved "cross-repo" if need be.

[root@RedRum rhn]#

Where the file says ### type channel-label ### we will add our mirror sites. To improve the speed of the updates, try changes like the following, which point up2date at a new mirror site:
## Mirror USA East Coast yum ibiblio-fedora-core-1 6/os/

More mirror servers are listed online. We can also use the command line for our updates on Fedora and Red Hat with the up2date command. In Figure 4.15, we use the command line to list the available updates. Figure 4.15 Using the command line to retrieve a listing of available updates
[root@RedRum lib]# up2date -l
Fetching package list for channel: fedora-core-1...
Fetching ########################################
Fetching package list for channel: updates-released...
Fetching ########################################
Fetching package list for channel: security...
Fetching headers/ ########################################
Fetching Obsoletes list for channel: fedora-core-1...
Fetching Obsoletes list for channel: updates-released...
Fetching Obsoletes list for channel: security...
Fetching rpm headers...
########################################

Name                 Version    Rel
----------------------------------------------
krb5-devel           1.3.4      5      i386
krb5-libs            1.3.4      5      i386

[root@RedRum lib]#

In this example we have only two available updates: krb5-devel and krb5-libs. Another option is to search for a package using up2date --showall. By piping the output through grep we can find the specific package we are interested in. We can retrieve just the source file, or SRPM, with up2date --get-src <package name>. A useful option is to install the package as a test run, where the package manager goes through the motions of installing the package without making any changes; this option is up2date --dry-run <package>. The configuration of up2date is kept in /etc/sysconfig/rhn/up2date.
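Filtering a long package listing through grep, as described above, looks like this. A canned listing stands in for the live up2date --showall output here, and the package names are just examples:

```shell
# Pipe a package listing through grep to find the packages we care about.
# The canned listing below stands in for `up2date --showall` output.
listing='krb5-devel-1.3.4-5
krb5-libs-1.3.4-5
zlib-1.2.1-2'

matches=$(printf '%s\n' "$listing" | grep krb5)
echo "$matches"
```

Only the two krb5 packages survive the filter; the zlib line is dropped.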


YUM
The Yellowdog Updater, Modified, or YUM, was developed originally for Yellow Dog Linux but has since been ported to various distributions such as Red Hat and Fedora. In fact, YUM is officially included in Fedora Core 1, 2 and 3. There are a few advantages to using YUM over APT, such as a smaller code base; YUM is also written in Python, which allowed the author to reuse some code in up2date. Truthfully, YUM works well and APT works well; you can pick either and do fine. To get a current distribution of YUM, download the correct version for your distribution. To find out your current version of YUM, use this command:
[root@orion root]# yum --version 2.1.12 [root@orion root]#

We can see that the current version of YUM installed is 2.1.12. The configuration file for YUM is found at /etc/yum.conf. This file holds the global configuration options and the server list of the repositories. Of course, for a production shop, you can always set up your own repositories. This way you download the updates once to your server and then your local systems can update from there, saving bandwidth on the WAN. YUM can also give us information about a package even if it is not installed. In Figure 4.16 we have sample output of the yum info command, in which yum was asked about the version of the kernel. Figure 4.16 Output of a yum query for package information
[root@RedRum]# yum info kernel
Name   : kernel
Arch   : athlon
Version: 2.4.22
Release:
Size   : 30.49 MB
Group  : System Environment/Kernel
Repo   : Fedora Core 1 - i386 - ATrpms stable
Summary: The Linux kernel (the core of the Linux operating system).
Description:
The kernel package contains the Linux kernel (vmlinuz), the core of the
Red Hat Linux operating system. The kernel handles the basic functions
of the operating system: memory allocation, process allocation, device
input and output, etc.

We can see that the kernel architecture is for an Athlon and the version is 2.4.22. YUM also gives us the size of 30.49 MB, along with a summary description of the package. Along with installing packages, YUM can remove packages. The

command yum remove mythtv will remove any package whose name starts with mythtv. A list of public repositories for YUM can be found online. Lest you think that installing and removing are all YUM can accomplish, not true! Here is a partial list of YUM commands; these commands can take shell-style wildcards (*, ?) instead of the full package name:
• update – This will update all packages unless otherwise specified. Use this command with caution.
• upgrade – Much the same as update, but uses the --obsoletes flag. This is a good way to upgrade from one version of the distribution to a new one, such as moving from Red Hat 8 to Red Hat 9.
• check-update – Lets you run a trial update and see what would be updated, without having to do the update.
• clean – Performs housekeeping of the yum cache.
• list – Used with various options to list information about packages.
• search – Searches all packages and descriptions for a match for a <word>.
• remove – Removes packages AND packages that depend on the removed package. The erase command is the same thing.
• info – Shows basic information about a given package, such as release, size, architecture and more.
There are a few options to be aware of when working with YUM. The most popular is -y, which tells YUM to install the packages without prompting you for approval. Another useful command is yum list available, which gives a complete list of all the packages available. For more details, read the man page for yum, which covers more than we have space for here.
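The /etc/yum.conf file mentioned above pairs a [main] section of global options with one stanza per repository (the early yum versions kept repository definitions directly in this file). This is only a sketch of the format; the repository label, name and URL below are placeholders, not real servers:

```ini
[main]
cachedir=/var/cache/yum
logfile=/var/log/yum.log

# One stanza per repository: a label, a human-readable name and a base URL.
# The server below is a placeholder, not a real repository.
[base]
name=Fedora Core 1 base packages
baseurl=http://mirror.example.com/fedora/1/i386/os/
```

Pointing baseurl at your own in-house server is all it takes to make local systems update from a single internal mirror.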

APT
Advanced Package Tool, or APT, was developed by Debian and has since been ported to many other distributions. APT is not a single application per se, but a suite of tools invoked through commands such as apt-get install. APT in its native form is a command line interface, but there are GUI front ends available for it, such as Synaptic. Installing APT and Synaptic on Red Hat and Fedora is relatively easy with the “kickstart” package, which will step you through installing APT and

Synaptic on Red Hat 9 and Fedora Core 1 and 2. The other requirement is that you use RPM version 4.3. Many distributions of Linux now come APT-enabled, so the quick way to get Synaptic is to use the following commands:
apt-get update
apt-get dist-upgrade
apt-get install synaptic

But for those who enjoy the command line, APT offers a wealth of commands and options for updating, upgrading, fixing and auditing packages that are installed or might be installed. Some of the common commands and options that you will use with the apt-get prefix are:
• update – Resynchronizes the package index files.
• upgrade – Upgrades all packages currently installed.
• dist-upgrade – Upgrades packages and also corrects changing dependencies.
• install – Installs a given package or packages.
• remove – Removes a package or packages.
• check – Updates the package cache and checks for broken dependencies.
• clean – Cleans out the local repository cache.
• --fix-broken – Attempts to fix broken dependencies.
• --fix-missing – Causes APT to ignore missing dependencies.
• --dry-run – Runs through an action without applying it.
• --build – Compiles the sources after they have been downloaded.
• --reinstall – Reinstalls a package using the newest build.

The file that tells APT where to get its files is called sources.list and is located at /etc/apt/sources.list. A sample from a Red Hat 9 installation of APT is shown here:
# List of available apt repositories available from # This file should contain an uncommented default suitable for your system. # # See for a list of other repositories and mirrors. # # $Id: sources.list,v 1.8 2003/04/16 09:59:58 dude Exp $


# Red Hat Linux 9 rpm redhat/9/i386 os updates freshrpms #rpm-src redhat/9/i386 os updates freshrpms

Lines beginning with # are comments; the uncommented rpm line is the default location that APT will use to get the files.

What is a kernel update?
In the world of Linux, the kernel is king. It is the core of the Linux operating system and, as such, it is very important to keep the kernel patched to the latest code for security. While Linux does not have the range of security issues that some other operating systems do, there are and will be security holes that need to be addressed.

There are two major Linux kernels to work with today: the older version 2.4 and the newer version 2.6. While 2.6 offers major improvements over 2.4, there are some issues that make 2.4 attractive for now. The 2.4 release brought USB support, ISA Plug and Play, PC Card support and other useful features, and was a major improvement over the 2.2 kernel from an architectural standpoint. The new 2.6 kernel is again a major improvement to the Linux architecture and offers many new features. One of the most significant is that it can scale down to PDAs or up to the new Non-Uniform Memory Access, or NUMA, designs; the entire memory model has been changed for the better. Another major feature fully implemented in 2.6 is hyperthreading support, so we can now optimize processor loads across multiple processors. The new kernel also has better hardware detection and support for newer laptops and desktops.

Security has not been overlooked in the new kernel design. The 2.6 kernel has improved security by moving kernel-based security into modules. Internet Protocol Security, or IPsec, has been added to the kernel, and Network File System, or NFS, security has also been improved. So while kernel 2.4 offers good functionality and stability, the new 2.6 kernel may be just the ticket for your project, even with some of the teething pains that accompany such a large improvement and re-architecting of the kernel.

How do I tell which kernel I have installed?
Determining which kernel you are currently running is very easy from the command line. From the command line or a terminal session, just type in the following command:
[root@RedRum lib]# uname -r 2.4.22-1.2199.nptl [root@RedRum lib]#


The uname -r command returns the version of the kernel currently running. In this example, taken from Fedora Core 1, the kernel is version 2.4.22-1.2199. These numbers tell us that the major version is 2, the minor version is 4, and the patch level is 22. If the minor number is even, as it is here, the kernel is considered a stable release; if the minor number is odd, the kernel is a development version.
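The version-number breakdown described above can be sketched in shell. The release string is hard-coded here for illustration; on a live system you would use ver=$(uname -r) instead:

```shell
# Split a kernel release string into major/minor/patch and classify it.
ver="2.4.22-1.2199.nptl"      # a live system would use: ver=$(uname -r)

base=${ver%%-*}               # strip the distribution suffix -> 2.4.22
major=${base%%.*}             # -> 2
rest=${base#*.}
minor=${rest%%.*}             # -> 4
patch=${rest#*.}              # -> 22

# In this era, even minor numbers were stable releases; odd were development.
if [ $((minor % 2)) -eq 0 ]; then
    status="stable"
else
    status="development"
fi
echo "$major.$minor.$patch ($status)"
```

For the sample string above this prints 2.4.22 (stable).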

How do I update the kernel?
In order to update the kernel, we have to get the new code or patches. The easy way is to use one of the updaters we discussed a few sections back, but if you need to do this manually, it is possible. The first stop is the official kernel archive or your favorite mirror to see what the latest and greatest kernel is, and then download the version you want. Once you have downloaded the new kernel, you should always verify the MD5 signature to make sure the package has not been tampered with. There have been a few cases in which hackers broke into Linux repositories and tampered with files, and it would not be a good thing to install a tampered kernel while trying to bring your security up to date. The easy way is to find the MD5 signature of the kernel on the website and then run the md5sum command against the file you just downloaded, as shown in Figure 4.17: Figure 4.17 Using MD5 to verify package integrity
[root@RedRum updates]# md5sum up2date-4.1.21-3.i386.rpm
5f2bd59e256c7af1685ea5d6fb6401c2  up2date-4.1.21-3.i386.rpm
[root@RedRum updates]#

In this example we ran the md5sum command against an update package for Red Hat’s up2date application. The md5 signature is the string 5f2bd59e256c7af1685ea5d6fb6401c2 and we would use that to verify that the rpm package we downloaded had not been tampered with by comparing the md5 signature with the one posted on Red Hat’s site where we downloaded the file. In the example file, I made a single change to one byte. We can see in Figure 4.18 that this one small change affected the MD5 signature checksum. Figure 4.18 Results of changing a file and then using the command md5sum
[root@RedRum root]# md5sum xpde-0.4.0-20030730.tar.gz
d908912ebc2d728d6c86c77868f066ea  xpde-0.4.0-20030730.tar.gz
[root@RedRum root]#

This is what we want to look for when we use md5sum. Any file that does not match the published MD5 signature is considered suspect and should not be used until otherwise proven to be safe.
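A checksum comparison like the one described above is easy to script. In this sketch the “downloaded” file and its “published” signature are stand-ins created locally, so the file name is hypothetical:

```shell
# Compare a file's md5sum against a published MD5 signature.
file="package.bin"
printf 'hello\n' > "$file"                    # stand-in for a downloaded package
expected="b1946ac92492d2347c6235b4d2611184"   # the "published" md5 of that content

actual=$(md5sum "$file" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "OK: checksums match"
else
    echo "SUSPECT: do not use this file" >&2
fi
```

In real use, expected would be copied from the download site and file would be the package you fetched.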


Before any major upgrade, such as upgrading the kernel, you should always, always have a backup of the system or a rescue disk handy in order to recover from an upgrade gone awry. When Linux is first installed, most distributions ask if you want to make a rescue disk, and most times people say no. But we can always make the rescue disk later. If you are using Fedora, you can download a rescue disk ISO file ready to burn. Most of us, though, will be making our boot floppy from the command line, so we start with a blank, formatted floppy disk. Then we get the version of our kernel with the uname -r command. With the version number in hand, we can start the process with the mkbootdisk command, as seen in Figure 4.19. Figure 4.19 Making a Red Hat or Fedora rescue floppy
[root@RedRum root]# mkbootdisk --device /dev/fd0 2.4.22-1.2199.nptl Insert a disk in /dev/fd0. Any information on the disk will be lost. Press <Enter> to continue or ^C to abort: 20+0 records in 20+0 records out [root@RedRum root]#

In this sample we are writing our files to fd0 and the kernel version is 2.4.22-1.2199.nptl. When working with SuSE, Red Hat 9, Slackware or Debian, we will use the mkrescue command to build our rescue disk. The newest version of mkrescue can also burn ISOs; this capability is becoming more important as kernel sizes increase and no longer fit on a floppy for some distributions. The other option is to download the rescue ISO image for your distribution from one of the many sites on the internet; rescue images are available for Mandrake, Fedora Core 2 and others. With some of the newer commercial snapshot tools that can back up the entire disk, such as Ghost or Acronis TrueImage, we can be assured of recovering from an upgrading disaster of wide proportions. We can also use the Linux dd command to take a sector-by-sector backup of a disk. This is a good safety net to have for those ultra-important upgrades where the server absolutely, positively must be up Monday morning. There is always the tried and true dump command, but the reality is that the new snapshot tools are much easier to use than dump. The better tools can actually copy the image over the wire to another server and restore it over the wire, so no tape is needed. There are also boot disks available that are not rescue disks per se, but they give you a way to access the broken Linux (or Windows) system and make changes manually.
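The dd sector-by-sector approach mentioned above works on any block device or file. This sketch images a small scratch file rather than a real disk; on a live system the input would be a device such as /dev/sda (a placeholder here), and you would never write an image back to a mounted device:

```shell
# Make a byte-for-byte image of a "disk" with dd, then verify it.
# A 16 KB scratch file stands in for a real block device like /dev/sda.
dd if=/dev/zero of=scratch-disk.img bs=1024 count=16 2>/dev/null   # fake disk
dd if=scratch-disk.img of=scratch-backup.img bs=1024 2>/dev/null   # image it
cmp scratch-disk.img scratch-backup.img && echo "images identical"
```

Because dd copies raw sectors, the same two commands work unchanged whether the input is a file, a partition, or a whole drive.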


Some of the bootable CDs are:
• LocalAreaSecurity
• INSERT
• Auditor Security Collection
• Knoppix STD
• The Ultimate Boot CD
• LNX-BBC
• Damn Small Linux

These all share the common thread of being a bootable, usable Linux distro on a CD or credit-card CD. In the case of a major malfunction or security breach, any one of these could easily be a lifesaver by giving access to a Linux server and letting you run various scripts and tools.

Alternative Security Kernels
We have the choice of patching the standard kernel, or we can use one of the alternative security kernels such as SELinux. SELinux comes from our friends at the NSA, and there are both 2.4 and 2.6 kernel patches available to download and install. The heart and soul of SELinux is that it uses Mandatory Access Control (MAC) to isolate the various parts of Linux; it does not audit or shut down unneeded services or the like. The MAC style of security is very different from Discretionary Access Control, or DAC, which is the normal model used in Unix. Under DAC, the user has complete control over who has access to the object in question; this is why, when the root account is compromised, the entire Linux box is compromised. MAC is very different, but an analogy is our firewall: in the world of MAC, what is not expressly permitted is denied. Each file and process is part of the security policy and can be secured against access from anyone without the express rights to use the object in question. Along with the positives of using SELinux, it has to be said that this kernel is a prototype, and as such you really want to think hard before deploying it into a production environment. It has not been formally tested and, according to the developers, it will not be submitted since it is a prototype. There is no “formal” support for SELinux other than the newsgroups from the NSA or the development team. So if you decide to use SELinux, please keep all of this in mind. For more details on SELinux, IBM has published some interesting papers that give a lot of detail about SELinux and the nitty-gritty of how it works.


Keeping the LID on
The Linux Intrusion Detection System, or LIDS, is another MAC-style security enhancement for Linux. LIDS consists of a kernel patch and an administrative toolkit, and at this time it is available for kernels 2.4.28 and 2.6.8. Installing LIDS is not an overly complex task, but it is not a simple “install the package” either.
Caution: Do NOT attempt this without a tested and known good backup of your kernel and files.

We will need to download both the LIDS tarball and the matching kernel source code. Install the kernel source code, then decompress the LIDS tarball and run the patching scripts as we see here:
# cd <linux_install_path>/linux
# patch -p1 < /lids_install_path/lids-0.9pre4-2.2.14.patch
# rm -rf /usr/src/linux
# ln -s <linux_install_path>/linux /usr/src/linux

Now the real fun begins; we need to configure and compile the new kernel with the LIDS patches in place. Using make menuconfig or make xconfig, we need to select these three options:
[*] Prompt for development and/or incomplete code/drivers [*] Sysctl support [*] Linux Intrusion Detection System support (EXPERIMENTAL) (NEW)

Now we run through the normal steps of compiling a new kernel as we see here:
# make dep
# make clean
# make bzImage
# make modules
# make modules_install

Once this has completed, we run ./configure in the LIDS directory, followed by the make command, and finally make install to finish installing LIDS. The first step afterwards is to use the command lidsconf -P to set a LIDS password. To enable or disable capabilities, edit the /etc/lids/lids.cap file and make your changes; this puts the basic set of rules for LIDS in place. The last change is to modify our boot scripts with the command lidsadm -I, which “seals” the kernel after we reboot. Our final step is to install the new kernel and reboot the computer. If there is a problem, we can boot without LIDS by typing lids=0 at the boot prompt.

Resources
Grsecurity Kernel Patch


Chapter 5
"If privacy is outlawed, only outlaws will have privacy!" - Philip R. Zimmermann

Encryption, or Protecting Your Data


Encryption has been and will remain a highly contested and debated topic between personal privacy advocates and the various governments. Governments tend to be afraid of things they cannot or do not control, and we see this in the various attempts to insert government-owned backdoors into encryption methods. This was made very clear with the “Clipper chip” debacle of the early 1990s. The government wanted a key in its possession that would unlock any encrypted data, since the backdoor was to be mandated into any method of encryption used. Needless to say, this idea caused a huge outcry, and the Clipper chip faded away. This book skips a lot of the background and the technical issues involved, but the episode highlights why we have software like Pretty Good Privacy, or PGP, written by Phil Zimmermann and released to the world in 1991. It is rather telling that until 1996 Phil was under federal investigation for writing PGP and thereby releasing “munitions” to the world. For those with an interest in the Clipper chip, there are excellent articles available online, and a free version of PGP is available for download.

An offshoot of PGP is GnuPG, or GPG, a free replacement for the PGP suite of cryptographic applications. GPG is released under the GNU General Public License and is part of the Free Software Foundation's GNU software project. GPG version 1.0 was first released in 1999 by its developer, Werner Koch, and GPG has since been ported to Windows. GPG is considered a stable and mature project.

What is encryption?
Encryption is a way of hiding your data in a form that can be very difficult to break into. It is also used to checksum, or form a signature of, data files to verify that they have not been changed. You can likewise sign email to verify that it was not altered in transit. In short, encryption is a way to keep secret things secret. It is not hard to see how this can be a double-edged sword. In times of war there is always the fight to encrypt communications and the fight to break the encrypted communications. World War II brought us the Enigma machine, which gave Germany near total control of the sea lanes thanks to the encrypted communication between the German submarines and their bases. Cryptography gave the Allies “Ultra”, the decrypted messages that provided invaluable intelligence. This is just a sample of the importance of encryption to a government. In the hands of the public, encryption is used every day in our ATM cards, at our e-commerce sites, at our businesses to keep records



confidential, and to keep the government from spying on its own citizens without cause.

Before we get too far into encryption, we should explain a few basic concepts and why they are important to us in this book. We will be using terms like “key”, “passphrase” and “asymmetrical”, and it helps if everyone knows ahead of time just what is meant by them. So let us begin our condensed Crypto 101.

There are two basic forms of encryption that we deal with: symmetrical and asymmetrical. Symmetrical encryption uses a secret key, and for it to work, both Alice and Bob must have the same key. As you can imagine, this can be problematic to manage. Asymmetrical encryption is what is called “public key” encryption and is heavily used today. With public key encryption, if Alice wanted to send an encrypted message to Bob, she would first find Bob's public key on a keyserver or just ask Bob for it. Alice would then encrypt the message with the public key and send the encrypted message to Bob however she wants to. Bob would receive the encrypted message and use his private key to decrypt it. The process works well and is very effective, assuming that proper security measures were used to set up the private and public keys.

There are trade-offs to using public keys versus the single shared key methodology. One trade-off is that the asymmetrical system is slower to encrypt and decrypt data. A second is that it takes a large key in the asymmetrical system to match the strength of a smaller key in the symmetrical system. In Table 5.1 we see a comparison between symmetrical and asymmetrical key lengths.

Table 5.1 Key Size Comparison

Symmetrical    Asymmetrical
56 bit         384 bit
64 bit         512 bit
80 bit         768 bit
112 bit        1792 bit
128 bit        2034 bit

Table from Matt Curtin 1998
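The shared-secret (symmetrical) case described above can be sketched with the common openssl tool. This is only an illustration of the concept (the chapter's own examples use GPG rather than openssl), and the passphrase is obviously a placeholder:

```shell
# Symmetric encryption: Alice and Bob must both know the same secret.
echo "meet at dawn" > plain.txt

# Alice encrypts with the shared passphrase...
openssl enc -aes-128-cbc -pass pass:sharedsecret -in plain.txt -out cipher.bin

# ...and Bob decrypts with the very same passphrase.
openssl enc -d -aes-128-cbc -pass pass:sharedsecret -in cipher.bin -out roundtrip.txt

cmp plain.txt roundtrip.txt && echo "round trip OK"
```

The weakness is visible in the script itself: anyone who learns the one passphrase can decrypt the traffic, which is exactly the key-distribution problem that public key systems solve.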

What is this alphabet soup?
An alphabet soup of acronyms is used within the world of ciphers and encryption: DES, 3DES, DSA, AES, IDEA, Blowfish, RSA, and on and on it goes. The good news is that we need to know just a few of them. Each of these is an algorithm, which is just a way to encrypt or scramble data, and each method has strengths and weaknesses. We make the choice about which will work for us when we choose the algorithm. Data Encryption Standard, or DES, comes in two flavors. There is the original DES, which uses a 56-bit key and was developed by IBM and standardized in 1977, and triple

DES, or 3DES, which uses three separate keys to encrypt the data for an effective key strength of 112 bits. The difference matters because single DES can be cracked relatively easily in less than a day, whereas 3DES takes considerably longer. Digital Signature Algorithm, or DSA, was developed by the NSA, although in the beginning the National Institute of Standards and Technology, or NIST, claimed they developed the algorithm. Given the poor track record of the NSA at being up front about its goals and about what exactly is in the algorithm, DSA has been mired in controversy since its release in 1994, and a “leak” of information discovered by a researcher only fed the paranoia in the security world. Advanced Encryption Standard, or AES, used to be called Rijndael and is the replacement for DES; it was approved in the year 2000 after a call for submissions made by NIST in 1997. International Data Encryption Algorithm, or IDEA, was designed in 1990 and is also a replacement for DES. It uses a 128-bit key, and since it was not developed in the United States it addresses concerns that the NSA might have trap doors built into DES and 3DES. Blowfish, while having a funny-sounding name, is a very serious encryption method designed by Bruce Schneier in 1993. It is a fast algorithm and is freely given away as a drop-in replacement for DES or IDEA. Blowfish uses a variable-sized key ranging from 32 to 448 bits; SSH-2 uses 128 bits when Blowfish is selected. Rivest-Shamir-Adleman, or RSA, is one of the most widely used encryption methods. We will use RSA in this book both for signatures and for setting up our private/public keys. RSA's strength comes from the fact that it is very difficult to factor very large numbers, so cracking RSA on your hotrod PC would be exceedingly difficult. SSH-1 and SSH-2 both support RSA; in fact, SSH-1 requires RSA, whereas RSA is an option for SSH-2.

How does encryption work?
All encryption works toward the same goal: to allow the transfer of a file, a document, or even your voice without fear of compromising the integrity of the data. Encryption is not unique to the digital age. There have been ciphers and encryption for as long as people have been around and have desired to keep secrets. The only thing that has changed is that the encryption methods have become more and more sophisticated, which in turn makes them harder to break. The encryption mechanism we are interested in for this book is the Public Key Infrastructure, or PKI. The concept of using keys is easily understood. We know what keys do; they open things like locks and doors. In the world of security it is the same: the key will encrypt or decrypt the file or message. PKI works like this: we have a public key and a private key. Suppose Alice and Bob want to exchange private information and keep anyone else from reading it. Alice wants to send a document to Bob over the internet, so she will either ask Bob for his public key or look it up on a public key server. Alice will encrypt her document using Bob's public key and send it on its way. Anyone looking at the document along the way will be unable to read it, since they would need Bob's private key to decrypt it. When Bob sends his response, he will use Alice's public key to encrypt it and repeat the process of sending the encrypted document back to Alice in relative safety.

How the actual encryption works depends in part on the algorithm used, such as DES, IDEA, RSA and the rest. They each encrypt data, but do so in a different manner. Understanding the actual mechanism requires quite a bit of math, and there have been many, many books written on encryption and cipher theory. An excellent site with self-teaching guides on the theory and math can be found online. GPG is one of the most popular encryption tools for Linux and BSD, since it is reliable and free. There is also PGP, which is a commercial product, and a free version of PGP exists as well. Getting started with GPG is very straightforward. If your distribution of Linux does not already have the package installed, you can get the source files and install it yourself, as we will explain later in this chapter.
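The Alice-and-Bob exchange above can be sketched with a toy version of RSA. The primes below are deliberately tiny and the "document" is a single number, so this is purely an illustration of the public/private split, not usable cryptography:

```python
# Toy RSA illustration of the public/private key exchange described above.
# Real keys use primes hundreds of digits long; these numbers are tiny on
# purpose. Requires Python 3.8+ for the modular inverse via pow().
p, q = 61, 53               # Bob's secret primes
n = p * q                   # 3233: the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # Bob's private exponent (2753)

message = 65                # Alice's plaintext, encoded as a number < n
cipher = pow(message, e, n)    # Alice encrypts with Bob's PUBLIC key
plain = pow(cipher, d, n)      # Bob decrypts with his PRIVATE key

print(cipher, plain)        # an eavesdropper sees only the cipher value
```

Anyone can compute `cipher` from the public pair (n, e), but recovering `d` requires factoring n back into p and q, which is exactly the hard problem that real-sized RSA relies on.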

What are keys all about?
When you talk about encryption, one of the topics that comes up is the PKI, or Public Key Infrastructure. Keys are one of the cornerstones of how encryption works. There are two keys: a public key and a private key. Each key belongs to a specific user, but the private key is never given out to anyone. The private key also has a "passphrase" to help verify that whoever is using it is actually allowed to use it. In Figure 5.1 we see a sample of a public key.
Figure 5.1 Sample public key
[root@RedRum root]# gpg --export -a
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)

mQGiBEFDCiIRBACGVj55pko/OQ7Y7EeR7+LeSoHzZOnNOgipL459VdZvOrZAe6jH mLYnUKG/ttko4v4kcB5KlvT7dqEiCyZ7bPDv+b6XCfWGxefhngkCk5PEAGc4rfLT XThGNYs9mR2PpXglRV/N09zwnoPJliN4ItQpseHxYPI5SQ+X379Cn/EPKwCghhJP ose36UOJmn9R+uvqa2qiMosD+QHgACUWSx+mkdmnrWufMXBj2ieAM7D3JOKwjG7s hWnBNUv4excQI83vDcgHXp8rt/MuzoOvnW+pE2gezIx/5LZNn73ECvatOAO+nNtn Lt90OLsGB+JfEnjbXLkGy7lEsRLwmNQd3sYhK8mkp+hgu3w2+2pa3qLZ3pAj02le eg5/sO95TH8ScFBwpSEPWA9WDemLMlJLiX96CvJGTGMAAwUD/A9cnOKsJiRsbfay lKIaoX+SiKwUwPMjLyiSo6ErDLCiBKkAP+4hyd3tlr2AjaNnu1n5GxwW2tWxD1bF vdP7LS3w2OT6nUrUQYHNeqA3m30X1RGjIRdoX16Rkhmi0rcseyqOPkJcFzVv9nSX C3U+BPuwpQOCF6R/1cSPMtIH9NaDiEkEGBECAAkFAkFDCiMCGwwACgkQE9Zv4ja8 K8LK7gCfTiOCy1cTomEqgJWtixdAQ944U50An3tXzDyWdWKkIj3nW5CQHnbO2Fef


=SqnK
-----END PGP PUBLIC KEY BLOCK-----
[root@RedRum root]#

We can use the keys to "sign" documents or files, assuring the recipient that their integrity is intact on arrival. In Figure 5.2 we see the signature of a document.
Figure 5.2 Electronic signature of a document using GPG
[root@RedRum root]# cat lettertest.txt.asc
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This is a test of encryption using Linux and GPG

Doubting Thomas

This document is publicly signed to prevent changes
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.6 (GNU/Linux)

iD8DBQFBQxWUE9Zv4ja8K8IRAmewAJ0afmpknhN6QgIn9V5phFrIlqFN0QCffQ/n
BL+d7d8xyaMQsWJeNUGaQp0=
=au35
-----END PGP SIGNATURE-----
[root@RedRum root]#

The signature at the bottom of the note gives us a way to verify the integrity of the document as shown in Figure 5.3. Figure 5.3 Verifying document integrity using GPG
[root@RedRum root]# gpg --verify lettertest.txt.asc
gpg: Signature made Sat 11 Sep 2004 08:11:16 AM PDT using DSA key ID 36BC2BC2
gpg: Good signature from "Doubting Thomas <>"
[root@RedRum root]#

We see that the document verifies correctly and therefore is intact and has not been tampered with. We have seen how the keys are used, but how do the keys work? The theory is pretty simple. Using an encryption program like GPG or PGP, you will generate a set of keys, one public and one private. The next section will show you how to generate your own set of keys.
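The verification in Figure 5.3 works because GPG signs a hash of the message (SHA1, per the header in Figure 5.2), so any change to the text changes the hash and breaks the signature. Here is a minimal sketch of just that hashing step, using only Python's standard library; the strings are taken from the sample letter, and the sketch deliberately omits the actual signing:

```python
import hashlib

# The signed text from Figure 5.2, and a subtly tampered copy.
original = b"This is a test of encryption using Linux and GPG"
tampered = b"This is a test of encryption using Linux and gpg"

# GPG signs a digest like this one rather than the raw message.
print(hashlib.sha1(original).hexdigest())
print(hashlib.sha1(tampered).hexdigest())
```

Even a single-character change produces a completely different 40-digit digest, which is why a tampered document fails the `gpg --verify` check.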

Why do I need encryption?
Encryption is one of those things you wish you did not have to use, but the real world says you must. In fact, we use encryption all the time without really noticing it. Every time we use an ATM, purchase something online over SSL, use a charge card, or buy music online, we use encryption. The ATM uses encryption to keep someone from getting your account information, and this is the same reason e-commerce sites use SSL. Every charge card transaction is encrypted. Your satellite TV signals are encrypted. The music you buy online, and even some CDs, is now encrypted to keep you from making unauthorized copies. And this is just a small sample of the ways encryption is used. Many people do not trust others on the internet, including the government, and so they encrypt their email exchanges. A security feature coming to your home shortly is the ability to legally sign electronic documents with a digital signature, which will most certainly use a form of encryption to verify that once you sign the document, it cannot be altered by someone else. The IRS is testing this technology, as are many businesses.

How do I use GPG?
In order to use GPG, you have to install GnuPG, which you can download from the GnuPG website, or check to see whether it has already been installed by your distribution of choice. For example, my Red Hat 9 installation did not have GnuPG installed, but my Fedora Core 1 server did, as did my SuSE 9.1 server and Slackware 10 server. If you use rpm -q or whereis and look for gnupg, you may be in luck. You can also just type gpg --version, as we see in Figure 5.4; if you get a version back, GnuPG is in fact installed.
Figure 5.4 Getting the version of gpg using the command line
root@Slackware1:~# gpg --version
gpg (GnuPG) 1.2.4
Copyright (C) 2003 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Home: ~/.gnupg
Supported algorithms:
Pubkey: RSA, RSA-E, RSA-S, ELG-E, DSA, ELG
Cipher: 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH
Hash: MD5, SHA1, RIPEMD160, SHA256


Compression: Uncompressed, ZIP, ZLIB, BZIP2
root@Slackware1:~#

If you do not have GnuPG installed, download it along with a few other files that are needed to build it. They are listed here and are available from the same site as the GnuPG source files:
• Entropy Gathering Daemon
• GPGME
• Libgcrypt
• Libksba
• DirMngr
• Libgpg-error
• Libassuan

Alternatively, you can use YUM or APT to grab the GnuPG package, install it, and let the package resolver sort out the various dependencies. If you choose to install GnuPG from a tarball, download the files to a directory such as /usr/local/src/gpg and use tar to decompress them. Install the libraries first and then use the normal ./configure, make and make install routine to install GnuPG. The keyring and configuration file will be in the ~/.gnupg directory. The gpg.conf file is where different variables for gpg can be set, including which keyservers gpg should use when sending and receiving your public keys. In Figure 5.5 we see where in the gpg.conf file you would enable the public key servers.
Figure 5.5 Setting public key servers in gpg.conf
# Example HKP keyserver:
# x-hkp://
#
# Example email keyserver:
#
# Example LDAP keyservers:
# ldap://
# ldap://
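As a concrete illustration, an enabled entry is simply the same line without the leading #. The hostname below is only an example of the hkp form, not a recommendation; substitute a keyserver you trust:

```
# hypothetical enabled entry in ~/.gnupg/gpg.conf (hostname is illustrative)
keyserver x-hkp://
```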

To enable a key server, you just uncomment the line, or add servers of your own that you wish to use instead. Now that you have GnuPG installed, you need to make a pair of keys. This is easily done, as you can see in Figure 5.6 below.
Figure 5.6 Creating a key pair using GPG
[root@RedRum utilities]# gpg --gen-key


gpg (GnuPG) 1.2.1; Copyright (C) 2002 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Please select what kind of key you want:
   (1) DSA and ElGamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection? 1
DSA keypair will have 1024 bits.
About to generate a new ELG-E keypair.
              minimum keysize is  768 bits

              default keysize is 1024 bits
    highest suggested keysize is 2048 bits
What keysize do you want? (1024) 2048
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days

      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct (y/n)? y

You need a User-ID to identify your key; the software constructs the user id
from Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <>"

Real name: Doubting_Thomas
Email address: Doubting_thomas
Comment:
You selected this USER-ID:
    "Doubting-Thomas ( <>"


Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
++++++++++++++++++++..+++++.++++++++++++++++++++..++++++++++++++++++++++++
++++++.++++++++++++++++++++...+++++.++++++++++++++++++++++++++++++>+++++..
+++++...........................................+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++++++++++++++++++++++++++++++++.+++++++++++++++.++++++++++++++++++++++
+++++++++++++.++++++++++++++++++++++++++++++.+++++++++++++++.+++++.+++++++
+++>+++++...+++++.>......+++++............................+++++^^^
public and secret key created and signed.
key marked as ultimately trusted.
pub  1024D/551B3FB9 2004-09-13 Doubting-Thomas ( <>
     Key fingerprint = 9B05 0817 625D 83EF 992A  CEC3 F8AC 255D 551B 3FB9
sub  2048g/1B1F4ED5 2004-09-13

[root@RedRum utilities]#

We made our pair of keys with the gpg --gen-key command. This starts a sequence of questions asking which encryption method we want to use, our name, the number of bits to use for the keys, how long the keys should be valid, and a passphrase. Let's talk for a minute about the passphrase. Do NOT forget the passphrase; there is no way to recover it. If you forget what it is, you will need to regenerate your key pair and make sure everyone has the new public key. The passphrase should be long, ideally 8 characters or more, should use special characters, and should be very hard to guess (no birthdays, phone numbers, etc.). The passphrase is case SenSiTivE, so you can be creative with that option. Once we have typed all of this into GPG, a few minutes later we have a pair of keys to use. Now we can get on with the business of encrypting our data and files. We can export the public key so we can place it on a key server or send it to people who want to encrypt data for us. We can import keys for sending encrypted information and files to select people. Importing and exporting are easy to do, as we see here:
gpg --export [UID]


gpg --import [Filename]

The UID, or user ID, in our sample is Doubting-Thomas, so we would type gpg --export Doubting-Thomas to export our public key. When we export our key, we can send it to people for them to import, or we can place it on a public key server like the MIT PGP public key server seen in Figure 5.7.
Figure 5.7 MIT Public Key Server

When you place your public key on a server like this, anyone can search for your key in order to encrypt files and communications to you. Alternatives to the MIT public server are also available. Normally, in order to paste our key into a public server, we need to have the key in ASCII and not the default binary format. This is done by using the -a option, as we see here:
[root@RedRum utilities]# gpg --export -a Doubting-Thomas

The -a option produces output like that shown in Figure 5.8:
Figure 5.8 Our public key in ASCII
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.1 (GNU/Linux)

mQGiBEFF5zgRBADQ1K7osMfXVzGJJpf+UBoOWsG+YS57/vlXenzceJQxeKziLpXC vAewj7sUQKudxPzk2IV+X/vVLbmpiEs8ZurssdznNgDgGJvOMqO4pE2GHXvsrYa8 54D+5oF83bEgEPjH8xjedwNmD0O/yz8gzMTV+MCHSHV8LzBs/QWGNwHgywCgye1B pAb/RO3dIXz82puJs1y3TNcEAJe86UHDu/PaK02AAxp3N3xo5P7XdDDZZrPI7l0E pIyd22Qx0UufRzzSgENPwIr95Iu1nQv+pr9dj7GsvxlLoCHP118XjbOG2V02tqRr PENw2ohWRckuSaqTbdVh50eXHdy+Dt2dOvAN8ZTzV+4MLkpXF3hcO4jE4IWOYjv2 hm+DA/0WhZi2SoQw7YYwUDh7GXEzwo/Rf46DRTrY9sXg8aYWcXL0tlecb37GwaPG 8KvjQMNIQGvQ29WVxx8rOA9psx/47qitMOqe14yLxePcKFhyfsW0uLpdMqlM5Jsk


cq7jDJzpiDedB8PY5xIMbIMvXRiUYiKZr5elTixinTGjzEd5sLRBTWljaGFlbC1T
d2VlbmV5IChQYWNrZXRhdHRhY2suY29tKSA8bWlrZXN3ZWVuZXlAcGFja2V0YXR0
RtQOoXVd5qi0/fg9Jw7AdzRrSdpmlXSoqWFaaJ2WeZj/H0dfqYXBxelZKxBNtCzp
snDuexGi+IyJwphf7QL6lwEhAtX2SYwvIcOYcx2TsW8hJCbNuzVMYtOCR6flxvUC
3/m6Cy+ZdCWJNVe7R1dh42IM4yfNf1d3vixHJu2Lln5xQaoInuV/NY1WlwqPe2OX
CJ6hiEYEGBECAAYFAkFF50EACgkQ+KwlXVUbP7kJngCfdSmCjWO9jbY6dzh+tmG6
b8b/Lj8AoJTnyXe7bnTswd+8WM0rbga5hU/P
=Lxi1
-----END PGP PUBLIC KEY BLOCK-----
[root@RedRum utilities]#

This public key is what we will give to the key server. Once the server accepts the key, anyone can search for the name used to create the key and extract the public key, as we see in Figure 5.9:
Figure 5.9 Extracting the public key information
Public Key Server -- Verbose Index ``0x551B3FB9''

Type  bits/keyID        Date       User ID
pub   1024D/551B3FB9    2004/09/13 Doubting-Thomas ( <>
sig         551B3FB9               Doubting-Thomas ( <>

We get two results: a public key and a signature. To retrieve either, we click on the hyperlink and we get the actual key, which should be the key we submitted. We can also remove a key from the server, which can be useful if we have to update the key pair for any reason. If we want to export the public key to a file for archiving or emailing to someone, we can use the -o argument to tell GPG to save the results of the export as a file.
# gpg --export -o doubting-thomas-key.txt -a Doubting-Thomas

In this sample, we are exporting our public key as ASCII text to a file called doubting-thomas-key.txt. We are not limited to encrypting messages at the command line; we can also use applications that are PGP/GPG aware, such as the Evolution and Mozilla Mail email clients. We see how easy GPG is to use with Evolution in Figure 5.10.


Figure 5.10 Using GPG in Evolution Email to sign a document

When this email message is sent, it will automatically be signed for us by Evolution using our GPG key. So far we have been using GPG from the command line or within certain applications that are aware of PGP or GPG. There are also a few shells that provide a GUI front end for GPG. One of the easiest front ends to use is called Seahorse and is available for download. While the application itself is good, the documentation is a bit lacking at this point in time. In Figure 5.11 we see a screenshot of Seahorse's default screen.
Figure 5.11 Seahorse GUI management for GPG

We see our key listed under the Name column along with some information about the key, such as the bit length and the key ID. In Figure 5.12 we see the details and a menu for tasks such as exporting keys, deleting keys or adding additional keys.


Figure 5.12 Using Seahorse Key Manager

The GUI provides a nice, easy way to manage your keys without having to remember all the command-line arguments. Installing Seahorse is not much different from installing any other Linux application. Seahorse supports several different distributions of Linux, and in most cases you will need to download the source and run the traditional configure/make/make install routine. There are dependencies that have to be fulfilled first; we need the following files installed:
gnupg-1.2.6.tar.gz
gpgme-0.3.16.tar.gz
libgpg-error-1.0.tar.gz
seahorse-0.7.1.tar.gz

Our first step is to install GnuPG and we will use the following command.
[root@RedRum utilities]# tar -xzvf gnupg-1.2.6.tar.gz

This will decompress the files into a directory called gnupg-1.2.6, and from there we run our normal ./configure, make and make install. This installs only gpg, and while gpg is useful at this point from the command line, we really want a GUI interface, so we move on to step two.
[root@RedRum utilities]# tar -xzvf libgpg-error-1.0.tar.gz

Again, tar -xzvf will place the files we need into the libgpg-error-1.0 directory, and we run ./configure, make and make install. Step three is:
[root@RedRum utilities]# tar -xzvf gpgme-0.3.16.tar.gz

A note about gpgme and Seahorse: Seahorse wants to see a gpgme version greater than 0.3.14 installed. However, if you install version 0.9 of gpgme, Seahorse's version check does not understand that 0.9 is greater than 0.3.14. So save yourself some trouble, use 0.3.16, and life will be good. Finally, we install Seahorse itself:
[root@RedRum utilities]# tar -xzvf seahorse-0.7.1.tar.gz

When we are done with the ./configure, make and make install, we can test it by typing in the following command:
[root@RedRum seahorse-0.7.1]# seahorse &

This will run Seahorse, assuming we installed everything correctly. Now we can make a desktop icon or add it to a menu of our choice.

Managing keys
Now that we have our key or keys, we have to be able to manage them. This involves updating information, removing keys, adding keys and so on. It is pretty easy to do the basics with GnuPG. To list the keys we have, we use the --list-keys command, as we see here:
[root@RedRum root]# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub  1024D/551B3FB9 2004-09-13 Michael-Sweeney ( <>
sub  2048g/1B1F4ED5 2004-09-13

The first item listed is the keyring, and then the keys follow. In this sample, we have my public key listed.

Revoking a Key
One of the first things to do is to make a revocation certificate and put it away, just in case your key is compromised or you want to disable the key for some reason. In the sample below, the revocation certificate is generated with the --gen-revoke command:
[root@RedRum root]# gpg --output revoke.asc --gen-revoke

pub  1024D/551B3FB9 2004-09-13 Michael-Sweeney ( <>

Create a revocation certificate for this key? y
Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised


  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision? 3
Enter an optional description; end it with an empty line:
>
Reason for revocation: Key is no longer used
Is this okay? y

You need a passphrase to unlock the secret key for
user: "Michael-Sweeney ( <>"
1024-bit DSA key, ID 551B3FB9, created 2004-09-13

ASCII armored output forced. Revocation certificate created.

Please move it to a medium which you can hide away; if Mallory gets
access to this certificate he can use it to make your key unusable.
It is smart to print this certificate and store it away, just in case
your media become unreadable. But have some caution: The print system
of your machine might store the data and make it available to others!
[root@RedRum root]#

The revoke.asc file is the certificate you will put somewhere safe. The actual certificate will look like this:
[root@RedRum root]# less /root/revoke.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.1 (GNU/Linux)
Comment: A revocation certificate should follow

iFUEIBECABUFAkHhuSwOHQNjYW5jZWxlZCBrZXkACgkQ+KwlXVUbP7ln3gCgt6x1
M6sHT27pLI1BL6cNlUMw6goAn2wHVIXRX9I/tde4/mWg109edGvx
=pf3k
-----END PGP PUBLIC KEY BLOCK-----

Key Signing Parties
A popular event is a "Key Signing Party", where people bring a picture ID and their RSA keys to exchange with other attendees. In order to attend one of these parties, you will need to bring the KeyID, the type, a fingerprint of the key and the key size. Getting the fingerprint of your key is very easy: you use the --fingerprint option, as we see in this sample:
# gpg --list-keys --fingerprint

The email address identifies the key you want to fingerprint. You normally will not need to bring a disk or your PC, as either would increase the risk of a compromise taking place. This is one time when paper is a good rule. Excellent instructions for holding a key signing party can be found online. The idea behind the party is to build up a "web of trust", where you can sign keys and believe that each key does in fact belong to the person you think it does. This does not make your keys impervious to compromise, but it does help reduce the risk of unknown or fraudulent keys.

Additional Notes About GnuPG
While this book is for Linux users, it should be mentioned that our Windows brethren are not left out in the cold regarding GnuPG and encryption. For many of us who, by reason of employment, switch between different platforms, the ability to carry our keys between systems is very useful. To that end, we will give the Windows version of GnuPG a brief mention here. On the GnuPG website there is a link to download the compiled binaries for a Windows version, which has been zipped. The installation is very straightforward, but since there is no installer, we must do it by hand. It is also command-line driven, just like the Linux version. The good news is that there is a useful GUI front end for the command line called GPGshell. We see the basic key management screen in Figure 5.13.
Figure 5.13 GPGkeys shell for Windows

We can import the keys we generated on our Linux computers and use them on our Windows computers with relative ease. To use the keys on either system, all we have to do is export them from one system and import them on the other. You can easily go from Windows to Linux and back.


Securing Data with SSH
Secure Shell, or SSH, is a method of encrypting data over a network connection, using port 22. SSH is not just a telnet replacement, although that is what many people know about SSH and how they use it. SSH is actually a suite of applications that we can use to encrypt the data flow between hosts. This data flow can be telnet, FTP, file copies, email, X Windows, rlogin, rcp and much more. When the SSH connection is made, both the client and the server are authenticated using a digital certificate, and the passwords are encrypted. SSH uses RSA public key cryptography for the connections and authentication. The encryption algorithm can be Blowfish, DES or IDEA, with IDEA normally the default. SSH came into being in 1995 when Tatu Ylonen, the victim of a password-sniffing attack, decided that he could write something to prevent such an attack from happening again. He released his beta code to the public, and so began the life of SSH1, or version 1 of SSH. SSH1 has stayed popular due to the restrictive licensing of SSH2, but that is slowly changing as the SSH2 licensing has been modified and more and more companies adopt SSH2. SSH has two primary components: the SSH server and the SSH client. When you start an SSH connection, the SSH client will authenticate you to the server and encrypt the username and password before they are sent to the SSH server.
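One concrete detail of the protocol: before any encryption begins, an SSH server announces itself on port 22 with a plain-text identification banner of the form SSH-protocolversion-softwareversion. A small sketch of parsing such a banner (the banner string here is a made-up example, not captured output):

```python
# The identification banner an SSH server sends first, before key
# exchange and encryption begin. This example string is illustrative.
banner = "SSH-2.0-OpenSSH_3.9p1"

# The banner has three dash-separated fields: name, protocol, software.
name, proto, software = banner.split("-", 2)
print(name, proto, software)
```

This banner is the one unencrypted thing an eavesdropper always sees; everything after the key exchange, including the login password, travels encrypted.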

What is OpenSSH?
OpenSSH originally came from the OpenBSD project and has since been ported to virtually all distributions of Linux. Since OpenSSH was originally developed for OpenBSD, the code for the OpenBSD version is the cleanest, but there are "portable" versions for other systems. These portable versions contain what OpenSSH refers to as "portability goop" that makes the OpenBSD code portable, and they carry a "p" in the version ID. Since OpenBSD is free, there are no patented encryption algorithms used in OpenSSH. The encryption available is 3DES (triple DES), arcfour, AES (Advanced Encryption Standard) and Blowfish. To install OpenSSH on your system, it is recommended that you get the source files and compile them yourself. There are RPMs available for those who would prefer precompiled binaries, but given that this is encryption, if you want to be as secure as possible it is in your best interest to compile your own binaries. You can get the source or binaries from one of the many mirrors. For our purposes, we will be using openssh-3.9p1.tar.gz as our source. Now we need to use gzip and tar to get our source files decompressed and ready to compile, as we see in Figure 5.14.
Figure 5.14 Getting openssh ready to compile
[root@RedRum root]# gzip -d openssh* [root@RedRum root]# tar xvf openssh-3.9p1.tar


openssh-3.9p1
openssh-3.9p1/match.h
openssh-3.9p1/.cvsignore
openssh-3.9p1/CREDITS
openssh-3.9p1/ChangeLog
openssh-3.9p1/INSTALL
openssh-3.9p1/LICENCE
openssh-3.9p1/
openssh-3.9p1/OVERVIEW
openssh-3.9p1/README
openssh-3.9p1/README.dns
openssh-3.9p1/README.platform
openssh-3.9p1/README.privsep


openssh-3.9p1/sshd.0
openssh-3.9p1/sftp-server.0
openssh-3.9p1/sftp.0
openssh-3.9p1/ssh-rand-helper.0
openssh-3.9p1/ssh-keysign.0
openssh-3.9p1/sshd_config.0
openssh-3.9p1/ssh_config.0
[root@RedRum root]#

When we have everything uncompressed, we change to the directory of the source files and then run ./configure and then make. The final step is to run make install. The final lines from the make install are pretty interesting as we see in Figure 5.15: Figure 5.15 OpenSSH make install final output
Generating public/private rsa1 key pair.
Your identification has been saved in /usr/local/etc/ssh_host_key.
Your public key has been saved in /usr/local/etc/
The key fingerprint is:
b9:78:66:0b:71:69:c2:04:f5:3a:72:fd:fe:09:d8:72 root@RedRum.localdomain
Generating public/private dsa key pair.


Your identification has been saved in /usr/local/etc/ssh_host_dsa_key.
Your public key has been saved in /usr/local/etc/
The key fingerprint is:
af:d2:58:dd:74:a3:b7:a4:78:8d:64:8c:4a:d7:54:26 root@RedRum.localdomain
Generating public/private rsa key pair.
Your identification has been saved in /usr/local/etc/ssh_host_rsa_key.
Your public key has been saved in /usr/local/etc/
The key fingerprint is:
38:73:2d:80:41:1e:35:48:d9:fb:84:f0:10:d2:e8:30 root@RedRum.localdomain
/usr/local/sbin/sshd -t -f /usr/local/etc/sshd_config
[root@RedRum openssh-3.9p1]#

We see that the final steps of the installation generate our RSA public and private keys and our DSA keys. Once the make install is completed, we should be ready to ssh into our server.

The basics of SSH
SSH is very straightforward to use in its basic mode as a replacement for telnet. All we need is sshd running on the server and an ssh client. In this example, sshd is running on our server and we will open an ssh session to it. First we start a shell session on our workstation, then we type the following command:
$ ssh -l <login name> <host name>

This starts the SSH connection to the server; the -l tells ssh to use the name that follows it as the login name on the target host. Next we may or may not get a message saying that ssh cannot establish the authenticity of the host. This normally occurs on the first ssh login from the client, when the host's key is not yet in the client's list of known hosts. We are given the RSA fingerprint so we can verify that yes, we trust this host and login. In Figure 5.16 we see the entire sequence.
Figure 5.16 Login to host using ssh
[root@orion root]# ssh
The authenticity of host ' (' can't be established.
RSA key fingerprint is 50:bf:c5:fe:5f:ab:ee:b2:df:ab:12:a9:c1:54:a0:05.


Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
root@'s password:
Last login: Thu Sep  9 21:44:52 2004 from
[root@RedRum root]#

In this sample we see our login using the current user name and the approval of the RSA key. The RSA key is added to our keyring of known hosts so we will not be asked again unless the RSA key changes. We give our password, and assuming that it is correct, we are logged into the host via SSH which means all of our data stream between these two hosts is encrypted. There we go, we now have an encrypted replacement for telnet so our communications between our client and our server are now secure. It is important to understand that only the connection between these two systems is encrypted. If we were to ftp out of the server, that link would not be encrypted. We can do various things with ssh such as a secure copy or scp and secure ftp or sftp. The secure copy is very easy to do by using the following command.
$ scp <source file name> <destination file name>

In Figure 5.17 we see the scp command in action: Figure 5.17 Using scp to copy a file
[root@orion root]# scp install.log
root@'s password: xxxxxx
install.log   100%****************************| 18801   00:00
[root@orion root]#

In this sample we used the scp command to securely copy the file install.log to the host with a new name of install.log.bk. We are asked for the root password on, and then the file is transferred. The IP address of the host can just as easily be a domain name. The scp command works the other way as well: we can get a file from the remote system instead of putting one there. Of course, we can also specify a directory path on the source or on the target system. We can also use sftp, or secure file transfer protocol. The syntax is the same as with normal ftp; we can still use GET and PUT, and we open a connection to the server with the sftp <host name> command.

What else can SSH do?
SSH can do a lot more for us than act as a telnet or FTP replacement. As we mentioned earlier, we can encrypt our email, encrypt our X Windows sessions, and set up SSH user identities that eliminate the need for static passwords.
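The known-hosts behavior we saw in Figure 5.16 is a "trust on first use" scheme: the client records a host's key fingerprint on first contact and complains loudly if it ever changes. A toy sketch of that logic follows; the fingerprints are shortened, made-up strings, and real ssh of course stores full keys in ~/.ssh/known_hosts:

```python
# Trust-on-first-use host key checking, as ssh does with known_hosts.
# Fingerprints are abbreviated, made-up strings for illustration.
known_hosts = {}

def check_host(host, fingerprint):
    if host not in known_hosts:
        known_hosts[host] = fingerprint   # first contact: record and trust
        return "added"
    if known_hosts[host] == fingerprint:
        return "ok"                       # same key as last time
    return "MISMATCH"                     # key changed: possible spoofing

print(check_host("", "50:bf:c5"))   # first login
print(check_host("", "50:bf:c5"))   # later login, same key
print(check_host("", "aa:bb:cc"))   # key changed!
```

A MISMATCH is exactly the situation where the real ssh client warns and refuses to connect, since a changed host key can mean a man-in-the-middle attack rather than a reinstalled server.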


Let's look at securing a login with ssh without having to use a password. Where would this be useful? Well, since we are going to show you how to install Snort as an intrusion detection sensor and we need to manage that sensor, would it not make sense to use ssh to do this? To help simplify managing the sensor, so we can automate gathering logs, updating rules and other system management tasks, we will use ssh with private and public keys. So let's take a look at how to configure ssh to use private and public keys for the login. In order to log in to our system without typing a password each time, we will use what is called "Identity/PubKey Authentication". This requires us to create two keys, one private and one public. We will also have to edit the sshd_config file to tell sshd to use public key authentication and where the key files are kept. To start the process of creating our private/public key pair, we tell SSH to create a pair of RSA keys, as we see in Figure 5.18.
Figure 5.18 Creating a set of Private and Public RSA keys
[root@RedRum root]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <passphrase>
Enter same passphrase again: <passphrase>
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/
The key fingerprint is:
39:4b:04:e4:0e:b9:2f:f0:c9:ad:0b:94:41:b2:e7:8d root@RedRum
[root@RedRum root]#

We have now created our new RSA key pair and we will now take a look at just what we created. We will change to our home directory and look in the .ssh directory.
[root@RedRum root]# ls -la .ssh
total 20
drwx------   2 root root 4096 Dec  1 11:45 ./
drwxr-x---  34 root root 4096 Nov 28 09:16 ../
-rw-------   1 root root  951 Dec  1 11:45 id_rsa
-rw-r--r--   1 root root  221 Dec  1 11:45
-rw-r--r--   1 root root  909 Nov 15 09:36 known_hosts

We see that we have three files, and the two we are interested in are id_rsa and its public counterpart. These two files are our pair of RSA keys, and the contents of id_rsa are shown in Figure 5.19:
Figure 5.19 Contents of the RSA private key id_rsa
[root@RedRum .ssh]# less id_rsa
-----BEGIN RSA PRIVATE KEY-----



Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,81D594DBDCF50047

cqDxCN0MhjYw6ZnpVeV7QiZn9xNOjEEClV9vCXbxnIxtlXp+AYTh9V3v5vzk3HMj
ju63gxqwP15jbPW5p7N9sPcWTANddaAysL1WC8ZpxSSt51yI/E3CkwFaxQmRIYBD
7KAMiaYQ1/IpHs21bcTL0dmPEedXt0dXOZafw/wcPAdJzaZ1gInUb97VqcJh9ut7
Vew/GRVzcYw3Ntjd8lOJbbDx9/ZzBxAqUBXWu1aVhwmsJT4nu6IIQxLdBY1cAbHT
1meUv84JPpJ83GOk9UxpYXCfLZaMkxNXCZPYCirmC3vHTVwpemRCReXYuifBpF15
aW7VE8Jw4z2QHDyhE4bk8W7Hcm9wEAFMYzJ/br/MW43P1tIYbOH5YqUIUFfR7U64
1MR+4LCNqTEJRIurXv8BHrZmoomfF4HSivDHIHQTWZ//s2Hj0hl82OLdaJw6JFl6
67WP+/OiaBQJEsUgaiaCXpaIsXEYNtb289+6wA2GnDtqCYx/RaBZEByPtdrAj22l
tXdKnzkAwgPJvPfDS85PsX7EqGrR6/Gu5ZwXMc8mhpTYCQfGYRf8/yW8LeY0GEgn
LtxJGPuAVCg3XKa2Fk7I1uNoIUs1Zc61MIDDN7iElhScTqgqq1fy2IyOOPoryakr
aF8gDW6mYyGeY76NFuBVSURsG6U9Nx5ppK7w3vu+YRG8eUk/I09etM29E0sp4stj
o+3qdNiIzs70t8PZdNoEvCKo/M4tBdRN29sPZbyKyryC+dkvPmSQwrf6HATwfz/Y
VmFLvTJhR2jlP+EWVTPsVOJdTKLeylcnZuw/3tIwzdI=
-----END RSA PRIVATE KEY-----
[root@RedRum .ssh]# less
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAyqCFbp/iMg1dMQVASn8uUddOncVowItmAledzLrewzY7
hoNfQB1cvnEx8ZJZqvU9J85U4rzwr8MFfmZ06196IUAbo1PuAd8L2YXktWKJidO+QtSPu1D1pW
DmgwQbroCoauOFxdU0rvybXBihmgNjSByUrwumkoWgAgBJ8= root@RedRum
[root@RedRum .ssh]#

The id_rsa file must be protected at all costs since this is our private key. The second file is our public key, and this is the key that we can install on servers, sensors, or other hosts. You can change the default file names by using the -f <filename> option when you generate the RSA key pair. We need to copy our public key to the server or sensor that we want to manage using SSH, so of course we will use scp to securely copy our key over to the $HOME/.ssh directory on the target. If the target does not have a .ssh directory, we make one using mkdir and then give it permissions of 700 using chmod. Once we have the directory we can copy the key as we see in Figure 5.20:
Figure 5.20 Using scp to copy our public key to the target server
# scp root@'s password: 100% |************************| 221 00:00
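The target-side preparation described above can be sketched like this; a scratch directory stands in for the real target home, so the commands are safe to run anywhere:

```shell
# Use a scratch directory to stand in for the target user's home.
TARGET_HOME=$(mktemp -d)

# Create the .ssh directory if it does not exist and lock it down to 700;
# sshd refuses to trust keys kept in group- or world-accessible directories.
mkdir -p "${TARGET_HOME}/.ssh"
chmod 700 "${TARGET_HOME}/.ssh"

ls -ld "${TARGET_HOME}/.ssh"
```

In real use, `${TARGET_HOME}` would simply be the managed account's home directory on the remote host.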




Our next step is to configure SSH so the key is trusted by our account on the ssh server. We do this by editing the sshd_config file, which can be found in /etc/ssh/, to allow PubKey or Identity authentication. We see in Figure 5.21 the edits we need to make.
Figure 5.21 Editing the sshd_config file for Public Keys
#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile     .ssh/authorized_keys

We need to remove the hash marks in front of each of these lines to enable the entry. The RSA entry is for SSHv1 and the PubKey entry is for SSHv2. We also need to restart our SSH server once these edits are in place by using the command /etc/init.d/sshd restart. The last file in the .ssh directory is the 'authorized_keys' file. This file is a simple text file that contains a listing of trusted or authorized keys. We need to add our public key to this file as we see in Figure 5.22.
Figure 5.22 Adding our public key to the authorized_keys file
[root@RedRum .ssh]# cat >> authorized_keys
[root@RedRum .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAyqCFbp/iMg1dMQVASn8uUddOncVowItmAledzLrewz
Y7hoNfQB1cvnEx8ZJZqvU9J85U4rzwr8MFfmZ06196IUAbo1PuAd8L2YXktWKJidO+QtSP
u1D1pW9Sgk1cDmgwQbroCoaFxdU0rvybXBihmgNjSByUrwumkoWgAgBJ8= root@RedRum
[root@RedRum .ssh]#

We use the cat command to copy our public key into the authorized_keys file and then verify that the key was in fact copied. Each key must be on its own line within the file. We also need to tighten the permissions of the authorized_keys file to 600 using the chmod command. If we have gotten everything right, we should be able to ssh to our SSH server and be prompted for the passphrase as we see in Figure 5.23:
Figure 5.23 Using ssh and public key to login to a server
[root@RedRum]# ssh
Enter passphrase for key '/root/.ssh/id_rsa': <passphrase>
Last login: Thu Dec 2 10:46:18 2004 from

[root@FedoraC1 root]#
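The key-installation steps shown in the last few figures can be condensed into a short sketch; the key material below is a fake placeholder, not a real key, and a scratch directory stands in for the target account's ~/.ssh:

```shell
# Scratch .ssh directory standing in for the target account's ~/.ssh.
SSH_DIR=$(mktemp -d)

# A fake public key line used only for illustration.
echo "ssh-rsa AAAAFAKEKEYDATA root@RedRum" > "${SSH_DIR}/uploaded_key.pub"

# Append the public key to authorized_keys -- one key per line --
# and tighten the file's permissions to 600.
cat "${SSH_DIR}/uploaded_key.pub" >> "${SSH_DIR}/authorized_keys"
chmod 600 "${SSH_DIR}/authorized_keys"

# Verify the key landed in the file (prints the number of key lines).
grep -c "^ssh-rsa" "${SSH_DIR}/authorized_keys"
```

Appending with `>>` rather than redirecting with `>` matters: it lets the file accumulate one trusted key per line without clobbering keys already installed.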

SSH Port Forwarding
SSH offers a feature called port forwarding where we can set up an encrypted tunnel and then forward traffic through it. The traffic can be FTP, Telnet, X Windows, or anything else such as POP. This effectively creates a "poor man's" VPN connection. In Figure 5.24 we see a typical architecture for this use of SSH.
Figure 5.24 Typical Architecture requiring Port Forwarding using SSH

In the above scenario, we have our road warrior who needs to get to a resource on the corporate LAN. But the sysadmin does not have any budget for fancy VPN servers, so she does the next best thing. She sets up a Linux server whose job is to be an SSH connection server. She then configures the road warrior's laptop with an SSH client, configures the passphrase, and makes sure that the key ring has the correct keys. She also configures the connection to pass the correct traffic, which in our world will be FTP. Yes, there is secure FTP available as part of SSH, but let's assume that our resource, while it can speak FTP, is clueless about SSH, so we cannot use the SSH native secure FTP, or SFTP. So what we will do is first log into the SSH server, making our secure and encrypted SSHv2 connection from the laptop over the public internet into the DMZ. Then in turn, the SSH server forwards the connection from the DMZ to our clueless resource on the LAN as a normal FTP connection. So how does this all work? Rather simply, as it turns out. In our sample below, we are setting up an SSH session between RedRum and Orion. Orion will play the role of the SSH server in the DMZ. In Figure 5.25 we see the connection being made to Orion using the -L parameter, which is our flag for port forwarding. The -C enables compression over the SSH link.
Figure 5.25 Making the SSH connection to the DMZ host
[root@RedRum root]# ssh -C -L 2121:
root@'s password:
Last login: Wed Dec 22 15:32:45 2004 from
[root@orion root]#

The command tells SSH to connect to our server Orion and configure port forwarding: on RedRum the local port will be 2121, and it is forwarded to port 21 on the FTP server. In Figure 5.26 we have made our SSH connection to Orion, and now we open a new terminal on RedRum and FTP to the new forwarded port of 2121:
Figure 5.26 Starting the forwarded FTP session
[root@RedRum root]# ftp localhost 2121
Connected to localhost (



220 ArGoSoft FTP Server, Version 1.01 (
Name (localhost:root): msweeney
331 User name OK, need password
Password: xxxx
230 User msweeney logged in successfully
Remote system type is Windows_NT.
ftp> ls
227 Entering passive mode (192,168,50,235,18,148)
150 Opening binary data connection
12-22-04  03:24PM    <DIR>          .
12-22-04  03:24PM    <DIR>          ..
12-22-04  03:24PM          1657263  FileZilla_Server_0_9_4d.exe
226 Transfer complete
ftp>

In our example, the sample resource is a Windows FTP server which does not natively support SSH, hence the forwarding. We have opened an FTP session on the local host, RedRum, which in turn forwards port 2121 to Orion, who in turn forwards the request as a normal FTP request to our FTP server. We see in Figure 5.27 the transfer of the file yum-list.txt, which resides on RedRum, to our Windows FTP server over port 2121, which is our encrypted SSH link to Orion.
Figure 5.27 Putting the file yum-list.txt from RedRum to the FTP Server
ftp> put yum-list.txt
local: yum-list.txt remote: yum-list.txt
227 Entering passive mode (192,168,50,235,18,167)
150 Opening binary data connection
226 Transfer complete
68101 bytes sent in 0.0324 secs (2.1e+03 Kbytes/sec)
ftp> ls
227 Entering passive mode (192,168,50,235,18,168)
150 Opening binary data connection
12-22-04  03:48PM    <DIR>          .
12-22-04  03:48PM    <DIR>          ..
12-22-04  03:24PM          1657263  FileZilla_Server_0_9_4d.exe
12-22-04  03:48PM            68101  yum-list.txt
226 Transfer complete
ftp>

We can port forward X Windows, POP, SMTP, or virtually any protocol, so if you find yourself with a requirement for a secure connection but the target cannot use SSH and you cannot use a VPN, then this technique may be of use.
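The general shape of a port-forwarding command can be sketched as below; the hosts and ports are hypothetical, so the command is assembled and printed rather than executed:

```shell
# Hypothetical values -- substitute your own environment.
LOCAL_PORT=2121            # port opened on the local machine
TARGET_HOST=ftp.internal   # host the SSH server forwards to
TARGET_PORT=21             # service port on the target
SSH_SERVER=dmz.example.com # the SSH server in the DMZ

# -C compresses the tunnel; -L binds LOCAL_PORT locally and relays it,
# via SSH_SERVER, to TARGET_HOST:TARGET_PORT.
FORWARD_CMD="ssh -C -L ${LOCAL_PORT}:${TARGET_HOST}:${TARGET_PORT} ${SSH_SERVER}"
echo "${FORWARD_CMD}"

# A client would then connect to localhost:${LOCAL_PORT}, and the traffic
# rides the encrypted tunnel as far as SSH_SERVER.
```

Note that only the leg between the client and the SSH server is encrypted; the hop from the SSH server to the final target travels as ordinary cleartext protocol traffic.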

What is an X.509 Certificate?
A digital certificate is a digitally signed statement from someone or some company that uses a public key and has a value or a level of privilege associated with it. X.509 certificates are part of the ITU-T X.500 recommendation. The certificate is made up of several different pieces of information, such as the certification authority, the owner's public key, the CA's public key, and information about the owner, like a street address and company name. The Certification Authority, or CA, is a trusted third party that signs the certificate to assure the certificate is valid. Two of the better known CAs are Thawte and Verisign. Thawte will supply a free certificate for personal email, but for anything else you will have to pay for the certificate. Or we have a better choice, one where we can get a real signed certificate for free! Yes, that is correct: a real signed certificate for free, issued by CAcert. CAcert is a very easy way to get a legitimate and signed certificate to use on websites, email, or wherever you need to use a certificate. The most widely used application that uses certificates is your web browser and SSL. Each time you log into your bank or purchase something online, you are using SSL and certificates. We can now use certificates for authorization on network equipment such as Cisco routers and switches. We can use certificates with our IPsec connections instead of using shared or preshared keys. And given that in the next version of IPsec shared and preshared keys may not be allowed, we should become familiar with making and using X.509 certificates.
Make Your Own Certificates
For lab work, internal use, or even private communication over the internet, we can make our own certificates, which is pretty easy to do. We will start by generating our key pair using the following two openssl commands.
% openssl genrsa -out hostname.key 1024
% openssl req -new -key hostname.key -out hostname.csr

The results of these commands are listed here in Figure 5.28: Figure 5.28 Generating Certificate Keys
[root@FedoraC1 root]# openssl genrsa -out fedorac1.key 1024
Generating RSA private key, 1024 bit long modulus
.......++++++
..++++++
e is 65537 (0x10001)
[root@FedoraC1 root]# openssl req -new -key fedorac1.key -out fedorac1.csr
You are about to be asked to enter information that will be incorporated



into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:Orange
Organization Name (eg, company) [My Company Ltd]:Packetattack
Organizational Unit Name (eg, section) []:Engineering
Common Name (eg, your name or your server's hostname) []:fedorac1
Email Address []

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:nogood
An optional company name []:Packetattack
[root@FedoraC1 root]#
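The same key and CSR can be produced without the interactive prompts by passing the Distinguished Name on the command line with -subj; a sketch, reusing the sample DN values from above (file names are illustrative):

```shell
# Generate the RSA private key (1024 bits to match the example above;
# longer keys are preferable in practice).
openssl genrsa -out demo.key 1024

# Create the certificate signing request non-interactively by supplying
# the Distinguished Name with -subj instead of answering prompts.
openssl req -new -key demo.key -out demo.csr \
    -subj "/C=US/ST=CA/L=Orange/O=Packetattack/OU=Engineering/CN=fedorac1"

# Confirm the request is well formed.
openssl req -in demo.csr -noout -verify
```

This form is handy in scripts, where there is no operator around to answer the DN prompts.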

Once you have completed these two commands and had the certificate request signed (yielding hostname.crt), move the resulting files to your Apache server's configuration directory, such as /usr/local/apache2/conf/, and add the following lines to your httpd.conf configuration file:
SSLCertificateFile /www/conf/hostname.crt
SSLCertificateKeyFile /www/conf/

Are You Certified?
As we said earlier, our certificates can be signed by a third party who will verify the integrity of the certificates. But we can become our own CA and sign our own certificates for use on our own networks or even over the internet. To start the process of creating our own CA, we will run a script called CA as we see in Figure 5.29.
Figure 5.29 Creating Your Own Certificate Authority
[root@FedoraC1 misc]# ./CA -newca
CA certificate filename (or enter to create)

Making CA certificate ... Generating a 1024 bit RSA private key ......++++++ .........++++++



writing new private key to './demoCA/private/./cakey.pem'
Enter PEM pass phrase:xxxx
Verifying - Enter PEM pass phrase:xxxxx
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:Orange
Organization Name (eg, company) [My Company Ltd]:Packetattack
Organizational Unit Name (eg, section) []:engineering
Common Name (eg, your name or your server's hostname) []:packetattack
Email Address []

If you get tired of keying in the default values, you can edit the openssl.cnf file and change the defaults as you see fit. In Figure 5.30 we see the defaults in the openssl.cnf file. Figure 5.30 Default settings in openssl.cnf
[ CA_default ]

dir             = ./demoCA              # Where everything is kept
certs           = $dir/certs            # Where the issued certs are kept
crl_dir         = $dir/crl              # Where the issued crl are kept
database        = $dir/index.txt        # database index file.
new_certs_dir   = $dir/newcerts         # default place for new certs.

certificate     = $dir/cacert.pem       # The CA certificate
serial          = $dir/serial           # The current serial number
crl             = $dir/crl.pem          # The current CRL
private_key     = $dir/private/cakey.pem# The private key
RANDFILE        = $dir/private/.rand    # private random number file

x509_extensions = usr_cert              # The extentions to add to the cert

Changing the length of time for which the certificate is valid is easily accomplished. We change to the /demoCA directory and then use openssl to extend the validity from the default of one year to ten years, as we see in our example in Figure 5.31:
Figure 5.31 Changing the length of time for which the certificate is valid
# openssl x509 -in cacert.pem -days 3650 -out cacert.pem \
    -signkey ./private/cakey.pem
Getting Private key
Enter pass phrase for ./private/cakey.pem:
#
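As an aside: when creating a fresh self-signed certificate rather than re-signing an existing one, the validity period can be set in a single step. A sketch, with illustrative file names and DN values:

```shell
# Create a new key and a self-signed certificate valid for ten years.
# -nodes leaves the key unencrypted, which is acceptable for a throwaway demo.
openssl req -x509 -newkey rsa:1024 -nodes \
    -keyout demo-ca.key -out demo-ca.crt -days 3650 \
    -subj "/C=US/ST=CA/L=Orange/O=Packetattack/CN=packetattack"

# Show the expiry date of the new certificate.
openssl x509 -in demo-ca.crt -noout -enddate
```

For a production CA key you would omit -nodes so the key is protected by a pass phrase, as in the CA script runs shown above.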

At this point we have our certificate authority ready to be used. Now we can create our certificate signing request. We will use the CA -newreq command to generate the request as we see in Figure 5.32. Figure 5.32 Certificate Signing Request
# /usr/share/ssl/misc/CA -newreq
Generating a 1024 bit RSA private key
............................................++++++
.......++++++
writing new private key to 'newreq.pem'
Enter PEM pass phrase:xxxx
Verifying - Enter PEM pass phrase:xxxx
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:Orange
Organization Name (eg, company) [My Company Ltd]:Packetattack
Organizational Unit Name (eg, section) []:Sales
Common Name (eg, your name or your server's hostname) []:FedoraC1



Email Address []

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Request (and private key) is in newreq.pem
[root@FedoraC1 misc]#

So now we have a file called newreq.pem and this file contains the certificate signing request and the private key. We now need to sign the file using our very own CA that we created. In Figure 5.33 we see the process of using CA -sign to sign our certificate. Figure 5.33 Signing our certificate with our own CA
# /usr/share/ssl/misc/CA -sign
Using configuration from /usr/share/ssl/openssl.cnf
Enter pass phrase for ./demoCA/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 1 (0x1)
        Validity
            Not Before: Dec 16 23:57:58 2004 GMT
            Not After : Dec 16 23:57:58 2005 GMT
        Subject:
            countryName               = US
            stateOrProvinceName       = CA
            localityName              = Orange
            organizationName          = Packetattack
            organizationalUnitName    = Sales
            commonName                = FedoraC1
            emailAddress              =
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                D7:48:40:EE:53:50:6D:86:7E:FB:A8:96:FA:50:9F:0A:AE:74:0B:34
            X509v3 Authority Key Identifier:



                keyid:6F:2E:BF:05:ED:66:C2:B5:20:00:75:E0:25:06:3B:75:72:B0:
                DirName:/C=US/ST=CA/L=Orange/O=Packetattack/OU=engineering/CN=packetattack/
                serial:00
Certificate is to be certified until Dec 16 23:57:58 2005 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
        Signature Algorithm: md5WithRSAEncryption
        Issuer: C=US, ST=CA, L=Orange, O=Packetattack, OU=engineering, CN=packetattack/
        Validity
            Not Before: Dec 16 23:57:58 2004 GMT
            Not After : Dec 16 23:57:58 2005 GMT
        Subject: C=US, ST=CA, L=Orange, O=Packetattack, OU=Sales, CN=FedoraC1/
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:b1:71:f1:37:80:6f:55:72:e3:82:a6:01:f9:76:
                    91:c2:e6:df:6b:a9:7f:3f:fc:54:f5:26:c6:cc:fc:
                    f4:47:e1:e7:fa:0c:63:a2:79:19:79:54:76:5f:29:
                    d2:7b:f3:e7:f6:0f:86:7c:bf
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                D7:48:40:EE:53:50:6D:86:7E:FB:A8:96:FA:50:9F:0A:AE:74:0B:34
            X509v3 Authority Key Identifier:



                keyid:6F:2E:BF:05:ED:66:C2:B5:20:00:75:E0:25:06:3B:75:72:B0:
                DirName:/C=US/ST=CA/L=Orange/O=Packetattack/OU=engineering/CN=packetattack/
                serial:00

    Signature Algorithm: md5WithRSAEncryption
        5e:6f:af:e5:63:05:6a:34:6d:83:71:ce:e5:25:0a:d7:92:11:
        d8:6c:05:34:8e:f0:34:7f:fb:49:e5:cc:28:2d:06:a5:2e:27:
        d9:25:22:5a:b9:de:cf:fa:47:6c:ba:d0:0e:ba:48:bb:7c:b2:
        f5:11
-----BEGIN CERTIFICATE-----
MIIDujCCAyOgAwIBAgIBATANBgkqhkiG9w0BAQQFADCBlDELMAkGA1UEBhMCVVMx
CzAJBgNVBAgTAkNBMQ8wDQYDVQQHEwZPcmFuZ2UxFTATBgNVBAoTDFBhY2tldGF0
Lie7p/yWADvUNYTscL/DDNrj/a/tpOvSzG3B2QppCWtaX2cKoZ/NR7o4S5nAbtsR
k/0T7cF+5IppBdASB2YwaFoZylmgstKO+InZJSJaud7P+kdsutAOuki7fLL1EQ==
-----END CERTIFICATE-----
Signed certificate is in newcert.pem
[root@FedoraC1 misc]#

The various keys in the last example were shortened to conserve space, so don't try to count the characters. At this point we have a self-signed certificate. It is now highly recommended to change the names of the files from newreq.pem and newcert.pem to file names that are more useful and descriptive, like myvpngateway_cert.pem and myvpngateway_key.pem. Now we should discuss what to do if either the certificate or the private key is compromised: we need to revoke the certificate. Revoked certificates are listed in the certificate revocation list, or CRL. Our first step is to generate the CRL with openssl, as we see in Figure 5.34.
Figure 5.34 Revocation of a certificate
[root@FedoraC1 misc]# openssl ca -gencrl -out crl.pem
Using configuration from /usr/share/ssl/openssl.cnf
Enter pass phrase for ./demoCA/private/cakey.pem:
[root@FedoraC1 misc]#

This command will create an empty CRL file called crl.pem. In order to revoke the certificate you have to have the certificate in hand. The certificate would be stored in the demoCA/newcerts/ directory. If we read the demoCA/index.txt file, we can see the names of the certificates. We see our demo certificate listed in Figure 5.35. Figure 5.35 Listing of index.txt to see the names of the certificates
[root@FedoraC1 demoCA]# less index.txt






V	051216235758Z		01	unknown	/C=US/ST=CA/L=Orange/O=Packetattack/OU=Sales/CN=FedoraC1
[root@FedoraC1 demoCA]#
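The revocation itself is done with openssl ca -revoke, pointing at the serial-numbered copy of the certificate that the CA keeps under demoCA/newcerts/. Because the command needs a live CA directory and the CA pass phrase, this sketch (with a hypothetical serial number) just assembles and prints the commands:

```shell
# Hypothetical path: after signing, the CA keeps a copy of each
# certificate named by serial number under demoCA/newcerts/.
CERT="demoCA/newcerts/01.pem"

# Revoke the certificate, then regenerate the CRL so that the
# revocation is actually published.
REVOKE_CMD="openssl ca -revoke ${CERT}"
GENCRL_CMD="openssl ca -gencrl -out crl.pem"

printf '%s\n' "${REVOKE_CMD}" "${GENCRL_CMD}"
```

After a revocation, the status flag for that entry in demoCA/index.txt changes from valid to revoked, and the refreshed crl.pem is what you distribute to relying parties.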


How to use the Certificate
We will take a quick look at how to use our new certificates in a few common scenarios, such as a web browser and email. An easy way to push the new certificate to the browser is to use a web link where the user can just download the new certificate. Note that the certificate we link to does not have the .pem extension anymore. We need to convert our certificate to a version that will work in the browsers, so we use a command that strips off what we don't need and keeps just the root certificate. In Figure 5.36 we see this conversion taking place.
Figure 5.36 Making a Trusted Root Certificate
#/usr/share/ssl/misc/demoCA# openssl x509 -in cacert.pem -out cacert.crt

To import this new certificate into Internet Explorer, you go to Tools | Internet Options | Content | Certificates | Import and import the new cacert.crt file. In Figure 5.37 we see the results of importing the certificate.
Figure 5.37 Details of the new root certificate imported into IE

Mozilla, Firefox and other browsers expect to see a PKCS12, or Personal Information Exchange Syntax Standard file. The PKCS12 is a standard that helps keep the private keys and certificates secure. To make a PKCS12 file, we need to use a new command with openssl and then export the new certificate. In Figure 5.38 we see the entire command for creating the PKCS12 file for Firefox.



Figure 5.38 Creating a PKCS12 certificate
# openssl pkcs12 -export -in newcert.pem -inkey newreq.pem \
    -out mycert.p12 -name "MY CERTIFICATE"
Enter pass phrase for newreq.pem: xxxx
Enter Export Password:xxxx
Verifying - Enter Export Password:
#

We can see the new parameter called pkcs12 being used and the -export argument right after the command. We then tell pkcs12 which file we want to convert and give it a friendly name of “MY CERTIFICATE”. Now we have a certificate called mycert.p12 and we can either import it into our browser or put it on a website to be imported on a click. If we click on a link from a website, then Mozilla/Firefox will ask us if we want to open the file with PFXfile or to download the file. If you choose to open the certificate with PFXfile, it will launch the import wizard as we see in Figure 5.39. Figure 5.39 Using PFXfile Wizard to import mycert.p12

Our other choice is to copy the file to our hard drive and then use the certificate manager to import the certificate as we see in Figure 5.40.



Figure 5.40 Using Certificate Manager to import mycert.p12

Microsoft Internet Explorer can also import the same PKCS12 certificate by downloading the file and importing it just as we did for the .crt certificate. Now that we have spent all this time learning about X.509 certificates, CAs, PKCS12 and more, there is a way to manage all of this from a nice GUI interface. Why didn't I mention the GUI front end earlier? Because the GUI hides all the coding and commands, and when things are hidden from view, it is much harder to learn about them. Now that we know the machinery behind the curtain, we can work on making our life easier. There is a tool called SimpleCA, available for both Windows and Linux. This GUI front end will let you do most of the same things we have been doing from the command line interface. In Figure 5.41 we see a screen shot of SimpleCA in action.
Figure 5.41 SimpleCA managing certificates

There are some limits to SimpleCA: the life of a root certificate is capped at 10 years; all certificates will have the same information; and the certificates themselves have a one-year lifespan. But you can generate the root certificate, generate the private and public keys for both the server and clients, revoke certificates, and publish a CRL. The installation involves simply downloading the zip file, unzipping it, and then using chmod to make the application executable.

Secure Socket Layer
Secure Socket Layer, or SSL, was developed by Netscape and is the standard method we use to provide a secure HTTP connection between a client and a webserver. We can easily see when we are using SSL in a browser, as the address begins with https:// instead of http://. We see SSL heavily used in today's world of commerce, private sites, banking, and many other internet uses. But what is SSL really? How do we get SSL? How do we use SSL? In this next section we will go over the details of SSL and how we can install and use it. SSL uses port 443 and provides a secure and reliable link between two hosts, or a client and a host. SSL is really two protocols at work: one is the SSL Record Protocol and the second is some type of reliable transport protocol such as TCP. When an SSL connection is started, there is a negotiation between the server and the client to decide on the encryption algorithm and keys. After this initial negotiation, everything is encrypted. Modern versions of SSL use a 128-bit key or even longer keys. The successor to SSL is called Transport Layer Security, or TLS, and the specifications can be found in RFC 2246. Both of these security protocols use public key encryption and public key certificates. For applications that do not support SSL natively, we can use STUNNEL to supply a secure and encrypted link. STUNNEL has been reported to work with systems such as AmigaOS, BeOS, Plan9, and SGI IRIX, along with more popular choices such as Linux (big surprise?) and Windows.

SSL and Apache
To install SSL on Apache 1.3, we download the Apache 1.3 source and install it using configure, make, and make install. This sets up the process to install SSL with Apache 1.3. Our next step is to download the source of the mod_ssl module. We use tar -xvzf to unpack the files. We need to tell mod_ssl that we are using Apache and where to find it. When we run the configure command, we have some options that we need to enable, as we see here:
# cd mod_ssl-2.8.14-1.3.29/
# ./configure --with-apache=../apache_1.3.29

Now we will change to the Apache directory and then we need to configure Apache with SSL enabled as we see here:
# cd apache_1.3.29



# SSL_BASE=/usr ./configure --enable-module=ssl \
    --prefix=/usr/local/apache1 --enable-module=most --enable-shared=max

Once we have finished compiling Apache, we should be ready to go with our web server and have SSL enabled for our use. Apache 2.0 has SSL included and just needs to have the --enable-ssl parameter set during the configure.
# ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most \
    --enable-ssl --with-ssl=/usr

Options are always good to have when building a webserver with SSL. We can also use Apache-SSL, whose licensing allows us to use it for free for both personal and business use. It offers 128-bit encryption and we also have the source code. This version of Apache is built using Apache, SSLeay, and OpenSSL.
Resources
Reference Card for Apache Mod_SSL
How PGP Works
GnuPG
GnuPG Handbook
OpenSSH
OpenSSL
Generating x.509 Certificates
CAcert for free signed certificates
X.509 RFC



Chapter 6
“Thus those skilled in war subdue the enemy's army without battle. They conquer by strategy.” Sun Tzu

Detecting Intruders
In today's world of viruses, worms, script kiddies and the like, we need a way to monitor our network for impending attacks or unwanted intruders. This device needs to be passive so as not to warn the would-be intruder that he is being monitored, and it needs to provide a good logging function so we have a record of what is happening and what has happened. An Intrusion Detection System, or IDS, is the way we gain this insight into our network and set up an early warning system for a possible attack or breach in our security. There are two types of intrusion detection. The first type of sensor is the Network Intrusion Detection System, or NIDS, and the most common NIDS that most of us have heard about is called "Snort". Snort is an open source application that uses a personal computer as the sensor and is available for free, for both Linux and Windows. The second type of IDS is called Host-based Intrusion Detection, or HIDS. HIDS systems are not as well known, but they are still very popular. The one most have heard about is called "Tripwire", which is also open source. There is a commercial version of Tripwire available that provides a significantly higher level of functionality than the older open source version. A second HIDS is called AIDE, which stands for Advanced Intrusion Detection Environment. The problem with both the open source Tripwire and AIDE is that neither is up to date: AIDE is the newer of the two but has not been updated since 2003, and Tripwire has not been updated since 2001. An alternative to the traditional IDS comes in the form of a hybrid HIDS called "Prelude". Prelude is described as a hybrid IDS since it uses different sensors for different threats. Prelude also differs from many other IDS systems in that it uses countermeasure agents to perform an action to stop a threat.
Most IDS systems will send a notice but not actively try to stop the threat. The sensors used by Prelude can be anything from a simple cron job looking for an open port to a full signature-based sensor. This gives the network engineer a tremendous amount of flexibility in designing a security architecture using an IDS layer. Both Snort and Prelude support a message format called IDMEF, or Intrusion Detection Message Exchange Format. In the case of Snort the support comes from a plugin, while Prelude has the support built into the application. The IDMEF is a draft standard from the Internet



Engineering Task Force, or IETF, that gives a standard messaging format for IDS systems to report alerts.

Deploying an IDS
IDS systems are normally composed of two parts: one part is the actual sensor or sensors that are placed on the network; the second part is a management console or interface. Sometimes this can be on the sensor itself but normally it is a different workstation that gets the logs from the sensor by way of a secure channel, such as SCP or SFTP and then runs some type of reporting application. The network based IDS systems will be positioned at various places on the network where they can see all or part of the network traffic. In Figure 6.1 we see a generic network with a DMZ protected by a firewall and a router isolating the LAN. Figure 6.1 Basic Snort Sensor Deployment


[Figure 6.1 diagram: Firewall, DMZ, DNS, Router, LAN, Snort sensors, User on LAN]

In this scenario we see that we used two Snort sensors. The sensors can only see what is on their segment, much like a packet sniffer can only see what is on the segment it’s attached to. We have one sensor deployed in the DMZ to act as our early warning system, and we have one on the LAN to warn about any intrusions from the inside or possibly one that made it through the DMZ. This is a classic IDS deployment of the sensors. The host-based IDS sensor is installed on the host and acts as the last defense of the host. In Figure 6.2 we see a typical host-based IDS with each host protected by a “bubble” of security provided by the HIDS.


Figure 6.2 Sample of Host Based IDS Deployment


(Diagram: the same network, with each host in the DMZ and on the LAN wrapped in its own HIDS “bubble” of protection.)

A HIDS will provide a very detailed audit trail of what happened on or to a host, who did it, and when. It will sound the alarm if something happens that you have set the rules to watch for. Some types of HIDS can even sense that a configuration has changed and, in response, restore an archived version of the configuration to the host without any assistance from the engineer. While being this proactive is a good thing, take care not to have a legitimate configuration update wiped out by a HIDS that mistakes the update for you hacking your own host. A good IDS will provide logging and then generate reports from that logged information. Without the reports, it is almost impossible to decide what to do or to know what might be lurking in your network. With a good logging function, you can also go backwards in time to see how something might have breached your security.
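The "simple cron job as a sensor" idea mentioned earlier can be sketched in a few lines of shell. The file name, its contents, and the alert action here are illustrative assumptions; a real job would walk a list of configuration files, keep the baseline checksums somewhere tamper-resistant, and mail or syslog the alert:

```shell
# Minimal sketch of change detection by checksum comparison.
watched=$(mktemp)                                  # stands in for a real config file
echo "PermitRootLogin no" > "$watched"
baseline=$(md5sum "$watched" | awk '{print $1}')   # record the known-good checksum
echo "PermitRootLogin yes" > "$watched"            # simulate tampering
current=$(md5sum "$watched" | awk '{print $1}')
if [ "$current" != "$baseline" ]; then
    echo "ALERT: $watched has been modified"
fi
rm -f "$watched"
```

Run from cron every few minutes, even a check this crude makes a serviceable low-budget host sensor.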

What is Snort
Snort is an open source intrusion detection system that turns a basic personal computer into a high-performance intrusion detection sensor, or IDS. As a sensor, the Snort box will sniff, or “snort,” the data packets as they flow past and run signature-pattern and Boolean matches based on a series of rules that have been configured. These rules are very flexible and can be quite powerful in the analysis they accomplish. While Snort is a command line application, several GUI front ends have been written for it, and there are commercial products that have been built on top of Snort or that used Snort as their starting point. Snort has two strong points: its flexibility and its extensive logging capabilities. The rules and rule sets that Snort uses are its heart and soul, and they provide the flexibility that has let Snort stand the test of time as the premier open source IDS. The log files are in the tcpdump format, and there are powerful tools available to analyze them, such as the Analysis Console for Intrusion Databases, or ACID. A second log file analysis tool is Sawmill, a commercial product and one of the most flexible in that it can read and generate reports from virtually any log file, including Snort's. We discuss ACID at the end of this chapter and Sawmill in Chapter 8.

Building a Sensor
Installing Snort is a relatively straightforward endeavor. First, we have to check our server and make sure we meet certain requirements; then we need to go to Snort's web site and download the current binaries or source files. The hardware requirements are very easy to meet, since Snort is a lightweight application to run under Linux. If we are going to watch a 10/100 link, we should have at least a Pentium III and 128 MB of RAM. If we plan to watch a gigabit link, we need a P4 with 512 MB of RAM and a server-class NIC. Speaking of NICs, we should use two in our sensor: one to snort with and one to manage the sensor and send logging information with. Snorting data can be done with a single NIC, but two are better all the way around for both performance and security.
This book is not distro specific, but it IS Linux specific. While much will apply to BSD, some parts will not. I will recommend at this time that if you want to install Snort on a FreeBSD box, you get a copy of Keith Tokash's excellent guide to installing Snort on FreeBSD, FreeBSD47RELEASE-Snort-MySQLVer1-3.pdf.

If we are going to install Snort from RPMs or other precompiled packages, then we need to decide if we are also going to use some optional software such as Apache, PHP or MySQL. As it turns out, without a way to store the log files in some type of meaningful manner, Snort is somewhat useless, so it's best to resign yourself to installing MySQL or some other database. Also, to get the reports from Snort, it is easiest if we make the information available by using a browser which means we will be installing Apache on our sensor. If we are deploying more than a single sensor, then we would put only Snort on the sensors and feed the log files back to the management station. Also, many of the report generating applications such as the Analysis Console for Intrusion Databases, or ACID, will require installation of PHP. At this point, we are really building a Linux, Apache, MySQL and PHP or LAMP server.
For a solid guide on installing all the software for LAMP, check out Bruce Timberlake's LAMP installation guide.

Detecting Intruders

p. 132

In the world of security, it is almost always best to install our applications from source files. This way, nothing can be hidden in precompiled binaries or packaged files; with precompiled binaries you are relying on someone else's work, and that is a large risk in security. If we are going to install from source files, we need to make sure we have the latest autoconf, automake, gcc, lex and yacc (or the GNU flex and bison), and the latest versions of libpcap and tcpdump. Let's have a word about the importance of libpcap and tcpdump to Snort. These two packages are part of the core of Snort, and without them Snort would be completely useless. The libpcap library is how the Snort sensor captures packets, both for the local host and for other hosts. Tcpdump is how Snort prints out packet header information based on Boolean expressions. So we have the part that captures packets and the part that processes the header information, and it stands to reason that we want the latest and greatest of both for stability and speed. We can get the source files for tcpdump and libpcap and compile them ourselves using the normal configure, make and make install routine.
Many of us know about using the history command to go back and see what commands we typed, but when installing something as complex as MySQL from source, a trick for capturing all of the output is the script command, used like this: #script ~/mysql.install.notes. This puts a file with all of the session output in the user's home directory, which can be very useful when trying to troubleshoot where something broke down in the installation process.

But the reality is that MySQL can be troublesome to install from source files, and it is generally recommended to install it from precompiled binaries. Still, the installation can be done from source if some key directions are followed and you are willing to spend a bit of time. Besides, if you have been reading and following along with your own installations from the beginning of this book, you have already conquered recompiling your kernel; a little bitty source file is nothing, is it? There are versions of Snort that come precompiled, and if you are in a hurry or not entirely comfortable installing from source, the precompiled binaries will make life much easier. So let's see what we will need to make our Snort sensor and to generate reports from the data that Snort logs. For a useful Snort installation that yields good solid data, we should have the files shown in Figure 6.3.
Figure 6.3 Listing of required source or binary files for installing Snort with MySQL
Snort 2.x
MySQL
PHP
Apache
ACID
ADODB
jpGraph
libpng
PHPLOT
zlib
syslog-ng

MySQL and Apache should not be installed during the installation of your distribution of Linux; most of the included versions are too old for our purposes. Your installed operating system has to be fully patched and updated before you install these packages. If you did install MySQL or Apache while installing the operating system, never fear: we can use a package manager like RPM or Synaptic to remove or erase these files. First use an rpm query like the following example to find out what is installed:
[root@RedRum2 root]# rpm -q mysql

Now that we know the name of what package is installed, we can remove or erase it using rpm like this:
[root@RedRum2 root]# rpm -e <package_name>

But we are not ready to install the packages just yet; we still have some homework to do. We need to audit the commonly enabled services and disable any that are not needed. A partial list of commonly enabled services that should be disabled appears in Figure 6.4:
Figure 6.4 Common services to disable for a Snort sensor
• apmd
• cups
• firstboot
• isdn
• netfs
• nfslock
• pcmcia
• portmap
• sgi_fam
• sendmail

So how do we see which services are running on our soon-to-be sensor? By using a command called chkconfig. In Figure 6.5, we see the output of the command chkconfig --list on our potential sensor.


Figure 6.5 Output from chkconfig --list
[root@Fedora1 root]# chkconfig --list
gpm             0:off  1:off  2:on   3:on   4:on   5:on   6:off
kudzu           0:off  1:off  2:off  3:on   4:on   5:on   6:off
syslog          0:off  1:off  2:on   3:on   4:on   5:on   6:off
network         0:off  1:off  2:on   3:on   4:on   5:on   6:off
sshd            0:off  1:off  2:on   3:on   4:on   5:on   6:off
portmap         0:off  1:off  2:off  3:on   4:on   5:on   6:off
sendmail        0:off  1:off  2:on   3:on   4:on   5:on   6:off
:::edited for space:::
xfs             0:off  1:off  2:on   3:on   4:on   5:on   6:off
xinetd          0:off  1:off  2:off  3:on   4:on   5:on   6:off
cups            0:off  1:off  2:on   3:on   4:on   5:on   6:off
ntpd            0:off  1:off  2:off  3:on   4:off  5:on   6:off
mysqld          0:off  1:off  2:off  3:off  4:off  5:on   6:off
xinetd based services:
        chargen-udp:    off
        chargen:        off
        daytime-udp:    off
        daytime:        off
        echo-udp:       off
        echo:           off
        rsync:          off
        time:           off
        time-udp:       off
        cups-lpd:       off
        sgi_fam:        off
        finger:         off
[root@Fedora1 root]#

The command chkconfig will tell us which run levels each process will start at. For example, the line:
sendmail 0:off 1:off 2:on 3:on 4:on 5:on 6:off

tells us that sendmail is set to run at levels 2, 3, 4 and 5. This is not an application that we want running on our sensor, so we need to disable it from starting at these runlevels. We use the same command as before, but with a different parameter:
[root@Fedora1 root]# chkconfig --level 2345 sendmail off


This use of chkconfig says that for levels 2, 3, 4 and 5 we want sendmail turned off. Now we should repeat the same command for every service on our list that needs to be turned off on the sensor.
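Rather than typing the command once per service, the repetition can be scripted. This sketch assumes the service names from Figure 6.4; trim the list to match what chkconfig --list actually shows on your machine:

```shell
# Turn off every unneeded service at runlevels 2 through 5.
for svc in apmd cups firstboot isdn netfs nfslock pcmcia portmap sgi_fam sendmail; do
    chkconfig --level 2345 "$svc" off
done
```

Afterwards, run chkconfig --list again to confirm that nothing on the list still starts at runlevels 2 through 5.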

Secure Communications
Since using telnet would defeat the entire purpose of building a secured Snort sensor, we need to make sure that Secure Shell, or SSH, is enabled and configured properly. A simple test to see if the sshd daemon is running on our server is the following command:
[root@RedRum boot]# pgrep sshd
1696
2222
3074
[root@RedRum boot]#

We can see that on RedRum we have three sshd processes, so SSH is installed and running. We also need to make sure that SSHv2 is being used and not SSHv1, which is much weaker. In Figure 6.6 we see the SSH configuration file where we can choose which version of SSH to use. We will find the ssh_config file in /etc/ssh.
Figure 6.6 Partial listing of SSH configuration file
[root@RedRum boot]# cd /etc/ssh
[root@RedRum ssh]# less ssh_config
# $OpenBSD: ssh_config,v 1.16 2002/07/03 14:21:05 markus Exp $
:::edited for space:::
# Host *
#   ForwardAgent no
#   ForwardX11 no
#   RhostsAuthentication no
#   RhostsRSAAuthentication no
#   RSAAuthentication yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   BatchMode no
#   CheckHostIP yes
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/identity
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   Port 22
#   Protocol 2,1
#   Cipher 3des

The highlighted line, Protocol 2,1, says this machine is configured to use either version 2 or version 1 of SSH. To lock the version down, just edit the file and remove the reference to version 1; note that for the sshd daemon itself, the matching Protocol line lives in /etc/ssh/sshd_config. If you do not have SSH running on your server, use a package manager such as RPM, up2date, YUM or APT to install it, or install it from source files. For a complete guide to installing and using SSH, see Chapter 5, which is all about SSH.
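The same lockdown can be done non-interactively with sed. The sketch below works on a scratch copy so it is safe to experiment with; on a real sensor you would point it at the daemon's file, /etc/ssh/sshd_config, and restart sshd afterwards:

```shell
# Rewrite any (possibly commented-out) Protocol line to allow version 2 only.
# Demonstrated on a scratch file; substitute /etc/ssh/sshd_config to apply it.
cfg=$(mktemp)
printf '#Protocol 2,1\nPort 22\n' > "$cfg"
sed -i 's/^#\{0,1\}Protocol.*/Protocol 2/' "$cfg"
result=$(grep '^Protocol' "$cfg")
echo "$result"
rm -f "$cfg"
```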

Making the Pig Fly
Now that you have your sensor hardware in place, the operating system installed, stray services turned off and SSH enabled, you are finally ready to actually install Snort and make a working IDS sensor. For the sake of this book, it will be assumed that you are using Fedora Core 1, that your source files are located in /usr/src, and that you will use source files unless you have to install from RPMs. While the use of RPMs would simplify our lives when installing Apache, MySQL, PHP and Snort, this book is about security, so I will show you how to install Snort in a secure manner using source files. And since we are discussing installing files on a critical security device, let's also examine how you can verify the integrity of the files before you compile or install them. The most common way is to verify either the GnuPG signature or the MD5 hash. You can read about how to install, configure and use GnuPG in Chapter 5.
While we are discussing the security of the IDS sensor, the reader should also be reminded that the physical security of the sensors is every bit as important as the software security. Make sure the sensor is locked up, or at least restrict access to it. In the BIOS, turn off such things as booting from CD-ROM or floppy; better yet, make sure the BIOS has a password enabled so changes cannot be made without it. One of the easiest ways to restrict access is to run the sensor without a keyboard.

Installing MySQL
One of the first things to install is our database software, which will be MySQL. Our sensor will use MySQL as its database, and we will use the current GA version. The current GA release is 4.1.7, and this is the version we will use for our sensor; always check for the current file, since updates come out periodically. Once you have downloaded your choice of files, run the integrity check using the md5sum command:
# md5sum mysql-4.1.7.tar.gz
04c08d2a5cc39050d9fa4727f8f197e8  mysql-4.1.7.tar.gz


Compare the result to the MD5 hash posted on the MySQL website's download page. If you have a match, move ahead and start the installation. If there is not a match, the file is questionable and should be downloaded again from a trusted source. Now unpack the source files and get ready to compile:
# tar -xvzf mysql-4.1.7.tar.gz

Once you have run the tar command, there will be a new directory created in our /usr/src/mysql/ directory. Change to mysql-4.1.7 and start the compiling process. Our first step is to run the configure command with the prefix argument to tell MySQL where to live when it is installed.
[root@Fedora1 mysql-4.1.7]# ./configure --prefix=/usr/local/mysql

The configure command is how we start the compiling process, and it also allows us to set certain parameters and paths. If you have special requirements, such as wanting the database somewhere other than the default location, add a parameter like --localstatedir=/usr/local/mysql/data to the configure line.

In our case, the configure failed due to a C++ compiling error, which can be read in the configure log file. This is easily corrected by installing the current version of gcc; once the gcc update was installed, the configure command ran happily. Our next step is to run the make command. This can take quite a while, depending on the processing power available to your soon-to-be Snort sensor. When the make command has completed without any errors, install MySQL by using the make install command. Once MySQL is installed, we need to test it and verify that it is running. You also need to configure the initial database that MySQL uses for its own housekeeping. This is accomplished with a script that is part of the MySQL installation; the script can be found in the scripts directory of the source tree, as we see below in Figure 6.7:
Figure 6.7 Installing basic tables for MySQL internal use
[root@Fedora1 scripts]# ./mysql_install_db
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
041029 16:31:40 /usr/local/mysql/libexec/mysqld: Shutdown Complete


You have to tell the system where to find the MySQL libraries so that Snort can use them, so you need to add the library path as we see here in Figure 6.8:
Figure 6.8 Editing the library paths
# echo "/usr/local/mysql/lib/mysql" >> /etc/
# cat /etc/
/usr/X11R6/lib
/usr/local/lib
/usr/lib/qt-3.3/lib
/usr/local/mysql/lib/mysql
# ldconfig

At this point we can start MySQL manually and verify that the database will come up correctly by using this command:
# /usr/local/mysql/bin/mysqld_safe --user=mysql &

To avoid having to change to MySQL's bin directory to run the admin commands, try this:
cd /usr/local/mysql/bin
for file in *; do ln -s /usr/local/mysql/bin/$file /usr/bin/$file; done
This makes symbolic links in the /usr/bin directory pointing back to the /usr/local/mysql/bin directory for all the mysql* utilities.

You should see a message about mysql starting and using the base tables in /usr/local/mysql/var/, or wherever you put the tables. Now you can test the database and make sure that you can really read from and write to it. Start the mysql client application by changing to the bin directory of our MySQL installation and using the following commands:
# ./mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 4.1.7-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW DATABASES;
+----------+
| Database |
+----------+
| mysql    |
| test     |
+----------+
3 rows in set (0.03 sec)


Now you will use the USE command to select the mysql database and then use the SHOW TABLES command:
mysql> USE mysql
Database changed
mysql> SHOW TABLES;
+---------------------------+
| Tables_in_mysql           |
+---------------------------+
| columns_priv              |
| db                        |
| func                      |
| help_category             |
| help_keyword              |
| help_relation             |
| help_topic                |
| host                      |
| tables_priv               |
| time_zone                 |
| time_zone_leap_second     |
| time_zone_name            |
| time_zone_transition      |
| time_zone_transition_type |
| user                      |
+---------------------------+
15 rows in set (0.01 sec)


And finally you will use SELECT COUNT as the last test of our MySQL server.
mysql> SELECT COUNT(*) FROM user;
+----------+
| COUNT(*) |
+----------+
|        5 |
+----------+
1 row in set (0.01 sec)


Next, you have to run the mysqladmin program to set up the root password for MySQL. This “root” password is not the root password for the sensor; it is used for the database only. Don't forget the new password; while it is possible to recover the database and get past a lost root password, it is not fun.
/usr/local/mysql/bin/mysqladmin -u root -h localhost password snort

If you do forget the password for the SQL database, you can use MySQL's documented password recovery procedure.

Since this is a book on Linux security, it would not do for us to leave our databases open for the unwashed masses to see, so you need to change the permissions on the files and directories shown in Figure 6.9.
Figure 6.9 Changing permissions on mysql directories
# chown -R root /usr/local/mysql
# chown -R mysql /usr/local/mysql/var
# chgrp -R mysql /usr/local/mysql
# cp /usr/src/mysql/mysql-4.1.7/support-files/my-medium.cnf /etc/my.cnf
# cat /etc/
# echo "/usr/local/lib" >> /etc/
# ldconfig -v

You are changing the ownership of the MySQL directories so that the mysql user owns only what it needs, for security. The copy of my-medium.cnf configures MySQL to use the settings for a medium-sized database. Also, unless the database is to be accessed remotely, you should turn off network support for MySQL, which we do by editing the /etc/my.cnf file and removing the # sign in front of the skip-networking directive, as shown here:
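The relevant lines, reconstructed here from the stock my-medium.cnf template, look like this once uncommented:

```
[mysqld]
# Don't listen on a TCP/IP port at all.
skip-networking
```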

When this parameter is enabled, MySQL will not listen on port 3306 for TCP connections. This modification assumes that the applications using MySQL reside locally on the server with the database. Let's see if all this effort has paid off; we will type in the following command:
# /usr/local/mysql/bin/mysqld_safe --user=mysql &

and see if MySQL will start up in safe mode as the mysql user. Once that is working, you have to make sure that MySQL starts each and every time the sensor reboots. You do this by first copying the mysql.server file found in your source files location to the /etc/init.d/ directory like this:
# cp /usr/src/mysql/mysql-4.1.7/support-files/mysql.server /etc/init.d/

Then you need to chmod the mysql.server file to 755 like this:

# chmod 755 /etc/init.d/mysql.server

Finally, you need to go to our runlevel rc directories and set up the symbolic links like this:
# cd /etc/rc2.d/
# ln -s ../init.d/mysql.server S85mysql
# ln -s ../init.d/mysql.server K85mysql
# cd /etc/rc3.d/
# ln -s ../init.d/mysql.server S85mysql
# ln -s ../init.d/mysql.server K85mysql
# cd /etc/rc5.d/
# ln -s ../init.d/mysql.server S85mysql
# ln -s ../init.d/mysql.server K85mysql
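On distributions that ship chkconfig, an alternative to making the links by hand is to let chkconfig manage them. This sketch assumes the mysql.server script carries the chkconfig header comments that the tool reads (the stock MySQL script includes them):

```shell
# Register the init script, enable it at the multi-user runlevels,
# then confirm the resulting settings.
chkconfig --add mysql.server
chkconfig --level 235 mysql.server on
chkconfig --list mysql.server
```

Either approach ends with the same S and K links in the rc directories; chkconfig simply keeps them consistent for you.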

At this point, you should reboot the server and verify that MySQL does in fact start up on its own. To start and stop MySQL from the command line, we use the following commands:
# /etc/rc.d/init.d/mysql start
  or
# /etc/init.d/mysql.server start

# /etc/rc.d/init.d/mysql stop
  or
# /etc/init.d/mysql.server stop

To start or stop MySQL from X Windows, you can use the Service Configuration utility, as we see in Figure 6.10 below.
Figure 6.10 Managing the starting or stopping of MySQL


Installing Snort
Before you install the Snort source code, you should check that both libpcap and PCRE are installed. This is easily done using the rpm command, and most current Linux distributions will already have them installed.
[root@Fedora1 bin]# rpm -q libpcap libpcap-0.7.2-8.fc1.2 [root@Fedora1 bin]# rpm -q pcre pcre-4.5-0.fdr.0.1 [root@Fedora1 bin]#

If you do not have either installed, download the source files and run configure, make and make install, just as you did before when we installed MySQL. When I tried this on a new installation of Fedora Core 1, rpm -q showed pcre was installed, yet the configure of Snort failed with an error saying that pcre was not installed; a quick upgrade with the newest source tarball of pcre fixed the problem. Now you can move on to configuring Snort. Let's get our group and user ID for Snort configured. You do this by making a group for Snort called something original like “snortgroup”:
[root@Fedora1 bin]# groupadd snortgroup

Then we create our Snort user:
[root@Fedora1 bin]# useradd -g snortgroup snortuser

The names of the group and the user can be whatever you like; however, these very unoriginal names are used here for clarity and are not recommended for security reasons. Next we will make a home for the Snort configuration file, rules and log files.
# mkdir -p /usr/local/snort/etc
# mkdir /usr/local/snort/rules
# mkdir /var/log/snort

Now the real fun begins. Start by going to the directory you downloaded your source file to and running the normal tar -xvzf command to unpack the files. Then change to the directory that tar created and run the typical configure command, but with an option to tell Snort to use MySQL.
# tar -xvzf snort-2.1.3.tar.gz # cd snort-2.1.3 # ./configure --with-mysql=/usr/local/mysql # make # make install

The --with-mysql option tells Snort to compile with MySQL support and gives the path for Snort to find MySQL. Snort compiles quickly, unlike MySQL, so when it has completed, the easy way to see if Snort will attempt to run is to type snort at the command line. You should get the slightly cheeky message “Uh, you need to tell me to do something...”, which says that, at the very least, Snort is attempting to start. Let's finish off our configuration of Snort. You need to copy some rules and define some variables for Snort to actually work. Make sure you are still in the source file directory for Snort and type in the commands we see in Figure 6.13:
Figure 6.13 Copying configuration files for Snort
[root@Fedora1 snort-2.1.3]# cp rules/* /usr/local/snort/rules/
[root@Fedora1 snort-2.1.3]# cp etc/snort.conf /usr/local/snort/etc/
[root@Fedora1 snort-2.1.3]# cp etc/reference.config /usr/local/snort/etc/
[root@Fedora1 snort-2.1.3]# cp etc/classification.config /usr/local/snort/etc/
[root@Fedora1 snort-2.1.3]# cp etc/ /usr/local/snort/etc/
[root@Fedora1 snort-2.1.3]# cp etc/threshold.conf /usr/local/snort/etc/

Once you have all of these files copied, you can start to edit the snort.conf file, which tells Snort what our home network is, which interfaces to use, our DNS servers, and a host of other configuration settings that we can adjust.

Snort Configuration
Now that you have Snort running, you still need to dial in the configuration file. The snort.conf file is found in /usr/local/snort/etc. It's a simple text file, so you can edit it with your editor of choice. When you open the file, you will find that it is well commented throughout, which can be a real help when trying to fine-tune Snort. One of the first things to edit is the variable called HOME_NET, adding our local IP networks as we see here:
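A typical setting looks like the line below; the 192.168.1.0/24 range is purely an example, so substitute your own addressing:

```
var HOME_NET
```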

The second variable is EXTERNAL_NET, and the default of ‘any’ is a good place to start. You can also add your email servers, web servers and DNS servers to the appropriate variables. These variables help speed the filtering of packets; after all, why look for web attacks if you do not have any web servers on your network? That really would not make sense, would it? In Figure 6.17 we see all the variables for identifying these servers:
Figure 6.17 Server Variables
# List of DNS servers on your network
var DNS_SERVERS $HOME_NET

# List of SMTP servers on your network
var SMTP_SERVERS $HOME_NET

# List of web servers on your network
var HTTP_SERVERS $HOME_NET


# List of sql servers on your network
var SQL_SERVERS $HOME_NET

# List of telnet servers on your network
var TELNET_SERVERS $HOME_NET

# List of snmp servers on your network
var SNMP_SERVERS $HOME_NET

The downside to narrowing the server variables like this is the one time a rogue server slips past them. So it comes down to risk management on your network: do you have enough control over the network to be sure a rogue SQL server will not come online without your knowing? If not, it is best to leave the variable set to $HOME_NET in order to catch the rogues. We can also tell Snort to look at additional ports, such as 8080 or 8181.
var HTTP_PORTS 80:8080:8181

Please notice the colon between the port numbers; this is how we identify to Snort all the ports that HTTP is using on our network. Another important variable specifies where Snort's rules live. You might remember that we put Snort's rules into /usr/local/snort/rules, so we should tell Snort this with the following variable.
var RULE_PATH /usr/local/snort/rules

The final settings tell Snort what data to log and where to log it. You can have Snort send the data to a syslog server, a SQL database or a tcpdump file. You can even mix it up, so some data goes one place and the rest goes elsewhere.

Syslog Notes
Syslog logging is something that all Unix and Linux operating systems come with. While Windows does not come with a syslog server, there are many available for free or for a nominal fee; I will assume that you are using a Linux-based syslog server. You need to tell Snort to send its output to a syslog facility. Let's discuss syslog logging for a minute. There are 8 different levels of syslog alert logging available, from level 0 (emergency) to level 7 (debugging).
We cover syslog in great detail in Chapter 8

There is also something called a “facility,” which you can imagine as where the syslog messages should be sent. There are 24 different facilities available (RFC 3164), with numerical codes from 0 to 23; normally you will use local0 through local7. Think of the facilities as pipes directing the messages to a particular directory. This is how we can have several sensors sending syslog messages to a single syslog server, with each sensor having its own directory for the messages. The /etc/syslog.conf file controls the mapping of local<number> to a directory, as we see here:
# Log all messages from IDS sensor 1 here.
local3.* /var/log/snort/sensor1/snort.log

Before syslog will use the new settings, you have to restart the syslog daemon, as shown in Figure 6.18:
Figure 6.18 Restarting syslog
[root@Fedora1]# /etc/init.d/syslog restart
Shutting down kernel logger:    [ OK ]
Shutting down system logger:    [ OK ]
Starting system logger:         [ OK ]
Starting kernel logger:         [ OK ]
[root@Fedora1]#
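Before involving Snort at all, you can confirm the facility mapping with the logger utility. The tag and message text here are arbitrary, and the log path assumes the syslog.conf entry shown above:

```shell
# Send a hand-made alert to the local3 facility, then check that syslog
# routed it into the sensor's log file.
logger -p local3.alert -t snort-test "syslog mapping test"
tail -n 1 /var/log/snort/sensor1/snort.log
```

If the test message does not appear, recheck the syslog.conf entry and make sure the target directory exists.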

Now you have to tell Snort sensor 1 where to put its syslog messages. Let's go back to the snort.conf file and find the output alert_syslog lines. You will need to uncomment one line and use the syslog facility number you put into syslog.conf, which in this case is 3. You can see the section of snort.conf we are interested in below in Figure 6.19:
Figure 6.19 Setting the alert_syslog parameter in snort.conf
# Use one or more syslog facilities as arguments.  Win32 can also
# optionally specify a particular hostname/port.  Under Win32, the
# default hostname is '', and the default port is 514.
#
# [Unix flavours should use this format...]
# output alert_syslog: LOG_AUTH LOG_ALERT
So you need to change output alert_syslog: LOG_AUTH LOG_ALERT to this:
output alert_syslog: LOG_LOCAL3

Now, for a quick test to see if your Snort sensor and logging actually work, let's type in the following command:
[root@Fedora1 sensor1]# /usr/local/bin/snort -T -u snortuser -c /usr/local/snort/etc/snort.conf

Here we are telling Snort to start with the -T switch, which is test mode, to run as our snortuser ID, and to use the snort.conf file. If everything is correct, Snort will run through its initialization process and exit without any errors. If there are errors, check for typos in the snort.conf file and make sure all the files that needed to be moved were moved. When the -T startup runs without errors, run the same command without the -T; this actually starts Snort and has it initiate logging of any alerts to our syslog server. In Figure 6.20, we see the resulting log file after I ran an NMAP scan against a target.
Figure 6.20 Reading the syslog file from sensor1
[root@Fedora1 sensor1]# less snort.log
Oct 31 01:12:50 Fedora1 snort: [1:469:1] ICMP PING NMAP [Classification: Attempted Information Leak] [Priority: 2]: {ICMP} ->
Oct 31 01:12:51 Fedora1 snort: [1:620:6] SCAN Proxy Port 8080 attempt [Classification: Attempted Information Leak] [Priority: 2]: {TCP} ->
Oct 31 01:12:52 Fedora1 snort: [1:618:5] SCAN Squid Proxy attempt [Classification: Attempted Information Leak] [Priority: 2]: {TCP} ->
Oct 31 01:12:53 Fedora1 snort: [1:615:5] SCAN SOCKS Proxy attempt [Classification: Attempted Information Leak] [Priority: 2]: {TCP} ->
Oct 31 01:12:53 Fedora1 snort: [1:628:3] SCAN nmap TCP [Classification: Attempted Information Leak] [Priority: 2]: {TCP} ->

We can see from the first and last entries that Snort is telling us this was an NMAP scan. To stop Snort, all we need to do is hit CTRL-C. We will get a page of information about Snort and the various traffic statistics. In Figure 6.21, we see a partial listing of this information.
Figure 6.21 Ending Snort and final intrusion statistics
===============================================================================
Snort analyzed 3440 out of 4563 packets, dropping 1123 (24.611%) packets

Breakdown by protocol:
    TCP: 2269      (49.726%)
    UDP: 8         (0.175%)
    ICMP: 2        (0.044%)
    ARP: 25        (0.548%)
    EAPOL: 0       (0.000%)
    IPv6: 0        (0.000%)
    IPX: 0         (0.000%)
    OTHER: 13      (0.285%)
    DISCARD: 0     (0.000%)

Action Stats:
    ALERTS: 6
    LOGGED: 6
    PASSED: 0
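Over a longer capture, a quick way to summarize a busy syslog file is to tally alerts by signature. Here is a small awk sketch, assuming the log format shown in Figure 6.20; the log path and the LOG variable are illustrative, not part of the original setup:

```shell
# Tally Snort syslog alerts by signature name.
# Assumed line format: "... snort: [gid:sid:rev] NAME [Classification: ...] ..."
LOG=${LOG:-/var/log/snort/sensor1/snort.log}
[ -f "$LOG" ] || LOG=/dev/null      # nothing to tally if the log is absent
awk -F'snort: ' '/snort: \[/ {
    split($2, parts, " \\[Classification")   # keep the text before "[Classification"
    sub(/^\[[0-9:]+\] /, "", parts[1])       # strip the [gid:sid:rev] prefix
    count[parts[1]]++
}
END { for (sig in count) printf "%6d  %s\n", count[sig], sig }' "$LOG"
```

Run against sensor1's log, this prints one line per signature name with its alert count.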



Now you know for sure that Snort is alive and well. We can move on to getting your rule set dialed in and our database up and running.

Configuring Snort's New Database

At this point you have a functioning and usable MySQL installation. But in order for Snort to use MySQL, you have to have a database for Snort to use. Let's start the mysql client and get our new database put together and configured. In Figure 6.22, we see how the Snort database is configured.
Figure 6.22 Configuring basic tables for Snort
[root@Fedora1 bin]# ./mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2 to server version: 4.0.20-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database snortdb;
Query OK, 1 row affected (0.05 sec)
mysql> grant INSERT,SELECT on snortdb.* to snortuser@localhost;
Query OK, 0 rows affected (0.02 sec)
mysql> SET PASSWORD FOR snortuser@localhost=PASSWORD('snort');
Query OK, 0 rows affected (0.00 sec)
mysql> grant CREATE,INSERT,SELECT,DELETE,UPDATE on snortdb.* to snortuser@localhost;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye

You also have to tell Snort to use our new database, and we do this in the snort.conf file. You need to edit the line that says:
# output database: log, mysql, user=root password=test dbname=db host=localhost

You need to remove the # comment character so the line will be read, and add our own database information. The new line will look like this:
output database: log, mysql, user=snortuser password=snort dbname=snortdb host=localhost

Since, in our example, the MySQL database is local to Snort, you will keep the setting that says the host is localhost. You are not completely done at this point: you have a database called snortdb, you have a user called snortuser, and you have told Snort to use MySQL as the database. But you do not have tables in the database yet. You need tables to store the data, but never fear. In our source directory is a sub-directory called contrib, and in there is a script you can run that will build all of the tables you need in the Snort database. So type in the following command:
[root@Fedora1 snort-2.1.3]# mysql -u root -p snortdb < ./contrib/create_mysql

This will use the mysql client to run the script and configure your new database tables. Once the script is finished, we can log into our Snort database and verify that the tables have been created, using the commands shown in Figure 6.23:
Figure 6.23 Creating Snort SQL tables
[root@Fedora1 snort-2.1.3]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 13 to server version: 4.0.20-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW DATABASES;
+----------+
| Database |
+----------+
| mysql    |
| snortdb  |
| test     |
+----------+
3 rows in set (0.00 sec)

mysql> USE snortdb
Database changed
mysql> SHOW TABLES;
+-------------------+
| Tables_in_snortdb |
+-------------------+
| data              |
| detail            |
| encoding          |
| event             |
| icmphdr           |
| iphdr             |
| opt               |
| reference         |
| reference_system  |
| schema            |
| sensor            |
| sig_class         |
| sig_reference     |
| signature         |
| tcphdr            |
| udphdr            |
+-------------------+
16 rows in set (0.00 sec)

mysql> exit
Bye

Starting the Pig

So now, finally, we have MySQL installed, configured and working. We have Snort installed, configured and working. And we have MySQL and Snort talking to each other, ready for action. But it will not do much good if we have to manually start Snort each time, so there is another jewel in the same contrib directory where we found the database configuration script. This jewel is called 'S99snort', and it is a script that will bring up Snort automatically each time and use the correct user ID that we made earlier. Copy the S99snort script from contrib to the /etc/init.d/ directory and rename it snort, as we see here:
[root@Fedora1 contrib]# cp S99snort /etc/init.d/snort
[root@Fedora1 init.d]# chmod 755 snort

Then use chmod to make the script executable (755). Now we have to edit the script to reflect our configuration of user ID, interface, location of files, group and options, as we see in Figure 6.23:
Figure 6.23 Editing snort init.d script
# Configuration

# set config file & path to snort executable
SNORT_PATH=/usr/local/bin
CONFIG=/usr/local/snort/etc/snort.conf

# set interface IFACE=eth0

# set GID/Group Name



# other options
OPTIONS="-D -u snortuser"

# End of configuration

Now we have to make sure the symbolic links are created in the various rc directories, so use the commands shown in Figure 6.24:
Figure 6.24 Making symbolic links for Snort
cd ../rc2.d/
ln -s ../init.d/snort S99snort
ln -s ../init.d/snort K99snort
cd ../rc3.d/
ln -s ../init.d/snort S99snort
ln -s ../init.d/snort K99snort
cd ../rc5.d/
ln -s ../init.d/snort S99snort
ln -s ../init.d/snort K99snort
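The commands above can also be done in one pass with a small loop. This is a sketch; the RC_BASE variable is a hypothetical addition so the rc tree location can be overridden, and it defaults to the usual Red Hat /etc/rc.d layout:

```shell
# Create the start (S99) and kill (K99) links for snort in runlevels 2, 3 and 5.
# RC_BASE defaults to the standard Red Hat /etc/rc.d tree.
RC_BASE=${RC_BASE:-/etc/rc.d}
for level in 2 3 5; do
    ln -sf ../init.d/snort "$RC_BASE/rc$level.d/S99snort"
    ln -sf ../init.d/snort "$RC_BASE/rc$level.d/K99snort"
done
```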

At this point you should be able to reboot the sensor and have Snort come online automatically. You can also use the chkconfig command:
chkconfig --add snort
chkconfig snort on

This is all possible using the Service Configuration GUI if you are using Red Hat.

Apache

Now that we have our SQL server and Snort working, we need to move on to getting our Apache server up and running. The webserver is one half of what we need to publish our results using ACID; the other half is PHP, and we will install that once the webserver is up and running. This installation, compared to Snort, is a cakewalk. Get the latest source files from the Apache site. The version we will install here is 2.0.49, but by the time this book is printed, there will be a newer version out. As before, the Apache or httpd source files are copied to /usr/src/apache and then we decompress them with tar, as we see here:
[root@Fedora1 apache]# tar -xvzf httpd-2.0.49.tar.gz

Once the decompression is completed, we will change to our new directory containing the source files. Just like all the software up to this point, we will use the ./configure, make and make install to build our Apache server and install it as we see here:
[root@Fedora1 httpd-2.0.49]# ./configure --enable-so
[root@Fedora1 httpd-2.0.49]# make


[root@Fedora1 httpd-2.0.49]# make install

This whole process can take a while on a slower machine, but it will get there. When the make install has completed, you still need to tell the system about the Apache libraries. We do this using the ldconfig command, like this:
[root@Fedora1 httpd-2.0.49]# echo "/usr/local/apache2/lib" >>/etc/

Now we need to test the new webserver by starting the services with this command:
[root@Fedora1 httpd-2.0.49]# /usr/local/apache2/bin/apachectl start

If we do not have any error messages, we can test the server by bringing up a browser, either on the Snort sensor itself using localhost, or on a different machine, and verifying that the Apache test page shows, as we see here in Figure 6.25:
Figure 6.25 Apache test page showing after successful installation

Now, if we see the test page, happy days! We are just about done building our sensor. Our next project is compiling and installing PHP on our almost completed Snort sensor.

Installing PHP

PHP is a scripting language that has taken Linux by storm. The greatest benefit of using PHP is the ability to easily build websites with dynamic content, which is perfect for our log analysis software. PHP can be a real pain to install as it has many dependencies and, while it has gotten better, it is still not exactly a walk in the park. But never fear, we will go step by step and at least give you the basics on how to get PHP installed and working on your Snort sensor. Like everything else, we need current source files, so we will go to the PHP site and get the current source tarball. Again, we will place it in /usr/src/php and decompress it with tar:
[root@Fedora1 php]# tar -xvzf php-4.3.6.tar.gz


This will make our php directory and have our source files ready, but we need to check a couple of things first. We need to make sure we have libpng and zlib installed on our sensor. We did not have either on our Red Hat system, so we will download and install them, zlib first and then libpng. We can grab zlib from the zlib site; the current version is 1.2.1. We use the typical tar -xvzf and then configure, make and make install to install zlib. But we do not use the typical configure to install libpng. Instead, we copy the correct makefile from the scripts directory to Makefile in the root directory of the libpng source. We see the process here:
[root@Fedora1 libpng-1.2.7]# cp scripts/makefile.linux Makefile

Next we will run make test and then, if the test works, the typical make install. At this time, libpng-1.2.7.tar.gz is the latest version. There are RPMs and other packages available for those who prefer someone else's packaging. Once we have taken care of those two small items, we can move on to the PHP installation. We have several options for configure that we need to use, as we see in Figure 6.25:
Figure 6.25 Configuring PHP with options
[root@Fedora1 php-4.3.6]# ./configure --prefix=/usr/local/apache2/php \
> --with-config-file-path=/usr/local/apache2/php \
> --with-apxs2=/usr/local/apache2/bin/apxs \
> --enable-sockets \
> --with-mysql=/usr/local/mysql \
> --with-zlib-dir=/usr/local \
> --with-gd

We are enabling sockets, mysql support, zlib support and giving path information to configure for the installation process. Once configure has done its work, we will use the typical make and make install to install PHP. Once all of that is completed, we still need to copy the php.ini file to the Apache directory as we see here:
[root@FedoraC1 php-4.3.6]# cp php.ini-dist /usr/local/apache2/php/php.ini

Now we have to tell Apache to use PHP and configure Apache to start up each time we reboot the sensor. We will start by editing the httpd.conf file, which is in the /usr/local/apache2/conf/ directory. Verify that the PHP module is being loaded with a line like this:

LoadModule php4_module modules/

Now add this line:

AddType application/x-httpd-php .php


I'm using vi to edit the file, so using the / search command, we will look for DirectoryIndex and edit it to include index.php, as we see here:
DirectoryIndex index.html index.html.var index.php

Once we have made the changes to the Apache configuration file, we need to make a small test file for our new PHP server. All we need to do is make a file called phptest.php in the /usr/local/apache2/htdocs/ directory with the following line:
<?php phpinfo();?>

Once we have our test file in place, we need to stop and restart Apache as shown below.
[root@FedoraC1 php-4.3.6]# /usr/local/apache2/bin/apachectl stop [root@FedoraC1 php-4.3.6]# /usr/local/apache2/bin/apachectl start

If everything goes well, when we go to our browser and type in the IP address of our sensor and use the phptest.php file, we will see the results shown in Figure 6.26: Figure 6.26 The results of a working Apache and PHP server

Once we have our working Apache and PHP server, we need to enable Apache to start automatically. We do this by copying the apachectl file to /etc/init.d and creating the symbolic links, as we see here:
# cp /usr/local/apache2/bin/apachectl /etc/init.d/httpd
# cd /etc/rc3.d/
# ln -s ../init.d/httpd S85httpd
# ln -s ../init.d/httpd K85httpd


Reboot the sensor and verify that you can see that same phptest file. At this point we have a working sensor, but we can improve it with some log file tools and Snort management tools, as we will see next.

Snort on ACID

One of the best known log file analysis tools for Snort is called "ACID", which stands for "Analysis Console for Intrusion Databases". This flexible tool provides a browser interface to the SQL database we created and generates reports and graphs from the captured data in the database. In Figure 6.27, we see the main screen of ACID.
Figure 6.27 Main Screen for ACID on Linux

We can see that this is much easier to deal with than the raw log files from Snort. In order to use ACID on our Snort sensor, there are a few items we need to load first. For ACID to talk to our MySQL database, we need an interface, and this is called ADODB. The installation of ADODB is very easy: just download it, move it, uncompress it and then clean up the directory, as we see here:
# mv adodb454.tgz /var/www/html/
# tar -xvzf adodb454.tgz
# rm adodb454.tgz

The path /var/www/html is our webroot in the Apache settings. This is not the default path, so we need to edit our httpd.conf file, located in /usr/local/apache2/conf, to say this:
DocumentRoot "/var/www/html/"

That is all we need to do to "install" ADODB for our upcoming ACID console. The next item we need to install is jpgraph. Like ADODB, it is as simple as downloading the tarball, moving it to the document root directory and unpacking it there. The final piece we need to install before ACID itself is phplot; just like the last two, we download, move and unpack the files in the document root directory. The final dependency we need to fulfill is to create a database user for ACID and grant the proper privileges to that new user. So we will start the mysql client at /usr/local/mysql/bin/mysql and then use the commands shown in Figure 6.28 to add our new user 'acid' and its privileges.
Figure 6.28 Configuring MySQL for ACID
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 4.1.7-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> GRANT USAGE ON *.* TO acid@localhost IDENTIFIED BY "snort";
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON archive.* TO acid@localhost;

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER ON snortdb.* TO acid@localhost;

Let's verify that we have the proper grants for the ACID user account by using the SHOW GRANTS command, as we see in Figure 6.29:
Figure 6.29 Showing grants in MySQL
mysql> SHOW GRANTS FOR acid@localhost;
+--------------------------------------------------------------------------
| Grants for acid@localhost |
+--------------------------------------------------------------------------
| GRANT USAGE ON *.* TO 'acid'@'localhost' IDENTIFIED BY PASSWORD '*EF3DA5F1B03B180B177A126A8B2E739A0A1FAC16' |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON `archive`.* TO 'acid'@'localhost' |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER ON `snortdb`.* TO 'acid'@'localhost' |


So now that we have the parts needed for ACID and our new acid user created with the proper grants in place, we can move on to installing and using ACID. Download the latest ACID tarball to /var/www/html/, unpack it there and delete the tar file. Now change to the acid subdirectory and, using vi or your editor of choice, open the acid_conf.php file. We need to tell ACID some information about our snortdb and other configuration, so we will make the edits shown in Figure 6.30:


Figure 6.30 Editing the acid_conf.php file
*/
$alert_dbname   = "snortdb";
$alert_host     = "localhost";
$alert_port     = "3306";
$alert_user     = "acid";
$alert_password = "snort";

/* Archive DB connection parameters */
$archive_dbname   = "snortdb";
$archive_host     = "localhost";
$archive_port     = "3306";
$archive_user     = "acid";
$archive_password = "snort";

$ChartLib_path = "/var/www/html/jpgraph/src";

This gives ACID the information it needs to open the database and generate graphs using jpgraph. Our first session with ACID will be to go to the webserver name or IP and then to the acid_main.php page. We see the results in Figure 6.31:
Figure 6.31 Initial ACID web page

ACID will give an error at first, but it has a built-in script to create the needed tables, reached by clicking the setup link on the web page. So click the setup link and move on to the next page. In Figure 6.32, we see that we can create our tables by clicking the 'Create ACID AG' button on the right-hand side, so let's do that.
Figure 6.32 Creating ACID tables


When the tables are created successfully, we will see the page shown in Figure 6.33, which tells us that we are ready to go.
Figure 6.33 Successfully created tables by ACID

At this point we have successfully created a Snort sensor, with MySQL running to store the data, Apache to serve out the data and ACID to create the various reports from the stored data.

Securing the Pig

Now that we have a working sensor complete with reporting and a database, let's go over a few security issues with this configuration. We already removed network access to MySQL by editing the /etc/my.cnf file like I told you, right? If not, go there and edit the file. We also need to remove, or 'drop', the test database, which has nary a useful purpose on our sensor. We do this by:
mysql> drop database test;
Query OK, 0 rows affected (0.06 sec)

mysql> use mysql
Database changed
mysql> delete from db;
Query OK, 6 rows affected (0.00 sec)


We also need to understand that ACID is still considered beta code; there may be flaws in its input validation which could give access to the sensor. So, back to one of the earlier suggestions about network security: make sure the sensor and monitoring station are not accessible to the uncontrolled public. ACID does not, by default, use any validation or encryption for the HTTP connection. This can be corrected by using SSL with Apache; you can get the code and instructions needed to configure Apache 1.3 with SSL online, and you may also need OpenSSL in order to make and manage your certificates. We discuss SSL in Chapter 5. A final note about ACID and security: you may recall that the passwords for the database and alerts are in plain text in the configuration file. The only security for these passwords is the system file permissions.

Multiple NIC cards

It is a very good idea to use multiple NIC cards in the sensor: one NIC for the snorting and the other for management of the sensor. The NIC that snorts should be IP-less, and we do this by using the ifconfig command to bring up the interface without specifying an IP address:
/sbin/ifconfig eth1 up

To run Snort on two interfaces, one solution is to "bond" the two interfaces so they work as one, starting Snort with the following argument:
-i bond0

This trick is valid for running one Snort instance across multiple promiscuous-mode interfaces, excluding the management interface. We can find more details on bonding in a file called bonding.txt, which lives in different places depending on your Linux distribution. One place to look is /usr/src/linux(kernel)/Documentation/networking/. We need to edit the modules.conf file with this command:
echo alias bond0 bonding >>/etc/modules.conf

On Red Hat systems, we need to create an ifcfg-bond0 file in the /etc/sysconfig/network-scripts directory. This file will have something like the following for its content; you will need to edit it for your own system. Each interface which is part of the bond should have a MASTER and a SLAVE entry. In the following example, we have two interfaces, eth0 and eth1, that we want to be part of the bond. The configuration files, ifcfg-eth0 and ifcfg-eth1, would look something like this:
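A minimal sketch of these files, assuming a Red Hat-style network-scripts layout; all addresses and values are illustrative, not from the original configuration:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
DEVICE=bond0
IPADDR=192.168.1.20
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (ifcfg-eth1 is the same except DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```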



You would make a second file for eth1 that looks like the eth0 file. Depending on your distribution, you can bring up the new bonded interface using "ifup bond0" or restart networking with /etc/rc.d/init.d/network restart. If these commands do not work, restart the box. For more detailed information, make sure that you actually read the bonding.txt file; it gives more details and covers using SNMP with the new bonded interfaces. It is also possible to have Snort use two interfaces without bonding, but there is a caveat: you lose promiscuous mode on the cards when you do. The way to run Snort this way is to start it with the argument -i any, which is valid for a single Snort instance listening on all interfaces. It is recommended that you create a filter to exclude loopback traffic. We now have both the sensor and the database talking to each other, and all we need to do now is fine-tune some rules and get to snorting the traffic. Let's move on to rules.

Rules? What Rules?
Snort is, without question, a rule-bound application. Everything Snort does is based on a set of rules that we put in place to see, examine, capture and process packets. As a result, rules for Snort are very important. Let me say straight out: this book is not the end-all for building Snort rules. It is a guide to getting started with rules and, hopefully, to grabbing your interest in building rules so you can go nuts on your own network. Each network is different, and you will spend a fair amount of time tuning your ruleset to match your network and its "normal" traffic. So let's move on to building rules for Snort. In Figure 6.34, we see a basic capture of a packet by Snort. How much of the packet we see is based on the switches we set when Snort is started.
Figure 6.34 Packet captured by Snort using the -v switch
[root@Fedora1 root]# snort -v
Running in packet dump mode
Log directory = /var/log/snort

Initializing Network Interface eth0

--== Initializing Snort ==--
Initializing Output Plugins!
Decoding Ethernet on interface eth0


--== Initialization Complete ==--

-*> Snort! <*-
Version 2.1.3 (Build 27)
By Martin Roesch

09/06-11:09:21.903573 ->
UDP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:161 DF
Len: 133
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

09/06-11:09:24.884222 ->
TCP TTL:64 TOS:0x0 ID:51225 IpLen:20 DgmLen:52 DF
***A***F Seq: 0xCA4C8152  Ack: 0x19051BB9  Win: 0x8040  TcpLen: 32
TCP Options (3) => NOP NOP TS: 114810 1440002210
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+

In this example, we see the source IP address, the destination IP address and the port in use. We can enhance the output detail of the packet by using the -vde switches. In Figure 6.35, we see how much more detail we can have.
Figure 6.35 Packet details using snort with -vde switches set
-*> Snort! <*-
Version 2.1.3 (Build 27)

09/06-11:14:00.123807 0:3:FF:7B:AA:B4 -> FF:FF:FF:FF:FF:FF type:0x800 len:0xAF
 -> UDP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:161 DF
Len: 133
39 30 31 65 20 33 20 69 70 70 3A 2F 2F 46 65 64  901e 3 ipp://Fed
6F 72 61 31 3A 36 33 31 2F 70 72 69 6E 74 65 72  ora1:631/printer
73 2F 48 50 33 35 30 20 22 22 20 22 43 72 65 61  s/HP350 "" "Crea
74 65 64 20 62 79 20 72 65 64 68 61 74 2D 63 6F  ted by redhat-co
6E 66 69 67 2D 70 72 69 6E 74 65 72 20 30 2E 36  nfig-printer 0.6
2E 78 22 20 22 48 50 20 4F 66 66 69 63 65 4A 65  .x" "HP OfficeJe
74 20 44 31 33 35 20 46 6F 6F 6D 61 74 69 63 2F  t D135 Foomatic/
68 70 69 6A 73 20 28 72 65 63 6F 6D 6D 65 6E 64  hpijs (recommend


65 64 29 22 0A                                   ed)"



In this sample we still see the connection information, but now we can see the payload data contained within the packet. It is this information that the signature rules can analyze to see if there is a match. You can see from these two samples that it would be easy to overwhelm Snort by trying to capture everything, so you need rules that tell Snort what to look for, how to look for it and what to do after it has found it. What can we write rules for? A good many things, with a bit of imagination. The rule language for Snort is lightweight and very robust. We can look for certain types of attacks; we can examine packets based on ports, IPs, signatures, flags set and more. And it is not just attacks; Snort can look for violations of your company Internet policy. For example, if there is a no-porn policy, then Snort can look for signatures of porn sites or phrases, flag those packets, and show who is requesting the sites. If only select people are to have access to a certain server, Snort could look for any traffic other than what is allowed to access that server. We will start by examining the two basic rules for building a Snort rule. You knew there would be more rules, didn't you? The two basic rules are these:
• Rules must be completely contained on a single line
• Rules are made of two parts, the rule header and the rule options

Once we have these parts in our brain, the rest will come easily. Let's take a simple rule shown in Figure 6.36 and break down into its components. Figure 6.36 Basic Snort Rule
alert tcp any any -> /24 111 (content:"|00 01 86 a5|"; msg:"mountd access";)

The rule generates an alert, based on TCP, sourced from any IP address and any port, destined for the given /24 network on port 111. This first part of the rule is the rule header. There are five default rule actions available, and we see them listed in Figure 6.37:
Figure 6.37 Default Rule Actions for Snort Rules
• alert: generates an alert using the selected alert method, and then logs the packet
• log: logs the packet
• pass: ignores the packet
• activate: alerts and then turns on another dynamic rule
• dynamic: remains idle until activated by an activate rule, then acts as a log rule

The current version of Snort will analyze four protocols, and we see them listed here:

• TCP
• UDP
• ICMP
• IP
When you specify a port, the word any is a wildcard, literally meaning any port. You can specify a single port or a range of ports, and you can use negation with the ! symbol, as shown here:
• 1:1024 means any port from 1 to 1024
• !6000:6010 means any port EXCEPT the ports used by X Windows
• :1024 means any port up to and including 1024
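A port specification like these can be dropped straight into a rule header. For instance (a hypothetical rule written for illustration; the destination network is also illustrative):

```
# Log any TCP traffic to 192.168.1.0/24 except ports 6000-6010 (X Windows)
log tcp any any -> 192.168.1.0/24 !6000:6010
```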

The -> in our sample rule is called the direction operator, and it specifies the direction the rule applies to: on the left side is the source of the traffic, and on the right is the destination. We can use <> to match both directions. The words between the parentheses are the rule options. In this example, the option keyword is content. There are four categories of options for Snort rules, as listed in Figure 6.38:
Figure 6.38 Snort Rules Options Categories
• meta-data: these options provide information about the rule but do not have any effect during detection
• payload: these options all look for data inside the packet payload and can be inter-related
• non-payload: these options look for data that is not specific to the packet payload
• post-detection: these options are rule-specific triggers that happen after a rule has "fired"

Within each one of the categories, there are options. In Figure 6.39 we see a listing of options under the payload category. Figure 6.39 Payload Options
• content
• nocase
• rawbytes
• depth
• offset
• distance
• within
• uricontent
• isdataat
• pcre
• byte_test
• byte_jump
• regex
• content-list

Let's examine the content option since that is the one we used in our example. The content option allows us to use both text and binary patterns to search for within the payload of the packet. We can trigger the rule on either finding the pattern or not finding the pattern within the packet. We can even take it further by using one of contents modifiers like “nocase” which will look for the string/pattern regardless of case. And this is just one option from the many options available to you. To read about all the options, review the Snort online manual which can be found at Updating the Rules We can get new rule sets from at We can always download the new rules and apply them manually but there is a better way. Oinkmaster comes to the rescue by giving us a way to automatically update our rule sets. Oinkmaster can be downloaded from Oinkmaster is a simply a Perl script that automates the downloading and installation of new rulesets for Snort. It will let you mark rules as “locally modified” so they are not written over and lost during updates. It will also print what was changed so you know exactly what is happening to your Snort sensor. An excellent feature is that Oinkmaster will backup the old rules before applying the update. We start the installation by downloading the tarball and running tar to get our scripts as we see here:
# tar zxvf oinkmaster-1.1.tar.gz

Then we need to copy a couple of files to new locations:
# cp /usr/local/bin/
# cp oinkmaster.conf /usr/local/etc/
# cp oinkmaster.1 /usr/local/man/man1/

Then we need to make a couple of small edits to the oinkmaster.conf file. In our case, we need to comment out the rule download for Snort 2.2 and enable the 2.1 download. If you did not put Oinkmaster in the default places, this is when you will need to edit the locations.
vi /usr/local/etc/oinkmaster.conf


#url =
# This is for Snort 2.1.x.
url =

There is a lot more we can do in the configuration file, so take some time and read the comments. Once we have our edits in place, we can run Oinkmaster and see what happens, as shown here:
# ./ -o /usr/local/snort/rules

The following is a snippet of the output as the rules are updated by Oinkmaster:
-> Removed from web-misc.rules (2):
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-MISC order.log access"; flow:to_server,established; uricontent:"/admin_files/order.log"; nocase; classtype:attempted-recon; sid:1176; rev:5;)
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-MISC b2 access"; flow:to_server,established; uricontent:"/b2/b2-include/"; content:"b2inc"; content:"http|3A|//"; classtype:web-application-attack; sid:1758; rev:3;)
[*] Non-rule line modifications: [*] None.

[+] Added files (consider updating your snort.conf to include them if needed):

-> classification.config
-> reference.config
-> threshold.conf
[root@FedoraC1 oinkmaster-1.1]#

And that is all there is to a basic configuration of Oinkmaster. You will want to make sure you do not run the updates as root all the time. The man pages are very good, and there is a good readme file.
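Once the updates work by hand, they can be scheduled. The following is a minimal sketch of a cron entry, assuming the install paths shown above and a hypothetical unprivileged user named snort (both the user and the mail recipient are assumptions; adjust to taste):

```
# /etc/cron.d/oinkmaster -- nightly rule update, run as a non-root user
30 3 * * * snort /usr/local/bin/oinkmaster.pl -o /usr/local/snort/rules 2>&1 | mail -s "oinkmaster update" root
```

The mail pipe means the changes Oinkmaster prints land in someone's inbox instead of disappearing.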

Deploying Snort
Now that we have built a sensor, the question is where to put it. Convention says the IDS sensor should go where it can see the traffic that is important to you, and we have to remember that, like a packet sniffer, Snort follows the same rules on a switched network: it will only see the traffic on its own switch port unless that port can be mirrored or spanned. Mirroring or spanning a port means we direct a copy of some or all traffic to the port that Snort is plugged into. Otherwise, the only traffic Snort would see is broadcast traffic, which goes to all ports, plus any traffic directed at the sensor itself. The issue with mirroring or spanning is that we can easily overwhelm the capacity of the one port. Also, under certain conditions we might see only one side of the TCP conversation, as in the case of a full duplex connection. There are alternative ways to connect Snort to our network: one is to use an Rx-only Ethernet cable, and a second is to use a network tap. In Figure 6.40 we see how to build an Rx-only Ethernet cable.
Figure 6.40 Rx-only Ethernet cable
[Figure content: the switch/hub transmit pair (TD+/TD-) is wired to the sensor's receive pair (RD+/RD-); the sensor's transmit pair is left unconnected.]
The Rx-only cable and the network tap provide a measure of security, since they "hide" the sensor from prying eyes. To hide the sensor you can also bring up the interface without an IP address, but this is not always reliable.

Tapping the network
Not all switches can provide a mirrored or spanned port, and we can have special circumstances, like a full duplex connection, that we want to monitor. Also, a point to remember is that unlike a switch, which many times will filter out certain errors, a tap passes everything, including physical layer errors. To monitor the network and get around these limits, we can use a device called a passive or active tap to insert the sensor into the data flow without relying on a switch port. In Figure 6.41 we see a basic deployment using a tap.
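Before moving on to taps, the no-address trick mentioned above can be sketched with the iproute2 tools. This is a sketch, not a tested recipe: eth1 as the sensor interface is an assumption, and the command is only printed here so you can review it before running it as root.

```shell
#!/bin/sh
# Sketch: bring the sniffing interface up with NO IP address assigned.
# SNIFF_IF (eth1) is an assumption -- use whichever NIC faces the tap.
SNIFF_IF=${SNIFF_IF:-eth1}
UP_CMD="ip link set $SNIFF_IF promisc on up"
echo "as root, run: $UP_CMD"
# Note: no "ip addr add" is ever issued, so the interface has no
# address to answer ARP for -- which is what keeps the sensor quiet.
```

Snort can then be pointed at the interface with its -i option.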


Figure 6.41 Using a network tap in a basic Snort deployment

The difference between an active tap and a passive tap is what you might expect, given the names. The active tap is a powered device that offers a bit more than just breaking out a copy of the traffic; many active taps can send TCP resets to a connection that the IDS sensor has flagged as hostile. The passive tap provides just a way to get at the traffic and nothing more. Almost all active taps provide a failover: if the tap fails or loses power, the link fails closed, so traffic keeps flowing. In Figure 6.42 we see how to build our own passive network tap with a few simple parts and wiring.


Figure 6.42 Building your own passive network tap


The enclosure is a plastic four-port surface-mount enclosure such as the Versatap AT-44, available from Allen Tel, and the RJ45 jacks are punch-down CAT5E jacks such as the AT-55 style. This style of punch-down is why the numbering is a bit odd in the drawing. The actual parts are not as critical as keeping everything CAT5E or better. When you use the tap on a full duplex connection, TAP A will show one half of the connection and TAP B the other half, so you will need a sensor with two NIC interfaces. An alternative is to recompile the Linux kernel to use bonding, which joins the two interfaces into one logical interface.

Where to place Snort
Common placement puts Snort right behind the firewall, before the LAN default gateway or router. This gives Snort a chance to monitor all inbound and outbound traffic after the firewall has had its chance to stop attacks or probing. Other popular locations for the sensor are in the DMZ, to guard the servers there, or inside the LAN at key locations such as a server farm. With any IDS sensor there will be a period of training and tuning, during which you will need to review all the logs to decide what is normal traffic for your network. This shows up quickly when a network monitoring package such as What's Up Gold is in use, since it relies on ICMP packets to touch each node and see whether it is alive. The IDS sensor sees all this ICMP traffic and immediately thinks a probe scan is taking place. The rules need to be adjusted so that your network monitoring does not start burning out pagers with false alarms.
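As an illustration of that tuning, Snort's threshold.conf supports suppress entries. The example below is hypothetical: the monitoring station's address is made up, and sid 469 (the ICMP PING NMAP signature) stands in for whichever rule your monitoring traffic keeps triggering:

```
# threshold.conf: stop alerting on ICMP pings sourced from the
# network monitoring station (192.168.1.10 is a made-up address)
suppress gen_id 1, sig_id 469, track by_src, ip 192.168.1.10
```

Suppressing by source address is usually better than deleting the rule, since the rule still fires for every other host.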

Managing Snort
There are several ways to manage your Snort sensors, whether it be a single sensor or many scattered around the enterprise. As you can imagine, trying to manage each sensor over SSH or a console would be time consuming and, generally speaking, a real pain. So let's take a look at some of the various management packages.
For those who are willing, or required, to use a Windows management console, you are not left out in the cold even though this book is for the Linux user. Two good packages for managing Snort sensors are available: IDSCenter, which is free, and IDS Policy Manager.

Webmin
To help with the management of Snort, one easy trick is to install a package called Webmin. In fact, Webmin will be your new best friend once you see all that you can do from its browser-based front end. Webmin and many modules for it are available online. In Figure 6.43 we see the opening screen of Webmin.


Figure 6.43 Default Webmin Screen

You will notice in the screen shot that we are logged in on the localhost at port 10000. To use Webmin across the network, we first have to log in on the localhost and then, if you have a firewall configured, allow HTTP traffic to port 10000. In Figure 6.44 we have the Webmin module loaded to manage Snort.
Figure 6.44 Snort Webmin Module Loaded

We can see that the Webmin interface gives us an easy way to manage the command-line-driven Snort. There are also modules for Apache and MySQL, since both applications are used with many of the Snort log file analyzers, like ACID. You will also find that Webmin is an easy way to manage your entire Linux infrastructure: users, permissions, services, servers and specialty applications like clustering.
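Opening Webmin's port 10000 through the host firewall, as mentioned above, might look like the iptables sketch below. The management subnet is an assumption, and the rule is only printed so it can be reviewed before being run as root:

```shell
#!/bin/sh
# Sketch: permit Webmin (TCP 10000) from a management subnet only.
# MGMT_NET is an assumption -- substitute the subnet your admins use.
MGMT_NET=${MGMT_NET:-192.168.1.0/24}
WEBMIN_RULE="iptables -A INPUT -p tcp -s $MGMT_NET --dport 10000 -j ACCEPT"
echo "as root, run: $WEBMIN_RULE"
```

Limiting the source subnet matters because Webmin logins carry root-level power over the box.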


Snort Center
Snort Center is a package using PHP and Perl to give us a nice GUI management front end for Snort. Snort Center offers some solid features, such as SSL between the sensors and the management console, automatic updates, rule templates and more. Another substantial feature is that Snort Center uses ACID to run the log analysis.

Resources:
MySQL Reference Card
Barnyard - fast output spooler for Snort
Snort Alert Monitor - Java console that monitors your SQL database and lets you see alerts in real time
Snortalog - Perl-based log processor for Snort, Cisco PIX and FW-1
IDScenter - a Microsoft Windows based Snort console


Chapter 7
“If you reveal your secrets to the wind you should not blame the wind for revealing them to the trees.” Kahlil Gibran

Virtual Private Networks
Virtual Private Networks, or VPNs, are a way to secure a network connection between a client or network and another client or network over a public or otherwise unsecured infrastructure. The VPN, and its use of encryption, is designed to minimize the risks of data theft, loss of data integrity, and loss of privacy of your communications and data. There are a few different topologies of VPN connections to know about: the Extranet topology, where the VPN is between two routers; the Internetwork topology, where the connection is from a VPN client to the router over the network; and the Access VPN, where the connection from the VPN client to the router is made over a dial-up connection. For Linux, we have several choices of how to implement VPNs and which protocols to use. We can set up VPNs between Linux workstations, from workstations to routers/firewalls, to Microsoft VPN servers, to Cisco VPN concentrators and more. The big dog in the world of Linux and VPNs is the free IPsec implementation for Linux called Openswan. This is an update to the older FreeS/WAN project and is now partially sponsored by Novell. Openswan is open source, freely available for download, and used in many Linux security products. Before we dive into any of the VPN software, we first need to learn about the underpinnings and protocols that make up a VPN. We will start with one of the most important building blocks of VPN technology, the IP Security protocol, or IPsec.

IPsec
IPsec was developed by the Internet Engineering Task Force, or IETF, to provide a secure method of transporting data at layer 3. The standards that make up the IPsec group of protocols were released in 1998. The main RFC is 2401, the "Security Architecture for the Internet Protocol," one of several RFCs that make up what we call IPsec. In Table 7.1 we see the mapping of the IPsec protocols and the components that make up IPsec.
Table 7.1 IPsec Protocols
[Table content: the IPsec components -- key exchange (IKE), the AH and ESP encryption/authentication protocols, and the tunnel and transport modes.]
Virtual Private Networks

p. 173

IPsec has two sets of protocols that make up the suite. The first, Internet Key Exchange, or IKE, controls and handles the keys and their management; IKE lets us automatically negotiate the Security Associations, or SAs. IKE uses UDP port 500, so we need to adjust our firewalls to pass UDP port 500 when we configure our VPN using IPsec. The second set of protocols handles the data manipulation, such as encryption, authentication and compression. Here we can choose between Authentication Header, or AH, and Encapsulating Security Payload, or ESP. AH provides authentication, integrity and replay protection, by using hashing algorithms to secure the data. The downside to AH is that it does not encrypt the data; everything is sent in clear text. It provides only the three protections just mentioned. If you want encryption, then we need to look at ESP, which provides it at a small cost: AH authenticates both the IPv4 header and the payload the packet is carrying, while ESP protects only the payload, not the IPv4 header. IPsec can work in two modes: transport mode, where only the data payload is secured, or tunnel mode, where the entire original packet is secured and a new IPv4 header is written. We can see in Figures 7.1 and 7.2 the packet construction using AH as the security for the IPsec packet.
Figure 7.1 Adding AH Protection using Transport Mode

[Figure content: the original IPv4 header, followed by the AH header and the payload.]

Figure 7.2 Adding AH Protection using Tunnel Mode

[Figure content: a new IPv4 header and AH header prepended to the old IPv4 header and payload.]
When we look at ESP, and the fact that it encrypts the data, one has to ask: why use AH at all? The difference is that AH authenticates the entire packet, where ESP only authenticates the payload. Also, the encryption process on both ends takes time and could be a factor in connection performance. We see in Figures 7.3 and 7.4 how ESP encrypts the payload in transport mode, and the payload plus the original IP header in tunnel mode.


Figure 7.3 Adding ESP Protection using Transport Mode
[Figure content: IPv4 header, ESP header, encrypted data, ESP trailer, ESP authentication data.]

Figure 7.4 Adding ESP Protection using Tunnel Mode

[Figure content: new IPv4 header, ESP header, then the encrypted old IPv4 header and data, ESP trailer, ESP authentication data.]

The steps that IPsec uses to perform the encryption are shown here:
• Host A sends a message, and the data is handed to the IPsec function
• The data is encrypted with algorithms using a public key
• The data is sent over the Internet as encrypted packets within a secure tunnel
• The data is encrypted again in the tunnel using the session key
• At the terminating end point of the tunnel, it is decrypted using the private key
• When the data gets to Host B, it is checked and the sender is verified using the IPsec function
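The hybrid pattern in those steps -- a symmetric session key protecting the data, and a public key protecting the session key -- can be imitated with OpenSSL. This is a toy illustration of the concept only, not IPsec itself; every filename is made up for the demo, and it assumes a reasonably recent openssl binary (one that supports the -pbkdf2 option):

```shell
#!/bin/sh
# Toy hybrid-encryption demo (NOT IPsec): session key encrypts the
# data, host B's RSA public key protects the session key.
WORK=$(mktemp -d) && cd "$WORK"
openssl genrsa -out hostB.key 2048 2>/dev/null          # host B's key pair
openssl rsa -in hostB.key -pubout -out hostB.pub 2>/dev/null
echo "payload from host A" > data.txt
openssl rand -hex 32 > session.key                      # random session key
# 1. encrypt the payload with the symmetric session key
openssl enc -aes-256-cbc -pbkdf2 -pass file:session.key -in data.txt -out data.enc
# 2. protect the session key with host B's PUBLIC key
openssl pkeyutl -encrypt -pubin -inkey hostB.pub -in session.key -out session.key.enc
# ...data.enc and session.key.enc travel to host B, which reverses the steps...
openssl pkeyutl -decrypt -inkey hostB.key -in session.key.enc -out session2.key
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:session2.key -in data.enc -out data2.txt
cmp -s data.txt data2.txt && echo "round trip OK"
```

Real IPsec negotiates the session key with IKE rather than shipping it alongside the data, but the division of labor between asymmetric and symmetric crypto is the same.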

L2TP
Layer Two Tunneling Protocol, or L2TP, is a newer protocol, an extension to the Point to Point Protocol, or PPP, used for creating VPNs. Microsoft is a heavy user of this protocol for setting up VPNs between Windows servers and clients. One of the benefits of L2TP is that it supports multiple protocols, such as IPX and NetBEUI. L2TP is based on PPP and follows the specifications in RFC 1661 for the PPP protocol; RFC 2661 spells out all the details of the L2TP protocol and how L2TP works with PPP to establish the session. For further reading on L2TP, see RFC 2661. The L2TP packet is really a blending of the PPP packet and uses IPsec ESP for encryption of the payload. In Figure 7.5 we see the construction of an L2TP packet: starting with the PPP encapsulation, then adding the L2TP and UDP headers, and finally adding the IPsec ESP encryption and the final IP header.
Figure 7.5 L2TP Packet Construction
[Figure content: the PPP header and payload (an IP or IPX datagram) are wrapped with an L2TP header and a UDP header, and the whole unit is then wrapped with the final IP header and IPsec ESP protection.]
When a VPN session is started using L2TP, the steps involved are straightforward. First, a PPP connection is established between the two endpoints. When the PPP connection is up, the endpoints partially authenticate using the Challenge Handshake Authentication Protocol, or CHAP, or the Password Authentication Protocol, or PAP. With the PPP tunnel established, L2TP is brought up over it. At this point, we can have optional authentication of the L2TP sessions.

PPTP
Point to Point Tunneling Protocol, or PPTP, was developed jointly by Microsoft, U.S. Robotics and several other vendors who make up the PPTP Forum. The PPTP protocol is used to create VPNs much like L2TP or IPsec. However, PPTP is not an RFC standard, and its security has been compromised in less than four hours. On the other hand, PPTP supports NAT traversal, so it can make a connection through most routers or firewalls using NAT.

VPN Utilities
PPTP Client
If you need an easy way to configure a VPN from your Linux box to a VPN server, such as a Microsoft Windows VPN server, the PPTP client is an easy way to accomplish this task. In the sample for this section, I have configured a Red Hat 9 client with PPTP. There are packages available for:
• Debian
• Fedora Core 1, 2, 3
• Gentoo
• Mandrake
• Red Hat 7.3, 8, 9
• SuSE 8.2, 9.1, 9.2

To install PPTP on Red Hat 9, we have to do some preliminary work. We first need to download a few packages to start off the installation. We need to get:
• kernel-mppe-2.4.20-31.9.i686.rpm
• kernelmod-0.7.1.tar.gz
• ppp-2.4.2_cvs_20030610-1.i386.rpm
• pptp-linux-1.3.1-1.i386.rpm
• pptp-php-gtk-20030505-rc1.i386.rpm

There are two choices for getting the required MPPE (Microsoft Point to Point Encryption) support into the kernel: we use the kernel-mppe rpm if we do not have PPP configured in our kernel, or the kernelmod patch if we do. Since I had PPP already configured on our test installation, I used the patch. The steps to install the patch are the same as for many of the other tools we have installed so far.
# tar xfz kernelmod-0.7.1.tar.gz
# cd kernelmod
# ./

The building of the RPM at the end of the script is optional. Once the script has finished running, we need to test the patch by doing this:
# modprobe ppp-compress-18

If the modprobe completes silently, our patching was successful.
The following warning is also benign when testing the patch:
Warning: loading /lib/modules/2.4.20-8/kernel/drivers/net/ppp_mppe.o will taint the kernel: non-GPL license - BSD without advertisement clause.
See for information about tainted modules
Module ppp_mppe loaded, with warnings

Our next step is to install the ppp-2.4.2_cvs package using the upgrade option, as we see here:
# rpm --upgrade ppp-2.4.2_cvs_20030610-1.i386.rpm

Now we install the client side:
# rpm -i pptp-linux-1.3.1-1.i386.rpm

And our final step is to install the configuration tool:


# rpm --install pptp-php-gtk-20030505-rc1.i386.rpm

At this point we can start the interface by opening a shell and typing the command pptpconfig &; we should get the GUI interface that we see in Figure 7.6:
Figure 7.6 Opening screen of the PHP interface for PPTP configuration

We can start configuring our VPN tunnels at this point. We will give the connection a friendly name and either a DNS name or an IP address for the server. The domain, username and password fields are self-explanatory. On the Encryption tab, shown in Figure 7.7, we can select Microsoft or other types of encryption.
Figure 7.7 Setting encryption for VPN tunnel

We can set up different types of connections such as VPN to an interface, client to server, server to server and so on. In Figure 7.8 we see our options.


Figure 7.8 Configuring type of connection and routing

Lest you think that pptpconfig is a GUI-only tool, there is also a command line version for those who prefer it. We see in Figure 7.9 how to start the command line version:
Figure 7.9 Configuring PPTP using the command line
[root@RedRum etc]# pptp-command
1.) start
2.) stop
3.) setup
4.) quit
What task would you like to do?: 3
1.) Manage CHAP secrets
2.) Manage PAP secrets
3.) List PPTP Tunnels
4.) Add a NEW PPTP Tunnel
5.) Delete a PPTP Tunnel
6.) Configure resolv.conf
7.) Select a default tunnel
8.) Quit
?: 1
1.) List CHAP secrets
2.) Add a New CHAP secret
3.) Delete a CHAP secret
4.) Quit
?: 2
Add a NEW CHAP secret.

NOTE: Any backslashes (\) must be doubled (\\).


Local Name: This is the 'local' identifier for CHAP authentication.
NOTE: If the server is a Windows NT machine, the local name should be your Windows NT username including domain. For example: domain\\username
Local Name: young\\msweeney
Remote Name: This is the 'remote' identifier for CHAP authentication. In most cases, this can be left as the default. It must be set if you have multiple CHAP secrets with the same local name and different passwords. Just press ENTER to keep the default.
Remote Name [PPTP]:
This is the password or CHAP secret for the account specified. The password will not be echoed.
Password:
Adding secret young\\msweeney PPTP password *

We have just added a new CHAP secret entry using the command line. We can use the CLI tool to add new tunnels, edit tunnels, manage CHAP secrets and more.

OpenSwan
Openswan is a fork of the old FreeS/WAN project, which stopped development in April of 2003. At the time of this book, there are RPMs offered for Fedora Core 2 and 3, Red Hat 7 and 9, RHEL and SuSE, and there is a tarball for compiling your own if you cannot use one of the binaries. As an alternative to Openswan, there is a fork called Strongswan. Strongswan is for those who wish to use digital certificates in the strongest possible manner; it is optimized for using certificates, and to this end has support for Certificate Authorities and for Attribute Certificates.
There is an important point to know about the Openswan binaries. With the binaries you do not need to reboot the server, but you will not have the use of NAT Traversal, or NAT-T, until you do reboot. NAT-T is also still considered experimental, so if you plan to use it on a production network, keep that in mind.


Strongswan, like Openswan, supports 3DES and AES, but takes encryption support further by also supporting Serpent, Twofish and Blowfish. Strongswan has been demonstrated and tested with the following VPN gateways and clients:
VPN Gateways
• Cisco IOS Routers
• Cisco PIX Firewall
• Cisco VPN 3000
• Nortel Contivity VPN Switch
• NetScreen
• Check Point VPN-1 NG

VPN Clients
• FreeS/WAN
• PGPnet
• SafeNet/Soft-PK
• SafeNet/SoftRemote
• SonicWALL Global VPN Client
• SSH Sentinel
• The GreenBow
• Microsoft Windows 2000 and Windows XP

We can see that Openswan and Strongswan are very interchangeable when it comes time to install and configure them. I will use Openswan in my examples, but they should work equally well with Strongswan. Openswan is a Linux implementation of the IPsec protocols. It provides a way to encrypt all IP traffic, unlike a higher level protocol encryption such as SSL, which normally encrypts web traffic, or SSH, which can encrypt remote terminal sessions. There are three primary pieces that make up Openswan:
KLIPS - the kernel IPsec code that handles ESP
Pluto - the IKE daemon
Administrative scripts that provide the interface to the actual machinery
A word about KLIPS: there are two IPsec kernel stacks available for Linux. One is KLIPS and the other is called 26sec. In Fedora Core 3, 26sec is the default IPsec stack, but on many other Linux kernels you will use either KLIPS or a backported version of 26sec. Recent implementations of Openswan support either IPsec stack, but if you can, it is highly recommended to use KLIPS.

One of the best features of Openswan is Opportunistic Encryption, where two Openswan gateways can communicate and set up an encrypted tunnel without the admins having to pre-share any information about the gateways. While this feature is not part of the IPsec standard, it has been submitted for inclusion as a standard feature of all IPsec implementations. Installing Openswan is easily accomplished with the prepackaged binaries. There is one dependency that may or may not be installed on your distro: the ipsec-tools package. On our Fedora Core 2 test server, we had to install the tools, so we just used apt to download and install them, as we see here:
[root@Fedora-Core2 openswan]# apt-get install ipsec-tools

Once the tools are installed, you can use an RPM package manager to install the binaries, or compile the source files. When we have Openswan installed, we need to start editing the configuration file found at /etc/ipsec.conf. With the proper configuration we can have our Openswan gateway talk to another site using Openswan, a Cisco PIX or other IPsec devices. In our sample, we will configure a point-to-point VPN, or what is commonly called a "site-to-site" VPN. This is where you have one site and a second site, possibly a small remote office or even another company.

Installing and Configuring Openswan
For our example we will use the precompiled binaries from Openswan on Fedora Core 2. For those using a 2.4 kernel, your option is to use KLIPS and patch in the NAT-T support; this will require you to build the kernel ipsec.o module. Some 2.4 kernels, like Debian's, have the backported 26sec, in which case this rebuilding is not required. I will assume you have downloaded the binaries and have checked for, and installed if needed, the ipsec-tools we mentioned earlier. The rpm installation is very straightforward. I downloaded the rpm to a new directory in /usr/src called 'openswan', and once it was there, all I had to do was the typical rpm -i openswan* installation and wait for it to finish. When the install is finished, you should reboot to have IPsec come up cleanly. Alternatively, you can try service ipsec start and then verify IPsec using the ipsec verify command, as we see in Figure 7.10:
Figure 7.10 Verifying IPsec is running correctly
[root@Fedora-Core2 root]# ipsec verify
Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.2.0/K2.6.9-1.6_FC2 (native)
Checking for IPsec support in kernel [OK]
Checking for RSA private key (/etc/ipsec.secrets) [OK]


Checking that pluto is running [OK]
Two or more interfaces found, checking IP forwarding [FAILED]
Checking for 'ip' command [OK]
Checking for 'iptables' command [OK]
Checking for 'setkey' command for native IPsec stack support [OK]
Opportunistic Encryption DNS checks:
Looking for TXT in forward dns zone: Fedora-Core2 [MISSING]
Does the machine have at least one non-private address? [FAILED]
[root@Fedora-Core2 root]#

In this sample, Openswan has not yet been configured for use as the VPN gateway, but ipsec verify does confirm that Openswan is in fact running.

Certificates and Keys
Openswan can use certificates or keys to authorize the connection. In our example site-to-site VPN we will be using certificates; if you need a refresher on how to create certificates, you can go back to Chapter 5 and read it there. We have to complete a few steps before we move on, since the VPN will not work at all without the certificates for both ends. Our steps to make our certificates are:
• Generate a CA certificate
• Generate a host or user certificate
• Generate a CRL file
• Prepare the certificate revocation so it is ready if needed

We can also use RSA keys instead of certificates. If we want to use keys, then we need to make a pair of private and public keys for each of our Openswan VPN gateways. We will use the ipsec rsasigkey command to build our RSA key pair, as we see in Figure 7.11 below.
Figure 7.11 Output of the ipsec rsasigkey command
# ipsec rsasigkey --verbose 1024 > gateway1-keys getting 64 random bytes from /dev/random... looking for a prime starting there (can take a while)... found it after 100 tries.


getting 64 random bytes from /dev/random... looking for a prime starting there (can take a while)... found it after 110 tries. swapping primes so p is the larger... computing modulus... computing lcm(p-1, q-1)... computing d... computing exp1, exp1, coeff... output...


The resulting file from this key generation looks like the listing in Figure 7.12:
Figure 7.12 Results of key generation
# less gateway1-keys
# RSA 1024 bits Fedora-Core2 Tue Dec 14 21:16:11 2004
# for signatures only, UNSAFE FOR ENCRYPTION
#pubkey=0sAQPNz2zeWfA1hVbbw1kuyTclvo9driqEP6PlpDY+N8fLewcmDOE+32kDAKdhodmlPe/b5wrg4tXEq4oFEHK4RGpBVq3uK9i971KOPgSadQ==
Modulus: 0xcdcf6cde59f0358556dbc3592ec93725be8f5dae2a843fa3e5a4363e37c7cb7b07260ce13edf690308e7b691500a761a1d9a53defdbe70ae3e250ef528e3e049a75
PublicExponent: 0x03
# everything after this point is secret
PrivateExponent: 0x036e1fae5f5d9a7df4a5cbca050be30d6dd7b9072b6012209929230991ff1030fbda3bf2affb30aefbe1ba826c002ca2918f5f43b76dd7f86cc46d42f859638b637438dbcdd2666f8195a46b3
Prime1: 0xf6b47d2b0979c2ee76c038b96e485ec0097175ea9bc480f665cc0c60b7621aa1c99dc0576c4cfed2e610ba47f00a1d7c22f3d3aa9220c839
Prime2: 0xd5907fd76b0e2c6bfbd4a6f6c0883382951ba8698f916481c0a6b195ba207837fb0b7a51b68c909bd7e86d6bd4f60605f7a2d00f559e2accf76f65069
Exponent1: 0xa478537206512c9ef9d57b2649859480064ba3f1bd2dab4eee88084e3d5cf96bc6bdbbe803a4833548c99607c2ff55c13a8174d37c70c15dad108e992573
Exponent2: 0x8e60553a475ec847fd386f4f2b0577ac6367c59bb50b9856806f2110763d16afacffcb2518bcf086067e5459e47e34eaeaea517355f8e6971ddfa4a4359b
Coefficient: 0x9dd396f83d021af6ce2a951414c6059cd6eadc0103d7c651cde222d14a4c3f10fea686b34300d51f9f14d1b4c7c5e1c1c26ae03e1f2f77a21d20959ea9f2b9

If we use keys instead of certificates, then the PUBLIC key goes into the leftrsasigkey and rightrsasigkey parameters in the conn section of the ipsec.conf file, and the private key goes into its own ipsec.secrets file. We can verify our right and left host keys using the ipsec showhostkey --right or ipsec showhostkey --left command, as we see in Figure 7.13:
Figure 7.13 Showing right and left keys
[root@Fedora-Core2 root]# ipsec showhostkey --right # RSA 2192 bits Fedora-Core2 Sat Dec 11 22:42:23 2004

rightrsasigkey=0sAQOE+UbFoFfw8dOpfZFFyzMqMUOqjIweC4IhiUzxJEV+nPMXg/ CKGXY9hnTu57ivCn/2iyEwo0XD3Hz9+wVzXJwzGu/BR0vTonFCEINgDObWWSii+chtV+Favc4R5 YXRhva9+YI+7Oxn+botk5NoOi9NQSf6Tg4Ss8JOE1GciphPs6Z/5iq34gcoftj8mwJIeoLCQLoH k2y5fDraf/3SwGP/Ad [root@Fedora-Core2 root]# ipsec showhostkey --left # RSA 2192 bits Fedora-Core2 Sat Dec 11 22:42:23 2004

leftrsasigkey=0sAQOE+UbFoFfw8dOpfZFFyzMqMUOqjIweC4IhiUzxJEV+nPMXg/R UM+nSeJYyU2S+JeBxxBd2Ks/bDDpGbad8AODyA8hBWJHINyvv4NR6awdvHE0abBnzxlhcrBxVgF XRhva9+YI+7Oxn+botk5NoOi9NQSf6Tg4Ss8JOE1GciphPs6Z/5iq34gcoftj8mwJIeoLCQLoHk 2y5fDraf/3SwGP/Ad [root@Fedora-Core2 root]#

Configuration
In Figure 7.14 we see the site-to-site map of our soon-to-be VPN tunnel, with Openswan servers in the DMZ at each site. We should discuss NAT and the issues of trying to run VPNs through NAT: for the most part, don't try to do it. There is a feature called NAT Traversal, or NAT-T, that "repackages" the IPsec packets and uses a new port to transport them through NAT, but at this point in time NAT-T is still considered experimental, and enabling it on a production network is not recommended. NAT works by rewriting the headers of IP packets, and this changing of data is exactly what IPsec tries to prevent; when the packet is altered, it is considered untrusted and therefore unacceptable to the endpoint. There is an RFC for NAT-T which details exactly how NAT-T should work. You should firewall the L2TP daemon on the external or untrusted interface so that it is not accessible. To put it more plainly, you MUST firewall the L2TP daemon, otherwise you are exposed to the large security risk of the L2TP daemon answering any packet to port 1701. The fix is relatively easy to implement: firewall off incoming L2TP connections (UDP port 1701) on all interfaces except ipsec0. On our external perimeter firewalls, you need to make sure that UDP port 500 (IKE), along with IP protocols 50 (ESP) and 51 (AH), are open for the VPN server and clients.
Figure 7.14 Diagram of VPN for Site to Site connection over the Internet
[Figure content: LAN A and LAN B, each behind a firewall, with OpenSwan VPN gateways (Left=111.222.333.444, Right=555.666.777.888) joined by an encrypted tunnel over the Internet.]

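The perimeter-firewall requirements just described (IKE on UDP 500, ESP protocol 50, AH protocol 51, and L2TP blocked except on ipsec0) could be sketched with iptables as follows. The WAN interface name is an assumption, and the rules are only printed for review rather than applied:

```shell
#!/bin/sh
# Sketch: perimeter rules for the IPsec gateway. WAN_IF is an assumption.
WAN_IF=${WAN_IF:-eth0}
set -- \
  "iptables -A INPUT -i $WAN_IF -p udp --dport 500 -j ACCEPT" \
  "iptables -A INPUT -i $WAN_IF -p esp -j ACCEPT" \
  "iptables -A INPUT -i $WAN_IF -p ah -j ACCEPT" \
  "iptables -A INPUT -i ipsec0 -p udp --dport 1701 -j ACCEPT" \
  "iptables -A INPUT -p udp --dport 1701 -j DROP"
for rule in "$@"; do
    echo "as root, run: $rule"   # review each line before applying
done
```

The ordering matters: the ipsec0 ACCEPT for UDP 1701 must come before the blanket DROP so that L2TP is reachable only through the tunnel.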
The diagram, along with a quick explanation of the script, will help you understand the configuration. In the ipsec.conf file, convention is that we read our network from left to right, with LAN A on the left and LAN B on the right. So the lines:
left=111.222.333.444
right=555.666.777.888

set the IP addresses for LAN A and LAN B. We could use the parameter right=%any, which allows any IP address instead of a specific one. If we use our certificates, then instead of specifying the RSA keys, we specify the certificates. In this sample, we are configuring the left side, LAN A, as we see here:
/etc/ipsec.conf
conn net-net
        left=%defaultroute
        leftsubnet=
        leftcert=LAN-A-Cert.pem
        right=555.666.777.888
        rightsubnet=
        rightid="C=CH, O=company,"


        rightrsasigkey=%cert
        auto=start

The right side will be a mirror of the left side as shown here:
/etc/ipsec.conf

conn net-net
    left=%defaultroute
    leftsubnet=
    leftcert=LAN-B-Cert.pem
    right=111.222.333.444
    rightsubnet=
    rightid="C=CH, O=company,"
    rightrsasigkey=%cert

These configurations are pretty easy to understand. There are sample configurations for host-to-host and host-to-remote, and they all follow the same basic idea. Before you can use your VPN, you still need to copy over the certificates and keys that you made to each VPN server and place them in these locations:
/etc/ipsec.d/cacerts/caCert.pem
/etc/ipsec.d/certs/LAN-A-Cert.pem

In order to protect our keys when we move them from our CA to the VPN servers, they can be encrypted with 3DES using a symmetric key derived from a passphrase, as in this sample:
# openssl genrsa -des3 -out myVPNserverKey.pem 1024

Once copied onto the VPN gateway, the private key can be permanently unlocked so that it can be used by Pluto without having to know the passphrase as we see here:
# openssl rsa -in myVPNserverKey.pem -out myVPNserverKey.pem

Once we have completed that, we can add a line in the ipsec.secrets file to reflect our new key.

: RSA myVPNserverKey.pem "<optional passphrase>"

If you choose to leave the key encrypted, then you need to give the passphrase. You will repeat this process for the right side of the network or in our example, LAN B.
Because of the weak security, key files protected by single DES will not be accepted by Pluto (IKE Daemon).
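The two openssl steps above can be exercised end to end on any box with OpenSSL installed. This is a sketch under assumptions: the /tmp paths and the passphrase "swordfish" are examples only, not values from the configuration above.

```shell
# Generate a 3DES-protected RSA key; -passout is used here so the sketch
# runs unattended (normally you would be prompted for the passphrase)
openssl genrsa -des3 -passout pass:swordfish -out /tmp/myVPNserverKey.pem 1024

# Permanently unlock the key so Pluto can use it without the passphrase
openssl rsa -in /tmp/myVPNserverKey.pem -passin pass:swordfish \
    -out /tmp/myVPNserverKey-open.pem

# The protected copy is marked ENCRYPTED in its PEM header; the unlocked
# copy is not
grep ENCRYPTED /tmp/myVPNserverKey.pem
```

The passphrase-protected copy is the one you transport between machines; only the unlocked copy should end up in place on the VPN gateway.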

Resources

Openswan Wiki


Linux VPN Firewall script

X.509 Openswan configuration guide
PopTop PPTP Server


Chapter 8
"Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information on it." Samuel Johnson

Logging for Fun and Profit
One of the most important things that can be done for overall network security is to have robust logging in place for sensors, file servers, web servers, routers and any other mission-critical device on the network. The need for robust logging is multifaceted. We need to be able to track network events such as intrusions, but we also need to be able to track mundane items such as who accessed what file and when. If someone tries to hack, or has hacked, your system, the log files can sometimes help in finding out just what happened and where on the system it happened. There are security and business reasons to log data, ranging from building trend analysis of traffic or resource utilization to legal reasons, such as using log files to prove that someone did something, or that something happened and when it happened. Without a log file, it is almost impossible to prove that someone did anything to your system. Just enabling logging is not enough; by default, logs are pretty generic. We need to have a sharp focus and understand what to log, how to log it and when to log it. We also need accurate time stamps, because without a reliable time and date stamp you cannot build a time line, trend accurately or provide legal proof by way of a log file. We will start this discussion by showing you how to set up the correct time on a network and the various devices attached to it.

The primary method to set up accurate time on a network is to use a private or public Network Time Protocol, or NTP, server. Information about NTP is available online, with links to servers, explanations of what NTP is, how NTP really works and much more. When we want accurate time for our log files or any other function on our computers, we will use NTP, and the time will be stated in Coordinated Universal Time, or UTC. (Yes, I know the acronym UTC does not match what should be CUT, but this is what happens when a committee tries to fix a problem and cannot agree on the solution.
The choices were CUT (American) or TUC (French), so they split the difference with UTC. Go figure that one out.) Anyway, back to the matter of time. When we measure our local time using NTP, we will be in a plus or minus reference relative to zero hour UTC. So here on the west coast, we are minus 8 hours from zero UTC. In Table 8.1, we see a chart that gives us the conversion from UTC time to local time zones.




Table 8.1 Conversion from UTC to Local Time Zones

Local Time Zone                                               From UTC   UTC 12:00
ADT - Atlantic Daylight                                       -3 hours   9 am
AST - Atlantic Standard, EDT - Eastern Daylight               -4 hours   8 am
EST - Eastern Standard, CDT - Central Daylight                -5 hours   7 am
CST - Central Standard, MDT - Mountain Daylight               -6 hours   6 am
MST - Mountain Standard, PDT - Pacific Daylight               -7 hours   5 am
PST - Pacific Standard, ADT - Alaskan Daylight                -8 hours   4 am
ALA - Alaskan Standard                                        -9 hours   3 am
HAW - Hawaiian Standard                                       -10 hours  2 am
Nome, Alaska                                                  -11 hours  1 am
CET - Central European, FWT - French Winter,
  MET - Middle European, MEWT - Middle European Winter,
  SWT - Swedish Winter                                        +1 hour    1 pm
EET - Eastern European, USSR Zone 1                           +2 hours   2 pm
BT - Baghdad, USSR Zone 2                                     +3 hours   3 pm
ZP4 - USSR Zone 3                                             +4 hours   4 pm
ZP5 - USSR Zone 4                                             +5 hours   5 pm
ZP6 - USSR Zone 5                                             +6 hours   6 pm
WAST - West Australian Standard                               +7 hours   7 pm
CCT - China Coast, USSR Zone 7                                +8 hours   8 pm
JST - Japan Standard, USSR Zone 8                             +9 hours   9 pm
EAST - East Australian Standard, GST - Guam Standard,
  USSR Zone 9                                                 +10 hours  10 pm
IDLE - International Date Line, NZST - New Zealand Standard,
  NZT - New Zealand                                           +12 hours  Midnight
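The conversions in Table 8.1 are easy to spot-check with GNU date. A small sketch, assuming the zone name America/Los_Angeles (Pacific time, UTC-8 in winter) exists in your zone database:

```shell
# 12:00 UTC on a winter date should read 04:00 in Pacific Standard Time
pacific=$(TZ="America/Los_Angeles" date -d "2004-01-15 12:00 UTC" +%H:%M)
echo "$pacific"    # 04:00
```

The same one-liner with any zone name from your system's tz database will reproduce any row of the table.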

When we configure our NTP service, we will make an allowance for that difference. The various time servers are referenced to an atomic clock for their clocking signal, and we will in turn get our clock from these servers. There are two types of time servers: stratum 1 and stratum 2. Stratum 1 servers give out time to the stratum 2 servers, which are the ones we will use for our clock. There are many public time servers available, and you can also set up your own time server using radio frequency, GPS or the Internet. Lists of public NTP servers are available online.

You should also read the rules of engagement for using the time servers. The typical protocol is to send an email to the owner of the server to let them know you are going to be using their NTP server. If you plan to use NTP on more than just one or two systems, configure one system as the master and point the rest to that server. This will help keep the load on the public servers from becoming too high.

NTP for Linux

In most cases, NTP is already loaded for Linux to use, but if it is not, we can grab the source files and compile them ourselves. Depending on the distro you are using, there may be prebuilt binaries to be found with some searching, but again, as with Snort and SQL, we will use source files for maximum control over what is actually being installed on our computer. If you are using Fedora, you should have at least ntp-4.1.2 if you have kept up with the updates. Just because the package is installed does not mean NTP is running; we have to edit the ntp.conf file to tell our system which time servers to use and then start the ntpd daemon. So we will start by downloading the current NTP source file and installing it, or by finding your required package. The installation of the source files is just like we have done many times so far: configure, make and make install.

Once we have compiled the files or installed the package, we need to edit the ntp.conf file and add the NTP servers that we will use for our reference. In Figure 8.1, I have three public stratum 1 servers that I am using. Figure 8.1 Default listing of ntp.conf
[root@RedRum root]# vi /etc/ntp.conf
# -- CLIENT NETWORK -------
# Permit systems on this network to synchronize with this
# time service.  Do not permit those systems to modify the
# configuration of this service.  Also, do not use those
# systems as peers for synchronization.
# restrict mask notrust nomodify notrap

We need to add our servers, as we see in Figure 8.2, by copying the two commented lines, removing the # signs and then adding the IP addresses of the time servers we want to use. Figure 8.2 Editing ntp.conf to add time servers
# restrict mytrustedtimeserverip mask nomodify notrap noquery
# server mytrustedtimeserverip

restrict mask nomodify notrap noquery
server



When we have our time servers added to the ntp.conf file, we should do a manual synchronization by using the ntpdate command as seen in Figure 8.3: Figure 8.3 Enabling manual time synchronization
# ntpdate -u
25 Nov 09:47:43 ntpdate[16463]: step time server offset 6.790629 sec

Doing this manual update first keeps the initial synchronization from taking too long if we were to just start the service right away with the date or time far from the correct setting. Once we have our update in place, we can start the ntpd service manually. We will configure it to be started automatically later in this section. We will start ntpd manually as shown in Figure 8.4. Figure 8.4 Starting ntpd service manually
[root@RedRum root]# /sbin/service ntpd start
Starting ntpd:                                             [  OK  ]
[root@RedRum root]# pgrep ntpd
16709
[root@RedRum root]# date
Thu Nov 25 11:23:43 PST 2004
[root@RedRum root]#

Let's check our NTP service and make sure we have a good synchronization between our client and the server we specified. I will show you two ways that we can see the details of the NTP service. The first uses the ntpq –p command as we see here in Figure 8.5: Figure 8.5 Using ntpq to see ntp statistics
# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 dewey.lib       ntp1.twc.weathe  2 u    2   64    1   43.386   -6.914   0.015
#LOCAL(0)        LOCAL(0)        10 l   12   64   37    0.000    0.000   0.015

The second way uses the ntpdc command and then the sysinfo parameter as we see in Figure 8.6: Figure 8.6 Using ntpdc and sysinfo to see ntp statistics
[root@RedRum utilities]# ntpdc
ntpdc> sysinfo
system peer:
system peer mode:       client
leap indicator:         00
stratum:                4
precision:              -16
root distance:          0.10155 s
root dispersion:        0.05600 s
reference ID:           []
reference time:         c55335e7.08eb67c2  Sat, Nov 27 2004  9:20:39.034
system flags:           auth monitor ntp kernel stats
jitter:                 0.002258 s
stability:              0.006 ppm
broadcastdelay:         0.007996 s
authdelay:              0.000000 s
ntpdc>

One of the most common problems in configuring NTP is a firewall blocking port 123, which is the port NTP uses. If you see the above printout with a jitter of 4000, odds are high that a firewall is to blame. We just used ntpdc to see the statistics for our NTP service, and there are several other commands that we can use with ntpdc. The version command gives us the version of NTP we are using and the time and date stamp, as shown in Figure 8.7. Figure 8.7 Showing the current version of ntpd
ntpdc> version ntpdc 4.1.1c-rc1@1.836 Thu Feb 13 12:17:22 EST 2003 (1)

The sysstats command gives us good information about how long the system has been up and if we have had any errors such as bad packets or packets rejected as seen in Figure 8.8. Figure 8.8 Using the sysstats command
ntpdc> sysstats
system uptime:          166453
time since reset:       166453
bad stratum in packet:  0
old version packets:    15
new version packets:    189
unknown version number: 0
bad packet format:      0
packets processed:      189
bad authentication:     0
packets rejected:       0

The peers command, shown in Figure 8.9, shows us the NTP peers we are using and some statistics about them, such as polling and delays. We can even use ntpdc to add a server, with the same restrictions we set manually when editing the ntp.conf file earlier. Figure 8.9 Using the peers command to check for errors
ntpdc> peers
     remote           local      st poll reach   delay    offset     disp
=======================================================================
 LOCAL(0)                        10   64   377  0.00000  0.000000  0.00093
*                                 3 1024   377  0.05182  0.000323  0.01486
ntpdc>

To use our new Linux time server on our local network, all we need to do is to edit the ntp.conf file under ‘client network’ and add our own subnet information and mask as we see in Figure 8.10: Figure 8.10 Configuring Linux time server for use on LAN
restrict mask nomodify notrap

This will restrict the time service to nodes on the network only. Our final task is to make sure that our NTP service starts each and every time the server restarts or is rebooted. We will use our friend chkconfig yet again to do this, as seen in Figure 8.11: Figure 8.11 Setting ntp to start on levels 2, 3 and 5 using chkconfig
[root@RedRum utilities]# chkconfig --level 235 ntpd on
[root@RedRum utilities]# chkconfig --list | grep ntpd
ntpd    0:off  1:off  2:on  3:on  4:off  5:on  6:off

Now that we have an accurate clock for our logging, let's move on to configuring logging and see what we can do with it.

Monitoring and Analyzing the Logs

Log files are one of those hidden treasures that people tend to ignore until it is too late. Either the network is suffering or they have been hacked, and now people want to turn to the log files for the answers. And there are none to be had, since logging had either not been enabled or had not been configured well, so the data is lost. A more pressing concern in today's world is that log files are required to press any kind of charges against an intruder. And the logs had better be configured properly and with an accurate time stamp. We have an accurate time stamp covered by the last section on how to configure NTP on our Linux computers. Here is a fact that most folks do not know: there is a real organization dedicated to nothing but logging and analyzing log files. This is the stock in trade of Loganalysis. This not-for-profit group has a wealth of logging-related information and links on their site, and it is well worth the time to look it over.



Many applications and services in Linux generate log files of one type or another. We can log who is on the system, when, how long, what they touched, what services started, when, why not, when an application started, what traffic was blocked, what was allowed and much, much more. The real problem is that we need to configure the log files with the data we need, and we need to be able to quickly analyze that data to make some kind of sense of it. If you have several IDS sensors or firewalls, the amount of data can be overwhelming without some kind of tool to help. What we need to do is identify what information we want to log, where we want to put the logs and how to examine the logs.

Let's start with where we want to put our logs. We can choose to log the data locally, but that opens up a few issues. First, is there disk space available, and secondly, do you want the log files readily available to someone who may have compromised the system and would like to cover their tracks? So in a perfect world, we would have a dedicated syslog server sitting securely on the network somewhere. The idea is to have the server where it's secure and then control the data coming in so it is a one-way street, with strict permissions controlling who can write the data and where it is written. A best practice on LANs is to have all of the management of the network devices on a separate VLAN to help secure access to the data. This would include data like telnet sessions to the switches and routers, SNMP traffic and syslog traffic. This way we can use the VLAN and the routers as barriers to anyone just hopping onto the syslog server. Another option would be to encrypt the logging data as it is sent to the syslog server; stunnel can work quite well in this design. We also need to make sure all non-essential services are shut down, just as we did for our IDS sensor. No X Windows, no telnet, no CUPS and so on.

Our syslog server should do logging and not one thing more. The only service that should be running that is considered "extra" would be our NTP service, since we must have an accurate time stamp on the log files. There should be a firewall set up using iptables that restricts access by protocol, IP address, MAC address or more, depending on your security policies and how paranoid you are.
Slackware has a symbolic link, /var/adm/, to reach the log files. The /var/log directory is still there.

Traditionally, the log files are kept in /var/log and in Figure 8.12 we see what is in my Fedora Core 1 test box log directory. Figure 8.12 Fedora Core 1 test server log files
boot.log cron cups dmesg gdm httpd ksyms.0 ksyms.1 lastlog maillog messages news pgsql rcd rpmpkgs samba scrollkeeper.log secure spooler squid up2date vbox vsftpd.log wtmp XFree86.0.log XFree86.0.log.old XFree86.1.log yum.log

We can see that I have the normal boot, cron, spooler and dmesg log files, plus log files from other applications such as httpd, up2date, Squid and vsftpd. Linux keeps the log files in plain text, so we do not need a special reader just to view the contents of a file. We can use any editor, or even just cat or less. We can use the tail command if we just want to catch the end of the log file and see the most recent events. Speaking of tail, the -f switch will keep tail running and show you new entries as they are written to the log file, as we see here:
tail -f /var/log/messages

The messages log file is where the core messages are written. Here is a sample output of the messages log file using the tail -f command:
Dec 21 11:40:31 RedRum rcd[2294]: id=73 COMPLETE 'Downloading 'Evolution Snapshot' channel icon' time=3s (finished)
Dec 21 11:40:31 RedRum rcd[2294]: id=75 COMPLETE 'Downloading 'rcd snapshots' channel icon' time=3s (finished)
Dec 21 11:48:32 RedRum ntpd[2149]: time reset -0.338949 s
Dec 21 11:48:32 RedRum ntpd[2149]: synchronisation lost

The format is the date and time stamp, then the host generating the log entry, which in this case is RedRum, then the process generating the entry, and finally the log message itself. An alternative method of managing our log files is to use our old friend from managing Snort, Webmin. In Figure 8.13 we see the syslog management screen from Webmin. Figure 8.13 Webmin management of log files



In Figure 8.14, we see the details of configuring the file logging using Webmin. Figure 8.14 Configuring Webmin to manage our log files

This is where we give Webmin all the information about where the log files are kept and how we want our logging configured. Syslog is the process we are concerned with for this section. Syslog is a UDP-based protocol that uses port 514; the RFC that gives all the details on syslog is RFC 3164. Syslog is controlled by init.d, so we can start, stop and restart syslogd either from /etc/init.d/syslog <command> or by sending a HUP signal. Syslog is configured by the /etc/syslog.conf file, and this is where we will start, by telling syslog to send all logging to a network-based syslog server. To do this, we need to add a single line to the configuration file, as we see in Figure 8.15: Figure 8.15 Editing the syslog.conf file to send all logging to server
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                 /dev/console
*.*                                     @

The *.* and the @ followed by the IP address say that we are going to send everything to that host. This could be a host name, as long as we can resolve the name to an IP address. The results are seen in Figure 8.16, where we have a Windows PC using Kiwi Syslog Server receiving the log entries. Yes, I know the book is about Linux, but the reality is that many of us also have to deal with Windows servers on the network. Kiwi is one of the best Windows-based syslog servers around. And yes, I will show you the same redirection for a Linux syslog server.
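For the curious, what syslogd sends to UDP port 514 is just text: a <PRI> number, a timestamp, a host name and the message. A minimal sketch of building such a line by hand; the PRI value 190 (facility local7, priority info) and the host name RedRum are illustrative choices, not values from the configuration above:

```shell
# Build an RFC 3164-style syslog line by hand
msg="<190>$(date '+%b %e %T') RedRum test: hello over UDP 514"
echo "$msg"
```

Pointed at a real syslog server, the same string could be sent with any UDP client; syslogd does all of this for you when you use the *.* entry.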



Figure 8.16 Results from sending all logging to a network syslog server

Making our Linux syslog server accept messages from a remote host is very easy. We first stop the currently running syslog daemon by using /etc/init.d/syslog stop, or we can use killall syslogd.
For Slackware, use these commands to stop and start syslogd: # /etc/rc.d/rc.syslog start|stop|restart

Now we restart the syslog process with /sbin/syslogd but with a couple of new parameters that are shown in Figure 8.17.
To restart syslogd on Slackware with -rm 0, use: # /usr/sbin/syslogd -rm 0

Figure 8.17 Restarting the syslog server to receive remote syslog messages
[root@FedoraC1 etc]# /sbin/syslogd -rm 0
[root@FedoraC1 etc]# tail /var/log/messages
Dec 21 17:09:17 FedoraC1 kernel: Kernel logging (proc) stopped.
Dec 21 17:09:17 FedoraC1 kernel: Kernel log daemon terminating.
Dec 21 17:09:19 FedoraC1 syslog: klogd shutdown succeeded
Dec 21 17:09:19 FedoraC1 exiting on signal 15
Dec 21 17:09:44 FedoraC1 syslogd 1.4.1: restart (remote reception).
Dec 21 17:10:08 syslogd 1.4.1: restart.
Dec 21 17:10:08 syslog: syslogd startup succeeded

We used the parameters -r and -m 0, which respectively enable remote reception of syslog messages and disable the periodic MARK entries. We can see that FedoraC1 has restarted the syslog process with remote reception, and the next lines are from, which is RedRum, sending the message that it has restarted its own syslog process. To make our changes to syslog permanent, we need to edit the /etc/init.d/syslog file to reflect this change, as seen in Figure 8.18: Figure 8.18 Editing the syslogd startup script
start() { echo -n $"Starting system logger: "



daemon syslogd -rm 0 $SYSLOGD_OPTIONS

We have added -rm 0 to the startup script for syslogd, so each time syslogd is started or restarted it will be configured to accept remote syslog messages and not to write MARK entries. Now we will discuss something called facilities and priorities. These two parameters are a way to separate the message flows into more manageable data streams. We can use the facilities to direct remote syslog streams to specific places on our remote syslog server. The priority levels are a way we can manage the quantity of syslog messages we will receive at our remote syslog server. In Figure 8.19 we see the eight levels of priority that syslog uses. Figure 8.19 Eight levels of Syslog Priority

7  debug    Normally used for debugging
6  info     Informational messages
5  notice   Conditions that may require attention
4  warning  Any warnings
3  err      Any errors
2  crit     Critical conditions like hardware problems
1  alert    Any condition that demands immediate attention
0  emerg    Any emergency condition

We have to use caution when setting the alert level, because we could bury our server by asking for too much information at once from too many sources. The facility number can let us have a directory for the PIX firewall, a directory for the switches and yet one more for our Linux servers. The default facility levels can normally be found in /usr/include/sys/syslog.h, and we see the facility list in Figure 8.20: Figure 8.20 Showing the facility codes in syslog.h
/* facility codes */
#define LOG_KERN    (0<<3)  /* kernel messages */
#define LOG_USER    (1<<3)  /* random user-level messages */
#define LOG_MAIL    (2<<3)  /* mail system */
#define LOG_DAEMON  (3<<3)  /* system daemons */
#define LOG_AUTH    (4<<3)  /* security/authorization messages */
#define LOG_SYSLOG  (5<<3)  /* messages generated internally */
#define LOG_LPR     (6<<3)  /* line printer subsystem */
#define LOG_NEWS    (7<<3)  /* network news subsystem */
#define LOG_UUCP    (8<<3)  /* UUCP subsystem */
#define LOG_CRON    (9<<3)  /* clock daemon */




#define LOG_AUTHPRIV (10<<3) /* security/authorization messages (private) */
#define LOG_FTP      (11<<3) /* ftp daemon */

/* other codes through 15 reserved for system use */
#define LOG_LOCAL0  (16<<3) /* reserved for local use */
#define LOG_LOCAL1  (17<<3) /* reserved for local use */
#define LOG_LOCAL2  (18<<3) /* reserved for local use */
#define LOG_LOCAL3  (19<<3) /* reserved for local use */
#define LOG_LOCAL4  (20<<3) /* reserved for local use */
#define LOG_LOCAL5  (21<<3) /* reserved for local use */
#define LOG_LOCAL6  (22<<3) /* reserved for local use */
#define LOG_LOCAL7  (23<<3) /* reserved for local use */

The facilities we will be most interested in for our remote servers will be those from 16 to 23, which are reserved for local use. If you recall, we used the local levels when we configured Snort logging in Chapter 6. Our syslog server can have different settings using a mix of facility and priority. We can specify a certain facility, a priority or a wild card too:

*.alert       all facilities with alert priority
7.!=emerg     facility 7 and all priorities EXCEPT emergency
7.=err        facility 7 and ONLY error messages
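The (n<<3) values above and the priority numbers combine into the PRI field that a syslog message carries on the wire: the facility number times 8 plus the severity. A quick sketch, using local7 (facility 23) at priority info (6):

```shell
# PRI = facility * 8 + severity; (23<<3) in syslog.h is the same multiply-by-8
facility=23   # local7
severity=6    # info
pri=$(( facility * 8 + severity ))
echo "<$pri>"    # <190>
```

This is the number you see in angle brackets at the front of a raw syslog datagram.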

You can have more than one facility selected with a single priority by using a comma to separate the facilities:

sensorA,sensorB.alert    /var/log/snort/alerts

Each entry in syslog.conf, along with its facility.priority, will have an action. The action is separated by a tab, not spaces, as spaces will cause problems. We can choose from these actions:
• A full pathname, in which case messages of the specified type are written to the file specified by this path
• An "at" sign (@) followed by the network address of a machine that's configured to receive syslog messages
• A user or list of users (separated by commas), or all currently logged-on users (*)
• The name of a device, such as a console or a named pipe

Why is this important? For example, some applications, like Shorewall, which we discussed in Chapter 2, use facility 8 to log the syslog messages they generate. Most times, using a priority of 6 (info) is plenty for logging your Shorewall activity. You would add the mapping of the facility and priority into syslog.conf, along with the directory location for the log file.

What to Look for

Having all these log files is good, but what the heck do we want to look at? Well, the messages log, for example, is the "catch all," and many intrusion attempts will show up in this log file. For example, below we see a typical message that alerts someone, hopefully you, to a failed root login attempt:
Dec 27 15:16:52 RedRum su(pam_unix)[14508]: authentication failure; logname=msweeney uid=500 euid=0 tty= ruser=msweeney rhost= user=root

This message was in the /var/log/messages log file and was found using grep. This is why we need some type of tool to sort through all the messages. We need log analysis tools to aid us by finding matches and then sending an alert right away to the administrator or other responsible party. The secure log is another interesting log file to watch, as we see in this example:
Dec 27 15:16:42 RedRum sudo: msweeney : user NOT in sudoers ; TTY=pts/5 ; PWD=/home/msweeney ; USER=root ; COMMAND=/bin/su

In this example, the user tried to use the command sudo but the user was not approved to use this command and it shows up here in the log file. Depending on how you update your distribution, there will be a log file kept detailing what was done and when. In this example we see the log file for yum:
[root@RedRum log]# tail -10 yum.log
09/13/04 10:20:09 Dep Installed: id3lib
09/13/04 10:20:09 Updated: nmap
09/13/04 10:20:09 Updated: grip 1:3.0.7-fr1.i386
09/13/04 10:20:09 Updated: nmap-frontend

Now, if you were not the one installing NMAP, you should start to worry about who had the access and the privileges to install it, since NMAP can be used for good but can also be a hacker's friend. Without looking at the log file, you may never have known it was installed.

Tuning Syslog

I hope by now you have come to understand that syslog is our friend and you are its master. There are some cool tweaks we can do to enhance our syslog logging capabilities and protect our log files. For Red Hat users, add these lines to the /etc/syslog.conf file:
*.warn;*.err                    /var/log/syslog
auth.*;user.*;daemon.none       /var/log/loginlog
kern.*                          /var/log/kernel

For those who use Slackware, add these lines to /etc/syslog.conf:

*.warn;*.err                    /var/adm/syslog
mail.*                          /var/adm/maillog
auth.*;user.*;daemon.none       /var/adm/loginlog
kern.*                          /var/adm/kernel

Once you have made these changes, you need to create the files for the logging to work as we see here:
# touch /var/log/syslog # touch /var/log/loginlog # touch /var/log/kernel

Now restart the syslog process to pick up the changes we just made. If, after making our edits and restarting syslog, we log in as a user and do a tail on loginlog, we see this:
[root@RedRum log]# tail loginlog Dec 27 21:42:51 RedRum sshd(pam_unix)[15862]: session opened for user msweeney by (uid=500) Dec 27 21:42:57 RedRum sshd(pam_unix)[15862]: session closed for user msweeney [root@RedRum log]#

This was a test login using SSH to verify that the new log of loginlog is working. Now you can track who has logged into the system and when. Next we need to tighten up our security of the log files themselves. This is easily accomplished by following these suggestions:
• chmod 600 /var/log/syslog
• chmod 600 /var/log/loginlog
• chmod 600 /var/log/kernel
• chmod 600 /etc/syslog.conf
• chmod 600 /var/log/cron
• chmod 700 /var/log/httpd
• chmod 600 /var/log/httpd/*
• chmod 600 /var/log/maillog
• chmod 600 /var/log/messages
• chmod 600 /var/log/mysql
• chmod 600 /var/log/netconf.log
• chmod 700 /var/log/samba
• chmod 600 /var/log/samba/*



• chmod 600 /var/log/
• chmod 600 /var/log/secure
• chmod 600 /var/log/spooler
• chmod 700 /var/log/squid
• chmod 600 /var/log/squid/*
• chmod 600 /var/log/xferlog
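You can confirm that a tightened mode took effect with stat. The sketch below uses a scratch file in /tmp standing in for the real /var/log targets listed above, and stat -c is GNU coreutils syntax:

```shell
# Demonstrate on a scratch file rather than a live log
touch /tmp/loginlog.sample
chmod 600 /tmp/loginlog.sample
stat -c '%a' /tmp/loginlog.sample    # 600
```

Running the same stat against each path in the list above is a quick audit that the permissions stuck.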

These tips for tuning syslog come from an excellent site by David Ranch, who has condensed years of Linux experience into some outstanding web pages.

Rotating Logs

Without some kind of log rotation scheme, our syslog log files would quickly fill all available disk space and become too large to use effectively. So you need a log rotation schedule and a method of rotating the log files. For the standard Linux distribution, there is an application called "logrotate" that assists us with this. The file that controls logrotate is logrotate.conf, found at /etc/logrotate.conf. We can configure logrotate to rotate not only the standard messages and log files but also log files from applications such as Apache, MySQL, Squid and others. The default rotation schedule can be seen in Figure 8.21: Figure 8.21 Default logrotate.conf file
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
#compress

We can see that our default is to rotate the logs weekly and to keep four weeks of log files. The host will also create new and empty log files to replace the rotated log files. The compression of the old log files is disabled in this example and by default. You can also call external configuration files by using the include command as we see here:
include /home/squid/cfg/logrotate.d

The include statement will allow you to have custom configurations for other programs with log files of their own, rotated on a schedule custom to that application. These applications might be Squid or a firewall. Within the logrotate.d file, the format is the same as for the logrotate.conf file. In Figure 8.22, we have a sample of what could be in the called configuration file for your Squid application: Figure 8.22 Externally called logrotate configuration file
/home/squid/logs/squid/store.log {
    weekly
    rotate 5
    copytruncate
    compress
    notifempty
    missingok
    endscript
}

We have some new commands to work with in this example. The new commands are listed here with an explanation of what they do:

• copytruncate: copy and truncate the original log file instead of renaming it and creating a new log file
• notifempty: do not rotate the log if it is empty
• missingok: do not issue an error message if the log file is missing. Use this command with caution, since in most cases the log file should not be missing
• endscript: signifies the end of the script and goes back to the calling script


Logrotate is normally run by a cron job that is automatically configured when Linux is installed. You can change the cron job to meet your own requirements without too much trouble. In Figure 8.23, we see the system crontab file, which is located at /etc/crontab: Figure 8.23 Typical crontab file
[root@RedRum root]# less /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
1 * * * *   root run-parts /etc/cron.hourly
2 4 * * *   root run-parts /etc/cron.daily
22 4 * * 7  root run-parts /etc/cron.weekly
42 4 1 * *  root run-parts /etc/cron.monthly

# This file was written by KCron. Copyright (c) 1999, Gary Meyer

The format of a crontab file is easy to follow:
minute hour day month dayofweek command

To use this format, I have a sample crontab file below that will start a shell script on the first day of every month at 4:00am:
# run custom script the first day of each month at 4:00AM
1 4 1 * * /root/scripts/

For our Squid log files, we could use something like this sample:
30 5 * * * /usr/sbin/logrotate -s /home/squid/cfg/logrotate.status /home/squid/cfg/logrotate.conf

This crontab entry will start logrotate at 5:30am each day and use the logrotate.conf file located in the /home/squid/cfg directory. The -s option and the path for the logrotate status file tell logrotate to put the status file in the same directory as the configuration file, which is owned by Squid. The user “squid” probably does not (and should not) have permission to write to the default status location of /var/lib/logrotate/status, and without a writable status location, logrotate will die. To edit the crontab file for the user squid, you can use vi to edit the user crontab file directly, or you can use the command crontab -e to edit the user file with the editor named by the VISUAL or EDITOR environment variable.
To set your default editor type, use the command: export EDITOR=<editor>

When you use the crontab -e command, you will have only that user's crontab available for editing. You will find the user-defined crontab files in /var/spool/cron/<username>.

Slackware keeps the user crontab files in /var/spool/cron/crontabs, SuSE keeps them in /var/spool/cron/tabs, and BSD keeps them in /var/cron/tabs.
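As a small shell sketch putting these pieces together, the crontab entry can be built in a variable and handed to crontab non-interactively; the paths are the hypothetical ones from this section, and the commented crontab -u invocation requires root:

```shell
# Hypothetical paths, matching the Squid example above
CONF=/home/squid/cfg/logrotate.conf
STATE=/home/squid/cfg/logrotate.status

# Run logrotate at 5:30am daily with a status file the squid user can write
ENTRY="30 5 * * * /usr/sbin/logrotate -s $STATE $CONF"
echo "$ENTRY"

# To install it for the squid user (as root), append it to any existing entries:
#   ( crontab -l -u squid 2>/dev/null; echo "$ENTRY" ) | crontab -u squid -
```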

Syslog Improved

I will quickly go over a replacement for the syslogd that comes with Linux called “syslog-ng.” Syslog-ng is available for download from its project site. It addresses many of the shortcomings of the original Linux syslogd and is very popular with administrators and security engineers as a replacement. Users of SuSE and Debian already know this, since their distros ship syslog-ng as part of the normal distribution. If you have a background in any kind of structured programming, you will feel right at home configuring syslog-ng. Each statement in the syslog-ng configuration file ends with a semicolon, and white space is ignored, so you can indent all you want for pretty code.

To install syslog-ng, we also need to download a library called “libol” from the same site. I tested syslog-ng 1.6 with libol 0.3. You need to install both with the normal ./configure, make and make install routine that we are all used to. Once you have installed syslog-ng, do not start it right away; you still need to put a configuration file in place. You can find sample configurations for Red Hat, HP-UX, Solaris and SunOS in syslog-ng-1.6.5/contrib/. These will let syslog-ng act as a “drop-in” replacement for syslogd until you add the bells and whistles. In the same directory is a program called “syslog2ng” that will convert your existing syslog.conf file to the new syslog-ng format. You will also find the init.d scripts for each sample. To use one of the samples, copy it to /usr/local/etc/syslog-ng, which is the default location for the syslog-ng configuration file in version 1.6, and make sure to rename it syslog-ng.conf. For my test system, I used the Red Hat sample file and copied it over to the new location. Once the new configuration is in place, stop the existing syslogd and then start syslog-ng. As we can see in Figure 8.24, the first entry from my secure log file is from syslogd and the next two are from syslog-ng:

Figure 8.24 Comparing syslogd entries to syslog-ng
Jan  4 19:39:52 RedRum sshd[11933]: Accepted password for root from ::ffff: port 1199 ssh2
Jan  4 20:49:58 RedRum sshd[16147]: Illegal user msweneey from ::ffff:
Jan  4 20:50:02 RedRum sshd[16147]: Failed password for illegal user msweneey from ::ffff: port 1358 ssh2

Configuring the syslog-ng.conf file could fill an entire book of its own given how flexible it is. To get started, use the sample files and build on those. The syslog-ng manual is available for download, and there is an FAQ on writing your configuration files that includes configuration examples. There are many other resources on the Internet, including a couple of good papers, to help you learn how to configure syslog-ng and use it effectively.
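To give a feel for that structured syntax, here is a minimal syslog-ng.conf sketch in the 1.6-era source/filter/destination/log style; the file paths and names are assumptions, so check them against the contrib/ samples:

```
# where messages come from: the local socket plus syslog-ng's own messages
source s_local { unix-stream("/dev/log"); internal(); };

# select only authentication messages
filter f_auth { facility(auth, authpriv); };

# where matching messages are written
destination d_secure { file("/var/log/secure"); };

# tie the three together; note every statement ends with a semicolon
log { source(s_local); filter(f_auth); destination(d_secure); };
```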



Securing Syslog Traffic

We can always choose to replace the standard syslog with a secure version. Some of these secure the transmission of the syslog traffic, and others encrypt the actual log files to keep someone from editing and changing the logs. The Windows event log converters are included here with the understanding that if you build a standalone syslog server, ALL devices should send their syslog data to that server, including the Windows servers. Most of us have to support a mixed network, so I added a few possibilities to start you off on your hunt for the perfect converter.

Secure syslog replacements:
• Secure syslog
• syslog-ng
• Nsyslogd
• Secure BSD Syslog
• SDSC Secure Syslog

Windows to syslog converters:
• SNARE
• Kiwi Logger
• Ntsyslog

We can also use SSH to tunnel our syslog traffic by configuring an encrypted tunnel, or we can use stunnel to do the same.

Configuration guides are available for:
• Cisco PIX Syslog
• Cisco Switch
• Apache
• Microsoft IIS
• WU-FTP
• Sendmail
• POP and IMAP
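The stunnel option mentioned above can be sketched with a pair of configuration fragments. Plain syslog is UDP, so this assumes a daemon such as syslog-ng that can send and receive log traffic over TCP; the port numbers, host name and certificate path are assumptions:

```
# --- sending host: wrap local TCP syslog traffic in TLS ---
client = yes
[syslog]
accept  =
connect = loghost.example.com:5140

# --- log server: terminate TLS, hand off to the local syslog daemon ---
cert = /etc/stunnel/stunnel.pem
[syslog]
accept  = 5140
connect =
```

Each fragment would live in its own stunnel.conf on its respective host.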



Sawmill

Sawmill is the closest thing to a universal log file analyzing tool that I have seen. There are very few formats that Sawmill cannot read and use. At last count, Sawmill can read over 500 log file formats, including Apache, Cisco, tcpdump, Sendmail, FTP, SMTP, POP and many more. If you are so inclined, you can even define your own format if required. Sawmill can use an internal database, or you can configure it to use MySQL. You can have multiple profiles for the log files, customized as you need. The reporting is flexible, and all it needs is a browser to view the reports. I think by now you get the picture that Sawmill is a very comprehensive log analysis application. While it is a commercial application, the pricing is very reasonable, and if you are willing to be a beta tester, they will give you a free license in exchange for some of your time running the various betas. So let's get started installing and using Sawmill. The installation of Sawmill can be from prepackaged binaries or from encrypted source files. In our example we will download the Linux binary tarball and use the normal tar zxvf command to decompress it and set up the directories. To start Sawmill, we just change to the Sawmill subdirectory and run ./sawmill7.0.9a & as we see in Figure 8.21:

Figure 8.21 Installing and starting Sawmill
[root@RedRum sawmill]# tar -zxvf sawmill7.0.9a_x86_linux.tar.gz
[root@RedRum sawmill]# ./sawmill7.0.9a &
Sawmill 7.0.9a; Copyright (c) 2004 Flowerfire
Web server running; browse to use Sawmill.
To run on a different IP address, use "sawmill -sh ip-addr -ws t"

Once Sawmill is up and running, we just point our web browser to the Sawmill server using port 8987 and start by logging in and setting up our first profile as we see in Figure 8.22: Figure 8.22 Setting up profiles in Sawmill



In this profile we will use the local disk as the source of the log file, but we could use FTP or HTTP if we had remote log files to analyze. If Sawmill cannot automatically determine the correct format, it will present you with a dialog box to choose the format you would like to use, as we see in Figure 8.23:

Figure 8.23 Choosing log file format

In setting up the new profile, there will be questions about displaying the data, depending on the type of log file, and there will be a question about which database to use, internal or MySQL. The last step is to give the profile a name; then you can use the new profile to start viewing the statistics and data. In Figure 8.24 we see the results of setting up a profile to analyze an Apache web server's log files.

Figure 8.24 Viewing Apache log files with Sawmill



Sawmill is a very powerful and flexible tool for analyzing your various log files. The demo is free and will run for 30 days, which is plenty of time for you to get hooked on the power of Sawmill.

Logwatch

Logwatch is an open source program that will check your log files and generate a summary of the data. It is a command line tool with a very small footprint and is pretty flexible in what it can do. You can download Logwatch as either a tarball or an RPM package. Red Hat 7.0 and newer include Logwatch as part of the distribution, but you should get the newest version and upgrade using the rpm -U switch. Logwatch is configured by parameters in the /etc/log.d/conf/logwatch.conf file. This is where you can set the default level of detail, where the log files will be found and more.
Logwatch configuration files are case-insensitive. To preserve the case you need to enclose the text in double quotes.

These defaults will be overridden by any command line options you set. There are some parameters that you should check and configure to match your systems, as you see here in Figure 8.25:

Figure 8.25 Basic logwatch.conf configuration
# Default Log Directory
# All log-files are assumed to be given relative to this directory.
LogDir = /var/log
# You can override the default temp directory (/tmp) here
TmpDir = /tmp

# Default person to mail reports to.  Can be a local account or a
# complete email address.
MailTo = root

# If set to 'Yes', the report will be sent to stdout instead of being
# mailed to above person.
Print = No

# The default time range for the report...
# The current choices are All, Today, Yesterday
Range = yesterday

# The default detail level for the report.
# This can either be Low, Med, High or a number.
# Low = 0
# Med = 5
# High = 10
Detail = Low

# The default service(s) to report on.
# (in /etc/log.d/scripts/services/*) or 'All'.
# This should be left as All for most people.
Service = All

# some systems have different locations for mailers
# mailer = /bin/mail

These are the basics to configure, or you can leave them at the defaults. To use Logwatch, just type the command logwatch followed by any parameters you need. In Figure 8.26 we see the results from running logwatch with high detail and output printed to the screen.

Figure 8.26 Running logwatch with details
[root@RedRum utilities]# logwatch --range all --detail high --print

################### LogWatch 5.2.2 (06/23/04) ####################
Processing Initiated: Sat Nov 27 07:05:12 2004
Date Range Processed: all
Detail Level of Output: 10
Logfiles for Host: redrum
################################################################

--------------------- Cron Begin ------------------------

Commands Run:
   User adm: personal crontab listed: 2 Time(s)
   User apache: personal crontab listed: 2 Time(s)
   User bin: personal crontab listed: 2 Time(s)
   User games: personal crontab listed: 2 Time(s)
   User gdm: personal crontab listed: 2 Time(s)

In Figure 8.27 we ask logwatch to find matches for sshd in today's log files, using a range of today with high detail, and print the results to the screen.



Figure 8.27 Querying the log files for sshd with logwatch
# logwatch --service sshd --range today --detail high --print

################### LogWatch 5.2.2 (06/23/04) ####################
Processing Initiated: Tue Dec 28 10:54:28 2004
Date Range Processed: today
Detail Level of Output: 10
Logfiles for Host: redrum
################################################################

--------------------- SSHD Begin ------------------------

Users logging in through sshd: root: 1 time

---------------------- SSHD End -------------------------

###################### LogWatch End #########################

[root@RedRum log]#

You can see from this example that Logwatch gives us good detail and a rudimentary report format to work with. Logwatch has several options to work with, and the man page is pretty good at laying them out clearly. These basic settings will normally get you started using Logwatch, and from there you can customize it to your particular needs and desires.

Swatch

Swatch fills in that area of log file management where you get the information but do not have the time to manually go through each and every entry to see if it is important. Swatch is a near real-time log tool that can continuously watch the log files and look for matches based on simple rules. Swatch is built using Perl. You can run Swatch as a cron job or as a daemon. Swatch requires the following at a minimum:
• Perl 5
• Time::HiRes
• Date::Calc
• Date::Format



Some other files that may be needed depending on your distribution and Perl configuration are:
• Carp-Clan
• Bit-Vector-6.4
• File-Tail
• Lingua::EN::Words2Nums

You might want to take this time to set up CPAN to aid you with installing the various Perl packages. There are many useful scripts found on the Internet, and installing the various Perl packages by hand can be a problem. CPAN helps with this by having mirrors and a package-manager type of interface to download and install the Perl modules or extensions as required. To update your CPAN installation itself, run the following command and work through the script's prompts:
[root@RedRum swatch]# perl -MCPAN -e "install Bundle::CPAN"

/usr/lib/perl5/5.8.0/CPAN/ initialized.

CPAN is the world-wide archive of perl resources. It consists of about 100 sites that all replicate the same contents all around the globe. Many countries have at least one CPAN site already. The resources found on CPAN are easily accessible with the module. If you want to use, you have to configure it properly.

If you do not want to enter a dialog now, you can answer 'no' to this question and I'll try to autoconfigure. (Note: you can revisit this dialog anytime later by typing 'o conf init' at the cpan prompt.)

Are you ready for manual configuration? [yes] yes
::: truncated for brevity :::

To use CPAN to download the files, we can use the following command that we see here to download the Date::Calc file we need for Swatch:
# perl -MCPAN -e "install Date::Calc"
CPAN: Storable loaded ok
Going to read /root/.cpan/Metadata
Database was generated on Sun, 26 Dec 2004 21:50:21 GMT
Date::Calc is up to date.
#



In this case, the requested module, Date::Calc, was already installed and up to date. The format to use is “install <module name>”. You need to get the spelling and the capitalization correct on the module name. You can also download the files manually, unpack them, and then install them by hand like this:
perl Makefile.PL
make
make test
make install
make realclean

Watch the results from the make and make test steps to see if there are any errors about missing files. These will need to be corrected before you continue. On my sample system for this section, I used a Red Hat 9.0 system and fully patched it before I started the Swatch installation. I had also made sure that Perl 5 was installed. For this system to run Swatch, I had to install the following modules in this order:
• Carp::Clan
• Bit::Vector
• Date::Calc
• Time::Parse
• Time::HiRes
• Lingua::EN::Words2Nums
• File-Tail-0.98
• Swatch

In the /examples subdirectory, there are two sample files. In the swatchrc.monitor file you will find that the defaults are for Solaris and that a pager number is in place; you will need to edit these before using the swatchrc.monitor file. The second example file is swatchrc.personal, and it also has defaults for Solaris. Swatch has three default conditions that it looks for in the syslog files:
• Bad logins
• System restarts
• System crashes
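A minimal swatchrc expressing conditions like these might look as follows; the regular expressions and the mail address are assumptions you would adapt to your own syslog messages:

```
# highlight failed logins on screen
watchfor   /FAILED LOGIN|Failed password/
    echo bold
    bell

# mail root when a host restarts
watchfor   /reboot|restart/
    mail addresses=root,subject="System restart seen in syslog"

# possible crashes
watchfor   /panic|Oops/
    mail addresses=root,subject="Possible system crash"
```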

Of course you can add to this list as you see fit or as required. Swatch is written in Perl, which means that you can easily adapt or modify it to suit your needs. But there are some downsides to using Swatch to examine your log files, such as having to have Perl installed. An alternative to Swatch is Logsurfer, which is written in C and brings some extra features to the table, at the cost of not being so easily modified unless you are a C programmer.

LogSurfer

Again, trying to sort through all the entries in a log file is almost impossible by manual means, and some people need more power than Swatch can bring to the fight; enter LogSurfer. LogSurfer is not a Perl script; it is written in C and is in some ways a lot easier to set up. LogSurfer is much like Swatch but offers a few enhancements over Swatch, such as:
• Does not require Perl
• Includes timeouts and resource limits
• Uses “contexts,” or collections of messages, instead of a single line at a time
• Works on any text file
• Allows you to specify exceptions to the rules

So while you cannot play with the guts of LogSurfer like you can with Swatch, it does offer some powerful features that Swatch does not have. If you want to go further with LogSurfer, you can download a branch of LogSurfer that has some improvements over the old 1.5b version. Optionally, if you already have LogSurfer 1.5 installed and working, you can apply their patch to bring LogSurfer to 1.6. There is also a resource with many “recipes” for LogSurfer. The rules that LogSurfer uses are comprised of four basic parts:
• A description of the matching message or text
• A description of when the rule should be removed
• The maximum time in seconds that the rule should be active
• An action to be executed when the rule is triggered

Our action can be one of the following:
• ignore: ignore the line
• exec: run an application
• pipe: the log line is sent to an application's stdin
• open: open a new context
• delete: delete an open context
• report: run an application and pipe in the context data
• rule: create a new rule



An example of a rule for LogSurfer can be found in the man pages, and I have reproduced the sample rule here for our discussion:

'.*' - - - 0 exec "/bin/echo $0"

In this sample, the '.*' says to match everything. The next three dashes say that there will not be any exceptions or self-destroying rules. The 0 says there will not be a timeout, and finally, our test rule executes /bin/echo with $0. We can start to see that the structure of the rules is very simple. But we can also make very complicated rules, and since the rules are dynamic, unless you pay attention to your rules it can get rather messy.

Nagios

Nagios is a logging and system monitoring application that uses a GUI and can monitor services such as SMTP, POP, HTTP and NNTP. In a prior life, Nagios was called “NetSaint,” until fairly recently when the name changed to Nagios. Nagios can also be used to monitor other host resources such as temperatures, CPU utilization, disk space and more. Unlike some other monitoring applications, Nagios can send a page, send an email or use some other type of alert system, depending on how you configure it. The interface uses a browser, so it is very easy to use regardless of where you are. The source tarball or RPMs are available for download. The source files can be compiled with the defaults, or you can set the various paths in the configure command. When you install Nagios, you should create a dedicated user and avoid running the application as root, to avoid the security risks of using root. You also need to make the directory where Nagios will be located:
# adduser nagios
# mkdir /usr/local/nagios
# chown nagios.nagios /usr/local/nagios

Next we need to identify the user that apache uses.
# grep "^User" /etc/httpd/conf/httpd.conf
User apache
#

At this point we create a new group called nagcmd and add the apache and nagios users to it:
# /usr/sbin/groupadd nagcmd
# /usr/sbin/usermod -G nagcmd apache
# /usr/sbin/usermod -G nagcmd nagios

Now we can run our configure script. In most cases the defaults for configure will work fine, but if needed, you have some options that we see in Figure 8.28:

Figure 8.28 Configure Options for Nagios

--prefix=prefix               default is /usr/local/nagios
--with-cgiurl=cgiurl          default is /nagios/cgi-bin
--with-htmurl=htmurl          default is /nagios/
--with-nagios-user=someuser   default is nagios
--with-nagios-grp=somegroup   default is nagios
--with-command-grp=cmdgroup   default is nagios

Now we continue with:

# make
# make install
# make install-init

This is the basic installation of Nagios, but you still need to install the various plugins that make Nagios really work for you. The plugins are distributed separately and are placed in /usr/local/nagios/libexec; they can be scripts or binaries. Next, we need to edit our web server's httpd.conf file and add the following block, which creates the alias for the CGI scripts:
ScriptAlias /nagios/cgi-bin /usr/local/nagios/sbin
<Directory "/usr/local/nagios/sbin">
    AllowOverride AuthConfig
    Options ExecCGI
    Order allow,deny
    Allow from all
</Directory>

Next we need to create the alias for html files with this script:
Alias /nagios /usr/local/nagios/share
<Directory "/usr/local/nagios/share">
    Options None
    AllowOverride AuthConfig
    Order allow,deny
    Allow from all
</Directory>

You need to save the httpd.conf file and then restart httpd. Once the web server has restarted we can run a basic test by pointing a browser at http://<yourmachine>/nagios and you should get the basic Nagios interface. In Figure 8.29 we see the basic web interface for Nagios after some basic network objects have been configured:



Figure 8.29 The basic interface of Nagios
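To give a flavor of what those object definitions look like, here is a hypothetical host and matching ping service; the host name, address and check_ping thresholds are assumptions, and the required directives can vary by Nagios version:

```
# a host object for a router we want to watch
define host{
    host_name           gateway
    alias               Border Router
    address   
    max_check_attempts  3
    }

# ping it every five minutes
define service{
    host_name               gateway
    service_description     PING
    check_command           check_ping!100.0,20%!500.0,60%
    normal_check_interval   5
    max_check_attempts      3
    }
```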

The details of configuring Nagios for your network are beyond the scope of this book, since everyone's network is different, with its own requirements and needs. To continue the configuration, you can read the Nagios documentation for detailed information on how to configure objects and their properties customized to your network.

Resources



Chapter 9
"Whatever enables us to go to war secures our peace." Thomas Jefferson to James Monroe, 1790



We have spent a lot of time working through various aspects of network security, installing open source tools and examining some commercial products. Now we have to bring it all together into something that can help us and make some sense at the same time. As you should see by now, security is not an end in itself but an ongoing process that encompasses not just the traditional idea of firewalls but proactive measures and procedures. We have seen that many of these tools and techniques can be used to defend or to attack. It is up to us to use each tool wisely and effectively, and that starts with learning about the tools and how to configure them to do what we want and need them to do.

We started with the basic building block of most modern networks: the TCP/IP protocol suite. We examined how it works and how it can be used for good (or twisted) purposes and turned against our networks with rogue packets and nefarious data streams. We looked into the future of TCP/IP with the introduction of IPv6, how it differs from the “old” IPv4 and what it brings to the table with features such as built-in IPsec.

We looked at firewalls and routers and how they are similar but definitely not the same, in spite of the media using the two words interchangeably. We learned that firewalls range from very simple devices offering minimal protection to very sophisticated devices that are exceedingly difficult for a hacker to break through. We saw that Linux has firewalling built into the kernel, and that it can be configured to provide a reasonable level of protection and risk management. There are firewalls that will fit on a single floppy disk or run from CD-Rs to provide an extra level of protection for the configuration.

We have seen that the open source tool Snort, while free, can be a very sophisticated IDS and a powerful addition to the more traditional firewall-centric network security design. We have also seen that using an IDS such as Snort is much more than just throwing a sensor on the wire and calling it done. We can bring in the power of search queries and trending by using MySQL as the database for all the Snort information, and then add web-based tools like ACID to view the various reports and events. IDS protection does not have to be like visiting the dentist.

We learned that encryption is your friend for security and is used virtually every day for data and email. We learned how easy it can be to configure our own encryption and how to wade through the alphabet soup of acronyms like DES, AES and RSA. We explored these algorithms and saw how they fit into the world of encryption and ciphers.
We saw that the popular telnet replacement SSH is in reality much more than that: SSH can securely copy files, provide secure logins from public places, serve as the poor man's VPN and much more with just a bit of imagination on our part. Speaking of VPNs, we learned about Openswan and PPTP and that Linux can be very flexible when configuring VPNs. Linux can talk to Linux, Windows and other firewalls like the Cisco PIX with just a little bit of coaxing. We finally learned what the lock on the browser really means, how we can have our own X.509 certificates and what certificates can do for us. We also learned that, while Linux is incredibly flexible, to be secure we must be proactive and tune the installation to minimize our risks. We saw that we need to shut off unused services or use replacement services when needed or required. And lastly we learned that with some free tools, applications and some common sense, we do not need to spend a ton of money for risk management. And that really is what we do: manage the risk to our data and keep that risk at an acceptable level. In the end, I hope you found this book useful in your efforts at securing the network using Linux and keeping the bad guys at a distance. In closing, I have some suggestions to start securing your network with Linux:
• Do not allow remote root logins. Do not use root locally; you should log in as a user and then either su or use sudo.
• Take passwords SERIOUSLY. Do not give them out, do not use the root password anywhere else, and do not write them down. Use strong passwords. The password should not be the name of your pet, children, wife, husband, girlfriend, boyfriend, your own name, the server name or any other common name. Use a mix of alphanumeric characters and symbols, and remember that passwords are case sensitive, so make use of that feature. Lists of common passwords to avoid are easy to find online.
• Avoid like the plague any application that transmits passwords in clear text. One of the greatest offenders is telnet, and it should always be replaced with SSH, stunnel or another method of encryption.
• Use the local firewall rules or a HIDS to protect your server(s) and other critical hosts. The FBI reports that over 70 percent of intrusions come from internal attacks, so the lesson is: do not trust anyone on your network.
• Install any tools or security applications from source files if possible. Always verify the digital signature of the files before unpacking or installing a package.
• Study current security risks and the methods to combat the new risks.



• Keep your patches for the operating system and applications up to date. Some may consider this advice too basic, but it bears repeating. Get your software from known sources, not from any old website that happens to have the files, and use signatures to verify the integrity of the files.
• Log anything and everything, and then sort it all out with some of the available tools. Make sure you touch base with your legal beagles to find out the current rules regarding how long to keep log files, acceptable formats, time stamp requirements and so on. Make sure to archive old log files rather than just deleting them; you never know when an old set of files may hold the key to a current attack or threat. Many times an attack is preceded by weeks of probing, and having the log files can be of use in this case.
• Spend the required time to learn how to harden your systems, and then do it. There are many guides available at CERT and SANS, along with tools like Bastille.
• Run penetration tests against your own systems and find the holes before the hackers point them out to you. Learn about tools like Nmap and Nessus and the basics of using a sniffer, and then use them. Spend the time to tune your security perimeter, and always be ready to adjust those parameters if required.



Appendix 1
The purpose of this appendix is to help the folks just getting started in Linux with some common commands in an FAQ format. How can I find out what my IP address is? You can check a few ways but the easiest way is to use the ifconfig command like this:
[root@RedRum log]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:10:B5:8E:71:AD
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::210:b5ff:fe8e:71ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3980 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3088 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:420825 (410.9 Kb)  TX bytes:261924 (255.7 Kb)
          Interrupt:9 Base address:0xb000

lo        Link encap:Local Loopback
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2411 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2411 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:306411 (299.2 Kb)  TX bytes:306411 (299.2 Kb)

We see that the -a gives us all the interfaces. We also get subnet masking information, RX and TX information and any collisions. How can I change my IP address?
ifconfig <interface> <ip address> <netmask> up
# ifconfig eth0 up

How can I see my route tables?
[root@RedRum log]# netstat -r
Kernel IP routing table
Destination   Gateway   Genmask   Flags   MSS Window   irtt Iface
              *                   U       0 0             0 eth0
              *                   U       0 0             0 lo
default       *                   U       0 0             0 eth0

What is an easy way to view a file?
# less /etc/ssh/sshd_config

The less command shows a screen at a time, and you can page a screen or a line at a time using the space bar or the Enter key. There is always cat and vi if you wish. You can also use tail -n <number of lines> to get just the last few lines of a file.
# ifdown eth0 # ifup eth0

How do I find a file on my system?
# whereis <filename> # find / | grep <filename> # find . -name <filename> -print

These next two commands to find a file are tied together and work very fast since locate will use an index:
# updatedb # locate <filename>

How do I install or erase a given application? That depends on whether it is source code or some kind of package like an RPM. To install from source:
# tar -zxvf <filename>.tar.gz

Change to the <filename> directory:
# ./configure <any options> # make # make install

To install from an RPM:
# rpm -i <filename>.rpm
# rpm -i --force <filename>.rpm

To delete a package using RPM:
# rpm -e <package name>


To force a deletion of a corrupted package, or a package that refuses to uninstall due to imagined dependencies, using RPM:
# rpm -e --nodeps <package name>

You can also use yum or apt-get if you have these utilities installed. A good pocket reference of common commands is also available for download.


accept, 29, 42, 52, 53, 54, 59, 60, 71, 72, 199, 200, 226 acid, 133, 153, 156, 157, 158, 159, 160, 161, 172, 173, 220 ack, 16, 17, 62, 163, 226 ack syn, 54 acls, 25, 226 addresses, 3, 4, 6, 8, 9, 10, 11, 18, 24, 56, 57, 95, 192, 206, 226 adodb, 157, 226 affected, 89, 150, 160 ah, 175, 226 aide, 130, 226 alert_syslog, 148, 226 alert_syslog log_auth, 148 alert_syslog log_local, 148 algorithm, 94, 95, 109, 123, 124, 128, 226 alice, 94, 95, 96 altered, 93, 98, 186 apt, 73, 77, 78, 79, 81, 82, 84, 86, 87, 99, 139, 183, 226, 231 apt-get, 73, 77, 78, 86, 225, 226, 229 apt-install, 73, 77, 86 astaro, 34, 44, 45, 46, 47, 226 asymmetrical, 94, 226 atd, 35, 136 authority, 118, 122, 123, 226 authorized_keys, 115, 226 autofs, 35, 226, 228



bastille, 36, 222, 226 bgp, 7, 21, 226 blue, 38, 226, 227 bond, 161, 162, 226 bonding, 161, 162, 171, 226 boolean, 4, 132, 134, 226 bootable, 38, 90, 226 buffer, 141, 226

cacert.pem, 121, 125, 226 cache, 78, 86, 226 certificate, vii, 109, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 129, 184, 226 certificate authority, 119, 121, 181 certificate manager, 126 certified, 119, 123 certified commit, 123 certs, 120, 226 chain, 52, 53, 54, 55, 56, 57, 226 chain input, 60 chain syntax, 53, 226 channel-label, 82, 226 channels, 81, 226 chap, 177, 180, 181, 226 cidr, 6, 7, 8, 226, 228 cipher, 96 cipher des, 98, 138 class, 3, 4, 5, 6, 24, 27, 28, 133, 226 class firewalls, 28 classful, 4, 5, 6 classful subnet, 4, 226 classification, 148, 149, 226 classless, 4, 6, 226 classless subnet, 7 clipper chip, 93 commit, 63, 226 conn, 186, 226 context, 216 converters, ix, 208 country, 119, 120, 121 cpan, 64, 214, 226 cpu, 25, 27, 217 crl, 120, 124, 128, 184, 226 crontab, 205, 206, 212, 226 cups, 135, 196, 226



ca, 34, 118, 119, 121, 122, 127, 188, 226, 229 ca certificate, 119, 120, 184

datagram, 177, 226 debug, 61, 200 decimal, 5, 10 decrypt, 94, 95, 96, 227 defaults, 47, 71, 120, 211, 212, 215, 217, 227



defense, 2, 59, 131 define, 1, 5, 145, 200, 201, 209, 227, 228 des, 94, 95, 96, 109, 174, 182, 188, 220 des dsa, 94 dest, 42, 43 dest net, 42 dgmlen, 163 dist-upgrade, 77, 86, 227 dmesg, 65, 197 dod, 2, 227 dod layer, 3 dollars, 25, 28 door, 24, 25 dsa, 95, 97, 98, 100, 107, 110, 111, 227 duplex, 168, 171, 227

freesco, 20, 21, 227 ftest, 64, 65, 227 ftest.conf, 64, 65, 227 ftester, 63, 64, 65, 227 ftp, 1, 26, 35, 49, 69, 109, 112, 115, 116, 117, 201, 209, 210, 227 ftp tracking, 68



eastern, 191 embedded, 25, 34, 51, 227 encrypts, 175, 182 enterprise, 27, 28, 171 entropy, 99, 101, 227 esp, 175, 176, 182, 187, 227 esp ah, 175 ether, 55, 59 ethereal, 17, 18 evolution, 103, 104 expect_str, 61, 227 exponent, 123, 185, 186, 227 extensions, 122, 123, 214 external_net, 146, 167 extif, 66, 67, 72, 227

gateways, 183, 227, 231 gcc, 134, 140, 227 generates, 156, 164, 201 genrsa, 118, 227 gibraltar, 47, 49, 227 gnupg, 93, 96, 97, 98, 99, 102, 105, 106, 107, 108, 129, 139, 227 gnutella, 60, 61, 227 gpg, 93, 96, 97, 98, 99, 101, 102, 103, 104, 105, 106, 227 gpg list-keys, 106, 227 gpg.conf, 99, 227 gpgme, 99, 105, 227 gpm, 136, 227 grant, 157, 158, 227 green, 38, 227 grub, 13, 227


handshake, 16, 17, 24, 177, 227 hids, 130, 131, 132, 221, 227 home_net, 146, 147, 227, 231

facility, 147, 148, 200, 201, 202, 227 facility priority, 201 fedorac, 119, 121, 122, 199 fedorac syslogd, 199 filters, 25, 44, 45, 52, 227 fin, 15, 17, 26, 227 fingerprint, 101, 108, 110, 111, 113, 227 firestarter, 40, 65, 227 firewalls, 15, 24, 25, 26, 27, 28, 31, 32, 33, 43, 49, 52, 53, 63, 65, 70, 72, 175, 177, 187, 196, 220, 221, 227, 228, 230 flush, 53, 55, 227 frame, 14, 15, 223, 227

id_rsa, 113, 114, 115, 227 ike, 175, 182, 187, 227 inbound, 26, 31, 39, 171, 227 init.d, 152, 198, 207, 227 input, 54, 55, 56, 63, 71, 85, 160, 226, 227 input accept, 63, 71, 227 input chain, 56, 227 inspection, 15, 25, 26, 34, 59, 228 install.log, 112 invalid, 54, 57, 70 ip6tables, 29, 228 ip_conntrack, 57, 68, 228 ip_conntrack_ftp, 68, 228 ipchains, 29, 228





ipcop, 38, 40, 228 ipmasq, 57, 58, 59, 228 ipsec, 10, 49, 88, 118, 174, 175, 176, 177, 182, 183, 184, 186, 187, 220, 228 ipsec esp, 177 ipsec showhostkey, 186 iptable, 42, 60, 61, 68, 228 iptable syntax, 54 iptable_nat, 69, 228 iptables-save, 62, 228 irc, 69 isa, 27, 28, 87, 228 isdn, 23, 35, 228 isdn freesco, 20, 228

logsurfer, 215, 216, 217, 227, 228 logwatch, 211, 212, 213, 228



jitter, 193, 194, 228 jpgraph, 157, 159, 228


k85mysql, 144, 228 k99snort, 153, 228 keyring, 99, 106, 111, 112, 228 keyserver, 94, 99, 228 keysize, 100, 228 kiwi, 198, 228 klips, 182, 183, 228

manager, 73, 74, 75, 76, 77, 79, 84, 105, 127, 135, 139, 171, 183, 214, 226, 228, 230 mangle, 52, 54, 228 mask, 3, 4, 5, 6, 7, 14, 164, 192, 195, 223, 228, 230 mask cidr, 230 masq, 67, 68, 69, 71, 228 masquerade, 57, 59, 66, 72, 228, 230 masquerading, 25, 66, 71, 228 mit, 102 mod_ssl, 128, 129, 228 modulus, 118, 123, 185 mrproper, 12, 13, 228 mtu, 10, 223, 228 multicast, 9, 11 multicast mtu, 14, 223 mycert.p, 126, 127, 228 mysql, 133, 134, 135, 139, 140, 141, 143, 144, 145, 149, 150, 151, 155, 157, 158, 160, 172, 173, 209, 210, 220 mysql flush privileges, 150 mysql grant, 150, 158 myvpnserverkey.pem, 188, 228


l2tp, 176, 177, 187, 228 lamp, 133, 228 layer, 1, 2, 3, 10, 18, 34, 128, 130, 168, 228 layers, v, 1, 2, 17, 174 ldconfig, 141, 143, 153, 228 libpcap, 134, 144, 145, 228 libpng, 154, 155, 228 lids, 91, 228 linksys, 26, 28, 29, 228 list-keys, 106, 228 locality, 119, 120, 121, 228 locate, 171, 224 log_auth, 228 log_auth define, 200 logger, 148, 199, 228 loginlog, 203 logrotate, 204, 205, 206, 228 logrotate.conf, 204, 205, 206, 228

nagcmd, 217, 228 nagios, 217, 218, 228 nat-t, 181, 183, 186, 229 net-net, 226 netfilter, 34, 37, 52, 53, 72, 229 netfilter shorewall, 41, 229 netpacket, 63, 64, 229 newcert.pem, 124, 126, 229 newreq.pem, 121, 122, 124, 126, 229 nics, 44, 133 nocase, 165, 166, 167, 229 node, 10, 171 nomodify notrap, 192, 195 nsa, 90, 91, 95 ntp, ix, 190, 191, 192, 193, 194, 195, 196, 227, 229 ntp peers, 195, 229 ntp.conf, 192, 193, 195, 229

ntpd, 23, 137, 192, 193, 195, 197, 229 ntpdate, 193, 229 ntpdc, 193, 194, 195, 229 ntpdc peers, 195 ntpdc sysinfo, 193, 229 ntpdc sysstats, 194, 229 ntpq, 193, 229


oct, 148, 149 octet, 4, 5, 6, 10, 229 octets, 5, 10, 56, 229 oinkmaster, 166, 167, 229 one-to-one, 57 openbsd, 21, 29, 109, 138, 229 openssh, 109, 229 openssl, 118, 121, 122, 123, 124, 125, 129, 160, 188, 229 openssl ca, 124, 229 openssl genrsa, 118, 229 openssl pkcs, 126, 229 openssl.cnf, 120, 229 openswan, 174, 181, 182, 183, 184, 186, 187, 221, 229 openswan vpn, 229 openswan vpn gateways, 184, 229 orange, 38, 122, 229 osi, 1, 2, 3, 18, 34, 229 ou, 33, 34, 229

ppp tunnel, 177, 229 pptp, 177, 178, 179, 180, 181, 189, 221, 229 pptp tunnel, 180, 229 pptp tunnels, 180, 229 prelude, 130, 229 prime, 101, 184, 185 priority, 16, 148, 149, 200, 201, 202, 228, 229 priority facility, 229 privacy, 93, 174, 229 privileges, 157, 158, 202 proactive, 132, 220, 221 profile, 209, 210, 229 proxy, 25, 26, 28, 31, 40, 44, 47, 49, 149, 229 proxying, 26, 229 pubkey, 98, 113, 115, 226, 229



pat, 8, 25, 229 patch-o-matic, v, 29, 30, 31, 229 patching, 30, 90, 91, 178 pcre, 144, 145, 166, 229 peers, 192, 195, 229 pem, 120, 121, 125, 227, 229 pfxfile, 126, 229 pgp, 93, 96, 97, 102, 103, 104, 107, 129, 229 pkcs, 125, 126, 229 pkcs certificate, 127, 229 pkglist, 77, 229 pki, 95, 96, 229 pluto, 184, 188, 229 portable, 109 portmap, 23, 135, 136, 229 postrouting, 54, 59, 72, 229 ppp, 42, 56, 71, 176, 177, 178, 229

rawdevices, 35, 136, 229 rc.firewall, 65, 72, 229 rcd, 196, 197, 229 rddump, 62, 229 recovery, 90, 101, 143, 229 referred, 1, 3, 8 repositories, 77, 84, 85, 87, 88, 229 represent, 3, 10 rescue, 89, 90, 166, 229 reset, 17, 26, 194, 197, 229 resolution, 18 resolve, 74, 198, 229 resolvers, 74 restarted, 199, 200, 218 revocation, 106, 124, 184, 230 revocation certificate, 106, 107 revoke, 124, 128, 230 rhn, 82, 83, 230 rhnsd, 136, 227, 229, 230 rope, 60, 61, 62, 230 rotate, 204, 205, 230 rotation, 204, 230 route, 18, 27, 32, 223, 230 router, 8, 18, 20, 21, 22, 26, 27, 32, 131, 171, 174, 230 router firewalls, 27 routing, 7, 9, 10, 18, 20, 22, 25, 26, 27, 32, 180, 223, 230 routing mask cidr, 6 rtt, 230



rtt min/avg/max/mdev, 14, 55, 56 ruleset, 66, 162, 164, 230 rx-only, 168


s85mysql, 144, 230 s99snort, 152, 153, 230 sawmill, 133, 209, 210, 230 schedule, 204, 230 scp, 112, 114, 131, 230 scripting, 60, 154, 230 seahorse, 104, 105, 106, 230 selinux, 90, 91, 230 sequence, 80, 101, 111, 230 sftp, 112, 116, 131, 230 sha, 96, 97, 98, 230 shorewall, 41, 42, 43, 201, 227, 230 simpleca, 127, 230 site-to-site, 186, 230 site-to-site vpn, 183, 184 smoothwall, 34, 38, 46, 47, 230 snapshot, 90, 230 snat, 25, 71, 230 snat masquerade, 72 snort.conf, 146, 148, 150, 167, 230 snortdb, 150, 151, 158, 159, 230 snortgroup, 145 snortuser, 148, 150, 152 soho, 29, 57, 230 soho firewalls, 28 solaris, 207, 215, 230 sophisticated, 95, 220, 230 sql, 33, 143, 146, 147, 151, 153, 156, 173, 192 squid, 136, 204, 205, 206, 230 ssh, 18, 28, 38, 43, 62, 95, 109, 111, 112, 113, 114, 115, 116, 117, 138, 139, 171, 182, 196, 203, 207, 208, 220, 221, 230 ssl, 44, 98, 118, 128, 160, 161, 173, 182, 230 stable, 28, 85, 88, 93 stateless, 52, 56, 230 static, 18, 25, 112, 230 stratum, 191, 192, 194, 230 strongswan, 181, 182, 230 subdirectory, 158, 209, 215 subnet, 4, 6, 7, 25, 57, 195, 226, 230 subnet mask, 2, 3, 4, 5, 6, 71 subnet masking, 7, 223

subnetting, 6, 7, 230 successfully, 117, 159, 160 supported, 25, 34, 98 swatch, 213, 214, 215, 216, 230 symmetrical, 94, 226, 230 syn, 15, 16, 17, 26, 54, 59, 226, 230 synaptic, 78, 79, 86, 135, 230 synchronization, 192, 193, 230 syntax, 53, 112, 125, 228 sysinfo, 193, 230 syslog-ng, 206, 207, 230 syslog.conf, 148, 198, 201, 202, 207, 230 syslogd, 198, 199, 200, 206, 207, 230 sysstats, 194, 230


tail, 197, 202, 203, 224, 230 tap, 168, 171, 230 tracking, 68 transmitted, 14, 55, 56 tripwire, 130, 230 ttl, 14, 55, 64, 163, 230 tunnel, 115, 174, 175, 176, 183, 187, 208, 230 tunnels, 181


umask, 76, 230 uname, 88, 89, 230 unset, 76, 230 up2date, 73, 74, 81, 82, 83, 84, 89, 230 upgrades, 22, 28, 90, 230 userland, 61, 62, 230 ussr, 191 utc, 190, 191, 230


var/log, 196, 211 var/log/kernel, 203 var/log/loginlog, 203 var/log/messages, 197, 202, 203 var/log/syslog, 203 vlsm, v, 7, 231 vpn, 27, 44, 47, 116, 117, 174, 175, 177, 179, 182, 183, 184, 187, 188, 189, 221, 230, 231 vpn gateways, 182 vpn tunnel, 179, 186, 231




webmin, 23, 44, 171, 172 wget, 29, 30, 64 xinetd, 23, 137, 231

yast, 74, 79, 231 yellow, 34, 73, 84, 231 yellowdog, 73, 231 yellowdog updater, 73 yum, 73, 74, 81, 82, 83, 84, 85, 86, 99, 202, 225, 231 yum yellow, 84, 231


zebra, 21, 22, 231 zlib, 98, 154, 155, 231 zones, 43, 190, 191, 231