
Deploying Clustered Samba

on Red Hat Enterprise Linux 6


Samba/CTDB and Active Directory Integration

Mark Heslin
Principal Software Engineer

Version 1.0
October 2012
1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

Linux is a registered trademark of Linus Torvalds. Red Hat, Red Hat Enterprise Linux and the Red Hat
"Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other
countries.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
UNIX is a registered trademark of The Open Group.
Intel, the Intel logo and Xeon are registered trademarks of Intel Corporation or its subsidiaries in the
United States and other countries.
All other trademarks referenced herein are the property of their respective owners.

© 2012 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set
forth in the Open Publication License, V1.0 or later (the latest version is presently available at
http://www.opencontent.org/openpub/).

The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable
for technical or editorial errors or omissions contained herein.

Distribution of modified versions of this document is prohibited without the explicit permission of Red
Hat Inc.

Distribution of this work or derivative of this work in any standard (paper) book form for commercial
purposes is prohibited unless prior permission is obtained from Red Hat Inc.

The GPG fingerprint of the security@redhat.com key is:


CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E

Send feedback to refarch-feedback@redhat.com

Table of Contents
1 Executive Summary..........................................................................................1

2 Component Overview........................................................................................2
2.1 Red Hat Enterprise Linux 6................................................................................................2
2.2 High Availability Add-On.....................................................................................................3
2.2.1 Quorum.........................................................................................................................3
2.2.2 Resource Group Manager............................................................................................3
2.2.3 Fencing.........................................................................................................................4
2.2.3.1 IPMI.........................................................................................................................4
2.2.4 CMAN............................................................................................................................4
2.2.5 Conga............................................................................................................................5
2.2.5.1 Luci..........................................................................................................................5
2.2.5.2 Ricci.........................................................................................................................5
2.2.6 CCS...............................................................................................................................6
2.3 Resilient Storage Add-On...................................................................................................7
2.3.1 GFS2.............................................................................................................................7
2.3.2 Cluster Logical Volume Manager (CLVM)....................................................................7
2.3.3 CTDB (Clustered Samba).............................................................................................8
2.3.3.1 Lock Volume............................................................................................................8
2.3.3.2 Data Volume............................................................................................................8
2.4 DM Multipath......................................................................................................................8
2.5 Samba................................................................................................................................9
2.6 SMB/CIFS...........................................................................................................................9
2.7 Winbind...............................................................................................................................9
2.8 Solution Stack...................................................................................................................11
2.8.1 Products and Components.........................................................................................11
2.8.2 Package Details..........................................................................................................12
2.9 Component Log Files.......................................................................................................14

3 Reference Architecture Configuration.............................................................15


3.1 Cluster Server - Node 1....................................................................................................16
3.2 Cluster Server - Node 2....................................................................................................16
3.3 Cluster Server - Node 3....................................................................................................17
3.4 Windows Server 2008 R2.................................................................................................17



3.5 Fibre Channel Storage Array............................................................................................18

4 Clustered Samba Deployment........................................................................19


4.1 Deployment Task Flow.....................................................................................................19
4.2 Deploy Cluster Nodes......................................................................................................20
4.2.1 Install Red Hat Enterprise Linux 6..............................................................................20
4.2.2 Configure Networks and Bonding...............................................................................22
4.2.3 Configure Firewall.......................................................................................................25
4.2.4 Configure Time Service (NTP)....................................................................................28
4.2.5 Configure Domain Name System (DNS)....................................................................28
4.2.6 Install Cluster Node Software.....................................................................................29
4.3 Configure Cluster..............................................................................................................29
4.3.1 Create Cluster.............................................................................................................29
4.3.2 Add Nodes..................................................................................................................30
4.3.3 Add Fence Devices.....................................................................................................30
4.3.4 Activate Cluster...........................................................................................................31
4.4 Configure Storage............................................................................................................32
4.4.1 Configure Multipathing................................................................................................32
4.4.2 Create Cluster Logical Volumes.................................................................................34
4.4.3 Create GFS2 Filesystems...........................................................................................37
4.4.4 Configure SELinux Security Parameters....................................................................38
4.5 Configure CTDB...............................................................................................................39
4.6 Configure Samba..............................................................................................................41
4.7 Start clvmd........................................................................................................................42
4.8 Mount GFS2 Volumes......................................................................................................42
4.9 Start CTDB/Samba...........................................................................................................43
4.10 Verify File Share.............................................................................................................43

5 Clustered Samba Management.......................................................................46


5.1 Starting, Shutting down, Restarting Cluster Nodes..........................................................46
5.2 Adding Clustered Samba Nodes......................................................................................48
5.3 Removing Clustered Samba Nodes.................................................................................51
5.3.1 On-line Node Removal (Method 1).............................................................................51
5.3.2 Off-line Node Removal (Method 2):............................................................................54
5.4 Adding File Shares...........................................................................................................58
5.5 Removing File Shares......................................................................................................62

6 Windows Active Directory Integration..............................................................66
6.1 Overview...........................................................................................................................66
6.1.1 Configuration Summary..............................................................................................66
6.1.2 Cluster Configuration with Active Directory Integration..............................................67
6.1.3 Authentication and ID Components............................................................................68
6.2 Integration Tasks..............................................................................................................69
6.2.1 Synchronize Time Service..........................................................................................69
6.2.2 Configure DNS............................................................................................................70
6.2.3 Update Hosts File.......................................................................................................70
6.2.4 Install/Configure Kerberos Client................................................................................71
6.2.5 Install oddjob-mkhomedir............................................................................................72
6.2.6 Configure Authentication............................................................................................73
6.2.7 Verify/Test Active Directory........................................................................................78
6.2.8 Modify Samba Configuration......................................................................................79
6.2.9 Verification of Services...............................................................................................83
6.2.10 Configure CTDB Winbind Management (optional)...................................................86

7 Conclusion.......................................................................................................87

Appendix A: References....................................................................................88

Appendix B: Fibre Channel Storage Provisioning.............................................90

Appendix C: Cluster Configuration File (cluster.conf)........................................95

Appendix D: CTDB Configuration Files.............................................................96

Appendix E: Samba Configuration File (smb.conf)............................................97

Appendix F: Cluster Configuration Matrix..........................................................99

Appendix G: Adding/Removing HA Nodes......................................................100

Appendix H: Deployment Checklists...............................................................102

Acknowledgements.........................................................................................103

1 Executive Summary
This reference architecture details the deployment, configuration and management of highly
available file shares using clustered Samba on Red Hat Enterprise Linux 6. The most
common administration tasks are included - starting/stopping nodes, adding/removing nodes
and file shares. For environments interested in integrating clustered Samba into Windows
Active Directory domains, a separate section is provided. Active Directory integration permits
clients to access Samba cluster file shares using existing Active Directory user accounts and
authentication methods.
Clustered Samba extends the benefits of Samba file sharing by providing clients with
concurrent access to highly available file shares. In the event of a cluster node fault or
failure, client sessions through the remaining nodes maintain access to the highly available
file shares. Client sessions through a faulty node are not maintained and require a reconnect
due to client protocol limitations.
Clustered Samba enhances Samba functionality through the use of two of Red Hat's premier
Add-On products:
• High Availability (HA) Add-On
• Resilient Storage (RS) Add-On
The High Availability Add-On provides reliability, availability and scalability (RAS) to critical
production services by eliminating single points of failure, and providing automatic failover of
those services in the event of a cluster node failure or error condition. The Resilient Storage
Add-On extends these capabilities by providing a cluster logical volume manager (CLVM), a
cluster file system (GFS2) and a cluster implementation of the Samba TDB database (CTDB).
In combination, the High Availability Add-On and Resilient Storage Add-On provide the
underlying framework for configuring clustered Samba and deploying highly-available file
shares.
A three-node cluster is deployed to provide simultaneous (active-active), read-write client
access to file shares. A maximum of four nodes is supported on Red Hat Enterprise Linux 6.
I/O performance is increased and scales out linearly as the number of clustered Samba
nodes is expanded.
The underlying storage for file share data and cluster recovery utilizes clustered LVM (CLVM)
volumes. The CLVM volumes within this reference architecture are created on Fibre Channel
based storage, but other shared storage (e.g. - iSCSI) may be used.
Additional redundancy and performance increases are achieved through the use of separate
public and private (cluster interconnect) networks. Multiple network adapters are used on
these networks with all interfaces bonded together. Similarly, device mapper multipathing is
used to maximize performance and availability to all CLVM volumes.
This document does not require extensive Red Hat Enterprise Linux experience but the
reader is expected to have a working knowledge of Linux administration, clustering, Samba
and client side file sharing concepts.

2 Component Overview
This section provides an overview on the Red Hat Enterprise Linux operating system, Red
Hat's High Availability Add-On and the other components used in this reference architecture.

2.1 Red Hat Enterprise Linux 6


Red Hat Enterprise Linux 6.3, the latest release of Red Hat's trusted datacenter platform,
delivers advances in application performance, scalability, and security. With Red Hat
Enterprise Linux 6.3, physical, virtual, and cloud computing resources can be deployed within
the data center. Red Hat Enterprise Linux 6.3 provides the following features and capabilities:
Reliability, Availability, and Security (RAS):
• More sockets, more cores, more threads, and more memory
• RAS hardware-based hot add of CPUs and memory is enabled
• Memory pages with errors can be declared as “poisoned” and can be avoided
File Systems:
• ext4 is the default filesystem and scales to 16TB
• XFS is available as an add-on and can scale to 100TB
• FUSE allows file systems to run in user space, enabling testing and development of
newer FUSE-based file systems (such as cloud file systems)
High Availability:
• Extends the current clustering solution to the virtual environment allowing for high
availability of virtual machines and applications running inside those virtual machines
• Enables NFSv4 resource agent monitoring
• Introduction of CCS. CCS is a command line tool that allows for complete CLI
administration of Red Hat's High Availability Add-On
Resource Management:
• cgroups organize system tasks so that they can be tracked and so that other system
services can control the resources that cgroup tasks may consume
• cpuset applies CPU resource limits to cgroups, allowing processing performance to be
allocated to tasks
There are many other feature enhancements to Red Hat Enterprise Linux 6. Please see the
Red Hat website for more information.

2.2 High Availability Add-On
The High Availability Add-On for Red Hat Enterprise Linux provides high availability of
services by eliminating single points of failure. By offering failover services between nodes
within a cluster, the High Availability Add-On supports high availability for up to 16 nodes.
(Currently this capability is limited to a single LAN or datacenter located within one physical
site.)
The High Availability Add-On also enables failover for off-the-shelf applications such as
Apache, MySQL, PostgreSQL and Samba, any of which can be coupled with resources like
IP addresses and single-node file systems to form highly available services. The High
Availability Add-On can also be easily extended to any user-specified application that is
controlled by an init script per UNIX System V (SysV) standards.
When using the High Availability Add-On, a highly available service can fail over from one
node to another with no apparent interruption to cluster clients. The High Availability Add-On
ensures data integrity when one cluster node takes over control of a service from another
cluster node. It achieves this by promptly evicting nodes from the cluster that are deemed to
be faulty using a method called "fencing", thus preventing data corruption. The High
Availability Add-On supports several types of fencing, including both power and storage area
network (SAN) based fencing.
The following sections describe the various components of the High Availability Add-On in the
context of this reference architecture and the deployment of clustered Samba.

2.2.1 Quorum
Quorum is a voting algorithm used by the cluster manager (CMAN). To maintain quorum, the
nodes in the cluster must agree about their status among themselves. Quorum determines
which nodes in the cluster are dominant. For example, if there are three nodes in a cluster
and one node loses connectivity, the other two nodes communicate with each other and
determine that the third node needs to be fenced. The action of fencing ensures that the node
which lost connectivity does not corrupt data.
By default each node in the cluster has one quorum vote, although this is configurable. Nodes
can establish quorum in one of two ways. The first method, quorum via network, consists of a
simple majority (50% of the nodes + 1). The second method adds a quorum disk, which allows
user-specified conditions to help determine which node(s) should be dominant.
This reference architecture uses network quorum - a dedicated quorum disk is not required.
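Once cman is running, the quorum state can be inspected from any cluster node. As a quick
check (not part of the formal deployment steps):

# cman_tool status    # reports node votes, expected votes and the current quorum

The "Quorum" line in the output shows the number of votes required for the cluster to
remain quorate.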

2.2.2 Resource Group Manager
Resource group manager (rgmanager) provides failover capabilities for collections of cluster
resources known as resource groups or resource trees. Rgmanager allows system
administrators to define, configure, and monitor cluster services such as httpd or mysql.
In the event of a node failure, rgmanager relocates the clustered service to another node to
restore service availability. Services can also be restricted to run on specific cluster nodes.

In the context of this reference architecture, rgmanager is not used as it does not provide
support for clustered Samba file sharing services.

2.2.3 Fencing
Fencing is the disconnection of a node from the cluster's shared storage. Fencing prevents
the affected node from issuing I/O to shared storage, thus ensuring data integrity. The cluster
infrastructure performs fencing through fenced, the fence daemon.
When CMAN determines that a node has failed, it communicates with other cluster-
infrastructure components to inform them that the node has failed. The failed node is fenced
when fenced is notified. Other cluster-infrastructure components determine what actions to
take - that is, they perform any recovery that needs to be done. For example, distributed lock
manager (DLM) and Global File System version 2 (GFS2), when notified of a node failure,
suspend activity until they detect that fenced has completed fencing the failed node. Upon
confirmation that the failed node is fenced, DLM and GFS2 perform recovery. DLM releases
locks of the failed node; GFS2 recovers the journal of the failed node.
The fencing program (fenced) determines from the cluster configuration file which fencing
method to use. Two key elements in the cluster configuration file define a fencing method:
fencing agent and fencing device. The fencing program makes a call to a fencing agent
specified in the cluster configuration file. The fencing agent, in turn, fences the node via a
fencing device. When fencing is complete, the fencing program notifies the cluster manager.
The High Availability Add-On provides a variety of fencing methods:

• Power fencing - A fencing method that uses a power controller to power off an
inoperable node
• Storage fencing - Includes fencing methods that disable the Fibre Channel port that
connects storage to an inoperable node. SCSI-3 persistent reservations are another
commonly used storage fencing method in which access to a common shared storage
device can be revoked to an inoperable node.
• Systems management fencing - Fencing methods that disable I/O or power to an
inoperable node. Examples include IBM® BladeCenter, Dell® DRAC/MC, HP® ILO,
IPMI, and IBM RSA II.

2.2.3.1 IPMI
The Intelligent Platform Management Interface (IPMI) is a standardized computer interface
that allows administrators to remotely manage a system. Centered around a baseboard
management controller (BMC), IPMI supports functions to access the system BIOS, display
event logs, power on, power off and power cycle a system.
This reference architecture uses IPMI to fence faulty cluster nodes across the public network
through the fence_ipmilan agent.
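Before depending on IPMI fencing, connectivity to each node's BMC can be verified manually
with the fence agent. A minimal check, where the address and credentials are placeholders
for a node's actual BMC settings:

# fence_ipmilan -a 10.16.143.101 -l admin -p password -o status    # query power status via the BMC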

2.2.4 CMAN
CMAN manages cluster membership, fencing, locking and quorum. CMAN runs as a service
on all cluster nodes and simplifies the management of the following HA cluster daemons:
• corosync (manages cluster membership, messaging, quorum)
• fenced (manages cluster node I/O fencing)
• dlm_controld (manages distributed file locking to shared file systems)
• gfs_controld (manages GFS2 file system mounting and recovery)
From a systems management perspective, CMAN is the first service in the component stack
started when bringing up a clustered Samba node.
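As an illustration, once the cluster has been configured (Section 4.3), membership can be
started and verified on a node as follows:

# service cman start    # starts corosync, fenced, dlm_controld and gfs_controld
# cman_tool nodes       # lists each cluster member and its join status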

2.2.5 Conga
Conga is an agent/server architecture for the remote administration of cluster nodes. The
agent component is called ricci and the server component is called luci. One luci
management server can communicate with ricci agents installed on multiple cluster nodes.
When a system is added to a luci management server, authentication is only done the first
time. No authentication is necessary afterwards. The luci management interface allows
administrators to configure and manage cluster nodes. Communication between luci and
ricci is done via XML over SSL.

2.2.5.1 Luci
Luci provides a web-based graphical user interface that helps visually administer the nodes
in a cluster, manage fence devices, failover domains, resources, clustered services and other
cluster attributes.
In the context of clustered Samba, luci is not used.

2.2.5.2 Ricci
Ricci is the cluster management and configuration daemon that runs on the cluster nodes.
When ricci is installed, it creates a user account called ricci for which a password must
be set. All ricci accounts must be configured with the same password across all cluster
nodes to allow authentication with the luci management server. The ricci daemon
requires port 11111 to be open for both tcp and udp traffic.
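A minimal sketch of preparing ricci on a cluster node (set the same password on every node):

# passwd ricci          # set the shared ricci account password
# chkconfig ricci on    # enable ricci on system boot
# service ricci start   # start the ricci daemon now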

2.2.6 CCS
The Cluster Configuration System (CCS) was introduced in Red Hat Enterprise Linux 6.1.
CCS provides a powerful way of managing a Red Hat Enterprise Linux cluster from the
command line. CCS allows an administrator to create, modify and view cluster configurations
from a remote node through ricci or on a local file system. CCS has a robust man page
detailing all of the options but the ones most commonly used are described in Table 2.2.6:
Common CCS Switches:
Switches                   Function
--host                     Remote node to run ccs action on
--createcluster            Create a new cluster configuration
--addnode                  Add a node to the cluster
--addmethod                Add a fence method to the cluster
--addfencedev              Add a fence device to the cluster
--addfenceinst             Add a fence instance to the cluster
--addfailoverdomain        Add a failover domain to the cluster
--addfailoverdomainnode    Add a failover domain node to the cluster
--addresource              Add a resource to the cluster
--addservice               Add a service to the cluster
--addsubservice            Add a subservice to the cluster
--sync --activate          Synchronize and activate the cluster
                           configuration file across all nodes
--checkconf                Verify all nodes have the same cluster.conf
--startall                 Start cluster services on all nodes
--stopall                  Stop cluster services on all nodes

Table 2.2.6: Common CCS Switches


Cluster configuration details are stored as XML entries in the file /etc/cluster/cluster.conf. To
avoid configuration errors, changes to this file should always be done through the ccs utility.
CCS authenticates with ricci agents by using automatically generated certificate files in
~/.ccs/. These files allow CCS to communicate with ricci securely over Secure Socket
Layer (SSL) encrypted links. To communicate with ricci, the administrator must know the
password for the ricci agent on each cluster node, as CCS prompts for it at the first
connection.
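For example, a new configuration could be created and verified from one node as follows;
smb-cluster is a placeholder name, while the node names match this reference architecture
(Section 4.3 walks through the actual procedure):

# ccs --host smb-srv1 --createcluster smb-cluster    # create a new cluster.conf
# ccs --host smb-srv1 --addnode smb-srv1-ci          # add a node by its interconnect name
# ccs --host smb-srv1 --checkconf                    # confirm all nodes share the same cluster.conf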

2.3 Resilient Storage Add-On
The Resilient Storage Add-On for Red Hat Enterprise Linux provides numerous file system
capabilities for improving resiliency to system failure. The following components are included
with the Resilient Storage Add-On:
• Global File System 2 (GFS2)
• Cluster Logical Volume Manager (CLVM)
• Clustered Samba (CTDB)
The following sections describe each component in further detail.

2.3.1 GFS2
GFS2 is a shared disk clustered file system in which data is shared across all cluster nodes
with concurrent access to the file system name space. Processes on different cluster nodes
work with GFS2 files in the same way that processes on a single node access files on a local
file system.

This reference architecture uses the GFS2 file system on all CLVM volumes.

2.3.2 Cluster Logical Volume Manager (CLVM)
Volume managers create a layer of abstraction between physical storage and applications
and services running on host operating systems. Volume managers present logical volumes
that can be flexibly managed with little to no impact on the applications or services accessing
them. Logical volumes can be increased in size or the underlying storage relocated to another
physical device without the need to unmount the file system.
The architecture of LVM consists of three components:
• Physical Volume (PV)
• Volume Group (VG)
• Logical Volume (LV)
Physical volumes (PV) are the underlying physical storage – i.e. a block device such as a
whole disk or partition. A volume group (VG) is the combination of one or more physical
volumes. Once a volume group has been created, logical volumes (LV) can be created from it
with each logical volume formatted and mounted similar to a physical disk.
The Cluster Logical Volume Manager (CLVM) extends logical volumes (LV) by making them
accessible to, and shared among, all cluster nodes. Cluster logical volumes must be
formatted with a cluster file system such as GFS2.

The CLVM volumes within this reference architecture consist of one physical volume (PV) that
is a member of a volume group (VG) from which a single logical volume (LV) is created.
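As a sketch, creating such a volume from one node could look as follows; the multipath
device and volume names are placeholders, and clvmd must already be running on all nodes
(Section 4.4.2 details the actual volumes):

# pvcreate /dev/mapper/mpathb                          # initialize the physical volume
# vgcreate --clustered y vg_data /dev/mapper/mpathb    # create a cluster-aware volume group
# lvcreate --extents 100%FREE --name lv_data vg_data   # carve out the logical volume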

2.3.3 CTDB (Clustered Samba)
CTDB is the cluster implementation of the TDB database used by Samba. To use CTDB, a
clustered file system must be available and mounted on all nodes in the cluster. Under Red
Hat Enterprise Linux 6, the cluster file system is GFS2 (included in the Resilient Storage Add-
On).
CTDB extends state information and inter-process communications across clustered Samba
nodes in order to maintain consistent data and locking. CTDB also provides HA features such
as node monitoring, node failover and IP takeover (IPAT) in the event of a cluster node fault
or failure. When a node in a cluster fails, CTDB will relocate the IP address of the failed node
to a different node to ensure that the IP addresses for the Samba file sharing services are
highly available.
As of Red Hat Enterprise Linux 6.2, CTDB runs as a cluster stack in conjunction with the Red
Hat Enterprise Linux 6 High Availability Add-On clustering. From a cluster management
perspective, this is important and impacts the sequence for starting and stopping of services.
Section 5.1 Starting, Shutting down, Restarting Cluster Nodes discusses this in further detail.
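Per-node CTDB behavior is controlled through /etc/sysconfig/ctdb. A minimal sketch of the
key settings - the lock file path assumes the GFS2 lock volume is mounted at /mnt/ctdb
(Section 4.5 covers the full configuration):

CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"           # recovery lock file on the shared lock volume
CTDB_NODES=/etc/ctdb/nodes                          # list of cluster interconnect IP addresses
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses    # public IPs eligible for takeover (IPAT)
CTDB_MANAGES_SAMBA=yes                              # let CTDB start, stop and monitor smbd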

2.3.3.1 Lock Volume


The CTDB lock volume is a small, dedicated CLVM volume used by the CTDB daemons
(ctdbd) on all nodes when arbitrating changes in cluster membership. The volume contains
a recovery lock file that specifies which node is acting as the recovery master. The lock
volume is formatted with GFS2 and must be mounted by all nodes in the CTDB cluster.
The recommended size for the lock volume is 1 GB.
For this reference architecture, a 2 GB Fibre Channel volume was provisioned. After creating
the logical volume and formatting it with GFS2, the final volume size was 1.8 GB.
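As an illustration, a lock volume could be formatted as follows; the cluster name
(smb-cluster) and volume names are placeholders, and -j creates one journal per cluster
node (Section 4.4.3 covers the actual GFS2 file system creation):

# mkfs.gfs2 -p lock_dlm -t smb-cluster:ctdb_lock -j 3 /dev/vg_lock/lv_lock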

2.3.3.2 Data Volume


The Data Volume is a dedicated CLVM volume used to hold the contents of the highly
available file share. The data volume is formatted with GFS2 and must be mounted by all
nodes in the CTDB cluster servicing the file share. The size for the data volume is
implementation dependent.
For this reference architecture, a 200 GB Fibre Channel volume was provisioned. After
creating the logical volume and formatting it with GFS2, the final volume size was 180 GB.

2.4 DM Multipath
Device mapper multipathing (DM Multipath) allows multiple I/O paths to be configured
between a server and the connection paths to SAN storage array volumes. The paths are
aggregated and presented to the server as a single device to maximize performance and
provide high availability. A daemon (multipathd) handles checking for path failures and
status changes.
This reference architecture uses DM Multipath on all CLVM volumes.
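A minimal way to enable multipathing and inspect the aggregated paths on a node (actual
device names vary by storage configuration; Section 4.4.1 covers the full setup):

# mpathconf --enable --with_multipathd y    # generate /etc/multipath.conf and start multipathd
# multipath -ll                             # show each multipath device and its active paths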

2.5 Samba
Samba is an open source suite of programs that can be installed on Red Hat Enterprise Linux
6 systems to provide file and print services to Microsoft Windows clients.
Samba provides two daemons that run on a Red Hat Enterprise Linux 6 system:
• smbd - primary daemon providing file and print services to clients via SMB
• nmbd – NBT (NetBIOS over TCP) name server
When combined with the reliability and simplified management capabilities of Red Hat
Enterprise Linux 6, Samba is the application of choice for providing file and print sharing to
Windows clients. Samba version 3.5 is used in the Samba based configurations detailed
within this reference architecture.
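In the clustered configuration described later (Section 4.6), Samba is pointed at CTDB
through its global settings. A minimal sketch of the relevant /etc/samba/smb.conf entries,
shown here for orientation only:

[global]
        clustering = yes        # use the CTDB-backed TDB databases
        idmap backend = tdb2    # clustered idmap backend for Samba 3.5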

2.6 SMB/CIFS
Server Message Block (SMB), sometimes referred to as the Common Internet File System
(CIFS), is a network protocol developed to facilitate client to server communications for file
and print services. The SMB protocol was originally developed by IBM and later extended by
Microsoft.
Samba supports the SMB protocol (SMB1) as used in all Windows systems from legacy
Windows 2000 through current implementations.

2.7 Winbind
Winbind is a component of the Samba suite of programs that allows for unified user logon.
Winbind uses an implementation of Microsoft RPC (Remote Procedure Call), PAM (Pluggable
Authentication Modules), and Red Hat Enterprise Linux 6 nsswitch (Name Service Switch) to
allow Windows Active Directory Domain Services users to appear and operate as local users
on a Red Hat Enterprise Linux machine. Winbind minimizes the need for system
administrators to manage separate user accounts on both the Red Hat Enterprise Linux 6 and
Windows Server 2008 R2 environments. Winbind provides three separate functions:
• Authentication of user credentials (via PAM). This makes it possible to log onto a Red
Hat Enterprise Linux 6 system using Active Directory user accounts. Authentication is
responsible for identifying “Who” a user claims to be.
• ID Tracking/Name Resolution via nsswitch (NSS). The nsswitch service allows user
and system information to be obtained from different database services such as LDAP
or NIS. ID Tracking/Name Resolution is responsible for determining “Where” user
identities are found.
• ID Mapping represents the mapping between Red Hat Enterprise Linux 6 user (UID),
group (GID), and Windows Server 2008 R2 security (SID) IDs. ID Mappings are
handled through an idmap “backend” that is responsible for tracking “What” IDs users
are known by in both operating system environments.
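Once winbind is configured and the system is joined to the domain (Section 6), all three
functions can be exercised from the command line; EXAMPLE\user is a placeholder domain
account:

# wbinfo -t                       # verify the trust secret with the domain controller
# wbinfo -u                       # list domain users known to winbind
# getent passwd 'EXAMPLE\user'    # resolve a domain user through nsswitch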

Figure 2.7: Winbind Authentication, ID Components and Backends represents the
relationship between Winbind and Active Directory:

Figure 2.7: Winbind Authentication, ID Components and Backends

Winbind idmap “backends” are one of the most commonly misunderstood components in
Samba. Since Winbind provides a number of different “backends” and each manages ID
Mappings differently, it is useful to classify them as follows:
• Allocating - “Read-Writeable” backends that store ID Mappings in a local database
file on the Red Hat Enterprise Linux 6 system(s).
• Algorithmic - “Read-Only” backends that calculate ID Mappings on demand and
provide consistent ID Mappings across each Red Hat Enterprise Linux 6 system.
• Assigned - “Read-Only” backends that use ID Mappings pre-configured within Active
Directory.

Selecting a Winbind “backend” is also dependent on factors such as:


• Whether or not Active Directory schema modifications are permitted
• Preferred location of ID Mappings
• Number of Red Hat Enterprise Linux 6 systems
• Number of nodes in the Active Directory forest
• Use of LDAP

Understanding Winbind backends is essential when integrating Samba-based configurations
into Windows Active Directory. Section 6 Windows Active Directory Integration provides
the details for integrating Samba cluster nodes into a Windows Active Directory environment.

2.8 Solution Stack
The full set of products, components and packages that comprise the clustered Samba
solution stack are outlined in the next two sections.

2.8.1 Products and Components
Figure 2.8.1: Solution Stack - Products, Components, Daemons provides a summary of
the product and components that comprise clustered Samba on Red Hat Enterprise Linux 6:

Figure 2.8.1: Solution Stack - Products, Components, Daemons

From a systems perspective, understanding the component stack is important as this is
where most of the day-to-day administration tasks are performed. Further detail is provided
in Section 5 Clustered Samba Management.

2.8.2 Package Details
Details on the individual products/groups, packages and versions can be found in Table 2.8.2: Solution Stack – Product
Package Details:
Product/Group Package Architecture Version Release Installation Requirement
Samba samba x86_64 3.5.10 125.el6 Mandatory
Samba samba-client x86_64 3.5.10 125.el6 Recommended
Samba samba-common x86_64 3.5.10 125.el6 Mandatory
Samba samba-winbind x86_64 3.5.10 125.el6 Mandatory
Samba samba-winbind-clients x86_64 3.5.10 125.el6 Recommended
High Availability cman x86_64 3.0.12.1 32.el6_3.1 Mandatory
High Availability ccs x86_64 0.16.2 55.el6 Default
High Availability omping x86_64 0.0.4 1.el6 Default
High Availability rgmanager x86_64 3.0.12.1 12.el6 Default
High Availability cluster-cim x86_64 0.16.2 18.el6 Optional
High Availability cluster-glue-libs-devel x86_64 1.0.5 6.el6 Optional
High Availability cluster-snmp x86_64 0.16.2 18.el6 Optional
High Availability clusterlib-devel x86_64 3.0.12.1 32.el6_3.1 Optional
High Availability corosynclib-devel x86_64 1.4.1 7.el6_3.1 Optional
High Availability fence-virtd-checkpoint x86_64 0.2.3 9.el6 Optional
High Availability foghorn x86_64 0.1.2 1.el6 Optional
High Availability libesmtp-devel x86_64 1.0.4 15.el6 Optional
High Availability openaislib-devel x86_64 1.1.1 7.el6 Optional
High Availability pacemaker x86_64 1.1.7 6.el6 Optional
High Availability pacemaker-libs-devel x86_64 1.1.7 6.el6 Optional
High Availability python-repoze-what-quickstart noarch 1.0.1 1.el6 Optional

Table 2.8.2: Solution Stack – Product Package Details


Product/Group Package Architecture Version Release Installation Requirement
Resilient Storage gfs2-utils x86_64 3.0.12.1 32.el6_3.1 Mandatory
Resilient Storage lvm2-cluster x86_64 2.02.95 10.el6 Mandatory
Resilient Storage ccs x86_64 0.16.2 55.el6 Default
Resilient Storage cluster-glue-libs-devel x86_64 1.0.5 6.el6 Optional
Resilient Storage clusterlib-devel x86_64 3.0.12.1 32.el6_3.1 Optional
Resilient Storage cmirror x86_64 2.02.95 10.el6_3.2 Optional
Resilient Storage corosynclib-devel x86_64 1.4.1 7.el6_3.1 Optional
Resilient Storage ctdb x86_64 1.0.114.3 4.el6 Optional
Resilient Storage ctdb-devel x86_64 1.0.114.3 4.el6 Optional
Resilient Storage fence-virtd-checkpoint x86_64 0.2.3 9.el6 Optional
Resilient Storage libesmtp-devel x86_64 1.0.4 15.el6 Optional
Resilient Storage openaislib-devel x86_64 1.1.1 7.el6 Optional
Resilient Storage pacemaker-libs-devel x86_64 1.1.7 6.el6 Optional

Table 2.8.2: Solution Stack - Product Package Details (continued)


The High Availability and Resilient Storage products are bundled and installed as group packages, with several of the
packages included in both groups.
2.9 Component Log Files
Log files are essential for monitoring the activity and status of cluster and operating system
components. Table 2.9: Component Log Files provides a summary of the log file locations:

Component                    Log File(s)                       Description
Red Hat Enterprise Linux 6   /var/log/messages                 System events
Red Hat Enterprise Linux 6   /var/log/dmesg                    Boot messages
Red Hat Enterprise Linux 6   /var/log/secure                   Security, authentication messages
Samba                        /var/log/samba/log.smbd           SMB daemon (smbd)
Samba                        /var/log/samba/log.winbindd       Winbind daemon (winbindd)
Samba                        /var/log/samba/log.wb-{client}    {Client} specific connections
Kerberos                     Standard out (terminal screen)    Kerberos client messages
Kerberos                     /var/log/krb5libs.log             Kerberos library messages (optional)
Kerberos                     /var/log/krb5kdc.log              Kerberos KDC messages (optional)
DNS                          /var/log/messages                 DNS messages
NTP                          /var/log/messages                 NTP messages
HA Cluster                   /var/log/cluster                  CMAN component messages (corosync, dlm, fenced)
CTDB                         /var/log/messages                 CTDB messages
GFS2                         /var/log/messages                 GFS2 messages

Table 2.9: Component Log Files


By default, most events are sent via syslog to the default /var/log/messages. Most
daemons have debug mode capabilities that can be enabled through configuration files
(e.g. - /etc/krb5.conf) or via command line flags. Consult the on-line man pages and
Red Hat documentation for further details.
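For example, the smbd debug level can be raised at runtime without restarting the daemon
(level 3 is illustrative; higher levels are increasingly verbose):

# smbcontrol smbd debug 3        # raise the log level of all smbd processes
# smbcontrol smbd debuglevel     # query the current log level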

3 Reference Architecture Configuration
This section provides an overview of the hardware components that were used in the
deployment of this reference architecture. The cluster nodes (smb-srv1, smb-srv2, smb-srv3)
were configured on an HP BladeSystem c7000 enclosure using three HP ProLiant BL460c G6
Blade servers. Two 10 Gb/s Ethernet networks were configured for use as the public and
cluster interconnect networks. The HP Blade servers share access to the CTDB Lock and
Samba Data (file share) volumes located on an HP StorageWorks MSA2324fc Fibre Channel
storage array.
All public and cluster node networks are configured with two bonded interfaces for
redundancy. Client access to the Samba file share (Data Volume) is over the public network.
Figure 3: Cluster Configuration depicts an overview of the cluster configuration:

Figure 3: Cluster Configuration

3.1 Cluster Server - Node 1
Component          Detail
Hostname           smb-srv1
Operating System   Red Hat Enterprise Linux 6.3 (64-bit), 2.6.32-279.1.1.el6.x86_64 kernel
System Type        HP ProLiant BL460c G6
Processor          Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @2.67GHz
Memory             48 GB
Storage            4 x 146 GB SATA internal disk drive (RAID 1), 2 x Qlogic QMH2562 8Gb FC HBA
Network            8 x Broadcom NetXtreme II BCM57711E XGb

Table 3.1: Cluster Node1 Configuration

3.2 Cluster Server - Node 2


Component          Detail
Hostname           smb-srv2
Operating System   Red Hat Enterprise Linux 6.3 (64-bit), 2.6.32-279.1.1.el6.x86_64 kernel
System Type        HP ProLiant BL460c G6
Processor          Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @2.67GHz
Memory             48 GB
Storage            4 x 146 GB SATA internal disk drive (RAID 1), 2 x Qlogic QMH2562 8Gb FC HBA
Network            8 x Broadcom NetXtreme II BCM57711E XGb

Table 3.2: Cluster Node2 Configuration

3.3 Cluster Server - Node 3
Component          Detail
Hostname           smb-srv3
Operating System   Red Hat Enterprise Linux 6.3 (64-bit), 2.6.32-279.1.1.el6.x86_64 kernel
System Type        HP ProLiant BL460c G6
Processor          Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @2.67GHz
Memory             48 GB
Storage            4 x 146 GB SATA internal disk drive (RAID 1), 2 x Qlogic QMH2562 8Gb FC HBA
Network            8 x Broadcom NetXtreme II BCM57711E XGb

Table 3.3: Cluster Node3 Configuration

3.4 Windows Server 2008 R2


Component          Detail
Hostname           win-srv1
Operating System   Windows Server 2008 R2 – Enterprise Edition (64-bit), Version 6.1 (Build 7601: Service Pack 1)
System Type        HP ProLiant BL460c G6
Processor          Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @2.67GHz
Memory             48 GB
Storage            2 x 146 GB SATA internal disk drive (RAID 1), 2 x Qlogic QMH2562 8Gb FC HBA
Network            8 x Broadcom NetXtreme II BCM57711E XGb

Table 3.4: Windows Server 2008 R2 Configuration

3.5 Fibre Channel Storage Array
Component         Detail
Hostname          ra-msa20
System Type       HP StorageWorks MSA2324fc (1 x HP MSA70 expansion shelf)
Controllers       CPU Type: Turion MT32 1800MHz; Cache: 1GB; 2 x Host Ports
Firmware          Storage Controller Code Version: M112R14
                  Memory Controller FPGA Code Version: F300R22
                  Storage Controller Loader Code Version: 19.009
                  Management Controller Code Version: W441R35
                  Management Controller Loader Code Version: 12.015
                  Expander Controller Code Version: 1112
                  CPLD Code Version: 8
                  Hardware Version: 56
Physical Drives   48 x 146GB SAS drives (24 enclosure, 24 expansion shelf)
Logical Drives    4 x 1.3 TB Virtual Disks (12 disk, RAID 6)

Table 3.5: Cluster Storage Array Configuration

4 Clustered Samba Deployment
4.1 Deployment Task Flow
Figure 4.1: Clustered Samba Deployment Task Flow provides an overview of the order in
which the deployment of the cluster nodes and cluster creation tasks are performed:

Figure 4.1: Clustered Samba Deployment Task Flow

Appendix H: Deployment Checklists provides a detailed list of steps to follow for deploying
highly available file shares on a Red Hat Enterprise Linux 6 Samba Cluster.

4.2 Deploy Cluster Nodes
Prior to creating the cluster, each node is deployed by performing the following series
of steps:
• Install Red Hat Enterprise Linux 6
• Configure Networks and Bonding
• Configure Firewall
• Configure Time Service (NTP)
• Configure Domain Name System (DNS)
• Install Cluster Node Software (“High Availability” Add-On)
• Configure Storage
The next sections describe how to perform the deployment steps in detail.

4.2.1 Install Red Hat Enterprise Linux 6
The installation of Red Hat Enterprise Linux 6 on each of the three cluster nodes is performed
using a Red Hat Satellite server. Details on how the Satellite server was configured can be
found in the Red Hat Satellite section of Appendix A: References. Local
media can be used in lieu of a Satellite server deployment.
Once Red Hat Enterprise Linux 6 has been installed on each cluster node, perform the
following sequence of steps to register and update each cluster node:

1. Register the node using Red Hat Subscription Manager:


# subscription-manager register --autosubscribe
Username: rhn-user
Password: ********
The system has been registered with id: a8ebb66c-f4a1-4f66-bedd-d6bb3ab57421
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

2. List the available subscriptions and subscribe the node:


# subscription-manager list --available
+-------------------------------------------+
Available Subscriptions

...output abbreviated...

Product Name: Red Hat Employee Subscription


Product Id: SYS0395
Pool Id: 8a85f98431be63480131f7518e204db6
Quantity: 9568
Service Level: None
Service Type: None
Multi-Entitlement: No

www.redhat.com 20 refarch-feedback@redhat.com
Expires: 01/01/2022
Machine Type: physical

# subscription-manager subscribe --auto


Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

3. View the repositories that are available:


# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.

Please use yum-config-manager to configure which software


repositories are used with Red Hat Subscription Management.

rhel-6-server-cf-tools-1-rpms | 2.6 kB 00:00


rhel-6-server-rhev-agent-rpms | 2.6 kB 00:00
rhel-6-server-rhev-agent-rpms/primary_db | 8.7 kB 00:00
rhel-6-server-rpms | 3.7 kB 00:00
rhel-6-server-rpms/primary_db | 15 MB 00:03
rhel-ha-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-ha-for-rhel-6-server-rpms/primary_db | 147 kB 00:00
rhel-lb-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-lb-for-rhel-6-server-rpms/primary_db | 7.6 kB 00:00
rhel-rs-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-rs-for-rhel-6-server-rpms/primary_db | 160 kB 00:00
rhel-scalefs-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-scalefs-for-rhel-6-server-rpms/primary_db | 13 kB 00:00
repo id repo name status

rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs)


26
rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for
RHEL 6 Server (RPMs) 12
rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs)
8,344
rhel-ha-for-rhel-6-server-rpms Red Hat Enterprise Linux High Availability
(for RHEL 6 Server) (RPMs) 200
rhel-lb-for-rhel-6-server-rpms Red Hat Enterprise Linux Load Balancer (for
RHEL 6 Server) (RPMs) 5
rhel-rs-for-rhel-6-server-rpms Red Hat Enterprise Linux Resilient Storage
(for RHEL 6 Server) (RPMs) 228
rhel-scalefs-for-rhel-6-server-rpms Red Hat Enterprise Linux Scalable File
System (for RHEL 6 Server) (RPMs) 19
repolist: 8,834

4. Update each node to take in the latest patches and security updates:
# yum update

Follow the steps above and consult the Red Hat Enterprise Linux 6 Installation and
Deployment Guides found in the Red Hat Enterprise Linux 6 section of Appendix A:
References for further details.

4.2.2 Configure Networks and Bonding
The cluster nodes are configured to provide access to all members across both the public and
cluster interconnect (private) networks. The public network (10.16.142.0) is configured on the
eth0 interface and bonded to the eth1 interface for redundancy. The cluster interconnect
(10.0.0.0) is configured on the eth2 interface and bonded to the eth3 interface for
redundancy. Static IP addressing is used throughout the cluster configuration.
1. Verify that NetworkManager is disabled on startup to prevent conflicts with the High
Availability Add-On cluster services:
# chkconfig NetworkManager off
# chkconfig NetworkManager --list
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off

2. Create bond configuration files for the public and cluster interconnect networks:
# echo "alias bond0 bonding" >> /etc/modprobe.d/bonding.conf
# echo "alias bond1 bonding" >> /etc/modprobe.d/bonding.conf

3. Create the bond interface file for the public network and save the file as
/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=10.16.142.101
NETMASK=255.255.248.0
GATEWAY=10.16.143.254
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=100"

4. Create the bond interface file for the cluster interconnect network and save the file as
/etc/sysconfig/network-scripts/ifcfg-bond1:
DEVICE=bond1
IPADDR=10.0.0.101
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1"

5. Modify the interface file for the first public interface and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

6. Create the interface file for the second public interface and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

7. Modify the interface file for the first cluster interconnect and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth2:
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no

8. Create the interface file for the second cluster interconnect and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth3:
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no

9. Restart the networking service:


# service network restart

10. Verify the public bond is running:


# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)


MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0


MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:17:a4:77:24:44
Slave queue ID: 0

Slave Interface: eth1


MII Status: up
Speed: 100 Mbps

Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:17:a4:77:24:46
Slave queue ID: 0

11. Verify the cluster interconnect bond is running:


# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)


Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2


MII Status: up
Speed: 5000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:17:a4:77:24:60
Slave queue ID: 0

Slave Interface: eth3


MII Status: up
Speed: 5000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:17:a4:77:24:62
Slave queue ID: 0

12. Edit the /etc/hosts file to include the IP addresses and hostnames/aliases of all cluster
node and management server interfaces:
127.0.0.1 localhost localhost.localdomain
#
#----------------#
# Cluster Nodes: #
#----------------#
#
10.16.142.101 smb-srv1 smb-srv1.cloud.lab.eng.bos.redhat.com
10.0.0.101 smb-srv1-ci smb-srv1-ci.cloud.lab.eng.bos.redhat.com
10.16.142.102 smb-srv2 smb-srv2.cloud.lab.eng.bos.redhat.com
10.0.0.102 smb-srv2-ci smb-srv2-ci.cloud.lab.eng.bos.redhat.com
10.16.142.103 smb-srv3 smb-srv3.cloud.lab.eng.bos.redhat.com
10.0.0.103 smb-srv3-ci smb-srv3-ci.cloud.lab.eng.bos.redhat.com
#

13. Distribute the file to the other two cluster nodes. For example, if the file was initially
created on cluster node smb-srv1, copy it to the other nodes as follows:
# scp -p /etc/hosts smb-srv2:/etc/hosts
# scp -p /etc/hosts smb-srv3:/etc/hosts

14. Verify all public and cluster interconnect interfaces are properly configured and
responding:
# ping smb-srv1
# ping smb-srv1-ci
# ping smb-srv2
# ping smb-srv2-ci
# ping smb-srv3
# ping smb-srv3-ci

4.2.3 Configure Firewall
Before the cluster can be created, the firewall ports must be configured to allow access to the
cluster network daemons. The specific ports requiring access are listed in Table 4.2.3
Cluster Node Ports:
Port Number Protocol Component
5404 UDP corosync/cman (Cluster Manager)
5405 UDP corosync/cman (Cluster Manager)
11111 TCP ricci (Cluster Configuration)
11111 UDP ricci (Cluster Configuration)
21064 TCP dlm (Distributed Lock Manager)
16851 TCP modclusterd
445 TCP smb (Samba)
4379 TCP ctdb (CTDB)
4379 UDP ctdb (CTDB)
137 UDP NBT (Name Service)
138 UDP NBT (Datagram Service)
139 TCP NBT (Session Service)

Table 4.2.3: Cluster Node Ports

Firewall access can be configured with either system-config-firewall (GUI) or
the iptables utility. Use iptables to configure the firewall as per the following series of
steps on each of the three cluster nodes (smb-srv1, smb-srv2, smb-srv3):

1. Create a backup copy of the current iptables configuration file:


# cp /etc/sysconfig/iptables-config /etc/sysconfig/iptables-config.orig

2. Display the current iptables configuration:


# iptables --list --line-numbers --numeric --verbose
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 8612 17M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
state RELATED,ESTABLISHED

2 7 588 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 1 60 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
state NEW tcp dpt:22
5 3762 547K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)


num pkts bytes target prot opt in out source
destination
1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 8082 packets, 1747K bytes)


num pkts bytes target prot opt in out source destination

3. Create a new iptables chain called cluster-chain and insert it into the INPUT
chain:
# iptables --new-chain cluster-chain
# iptables --insert INPUT --jump cluster-chain

4. Create a new iptables chain called samba-chain and insert it into the INPUT
chain:
# iptables --new-chain samba-chain
# iptables --insert INPUT --jump samba-chain

5. If NetBIOS is in use by clients, create a new iptables chain called netbios-chain


and insert it into the INPUT chain:
# iptables --new-chain netbios-chain
# iptables --insert INPUT --jump netbios-chain

6. Add the rules for the cluster components to the chain cluster-chain:
# iptables --append cluster-chain --proto udp --destination-port 5404 \
--jump ACCEPT
# iptables --append cluster-chain --proto udp --destination-port 5405 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 11111 \
--jump ACCEPT
# iptables --append cluster-chain --proto udp --destination-port 11111 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 21064 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 16851 \
--jump ACCEPT

7. Add the rules for the Samba and CTDB components to the chain samba-chain:
# iptables --append samba-chain --proto tcp --destination-port 445 \
--jump ACCEPT
# iptables --append samba-chain --proto tcp --destination-port 4379 \
--jump ACCEPT
# iptables --append samba-chain --proto udp --destination-port 4379 \
--jump ACCEPT

8. If NetBIOS is in use by clients, add the rules for the NetBIOS services to the chain
netbios-chain:
# iptables --append netbios-chain --proto udp --destination-port 137 \
--jump ACCEPT
# iptables --append netbios-chain --proto udp --destination-port 138 \
--jump ACCEPT
# iptables --append netbios-chain --proto tcp --destination-port 139 \
--jump ACCEPT

9. Display the new iptables configuration:


# iptables --list --line-numbers --numeric --verbose
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 628 56934 samba-chain all -- * * 0.0.0.0/0 0.0.0.0/0
2 642 58232 cluster-chain all -- * * 0.0.0.0/0 0.0.0.0/0
3 21107 34M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
state RELATED,ESTABLISHED
4 6 504 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
5 1 60 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
6 5 300 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
state NEW tcp dpt:22
7 3959 580K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)


num pkts bytes target prot opt in out source destination
1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 63 packets, 6636 bytes)


num pkts bytes target prot opt in out source destination

Chain cluster-chain (1 references)


num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0
udp dpt:5404
2 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0
udp dpt:5405
3 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:11111
4 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0
udp dpt:11111
5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:21064
6 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:16851

Chain samba-chain (1 references)


num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:445

2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:4379
3 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0
udp dpt:4379

10. Save the new rules and ensure iptables is activated on system boot:
# service iptables save
# chkconfig iptables on
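As an optional spot check, port access can be probed from another host on the public
network. This is a minimal sketch assuming the nmap package is installed on the probing
host; UDP probes require the -sU option and root privileges:
# nmap -p 445,4379,11111,21064 smb-srv1
# nmap -sU -p 5404,5405 smb-srv1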

4.2.4 Configure Time Service (NTP)
Configure the time service on each cluster node as follows:

1. Edit the file /etc/ntp.conf so that the time on each cluster node is synchronized from a
known, reliable time service:
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
server ns1.bos.redhat.com
server 10.5.26.10

2. Activate the change by stopping the ntp daemon, updating the time, then starting the
ntp daemon. Verify the change on each cluster node:
# service ntpd stop
Shutting down ntpd: [ OK ]

# ntpdate 10.16.255.2
22 Mar 20:17:00 ntpdate[14784]: adjust time server 10.16.255.2 offset
-0.002933 sec
# service ntpd start
Starting ntpd: [ OK ]

3. Configure the ntpd daemon to start on server boot:


# chkconfig ntpd on
# chkconfig --list ntpd
ntpd           0:off 1:off     2:on     3:on     4:on     5:on     6:off
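Once ntpd is running, peer synchronization can be spot-checked with the ntpq utility; an
asterisk in the first column marks the peer currently selected for synchronization:
# ntpq -p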

4.2.5 Configure Domain Name System (DNS)
Configure DNS lookups on each cluster node as follows:

1. Edit the file /etc/resolv.conf so that the DNS domain, search path and nameservers
are specified:
domain cloud.lab.eng.bos.redhat.com
search cloud.lab.eng.bos.redhat.com
nameserver 10.nn.nnn.3
nameserver 10.nn.nnn.247
nameserver 10.nn.nnn.2

2. Similarly, the hostname of each cluster node should be set to its Fully Qualified
Domain Name (FQDN). Edit the file /etc/sysconfig/network and set the hostname to
use the FQDN:
HOSTNAME=smb-srv1.cloud.lab.eng.bos.redhat.com
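To apply the new hostname without a reboot and confirm forward name resolution, a quick
check along these lines can be run on each node (the host utility is provided by the
bind-utils package):
# hostname smb-srv1.cloud.lab.eng.bos.redhat.com
# hostname --fqdn
smb-srv1.cloud.lab.eng.bos.redhat.com
# host smb-srv1.cloud.lab.eng.bos.redhat.com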

4.2.6 Install Cluster Node Software
Install the High Availability and Resilient Storage Add-On packages. Perform this step on
each of the three cluster nodes:
# yum groupinstall "High Availability"
# yum groupinstall "Resilient Storage"

4.3 Configure Cluster


The ccs (Cluster Configuration System) command line interface allows administrators to
create, modify and view a cluster configuration file on a remote node through the ricci
service or on a local filesystem. Using ccs an administrator can also start, stop and relocate
cluster services on one or more cluster nodes.
In the prior sections, cluster nodes were fully deployed. Do not proceed with creating the
cluster via CCS until these tasks have been fully completed:

Cluster Nodes (smb-srv1, smb-srv2, smb-srv3)


• Install Red Hat Enterprise Linux 6
• Configure Networks and Bonding
• Configure Firewall
• Configure Network Time Service (NTP)
• Configure Domain Name System (DNS)
• Install Cluster Node Software (“High Availability” Add-On)
• Install Cluster Storage Software (“Resilient Storage” Add-On)
• Configure Storage
The next sections describe the steps involved in creating a cluster from the ccs command
line interface.

4.3.1 Create Cluster
Cluster creation is performed from the first cluster node (smb-srv1) and updates are deployed
to the other cluster nodes across the public network interfaces. The process involves creating
a full cluster configuration file (/etc/cluster/cluster.conf) on one node (smb-srv1) then
distributing the configuration and activating the cluster on the remaining nodes. Cluster
interconnects are specified within the configuration file for all node communications.
Configure the appropriate cluster services then create the cluster.
1. Start the ricci service, configure to start on system boot and verify. Perform this step

on all cluster nodes:
# service ricci start
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]
# chkconfig ricci on
# chkconfig --list ricci
ricci 0:off 1:off 2:on 3:on 4:on 5:on 6:off

2. Configure a password for the ricci user account on each node. The same password
may be used on all cluster nodes to simplify administration:
# passwd ricci
Changing password for user ricci.
New password: **********
Retype new password: **********
passwd: all authentication tokens updated successfully.

3. Create a cluster named samba-cluster from the first cluster node (smb-srv1):
# ccs --host smb-srv1 --createcluster samba-cluster
smb-srv1 password: **********
Note that the password prompted for in Step 3 is the ricci password set in Step 2.
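At this point the cluster configuration file contains only the cluster name and an initial
configuration version. The generated /etc/cluster/cluster.conf can be reviewed at any time
with the --getconf option:
# ccs --host smb-srv1 --getconf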

4.3.2 Add Nodes
Once the cluster has been created, specify the member nodes in the cluster configuration.
1. Add the three cluster nodes (smb-srv1-ci, smb-srv2-ci, smb-srv3-ci) to the cluster.
Perform this step from the first cluster node (smb-srv1):
# ccs --host smb-srv1 --addnode smb-srv1-ci --nodeid="1"
Node smb-srv1-ci added.
# ccs --host smb-srv1 --addnode smb-srv2-ci --nodeid="2"
Node smb-srv2-ci added.
# ccs --host smb-srv1 --addnode smb-srv3-ci --nodeid="3"
Node smb-srv3-ci added.

4.3.3 Add Fence Devices
Add the fence method then add devices and instances for each cluster node to the method.
IPMI LAN fencing is used in this configuration. Other fencing methods and devices can be
used depending on the resources available. Perform all steps from the first cluster node
(smb-srv1).
1. Add a fence method for the Primary fencing devices:
# ccs --host smb-srv1 --addmethod Primary smb-srv1-ci
Method Primary added to smb-srv1-ci.
# ccs --host smb-srv1 --addmethod Primary smb-srv2-ci
Method Primary added to smb-srv2-ci.
# ccs --host smb-srv1 --addmethod Primary smb-srv3-ci
Method Primary added to smb-srv3-ci.

2. Add a fence device for the IPMI LAN device:
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv1-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.232 lanplus=on \
login=root name=IPMI-smb-srv1-ci passwd=password \
power_wait=5 timeout=20
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv2-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.233 lanplus=on \
login=root name=IPMI-smb-srv2-ci passwd=password \
power_wait=5 timeout=20
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv3-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.241 lanplus=on \
login=root name=IPMI-smb-srv3-ci passwd=password \
power_wait=5 timeout=20

3. Add a fence instance for each node to the Primary fence method:
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv1-ci smb-srv1-ci Primary
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv2-ci smb-srv2-ci Primary
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv3-ci smb-srv3-ci Primary
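Before relying on fencing, it is worth confirming that the fence devices and instances were
recorded, and that each IPMI interface responds to a status query. A sketch, assuming the
same IPMI addresses and credentials used above:
# ccs --host smb-srv1 --lsfencedev
# ccs --host smb-srv1 --lsfenceinst smb-srv1-ci
# fence_ipmilan -a 10.16.143.232 -l root -p password -P -o status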

4.3.4 Activate Cluster
Once the cluster has been created, the configuration needs to be activated and the cluster
started on all nodes.

1. Synchronize and activate the cluster configuration across all nodes:


# ccs --host smb-srv1 --sync --activate
smb-srv2-ci password:

# ccs --host smb-srv1 --checkconf
All nodes in sync.

2. Verify the status of all cluster nodes:
# clustat
Cluster Status for samba-cluster @ Tue Aug 21 16:58:23 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

4.4 Configure Storage
Two volumes are created – one to maintain the CTDB lock state and another to hold the
contents of the Samba file share. Access to both volumes is shared across the cluster nodes.
In the event of a node failure, access to both volumes is maintained across all remaining
cluster nodes. Since all cluster nodes require simultaneous access, both volumes are
configured as Clustered Logical Volume Manager (CLVM) volumes.

The Logical Unit Numbers (LUNs) for the volumes must be provisioned and accessible to each of
the cluster nodes before continuing. Appendix B: Fibre Channel Storage Provisioning
describes how the LUNs used for this reference architecture were provisioned.

4.4.1 Configure Multipathing
1. Install the DM Multipath Package on each cluster node:
# yum install device-mapper-multipath.x86_64

2. On the first cluster node (smb-srv1) create a Multipath configuration file
(/etc/multipath.conf) with user friendly names disabled and the daemon started:
# mpathconf --enable --user_friendly_names n --with_multipathd y
Starting multipathd daemon: [ OK ]

3. On the first cluster node (smb-srv1) view the multipath devices, paths and World Wide
IDs (WWIDs):
# multipath -ll
3600c0ff000d7e69dd26a325001000000 dm-6 HP,MSA2324fc
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:1 sdb 8:16 active ready running
|- 2:0:0:1 sdd 8:48 active ready running
|- 2:0:1:1 sdf 8:80 active ready running
`- 1:0:1:1 sdh 8:112 active ready running
3600c0ff000d7e69df36a325001000000 dm-7 HP,MSA2324fc
size=186G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:2 sdc 8:32 active ready running
|- 2:0:0:2 sde 8:64 active ready running
|- 2:0:1:2 sdg 8:96 active ready running
`- 1:0:1:2 sdi 8:128 active ready running

# ls /dev/mapper
3600508b1001030374142393845301000 3600508b1001030374142393845301000p2
3600c0ff000d7e69df36a325001000000 vg_smbsrv1-lv_home vg_smbsrv1-lv_swap
3600508b1001030374142393845301000p1 3600c0ff000d7e69dd26a325001000000
control vg_smbsrv1-lv_root

4. On the first cluster node (smb-srv1) edit the file /etc/multipath.conf and add aliases
for both the CTDB (smb-srv-ctdb-01) and Data (smb-srv-data-01) volumes
using the WWIDs from the previous step:

multipaths {
multipath {
alias smb-srv-ctdb-01
wwid "3600c0ff000d7e69dd26a325001000000"
}
multipath {
alias smb-srv-data-01
wwid "3600c0ff000d7e69df36a325001000000"
}
}

5. Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/multipath.conf smb-srv2:/etc/multipath.conf
# scp -p /etc/multipath.conf smb-srv3:/etc/multipath.conf

6. Restart multipathd on each cluster node:


# service multipathd restart
Stopping multipathd daemon: [ OK ]
Starting multipathd daemon: [ OK ]

7. Verify the change on each cluster node:


# multipath -ll
smb-srv-ctdb-01 (3600c0ff000d7e69dd26a325001000000) dm-6 HP,MSA2324fc
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:1 sdb 8:16 active ready running
|- 2:0:0:1 sdd 8:48 active ready running
|- 2:0:1:1 sdf 8:80 active ready running
`- 1:0:1:1 sdh 8:112 active ready running
smb-srv-data-01 (3600c0ff000d7e69df36a325001000000) dm-7 HP,MSA2324fc
size=186G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:2 sdc 8:32 active ready running
|- 2:0:0:2 sde 8:64 active ready running
|- 2:0:1:2 sdg 8:96 active ready running
`- 1:0:1:2 sdi 8:128 active ready running

# ls /dev/mapper
3600508b1001030374142393845301000 3600508b1001030374142393845301000p2
smb-srv-ctdb-01 vg_smbsrv1-lv_home vg_smbsrv1-lv_swap
3600508b1001030374142393845301000p1 control
smb-srv-data-01 vg_smbsrv1-lv_root
8. On each cluster node configure multipath to start on system boot:
# chkconfig multipathd on
# chkconfig --list multipathd
multipathd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

4.4.2 Create Cluster Logical Volumes
Create volume groups (VG) and logical volumes (LV) on the previously defined LUNs.

1. Ensure the parameter locking_type is set to the value of 3 (to enable built-in
clustered locking) in the global section of the file /etc/lvm/lvm.conf on all nodes:
# grep "locking_type" /etc/lvm/lvm.conf | grep -v "#"
locking_type = 3
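If locking_type is still set to 1 (the single-host default), the supported way to enable
clustered locking is the lvmconf helper from the lvm2-cluster package, which edits
/etc/lvm/lvm.conf in place. Run it on each node rather than editing the file by hand:
# lvmconf --enable-cluster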

2. Start the cluster manager (CMAN) and clvmd services on each cluster node:
# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]

# service clvmd start


Starting clvmd:
Activating VG(s): 3 logical volume(s) in volume group "vg_smbsrv1" now
active
[ OK ]

3. Configure the physical volumes (PV) using the Multipath devices (/dev/mapper/smb-
srv-ctdb-01, /dev/mapper/smb-srv-data-01) and display the attributes. Perform this
step on the first cluster node (smb-srv1) only:
# pvcreate /dev/mapper/smb-srv-ctdb-01
Writing physical volume data to disk "/dev/mapper/smb-srv-ctdb-01"
Physical volume "/dev/mapper/smb-srv-ctdb-01" successfully created

# pvcreate /dev/mapper/smb-srv-data-01
Writing physical volume data to disk "/dev/mapper/smb-srv-data-01"
Physical volume "/dev/mapper/smb-srv-data-01" successfully created

# pvdisplay /dev/mapper/smb-srv-ctdb-01
"/dev/mapper/smb-srv-ctdb-01" is a new physical volume of "1.86 GiB"
--- NEW Physical volume ---
PV Name /dev/mapper/smb-srv-ctdb-01
VG Name
PV Size 1.86 GiB
Allocatable NO
PE Size 0
Total PE 0

Free PE 0
Allocated PE 0
PV UUID K9eYda-fJa9-kMmX-tOme-nXER-ZYxe-rmhxtj

# pvdisplay /dev/mapper/smb-srv-data-01
"/dev/mapper/smb-srv-data-01" is a new physical volume of "186.26 GiB"
--- NEW Physical volume ---
PV Name /dev/mapper/smb-srv-data-01
VG Name
PV Size 186.26 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID plJUZN-wDLh-F3Fh-5UFK-FaIu-GVle-l1oyId

4. Create volume groups (VG) to contain the logical volumes (LV) and display the
attributes. Perform this step on the first cluster node (smb-srv1) only:
# vgcreate --clustered y SMB-CTDB-VG /dev/mapper/smb-srv-ctdb-01
Clustered volume group "SMB-CTDB-VG" successfully created

# vgcreate --clustered y SMB-DATA1-VG /dev/mapper/smb-srv-data-01
Clustered volume group "SMB-DATA1-VG" successfully created

# vgdisplay SMB-CTDB-VG
--- Volume group ---
VG Name SMB-CTDB-VG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Clustered yes
Shared no
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.86 GiB
PE Size 4.00 MiB
Total PE 476
Alloc PE / Size 0 / 0
Free PE / Size 476 / 1.86 GiB
VG UUID RdFhK1-yKI6-tE65-R60U-rmsz-p7cL-2gVW8S

# vgdisplay SMB-DATA1-VG
--- Volume group ---
VG Name SMB-DATA1-VG
System ID

Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Clustered yes
Shared no
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 186.26 GiB
PE Size 4.00 MiB
Total PE 47683
Alloc PE / Size 0 / 0
Free PE / Size 47683 / 186.26 GiB
VG UUID NrHlsk-HyZk-AWWf-Gs9J-Hq0Q-tfBx-Sh2ce8

5. Create logical volumes (LV) for the CTDB (smb-ctdb-lvol1) and Data (smb-data-lvol1)
volumes and display the attributes. Perform this step on the first cluster node
(smb-srv1) only:
# lvcreate --size 1.8GB --name smb-ctdb-lvol1 SMB-CTDB-VG
Rounding up size to full physical extent 1.80 GiB
Logical volume "smb-ctdb-lvol1" created

# lvcreate --size 180GB --name smb-data-lvol1 SMB-DATA1-VG
Logical volume "smb-data-lvol1" created

# lvdisplay SMB-CTDB-VG
--- Logical volume ---
LV Path /dev/SMB-CTDB-VG/smb-ctdb-lvol1
LV Name smb-ctdb-lvol1
VG Name SMB-CTDB-VG
LV UUID oUqpSy-Ucpf-zHSg-dtav-5cJi-NZTm-UbQ0j1
LV Write Access read/write
LV Creation host, time smb-srv1, 2012-08-21 14:46:23 -0400
LV Status available
# open 0
LV Size 1.80 GiB
Current LE 461
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8

# lvdisplay SMB-DATA1-VG
--- Logical volume ---
LV Path /dev/SMB-DATA1-VG/smb-data-lvol1
LV Name smb-data-lvol1
VG Name SMB-DATA1-VG
LV UUID 4xQw7X-FrHS-bXvO-BqNh-lBTs-C8kX-917xYi

LV Write Access read/write
LV Creation host, time smb-srv1, 2012-08-21 14:46:53 -0400
LV Status available
# open 0
LV Size 180.00 GiB
Current LE 46080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
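For a more compact view than the full vgdisplay/lvdisplay output, the vgs and lvs utilities
summarize the volume groups and logical volumes in one line each; the 'c' attribute flag in
the vgs output confirms a volume group is clustered:
# vgs SMB-CTDB-VG SMB-DATA1-VG
# lvs SMB-CTDB-VG SMB-DATA1-VG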

4.4.3 Create GFS2 Filesystems
1. Format both volumes with the GFS2 filesystem. For each volume, specify 3 journals
(-j3) - one for each cluster node, the cluster locking protocol (-p lock_dlm) and the
lock table name (ClusterName:FileSystemName). Perform this step on the first
cluster node (smb-srv1) only:
# mkfs -t gfs2 -j3 -p lock_dlm -t samba-cluster:ctdb-state \
/dev/SMB-CTDB-VG/smb-ctdb-lvol1
This will destroy any data on /dev/SMB-CTDB-VG/smb-ctdb-lvol1.
It appears to contain: symbolic link to `../dm-8'

Are you sure you want to proceed? [y/n] y

Device: /dev/SMB-CTDB-VG/smb-ctdb-lvol1
Blocksize: 4096
Device Size 1.80 GB (472064 blocks)
Filesystem Size: 1.80 GB (472063 blocks)
Journals: 3
Resource Groups: 8
Locking Protocol: "lock_dlm"
Lock Table: "samba-cluster:ctdb-state"
UUID: 816d88d3-3f4b-4198-ab92-a73157216c22

# mkfs.gfs2 -j3 -p lock_dlm -t samba-cluster:smb-data1 \
/dev/SMB-DATA1-VG/smb-data-lvol1
This will destroy any data on /dev/SMB-DATA1-VG/smb-data-lvol1.
It appears to contain: symbolic link to `../dm-9'

Are you sure you want to proceed? [y/n] y

Device: /dev/SMB-DATA1-VG/smb-data-lvol1
Blocksize: 4096
Device Size 180.00 GB (47185920 blocks)
Filesystem Size: 180.00 GB (47185918 blocks)
Journals: 3
Resource Groups: 720
Locking Protocol: "lock_dlm"
Lock Table: "samba-cluster:smb-data1"
UUID: 9341df53-e6cc-fe6e-6d07-8ac015cd5bd2

2. Create a mount point for both volumes. Perform this step on all cluster nodes:

# mkdir -p /share/ctdb
# mkdir -p /share/data1

4.4.4 Configure SELinux Security Parameters
By default, SELinux is enabled during the Red Hat Enterprise Linux 6 installation process. For
maximum security, Red Hat recommends running Red Hat Enterprise Linux 6 with SELinux
enabled. In this section, verification is done to ensure that SELinux is enabled and the file
context set correctly on the /share/data1 filesystem for use by Samba.

1. Verify whether or not SELinux is enabled using the getenforce utility. Perform this
step on all cluster nodes:
# getenforce
Enforcing

If getenforce returns “Permissive” then set to “Enforcing” and verify:
# getenforce
Permissive
# setenforce 1
# getenforce
Enforcing

2. Edit the file /etc/selinux/config and set SELinux to be persistent across reboots.
Perform this step on all cluster nodes:
SELINUX=enforcing

3. Add (-a) the file context (fcontext) for type (-t) samba_share_t to the directory
/share/data1 and all contents within it. This makes the changes permanent.
Perform this step on all cluster nodes:
# semanage fcontext -a -t samba_share_t "/share/data1(/.*)?"

Note: If the semanage (/usr/sbin/semanage) utility is not available, install the core
policy utilities kit and then apply the file context on all nodes:
# yum -y install policycoreutils-python
# semanage fcontext -a -t samba_share_t "/share/data1(/.*)?"

4. View the current security policy file context. Perform this step on all cluster nodes:
# ls -ldZ /share/data1
drwxr-xr-x. root root system_u:object_r:file_t:s0 /share/data1

5. Run the restorecon command to apply the changes and view the updated file
context. Perform this step on all cluster nodes:
# restorecon -R -v /share/data1
restorecon reset /share/data1 context system_u:object_r:file_t: \
s0->system_u:object_r:samba_share_t:s0
restorecon reset /share/data1/data.test context unconfined_u:object_r: \
file_t:s0->unconfined_u:object_r:samba_share_t:s0

# ls -ldZ /share/data1
drwxr-xr-x. root root system_u:object_r:samba_share_t:s0 /share/data1
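Optionally, the stored policy mapping can be confirmed with the matchpathcon utility (from
the libselinux-utils package), which prints the context the policy expects for a given path:
# matchpathcon /share/data1
/share/data1    system_u:object_r:samba_share_t:s0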

4.5 Configure CTDB


The file (/etc/sysconfig/ctdb) contains parameters specifying the location of CTDB
configuration files and options. Table 4.5: CTDB Configuration File Parameters
provides a summary of each configuration file parameter:
Parameter              Description
CTDB_NODES             Location of the CTDB cluster nodes file
CTDB_PUBLIC_ADDRESSES  Location of the CTDB cluster nodes public IP addresses file
CTDB_RECOVERY_LOCK     Location of recovery lock file – resides on CTDB lock volume
CTDB_MANAGES_SAMBA     Configures CTDB to start, stop the Samba (smbd) service
CTDB_MANAGES_WINBIND   Configures CTDB to start, stop the Winbind (winbindd) service

Table 4.5: CTDB Configuration File Parameters

1. Install the CTDB package – perform this step on each cluster node:
# yum -y install ctdb

2. Edit and save the CTDB configuration file (/etc/sysconfig/ctdb) on the first cluster node
(smb-srv1) as follows:
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes # optional

Copy the file to the other cluster nodes (smb-srv2, smb-srv3):
# scp -p /etc/sysconfig/ctdb smb-srv2:/etc/sysconfig/ctdb
# scp -p /etc/sysconfig/ctdb smb-srv3:/etc/sysconfig/ctdb

The optional parameter CTDB_MANAGES_WINBIND is for environments that require user
authentication and ID mapping through integration with Windows Active Directory domains.
This configuration is detailed in Section 6 Windows Active Directory Integration.

3. On the first cluster node (smb-srv1), edit and save the CTDB nodes file
(/etc/ctdb/nodes) by adding the CLUSTER INTERCONNECT IP addresses for each
cluster node (smb-srv1-ci, smb-srv2-ci, smb-srv3-ci):
10.0.0.101
10.0.0.102

10.0.0.103

The IP addresses specified here are used for CTDB cluster node communications and
should match those specified in the cluster configuration file (/etc/cluster/cluster.conf).
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes

4. On the first cluster node (smb-srv1), edit and save the CTDB public addresses file
(/etc/ctdb/public_addresses) by adding three new, unique public IP addresses for use
by CTDB on each cluster node (smb-srv1-ctdb, smb-srv2-ctdb, smb-srv3-ctdb):
10.16.142.111/21 bond0
10.16.142.112/21 bond0
10.16.142.113/21 bond0
These addresses co-exist with the existing public addresses defined for bond0. In the
event of a cluster node failover, client access to file shares is maintained through the
use of IP address takeover (IPAT). The addresses specified within this file are relocated
to other cluster nodes to maintain client file share access on the public network. Copy
the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/ctdb/public_addresses smb-srv2:/etc/ctdb/public_addresses
# scp -p /etc/ctdb/public_addresses smb-srv3:/etc/ctdb/public_addresses

5. Edit the /etc/hosts file to include the IP addresses, hostname/aliases of all cluster
node CTDB public address interfaces:
127.0.0.1 localhost localhost.localdomain
#----------------#
# Cluster Nodes: #
#----------------#
#
10.16.142.101 smb-srv1 smb-srv1.cloud.lab.eng.bos.redhat.com
10.16.142.111 smb-srv1-ctdb smb-srv1-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.101 smb-srv1-ci smb-srv1-ci.cloud.lab.eng.bos.redhat.com
10.16.142.102 smb-srv2 smb-srv2.cloud.lab.eng.bos.redhat.com
10.16.142.112 smb-srv2-ctdb smb-srv2-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.102 smb-srv2-ci smb-srv2-ci.cloud.lab.eng.bos.redhat.com
10.16.142.103 smb-srv3 smb-srv3.cloud.lab.eng.bos.redhat.com
10.16.142.113 smb-srv3-ctdb smb-srv3-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.103 smb-srv3-ci smb-srv3-ci.cloud.lab.eng.bos.redhat.com
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/hosts smb-srv2:/etc/hosts
# scp -p /etc/hosts smb-srv3:/etc/hosts
It is also recommended to register these hostnames and addresses within the local, site-specific DNS.

4.6 Configure Samba
Install the Samba packages and configure the Clustered Samba file share. Note that CTDB
manages the starting and stopping of the Samba and (optionally) Winbind services – do not
manually start or stop them.

1. Install the Samba server, client and winbind packages – perform this step on each
cluster node:
# yum -y install samba samba-client samba-common samba-winbind \
samba-winbind-clients

• Note that some packages may have been previously installed depending on
which packages were selected during the installation of Red Hat Enterprise
Linux on each cluster node.
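• Because CTDB manages these services, also ensure the smb and winbind init
scripts are not started directly by init on any cluster node:
# chkconfig smb off
# chkconfig winbind off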

2. On the first cluster node (smb-srv1), edit and save the Samba configuration file
(/etc/samba/smb.conf) as follows:
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v

guest ok = yes
clustering = yes

idmap backend = tdb2


passdb backend = tdbsam
log file = /var/log/samba/log.%m
max log size = 50

[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes

Test the file using the testparm utility:


# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[data1]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions

[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes

idmap backend = tdb2
guest ok = Yes

[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf

4.7 Start clvmd


On each cluster node, restart the clvmd daemon and configure it to start on boot as follows:
# service clvmd restart
Activating VG(s):
1 logical volume(s) in volume group "SMB-DATA1-VG" now active
1 logical volume(s) in volume group "SMB-CTDB-VG" now active
3 logical volume(s) in volume group "vg_smbsrv1" now active
[ OK ]
# chkconfig clvmd on

4.8 Mount GFS2 Volumes


1. Mount the CTDB and DATA volumes and verify they can be written to. Perform this step
on all cluster nodes:
# mount -t gfs2 /dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb
# mount -t gfs2 /dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1

# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_smbsrv1-lv_root
51606140 2940604 46044096 7% /
tmpfs 24708156 35236 24672920 1% /dev/shm
/dev/mapper/3600508b1001030374142393845301000p1
495844 98392 371852 21% /boot
/dev/mapper/vg_smbsrv1-lv_home
64583508 185180 61117640 1% /home
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1
1888032 397164 1490868 22% /share/ctdb
/dev/mapper/SMB--DATA--VG-smb--data--lvol1
188723456 397236 188326220 1% /share/data1

# touch /share/ctdb/ctdb.test
# touch /share/data1/data1.test
# ls -l /share/ctdb/ctdb.test /share/data1/data1.test
-rw-r--r--. 1 root root 0 Sep 5 16:43 /share/ctdb/ctdb.test
-rw-r--r--. 1 root root 0 Sep 5 16:43 /share/data1/data1.test

2. Add mount entries for both volumes to /etc/fstab. Edit and save the file on all cluster
nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 defaults,acl,noatime,nodiratime,quota=off 0 0
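Before the next reboot, it is worth confirming that the new entries mount cleanly from
/etc/fstab. A quick check, run on one node at a time:
# umount /share/ctdb /share/data1
# mount -a -t gfs2
# mount -t gfs2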

4.9 Start CTDB/Samba


Start the CTDB daemon on all cluster nodes. The daemon can take up to a minute to
synchronize across all cluster nodes:
# service ctdb start
Starting ctdbd service: [ OK ]

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 UNHEALTHY (THIS NODE)
pnn:1 10.0.0.102 UNHEALTHY
pnn:2 10.0.0.103 UNHEALTHY
Generation:122968421
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:2

...approximately 1 minute later...

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1330161966
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:2
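Once all nodes report OK, the ctdb ip command lists each address from
/etc/ctdb/public_addresses together with the node currently hosting it; the local address
should also be visible on the bond0 interface:
# ctdb ip
# ip addr show bond0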

4.10 Verify File Share


1. Verify the cluster, Samba and ctdb status:
# clustat
Cluster Status for samba-cluster @ Wed Sep 5 17:37:33 2012
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# smbstatus

Samba version 3.5.10-125.el6


PID Username Group Machine
-------------------------------------------------------------------

Service pid machine Connected at


-------------------------------------------------------

No locked files

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1330161966
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0

2. Verify the file share is available from a client. For high availability, select one of the
hostnames or IP addresses associated with the transferrable IP addresses (IPAT)
specified within the /etc/ctdb/public_addresses file:

smb-srv1-ctdb -> 10.16.142.111
smb-srv2-ctdb -> 10.16.142.112
smb-srv3-ctdb -> 10.16.142.113

Alternatively, round-robin DNS load balancing can be configured to automatically cycle
through the transferrable IP addresses by binding them to a single DNS hostname.
This reference architecture is configured with round-robin DNS using the following
DNS zone file entries for the hostname smb-srv:
;
; Clustered Samba Servers
;
smb-srv IN A 10.16.142.111
smb-srv IN A 10.16.142.112
smb-srv IN A 10.16.142.113
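With these records in place, lookups of the smb-srv hostname return all three A records and
DNS rotates their order, spreading client connections across the nodes. A quick check from
any host with the bind-utils package installed:
# host smb-srv.cloud.lab.eng.bos.redhat.com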

In the examples below, the round-robin DNS hostname (smb-srv) is specified:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data1
Enter root's password: *******
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
smb: \> ls
. D 0 Wed Sep 5 16:43:54 2012
.. D 0 Wed Sep 5 16:10:52 2012
data1.test 0 Wed Sep 5 16:43:54 2012

46075 blocks of size 4194304. 45978 blocks available
smb: \>

# mount -t cifs //smb-srv.cloud.lab.eng.bos.redhat.com/data1 \
/mnt/data1 -o username=root
Password for root@//smb-srv.cloud.lab.eng.bos.redhat.com/data1: *******

# ls -la /mnt/data1
total 12
drwxr-xr-x. 2 root root 0 Sep 5 17:49 .
drwxr-xr-x. 3 root root 4096 Sep 5 17:53 ..
-rw-r--r--. 1 root root 0 Sep 5 16:43 data1.test

Verify the client connection to the file share from any cluster node by running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------
1:15522 root root bandit (::ffff:10.16.187.21)

Service pid machine Connected at


-------------------------------------------------------
data1 1:15522 bandit Wed Sep 5 17:40:39 2012

No locked files

The node number prefix of the pid (1:) indicates which cluster node (smb-srv2) is serving the
file share (data1) to the client (bandit). The smbstatus utility can be run from any cluster node.
This completes the deployment and configuration of clustered Samba file shares. The next
section details the common use cases in managing a Red Hat Enterprise Linux 6 Samba
Cluster.

5 Clustered Samba Management
The previous sections of this reference architecture detailed the configuration tasks for
deploying a highly available clustered Samba file share on Red Hat Enterprise Linux.
The following sections focus on the most common cluster management tasks.

5.1 Starting, Shutting down, Restarting Cluster Nodes


When starting, shutting down or restarting clustered Samba nodes, it is important to follow the
correct sequence of steps. Figure 5.1 depicts the proper ordering of components during
cluster node startup and shutdown:

Figure 5.1: Components - Startup, Shutdown

Failure to follow the proper sequence can result in the cluster not forming properly and
the clustered Samba file shares not becoming available. For this reason, it is inadvisable
to restart all cluster nodes at once. In cases where all cluster nodes need to be restarted
(e.g. - recovery from an unexpected power outage), the recommended method is to
reboot each node individually, one at a time. Only after the node has been fully started,
the cluster formed and the clustered Samba resources properly started should the next
node be rebooted.

Table 5.1-1: Cluster Component Startup and Shutdown depicts the proper command
sequences to follow during startup and shutdown:
Startup Sequence           Shutdown Sequence
# service cman start       # ctdb stop
# clustat                  # ctdb status
                           # smbstatus
# service clvmd start      # umount -a -t gfs2
# service clvmd status     # mount
# mount -a -t gfs2         # service clvmd stop
# mount                    # service clvmd status
# ctdb start               # service cman stop
# ctdb status              # clustat
# smbstatus

Table 5.1-1: Cluster Component Startup and Shutdown

Clustered Samba components (CLVM, GFS2, CTDB) are dependent on the underlying HA
cluster services (CMAN). For this reason, it is essential to allow the cluster to form properly
before the clustered Samba services are started.

Table 5.1-2: CTDB Administrative Commands below, provides a summary of the most
commonly used ctdb command options:
Command Option        Description
status                Display current CTDB cluster status
stop                  Administratively stop cluster node
                      (IP address is not relocated to another node)
continue              Re-start administratively stopped node
uptime                Display CTDB daemon uptime for node
listnodes             List IP addresses of all cluster nodes
ip                    List public addresses, node servicing the address
ipinfo {IP address}   Provide detail about specified public address
statistics            Display CTDB daemon statistics
disable               Administratively disable cluster node
                      (IP address is relocated to another node)
enable                Administratively re-enable cluster node
shutdown              Stop the CTDB daemon on a node
recover               Trigger a cluster recovery
reloadnodes           Reload the nodes file on all nodes

Table 5.1-2: CTDB Administrative Commands
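For planned maintenance on a single node, the disable and enable options from Table 5.1-2
can be paired so that the node's public address is relocated to a surviving node before work
begins and reclaimed afterwards. Run on the node in question; ctdb status on any node
reports it as DISABLED until it is re-enabled:
# ctdb disable
# ctdb enable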

5.2 Adding Clustered Samba Nodes
Prior to adding a new node to an existing Samba cluster, the system must be deployed
and configured as a member of an HA cluster as outlined in the following sections:
• 4 Clustered Samba Deployment
• Appendix G: Adding/Removing HA Nodes
The new node must be configured as an HA cluster node member. The new node must also
be fully configured with CTDB, GFS2, CLVM and CTDB/Samba as detailed in the previous
sections. Do not proceed until the previous tasks have been completed.
In the steps below, a new node (smb-srv3) is added to an existing two-node Samba cluster.

1. Verify the cluster and CTDB status from an existing Samba cluster node. Ensure
that all nodes are up and running, the cluster status is Online and the CTDB status
is OK. Do not add a node to the cluster unless the cluster is fully formed and in
a healthy state:
# clustat
Cluster Status for samba-cluster @ Tue Oct 16 11:01:38 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# ctdb status
Number of nodes:2
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:1768794705
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

2. On the first cluster node (smb-srv1), copy the /etc/sysconfig/ctdb file to the new cluster
node (smb-srv3) being added:
# scp -p /etc/sysconfig/ctdb smb-srv3:/etc/sysconfig/ctdb

3. On the first cluster node (smb-srv1), edit the /etc/ctdb/nodes file and add an entry
(10.0.0.103) for the new node being added (smb-srv3). The node entry must be added
to the end of the file:
10.0.0.101
10.0.0.102
10.0.0.103

Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node being
added:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes

4. On the first cluster node (smb-srv1), edit the /etc/ctdb/public_addresses file and add
an entry (10.16.142.113/21 bond0) for the new node being added (smb-srv3):
10.16.142.111/21 bond0
10.16.142.112/21 bond0
10.16.142.113/21 bond0

5. Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node
being added:
# scp -p /etc/ctdb/public_addresses smb-srv2:/etc/ctdb/public_addresses
# scp -p /etc/ctdb/public_addresses smb-srv3:/etc/ctdb/public_addresses

6. On the new cluster node being added (smb-srv3), restart the CTDB service:
# service ctdb restart
Shutting down ctdbd service: [ OK ]
Starting ctdbd service: [ OK ]

7. Run 'ctdb reloadnodes' to force all nodes to reload the /etc/ctdb/nodes file.
Run this from one of the existing Samba cluster nodes:
# ctdb reloadnodes
2012/10/18 11:03:19.797702 [18291]: Reloading nodes file on node 1
2012/10/18 11:03:19.798053 [18291]: Reloading nodes file on node 0

8. On the new cluster node being added (smb-srv3), restart the CTDB service:
# service ctdb restart
Shutting down ctdbd service: [ OK ]
Starting ctdbd service: [ OK ]

9. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Thu Oct 18 11:07:02 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online, Local

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK (THIS NODE)
Generation:822655699
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:1

Note that both the HA cluster and the CTDB/Samba cluster now contain three members.

5.3 Removing Clustered Samba Nodes
Clustered Samba nodes are removed by modifying the CTDB and Samba configuration
files. Two methods are available: on-line and off-line. On-line removal allows member
nodes to remain available and to continue providing file sharing during the removal of
Samba cluster nodes. By design, entries for a removed node remain in the internal CTDB
database. Off-line removal requires a full shutdown of the cluster and effectively rebuilds
the cluster from the CTDB level up without including the removed node. Using this method,
removed nodes are no longer stored in the internal CTDB database.

5.3.1 On-line Node Removal (Method 1)
1. Verify the cluster and CTDB status from any node. Ensure that all nodes are up and
running, the HA cluster status is Online and the CTDB status is OK. Do not remove
a node from the cluster unless the cluster is fully formed and in a healthy state:
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:19:40 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0

2. Verify whether any clients have active sessions to the node being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------

Service pid machine Connected at


-------------------------------------------------------

No locked files

If any active sessions are attached to the node being removed, notify the clients to detach
from the file share before proceeding. The smbstatus utility can be run from any cluster
node.

3. On the first cluster node (smb-srv1), edit the /etc/ctdb/nodes file and comment out the
entry (10.0.0.103) for the node being removed (smb-srv3):
10.0.0.101
10.0.0.102
#10.0.0.103

Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node being
removed:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes

4. Run 'ctdb reloadnodes' to force all nodes to reload the /etc/ctdb/nodes file. Run this
from a cluster node not being removed:
# ctdb reloadnodes
2012/10/05 18:31:50.213041 [15865]: Reloading nodes file on node 1
2012/10/05 18:31:50.213359 [15865]: Reloading nodes file on node 0

5. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:34:16 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Offline

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:82327558
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1

Note that the HA cluster still contains three members. The internal CTDB database of the
Samba cluster continues to report three nodes (Number of nodes: 3) but has successfully
removed node smb-srv3 as confirmed by the cluster size of two (Size: 2).

6. On the node to be removed (smb-srv3), the CTDB service is automatically stopped.
Unmount the CLVM volumes and stop the CMAN service:
# umount -a -t gfs2
# mount -t gfs2

# service clvmd stop


Deactivating clustered VG(s):
0 logical volume(s) in volume group "SMB-DATA2-VG" now active
0 logical volume(s) in volume group "SMB-CTDB-VG" now active
0 logical volume(s) in volume group "SMB-DATA1-VG" now active
[ OK ]
Signaling clvmd to exit [ OK ]
clvmd terminated

# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:37:49 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online, Local

# service cman stop


Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown: [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]

# clustat
Could not connect to CMAN: No such file or directory

# shutdown -h now

This completes the removal of a clustered Samba node using the on-line method.

If round-robin DNS was deployed, the IP address of the decommissioned node should be
removed from the DNS zone file and the /etc/ctdb/public_addresses file on the remaining
cluster nodes (smb-srv1, smb-srv2). The node can now be removed from the HA cluster
as outlined in Appendix G: Adding/Removing HA Nodes.

5.3.2 Off-line Node Removal (Method 2)
1. Verify the cluster and CTDB status from any node. Ensure that all nodes are up and
running, the HA cluster status is Online and the CTDB status is OK. Do not
remove a node from the cluster unless the cluster is fully formed and in a
healthy state:
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:19:40 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0

2. Verify whether any clients have active sessions to the node being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------

Service pid machine Connected at


-------------------------------------------------------

No locked files

If any active sessions are attached to the node being removed, notify the clients to detach
from the file share before proceeding. The smbstatus utility can be run from any cluster
node.

3. On the first cluster node (smb-srv1) create backup copies of the ctdb
(/etc/sysconfig/ctdb), nodes (/etc/ctdb/nodes) and public_addresses
(/etc/ctdb/public_addresses) files:
# mkdir -p /var/tmp/ctdb-backups
# cp -p /etc/sysconfig/ctdb /var/tmp/ctdb-backups/ctdb
# cp -p /etc/ctdb/nodes /var/tmp/ctdb-backups/nodes
# cp -p /etc/ctdb/public_addresses /var/tmp/ctdb-backups/public_addresses

4. On all cluster nodes (smb-srv1, smb-srv2, smb-srv3) stop the CTDB service,
unmount the CLVM volumes and stop the CLVMD service:
# service ctdb stop
Shutting down ctdbd service: [ OK ]

# umount -a -t gfs2

# service clvmd stop


Deactivating clustered VG(s):
0 logical volume(s) in volume group "SMB-DATA2-VG" now active
0 logical volume(s) in volume group "SMB-CTDB-VG" now active
0 logical volume(s) in volume group "SMB-DATA1-VG" now active
[ OK ]
Signaling clvmd to exit [ OK ]
clvmd terminated [ OK ]

5. On all cluster nodes (smb-srv1, smb-srv2, smb-srv3) remove the CTDB package:
# yum -y remove ctdb
...output abbreviated...

On the remaining cluster nodes (smb-srv1, smb-srv2), re-install the CTDB package:
# yum -y install ctdb
...output abbreviated...

6. On the first cluster node (smb-srv1), restore the saved ctdb (/etc/sysconfig/ctdb),
nodes (/etc/ctdb/nodes) and public_addresses (/etc/ctdb/public_addresses) files:
# cp -p /var/tmp/ctdb-backups/ctdb /etc/sysconfig/ctdb
# cp -p /var/tmp/ctdb-backups/nodes /etc/ctdb/nodes
# cp -p /var/tmp/ctdb-backups/public_addresses /etc/ctdb/public_addresses
After the edits described below, the three files should appear as follows:

/etc/sysconfig/ctdb
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes

/etc/ctdb/nodes
10.0.0.101
10.0.0.102

/etc/ctdb/public_addresses
10.16.142.111/21 bond0
10.16.142.112/21 bond0

Edit the nodes (/etc/ctdb/nodes) and public_addresses (/etc/ctdb/public_addresses) files
and delete the entry for the removed cluster node (smb-srv3) as per the above. Copy the
files to the remaining cluster node (smb-srv2):
# scp -p /etc/sysconfig/ctdb smb-srv2:/etc/sysconfig/ctdb
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/public_addresses smb-srv2:/etc/ctdb/public_addresses

7. On the remaining cluster nodes (smb-srv1, smb-srv2), start the CLVMD service,
refresh the device cache then restart CLVMD. Start the CTDB service on the nodes
after the CLVM volumes are mounted:
# service clvmd start
Starting clvmd:
Activating VG(s): 1 logical volume(s) in volume group "SMB-DATA2-VG" now
active
clvmd not running on node smb-srv3-ci
1 logical volume(s) in volume group "SMB-DATA1-VG" now active
clvmd not running on node smb-srv3-ci
1 logical volume(s) in volume group "SMB-CTDB-VG" now active
clvmd not running on node smb-srv3-ci
3 logical volume(s) in volume group "vg_smbsrv2" now active
clvmd not running on node smb-srv3-ci
[ OK ]
# /usr/sbin/clvmd -R
clvmd not running on node smb-srv3-ci

# service clvmd restart
Restarting clvmd: [ OK ]

# mount -a -t gfs2

# service ctdb start
Starting ctdbd service: [ OK ]

8. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Thu Oct 18 15:41:04 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local

smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# ctdb status
Number of nodes:2
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:1113017468
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

Note that the HA cluster still contains three members but the CTDB/Samba cluster no
longer has entries for the removed node in the internal CTDB database.

This completes the removal of a clustered Samba node using the off-line method.

If round-robin DNS was deployed, the IP address of the decommissioned node should be
removed from the DNS zone file and the /etc/ctdb/public_addresses file on the remaining
cluster nodes (smb-srv1, smb-srv2). The node can now be removed from the HA cluster
as outlined in Appendix G: Adding/Removing HA Nodes.

5.4 Adding File Shares
In the steps below, a new file share (data2) is defined on a previously created and mounted
CLVM volume. Prior to adding the file share, a Fibre Channel volume (smb-srv-data-02) is
provisioned and configured as outlined in the following sections:

• Appendix B: Fibre Channel Storage Provisioning


• 4.4 Configure Storage
• 4.4.1 Configure Multipathing
• 4.4.2 Create Cluster Logical Volumes
• 4.4.3 Create GFS2 Filesystems
• 4.4.4 Configure SELinux Security Parameters

The physical volume (/dev/mapper/smb-srv-data-02) is configured in a new volume
group (SMB-DATA2-VG). A logical volume (smb-data-lvol1) is created within the
new volume group, formatted with the GFS2 filesystem and mounted as /share/data2.
Table 5.4-1: Volume Configuration - Data2 File Share provides a summary of the
new file share volume:

CLVM Volume Configuration - Data2

Data2 Volume
  Type              Fibrechannel
  Physical Disk     smb-srv-data-02
  Physical Volume   /dev/mapper/smb-srv-data-02
  Volume Group      SMB-DATA2-VG
  Logical Volume    smb-data-lvol1

File System
  Volume            CLVM
  Type              GFS2
  Mount point       /share/data2
  Device            /dev/SMB-DATA2-VG/smb-data-lvol1

Table 5.4-1: Volume Configuration - Data2 File Share

Do not proceed until the previous tasks have been completed and the CLVM volume
configured.

1. On the first cluster node (smb-srv1), edit the Samba configuration file
(/etc/samba/smb.conf), append the following share definition and save the file:
[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes

Test the file using the testparm utility:
# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[data1]"
Processing section "[data2]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions

[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
guest ok = Yes

[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No

[data2]
comment = Clustered Samba Share 2
path = /share/data2
read only = No

Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf

2. Mount the new Data2 volume and verify it can be written to. Perform this step on all
cluster nodes:
# mount -t gfs2 /dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2

# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_smbsrv1-lv_root
51606140 2943380 46041320 7% /
tmpfs 24708156 41444 24666712 1% /dev/shm
/dev/mapper/3600508b1001030374142393845301000p1
495844 98392 371852 21% /boot
/dev/mapper/vg_smbsrv1-lv_home
64583508 185180 61117640 1% /home
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1
1888032 397164 1490868 22% /share/ctdb
/dev/mapper/SMB--DATA--VG-smb--data--lvol1
188723456 397228 188326228 1% /share/data1
/dev/mapper/SMB--DATA2--VG-smb--data--lvol1

188723456 397224 188326232 1% /share/data2

# touch /share/data2/data2.test
# ls -la /share/data2/data2.test
-rw-r--r--. 1 root root 0 Oct 1 13:41 /share/data2/data2.test

3. Add a mount entry for the new volume to /etc/fstab - the new entry is the last one shown below.
Edit and save the file on all cluster nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 defaults,acl,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2 gfs2 defaults,acl,noatime,nodiratime,quota=off 0 0

4. Restart CTDB/Samba on all cluster nodes, one node at a time. The
daemons can take up to a minute to synchronize across all cluster nodes:
# ctdb stop
# ctdb continue

5. Verify the cluster, Samba and ctdb status from any cluster node:
# clustat
Cluster Status for samba-cluster @ Mon Oct 1 17:24:09 2012
Member Status: Quorate

Member Name ID Status


------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online

# smbstatus

Samba version 3.5.10-125.el6


PID Username Group Machine
-------------------------------------------------------------------

Service pid machine Connected at


-------------------------------------------------------

No locked files

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1457203730

Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0

6. Verify the file share is available from a client. If round-robin DNS has been
configured then specify that hostname to automatically cycle through the
transferrable IP addresses. In the examples below, the round-robin DNS
hostname (smb-srv) is used:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data2
Enter root's password: *******
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
smb: \> ls
. D 0 Mon Oct 1 13:41:34 2012
.. D 0 Mon Oct 1 13:40:26 2012
data2.test 0 Mon Oct 1 13:41:34 2012

46075 blocks of size 4194304. 45978 blocks available


smb: \>

# mount -t cifs //smb-srv.cloud.lab.eng.bos.redhat.com/data2 \


/mnt/data2 -o username=root
Password for root@//smb-srv.cloud.lab.eng.bos.redhat.com/data2: *******

# ls -la /mnt/data2
total 12
drwxr-xr-x. 2 root root 0 Oct 1 13:41 .
drwxr-xr-x. 4 root root 4096 Oct 1 17:52 ..
-rw-r--r--. 1 root root 0 Oct 1 13:41 data2.test

This completes the deployment and configuration of a new clustered Samba file share.

5.5 Removing File Shares
In the steps below, an existing file share (data2) is removed from a previously created,
mounted CLVM volume. After the file share is unmounted and the Samba configuration
changes are propagated across all cluster nodes, the CLVM and Fibre Channel volumes
(smb-srv-data-02) can be removed.

The physical volume (/dev/mapper/smb-srv-data-02) is configured in an existing volume
group (SMB-DATA2-VG). The logical volume (smb-data-lvol1) within this volume group is
formatted with the GFS2 filesystem and mounted as /share/data2.
Table 5.5-1: Volume Configuration – Data2 File Share depicts the details of the CLVM
volume:

CLVM Volume Configuration - Data2

Data2 Volume
    Type:             Fibre Channel
    Physical Disk:    smb-srv-data-02
    Physical Volume:  /dev/mapper/smb-srv-data-02
    Volume Group:     SMB-DATA2-VG
    Logical Volume:   smb-data-lvol1
File System
    Volume:           CLVM
    Type:             GFS2
    Mount point:      /share/data2
    Device:           /dev/SMB-DATA2-VG/smb-data-lvol1

Table 5.5-1: Volume Configuration – Data2 File Share

Do not proceed until the filesystem contents have been archived or migrated to new target
locations.

1. Verify whether any clients have active sessions to the file share being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6

PID     Username     Group        Machine
-------------------------------------------------------------------

Service     pid     machine      Connected at
-------------------------------------------------------

No locked files

If any active sessions are attached to the file share being removed, notify the clients to
detach from the file share before proceeding. The smbstatus utility can be run from any
cluster node.
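
If a client session cannot be ended cooperatively, the share can be forcibly closed
from the serving node with smbcontrol. This is a hedged example rather than a step
from the original procedure; repeat it on each node with active connections:

# smbcontrol smbd close-share data2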

2. Unmount the file share. Perform this step on each cluster node:
# mount -t gfs2
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1 on /share/ctdb type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0)
/dev/mapper/SMB--DATA2--VG-smb--data--lvol1 on /share/data2 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)
/dev/mapper/SMB--DATA1--VG-smb--data--lvol1 on /share/data1 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)

# umount /share/data2
# mount -t gfs2
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1 on /share/ctdb type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0)
/dev/mapper/SMB--DATA1--VG-smb--data--lvol1 on /share/data1 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)

3. Remove or comment out the mount entry for the file share from /etc/fstab - the entry
is highlighted below. Edit and save the change on all cluster nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 \
defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 \
defaults,acl,noatime,nodiratime,quota=off 0 0
#
# Removed from service - 2012-10-04
#
#/dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2 gfs2 \
#defaults,acl,noatime,nodiratime,quota=off 0 0

4. On the first cluster node (smb-srv1), edit the Samba configuration file
(/etc/samba/smb.conf) and comment out or remove the existing file share entry:
#
# Removed from service – 2012-10-04
#
#[data2]
# comment = Clustered Samba Share 2
# public = yes
# path = /share/data2
# writable = yes

Test the configuration file using the testparm utility:
# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[data1]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions

[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
guest ok = Yes

[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No

Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf

5. Restart CTDB/Samba on all cluster nodes individually, one node at a time. The
daemons can take up to a minute to synchronize across all cluster nodes:
# ctdb stop
# ctdb continue

6. Verify the cluster, Samba and ctdb status from any cluster node:
# clustat
Cluster Status for samba-cluster @ Thu Oct 4 17:06:03 2012
Member Status: Quorate

Member Name                             ID   Status
------ ----                             ---- ------
smb-srv1-ci                                1 Online, Local
smb-srv2-ci                                2 Online
smb-srv3-ci                                3 Online

# smbstatus
Samba version 3.5.10-125.el6

PID     Username     Group        Machine
-------------------------------------------------------------------

Service     pid     machine      Connected at
-------------------------------------------------------

No locked files

# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0

Attempts to connect to the removed file share return the following error:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data2
Enter root's password:
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
tree connect failed: NT_STATUS_BAD_NETWORK_NAME

This completes the removal of an existing clustered Samba file share. The CLVM and
fibrechannel volumes can now be removed.

6 Windows Active Directory Integration
In this section, the tasks necessary for integrating Clustered Samba nodes into an existing
Windows Active Directory domain are detailed. Prior to proceeding, each of the following
components must first be configured:
• Windows Server 2008 R2 with Active Directory Domain Services
• Red Hat Enterprise Linux 6 servers clustered with CTDB/Samba

6.1 Overview
This configuration is for environments looking to integrate one or more Red Hat Enterprise
Linux 6 systems into an Active Directory domain or forest with the capability to customize
user configurations. Login access and file sharing services are provided.

6.1.1 Configuration Summary
Configuration Summary: Samba/Winbind – idmap_ad

Components
    RHEL 6:                   Samba/Winbind
    Windows Server 2008 R2:   Active Directory, Identity Management for UNIX (IMU)
Authentication (pam)
    Winbind (pam_winbind)
ID Tracking/Name Resolution (nss)
    Winbind (nss_winbind)
ID Mapping ("back-end")
    Winbind (idmap_ad)
Configuration Files
    /etc/krb5.conf, /etc/samba/smb.conf,
    /etc/pam.d/password-auth, /etc/pam.d/system-auth
Advantages
    • SID mappings homogeneous across multiple RHEL servers
    • Customizable user configurations (shell, home directory), configured within AD
    • Centralized user account management
    • SFU, RFC 2307 compatible mappings
Disadvantages
    • Requires additional configuration work to support a forest of AD domains
      or multiple domain trees
    • Requires additional user management tasks – user/group ID attributes
      must be set within AD
Notes
    • Requires the ability to modify user attributes within AD (via IMU)
Table 6.1.1: Configuration Summary

6.1.2 Cluster Configuration with Active Directory Integration
Figure 6.1.2: Clustered Samba with Active Directory Integration provides an overview
of the clustered Samba systems in relation to Windows Active Directory:

Figure 6.1.2: Clustered Samba with Active Directory Integration

6.1.3 Authentication and ID Components
Figure 6.1.3 depicts the Authentication, ID Tracking and ID Mapping components:

Figure 6.1.3: Authentication and ID Components

The Winbind idmap_ad backend maintains consistent user ID mappings across all clustered
Samba nodes. Users can log in and access file shares through any clustered Samba node
using existing Active Directory user accounts and authentication. Customization of user shell
and home directories within Windows Active Directory is also supported. Winbind idmap_ad
requires the Identity Management for UNIX (IMU) role to be enabled on the Windows Active
Directory domain.

6.2 Integration Tasks
Integrating Red Hat Enterprise Linux 6 Samba cluster nodes into an Active Directory domain
involves the following series of steps:
1. Synchronize Time Service
2. Configure DNS
3. Update Hosts File
4. Install/Configure Kerberos Client
5. Install oddjob-mkhomedir
6. Configure Authentication
7. Verify/Test Active Directory
8. Modify Samba Configuration
9. Verification of Services
10. Configure CTDB to Manage Winbind (optional)
The following provides a step-by-step guide to the integration process.

6.2.1 Synchronize Time Service
It is essential that the time service on each clustered Samba node and the Windows Active
Directory server are synchronized, otherwise Kerberos authentication may fail due to clock
skew. In environments where time services are not reliable, best practice is to configure the
clustered Samba nodes to synchronize time from the Windows Server 2008 R2 server.
1. On each clustered Samba node, edit the file /etc/ntp.conf so the time is synchronized
from a known, reliable time service:
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
server ns1.bos.redhat.com
server 10.5.26.10

2. Activate the change on each clustered Samba node by stopping the ntp daemon,
updating the time, then starting the ntp daemon. Verify the change on both servers:
Clustered Samba node:
# service ntpd stop
Shutting down ntpd: [ OK ]
# ntpdate 10.16.255.2
22 Mar 20:17:00 ntpdate[14784]: adjust time server 10.16.255.2 offset
-0.002933 sec
# service ntpd start
Starting ntpd: [ OK ]

Windows Server 2008 R2 server:


C:\Users\Administrator> w32tm /query /status | find "Source"
Source: ns1.xxx.xxx.com
C:\Users\Administrator> w32tm /query /status | find "source"
Reference Id: 0x0A10FF02 (source IP: 10.nn.nnn.2)

3. Configure the ntpd daemon to start on server boot:
# chkconfig ntpd on
# chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

6.2.2 Configure DNS
Proper resolution of DNS hostnames from each clustered Samba node and the Windows
Active Directory server are essential. Improperly resolved hostnames are one of the leading
causes for integration failures. In environments where DNS lookups are not reliable, best
practice is to configure the clustered Samba nodes to perform DNS lookups from the
Windows Server 2008 R2 Active Directory server.

1. Edit the file /etc/resolv.conf on each clustered Samba node so that the domain name
and search list are specified using the fully qualified domain name (FQDN). The
nameserver IP addresses should be listed in preferred lookup order:
domain cloud.lab.eng.bos.redhat.com
search cloud.lab.eng.bos.redhat.com
nameserver 10.nn.nnn.100 # Windows server specified here
nameserver 10.nn.nnn.247 # Alternate server 1
nameserver 10.nn.nnn.2 # Alternate server 2

2. Similarly, the hostname on each clustered Samba node should be set to the FQDN.
Edit the file /etc/sysconfig/network and set the hostname to use the FQDN:
NETWORKING=yes
HOSTNAME=smb-srv1.cloud.lab.eng.bos.redhat.com
GATEWAY=10.16.255.2
Verify on each clustered Samba node by running the hostname utility:
# hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
Best practice is to create both forward and reverse lookup zones on the Windows Active
Directory server. For further detail, consult either the Windows Active Directory server
documentation or Appendix D: Active Directory Domain Configuration Summary in the Red
Hat Reference Architecture Integrating Red Hat Enterprise Linux 6 with Active Directory.
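
Forward and reverse lookups of the Active Directory server can be spot-checked from
each clustered Samba node. An illustrative check (it assumes the bind-utils package
providing dig is installed and that the zones resolve as configured):

# dig +short win-srv1.cloud.lab.eng.bos.redhat.com
10.16.142.100
# dig +short -x 10.16.142.100
win-srv1.cloud.lab.eng.bos.redhat.com.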

6.2.3 Update Hosts File
On each clustered Samba node, edit /etc/hosts and add an entry for the Windows Active
Directory server:
#
#----------------------------------#
# Windows Active Directory Server: #
#----------------------------------#
#
10.16.142.100 win-srv1 win-srv1.cloud.lab.eng.bos.redhat.com
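
Resolution through the hosts file (honoring the nsswitch ordering) can then be
confirmed with getent; the output shown is illustrative:

# getent hosts win-srv1
10.16.142.100   win-srv1 win-srv1.cloud.lab.eng.bos.redhat.com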

6.2.4 Install/Configure Kerberos Client
Best practice is to install and configure the Kerberos client (krb5-workstation) to ensure
Kerberos is able to properly authenticate to Active Directory on the Windows Server 2008 R2
server. This step is optional but highly recommended as it is useful for troubleshooting
Kerberos authentication issues. Perform the steps below on each clustered Samba node.

1. Verify the Kerberos client is installed:


# yum list installed | grep krb5
krb5-libs.x86_64 1.9-33.el6_3.2 @rhel-6-server-rpms
krb5-workstation.x86_64 1.9-33.el6_3.2 @rhel-6-server-rpms
pam_krb5.x86_64 2.3.11-9.el6 @anaconda-
RedHatEnterpriseLinux-201206132210.x86_64/6.3

2. If not, install it as follows:


# yum -y install krb5-workstation

...output abbreviated...

Installed:
krb5-workstation.x86_64 1.9-33.el6_3.2

Complete!
If Kerberos has not been previously configured, modify the Kerberos configuration file
(/etc/krb5.conf) by adding entries for the new Kerberos and Active Directory realms. Note the
differences in the Kerberos [realms] and Active Directory [domain_realm] realm entries.

1. Create a safety copy of the Kerberos configuration file:


# cp -p /etc/krb5.conf /etc/krb5.conf.orig

2. Edit the file /etc/krb5.conf as follows – changes are highlighted in bold:


[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM = {
kdc = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
admin_server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
}

[domain_realm]
.refarch-ad.cloud.lab.eng.bos.redhat.com = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
refarch-ad.cloud.lab.eng.bos.redhat.com = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM

Under Kerberos, [realms] defines the Kerberos KDC and admin servers, while [domain_realm]
maps the Active Directory DNS domain to the Kerberos realm. Both point to the Active
Directory REFARCH-AD domain.

3. Verify the Kerberos configuration. First, clear out any existing tickets:
# kdestroy
# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)

4. Obtain a new Kerberos ticket:


# kinit administrator@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
Password for administrator@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM: ********

5. Verify a new Kerberos ticket was granted:


# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: administrator@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM

Valid starting     Expires            Service principal
10/10/12 15:14:19  10/11/12 01:14:22  krbtgt/REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 15:14:19

At this point Kerberos is fully functional and the client utilities (kinit, klist, kdestroy)
can be used for testing and verifying Kerberos functionality.

6.2.5 Install oddjob-mkhomedir
Install the oddjob-mkhomedir package to ensure that user home directories are created
with the proper SELinux file and directory contexts. Perform this step on each clustered
Samba node:
# yum install oddjob-mkhomedir.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security,
subscription-manager
Updating certificate-based repositories.

Running Transaction
Installing : oddjob-mkhomedir-0.30-5.el6.x86_64
1/1
Installed products updated.

Installed:
oddjob-mkhomedir.x86_64 0:0.30-5.el6

...output abbreviated...

Complete!
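
The oddjobd daemon is started automatically when authentication is configured in the
next section; as a precaution it can also be enabled at boot (a suggested addition,
not an explicit step in the original procedure):

# chkconfig oddjobd on
# chkconfig --list oddjobd
oddjobd 0:off 1:off 2:on 3:on 4:on 5:on 6:off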

6.2.6 Configure Authentication
The system-config-authentication tool simplifies configuring the Samba,
Kerberos, security and authentication files for Active Directory integration. Invoke
the tool as follows:
# system-config-authentication

Figure 6.2.6-1: User Account Database

On the Identity & Authentication tab, select the User Account Database drop-down
then select Winbind.

A new set of fields is displayed. Selecting the Winbind option configures the system to
connect to a Windows Active Directory domain. User information from a domain can then
be accessed, and the following server authentication options can be configured:

• Winbind Domain: Windows Active Directory domain


• Security Model: The Samba client mode of operation. The drop-down list allows
selection of the following options:
ads - This mode instructs Samba to act as a domain member in an Active Directory
Server (ADS) realm. To operate in this mode, the krb5-server package
must be installed, and Kerberos must be configured properly.
domain - In this mode, Samba attempts to validate the username/password by
authenticating it through a Windows Active Directory domain server,
similar to how a Windows Server would.
server - In this mode, Samba attempts to validate the username/password by
authenticating it through another SMB server. If the attempt fails, the user
mode takes effect instead.
user - This is the default mode. With this level of security, a client must first log in
with a valid username and password. Encrypted passwords can also be
used in this security mode.
• Winbind ADS Realm: When the ads Security Model is selected, this allows you to
specify the ADS Realm the Samba server should act as a domain member of.
• Winbind Domain Controllers: Use this option to specify which domain server winbind
should use.
• Template Shell: When filling out the user information for a Windows user, the
winbindd daemon uses the value chosen here to specify the login shell for that user.
• Allow offline login: By checking this option, authentication information is stored in a
local cache. This information is then used when a user attempts to authenticate while
offline.

Populate the fields as follows:

User Account Database:       Winbind
Winbind Domain:              REFARCH-AD
Security Model:              ads
Winbind ADS Realm:           REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
Winbind Domain Controllers:  WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
Template Shell:              /sbin/false

Figure 6.2.6-2: User Account Configuration

Select the Advanced Options tab when done.

Under Other Authentication Options, select Create home directories on the first login.

Figure 6.2.6-3: Advanced Options

On the first successful login to Active Directory, the oddjobd daemon calls a method
to create a new home directory for a user.

Return to the Identity & Authentication tab and select Join Domain. An alert indicates the
need to save the configuration changes to disk before continuing:

Figure 6.2.6-4: Save Changes

Select Save. A new window prompts for the Domain administrator password:

Figure 6.2.6-5: Joining Winbind Domain

Select OK. The terminal window displays the status of the domain join:
[/usr/bin/net join -w REFARCH-AD -S WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM -U Administrator]
Enter Administrator's password:<...>

Using short domain name -- REFARCH-AD
Joined 'CSMB-SERVER' to realm 'refarch-ad.cloud.lab.eng.bos.redhat.com'
Not doing automatic DNS update in a clustered setup.

Select Apply. The terminal window indicates that Winbind and the oddjobd were started:
Starting Winbind services: [ OK ]
Starting oddjobd: [ OK ]

Perform the previous authentication configuration tasks on each of the clustered Samba
nodes before proceeding to the next section.
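
For scripted deployments, the command line authconfig utility can apply the equivalent
settings non-interactively on each node. The sketch below mirrors the GUI selections
above; treat the exact flags as assumptions and confirm them against authconfig --help
for the installed version:

# authconfig --enablewinbind --enablewinbindauth \
    --smbsecurity=ads \
    --smbworkgroup=REFARCH-AD \
    --smbrealm=REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM \
    --smbservers=WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM \
    --winbindtemplateshell=/sbin/false \
    --enablemkhomedir \
    --winbindjoin=Administrator \
    --update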

6.2.7 Verify/Test Active Directory
The join to the Active Directory domain is complete. Verify access by performing each of
the following tasks.

Test Connection to AD:


# net ads testjoin
Join is OK

List members in domain:


# wbinfo --domain-users
REFARCH-AD\administrator
REFARCH-AD\guest
REFARCH-AD\krbtgt

...output abbreviated...

REFARCH-AD\ad-user101
REFARCH-AD\ad-user102
REFARCH-AD\ad-user103

List groups in domain:


# wbinfo --domain-groups
REFARCH-AD\domain computers
REFARCH-AD\domain controllers
REFARCH-AD\schema admins
REFARCH-AD\enterprise admins
REFARCH-AD\cert publishers
REFARCH-AD\domain admins
REFARCH-AD\domain users

...output abbreviated...

REFARCH-AD\dnsadmins
REFARCH-AD\dnsupdateproxy
REFARCH-AD\rhel-users 

Note: If either of these fails to return all users or groups in the domain, the idmap UID
and GID upper boundaries in the Samba configuration file need to be increased and the
winbind and smb daemons restarted. These tasks are discussed in the next section.

6.2.8 Modify Samba Configuration
The previous sections configured Winbind by using the default backend to verify Active
Directory domain access. Next, the Samba configuration file is modified to use the
idmap_ad back-end and several other parameters are configured for convenience.
Table 6.2.8: Summary of Changes provides a summary of the configuration file
parameter changes:

Samba Configuration File Parameters

Parameter                                      Description
idmap uid = 10000-19999                        Set user ID range for the default backend (tdb)
idmap gid = 10000-19999                        Set group ID range for the default backend (tdb)
idmap config REFARCH-AD:backend = ad           Configure winbind to use the idmap_ad backend
idmap config REFARCH-AD:default = yes          Configure REFARCH-AD as the default domain
idmap config REFARCH-AD:range =                Set range for the idmap_ad backend
    10000000-19999999
idmap config REFARCH-AD:                       Enable support for RFC 2307 UNIX attributes
    schema_mode = rfc2307
winbind nss info = rfc2307                     Obtain user home directory and shell from AD
winbind enum users = no                        Disable enumeration of users
winbind enum groups = no                       Disable enumeration of groups
winbind separator = +                          Change the default separator from '\' to '+'
winbind use default domain = yes               Remove the need to specify the domain in commands
winbind nested groups = yes                    Enable nesting of groups in Active Directory
Table 6.2.8: Summary of Changes

Make a safety copy of the Samba configuration file:


# cp -p /etc/samba/smb.conf /etc/samba/smb.conf.back

Edit and save the Samba configuration file as follows – changes are highlighted in bold:
[global]
workgroup = REFARCH-AD
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
security = ads
idmap uid = 10000-19999
idmap gid = 10000-19999
idmap config REFARCH-AD:backend = ad
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:range = 10000000-19999999
idmap config REFARCH-AD:schema_mode = rfc2307
winbind nss info = rfc2307
winbind enum users = no
winbind enum groups = no
winbind separator = +
winbind use default domain = yes
winbind nested groups = yes

Test the new configuration file:


# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[data1]"
Loaded services file OK.
'winbind separator = +' might cause problems with group membership.
Server role: ROLE_DOMAIN_MEMBER
Press enter to see a dump of your service definitions

[global]
workgroup = REFARCH-AD
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
server string = Samba Server Version %v
security = ADS
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
idmap uid = 10000-19999
idmap gid = 10000-19999
winbind separator = +
winbind use default domain = Yes
winbind nss info = rfc2307
idmap config REFARCH-AD:schema_mode = rfc2307
idmap config REFARCH-AD:range = 10000000-19999999
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:backend = ad
guest ok = Yes

[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No

...output abbreviated...

Back up and clear out the existing Samba cache files - this requires the services to be stopped:
# service smb stop
Shutting down SMB services: [ OK ]
# service winbind stop
Shutting down Winbind services: [ OK ]

# tar -cvf /var/tmp/samba-cache-backup.tar /var/lib/samba
tar: Removing leading `/' from member names
/var/lib/samba/
/var/lib/samba/smb_krb5/
/var/lib/samba/smb_krb5/krb5.conf.REFARCH-AD

...output abbreviated...

/var/lib/samba/registry.tdb
/var/lib/samba/perfmon/
/var/lib/samba/winbindd_idmap.tdb

# ls -la /var/tmp/samba-cache-backup.tar
-rw-r--r--. 1 root root 512000 Oct 10 17:06 /var/tmp/samba-cache-backup.tar
# rm -f /var/lib/samba/*

Verify no Kerberos tickets are in use:


# kdestroy
kdestroy: No credentials cache found while destroying cache
# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)

Join the Active Directory domain:


# net join -S win-srv1 -U administrator
Enter administrator's password:
Using short domain name -- REFARCH-AD
Joined 'CSMB-SERVER' to realm 'refarch-ad.cloud.lab.eng.bos.redhat.com'
Not doing automatic DNS update in a clustered setup.

Test connection to the Active Directory domain:


# net ads testjoin
Join is OK

# net ads info


LDAP server: 10.16.142.100
LDAP server name: win-srv1.refarch-ad.cloud.lab.eng.bos.redhat.com
Realm: REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
Bind Path: dc=REFARCH-AD,dc=CLOUD,dc=LAB,dc=ENG,dc=BOS,dc=REDHAT,dc=COM
LDAP port: 389
Server time: Wed, 10 Oct 2012 17:09:50 EDT
KDC server: 10.16.142.100
Server time offset: 0

Start Winbind and Samba to activate the new configuration changes:


# service winbind start
Starting Winbind services: [ OK ]
# service winbind status
winbindd (pid 24416) is running...
# ps -aef | grep winbind
root 24416 1 0 17:12 ? 00:00:00 winbindd
root 24421 24416 0 17:12 ? 00:00:00 winbindd
root 24484 24416 0 17:12 ? 00:00:00 winbindd
root 24487 24416 0 17:12 ? 00:00:00 winbindd
root 24489 24416 0 17:12 ? 00:00:00 winbindd

# service smb start
Starting SMB services: [ OK ]
# service smb status
smbd (pid 24482) is running...
# ps -aef | grep smbd
root 24482 1 0 17:12 ? 00:00:00 smbd -D
root 24495 24482 0 17:12 ? 00:00:00 smbd -D

List members in domain:


# wbinfo --domain-users
CSMB-SERVER+root
CSMB-SERVER+test
CSMB-SERVER+smb-user1
administrator
guest
krbtgt

...output abbreviated...

ad-user101
ad-user102
ad-user103

List groups in domain:


# wbinfo --domain-groups
domain computers
domain controllers
schema admins
enterprise admins
cert publishers
domain admins
domain users

...output abbreviated...

dnsadmins
dnsupdateproxy
rhel-users 
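
The mapping for an individual account can also be inspected. With the idmap_ad backend,
the UID and GID returned should match the UNIX attributes set within Active Directory
via IMU; the output below is illustrative, based on the ranges configured above:

# wbinfo -i ad-user101
ad-user101:*:10000101:10000002::/home/REFARCH-AD/ad-user101:/bin/bash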

6.2.9 Verification of Services
Verify the services provided by performing the tasks outlined in the following sections:
1. Login Access
$ ssh ad-user101@smb-srv1
ad-user101@smb-srv1's password: **********
Creating home directory for ad-user101.

$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com

$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-
users),10001(BUILTIN+users)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ pwd
/home/REFARCH-AD/ad-user101
$ ls -ld
drwxr-xr-x. 4 ad-user101 rhel-users 4096 Oct 10 17:23 .

$ echo $SHELL
/bin/bash

Verify access from another Red Hat Enterprise Linux 6 system, using a different Active
Directory user account:
$ hostname
rhel-srv11.cloud.lab.eng.bos.redhat.com

$ ssh ad-user102@smb-srv1
ad-user102@smb-srv1's password:
Creating home directory for ad-user102.

$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com

$ id
uid=10000102(ad-user102) gid=10000002(rhel-users) groups=10000002(rhel-
users),10001(BUILTIN+users)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ pwd
/home/REFARCH-AD/ad-user102
$ ls -ld
drwxr-xr-x. 4 ad-user102 rhel-users 4096 Oct 10 17:27 .

$ echo $SHELL
/bin/bash
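
Name resolution through nss_winbind can be checked directly with getent; the fields
returned should match the id output above (illustrative output):

$ getent passwd ad-user102
ad-user102:*:10000102:10000002::/home/REFARCH-AD/ad-user102:/bin/bash
$ getent group rhel-users
rhel-users:*:10000002: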

2. File Share
Use the smbclient utility to determine what file shares are available on win-srv1:
$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-
users),10001(BUILTIN+users)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ kinit
Password for ad-user101@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM:**********
$ klist
Ticket cache: FILE:/tmp/krb5cc_10000101
Default principal: ad-user101@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM

Valid starting     Expires            Service principal
10/10/12 18:45:30  10/11/12 04:45:37  krbtgt/REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30
$ smbclient -L win-srv1 -k
OS=[Windows Server 2008 R2 Enterprise 7601 Service Pack 1] Server=[Windows Server 2008 R2 Enterprise 6.1]

Sharename       Type      Comment
---------       ----      -------
ADMIN$          Disk      Remote Admin
C$              Disk      Default share
IPC$            IPC       Remote IPC
NETLOGON        Disk      Logon server share
SYSVOL          Disk      Logon server share
Win-Data        Disk
OS=[Windows Server 2008 R2 Enterprise 7601 Service Pack 1] Server=[Windows Server 2008 R2 Enterprise 6.1]

Server               Comment
---------            -------

Workgroup            Master
---------            -------

Use the smbclient utility to view what files are available on the Win-Data file share:
$ smbclient //win-srv1/Win-Data -k
OS=[Windows Server 2008 R2 Enterprise 7601 Service Pack 1] Server=[Windows Server 2008 R2 Enterprise 6.1]
smb: \> showconnect
//win-srv1/Win-Data
smb: \> listconnect
0: server=win-srv1, share=Win-Data
smb: \> ls
. D 0 Wed Oct 10 18:35:44 2012
.. D 0 Wed Oct 10 18:35:44 2012
Win-Srv1.txt A 301 Wed Oct 10 18:38:07 2012
51097 blocks of size 1048576. 26294 blocks available
smb: \> quit
Note that new Kerberos tickets have been granted for use by the smbclient utility:
$ klist
Ticket cache: FILE:/tmp/krb5cc_10000101
Default principal: ad-user101@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM

Valid starting     Expires            Service principal
10/10/12 18:45:30  10/11/12 04:45:37  krbtgt/REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30
10/10/12 18:47:53  10/11/12 04:45:37  cifs/win-srv1@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30
10/10/12 18:47:53  10/11/12 04:45:37  cifs/win-srv1@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30
10/10/12 18:47:53  10/11/12 04:45:37  cifs/win-srv1@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30
10/10/12 18:47:53  10/11/12 04:45:37  cifs/win-srv1@REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
        renew until 10/17/12 18:45:30

Create a mount point, mount the file share locally on a cluster node and access a file:
# hostname
smb-srv1.cloud.lab.eng.bos.redhat.com

# mkdir /mnt/Win-Data
# mount -t cifs //win-srv1/Win-Data /mnt/Win-Data -o username=ad-user101
Password:

# df -k -t cifs
Filesystem 1K-blocks Used Available Use% Mounted on
//win-srv1/Win-Data 52324348 25399260 26925088 49% /mnt/Win-Data

# mount -t cifs
//win-srv1/Win-Data on /mnt/Win-Data type cifs (rw)

# su - ad-user101
$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-
users),10001(BUILTIN+users)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

$ ls -la /mnt/Win-Data
total 5
drwxr-xr-x. 1 root root 0 Oct 10 19:02 .
drwxr-xr-x. 3 root root 4096 Oct 10 18:57 ..
-rwxr-xr-x. 1 root root 302 Oct 10 19:03 Win-Srv1.txt

$ cat /mnt/Win-Data/Win-Srv1.txt
+-------------------------------------------------------+
+ This file is located on the Windows Server 2008 R2 +
+ server named 'win-srv1.cloud.lab.eng.bos.redhat.com' +
+ located in the Active Directory domain 'REFARCH-AD' +
+-------------------------------------------------------+

6.2.10 Configure CTDB Winbind Management (optional)
CTDB can be configured to manage the startup and shutdown of winbind. This step is optional
but highly recommended in environments where clustered Samba nodes are integrated with
Active Directory domains.

1. Edit and save the CTDB configuration file (/etc/sysconfig/ctdb) on the first cluster node
(smb-srv1). Enable the parameter CTDB_MANAGES_WINBIND as highlighted below:
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes

2. Copy the file to the other cluster nodes (smb-srv2, smb-srv3):


# scp -p /etc/sysconfig/ctdb smb-srv2:/etc/sysconfig/ctdb
# scp -p /etc/sysconfig/ctdb smb-srv3:/etc/sysconfig/ctdb

This change simplifies the management of Samba and Winbind by automatically starting
and stopping the smbd and winbindd daemons when the ctdb service is started or
stopped.
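
To confirm the change, disable the standalone winbind init script (CTDB now controls
the daemon) and restart ctdb on one node; winbindd should come up automatically. A
hedged verification sketch:

# chkconfig winbind off
# service ctdb restart
# ps -aef | grep winbindd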

This completes the process of integrating Red Hat Enterprise Linux 6 Samba cluster nodes
into an Active Directory domain. If there are multiple clustered Samba nodes to be
integrated, repeat the integration tasks for each system and verify the services provided.

www.redhat.com 86 refarch-feedback@redhat.com
7 Conclusion
This reference architecture details the deployment, configuration and management of highly
available file shares using clustered Samba on Red Hat Enterprise Linux 6. The most
common administration tasks are included - starting/stopping nodes, adding/removing nodes
and file shares. For environments interested in integrating Samba clusters into Windows
Active Directory domains, a separate section is provided.
The clustered Samba configuration detailed within can be deployed as presented here,
or customized to meet the specific requirements of individual environments.

Appendix A: References
Red Hat Enterprise Linux 6
1. Red Hat Enterprise Linux 6 Installation Guide
Installing Red Hat Enterprise Linux 6 for all architectures
Edition 1.0
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Installation_Guide/Red_Hat_Enterprise_Linux-6-Installation_Guide-en-US.pdf

2. Red Hat Enterprise Linux 6 Deployment Guide
Deployment, Configuration and Administration of Red Hat Enterprise Linux 6
Edition 3
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Deployment_Guide/Red_Hat_Enterprise_Linux-6-Deployment_Guide-en-US.pdf

3. Red Hat Enterprise Linux 6 DM Multipath
DM Multipath Configuration and Administration
Edition 1
http://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
DM_Multipath/Red_Hat_Enterprise_Linux-6-DM_Multipath-en-US.pdf
High Availability Add-On
4. Red Hat Enterprise Linux 6 High Availability Add-On Overview
Overview of the High Availability Add-On for Red Hat Enterprise Linux
Edition 2
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
High_Availability_Add-On_Overview/Red_Hat_Enterprise_Linux-6-High_Availability_Add-
On_Overview-en-US.pdf
5. Red Hat Enterprise Linux 6 Cluster Administration
Configuring and Managing the High Availability Add-On
Edition 0
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Cluster_Administration/Red_Hat_Enterprise_Linux-6-Cluster_Administration-en-US.pdf
Resilient Storage Add-On (GFS2, CLVM, CTDB)
6. Global File System 2
Edition 7
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Global_File_System_2/Red_Hat_Enterprise_Linux-6-Global_File_System_2-en-US.pdf
7. Red Hat Enterprise Linux 6 Logical Volume Manager Administration
LVM Administrator Guide
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Logical_Volume_Manager_Administration/Red_Hat_Enterprise_Linux-6-
Logical_Volume_Manager_Administration-en-US.pdf
8. CTDB Setup - Samba Wiki
http://wiki.samba.org/index.php/CTDB_Setup
9. RHEL Clustering - CTDB Support
https://access.redhat.com/knowledge/solutions/32264

Microsoft Windows Server 2008 R2
10. Install and Deploy Windows Server
August 6, 2009
http://technet.microsoft.com/en-us/library/dd283085.aspx
Active Directory
11. Active Directory Domain Services
April 18, 2008
http://technet.microsoft.com/en-us/library/cc770946.aspx

12. Active Directory Lightweight Directory Services
August 18, 2008
http://technet.microsoft.com/en-us/library/cc731868.aspx
Samba/Winbind
13. The Official Samba 3.5 HOWTO and Reference Guide
http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection
RHEL-Windows Active Directory Integration

14. “Integrating Red Hat Enterprise Linux 6 with Active Directory”
Red Hat Reference Architecture Series
Version 1.2
June 2012
https://www.redhat.com/resourcelibrary/reference-architectures/integrating-red-hat-enterprise-
linux-6-with-active-directory
15. “What steps do I need to follow to join a Red Hat Enterprise Linux Samba server to an
Active Directory domain in security = ADS mode?”
Red Hat Knowledge Article - 3049
http://access.redhat.com/knowledge/articles/DOC-3049

16. “How do I set up winbind on our Samba server to create users and groups from our
domain controller?”
Red Hat Knowledge Article - 4821
http://access.redhat.com/knowledge/articles/DOC-4821

17. “How do I configure Kerberos for Active Directory (AD) integration on Linux?”
Red Hat Knowledge Solution - 4734
http://access.redhat.com/knowledge/solutions/DOC-4734

18. “What changes do I need to make to nsswitch.conf for winbind to work?”
Red Hat Knowledge Article - 4761
http://access.redhat.com/knowledge/articles/DOC-4761

Appendix B: Fibre Channel Storage 
Provisioning
Two CLVM volumes are configured for use by the cluster nodes. Both volumes are created on
an HP StorageWorks MSA2324fc Fibre Channel storage array. The array contains a single
controller (Ports A1, A2) and an MSA70 expansion shelf providing a total of 48 physical
drives. The steps below describe how to provision a 2 GB volume (smb-srv-ctdb-01) and a
200 GB volume (smb-srv-data-01) within a new virtual disk (VD1) from the command line.

Step 1. Login to the MSA storage array


# ssh -Y -l manage ra-msa20

Step 2. View the available virtual disks (vdisk), physical disks and volumes
# show vdisk
# show disks
# show volumes

Step 3. Create a virtual disk to hold the volume


# create vdisk level raid6 disks 1.1-12 VD1

Step 4. Create volumes within the virtual disk for CTDB and Samba data
# create volume vdisk VD1 size 2GB access no-access lun 1 smb-srv-ctdb-01
# create volume vdisk VD1 size 200GB access no-access lun 2 smb-srv-data-01
# show volumes vdisk VD1
Vdisk Name            Size     Serial Number                    WR Policy  Cache Opt Read Ahead Size  Type     Class    Volume Description
----------------------------------------------------------------------------
VD1   smb-srv-ctdb-01 1999.9MB 00c0ffd7e69d0000d26a325001000000 write-back standard  Default          standard standard
VD1   smb-srv-data-01 199.9GB  00c0ffd7e69d0000f36a325001000000 write-back standard  Default          standard standard

<...output truncated...>

Step 5. View the current host connection mappings


# show hosts
Host ID Name Discovered Mapped Profile
------------------------------------------------------------
50060B0000C28628 kvm-srv1-host1 Yes Yes Standard
50060B0000C2862A kvm-srv1-host2 Yes Yes Standard
50060B0000C2863C Yes No Standard
50060B0000C2863E Yes No Standard
50060B0000C2862C Yes No Standard
50060B0000C2862E Yes No Standard
50060B0000C28636 Yes No Standard
50060B0000C28634 Yes No Standard

<...output truncated...>

Step 6. Identify what ports each host is using

smb-srv1:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c2862C
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c2862E
smb-srv2:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c28634
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c28636
smb-srv3:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c2863c
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c2863e

Note: If the Fibre Channel storage array has two controllers attached to the SAN fabric then
each host has four port connections instead of the two shown here.

Step 7. Assign host names to the host port IDs and verify


# set host-name id 50060B0000C2862C smb-srv1-host1 # Controller A Port 1
# set host-name id 50060B0000C2862E smb-srv1-host2 # Controller A Port 2
# set host-name id 50060B0000C28634 smb-srv2-host1 # Controller A Port 1
# set host-name id 50060B0000C28636 smb-srv2-host2 # Controller A Port 2
# set host-name id 50060B0000C2863c smb-srv3-host1 # Controller A Port 1
# set host-name id 50060B0000C2863e smb-srv3-host2 # Controller A Port 2
# show hosts
Host ID Name Discovered Mapped Profile
------------------------------------------------------------
50060B0000C28628 kvm-srv1-host1 Yes Yes Standard
50060B0000C2862A kvm-srv1-host2 Yes Yes Standard
50060B0000C2863C smb-srv3-host1 Yes No Standard
50060B0000C2863E smb-srv3-host2 Yes No Standard
50060B0000C2862C smb-srv1-host1 Yes No Standard
50060B0000C2862E smb-srv1-host2 Yes No Standard
50060B0000C28634 smb-srv2-host1 Yes No Standard
50060B0000C28636 smb-srv2-host2 Yes No Standard

<...output truncated...>

Step 8. View the volumes


# show volumes vdisk VD1
Vdisk Name            Size     Serial Number                    WR Policy  Cache Opt Read Ahead Size  Type     Class    Volume Description
----------------------------------------------------------------------------
VD1   smb-srv-ctdb-01 1999.9MB 00c0ffd7e69d0000d26a325001000000 write-back standard  Default          standard standard
VD1   smb-srv-data-01 199.9GB  00c0ffd7e69d0000f36a325001000000 write-back standard  Default          standard standard

<...output truncated...>

Step 9. Restrict volume access to the three cluster nodes


# map volume access read-write lun 1 ports A1,A2 \
host smb-srv1-host1 smb-srv-ctdb-01
# map volume access read-write lun 1 ports A1,A2 \
host smb-srv1-host2 smb-srv-ctdb-01
# map volume access read-write lun 1 ports A1,A2 \
host smb-srv2-host1 smb-srv-ctdb-01
# map volume access read-write lun 1 ports A1,A2 \
host smb-srv2-host2 smb-srv-ctdb-01
# map volume access read-write lun 1 ports A1,A2 \
host smb-srv3-host1 smb-srv-ctdb-01
# map volume access read-write lun 1 ports A1,A2 \
host smb-srv3-host2 smb-srv-ctdb-01
Info: Command completed successfully. - Mapping succeeded. Host smb-srv1-host1 was mapped for volume smb-srv-ctdb-01 with LUN 1.

Info: Command completed successfully. - Mapping succeeded. Host smb-srv1-host2 was mapped for volume smb-srv-ctdb-01 with LUN 1.

<...output truncated...>

# map volume access read-write lun 2 ports A1,A2 \
host smb-srv1-host1 smb-srv-data-01
# map volume access read-write lun 2 ports A1,A2 \
host smb-srv1-host2 smb-srv-data-01
# map volume access read-write lun 2 ports A1,A2 \
host smb-srv2-host1 smb-srv-data-01
# map volume access read-write lun 2 ports A1,A2 \
host smb-srv2-host2 smb-srv-data-01
# map volume access read-write lun 2 ports A1,A2 \
host smb-srv3-host1 smb-srv-data-01
# map volume access read-write lun 2 ports A1,A2 \
host smb-srv3-host2 smb-srv-data-01
Info: Command completed successfully. - Mapping succeeded. Host smb-srv1-host1 was mapped for volume smb-srv-data-01 with LUN 2.

Info: Command completed successfully. - Mapping succeeded. Host smb-srv1-host2 was mapped for volume smb-srv-data-01 with LUN 2.

<...output truncated...>

Step 10. Verify volume and host mappings
# show volume-map smb-srv-ctdb-01
Info: Retrieving data...
Volume View [Serial Number (00c0ffd7e69d0000d78a095001000000) Name (smb-srv-
ctdb-01) ] Mapping:
Ports LUN Access Host-Port-Identifier Nickname Profile
----------------------------------------------------------------------
A1,A2 1 read-write 50060B0000C2862C smb-srv1-host1 Standard
A1,A2 1 read-write 50060B0000C2862E smb-srv1-host2 Standard
A1,A2 1 read-write 50060B0000C28634 smb-srv2-host1 Standard
A1,A2 1 read-write 50060B0000C28636 smb-srv2-host2 Standard
A1,A2 1 read-write 50060B0000C2863C smb-srv3-host1 Standard
A1,A2 1 read-write 50060B0000C2863E smb-srv3-host2 Standard
not-mapped all other hosts Standard
# show volume-map smb-srv-data-01
Info: Retrieving data...
Volume View [Serial Number (00c0ffd7e69d0000fa8a095001000000) Name (smb-srv-
data-01) ] Mapping:
Ports LUN Access Host-Port-Identifier Nickname Profile
----------------------------------------------------------------------
A1,A2 2 read-write 50060B0000C2862C smb-srv1-host1 Standard
A1,A2 2 read-write 50060B0000C2862E smb-srv1-host2 Standard
A1,A2 2 read-write 50060B0000C28634 smb-srv2-host1 Standard
A1,A2 2 read-write 50060B0000C28636 smb-srv2-host2 Standard
A1,A2 2 read-write 50060B0000C2863C smb-srv3-host1 Standard
A1,A2 2 read-write 50060B0000C2863E smb-srv3-host2 Standard
not-mapped all other hosts Standard
<...output truncated...>
# show host-map
Host View [ID (50060B0000C2863C) Name (smb-srv3-host1) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2

Host View [ID (50060B0000C28634) Name (smb-srv2-host1) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2

Host View [ID (50060B0000C2862C) Name (smb-srv1-host1) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2

Host View [ID (50060B0000C28636) Name (smb-srv2-host2) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2

Host View [ID (50060B0000C2863E) Name (smb-srv3-host2) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2

Host View [ID (50060B0000C2862E) Name (smb-srv1-host2) Profile (Standard) ]
Mapping:
Name            Serial Number                    LUN Access     Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1   read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2   read-write A1,A2
<...output truncated...>

Step 11. From each of the cluster nodes, determine which device files are configured for the
2 GB (/dev/sdb, /dev/sdd, /dev/sdf, /dev/sdh) and 200 GB (/dev/sdc, /dev/sde, /dev/sdg,
/dev/sdi) Fibre Channel disks
# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"
Disk /dev/sda: 146.8 GB, 146778685440 bytes
Disk /dev/sdb: 999 MB, 999997440 bytes
Disk /dev/sdc: 100.0 GB, 99999989760 bytes
Disk /dev/sdd: 999 MB, 999997440 bytes
Disk /dev/sde: 100.0 GB, 99999989760 bytes
Disk /dev/sdf: 999 MB, 999997440 bytes
Disk /dev/sdg: 100.0 GB, 99999989760 bytes
Disk /dev/sdh: 999 MB, 999997440 bytes
Disk /dev/sdi: 100.0 GB, 99999989760 bytes

Step 12. Verify the World Wide IDs (WWIDs) match for each device. The WWIDs must be
the same across each cluster node
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdf
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdh
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sde
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdg
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdi
3600c0ff000d7e69dfa8a095001000000
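
These WWIDs should also appear in the output of multipath -ll, mapped to the
/dev/mapper aliases (smb-srv-ctdb-01, smb-srv-data-01) used throughout this paper.
The output below is illustrative; device-mapper numbering and the vendor/product
strings vary by host and array:

# multipath -ll
smb-srv-ctdb-01 (3600c0ff000d7e69dd78a095001000000) dm-2 HP,MSA2324fc
...
smb-srv-data-01 (3600c0ff000d7e69dfa8a095001000000) dm-3 HP,MSA2324fc
...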

Appendix C: Cluster Configuration File 
(cluster.conf)
<?xml version="1.0"?>
<cluster config_version="19" name="samba-cluster">
<fence_daemon post_join_delay="60"/>
<clusternodes>
<clusternode name="smb-srv1-ci" nodeid="1">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv1-ci"/>
</method>
</fence>
</clusternode>
<clusternode name="smb-srv2-ci" nodeid="2">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv2-ci"/>
</method>
</fence>
</clusternode>
<clusternode name="smb-srv3-ci" nodeid="3">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv3-ci"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_ipmilan" auth="password"
    ipaddr="10.16.143.232" lanplus="on" login="root"
    name="IPMI-smb-srv1-ci" passwd="*******"
    power_wait="5" timeout="20"/>
<fencedevice agent="fence_ipmilan" auth="password"
    ipaddr="10.16.143.233" lanplus="on" login="root"
    name="IPMI-smb-srv2-ci" passwd="*******"
    power_wait="5" timeout="20"/>
<fencedevice agent="fence_ipmilan" auth="password"
    ipaddr="10.16.143.241" lanplus="on" login="root"
    name="IPMI-smb-srv3-ci" passwd="*******"
    power_wait="5" timeout="20"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>

Appendix D: CTDB Configuration Files
/etc/sysconfig/ctdb (Base Configuration)
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes

/etc/sysconfig/ctdb (Active Directory Integration)


CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes

/etc/ctdb/public_addresses
10.16.142.111/21 bond0
10.16.142.112/21 bond0
10.16.142.113/21 bond0

/etc/ctdb/nodes
10.0.0.101
10.0.0.102
10.0.0.103

Appendix E: Samba Configuration File 
(smb.conf)
Base Configuration
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v

guest ok = yes
clustering = yes
idmap backend = tdb2
passdb backend = tdbsam

log file = /var/log/samba/log.%m
max log size = 50

[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes

[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes

Advanced Configuration (Active Directory Integration)
[global]
guest ok = yes
clustering = yes

workgroup = REFARCH-AD
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
security = ads
idmap uid = 20000-29999
idmap gid = 20000-29999
idmap config REFARCH-AD:backend = ad
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:range = 10000000-29999999
idmap config REFARCH-AD:schema_mode = rfc2307
winbind nss info = rfc2307
winbind enum users = no
winbind enum groups = no
winbind separator = +
winbind use default domain = yes
winbind nested groups = yes

[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes

[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes

Appendix F: Cluster Configuration Matrix
                           smb-srv1            smb-srv2            smb-srv3
Nodes
  Node Name                smb-srv1-ci         smb-srv2-ci         smb-srv3-ci
  IP Address
  (cluster interconnect)   10.0.0.101          10.0.0.102          10.0.0.103
  Hostname                 smb-srv1            smb-srv2            smb-srv3
  IP Address
  (public interface)       10.16.142.101       10.16.142.102       10.16.142.103
Fencing
  Fence Type               IPMI Lan            IPMI Lan            IPMI Lan
  Fence Device Name        IPMI-smb-srv1-ci    IPMI-smb-srv2-ci    IPMI-smb-srv3-ci
  Fence Device IP Address  10.16.143.232       10.16.143.233       10.16.143.241
  Fence Method Name        Primary             Primary             Primary
  Fence Instance           IPMI-smb-srv1-ci    IPMI-smb-srv2-ci    IPMI-smb-srv3-ci
Storage
  CTDB Volume
    Type                   Fibre Channel
    Physical Disk          smb-srv-ctdb-01
    Physical Volume        /dev/mapper/smb-srv-ctdb-01
    Volume Group           SMB-CTDB-VG
    Logical Volume         smb-ctdb-lvol1
  Filesystem
    Volume                 CLVM
    Type                   GFS2
    Mount point            /share/ctdb
    Device                 /dev/SMB-CTDB-VG/smb-ctdb-lvol1
  Data Volume
    Type                   Fibre Channel
    Physical Disk          smb-srv-data-01
    Physical Volume        /dev/mapper/smb-srv-data-01
    Volume Group           SMB-DATA1-VG
    Logical Volume         smb-data-lvol1
  Filesystem
    Volume                 CLVM
    Type                   GFS2
    Mount point            /share/data1
    Device                 /dev/SMB-DATA1-VG/smb-data-lvol1

Appendix F: Cluster Configuration Matrix

Appendix G: Adding/Removing HA Nodes
• Two-node HA clusters are a special case that requires a cluster restart (of the
CMAN service) and brief service downtime to activate the change in membership
when adding (2 -> 3) or removing (3 -> 2) a node.
Adding HA Cluster Node

1. Verify the cluster status from any node. Ensure that all nodes are up, running
and the cluster status is Online. Do not add a node to the cluster unless the
cluster is fully formed and in a healthy state:
# clustat

2. Add the new member to the cluster configuration and specify the nodeid. When
expanding from two nodes to three nodes (or greater), the two_node flag must be
disabled. This can be run from any cluster node or the management server:
# ccs --host smb-srv1 --setcman
# ccs --host smb-srv1 --addnode smb-srv3-ci --nodeid="3"
Node smb-srv3-ci added.

3. Add the new node to the fence method (Primary) and an instance of the fence device
(IPMI-smb-srv3-ci) to the fence method. Run this from any cluster node:
# ccs --host smb-srv1 --addmethod Primary smb-srv3-ci
Method Primary added to smb-srv3-ci.
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv3-ci agent=fence_ipmilan \
auth=password ipaddr=10.16.143.241 lanplus=on login=root \
name=IPMI-smb-srv3-ci passwd=password power_wait=5 timeout=20
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv3-ci smb-srv3-ci Primary

4. Propagate the change to all cluster members and start the cluster services. A brief
downtime is required to allow the cluster nodes to synchronize and activate the
change. This can be run from any cluster node:
# ccs --host smb-srv1 --stopall
# ccs --host smb-srv1 --sync --activate
# ccs --host smb-srv1 --checkconf
All nodes in sync.

# ccs --host smb-srv1 --startall
Started smb-srv2-ci
Started smb-srv3-ci
Started smb-srv1-ci

5. Verify the new cluster status from any node:


# clustat



Removing HA Cluster Node

1. Verify the cluster status from any node. Ensure that all nodes are up, running and the
cluster status is Online. Do not remove a node from the cluster unless the cluster is
fully formed and in a healthy state:
# clustat

2. Remove the node from the cluster configuration and propagate the change to the
remaining cluster members. Two-node clusters are a unique case that require the
two_node and expected_votes flags to be enabled. For all other configurations
these flags do not need to be enabled. A brief downtime is required to allow the cluster
nodes to synchronize and activate the change. This can be run from any cluster node
as follows:
# ccs --host smb-srv1 --rmnode smb-srv3-ci
# ccs --host smb-srv1 --setcman two_node=1 expected_votes=1
# ccs --host smb-srv1 --stopall
# ccs --host smb-srv1 --sync --activate
# ccs --host smb-srv1 --checkconf
All nodes in sync.

3. Activate the new (2-node) cluster configuration. The cluster services must be restarted
when downsizing to a two node cluster configuration. This can be run from any cluster
node:
# ccs --host smb-srv1 --startall



Appendix H: Deployment Checklists
Task   Task Description       Location                       Details
Deployment Tasks
1      Deploy Cluster Nodes   smb-srv1, smb-srv2, smb-srv3   Section 4.2
2      Configure Cluster      smb-srv1, smb-srv2, smb-srv3   Section 4.3
3      Configure Storage      smb-srv1, smb-srv2, smb-srv3   Section 4.4
4      Configure CTDB         smb-srv1, smb-srv2, smb-srv3   Section 4.5
5      Configure Samba        smb-srv1, smb-srv2, smb-srv3   Section 4.6
6      Start clvmd            smb-srv1, smb-srv2, smb-srv3   Section 4.7
7      Mount GFS2 Volumes     smb-srv1, smb-srv2, smb-srv3   Section 4.8
8      Start CTDB/Samba       smb-srv1, smb-srv2, smb-srv3   Section 4.9
9      Verify File Share      smb-srv1, smb-srv2, smb-srv3   Section 4.10
Appendix H: Deployment Checklist



Acknowledgements
The author would like to express a sincere thank you to the following individuals
for their time, support and many valued contributions during the development of
this reference architecture.

Contributor      Title                                Contributions
Jeremy Agee      Software Engineer                    Content, Reviews
Sumit Bose       Principal Software Engineer          Content, Reviews
Abhijith Das     Senior Software Engineer             Content, Reviews
David Duncan     Senior Software Engineer             Reviews
Chris Hertel     Senior Principal Software Engineer   Content, Reviews
Lon Hohberger    Supervisor, Software Engineering     Reviews
Ryan McCabe      Senior Software Engineer             Content
Vijay Trehan     Director - Solutions Architecture    Reviews
