
SINGLE NODE HADOOP CLUSTER SETUP ON CENT OS

By Gopal Krishna

STEP 1: Download VMware Player or VMware Workstation

You may download VMware Player from the location below.

https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/5_0

STEP 2: Download the CentOS 6.3 DVD ISO, which is a stable version

For a 64-bit Windows host, use the URL below:

http://mirrors.hns.net.in/centos/6.3/isos/x86_64/

For a 32-bit Windows host, use the URL below:

http://wiki.centos.org/Download

STEP 3: Run VMware Player and click on Create New Virtual Machine.
Browse to the ISO downloaded in the previous step.
CentOS will be installed inside the virtual machine. You will be asked to enter a
password for the root user; enter your root password and remember it.
After the installation is over, click on the Install to disk icon on your CentOS
desktop.
Once the installation is over, shut down the machine and log in as the root user.

STEP 4: Installation of VMware Tools

After installing VMware Player, install VMware Tools so that you can share data between
the host system and the guest system. You also get the full-screen view only if VMware
Tools is installed. Below are the steps for installing VMware Tools.
Go to the VM menu and select Install VMware Tools.

Go to the desktop:
cd ~/Desktop
Untar the tar file:
tar -xzf VMwareTools*.tar.gz
cd ~/Desktop/vmware-tools-distrib
Install VMware Tools:
./vmware-install.pl
The installer asks for a number of values; the defaults are good enough, so you can
simply press Enter 10-12 times.
Enable sharing between the host OS and the guest OS as follows:
Go to Virtual Machine > Virtual Machine Settings > Options > Shared Folders and check
whether sharing is enabled. If it is, you can add a new shared location.
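Once a shared folder is enabled, it normally shows up under /mnt/hgfs inside the guest. A quick check, as a sketch (the share name "winshare" is just an assumed example):

ls /mnt/hgfs
# Lists all shares exposed by the host
ls /mnt/hgfs/winshare
# Lists the contents of the assumed share named "winshare"
cp /mnt/hgfs/winshare/somefile.txt ~/
# Copies an example file from the host share into the guest home directory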

STEP 5: Installation of Java

Hadoop needs Java installed on your CentOS machine, but CentOS does not come with
Oracle Java because of licensing issues. So, please use the commands below to
install Java.
Download the latest Java from the location below: click the JDK Download button
and select the Accept License Agreement radio button.

http://www.oracle.com/technetwork/java/javase/downloads/index.html
[root@localhost ~]#rpm -Uvh /root/Downloads/jdk-7u15-linux-x64.rpm
[root@localhost ~]#alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 20000
[root@localhost ~]#export JAVA_HOME="/usr/java/latest"

Confirm that the Java compiler is on the path by running

[root@localhost ~]#javac -version
javac 1.7.0_15
[root@localhost ~]#
Confirm the java version by running
[root@localhost ~]#java -version
java version "1.7.0_15"
Java(TM) SE Runtime Environment (build 1.7.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
[root@localhost ~]#
If you face any issues while installing the JDK, check that the JDK version you
downloaded matches the one you are using in the command.
If it shows some errors related to missing packages, you can ignore them.
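Note that the export JAVA_HOME command above only lasts for the current shell session. One way to make it permanent, as a sketch (the /etc/profile.d convention is standard on CentOS, and /usr/java/latest is the symlink created by the Oracle JDK RPM):

# Create a small profile script so JAVA_HOME is set for every login shell
[root@localhost ~]#echo 'export JAVA_HOME=/usr/java/latest' > /etc/profile.d/java.sh
[root@localhost ~]#echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile.d/java.sh
# Load it into the current shell as well
[root@localhost ~]#source /etc/profile.d/java.sh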

STEP 6: Confirm your machine name

When you create a new CentOS machine, the default host name is
localhost.localdomain. Check your hostname by giving the following command.
[root@localhost ~]#hostname
You should get localhost.localdomain as the output of the above command.
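If your hostname turns out to be something else and you want to reset it, a sketch for CentOS 6 (the value shown is simply the default name used throughout this tutorial):

# Show the current hostname
[root@localhost ~]#hostname
# To change it permanently on CentOS 6, edit /etc/sysconfig/network and set
# HOSTNAME=localhost.localdomain, then apply it for the current session:
[root@localhost ~]#hostname localhost.localdomain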
STEP 7: Adding a dedicated Hadoop system user

We will use a dedicated Hadoop user account for running Hadoop. While that's not
required, it is recommended because it helps to separate the Hadoop installation
from other software applications and user accounts running on the same machine
(think: security, permissions, backups, etc.).
[root@localhost ~]#groupadd hadoop
[root@localhost ~]#useradd hduser -g hadoop
[root@localhost ~]#passwd hduser
It asks for a new password; enter it again for confirmation and remember your
password.
Add hduser to the sudoers list so that hduser can do admin tasks.
Step 1: Run visudo

Step 2: Add a line under ## Allow root to run any commands anywhere in the
following format: hduser ALL=(ALL) ALL
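After this edit, that part of the sudoers file should look roughly like the following (the root line already exists on CentOS; the hduser line is the one you add):

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
hduser  ALL=(ALL)       ALL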
This adds the user hduser and the group hadoop to your local machine.
Exit from the root user and log in as hduser to proceed further; type the
following to switch from root to hduser:
[root@localhost ~]#su - hduser

STEP 8: Configuring SSH


Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your
local machine if you want to use Hadoop on it (which is what we want to do in this
short tutorial). For our single-node setup of Hadoop, we therefore need to
configure SSH access to localhost for the hduser user we created in the previous
section.
I assume that you have SSH up and running on your machine and configured it to
allow
SSH public key authentication.

Install the SSH server on your computer:


[hduser@localhost ~]$sudo yum install openssh-server

NOTE: For the above step, an Internet connection is required


[hduser@localhost ~]$ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key
(/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@localhost
The key's randomart image is:
(RSA 2048 randomart box omitted)

[hduser@localhost ~]$
The final step is to test the SSH setup by connecting to your local machine with the
hduser user. This step is also needed to save your local machine's host key
fingerprint to the hduser user's known_hosts file. If you have any special SSH
configuration for your local machine, like a non-standard SSH port, you can define
host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more
information).
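For example, if sshd listened on a hypothetical non-standard port such as 2222, a $HOME/.ssh/config entry could look like this (not needed for the default setup in this tutorial):

Host localhost
    Port 2222
    IdentityFile ~/.ssh/id_rsa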

#Now copy the public key to the authorized_keys file, so that ssh does not require a password every time
[hduser@localhost ~]$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
#Change permissions of the authorized_keys file so that only hduser can read and modify it
[hduser@localhost ~]$chmod 700 ~/.ssh/authorized_keys
If sshd is not running, start it by giving the command below:
[hduser@localhost ~]$ sudo service sshd start
Run the command below to keep sshd running even after a system reboot:
[hduser@localhost ~]$sudo chkconfig sshd on

Stop the firewall, if enabled, with the following commands:


[hduser@localhost ~]$sudo service iptables stop
Run the command below to keep iptables stopped even after a system reboot:
[hduser@localhost ~]$sudo chkconfig iptables off

Test the SSH connectivity by doing the following:


[hduser@localhost ~]$ssh localhost
You should be able to ssh to localhost without a password prompt. If it asks for a
password while connecting to localhost, something went wrong and you need to fix it
before proceeding further.
Without passwordless SSH working properly, the Hadoop cluster will not work, so there
is no point in going further until passwordless SSH works. If you face any issues
with SSH, consult Google University, where copious help is available.
When you connect for the first time, you will see output similar to the following
(this sample was captured on a different system, so the OS details will differ on CentOS):
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010
i686 GNU/Linux
Ubuntu 10.04 LTS
[...snipp...]
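A quick way to confirm that passwordless SSH really works is to run a single command over SSH; it should print the hostname without ever prompting for a password:

[hduser@localhost ~]$ssh localhost hostname
localhost.localdomain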
The next sections describe how to set up and run Hadoop.
STEP 9: Download Hadoop
For this tutorial, I am using Hadoop version 1.0.4, but it should work with other
versions as well.
Go to the URL http://archive.apache.org/dist/hadoop/core/ and click on hadoop-1.0.4/,
then select hadoop-1.0.4.tar.gz. The file will be saved to /home/hduser/Downloads if
you choose the defaults. Now perform the following steps to install Hadoop on your
CentOS machine.
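If you prefer the command line over the browser, the same download can be done with wget, as a sketch (the URL is taken from the archive listing above):

[hduser@localhost ~]$cd /home/hduser/Downloads
[hduser@localhost Downloads]$wget http://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz

Either way, once hadoop-1.0.4.tar.gz is in /home/hduser/Downloads, continue with the commands below.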
Copy your downloaded file from Downloads folder to /usr/local folder
$sudo cp /home/hduser/Downloads/hadoop-1.0.4.tar.gz /usr/local
$cd /usr/local
$sudo tar -xzf hadoop-1.0.4.tar.gz
$sudo chown -R hduser:hadoop hadoop-1.0.4
$sudo ln -s hadoop-1.0.4 hadoop
$sudo chown -R hduser:hadoop hadoop

STEP 10: Add the Java location to Hadoop so that it can recognize Java
Add the following to /usr/local/hadoop/conf/hadoop-env.sh. Type the command below
to edit the hadoop-env.sh file:

nano /usr/local/hadoop/conf/hadoop-env.sh
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
export JAVA_HOME=/usr/java/default
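To confirm that the three lines were saved correctly, a quick check:

[hduser@localhost ~]$grep -E 'JAVA_HOME|HADOOP_OPTS|WARN_SUPPRESS' /usr/local/hadoop/conf/hadoop-env.sh
# Should print the three export lines added above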
STEP 11: Update $HOME/.bashrc
Add the following lines to the end of the $HOME/.bashrc file of user hduser. If you
use a shell other than bash, you should of course update its appropriate
configuration files instead of .bashrc.
Give the below commands to edit .bashrc file and paste the below commands and
save the file.
nano ~/.bashrc
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/java/default
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin

You need to close the terminal and open a new terminal for the .bashrc changes to
take effect.
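Once a new terminal is open, you can sanity-check the environment and aliases like this (the HDFS listing itself will only succeed after the cluster is started in Step 17):

[hduser@localhost ~]$echo $HADOOP_HOME
/usr/local/hadoop
[hduser@localhost ~]$which hadoop
/usr/local/hadoop/bin/hadoop
[hduser@localhost ~]$fs -ls /
# Equivalent to: hadoop fs -ls /  (works once HDFS is running)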

STEP 12: Create a temporary directory which will be used as base location for DFS.
Now we create the directory and set the required ownerships and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown -R hduser:hadoop /app
$ sudo chmod -R 750 /app/
$ sudo chmod -R 750 /app/hadoop
$ sudo chmod -R 750 /app/hadoop/tmp

If you forget to set the required ownerships and permissions, you will see a
java.io.IOException when you try to format the NameNode later (Step 16).
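You can verify the ownership and permissions before moving on; each directory should be owned by hduser:hadoop with mode drwxr-x---:

[hduser@localhost ~]$ls -ld /app /app/hadoop /app/hadoop/tmp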

STEP 13: core-site.xml file updating


Add the following snippets between the <configuration> ... </configuration> tags in
/usr/local/hadoop/conf/core-site.xml:
nano /usr/local/hadoop/conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

STEP 14: mapred-site.xml file updating


Add the following to /usr/local/hadoop/conf/mapred-site.xml between
<configuration> ... </configuration>

nano /usr/local/hadoop/conf/mapred-site.xml

<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job
tracker runs at. If "local", then jobs are run in-process as a
single map
and reduce task.
</description>
</property>

STEP 15: hdfs-site.xml file updating


Add the following to /usr/local/hadoop/conf/hdfs-site.xml between <configuration>
... </configuration>
nano /usr/local/hadoop/conf/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
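A quick way to catch typos in the three edited XML files is to check that they are still well-formed. One sketch uses xmllint, which comes with the libxml2 package normally present on CentOS:

for f in core-site.xml mapred-site.xml hdfs-site.xml; do
xmllint --noout /usr/local/hadoop/conf/$f && echo "$f is well-formed"
done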

STEP 16: Format the Name Node


Format the HDFS filesystem with the command below:
[hduser@localhost ~]$hadoop namenode -format
If the format does not work, double-check your entries in the .bashrc file. The
.bashrc updates take effect only in a newly opened terminal.
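A successful format creates the NameNode metadata directory under the hadoop.tmp.dir configured earlier; a quick check (the dfs/name path is the Hadoop 1.x default derived from hadoop.tmp.dir):

[hduser@localhost ~]$ls /app/hadoop/tmp/dfs/name
# You should see a current/ subdirectory holding the fsimage, edits and VERSION files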

STEP 17: Starting single-node cluster


Congratulations, your Hadoop single-node cluster is ready to use. Test your cluster by
running the following commands.
[hduser@localhost ~]$start-all.sh
If you get any SSH-related error, like connection refused, ensure that your sshd
service is running by checking with the command below, and start it if needed.
[hduser@localhost ~]$sudo service sshd status
A command-line utility to check whether all the Hadoop daemons are running is jps.
Run jps at the command prompt and you should see something like the output below.
[hduser@localhost ~]$ jps
9168 Jps
9127 TaskTracker
8824 DataNode
8714 NameNode
8935 SecondaryNameNode
9017 JobTracker

Check whether Hadoop is accessible through the browser by hitting the URLs below:
For MapReduce (JobTracker) - http://localhost:50030
For HDFS (NameNode) - http://localhost:50070
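As a final smoke test, you can create a directory in HDFS and list it (a sketch; the path is just an example), and stop the cluster when you are done:

[hduser@localhost ~]$hadoop fs -mkdir /user/hduser/input
[hduser@localhost ~]$hadoop fs -ls /user/hduser
# Should show the newly created input directory
[hduser@localhost ~]$stop-all.sh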
