Single Node Hadoop Cluster Setup On CentOS
By Gopal Krishna
STEP 1: Download VMware Player from the below URL
https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/5_0
STEP 2: Download the CentOS ISO (DVD version 6.3), which is a stable version
http://mirrors.hns.net.in/centos/6.3/isos/x86_64/
For a 32-bit Windows host, use the below URL
http://wiki.centos.org/Download
STEP 3: Run the VMware Player and click on Create a New Virtual Machine.
Browse to the ISO downloaded in the previous step.
CentOS will be installed on the local system. You will be asked to enter a password
for the root user; enter your root password and remember it.
After the installation is over, click on the Install to disk icon on your CentOS
Desktop.
Once the installation is over, shut down the machine and log in as the root
user.
After installing the VMware Player, you can install VMware Tools so that you can share data
from the host system to the guest system. You will also get the full-screen view only if you install
VMware Tools. Below are the steps for installing VMware Tools.
Go to the VM menu and select Install VMware Tools, as shown in the below screenshot:
Go to desktop
cd ~/Desktop
Untar the tar file
tar -xzf VMwareTools*.tar.gz
cd ~/Desktop/vmware-tools-distrib
Install the VMware Tools
./vmware-install.pl
The above command will ask you for some values; the defaults are good
enough. You will have to press Enter 10-12 times.
Enable sharing between the host OS and guest OS as follows:
Go to Virtual Machine > Virtual Machine Settings > Options > Shared Folders and see
whether sharing is enabled or not. If sharing is enabled, you can add a new share
location.
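Once the share is added, VMware Tools normally exposes it inside the CentOS guest under /mnt/hgfs (this mount point is the VMware Tools default and may differ on your setup). You can check it with:
ls /mnt/hgfs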
Hadoop needs Java installed on your CentOS machine, but CentOS does not come
with Oracle Java because of licensing issues. So, please use the below commands
to install Java.
Download the latest Java from the below location. Click on the link, click on the
JDK Download button, and select the Accept License Agreement radio
button.
http://www.oracle.com/technetwork/java/javase/downloads/index.html
[root@localhost ~]#rpm -Uvh /root/Downloads/jdk-7u15-linux-x64.rpm
[root@localhost ~]#alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 20000
[root@localhost ~]#export JAVA_HOME="/usr/java/latest"
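To verify that Java was installed correctly, you can check the version and the JAVA_HOME variable (the exact version string depends on the JDK build you downloaded):
[root@localhost ~]#java -version
[root@localhost ~]#echo $JAVA_HOME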
When you create a new CentOS machine, the default host name is
localhost.localdomain. Check your hostname by giving the following command.
[root@localhost ~]#hostname
You should get localhost.localdomain as the output of the above command.
STEP 7: Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop. While that's not
required, it is recommended because it helps separate the Hadoop installation
from other software applications and user accounts running on the same machine
(think: security, permissions, backups, etc.).
[root@localhost ~]#groupadd hadoop
[root@localhost ~]#useradd hduser -g hadoop
[root@localhost ~]#passwd hduser
It asks for a new password; enter it again for confirmation and remember
your password.
Add hduser to the sudoers list so that hduser can do admin tasks.
Step 1: Run visudo
Step 2: Add a line under ## Allow root to run commands anywhere in the format:
hduser ALL=(ALL) ALL
This will add the user hduser and the group hadoop to your local machine.
Exit from the root user and log in as hduser to proceed further. Type
the following to switch from root to hduser:
[root@localhost ~]#su hduser
[hduser@localhost ~]$
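As an optional check that the sudoers entry works, run a command through sudo as hduser; it should print root:
[hduser@localhost ~]$sudo whoami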
The final step is to test the SSH setup by connecting to your local machine with the
hduser user. This step is also needed to save your local machine's host key
fingerprint to the hduser user's known_hosts file. If you have any special SSH
configuration for your local machine, like a non-standard SSH port, you can define
host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more
information).
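If you have not yet generated an SSH key pair for hduser, create a password-less RSA key first (this guide appears to assume the key already exists; press Enter to accept the default file location):
[hduser@localhost ~]$ssh-keygen -t rsa -P ""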
#Now copy the public key to the authorized_keys file, so that ssh does not require a
password every time
[hduser@localhost ~]$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
#Change permissions of the authorized_keys file to give hduser full access
[hduser@localhost ~]$chmod 700 ~/.ssh/authorized_keys
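Now test the password-less login; the first connection will ask you to confirm the host key fingerprint, after which ssh should log you in without asking for a password:
[hduser@localhost ~]$ssh localhost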
If ssh is not running, then run it by giving the below command
[hduser@localhost ~]$ sudo service sshd start
Run the below command to have sshd running even after a system reboot.
[hduser@localhost ~]$sudo chkconfig sshd on
STEP 10: Add Java location to Hadoop so that it can recognize Java
Add the following to /usr/local/hadoop/conf/hadoop-env.sh. Type the below
command to enter contents into the hadoop-env.sh file
nano /usr/local/hadoop/conf/hadoop-env.sh
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
export JAVA_HOME=/usr/java/default
STEP 11: Update $HOME/.bashrc
Add the following lines to the end of the $HOME/.bashrc file of user hduser. If you
use a shell other than bash, you should of course update its appropriate
configuration files instead of .bashrc.
Give the below command to edit the .bashrc file, paste the following lines, and
save the file.
nano ~/.bashrc
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/java/default
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin
You need to close the terminal and open a new terminal for the bash changes to
take effect.
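Alternatively, you can reload the file in the current terminal instead of opening a new one:
[hduser@localhost ~]$source ~/.bashrc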
STEP 12: Create a temporary directory which will be used as base location for DFS.
Now we create the directory and set the required ownerships and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown -R hduser:hadoop /app/hadoop/tmp
$ sudo chmod -R 750 /app/
$ sudo chmod -R 750 /app/hadoop
$ sudo chmod -R 750 /app/hadoop/tmp
If you forget to set the required ownerships and permissions, you will see a
java.io.IOException when you try to format the name node in the next section.
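This directory is normally wired into Hadoop through the hadoop.tmp.dir property in core-site.xml; below is a minimal sketch for a single-node setup, assuming the standard Hadoop 1.x property names (hadoop.tmp.dir, fs.default.name) and HDFS listening on the default port 54310:
nano /usr/local/hadoop/conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system.</description>
</property>
Next, add the following property to mapred-site.xml: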
nano /usr/local/hadoop/conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job
tracker runs at. If "local", then jobs are run in-process as a
single map
and reduce task.
</description>
</property>
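Before checking the web interfaces, the NameNode has to be formatted once and the daemons started. Assuming the Hadoop bin/ directory is on the PATH as set in .bashrc, the usual Hadoop 1.x commands are:
[hduser@localhost ~]$hadoop namenode -format
[hduser@localhost ~]$start-all.sh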
Check whether Hadoop is accessible through the browser by hitting the below URLs
For MapReduce - http://localhost:50030
For HDFS - http://localhost:50070
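You can also verify from the command line that all the daemons came up; jps should list NameNode, SecondaryNameNode, DataNode, JobTracker and TaskTracker (the process IDs will differ):
[hduser@localhost ~]$jps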