of Hadoop on your CentOS 6/RHEL box is now a lot simpler since RPM
versions have been made available, but you nonetheless need to have the JDK installed first.
Set the JAVA_HOME path to /usr/java/default, and you can then install Hadoop via yum from the EPEL repo.

$ sudo yum -y install hadoop
If you have any problems with yum, you can also use the Apache mirror service: download your preferred package and install it with

$ sudo rpm -Uvh <rpm_package_name>
Once installed as a package, set it all up.
Generate the Hadoop configuration on all nodes:

$ /usr/sbin/

Here ${namenode} and ${jobtracker} should be replaced with the hostnames of the namenode and the jobtracker.

Format the namenode and set up the default HDFS layout.

$ /usr/sbin/

Start the datanode service on all data nodes (stop it first if it is already running).

$ /etc/init.d/hadoop-datanode start

Start the job tracker node.
$ /etc/init.d/hadoop-jobtracker start

Start the task tracker nodes.
$ /etc/init.d/hadoop-tasktracker start
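On a multi-node cluster, the datanode and tasktracker starts above have to be repeated on every slave host. A minimal sketch of that loop, with hypothetical hostnames (a real setup would read the host list from /etc/hadoop/slaves and run the commands over ssh instead of echoing them):

```shell
# Hypothetical slave hosts; a real cluster would read /etc/hadoop/slaves.
SLAVES="slave1 slave2"

# Echo the remote start commands so the loop can be inspected safely;
# on a real cluster, replace 'echo' with: ssh "$host" sudo ...
for host in $SLAVES; do
  for svc in hadoop-datanode hadoop-tasktracker; do
    echo "ssh $host sudo /etc/init.d/$svc start"
  done
done
```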

Create a user account on HDFS for yourself.
$ /usr/sbin/ -u $USER

Set up Hadoop Environment
$ vi ~/.bash_profile

In INSERT mode, set the path for JAVA_HOME:


Save the file by pressing Esc and then typing :wq.

Run the .bash_profile
$ source ~/.bash_profile
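The export lines for the profile edit above are missing from the original post; assuming the JDK symlink path mentioned earlier, they would look like this:

```shell
# Point the environment at the JDK; /usr/java/default is the symlink
# maintained by the JDK RPMs on CentOS/RHEL (path taken from the text above).
export JAVA_HOME=/usr/java/default
export PATH="$JAVA_HOME/bin:$PATH"
```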

Set the JAVA_HOME path in the Hadoop environment file:
$ sudo vi /etc/hadoop/

Configure Hadoop

Use the following:

$ sudo vi /etc/hadoop/core-site.xml:
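The XML listing is missing from the original post; a typical minimal Hadoop 1.x core-site.xml sets only fs.default.name, using the ${namenode} placeholder from earlier (port 9000 is a common choice here, not a requirement):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- URI of the HDFS namenode; replace ${namenode} with its hostname. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://${namenode}:9000</value>
  </property>
</configuration>
```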


Hadoop Commands

$ hadoop

$ hadoop namenode -format (Format the namenode; if asked, answer 'Y')
$ hadoop namenode (Start the namenode)
$ find / -name (Find a file by name)
$ cd /usr/sbin (Change to the given directory)

$ hadoop fs -ls / (Shows the HDFS root folder)
$ hadoop fs -put input/file01 /input/file01 (Copy local input/file01 to HDFS /input/file01)

Author: Paul Anthony McGowan

Web Technology & Linux Enthusiast, JavaScript Aficionado, General Observer Of World Corruption. Builder Of A Variety Of Web Properties And Campaigner Against Serious Government Criminality. Founder of Vorteasy.
