Hadoop Installation Tutorial on Ubuntu
2016-04-18 13:47
Install Hadoop 2.2.0 on Ubuntu Linux 13.04 (Single-Node Cluster)
This tutorial explains how to install Hadoop 2.2.0/2.3.0/2.4.0/2.4.1 on Ubuntu 13.04/13.10/14.04 (Single-Node Cluster). This setup does not require an additional user for Hadoop. All Hadoop-related files are stored inside the ~/hadoop directory.
Install a JRE. If you want the Oracle JRE, follow this post.
Install SSH:
sudo apt-get install openssh-server
Generate an SSH key:
ssh-keygen -t rsa -P ""
Enable the SSH key:
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
(If ssh later still prompts for a password, make sure ~/.ssh is mode 700 and authorized_keys is mode 600.)
(Optional) Disable SSH login from remote addresses by setting in /etc/ssh/sshd_config:
ListenAddress 127.0.0.1
Test the local connection:
ssh localhost
If it works, exit:
exit
Otherwise, debug before continuing.
Download Hadoop 2.2.0 (or newer versions)
Unpack, rename and move to the home directory:
tar xvf hadoop-2.2.0.tar.gz
mv hadoop-2.2.0 ~/hadoop
Create HDFS directory:
mkdir -p ~/hadoop/data/namenode
mkdir -p ~/hadoop/data/datanode
In file ~/hadoop/etc/hadoop/hadoop-env.sh insert (after the comment "The java implementation to use."):
export JAVA_HOME="`dirname $(readlink /etc/alternatives/java)`/../"
export HADOOP_COMMON_LIB_NATIVE_DIR="$HOME/hadoop/lib"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HOME/hadoop/lib"
($HOME is used instead of ~ here because the tilde is not expanded inside double quotes.)
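The JAVA_HOME line works by following the /etc/alternatives/java symlink and stepping up out of bin/. A minimal sketch of the same resolution, run against a throwaway directory tree (the java-7-openjdk path is only illustrative):

```shell
# Build a fake alternatives layout in a temp dir (illustrative path names).
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/lib/jvm/java-7-openjdk/bin" "$tmp/etc/alternatives"
touch "$tmp/usr/lib/jvm/java-7-openjdk/bin/java"
ln -s "$tmp/usr/lib/jvm/java-7-openjdk/bin/java" "$tmp/etc/alternatives/java"

# Same expression as in hadoop-env.sh: directory of the symlink target,
# then up one level -- bin/.. is the JDK home.
JAVA_HOME="$(dirname "$(readlink "$tmp/etc/alternatives/java")")/../"
echo "$JAVA_HOME"
```

On a real Ubuntu system the single readlink is enough because /etc/alternatives/java points straight at the binary; use readlink -f instead if your link is chained through several levels.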
In file ~/hadoop/etc/hadoop/core-site.xml (inside <configuration> tag):
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
(fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS, but both still work here.)
In file ~/hadoop/etc/hadoop/hdfs-site.xml (inside <configuration> tag):
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>${user.home}/hadoop/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>${user.home}/hadoop/data/datanode</value>
</property>
In file ~/hadoop/etc/hadoop/yarn-site.xml (inside <configuration> tag):
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
Create file ~/hadoop/etc/hadoop/mapred-site.xml:
cp ~/hadoop/etc/hadoop/mapred-site.xml.template ~/hadoop/etc/hadoop/mapred-site.xml
And insert (inside <configuration> tag):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
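Alternatively, the whole file can be written in one step with a heredoc; this is equivalent to copying the template and editing it (the two XML header lines match Hadoop's stock template; mkdir -p is a no-op on an existing install):

```shell
# Write mapred-site.xml in one go (path per this tutorial's layout).
conf_dir=~/hadoop/etc/hadoop
mkdir -p "$conf_dir"   # no-op if the directory already exists
cat > "$conf_dir/mapred-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
```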
Add Hadoop binaries to PATH:
echo 'export PATH=$PATH:$HOME/hadoop/bin:$HOME/hadoop/sbin' >> ~/.bashrc
source ~/.bashrc
(Single quotes matter here: they keep $PATH from being expanded at echo time, so the line written to ~/.bashrc stays generic.)
Format HDFS:
hdfs namenode -format
Start Hadoop:
start-dfs.sh && start-yarn.sh
If you get the warning:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
it is because you are running on a 64-bit system while the bundled Hadoop native library is 32-bit. This is not a big issue; if you want to fix it (optional), you can rebuild the native library for 64-bit.
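To confirm the mismatch, compare your machine's architecture with the library's ELF class (the library path is per this tutorial's layout; the check is guarded so it is skipped when the file is absent):

```shell
# "ELF 32-bit" in the file output confirms the cause of the warning.
lib=~/hadoop/lib/native/libhadoop.so.1.0.0
[ -f "$lib" ] && file "$lib"
# x86_64 here means you are on a 64-bit platform:
uname -m
```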
Check status:
jps
Expected output (PIDs will differ):
10969 DataNode
11745 NodeManager
11292 SecondaryNameNode
10708 NameNode
11483 ResourceManager
13096 Jps
N.B. The old JobTracker has been replaced by the ResourceManager.
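The check can be scripted; the sketch below reuses the sample output above so it is self-contained, but on a live node you would set sample="$(jps)" instead:

```shell
# Sample jps output (PIDs illustrative); on a live node use: sample="$(jps)"
sample='10969 DataNode
11745 NodeManager
11292 SecondaryNameNode
10708 NameNode
11483 ResourceManager'

# Verify that each of the five expected daemons appears in the output.
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  printf '%s\n' "$sample" | grep -qw "$d" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons up"
```

grep -w matches whole words only, so NameNode does not falsely match the SecondaryNameNode line.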
Access web interfaces:
Cluster status: http://localhost:8088
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
Test Hadoop:
hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -write -nrFiles 20 -fileSize 10
Check the results, then remove the test files:
hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -clean
And run the pi example:
hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
Stop Hadoop:
stop-dfs.sh && stop-yarn.sh
Some of these steps are taken from this tutorial.