Hadoop YARN 2.3 Installation
2014-03-13 22:45
Environment
VM: Ubuntu 12
Java: JDK 6
Hadoop 2.3: hadoop-2.3.0.tar.gz
1. Java install path
jdk-6u45-linux-i586.bin
/usr/lib/jdk/jdk1.6.0_45
Configure the environment variables:
sudo vi /etc/profile
export JAVA_HOME=/usr/lib/jdk/jdk1.6.0_45
export HADOOP_HOME=/home/lew/software/hadoop-2.3.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
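Before moving on, it is worth confirming the exports actually took effect. A minimal sketch (writing to a temp file instead of /etc/profile, and using CLASSPATH, the variable name Java actually reads):

```shell
# Write the same exports to a temp file and source it, mirroring /etc/profile.
cat > /tmp/hadoop-profile-check.sh <<'EOF'
export JAVA_HOME=/usr/lib/jdk/jdk1.6.0_45
export HADOOP_HOME=/home/lew/software/hadoop-2.3.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
EOF
. /tmp/hadoop-profile-check.sh

# Verify the variables are visible in the current shell.
echo "JAVA_HOME=$JAVA_HOME"
echo ":$PATH:" | grep -q ":$JAVA_HOME/bin:" \
  && echo "PATH contains JAVA_HOME/bin" \
  || echo "PATH is missing JAVA_HOME/bin"
```

On the real machine, run `. /etc/profile` (or open a new shell) and check `java -version` the same way.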
2. Edit the hosts file
vi /etc/hosts
127.0.0.1 hadoop
lew@hadoop:~$ hostname
hadoop
3. Install Hadoop (pseudo-distributed)
tar -zxvf hadoop-2.3.0.tar.gz
1) Configure hadoop-env.sh
export JAVA_HOME=/usr/lib/jdk/jdk1.6.0_45
vi /etc/profile
export HADOOP_HOME=/home/lew/software/hadoop-2.3.0/
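The JAVA_HOME line in hadoop-env.sh can also be set non-interactively instead of with vi. A sketch using a temp copy for illustration (point the variable at your real $HADOOP_HOME/etc/hadoop directory in practice):

```shell
# Stand-in for $HADOOP_HOME/etc/hadoop; replace with the real path in practice.
HADOOP_CONF=$(mktemp -d)

# Simulate the stock hadoop-env.sh line that ships with the tarball.
echo 'export JAVA_HOME=${JAVA_HOME}' > "$HADOOP_CONF/hadoop-env.sh"

# Replace the JAVA_HOME line with the concrete JDK path used in this guide.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jdk/jdk1.6.0_45|' \
  "$HADOOP_CONF/hadoop-env.sh"

grep '^export JAVA_HOME=' "$HADOOP_CONF/hadoop-env.sh"
```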
2)mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
3)core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop:8020</value> Note: this host name must resolve; it matches the hostname set in /etc/hosts above.
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/lew/hadoop/tmp/</value> Note: the default is under /tmp, which is wiped on every reboot.
</property>
4) yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value> Note: if this value is wrong, the NodeManager will shut itself down shortly after starting.
</property>
5) hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
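Hand-editing these four files invites exactly the kind of typo fixed above (dfs.replication). A sketch that generates them with here-docs instead; CONF_DIR stands in for $HADOOP_HOME/etc/hadoop, and the fs.default.name host is assumed to be the machine's hostname, hadoop:

```shell
# Stand-in for $HADOOP_HOME/etc/hadoop; replace with the real path in practice.
CONF_DIR=$(mktemp -d)

cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lew/hadoop/tmp/</value>
  </property>
</configuration>
EOF

cat > "$CONF_DIR/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF

cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

ls "$CONF_DIR"
```

Note the `<configuration>` root element: the snippets above omit it for brevity, but the real files must have it.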
4. Set up passwordless SSH login
ssh-keygen -t rsa
cd ~/.ssh/
cat id_rsa.pub >>authorized_keys
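The same three steps can be done non-interactively; a sketch using a temp directory for illustration (in practice use ~/.ssh, then verify with `ssh localhost`):

```shell
# Stand-in for ~/.ssh; use the real directory in practice.
SSH_DIR=$(mktemp -d)

# -N "" gives an empty passphrase, -q suppresses the prompts.
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# Authorize the new public key; authorized_keys must not be group/world-writable.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```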
5. Start Hadoop
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
or: sbin/start-dfs.sh
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
or: sbin/start-yarn.sh
or simply: sbin/start-all.sh
Stop everything: sbin/stop-all.sh
6. Verify that Hadoop is running
lew@hadoop:~/software/hadoop-2.3.0$ jps
9258 NameNode
9560 ResourceManager
12309 Jps
9401 DataNode
9792 NodeManager
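A quick way to turn that visual check into a script is to scan the jps output for the four expected daemons. A sketch, fed here with the sample output above (on a live node you would run `jps > /tmp/jps.out; check_daemons /tmp/jps.out`):

```shell
# check_daemons FILE: report any of the four Hadoop daemons missing from
# jps-style output stored in FILE.
check_daemons() {
  missing=0
  for d in NameNode DataNode ResourceManager NodeManager; do
    grep -qw "$d" "$1" || { echo "missing: $d"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all daemons running"
}

# Sample jps output matching the listing above.
JPS_OUT=$(mktemp)
printf '9258 NameNode\n9560 ResourceManager\n9401 DataNode\n9792 NodeManager\n12309 Jps\n' > "$JPS_OUT"
check_daemons "$JPS_OUT"
```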
ResourceManager web UI: http://localhost:8088/
![](https://img-blog.csdn.net/20140313224246703?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcTQ3NjM1NTAyMQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
NodeManager web UI: http://localhost:8042
![](https://img-blog.csdn.net/20140313224159078?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcTQ3NjM1NTAyMQ==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast)
7. Run the bundled wordcount example
1) Prepare the input data:
lew@hadoop:~/input$ echo "hello hadoop1" >test1.txt
lew@hadoop:~/input$ echo "hello lew" >test2.txt
lew@hadoop:~/input$ cat test*
hello hadoop1
hello lew
2) Upload the data to HDFS:
lew@hadoop:~/software/hadoop-2.3.0$ bin/hadoop fs -put /home/lew/input/ /input/
lew@hadoop:~/software/hadoop-2.3.0$ bin/hadoop fs -ls /input/input
Found 2 items
-rw-r--r-- 3 lew supergroup 14 2014-03-13 22:31 /input/input/test1.txt
-rw-r--r-- 3 lew supergroup 10 2014-03-13 22:31 /input/input/test2.txt
3) Run the wordcount example:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar wordcount /input/input/ /output
Check the result (the job writes to the /output directory given above):
lew@hadoop:~/software/hadoop-2.3.0$ bin/hadoop fs -cat /output/part-r*
hadoop1 1
hello 2
lew 1
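The expected counts can be sanity-checked locally with a plain shell pipeline that mimics what the MapReduce job does for these two files:

```shell
# Recreate the two input files from step 1) in a temp directory.
IN=$(mktemp -d)
echo "hello hadoop1" > "$IN/test1.txt"
echo "hello lew" > "$IN/test2.txt"

# Split into words, count, and print in wordcount's "word<TAB>count" shape.
cat "$IN"/test* | tr -s ' ' '\n' | sort | uniq -c | awk '{print $2 "\t" $1}'
```

This prints `hadoop1 1`, `hello 2`, `lew 1`, which is what the job's part-r-* file should contain for this input.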