Installing Apache Hadoop Single Node
2015-07-27 21:33
platform: Ubuntu 14.04 LTS
hadoop 1.2.1
1. install ssh:
$sudo apt-get install openssh-server
$sudo apt-get install openssh-client
2. set up passwordless SSH access:
$ssh wubin (wubin is the local hostname; substitute your own)
$ssh-keygen
$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ssh localhost (should now log in without a password)
(to reach another machine, copy the key over: $ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node13)
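If ssh localhost still prompts for a password, the usual culprit is permissions: sshd ignores authorized_keys when ~/.ssh is group- or world-writable. A side-effect-free sketch of the bits it expects, demonstrated on a scratch directory (apply the same chmods to the real ~/.ssh):

```shell
# sshd requires ~/.ssh to be 700 and authorized_keys 600 (or stricter).
# Shown on a throwaway directory so nothing real is touched.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"
chmod 600 "$demo/.ssh/authorized_keys"
stat -c '%a' "$demo/.ssh" "$demo/.ssh/authorized_keys"   # prints 700 then 600
```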
3. install jdk
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ java -version
4. install hadoop:
download hadoop-1.2.1-bin.tar.gz;
$tar -zxvf hadoop-1.2.1-bin.tar.gz
$sudo cp -r hadoop-1.2.1 /usr/local/hadoop
$sudo chown wubin /usr/local/hadoop
$dir /usr/local/hadoop
$sudo vim $HOME/.bashrc
go to the bottom:
export HADOOP_PREFIX=/usr/local/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin
$exec bash (reload the shell so .bashrc takes effect)
$echo $PATH (verify that $HADOOP_PREFIX/bin now appears; note that typing $PATH by itself just produces a ": No such file or directory" error)
$sudo vim /usr/local/hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
$sudo vim /usr/local/hadoop/conf/core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://WuBin:10001</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
</configuration>
$sudo vim /usr/local/hadoop/conf/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>WuBin:10002</value>
</property>
</configuration>
$sudo mkdir /usr/local/hadoop/tmp
$sudo chown wubin /usr/local/hadoop/tmp
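The tutorial leaves dfs.replication at its Hadoop default of 3, but a single node has only one DataNode, so every block would be reported as under-replicated. A common single-node addition (my own, not from the video) is to set it to 1 in conf/hdfs-site.xml:

$sudo vim /usr/local/hadoop/conf/hdfs-site.xml

```xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
```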
5. start hadoop
$hadoop namenode -format
$start-all.sh
$jps
9792 DataNode
9971 SecondaryNameNode
9641 NameNode
10331 Jps
10237 TaskTracker
10079 JobTracker
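Rather than eyeballing the jps output above, a small script can grep it for the five daemons a healthy single-node Hadoop 1.x cluster runs. The sample output is hard-coded here for illustration; in practice substitute jps_out=$(jps):

```shell
# Verify all five Hadoop 1.x daemons appear in the jps listing.
jps_out="9792 DataNode
9971 SecondaryNameNode
9641 NameNode
10331 Jps
10237 TaskTracker
10079 JobTracker"                      # replace with: jps_out=$(jps)
missing=""
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$jps_out" | grep -q " $d\$" || missing="$missing $d"
done
if [ -z "$missing" ]; then
  echo "all daemons running"
else
  echo "missing:$missing"
fi
```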
$dir /usr/local/hadoop/bin
User Interface:
localhost:50070 (NameNode web UI; other machines on the network can also reach this page via the same port)
localhost:50030 (JobTracker web UI)
6. HDFS commands:
$hadoop fs -mkdir dirname
$hadoop fs -mkdir hdfs://NameNode:port/dirname
$hadoop fs -rmr filename
$hadoop fs -moveFromLocal localfilename hdfsfilename
$hadoop fs -copyToLocal hdfsfilename localfilename
$hadoop fs -put localfilename hdfsfilename
7. Note:
When you deploy a multi-node cluster, you will modify /etc/hosts on the Master. Remember to remove this line:
127.0.0.1 localhost
Leaving it in can cause errors that are hard to diagnose.
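For reference, a minimal Master /etc/hosts for a small cluster might look like the fragment below; the hostnames and addresses are illustrative (node13 matches the ssh-copy-id example above), not from the tutorial:

```
192.168.1.10 master
192.168.1.13 node13
```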
Reference:
[1] Hadoop tutorial: 05 Installing Apache Hadoop Single Node, https://www.youtube.com/channel/UCjZvxgi8ro5VDv7tCqEWwgw.