
Installing Apache Hadoop Single Node

Platform: Ubuntu 14.04 LTS

Hadoop: 1.2.1

1. Install SSH:

$sudo apt-get install openssh-server

$sudo apt-get install openssh-client
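
To confirm the SSH daemon is running before moving on (a quick sanity check; on Ubuntu 14.04 the service is named ssh):

$sudo service ssh status

  ssh start/running, process 1234

(the process number will differ on your machine)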

2. Set up passwordless SSH access:

$ssh-keygen

$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ssh localhost

$ssh wubin (replace wubin with your own hostname; neither login should now ask for a password)

(to grant the same passwordless access to another machine: $ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node13)
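
If ssh localhost still prompts for a password, the usual culprit is the permissions on ~/.ssh; a minimal fix, then a re-test:

$chmod 700 ~/.ssh

$chmod 600 ~/.ssh/authorized_keys

$ssh localhost exit (should log in and return without asking for a password)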

3. Install the JDK:

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install oracle-java8-installer

$ java -version
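
The PPA installer puts the JDK under /usr/lib/jvm; listing that directory confirms the exact name you will need for JAVA_HOME in step 4:

$ls /usr/lib/jvm

  java-8-oracle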

4. Install Hadoop:

Download hadoop-1.2.1-bin.tar.gz, then:

$tar -zxvf hadoop-1.2.1-bin.tar.gz

$sudo cp -r hadoop-1.2.1 /usr/local/hadoop

$sudo chown -R wubin /usr/local/hadoop (replace wubin with your user name)

$dir /usr/local/hadoop

$vim $HOME/.bashrc

  go to the bottom:

  export HADOOP_PREFIX=/usr/local/hadoop
  export PATH=$PATH:$HADOOP_PREFIX/bin

$exec bash

$echo $PATH

  (the output should now end with /usr/local/hadoop/bin)
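
Once the new PATH is in effect, the shell should resolve the hadoop launcher from the copied tree:

$which hadoop

  /usr/local/hadoop/bin/hadoop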

$sudo vim /usr/local/hadoop/conf/hadoop-env.sh

  export JAVA_HOME=/usr/lib/jvm/java-8-oracle

  export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

$sudo vim /usr/local/hadoop/conf/core-site.xml

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://WuBin:10001</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp</value>
    </property>
  </configuration>
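
Note that fs.default.name refers to this machine by hostname (WuBin here), so the name must resolve locally. A minimal /etc/hosts sketch, assuming a LAN address of 192.168.1.13 (an example address, substitute your own):

$cat /etc/hosts

  127.0.0.1    localhost
  192.168.1.13 WuBin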

$sudo vim /usr/local/hadoop/conf/mapred-site.xml

  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>WuBin:10002</value>
    </property>
  </configuration>

$sudo mkdir /usr/local/hadoop/tmp

$sudo chown wubin /usr/local/hadoop/tmp
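
A quick check that the temp directory exists and belongs to your user, so the HDFS daemons can write into it:

$ls -ld /usr/local/hadoop/tmp

  (the listing should show wubin as the owner)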

5. Start Hadoop:

$hadoop namenode -format

$start-all.sh

$jps

  9792 DataNode
  9971 SecondaryNameNode
  9641 NameNode
  10331 Jps
  10237 TaskTracker
  10079 JobTracker

$dir /usr/local/hadoop/bin
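
If any of the six daemons is missing from the jps listing, its log under /usr/local/hadoop/logs usually says why. A sketch of inspecting and restarting (the log file name embeds your user name and hostname, so adjust it to match):

$tail -n 50 /usr/local/hadoop/logs/hadoop-wubin-namenode-WuBin.log

$stop-all.sh

$start-all.sh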

User interfaces:

  localhost:50070 (NameNode web UI; other machines can also view this page through this port)

  localhost:50030 (JobTracker web UI)
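
A quick way to check the web UIs from the command line (replace localhost with the server's address when testing from another machine):

$curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070

  200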

6. HDFS commands:

  $hadoop fs -mkdir dirname

  $hadoop fs -mkdir hdfs://NameNode:port/dirname

  $hadoop fs -rmr filename

  $hadoop fs -moveFromLocal localfile hdfsfile

  $hadoop fs -copyToLocal hdfsfile localfile

  $hadoop fs -put localfile hdfsfile
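
A short end-to-end run of these commands, copying a local file into HDFS and back again (the file and directory names are only examples):

  $echo "hello hadoop" > test.txt

  $hadoop fs -mkdir /input

  $hadoop fs -put test.txt /input/test.txt

  $hadoop fs -cat /input/test.txt

  $hadoop fs -copyToLocal /input/test.txt copy.txt

  $hadoop fs -rmr /input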

7. Note:

  When you deploy a multi-node cluster, you will modify /etc/hosts on the master. Remember to remove this line:

  127.0.0.1 localhost

  Leaving it in place can cause errors that I have found very hard to diagnose.
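
As an illustration, after that cleanup the master's /etc/hosts might contain only the cluster nodes, each by its LAN address (names and addresses here are examples):

  192.168.1.10 master
  192.168.1.13 node13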

Reference:

[1] Hadoop tutorial: 05 Installing Apache Hadoop Single Node, https://www.youtube.com/channel/UCjZvxgi8ro5VDv7tCqEWwgw.