
CentOS 5.5 + Hadoop 2.6

2015-11-15 18:21

Hadoop deployment

I. Preparation:

IP:

Gateway

192.168.78.2

 

IP range

192.168.78.128

192.168.78.254

 

master:192.168.78.130

slave1:192.168.78.131

slave2:192.168.78.132

 

I. Host-side VMware network setup

1. Open "Network Neighborhood" properties, open the properties of "VMware Network Adapter VMnet8", double-click "Internet Protocol", and set it to obtain the IP address and DNS server automatically.

 

2. Right-click "My Computer", open Manage, go to "Services and Applications" — "Services", and check that VMware DHCP Service and VMware NAT Service are running; both must be started.

 

yum install openssh-server openssh-clients

 

 

Configure a static IP

Three configuration files are involved:

 

/etc/sysconfig/network

/etc/sysconfig/network-scripts/ifcfg-eth0

/etc/resolv.conf

 

First edit vi /etc/sysconfig/network as follows:

NETWORKING=yes
HOSTNAME=localhost.localdomain
GATEWAY=192.168.78.2   ## gateway

 

 

Change eth* to eth0 in:

vi /etc/udev/rules.d/70-persistent-net.rules
vi /etc/sysconfig/network-scripts/ifcfg-eth0

 

 

 

 

Demo

Then edit vi /etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE="eth1"                ## set to a usable eth* name (see the udev rules below)
#BOOTPROTO="dhcp"
BOOTPROTO="static"           ## changed
IPADDR=192.168.78.132        ## new IP
NETMASK=255.255.255.0        ## changed
HWADDR="00:0c:29:60:74:a6"   ## set to the usable MAC address (ATTR{address}) from the udev rules below
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
##UUID="0650f791-7ec6-4da7-8fe2-14049d76c604"
DNS1=192.168.78.2            ## gateway


Note: DNS1 must be set here; otherwise domain-name resolution will not work.
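A quick way to confirm resolution works after the interface is restarted (a minimal check; the hostname below is only an example):

ping -c 3 www.baidu.com    # should resolve the name and get replies if DNS1 is correct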

 -----------------------------------------------------------------------------------------------------------------

 

Problem: Bringing up interface eth0: Device eth0 does not seem to be present.

 vi /etc/udev/rules.d/70-persistent-net.rules

 

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:60:74:a6", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:ff:db:b6", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"


 

slave2:

DEVICE="eth2"
#BOOTPROTO="dhcp"
BOOTPROTO="static"           ## changed (do not put an inline comment on this line or it will not take effect)
IPADDR=192.168.78.132        ## new IP
NETMASK=255.255.255.0        ## changed
HWADDR="00:0c:29:ff:db:b6"
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
##UUID="0650f791-7ec6-4da7-8fe2-14049d76c604"
DNS1=192.168.78.2            ## gateway

------------------------------------------------------------------------------------------------------

 

slave1:

 

# PCI device 0x1022:0x2000 (vmxnet)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:b3:22:59", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

--------------------------------------------

 

DEVICE="eth2"
#BOOTPROTO="dhcp"
BOOTPROTO="static"           ## changed
IPADDR=192.168.78.131        ## new IP
NETMASK=255.255.255.0        ## changed
HWADDR="00:0c:29:b3:22:59"
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
##UUID="0650f791-7ec6-4da7-8fe2-14049d76c604"
DNS1=192.168.78.2            ## gateway

 

 

 -------------------------------------------------------------------------------------------------------------------

 

 master:

 

DEVICE="eth1"
#BOOTPROTO="dhcp"
BOOTPROTO="static"           ## changed
IPADDR=192.168.78.130        ## new IP
NETMASK=255.255.255.0        ## changed
HWADDR="00:0c:29:60:74:a6"
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
##UUID="0650f791-7ec6-4da7-8fe2-14049d76c604"
DNS1=192.168.78.2            ## gateway

 

 

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:60:74:a6", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

 

  

 

 

Finally, configure vi /etc/resolv.conf:

 

nameserver 192.168.78.2

This step can actually be skipped: once the DNS server address is set above, the system updates this file automatically.

 

 

I. Restart the network

1. service network restart   2. rcnetwork restart   3. /etc/rc.d/init.d/network restart
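After restarting, it is worth confirming the new settings are live (a quick sanity check; the device name is eth1 or eth2 depending on the node):

ifconfig -a                # the new IPADDR/NETMASK should appear on the renamed interface
ping -c 3 192.168.78.2     # the gateway should be reachable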

 

II.
Look up the MAC address inside the virtual machine:

00:0C:29:60:74:A6

 

II.
Look up the UUID:

tune2fs -l /dev/sda1 | grep 'UUID'

 

 

Another error looks like this:

 

Bringing up interface eth0: Error: unknown connection: 74f5e2a7-729b-41f2-9c18-93095106d493

 

When this happens, open the eth0 config file, find the UUID line, delete it, save, and then restart the network service.
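The same fix as a one-liner (a sketch only; the path assumes the ifcfg-eth0 file discussed above):

sed -i '/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0   # delete the UUID line in place
service network restart                                       # then restart networking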

 

 

I. master
192.168.78.130

 

 

DEVICE="eth0"
BOOTPROTO=none
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="7ba1abf7-69aa-4c5a-a9a6-bef535730218"
HWADDR=00:0C:29:36:EF:6C
IPADDR=10.1.225.165
PREFIX=24
GATEWAY=10.1.225.1
DNS1=10.1.225.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME="System eth0"
LAST_CONNECT=1427186680


 

 

 

-------------------------------------------------------------------------------------------------------------------

I. Configure Hadoop

 

 

I. Configure host aliases

 

vi /etc/hosts

 

192.168.78.130  node1

192.168.78.131  node2

192.168.78.132  node3

 

ping node1
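To confirm all three aliases resolve and are reachable, a small loop helps (a sketch; the names come from the /etc/hosts entries above):

for h in node1 node2 node3; do ping -c 1 $h > /dev/null && echo "$h OK"; done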

 

 

I. Passwordless login

II. As root, open the sshd_config file with vi /etc/ssh/sshd_config and enable the following three settings:

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

II. Restart the service after configuring:

service sshd restart

 

II. After completing the above steps, clone this virtual machine twice as the hadoop2 and hadoop3 data nodes, and set their IPs.

 

 

II. Configure the keys

ssh-keygen -t rsa

Press Enter at every prompt without typing anything.

 

cd /root/.ssh/

 

192.168.78.131

ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2

192.168.78.132

ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3

 

Enter the slave's password when prompted.
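Once the key is copied, logging in should no longer prompt for a password; a quick check (node names from /etc/hosts above):

ssh root@node2 hostname    # should print the remote hostname without asking for a password
ssh root@node3 hostname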

 

II. Extract and install the JDK and Hadoop

tar -xvzf /usr/dingsai/hadoop-2.6.0.tar.gz
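The heading also mentions the JDK, but only the Hadoop tarball is shown. A minimal sketch for the JDK, assuming a 7u71 tarball was also downloaded to /usr/dingsai (the archive name is an assumption, chosen to match the /usr/java/jdk1.7.0_71 path used later):

mkdir -p /usr/java
tar -xvzf /usr/dingsai/jdk-7u71-linux-x64.tar.gz -C /usr/java   # assumed archive name; yields /usr/java/jdk1.7.0_71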

 

 

 

 

 

 

 

II. Edit vi /usr/dingsai/hadoop-2.6.0/etc/hadoop/core-site.xml with the following content:

Backup: cp /usr/dingsai/hadoop-2.6.0/etc/hadoop/core-site.xml /usr/dingsai/hadoop-2.6.0/etc/hadoop/core-site_bak.xml

 

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.78.130:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>

 

 

 

II. Edit vi /usr/dingsai/hadoop-2.6.0/etc/hadoop/mapred-site.xml with the following content:

cp /usr/dingsai/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template /usr/dingsai/hadoop-2.6.0/etc/hadoop/mapred-site.xml

 

 

cp mapred-site.xml mapred-site_bak.xml

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.78.130:9001</value>
  </property>
</configuration>

 

 

II. Edit vi /usr/dingsai/hadoop-2.6.0/etc/hadoop/hdfs-site.xml with the following content:

cp /usr/dingsai/hadoop-2.6.0/etc/hadoop/hdfs-site.xml /usr/dingsai/hadoop-2.6.0/etc/hadoop/hdfs-site_bak.xml

 

mkdir /usr/dingsai/hadoop-2.6.0/data_name1
mkdir /usr/dingsai/hadoop-2.6.0/data_name2
mkdir /usr/dingsai/hadoop-2.6.0/data_1
mkdir /usr/dingsai/hadoop-2.6.0/data_2

 

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/dingsai/hadoop-2.6.0/data_name1,/usr/dingsai/hadoop-2.6.0/data_name2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/dingsai/hadoop-2.6.0/data_1,/usr/dingsai/hadoop-2.6.0/data_2</value>
  </property>
</configuration>

 

 

 

II. Edit vi /usr/dingsai/hadoop-2.6.0/etc/hadoop/masters so that its content is:

 

 

192.168.78.130

 

II. Edit vi /usr/dingsai/hadoop-2.6.0/etc/hadoop/slaves so that its content is:

 

 

192.168.78.131

192.168.78.132

 

 

 

II. Run on both slaves:

mkdir -p /usr/dingsai/hadoop-2.6.0

 

 

II. Copy from the master

-- not used
for i in $(seq 131 132); do scp -r /usr/dingsai/hadoop-2.6.0 root@192.168.78.$i:/data/; done

 

 

From /usr/dingsai/hadoop-2.6.0,
- copy to node2:
scp -r * root@node2:/usr/dingsai/hadoop-2.6.0

From /usr/dingsai/hadoop-2.6.0,
- copy to node3:
scp -r * root@node3:/usr/dingsai/hadoop-2.6.0

 

II. Format the NameNode

/usr/dingsai/hadoop-2.6.0/bin/hadoop namenode -format

 

 

 

II. Start everything on the master

/usr/dingsai/hadoop-2.6.0/sbin/start-all.sh

 

 /usr/dingsai/hadoop-2.6.0/sbin/stop-all.sh
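After start-all.sh, it is worth checking with jps which daemons actually came up (a rough guide for the default Hadoop 2.x daemon names):

jps    # on the master, expect NameNode, SecondaryNameNode, ResourceManager
jps    # on each slave, expect DataNode, NodeManager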

 

II. Verify

http://192.168.78.130:50070/

 
http://192.168.78.130:8088/
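The same cluster status can also be checked from the command line on the master (a quick sanity check):

/usr/dingsai/hadoop-2.6.0/bin/hdfs dfsadmin -report    # lists the live DataNodes and their capacity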
 

                          

 

II. Upload data:

hadoop fs -put /home/hadoop/part-00000 /

 

This uploads the local file part-00000 to the HDFS root directory.

 

Download a file:

hadoop fs -get /user/admin/aaron/newFile /home/admin/newFile
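Listing the directories afterwards confirms the put/get actually worked (basic check):

hadoop fs -ls /              # the uploaded part-00000 should be listed here
ls -l /home/admin/newFile    # the downloaded local copy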

 

 

II. Query

hadoop fs -cat /part-00000 | grep 2015

 

 

 

 

 

 

 

 

 

 

 

 

 

I. Errors

I. Error: sed: -e expression #1, char 6: unknown option to `s'

Java: ssh: Could not resolve hostname Java: Name or service not known
Client: ssh: Could not resolve hostname Client: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known

 

Check whether the hostname is correct with hostname.
vi /etc/sysconfig/network
Modify HOSTNAME there.
The change takes effect after a reboot.

 

 

Solution:

The problem above is mainly caused by environment variables that are not set; adding the following lines to ~/.bash_profile or /etc/profile fixes it.

# vi /etc/profile   (or: vi ~/.bash_profile)
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Then reload the file with source so the settings take effect:

# source /etc/profile   (or: source ~/.bash_profile)
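A quick way to confirm the variables are actually visible in the current shell (simple check):

echo $HADOOP_COMMON_LIB_NATIVE_DIR    # should print .../lib/native
echo $HADOOP_OPTS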

 

 

 

 

 

Modify:

vi /etc/profile

 

export HADOOP_HOME=/usr/dingsai/hadoop-2.6.0
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

 

 

 

 

Modify:

vi /usr/dingsai/hadoop-2.6.0/libexec/hadoop-config.sh

Search for JAVA (in vi: /JAVA) and set:

export JAVA_HOME=/usr/java/jdk1.7.0_71
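To confirm the JDK path is valid before relying on it (quick check):

/usr/java/jdk1.7.0_71/bin/java -version    # should report java version "1.7.0_71"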

 

 

 

 

Resources:
http://hadoop.apache.org/
Download link for the pre-configured master VM:

http://pan.baidu.com/s/1sjrGtPj

It contains only the master; to add the other two machines you still need to configure passwordless login and so on yourself, and reformat.

 576699909@qq.com 

 