
Installing Hadoop 2.8.0 + Hive 2.1.1 on Linux

2018-05-25
1. Download

hadoop-2.8.0.tar.gz http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
apache-hive-2.1.1-bin.tar.gz http://mirror.bit.edu.cn/apache/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
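For a non-interactive fetch, both archives can also be pulled with wget (a sketch; the closer.cgi link above is a mirror-selection page, so the Apache archive URL is substituted for Hadoop here):

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
wget http://mirror.bit.edu.cn/apache/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz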
2. Installation

2.1 Extract the Hadoop archive

cd /usr/local/hadoop

tar -zxvf hadoop-2.8.0.tar.gz

2.2 Add environment variables

vi /etc/profile

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk.x86_64
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar   # a common JDK 8 default; adjust to your layout

export HADOOP_HOME=/usr/local/hadoop/hadoop-2.8.0
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LOG_DIR=$HADOOP_HOME   # logs land in the install root; $HADOOP_HOME/logs is the usual default
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export LD_LIBRARY_PATH=${HADOOP_HOME}/lib/native/:$LD_LIBRARY_PATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC"

2.3 Apply the environment variables

source /etc/profile

2.4 Verify the installation; if the version is printed, Hadoop is installed correctly

hadoop version
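As a quick sanity check, the first line of output should be the version string (the build-info lines that follow vary by build):

hadoop version | head -1
# Hadoop 2.8.0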

2.5 Extract the Hive archive

cd /usr/local/hive

tar -zxvf apache-hive-2.1.1-bin.tar.gz


3. Configuration

Add the environment variables:
vi /etc/profile

export HIVE_HOME=/usr/local/hive/apache-hive-2.1.1-bin
export PATH=$HIVE_HOME/bin:$PATH

Apply them:

source /etc/profile

Create hive-env.sh from its template (the templates live in Hive's conf directory):

cd $HIVE_HOME/conf

cp hive-env.sh.template hive-env.sh

Point it at the Hadoop install path:
vi hive-env.sh

HADOOP_HOME=/usr/local/hadoop/hadoop-2.8.0

Edit the Hadoop core configuration (the path is relative to $HADOOP_HOME):

vi etc/hadoop/core-site.xml

Add the following inside <configuration></configuration>:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node01:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/usr/local/hadoop/tmp</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
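The hadoop.tmp.dir path above must exist and be writable by the account that runs Hadoop; a minimal preparation step (adjust ownership if you run Hadoop as a non-root user):

mkdir -p /usr/local/hadoop/tmp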

vi etc/hadoop/hdfs-site.xml

Add the following inside <configuration></configuration>:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node01:9001</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
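Likewise, create the NameNode and DataNode directories referenced above before formatting. Note that a dfs.replication of 3 assumes three DataNodes; on the single-node layout used in section 4, HDFS will report blocks as under-replicated.

mkdir -p /usr/local/hadoop/hdfs/name /usr/local/hadoop/hdfs/data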

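Hadoop 2.8.0 ships only a template for mapred-site.xml, so create the file first (run from $HADOOP_HOME):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml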
vi etc/hadoop/mapred-site.xml

Add the following inside <configuration></configuration>:

<property>
  <name>mapred.job.tracker</name>
  <value>node01:9001</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/usr/local/hadoop/var</value>
</property>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>


Configure MySQL as the Hive metastore. These connection properties belong in hive-site.xml (create it under Hive's conf directory if it does not exist):

vi $HIVE_HOME/conf/hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://worker2:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
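schematool needs the MySQL JDBC driver on Hive's classpath before the next step; copy the connector jar into Hive's lib directory (the jar version below is only an example, use the one you downloaded):

cp mysql-connector-java-5.1.40-bin.jar $HIVE_HOME/lib/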

Initialize the Hive metastore schema:

schematool -dbType mysql -initSchema
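If initialization succeeds, the recorded schema version can be queried as a sanity check:

schematool -dbType mysql -info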


Create a MySQL user and grant it privileges on the hive database (note that hive-site.xml above connects as root/root; if you use this hadoop account instead, update ConnectionUserName and ConnectionPassword to match):

grant all privileges on hive.* to 'hadoop'@'%' identified by 'hadoop';


4. Startup

Check the current hostname:

hostname

Map the node01 name used in the configuration above to the local address:

vi /etc/hosts

Add the mapping:

127.0.0.1 node01
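Confirm the name resolves:

ping -c 1 node01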

The first time Hadoop is started, the NameNode must be formatted:

hdfs namenode -format

Then start HDFS:

sh /usr/local/hadoop/hadoop-2.8.0/sbin/start-dfs.sh
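Since mapreduce.framework.name is set to yarn above, the YARN daemons can be started the same way; afterwards jps should list the running daemons (PIDs will differ):

sh /usr/local/hadoop/hadoop-2.8.0/sbin/start-yarn.sh
jps
# expected on this single-node setup: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager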