
Hadoop Part 2 ---- Setting Up a Pseudo-Distributed Hadoop Service

2015-08-27 11:19
Apache Hadoop is an open-source software framework, released under the Apache 2.0 license, that supports data-intensive distributed applications. It supports applications running on large clusters built from commodity hardware. Hadoop was implemented independently from Google's published papers on MapReduce and the Google File System. The framework transparently provides applications with both reliability and data motion. It implements a programming paradigm named MapReduce: an application is divided into many small fragments of work, each of which can be executed or re-executed on any node in the cluster. In addition, Hadoop provides a distributed file system that stores data across all of the compute nodes, delivering very high aggregate bandwidth to the cluster. The design of MapReduce and the distributed file system allows the framework to handle node failures automatically. It enables applications to work with thousands of independent computers and petabytes of data. The Apache Hadoop "platform" is now commonly understood to include the Hadoop kernel, MapReduce, the Hadoop Distributed File System (HDFS), and a number of related projects such as Apache Hive and Apache HBase. <http://zh.wikipedia.org/wiki/Apache_Hadoop>

IP: 10.15.62.228

System environment:
# uname -srn
Linux localhost 2.6.32-358.el6.x86_64

Resolve dependencies:
# yum groupinstall "Server Platform Development" "Development Tools" -y

Install the JDK:
# tar xf jdk-8u5-linux-x64.gz -C /usr/local/
# cd /usr/local/
# ln -sv /usr/local/jdk-8u5-linux-x64 /usr/local/java

Export the Java environment variables:
# vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/local/java
export JAVA_BIN=/usr/local/java/bin
export PATH=$PATH:$JAVA_BIN
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Test the installed JDK:
# java -version
java version "1.7.0_09-icedtea"
OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)

Install Hadoop. First create the user that will run Hadoop:
# useradd hadoop
# echo "password" | passwd --stdin hadoop
# tar xf hadoop-1.0.3.tar.gz -C /usr/local/
# cd /usr/local/hadoop-1.0.3/
# chown -R hadoop:hadoop ./*
# ln -sv /usr/local/hadoop-1.0.3/ /usr/local/hadoop

Export the Hadoop environment variables:
# vim /etc/profile.d/hadoop.sh
HADOOP_PREFIX=/usr/local/hadoop
PATH=$HADOOP_PREFIX/bin:$PATH
export HADOOP_PREFIX PATH

# . /etc/profile.d/hadoop.sh
# su - hadoop

Configure the hadoop user for key-based SSH login to the local host, so that Hadoop can remotely start the Hadoop processes on each node and perform monitoring and other management work:
$ ssh-keygen -t rsa -P ''
$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@localhost

Test the installed Hadoop:
$ hadoop -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

Edit the following three Hadoop configuration files: core-site.xml, hdfs-site.xml, and mapred-site.xml. First edit core-site.xml; in fs.default.name it is best to use the HOST:PORT form.
[hadoop@zabbix conf]$ vim /usr/local/hadoop/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/temp</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://zabbix.zkg.com:9000</value>
</property>
</configuration>
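Since fs.default.name is best given in hdfs://HOST:PORT form, the value above can be sanity-checked like any URI. A minimal Python sketch (the hostname and port are this article's example values, not general requirements):

```python
from urllib.parse import urlparse

# Parse the value configured for fs.default.name above.
uri = urlparse("hdfs://zabbix.zkg.com:9000")

assert uri.scheme == "hdfs"              # HDFS URI scheme
assert uri.hostname == "zabbix.zkg.com"  # the HOST part
assert uri.port == 9000                  # the PORT part
print(uri.hostname, uri.port)
```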
Edit the hdfs-site.xml configuration file:
[hadoop@zabbix conf]$ vim /usr/local/hadoop/conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/hadoop/temp/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoop/temp/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
</configuration>
Edit the mapred-site.xml configuration file:
[hadoop@zabbix conf]$ vim /usr/local/hadoop/conf/mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<!--<value>10.15.62.228:9001</value>-->
<value>zabbix.zkg.com:9001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/hadoop/temp/local</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/hadoop/temp/system</value>
</property>
</configuration>
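All three files share the same flat layout: a <configuration> root holding <property> elements, each with <name> and <value> children. As a hedged illustration, this Python sketch parses that structure with the standard library; the inline string mirrors the mapred-site.xml above, and in practice you would pass the real file path to ET.parse() instead:

```python
import xml.etree.ElementTree as ET

# Inline sample modeled on the mapred-site.xml shown above.
conf_xml = """<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>zabbix.zkg.com:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/hadoop/temp/local</value>
  </property>
</configuration>"""

root = ET.fromstring(conf_xml)
# Every Hadoop *-site.xml is a flat list of <property><name/><value/></property>.
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props["mapred.job.tracker"])
```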
Create the directory specified by hadoop.tmp.dir and change its owner and group to the user that runs Hadoop (hadoop):
# mkdir -pv /hadoop/temp
mkdir: created directory `/hadoop'
mkdir: created directory `/hadoop/temp'
# chown -R hadoop.hadoop /hadoop/*

Format the name node:
$ hadoop namenode -format
Start Hadoop. Hadoop ships with its own startup script, start-all.sh (and a matching stop script, stop-all.sh), located in Hadoop's bin directory:
[hadoop@zabbix conf]$ start-all.sh
starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-zabbix.server.com.out
zabbix.zkg.com: Warning: $HADOOP_HOME is deprecated.
zabbix.zkg.com:
zabbix.zkg.com: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-zabbix.zkg.com.out
zabbix.zkg.com: Warning: $HADOOP_HOME is deprecated.
zabbix.zkg.com:
zabbix.zkg.com: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-zabbix.zkg.com.out
starting jobtracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-jobtracker-zabbix.server.com.out
zabbix.zkg.com: Warning: $HADOOP_HOME is deprecated.
zabbix.zkg.com:
zabbix.zkg.com: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-tasktracker-zabbix.zkg.com.out
Use jps to verify that Hadoop started correctly:
[hadoop@zabbix conf]$ jps
3168 DataNode
3281 SecondaryNameNode
3553 Jps
3059 NameNode
3369 JobTracker
3499 TaskTracker
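The jps check above can also be done programmatically. A sketch, assuming the jps output has been captured as text (shown inline here; actually running jps via subprocess is left out):

```python
# Verify the five daemons a pseudo-distributed Hadoop 1.x node should run,
# given `jps` output captured as a string.
jps_output = """3168 DataNode
3281 SecondaryNameNode
3553 Jps
3059 NameNode
3369 JobTracker
3499 TaskTracker"""

expected = {"NameNode", "DataNode", "SecondaryNameNode", "JobTracker", "TaskTracker"}
running = {line.split()[1] for line in jps_output.splitlines() if line.strip()}

missing = expected - running
assert not missing, f"daemons not running: {missing}"
print("all daemons up")
```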
$ ss -untlp | grep java
tcp LISTEN 0 128 :::50060 :::* users:(("java",8022,83))
tcp LISTEN 0 50  :::38829 :::* users:(("java",7683,61))
tcp LISTEN 0 128 ::ffff:127.0.0.1:56334 :::* users:(("java",8022,66))
tcp LISTEN 0 128 :::50030 :::* users:(("java",7888,79))
tcp LISTEN 0 128 :::50070 :::* users:(("java",7568,82))
tcp LISTEN 0 50  :::50010 :::* users:(("java",7683,72))
tcp LISTEN 0 128 :::50075 :::* users:(("java",7683,73))
tcp LISTEN 0 50  :::47517 :::* users:(("java",7568,61))
tcp LISTEN 0 128 :::50020 :::* users:(("java",7683,79))
tcp LISTEN 0 50  :::41540 :::* users:(("java",7803,61))
tcp LISTEN 0 128 ::ffff:10.15.62.228:9000 :::* users:(("java",7568,71))
tcp LISTEN 0 128 ::ffff:10.15.62.228:9001 :::* users:(("java",7888,68))
tcp LISTEN 0 128 :::50090 :::* users:(("java",7803,74))
tcp LISTEN 0 50  :::50698 :::* users:(("java",7888,61))

Running the hadoop command by itself lists the available subcommands.
Use hadoop fs -ls to list directories. If it reports ls: Cannot access .: No such file or directory., the user's home directory does not exist in HDFS yet; create a test directory first (e.g. with hadoop fs -mkdir test), then list again with hadoop fs -ls:
Create a local test file:
$ vim had.txt
hello word word
This is a test file!
welcome to hadoop!

Copy the file into /user/hadoop/test:
$ hadoop fs -put ~/had.txt test
$ hadoop fs -ls test
Found 1 items
-rw-r--r--   1 hadoop supergroup         56 2014-07-25 14:13 /user/hadoop/test/had.txt

View the file contents:
$ hadoop fs -cat test/*
hello word word
This is a test file!
welcome to hadoop!

Leave Hadoop's safe mode:
$ hadoop dfsadmin -safemode leave

Run the wordcount example program as a test:
$ hadoop jar /usr/local/hadoop/hadoop-examples-1.0.3.jar wordcount test output
14/07/25 14:29:56 INFO input.FileInputFormat: Total input paths to process : 1
14/07/25 14:29:56 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/25 14:29:56 WARN snappy.LoadSnappy: Snappy native library not loaded
14/07/25 14:29:57 INFO mapred.JobClient: Running job: job_201407251359_0001
14/07/25 14:29:58 INFO mapred.JobClient:  map 0% reduce 0%
14/07/25 14:30:15 INFO mapred.JobClient:  map 100% reduce 0%
14/07/25 14:30:27 INFO mapred.JobClient:  map 100% reduce 100%
14/07/25 14:30:32 INFO mapred.JobClient: Job complete: job_201407251359_0001
14/07/25 14:30:32 INFO mapred.JobClient: Counters: 29
14/07/25 14:30:32 INFO mapred.JobClient:   Map-Reduce Framework
14/07/25 14:30:32 INFO mapred.JobClient:     Spilled Records=20
14/07/25 14:30:32 INFO mapred.JobClient:     Map output materialized bytes=117
14/07/25 14:30:32 INFO mapred.JobClient:     Reduce input records=10
14/07/25 14:30:32 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=5486075904
14/07/25 14:30:32 INFO mapred.JobClient:     Map input records=5
14/07/25 14:30:32 INFO mapred.JobClient:     SPLIT_RAW_BYTES=114
14/07/25 14:30:32 INFO mapred.JobClient:     Map output bytes=100
14/07/25 14:30:32 INFO mapred.JobClient:     Reduce shuffle bytes=117
14/07/25 14:30:32 INFO mapred.JobClient:     Physical memory (bytes) snapshot=247083008
14/07/25 14:30:32 INFO mapred.JobClient:     Reduce input groups=10
14/07/25 14:30:32 INFO mapred.JobClient:     Combine output records=10
14/07/25 14:30:32 INFO mapred.JobClient:     Reduce output records=10
14/07/25 14:30:32 INFO mapred.JobClient:     Map output records=11
14/07/25 14:30:32 INFO mapred.JobClient:     Combine input records=11
14/07/25 14:30:32 INFO mapred.JobClient:     CPU time spent (ms)=5820
14/07/25 14:30:32 INFO mapred.JobClient:     Total committed heap usage (bytes)=138809344
14/07/25 14:30:32 INFO mapred.JobClient:   File Input Format Counters
14/07/25 14:30:32 INFO mapred.JobClient:     Bytes Read=56
14/07/25 14:30:32 INFO mapred.JobClient:   FileSystemCounters
14/07/25 14:30:32 INFO mapred.JobClient:     HDFS_BYTES_READ=170
14/07/25 14:30:32 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=43211
14/07/25 14:30:32 INFO mapred.JobClient:     FILE_BYTES_READ=117
14/07/25 14:30:32 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=71
14/07/25 14:30:32 INFO mapred.JobClient:   Job Counters
14/07/25 14:30:32 INFO mapred.JobClient:     Launched map tasks=1
14/07/25 14:30:32 INFO mapred.JobClient:     Launched reduce tasks=1
14/07/25 14:30:32 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10875
14/07/25 14:30:32 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/25 14:30:32 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=17994
14/07/25 14:30:32 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/07/25 14:30:32 INFO mapred.JobClient:     Data-local map tasks=1
14/07/25 14:30:32 INFO mapred.JobClient:   File Output Format Counters
14/07/25 14:30:32 INFO mapred.JobClient:     Bytes Written=71

Check the output directory; if it contains the following, the job ran successfully:
$ hadoop fs -ls output
Found 3 items
-rw-r--r--   1 hadoop supergroup          0 2014-07-25 14:30 /user/hadoop/output/_SUCCESS
drwxr-xr-x   - hadoop supergroup          0 2014-07-25 14:29 /user/hadoop/output/_logs
-rw-r--r--   1 hadoop supergroup         71 2014-07-25 14:30 /user/hadoop/output/part-r-00000

View the word-count results:
$ hadoop fs -cat /user/hadoop/output/part-r-00000
This	1
a	1
file!	1
hadoop!	1
hello	1
is	1
test	1
to	1
welcome	1
word	2

Viewing Hadoop in the web UI: once started, node status can be checked at http://IP:50070 and job status at http://IP:50030; task execution can be checked at http://IP:50060.
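To see what the wordcount job computed, the same counting can be reproduced locally. A Python sketch, assuming the test-file content shown earlier (tokenizing on whitespace, which is effectively what the Hadoop example's tokenizer does for this input):

```python
from collections import Counter

# Local re-implementation of what the wordcount example computes:
# split the input on whitespace and count each token's occurrences.
# The text mirrors had.txt above.
text = """hello word word
This is a test file!
welcome to hadoop!"""

counts = Counter(text.split())

for word in sorted(counts):  # part-r-00000 is sorted by key
    print(f"{word}\t{counts[word]}")
# "word" appears twice; every other token appears once.
```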