Pseudo-distributed Hadoop on CentOS, configured once more — next time it will be a real cluster!
2013-04-05 23:02
I set this up once before, but it didn't really sink in.
I was tired of reading code,
so I ran through the configuration again to clear my head.
This time it was on a VM at home, with the latest JDK 7u17 and Hadoop 1.1.2.
CentOS version: 6.3 i386.
It worked on the first try.
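The post does not reproduce the configuration files themselves. For reference, a minimal pseudo-distributed setup on Hadoop 1.x typically edits three files under `conf/`; the hostname and ports below are the conventional defaults, not values confirmed by this post:

```xml
<!-- conf/core-site.xml : point the default filesystem at the local NameNode -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml : one replica, since there is only one DataNode -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml : JobTracker on the same host -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

After editing these, the usual sequence is to format the NameNode once (`./hadoop namenode -format`) and then start the daemons with `./start-all.sh`.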
The post I followed this time:
http://bjbxy.blog.51cto.com/854497/352692
Relevant output follows:
# jps
2614 Jps
2280 TaskTracker
1908 NameNode
2110 SecondaryNameNode
2012 DataNode
2169 JobTracker
Report output: (screenshot)
HDFS web admin UI: (screenshot)
Running the WordCount example job:
]# ./hadoop jar hadoop-examples-1.1.2.jar wordcount bxy output
Exception in thread "main" java.io.IOException: Error opening job jar: hadoop-examples-1.1.2.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:90)
Caused by: java.io.FileNotFoundException: hadoop-examples-1.1.2.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:214)
at java.util.zip.ZipFile.<init>(ZipFile.java:144)
at java.util.jar.JarFile.<init>(JarFile.java:153)
at java.util.jar.JarFile.<init>(JarFile.java:90)
at org.apache.hadoop.util.RunJar.main(RunJar.java:88)
The examples jar lives in the Hadoop home directory, not in bin/, which is why it wasn't found; passing the jar's full path fixes it:
[root@localhost bin]# pwd
/usr/local/hadoop/hadoop-1.1.2/bin
[root@localhost bin]# ./hadoop jar /usr/local/hadoop/hadoop-1.1.2/hadoop-examples-1.1.2.jar wordcount bxy output
12/12/20 07:35:43 INFO input.FileInputFormat: Total input paths to process : 1
12/12/20 07:35:43 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/12/20 07:35:43 WARN snappy.LoadSnappy: Snappy native library not loaded
12/12/20 07:35:45 INFO mapred.JobClient: Running job: job_201212200705_0001
12/12/20 07:35:46 INFO mapred.JobClient: map 0% reduce 0%
12/12/20 07:36:10 INFO mapred.JobClient: map 100% reduce 0%
12/12/20 07:36:26 INFO mapred.JobClient: map 100% reduce 100%
12/12/20 07:36:30 INFO mapred.JobClient: Job complete: job_201212200705_0001
12/12/20 07:36:30 INFO mapred.JobClient: Counters: 29
12/12/20 07:36:30 INFO mapred.JobClient: Job Counters
12/12/20 07:36:30 INFO mapred.JobClient: Launched reduce tasks=1
12/12/20 07:36:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=25805
12/12/20 07:36:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/12/20 07:36:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/12/20 07:36:30 INFO mapred.JobClient: Launched map tasks=1
12/12/20 07:36:30 INFO mapred.JobClient: Data-local map tasks=1
12/12/20 07:36:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=15472
12/12/20 07:36:30 INFO mapred.JobClient: File Output Format Counters
12/12/20 07:36:30 INFO mapred.JobClient: Bytes Written=1135
12/12/20 07:36:30 INFO mapred.JobClient: FileSystemCounters
12/12/20 07:36:30 INFO mapred.JobClient: FILE_BYTES_READ=1600
12/12/20 07:36:30 INFO mapred.JobClient: HDFS_BYTES_READ=1280
12/12/20 07:36:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=105526
12/12/20 07:36:30 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=1135
12/12/20 07:36:30 INFO mapred.JobClient: File Input Format Counters
12/12/20 07:36:30 INFO mapred.JobClient: Bytes Read=1166
12/12/20 07:36:30 INFO mapred.JobClient: Map-Reduce Framework
12/12/20 07:36:30 INFO mapred.JobClient: Map output materialized bytes=1600
12/12/20 07:36:30 INFO mapred.JobClient: Map input records=34
12/12/20 07:36:30 INFO mapred.JobClient: Reduce shuffle bytes=1600
12/12/20 07:36:30 INFO mapred.JobClient: Spilled Records=230
12/12/20 07:36:30 INFO mapred.JobClient: Map output bytes=1824
12/12/20 07:36:30 INFO mapred.JobClient: Total committed heap usage (bytes)=131665920
12/12/20 07:36:30 INFO mapred.JobClient: CPU time spent (ms)=8970
12/12/20 07:36:30 INFO mapred.JobClient: Combine input records=169
12/12/20 07:36:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=114
12/12/20 07:36:30 INFO mapred.JobClient: Reduce input records=115
12/12/20 07:36:30 INFO mapred.JobClient: Reduce input groups=115
12/12/20 07:36:30 INFO mapred.JobClient: Combine output records=115
12/12/20 07:36:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=187215872
12/12/20 07:36:30 INFO mapred.JobClient: Reduce output records=115
12/12/20 07:36:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=755154944
12/12/20 07:36:30 INFO mapred.JobClient: Map output records=169
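The counters line up with what WordCount computes: 34 input lines produced 169 words (Map output records), which the combiner collapsed to 115 distinct words (Combine output records = Reduce output records). As a rough sanity check, the same map/combine/reduce pipeline can be sketched in a few lines of Python — the sample text below is invented, not the original `bxy` input:

```python
from collections import Counter

def wordcount(lines):
    # Map phase: emit one (word, 1) pair per whitespace-delimited token
    pairs = [(word, 1) for line in lines for word in line.split()]
    # Combine/reduce phase: sum the counts for each distinct word
    totals = Counter()
    for word, one in pairs:
        totals[word] += one
    return len(pairs), totals  # (map output records, reduced counts)

# Hypothetical sample input; the real job read its 34 lines from HDFS dir 'bxy'
map_records, counts = wordcount(["hello hadoop", "hello hdfs hello mapreduce"])
print(map_records)      # 6 tokens emitted by the map phase
print(counts["hello"])  # 3
print(len(counts))      # 4 distinct words -> reduce output records
```

The real job does the same thing, only with the map and reduce phases running as separate JVM tasks and the intermediate (word, 1) pairs shuffled over HDFS/local disk, which is what the shuffle-bytes and spilled-records counters above measure.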