
Getting Started with Hadoop: Quick Start


Prepare to Start the Hadoop Cluster

Unpack the downloaded Hadoop distribution. In the distribution, edit the file conf/hadoop-env.sh to define at least JAVA_HOME to be the root of your Java installation.
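
For example, you would uncomment the JAVA_HOME line in conf/hadoop-env.sh and point it at your JDK. The path below is only an illustration; substitute your own installation directory:

# in conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun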

Try the following command:

$ bin/hadoop

This will display the usage documentation for the hadoop script.

Now you are ready to start your Hadoop cluster in one of the three supported modes:

Local (Standalone) Mode

Pseudo-Distributed Mode

Fully-Distributed Mode

Standalone Operation

By default, Hadoop is configured to run in a non-distributed
mode, as a single Java process. This is useful for debugging.

The following example copies the unpacked conf directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory. The pattern 'dfs[a-z.]+' matches any token beginning with dfs followed by lowercase letters or dots, so the job effectively counts occurrences of dfs.* property names in the copied XML files.

$ mkdir input

$ cp conf/*.xml input

$ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

$ cat output/*

Here is my run output:

# bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

10/04/27 17:01:10 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

10/04/27 17:01:11 INFO mapred.FileInputFormat: Total input paths to process : 5

10/04/27 17:01:11 INFO mapred.JobClient: Running job: job_local_0001

10/04/27 17:01:11 INFO mapred.FileInputFormat: Total input paths to process : 5

10/04/27 17:01:11 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:11 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:12 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:12 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:12 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:12 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting

10/04/27 17:01:12 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/input/hdfs-site.xml:0+178

10/04/27 17:01:12 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.

10/04/27 17:01:12 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:12 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:12 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:12 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:12 INFO mapred.JobClient: map 100% reduce 0%

10/04/27 17:01:12 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:12 INFO mapred.MapTask: Finished spill 0

10/04/27 17:01:12 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting

10/04/27 17:01:12 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/input/hadoop-policy.xml:0+4190

10/04/27 17:01:12 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.

10/04/27 17:01:12 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:12 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:13 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:13 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:13 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:13 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting

10/04/27 17:01:13 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/input/capacity-scheduler.xml:0+3936

10/04/27 17:01:13 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000002_0' done.

10/04/27 17:01:13 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:13 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:13 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:13 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:13 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:13 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting

10/04/27 17:01:13 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/input/mapred-site.xml:0+178

10/04/27 17:01:13 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000003_0' done.

10/04/27 17:01:13 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:13 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:13 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:13 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:13 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:13 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting

10/04/27 17:01:13 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/input/core-site.xml:0+178

10/04/27 17:01:13 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000004_0' done.

10/04/27 17:01:13 INFO mapred.LocalJobRunner:

10/04/27 17:01:13 INFO mapred.Merger: Merging 5 sorted segments

10/04/27 17:01:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes

10/04/27 17:01:13 INFO mapred.LocalJobRunner:

10/04/27 17:01:13 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting

10/04/27 17:01:13 INFO mapred.LocalJobRunner:

10/04/27 17:01:13 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now

10/04/27 17:01:13 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/root/下载/hadoop-0.20.2/grep-temp-151036151

10/04/27 17:01:13 INFO mapred.LocalJobRunner: reduce > reduce

10/04/27 17:01:13 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.

10/04/27 17:01:13 INFO mapred.JobClient: map 100% reduce 100%

10/04/27 17:01:13 INFO mapred.JobClient: Job complete: job_local_0001

10/04/27 17:01:13 INFO mapred.JobClient: Counters: 13

10/04/27 17:01:13 INFO mapred.JobClient: FileSystemCounters

10/04/27 17:01:13 INFO mapred.JobClient: FILE_BYTES_READ=973951

10/04/27 17:01:13 INFO mapred.JobClient: FILE_BYTES_WRITTEN=1029914

10/04/27 17:01:13 INFO mapred.JobClient: Map-Reduce Framework

10/04/27 17:01:13 INFO mapred.JobClient: Reduce input groups=1

10/04/27 17:01:13 INFO mapred.JobClient: Combine output records=1

10/04/27 17:01:13 INFO mapred.JobClient: Map input records=219

10/04/27 17:01:13 INFO mapred.JobClient: Reduce shuffle bytes=0

10/04/27 17:01:14 INFO mapred.JobClient: Reduce output records=1

10/04/27 17:01:14 INFO mapred.JobClient: Spilled Records=2

10/04/27 17:01:14 INFO mapred.JobClient: Map output bytes=17

10/04/27 17:01:14 INFO mapred.JobClient: Map input bytes=8660

10/04/27 17:01:14 INFO mapred.JobClient: Combine input records=1

10/04/27 17:01:14 INFO mapred.JobClient: Map output records=1

10/04/27 17:01:14 INFO mapred.JobClient: Reduce input records=1

10/04/27 17:01:14 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized

10/04/27 17:01:14 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.

10/04/27 17:01:14 INFO mapred.FileInputFormat: Total input paths to process : 1

10/04/27 17:01:15 INFO mapred.JobClient: Running job: job_local_0002

10/04/27 17:01:15 INFO mapred.FileInputFormat: Total input paths to process : 1

10/04/27 17:01:15 INFO mapred.MapTask: numReduceTasks: 1

10/04/27 17:01:15 INFO mapred.MapTask: io.sort.mb = 100

10/04/27 17:01:15 INFO mapred.MapTask: data buffer = 79691776/99614720

10/04/27 17:01:15 INFO mapred.MapTask: record buffer = 262144/327680

10/04/27 17:01:15 INFO mapred.MapTask: Starting flush of map output

10/04/27 17:01:15 INFO mapred.MapTask: Finished spill 0

10/04/27 17:01:15 INFO mapred.TaskRunner: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting

10/04/27 17:01:15 INFO mapred.LocalJobRunner: file:/root/下载/hadoop-0.20.2/grep-temp-151036151/part-00000:0+111

10/04/27 17:01:15 INFO mapred.TaskRunner: Task 'attempt_local_0002_m_000000_0' done.

10/04/27 17:01:15 INFO mapred.LocalJobRunner:

10/04/27 17:01:15 INFO mapred.Merger: Merging 1 sorted segments

10/04/27 17:01:15 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes

10/04/27 17:01:15 INFO mapred.LocalJobRunner:

10/04/27 17:01:15 INFO mapred.TaskRunner: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting

10/04/27 17:01:15 INFO mapred.LocalJobRunner:

10/04/27 17:01:15 INFO mapred.TaskRunner: Task attempt_local_0002_r_000000_0 is allowed to commit now

10/04/27 17:01:15 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/root/下载/hadoop-0.20.2/output

10/04/27 17:01:15 INFO mapred.LocalJobRunner: reduce > reduce

10/04/27 17:01:15 INFO mapred.TaskRunner: Task 'attempt_local_0002_r_000000_0' done.

10/04/27 17:01:16 INFO mapred.JobClient: map 100% reduce 100%

10/04/27 17:01:16 INFO mapred.JobClient: Job complete: job_local_0002

10/04/27 17:01:16 INFO mapred.JobClient: Counters: 13

10/04/27 17:01:16 INFO mapred.JobClient: FileSystemCounters

10/04/27 17:01:16 INFO mapred.JobClient: FILE_BYTES_READ=640267

10/04/27 17:01:16 INFO mapred.JobClient: FILE_BYTES_WRITTEN=683733

10/04/27 17:01:16 INFO mapred.JobClient: Map-Reduce Framework

10/04/27 17:01:16 INFO mapred.JobClient: Reduce input groups=1

10/04/27 17:01:16 INFO mapred.JobClient: Combine output records=0

10/04/27 17:01:16 INFO mapred.JobClient: Map input records=1

10/04/27 17:01:16 INFO mapred.JobClient: Reduce shuffle bytes=0

10/04/27 17:01:16 INFO mapred.JobClient: Reduce output records=1

10/04/27 17:01:16 INFO mapred.JobClient: Spilled Records=2

10/04/27 17:01:16 INFO mapred.JobClient: Map output bytes=17

10/04/27 17:01:16 INFO mapred.JobClient: Map input bytes=25

10/04/27 17:01:16 INFO mapred.JobClient: Combine input records=0

10/04/27 17:01:16 INFO mapred.JobClient: Map output records=1

10/04/27 17:01:16 INFO mapred.JobClient: Reduce input records=1

# cat output/*

1 dfsadmin

Pseudo-Distributed Operation

Hadoop can also be run on a single node in pseudo-distributed mode, where each Hadoop daemon runs in a separate Java process.

Configuration

Use the following minimal configuration. fs.default.name points clients at the local HDFS NameNode, dfs.replication is set to 1 because a single node has only one DataNode to hold each block, and mapred.job.tracker gives the JobTracker address:

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Set up passphraseless ssh

Now check that you can ssh to localhost without a passphrase:

$ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the
following commands:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
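
If ssh still prompts for a passphrase after this, the usual culprit is permissions: sshd ignores an authorized_keys file that is group- or world-writable. Depending on your system you may also need:

$ chmod 600 ~/.ssh/authorized_keys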

Execution

Format a new distributed filesystem:

$ bin/hadoop namenode -format
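
Note that with the stock configuration the formatted filesystem lands under /tmp/hadoop-${user.name} (visible in the format output further down), so it may not survive a reboot. For anything you want to keep, you would override hadoop.tmp.dir in conf/core-site.xml; the path below is only an example:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop/tmp</value>
</property>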

Start the hadoop daemons:

$ bin/start-all.sh
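
A quick way to verify that all five daemons actually started is the JDK's jps tool; in pseudo-distributed mode you should see something like NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker listed:

$ jps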

The Hadoop daemon log output is written to the ${HADOOP_LOG_DIR} directory (defaults to ${HADOOP_HOME}/logs).
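
To watch a daemon come up, you can tail its log. The file names follow the pattern hadoop-<user>-<daemon>-<hostname>.log, so for a NameNode started as root it would be something like the following (adjust the user and hostname to your machine):

$ tail -f logs/hadoop-root-namenode-$(hostname).log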

Browse the web interfaces for the NameNode and the JobTracker; by default they are available at:

NameNode - http://localhost:50070/

JobTracker - http://localhost:50030/

Copy the input files into the distributed filesystem:

$ bin/hadoop fs -put conf input
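
You can optionally confirm the copy with a listing before running anything:

$ bin/hadoop fs -ls input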

Run some of the examples provided:

$ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

Examine the output files:

Copy the output files from the distributed filesystem to the local filesystem and examine them:

$ bin/hadoop fs -get output output

$ cat output/*

or

View the output files on the distributed filesystem:

$ bin/hadoop fs -cat output/*

When you're done, stop the daemons with:

$ bin/stop-all.sh

Here is my run output:

# bin/hadoop namenode -format

10/04/27 18:33:26 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = jtangfs-ubuntu/127.0.1.1

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 0.20.2

STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

************************************************************/

10/04/27 18:33:27 INFO namenode.FSNamesystem: fsOwner=root,root

10/04/27 18:33:27 INFO namenode.FSNamesystem: supergroup=supergroup

10/04/27 18:33:27 INFO namenode.FSNamesystem: isPermissionEnabled=true

10/04/27 18:33:27 INFO common.Storage: Image file of size 94 saved in 0 seconds.

10/04/27 18:33:27 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.

10/04/27 18:33:27 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at jtangfs-ubuntu/127.0.1.1

************************************************************/

# bin/hadoop fs -put conf input

# bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

10/04/27 19:07:00 INFO mapred.FileInputFormat: Total input paths to process : 17

10/04/27 19:07:01 INFO mapred.JobClient: Running job: job_201004271837_0001

10/04/27 19:07:02 INFO mapred.JobClient: map 0% reduce 0%

10/04/27 19:07:12 INFO mapred.JobClient: map 5% reduce 0%

10/04/27 19:07:16 INFO mapred.JobClient: map 11% reduce 0%

10/04/27 19:07:19 INFO mapred.JobClient: map 23% reduce 0%

10/04/27 19:07:25 INFO mapred.JobClient: map 35% reduce 3%

10/04/27 19:07:28 INFO mapred.JobClient: map 41% reduce 7%

10/04/27 19:07:31 INFO mapred.JobClient: map 52% reduce 11%

10/04/27 19:07:34 INFO mapred.JobClient: map 58% reduce 11%

10/04/27 19:07:37 INFO mapred.JobClient: map 70% reduce 11%

10/04/27 19:07:40 INFO mapred.JobClient: map 70% reduce 13%

10/04/27 19:07:43 INFO mapred.JobClient: map 82% reduce 13%

10/04/27 19:07:46 INFO mapred.JobClient: map 82% reduce 23%

10/04/27 19:07:49 INFO mapred.JobClient: map 94% reduce 23%

10/04/27 19:07:52 INFO mapred.JobClient: map 100% reduce 27%

10/04/27 19:07:58 INFO mapred.JobClient: map 100% reduce 31%

10/04/27 19:08:05 INFO mapred.JobClient: map 100% reduce 100%

10/04/27 19:08:06 INFO mapred.JobClient: Job complete: job_201004271837_0001

10/04/27 19:08:06 INFO mapred.JobClient: Counters: 18

10/04/27 19:08:06 INFO mapred.JobClient: Job Counters

10/04/27 19:08:06 INFO mapred.JobClient: Launched reduce tasks=1

10/04/27 19:08:06 INFO mapred.JobClient: Launched map tasks=17

10/04/27 19:08:06 INFO mapred.JobClient: Data-local map tasks=17

10/04/27 19:08:06 INFO mapred.JobClient: FileSystemCounters

10/04/27 19:08:06 INFO mapred.JobClient: FILE_BYTES_READ=158

10/04/27 19:08:06 INFO mapred.JobClient: HDFS_BYTES_READ=21046

10/04/27 19:08:06 INFO mapred.JobClient: FILE_BYTES_WRITTEN=956

10/04/27 19:08:06 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=280

10/04/27 19:08:06 INFO mapred.JobClient: Map-Reduce Framework

10/04/27 19:08:06 INFO mapred.JobClient: Reduce input groups=7

10/04/27 19:08:06 INFO mapred.JobClient: Combine output records=7

10/04/27 19:08:06 INFO mapred.JobClient: Map input records=632

10/04/27 19:08:06 INFO mapred.JobClient: Reduce shuffle bytes=254

10/04/27 19:08:06 INFO mapred.JobClient: Reduce output records=7

10/04/27 19:08:06 INFO mapred.JobClient: Spilled Records=14

10/04/27 19:08:06 INFO mapred.JobClient: Map output bytes=193

10/04/27 19:08:06 INFO mapred.JobClient: Map input bytes=21046

10/04/27 19:08:06 INFO mapred.JobClient: Combine input records=10

10/04/27 19:08:06 INFO mapred.JobClient: Map output records=10

10/04/27 19:08:06 INFO mapred.JobClient: Reduce input records=7

10/04/27 19:08:06 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.

10/04/27 19:08:07 INFO mapred.FileInputFormat: Total input paths to process : 1

10/04/27 19:08:08 INFO mapred.JobClient: Running job: job_201004271837_0002

10/04/27 19:08:09 INFO mapred.JobClient: map 0% reduce 0%

10/04/27 19:08:20 INFO mapred.JobClient: map 100% reduce 0%

10/04/27 19:08:32 INFO mapred.JobClient: map 100% reduce 100%

10/04/27 19:08:34 INFO mapred.JobClient: Job complete: job_201004271837_0002

10/04/27 19:08:34 INFO mapred.JobClient: Counters: 18

10/04/27 19:08:34 INFO mapred.JobClient: Job Counters

10/04/27 19:08:34 INFO mapred.JobClient: Launched reduce tasks=1

10/04/27 19:08:34 INFO mapred.JobClient: Launched map tasks=1

10/04/27 19:08:34 INFO mapred.JobClient: Data-local map tasks=1

10/04/27 19:08:34 INFO mapred.JobClient: FileSystemCounters

10/04/27 19:08:34 INFO mapred.JobClient: FILE_BYTES_READ=158

10/04/27 19:08:34 INFO mapred.JobClient: HDFS_BYTES_READ=280

10/04/27 19:08:34 INFO mapred.JobClient: FILE_BYTES_WRITTEN=348

10/04/27 19:08:34 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=96

10/04/27 19:08:34 INFO mapred.JobClient: Map-Reduce Framework

10/04/27 19:08:34 INFO mapred.JobClient: Reduce input groups=3

10/04/27 19:08:34 INFO mapred.JobClient: Combine output records=0

10/04/27 19:08:34 INFO mapred.JobClient: Map input records=7

10/04/27 19:08:34 INFO mapred.JobClient: Reduce shuffle bytes=158

10/04/27 19:08:34 INFO mapred.JobClient: Reduce output records=7

10/04/27 19:08:34 INFO mapred.JobClient: Spilled Records=14

10/04/27 19:08:34 INFO mapred.JobClient: Map output bytes=138

10/04/27 19:08:34 INFO mapred.JobClient: Map input bytes=194

10/04/27 19:08:34 INFO mapred.JobClient: Combine input records=0

10/04/27 19:08:34 INFO mapred.JobClient: Map output records=7

10/04/27 19:08:34 INFO mapred.JobClient: Reduce input records=7

# bin/hadoop fs -cat output/*

3 dfs.class

2 dfs.period

1 dfs.file

1 dfs.replication

1 dfs.servers

1 dfsadmin

1 dfsmetrics.log

# bin/stop-all.sh

stopping jobtracker

localhost: stopping tasktracker

stopping namenode

localhost: stopping datanode

localhost: stopping secondarynamenode

Fully-Distributed Operation

For information on setting up fully-distributed, non-trivial clusters, see the Hadoop Cluster Setup guide.

Next step: I'll digest the Hadoop Cluster Setup guide.