
Hadoop Source Code Analysis 33: The Main Flow of Child

2014-05-28 08:48
Add the debugging parameters below; with suspend=y the child JVM suspends at startup and waits for a debugger to attach on port 9999:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx200m -Xdebug -Xrunjdwp:transport=dt_socket,address=9999,server=y,suspend=y</value>
</property>

Submit the job:

hadoop jar /opt/hadoop-1.0.0/hadoop-examples-1.0.0.jar wordcount /user/admin/in/yellow2.txt /user/admin/out/128

This generates 2 Map tasks and 2 Reduce tasks.

Executing the Setup task:

args = [127.0.0.1, 40996, attempt_201404282305_0001_m_000003_0, /opt/hadoop-1.0.0/logs/userlogs/job_201404282305_0001/attempt_201404282305_0001_m_000003_0, -1093852866]
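These five values are the arguments the TaskTracker puts on the child JVM's command line: the TaskTracker host, its RPC port, the first task attempt id, the attempt's log directory, and the JVM id. A rough sketch of how Child.main parses them (simplified from the Hadoop 1.0.x source; not the verbatim code):

String host = args[0];
int port = Integer.parseInt(args[1]);
InetSocketAddress address = new InetSocketAddress(host, port);  // TaskTracker's umbilical address
TaskAttemptID firstTaskid = TaskAttemptID.forName(args[2]);     // attempt_201404282305_0001_m_000003_0
String logLocation = args[3];                                   // userlogs directory for this attempt
int jvmIdInt = Integer.parseInt(args[4]);                       // -1093852866
JVMId jvmId = new JVMId(firstTaskid.getJobID(), firstTaskid.isMap(), jvmIdInt);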

Variables:

jvmId=JVMId{id=-1093852866,isMap=true,jobId=job_201404282305_0001}

cwd=/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/attempt_201404282305_0001_m_000003_0/work

jobTokenFile=/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/jobToken

taskOwner=job_201404282305_0001

umbilical = (TaskUmbilicalProtocol) RPC.getProxy(TaskUmbilicalProtocol.class,
    TaskUmbilicalProtocol.versionID, address, defaultConf);

context=JvmContext{jvmId=jvm_201404282305_0001_m_-1093852866,pid="28737"}

myTask=JvmTask{shouldDie=false,
  t=MapTask{taskId=attempt_201404282305_0001_m_000003_0, jobCleanup=false, jobSetup=true, taskCleanup=false,
    jobFile="/tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml"},
  taskStatus=MapTaskStatus{runState=UNASSIGNED}}
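Putting these pieces together: the child builds its JvmContext from the JVM id and its own pid, then asks the TaskTracker for work over the umbilical proxy. A simplified sketch of this step in Child.main (Hadoop 1.0.x; error handling and the surrounding loop omitted):

String pid = System.getenv().get("JVM_PID");  // "28737" in the dump above
JvmContext context = new JvmContext(jvmId, pid);

JvmTask myTask = umbilical.getTask(context);  // RPC to the TaskTracker
if (myTask.shouldDie()) {
  return;                                     // TaskTracker wants this JVM to exit
}
Task task = myTask.getTask();                 // here: the jobSetup MapTask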

job=JobConf{Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml}

currentJobSegmented=false

isCleanup=false

The DistributedFileSystem's workingDir=hdfs://server1:9000/user/admin

A TaskReporter thread is started. It checks the Task.progressFlag variable (an AtomicBoolean): when true, it reports statusUpdate(taskId, taskStatus, jvmContext) over RPC; when false, it sends ping(taskId, jvmContext) over RPC.
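A minimal sketch of that reporting loop, simplified from Task.TaskReporter.run() in Hadoop 1.0.x (the real loop also retries failed updates and refreshes counters first):

while (!taskDone.get()) {
  Thread.sleep(PROGRESS_INTERVAL);                  // 3000 ms by default
  boolean taskFound;
  if (sendProgress) {                               // progressFlag was set by the task
    taskStatus.statusUpdate(taskProgress.get(), taskProgress.toString(), counters);
    taskFound = umbilical.statusUpdate(taskId, taskStatus, jvmContext);
  } else {
    taskFound = umbilical.ping(taskId, jvmContext); // just prove the JVM is alive
  }
  sendProgress = resetProgressFlag();               // read-and-clear the AtomicBoolean
  if (!taskFound) {
    System.exit(66);                                // parent TaskTracker no longer knows us
  }
}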

The Task's jobContext={conf={Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml},
job=JobConf{Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml},
jobId=job_201404282305_0001}

The Task's taskContext={conf={Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml, /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404282305_0001/job.xml}, taskId=attempt_201404282305_0001_m_000003_0, jobId=job_201404282305_0001, status=""}

outputFormat=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat@7099c91f

committer={outputFileSystem=DFS[DFSClient[clientName=DFSClient_attempt_201404282305_0001_m_000003_0, ugi=admin]], outputPath=/user/admin/out/128,
workPath=hdfs://server1:9000/user/admin/out/128/_temporary/_attempt_201404282305_0001_m_000003_0}

The Task's resourceCalculator=org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2288e718

The Task's initCpuCumulativeTime=13620
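initCpuCumulativeTime is the CPU time (in milliseconds) the process tree has already consumed when the task starts; the CPU_MILLISECONDS counter is later computed against this baseline. Roughly what Task.initialize() does here, assuming the Hadoop 1.0.x ResourceCalculatorPlugin API:

if (resourceCalculator != null) {
  // snapshot cumulative CPU time at task start, so later readings
  // can report (current - initCpuCumulativeTime) as the task's usage
  initCpuCumulativeTime =
      resourceCalculator.getProcResourceValues().getCumulativeCpuTime();
}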

After creating the directory /user/admin/out/128/_temporary, the setup task completes.
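That directory is created by the job's OutputCommitter. A sketch of FileOutputCommitter.setupJob() from Hadoop 1.0.x (TEMP_DIR_NAME is "_temporary"; simplified, not the verbatim source):

public void setupJob(JobContext context) throws IOException {
  JobConf conf = context.getJobConf();
  Path outputPath = FileOutputFormat.getOutputPath(conf);  // /user/admin/out/128
  if (outputPath != null) {
    // the setup task's only real work: create <output>/_temporary
    Path tmpDir = new Path(outputPath, FileOutputCommitter.TEMP_DIR_NAME);
    FileSystem fileSys = tmpDir.getFileSystem(conf);
    if (!fileSys.mkdirs(tmpDir)) {
      LOG.error("Mkdirs failed to create " + tmpDir.toString());
    }
  }
}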