TaggedInputSplit cannot be cast to org.apache.hadoop.mapreduce.lib.input.FileSplit
2016-08-25 17:06
Exception:
java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit cannot be cast to org.apache.hadoop.mapreduce.lib.input.FileSplit
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit cannot be cast to org.apache.hadoop.mapreduce.lib.input.FileSplit
at com.cys.TuiSong.TwoMapper.map(TwoMapper.java:18)
at com.cys.TuiSong.TwoMapper.map(TwoMapper.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
Analysis:
While a mapper is running, you can obtain the corresponding FileSplit, and from it the input path and related information, like this:
(FileSplit) reporter.getInputSplit();   // old mapred API (0.19)
(FileSplit) context.getInputSplit();    // new mapreduce API (0.20+)
However, if the job's inputs are registered with
MultipleInputs.addInputPath(job, new Path(path),
        SequenceFileInputFormat.class, ProfileMapper.class);
then using the cast above inside the mapper throws a ClassCastException:
java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit
cannot be cast to org.apache.hadoop.mapreduce.lib.input.FileSplit
The FileSplit we want is actually the inputSplit member inside TaggedInputSplit: MultipleInputs wraps every real split in a TaggedInputSplit so it can route each split to its own mapper and input format. Unfortunately, TaggedInputSplit is not public in the community release, so the wrapped split cannot be accessed directly.
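That said, when MultipleInputs cannot be given up, the wrapped split can still be reached via reflection. This is a commonly cited workaround, not something from the original post, and since Hadoop classes are not assumed to be on the classpath here, the sketch below demonstrates the reflection pattern on a hypothetical stand-in class that mimics TaggedInputSplit (package-private wrapper, private accessor):

```java
import java.lang.reflect.Method;

public class ReflectSplitDemo {
    // Stand-in for TaggedInputSplit: not public, hides the wrapped split
    // behind a private accessor, just like the real class does.
    static class TaggedSplit {
        private final String inner;                       // stand-in for the wrapped FileSplit
        TaggedSplit(String inner) { this.inner = inner; }
        private String getInputSplit() { return inner; }  // private, like TaggedInputSplit's
    }

    public static void main(String[] args) throws Exception {
        Object split = new TaggedSplit("hdfs://localhost/input/part-00000");

        // The same pattern, applied inside a real mapper, would look like:
        //   InputSplit s = context.getInputSplit();
        //   Method m = s.getClass().getDeclaredMethod("getInputSplit");
        //   m.setAccessible(true);
        //   FileSplit fileSplit = (FileSplit) m.invoke(s);
        Method m = split.getClass().getDeclaredMethod("getInputSplit");
        m.setAccessible(true);                            // bypass the private modifier
        String path = (String) m.invoke(split);
        System.out.println(path);
    }
}
```

Reflection is brittle across Hadoop versions (the internal class and method names can change), so if you only have a single input path anyway, the FileInputFormat solution below is the simpler fix.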
Solution:
Replace
MultipleInputs.addInputPath(job, new Path(path),
        SequenceFileInputFormat.class, ProfileMapper.class);
with a plain
FileInputFormat.addInputPath(job, new Path(""));
(filling in your actual input path), so that the splits handed to the mapper are ordinary FileSplits and the cast succeeds.