
Defining a DoubleArray and writing it as the value into a SequenceFile

2016-01-20 20:25
1) The code:

/**
 * Created with IntelliJ IDEA.
 * User: hadoop
 * Date: 16-1-20
 * Time: 7:30 PM
 * To change this template use File | Settings | File Templates.
 */
import org.apache.hadoop.io.*;

public class DoubleWritableArray {

    public static class DoubleArray extends ArrayWritable {
        public DoubleArray() {
            super(DoubleWritable.class);
        }

        // Unwrap a DoubleWritable[] into a plain double[].
        public static double[] convert2double(DoubleWritable[] w) {
            double[] value = new double[w.length];
            for (int i = 0; i < value.length; i++) {
                value[i] = w[i].get();
            }
            return value;
        }
    }

    public static void main(String[] args) {
        ArrayWritable aw = new ArrayWritable(DoubleWritable.class);
        aw.set(new DoubleWritable[]{new DoubleWritable(4.34), new DoubleWritable(6.56),
                new DoubleWritable(9.56)});

        DoubleWritable[] values = (DoubleWritable[]) aw.get();
        for (DoubleWritable val1 : values) {
            System.out.println(val1);
        }

        // Compared with ArrayWritable, the new DoubleArray only saves passing
        // DoubleWritable.class to the constructor.
        DoubleArray d = new DoubleArray();
        d.set(new DoubleWritable[]{new DoubleWritable(4.34), new DoubleWritable(6.56),
                new DoubleWritable(9.56)});

        double[] temp = DoubleArray.convert2double((DoubleWritable[]) d.get());
        for (double val : temp) {
            System.out.println(val);
        }
    }
}


To use ArrayWritable as a reduce input value, you must create a subclass of it that provides a no-argument constructor, because Hadoop instantiates value objects reflectively during deserialization.



Source: http://grepcode.com/file/repo1.maven.org/maven2/org.jvnet.hudson.hadoop/hadoop-core/0.19.1-hudson-2/org/apache/hadoop/io/ArrayWritable.java#ArrayWritable.toArray%28%29
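To see why the no-argument constructor matters without needing a Hadoop classpath, here is a minimal plain-Java sketch. The `FakeArrayWritable`/`FakeDoubleArray` names are illustrative stand-ins, not Hadoop classes; the `instantiate` method mimics what the deserializer effectively does when it creates a value object reflectively.

```java
public class NoArgDemo {
    // Stand-in for ArrayWritable: its only constructor takes the element class,
    // so it has no no-arg constructor.
    static class FakeArrayWritable {
        final Class<?> valueClass;
        FakeArrayWritable(Class<?> valueClass) { this.valueClass = valueClass; }
    }

    // Stand-in for DoubleArray: the subclass supplies the element class itself,
    // so a no-arg constructor exists and reflection can succeed.
    static class FakeDoubleArray extends FakeArrayWritable {
        FakeDoubleArray() { super(Double.class); }
    }

    // Roughly what the framework's deserializer does to create a value object.
    static <T> T instantiate(Class<T> clazz) {
        try {
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("no usable no-arg constructor", e);
        }
    }

    public static void main(String[] args) {
        // Works: FakeDoubleArray has a no-arg constructor.
        System.out.println(instantiate(FakeDoubleArray.class).valueClass.getSimpleName());
        // Fails: FakeArrayWritable itself cannot be instantiated this way.
        try {
            instantiate(FakeArrayWritable.class);
        } catch (IllegalStateException e) {
            System.out.println("base class has no no-arg constructor");
        }
    }
}
```

The same failure mode is what you would hit at runtime if you declared a raw `ArrayWritable` as a reducer's input value type.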

2) Reading a txt file and writing it into a SequenceFile as <long, DoubleArray> key/value pairs

package convert;

/**
 * Created with IntelliJ IDEA.
 * User: hadoop
 * Date: 16-1-19
 * Time: 3:09 PM
 * To change this template use File | Settings | File Templates.
 */
import java.io.File;
import java.io.IOException;
import java.net.URI;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcData/bDoubleArraySeq";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);
        LongWritable key = new LongWritable();
        // DoubleArray is the ArrayWritable subclass defined in part 1.
        DoubleWritableArray.DoubleArray value = new DoubleWritableArray.DoubleArray();
        SequenceFile.Writer writer = null;
        try {
            writer = SequenceFile.createWriter(fs, conf, path, key.getClass(),
                    value.getClass());

            final LineIterator it2 = FileUtils.lineIterator(
                    new File("/home/hadoop/srcData/transB.txt"), "UTF-8");
            try {
                int i = 0;
                String[] strings;
                DoubleWritable[] row;
                while (it2.hasNext()) {
                    ++i;
                    final String line = it2.nextLine();
                    key.set(i);
                    // Each line is a tab-separated row of doubles.
                    strings = line.split("\t");
                    row = new DoubleWritable[strings.length];
                    for (int j = 0; j < row.length; j++) {
                        row[j] = new DoubleWritable(Double.parseDouble(strings[j]));
                    }
                    value.set(row);
                    writer.append(key, value);
                }
            } finally {
                it2.close();
            }
        } finally {
            IOUtils.closeStream(writer);
        }
        System.out.println("ok");
    }
}
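The heart of the writer loop is the per-line conversion: split a tab-separated line and parse each field into a double before wrapping it in DoubleWritable. That step can be sketched and checked in plain Java, with no Hadoop dependency; the class and method names here are illustrative, not part of the original code.

```java
import java.util.Arrays;

public class LineParseDemo {
    // Split a tab-separated line into a double[] row, as the writer loop
    // does before wrapping each value in a DoubleWritable.
    static double[] parseLine(String line) {
        String[] fields = line.split("\t");
        double[] row = new double[fields.length];
        for (int i = 0; i < fields.length; i++) {
            row[i] = Double.parseDouble(fields[i]);
        }
        return row;
    }

    public static void main(String[] args) {
        double[] row = parseLine("4.34\t6.56\t9.56");
        System.out.println(Arrays.toString(row)); // prints [4.34, 6.56, 9.56]
    }
}
```

Note that `Double.parseDouble` throws NumberFormatException on malformed fields, so a real ingest job would typically validate or skip bad lines rather than let the whole write fail.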