
A Study of Hadoop Serialization

2012-06-02 10:58
Unlike Java's built-in serialization mechanism, Hadoop provides its own set of serialization interfaces and classes.

For primitive data types, the Writable interface marks data that can be serialized. The interface defines two methods: write serializes the object into the byte stream behind the DataOutput argument, while readFields reads the serialized bytes back from a DataInput and deserializes them into the Hadoop object:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public interface Writable {
    /**
     * Serialize the fields of this object to <code>out</code>.
     *
     * @param out <code>DataOutput</code> to serialize this object into.
     * @throws IOException
     */
    void write(DataOutput out) throws IOException;

    /**
     * Deserialize the fields of this object from <code>in</code>.
     *
     * <p>For efficiency, implementations should attempt to re-use storage in the
     * existing object where possible.</p>
     *
     * @param in <code>DataInput</code> to deserialize this object from.
     * @throws IOException
     */
    void readFields(DataInput in) throws IOException;
}
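
To make the contract concrete, here is a minimal sketch of a custom Writable (PointWritable is a hypothetical class of my own, not part of Hadoop). The key rule is that readFields must read the fields in exactly the order that write wrote them:

package com.charles.hadoop.serial;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class PointWritable implements Writable {

    private int x;
    private int y;

    // Hadoop needs a no-arg constructor so it can create an empty
    // instance and then call readFields() on it
    public PointWritable() {
    }

    public PointWritable(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize both fields, in a fixed order
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Deserialize in exactly the same order as write()
        x = in.readInt();
        y = in.readInt();
    }
}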


In Hadoop, however, serialization normally happens inside Map-Reduce, so we never see the intermediate serialized form. To capture it, we wrote our own utility method that serializes into a byte array, so that we can print the array's contents and inspect the serialized result:

package com.charles.hadoop.serial;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

/**
 * Description: A utility class that records the serialization output.
 * In Hadoop, serialization and deserialization both go through the Writable
 * interface, which marks a Hadoop object as serializable, so by writing the
 * serialized form into a byte array we can capture and inspect its contents.
 *
 * @author charles.wang
 * @created Jun 2, 2012 9:32:41 AM
 */
public class HadoopSerializationUtil {

    // Serializes a Hadoop object (Writable marks it as serializable) into a
    // byte array and returns the array's contents.
    // Parameter: the value object to be serialized
    // Returns: the serialized byte array
    public static byte[] serialize(Writable writable) throws IOException {
        // Create a byte array output stream
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Wrap it in a DataOutputStream that will receive the serialized bytes
        DataOutputStream dataout = new DataOutputStream(out);
        // Let the Hadoop object serialize itself into the stream
        writable.write(dataout);
        dataout.close();
        // Return the serialized bytes
        return out.toByteArray();
    }

}
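
For completeness, the reverse direction is just as short. The following deserialize method is my own sketch, not part of the original article (the class name HadoopDeserializationUtil is hypothetical); it mirrors serialize by wrapping the byte array in a DataInputStream and letting readFields repopulate an existing Writable:

package com.charles.hadoop.serial;

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class HadoopDeserializationUtil {

    // Counterpart to HadoopSerializationUtil.serialize(): reads the serialized
    // bytes back into the fields of an existing Writable object
    public static void deserialize(Writable writable, byte[] bytes) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(bytes);
        // Wrap the byte array so readFields() can consume it as a DataInput
        DataInputStream dataIn = new DataInputStream(in);
        // readFields() repopulates the existing object from the stream
        writable.readFields(dataIn);
        dataIn.close();
    }

}

Round-tripping should leave a value unchanged: for example, deserializing the bytes of an IntWritable(100) into a fresh IntWritable should yield 100 again.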


To display the original Hadoop primitive value alongside its serialized byte array, we wrote a second utility class, which packages the before-and-after information into a string:

package com.charles.hadoop.serial;

import org.apache.hadoop.io.BooleanWritable;
import org.apache.hadoop.io.ByteWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.StringUtils;

/**
 * Description: A helper class for the experiment. It packages the original
 * primitive value and the contents of the serialized byte array into a
 * string.
 *
 * @author charles.wang
 * @created Jun 2, 2012 9:58:44 AM
 */
public class HadoopExperimentUtil {

    public static String displaySerializationExperimentResult(Writable writable) throws Exception {

        String typeInfo = null;
        String primaryValueInfo = null;
        byte[] serializedHadoopValue = null;
        String lengthInfo = null;
        String serializeValueInfo = null;

        // Get the class name of the argument object
        String className = writable.getClass().getName();

        if (className.equals("org.apache.hadoop.io.BooleanWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.BooleanWritable" + "\n";
            primaryValueInfo = "Original value: " + ((BooleanWritable) writable).get() + "\n";
        } else if (className.equals("org.apache.hadoop.io.ByteWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.ByteWritable" + "\n";
            primaryValueInfo = "Original value: " + ((ByteWritable) writable).get() + "\n";
        } else if (className.equals("org.apache.hadoop.io.IntWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.IntWritable" + "\n";
            primaryValueInfo = "Original value: " + ((IntWritable) writable).get() + "\n";
        } else if (className.equals("org.apache.hadoop.io.FloatWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.FloatWritable" + "\n";
            primaryValueInfo = "Original value: " + ((FloatWritable) writable).get() + "\n";
        } else if (className.equals("org.apache.hadoop.io.LongWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.LongWritable" + "\n";
            primaryValueInfo = "Original value: " + ((LongWritable) writable).get() + "\n";
        } else if (className.equals("org.apache.hadoop.io.DoubleWritable")) {
            typeInfo = "Hadoop type under test: " + "org.apache.hadoop.io.DoubleWritable" + "\n";
            primaryValueInfo = "Original value: " + ((DoubleWritable) writable).get() + "\n";
        }

        // Use our own utility method, which ultimately calls the write method
        // of the Writable interface to perform the serialization
        serializedHadoopValue = HadoopSerializationUtil.serialize(writable);
        // Record the length of the serialized byte array
        lengthInfo = "Length of serialized byte array: " + serializedHadoopValue.length + "\n";
        // Use Hadoop's StringUtils to render the serialized bytes in hexadecimal
        serializeValueInfo = "Serialized value: " + StringUtils.byteToHexString(serializedHadoopValue) + "\n";

        // Return all the information
        return "\n" + typeInfo + primaryValueInfo + lengthInfo + serializeValueInfo + "\n";

    }
}
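
As an aside, comparing fully qualified class-name strings is fragile; instanceof checks are the more idiomatic way to write the same dispatch. A minimal sketch of the alternative (my own rewrite, showing only the first two branches):

// Same dispatch with instanceof: no hard-coded class-name strings,
// and it keeps working if the classes are ever repackaged
if (writable instanceof BooleanWritable) {
    typeInfo = "Hadoop type under test: " + writable.getClass().getName() + "\n";
    primaryValueInfo = "Original value: " + ((BooleanWritable) writable).get() + "\n";
} else if (writable instanceof IntWritable) {
    typeInfo = "Hadoop type under test: " + writable.getClass().getName() + "\n";
    primaryValueInfo = "Original value: " + ((IntWritable) writable).get() + "\n";
}
// ... the remaining Writable types follow the same pattern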


Finally, we need a driver class for the experiment. It creates Hadoop data objects of various types (all of them Writable), passes each one to the experiment utility in turn, and prints the before-and-after serialization information to the console:

package com.charles.hadoop.serial;

import org.apache.hadoop.io.BooleanWritable;
import org.apache.hadoop.io.ByteWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;

/**
 * Description: Runs the experiment.
 *
 * @author charles.wang
 * @created Jun 2, 2012 9:43:57 AM
 */
public class HadoopSerializationMain {

    /**
     * @param args
     */
    public static void main(String[] args) throws Exception {

        BooleanWritable hadoopBooleanValue = new BooleanWritable(false);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopBooleanValue));

        byte b = 3 & 0x0f;
        ByteWritable hadoopByteValue = new ByteWritable(b);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopByteValue));

        IntWritable hadoopIntValue = new IntWritable(100);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopIntValue));

        FloatWritable hadoopFloatValue = new FloatWritable(73.54f);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopFloatValue));

        LongWritable hadoopLongValue = new LongWritable(82L);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopLongValue));

        DoubleWritable hadoopDoubleValue = new DoubleWritable(100.302d);
        System.out.println(HadoopExperimentUtil.displaySerializationExperimentResult(hadoopDoubleValue));
    }

}


The console output is as follows:

Hadoop type under test: org.apache.hadoop.io.BooleanWritable
Original value: false
Length of serialized byte array: 1
Serialized value: 00

Hadoop type under test: org.apache.hadoop.io.ByteWritable
Original value: 3
Length of serialized byte array: 1
Serialized value: 03

Hadoop type under test: org.apache.hadoop.io.IntWritable
Original value: 100
Length of serialized byte array: 4
Serialized value: 00000064

Hadoop type under test: org.apache.hadoop.io.FloatWritable
Original value: 73.54
Length of serialized byte array: 4
Serialized value: 4293147b

Hadoop type under test: org.apache.hadoop.io.LongWritable
Original value: 82
Length of serialized byte array: 8
Serialized value: 0000000000000052

Hadoop type under test: org.apache.hadoop.io.DoubleWritable
Original value: 100.302
Length of serialized byte array: 8
Serialized value: 40591353f7ced917


From these results it is easy to see that the length of Hadoop's serialized form exactly matches the number of bytes the corresponding Java primitive type occupies: 1 byte for boolean and byte, 4 bytes for int and float, and 8 bytes for long and double.
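
As a quick cross-check (my own addition, not from the original article), the hex strings above are simply the big-endian encodings of the Java primitives: DataOutputStream.writeDouble and writeFloat emit exactly the IEEE 754 bit patterns, so the serialized values can be reproduced with the standard Java conversion methods:

public class EncodingCheck {
    public static void main(String[] args) {
        // Prints 40591353f7ced917, matching the serialized DoubleWritable above
        System.out.println(Long.toHexString(Double.doubleToLongBits(100.302d)));
        // Prints 4293147b, matching the serialized FloatWritable above
        System.out.println(Integer.toHexString(Float.floatToIntBits(73.54f)));
    }
}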