Passing keys and values between Hadoop tasks requires serialization, which involves two important interfaces: Writable and WritableComparable.
1. Writable
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
public class DemoWritable implements Writable {
@Override
public void write(DataOutput out) throws IOException {
// serialize this object's fields to the output stream
}
@Override
public void readFields(DataInput in) throws IOException {
// deserialize the fields from the input stream,
// reading them in the same order they were written
}
}
只有读数据和写数据的方式
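To see the two methods in action without a Hadoop installation, the sketch below defines a stand-in `Writable` interface with the same signatures as `org.apache.hadoop.io.Writable` and round-trips an object through a byte buffer. The class and method names (`PointWritable`, `roundTrip`) are illustrative, not part of Hadoop.

```java
import java.io.*;

// Stand-in for org.apache.hadoop.io.Writable (same method signatures),
// so this example compiles without the Hadoop jars on the classpath.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// A value with two int fields, serialized in field order.
class PointWritable implements Writable {
    int x;
    int y;

    public void write(DataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    public void readFields(DataInput in) throws IOException {
        x = in.readInt();  // must read in the same order as write()
        y = in.readInt();
    }
}

public class WritableDemo {
    // Serialize a PointWritable to bytes, then deserialize into a fresh instance.
    static PointWritable roundTrip(PointWritable p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            p.write(new DataOutputStream(bytes));
            PointWritable copy = new PointWritable();
            copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        PointWritable p = new PointWritable();
        p.x = 3;
        p.y = 7;
        PointWritable copy = roundTrip(p);
        System.out.println(copy.x + "," + copy.y); // prints 3,7
    }
}
```

Note that `readFields` must consume the fields in exactly the order `write` produced them; the stream carries no field names, only raw bytes.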
2. WritableComparable
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
public class DemoWritable implements WritableComparable<DemoWritable> {
@Override
public void write(DataOutput out) throws IOException {
// serialize this object's fields to the output stream
}
@Override
public void readFields(DataInput in) throws IOException {
// deserialize the fields from the input stream,
// reading them in the same order they were written
}
@Override
public int compareTo(DemoWritable other) {
// return a negative number, zero, or a positive number
// depending on whether this key sorts before, equal to, or after other
return 0;
}
}
It simply adds a compareTo method on top of Writable, which the framework uses during the sort/shuffle phase to order keys and to decide whether two keys are the same (i.e., belong to the same group).
In Hadoop, a Key's data type must implement WritableComparable, while a Value's data type only needs to implement Writable. Any type usable as a Key can therefore also be used as a Value, but a type usable only as a Value cannot necessarily be used as a Key.
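The sorting behavior described above can be sketched with a stand-in `WritableComparable` interface (mirroring the signatures of `org.apache.hadoop.io.WritableComparable`, since the Hadoop jars aren't assumed here). The key class `IntKey` is illustrative; sorting a list of keys with `Collections.sort` stands in for what the shuffle's key sort does with compareTo.

```java
import java.io.*;
import java.util.*;

// Stand-in for org.apache.hadoop.io.WritableComparable (same method signatures).
interface WritableComparable<T> extends Comparable<T> {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// A key ordered by its int value.
class IntKey implements WritableComparable<IntKey> {
    int value;

    IntKey(int value) {
        this.value = value;
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(value);
    }

    public void readFields(DataInput in) throws IOException {
        value = in.readInt();
    }

    public int compareTo(IntKey other) {
        // negative / zero / positive; zero means "same key", so these
        // records would be grouped together in the reduce phase
        return Integer.compare(value, other.value);
    }
}

public class SortDemo {
    public static void main(String[] args) {
        List<IntKey> keys = new ArrayList<>(
                Arrays.asList(new IntKey(5), new IntKey(1), new IntKey(3)));
        Collections.sort(keys); // uses compareTo, like the shuffle's key sort
        for (IntKey k : keys) {
            System.out.println(k.value); // prints 1, 3, 5 in order
        }
    }
}
```

Because compareTo returning zero is what groups records under one key, it must be consistent: two keys that serialize to the same bytes should compare equal.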