18. Custom InputFormat

Copyright notice: https://blog.csdn.net/qq_33598343/article/details/85015898

Requirement:

Read the small files in a directory and merge them, outputting each as: file path + file contents.

Code:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class FcInputFormat extends FileInputFormat<NullWritable, BytesWritable> {
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Never split the original files: each small file becomes one whole split
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        return new FcRecordReader();
    }
}

-----------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FcRecordReader extends RecordReader<NullWritable, BytesWritable> {
    // Each split is one whole file, so there is exactly one record to emit
    private boolean isProcessed = false;
    private FileSplit split;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext context) {
        this.split = (FileSplit) inputSplit;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (!isProcessed) {
            // 1. Allocate a buffer as long as the split, i.e. the whole file
            byte[] buf = new byte[(int) split.getLength()];
            // 2. Get the file's path
            Path path = split.getPath();
            // 3. Get the file system from the path
            FileSystem fs = path.getFileSystem(conf);
            // 4. Open an input stream through the file system
            FSDataInputStream fis = fs.open(path);
            try {
                // 5. Copy the whole file into the buffer
                IOUtils.readFully(fis, buf, 0, buf.length);
                value.set(buf, 0, buf.length);
            } finally {
                // 6. Close the stream (the FileSystem instance is cached and shared,
                //    so it should not be closed here)
                IOUtils.closeStream(fis);
            }
            isProcessed = true;
            return true;
        }
        return false;
    }

    @Override
    public NullWritable getCurrentKey() {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() {
        return value;
    }

    @Override
    public float getProgress() {
        // Either nothing or everything has been read
        return isProcessed ? 1 : 0;
    }

    @Override
    public void close() {
        // Nothing to release: the stream is closed in nextKeyValue()
    }
}
----------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SequenceDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SequenceDriver.class);

        job.setMapperClass(SequenceMapper.class);
        job.setReducerClass(SequenceReducer.class);

        // Plug in the custom input format instead of the default TextInputFormat
        job.setInputFormatClass(FcInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
//        job.setOutputFormatClass(TextOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        FileInputFormat.setInputPaths(job, new Path("B:/test-data/"));
        FileOutputFormat.setOutputPath(job, new Path("B:/test-data/out"));

        boolean b = job.waitForCompletion(true);

        System.out.println(b);
    }
}
-----------------------------------------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class SequenceMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
    private Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // The split is one whole file; use its path as the output key
        FileSplit split = (FileSplit) context.getInputSplit();
        k.set(split.getPath().toString());
    }

    @Override
    protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
        context.write(k, value);
    }
}
--------------------------------------------------------------------
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SequenceReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<BytesWritable> values, Context context) throws IOException, InterruptedException {
        // One record per input file: write each (path, contents) pair out
        for (BytesWritable v : values) {
            context.write(key, v);
        }
    }
}

Output:

A SequenceFile in which each record is one input file: the key is the file's path and the value is the file's contents.

The import statements above must not be removed. This technique isn't used much in practice; the main point is to get familiar with how an InputFormat works and how to write one.
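To verify what the job wrote, the resulting SequenceFile can be dumped directly. Below is a minimal sketch, not part of the original post: the class name OutputDump is invented, and it assumes a single reducer whose output landed in part-r-00000 (the standard name).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class OutputDump {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical path: the single reducer's output file from the job above
        Path path = new Path("B:/test-data/out/part-r-00000");
        try (SequenceFile.Reader reader = new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            Text key = new Text();
            BytesWritable value = new BytesWritable();
            // One record per merged input file: path -> raw bytes
            while (reader.next(key, value)) {
                System.out.println(key + " -> " + value.getLength() + " bytes");
            }
        }
    }
}

Alternatively, hadoop fs -text <file> prints a SequenceFile's records from the command line.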

The default input format is TextInputFormat.

setInputFormat:
TextInputFormat: reads plain text files. The file is broken into lines terminated by LF or CR; the key is each line's byte offset (LongWritable) and the value is the line's contents (Text).
KeyValueTextInputFormat: reads text files whose lines are split in two by a separator: the first part becomes the key and the rest the value; if a line contains no separator, the whole line is the key and the value is empty (see the sketch after this list).
SequenceFileInputFormat: reads SequenceFiles; the key/value types used for reading must match the setOutputKeyClass and setOutputValueClass that SequenceFileOutputFormat was given when the file was written.
SequenceFileInputFilter: reads only the records of a SequenceFile that satisfy a filter, specified via setFilterClass. Three filters are built in: RegexFilter keeps records whose key matches a given regular expression; PercentFilter takes a parameter f and keeps the records whose record number satisfies record# % f == 0; MD5Filter takes a parameter f and keeps the records where MD5(key) % f == 0.
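As a concrete illustration of KeyValueTextInputFormat, here is a minimal driver fragment; the class name KvDriverSketch is invented for this example. The separator is configured through the KeyValueLineRecordReader.KEY_VALUE_SEPERATOR constant (Hadoop's own spelling) and defaults to a tab.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class KvDriverSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Set the separator before Job.getInstance copies the Configuration;
        // only the first occurrence on a line splits key from value
        conf.set(KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, "\t");
        Job job = Job.getInstance(conf);
        // The mapper for such a job would be declared Mapper<Text, Text, ...>
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        // ... set mapper/reducer classes and input/output paths as in the driver above ...
    }
}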

setOutputFormat:
TextOutputFormat: writes plain text, one record per line formatted as key + "\t" + value.
NullOutputFormat: Hadoop's /dev/null; everything written to it is discarded.
SequenceFileOutputFormat: writes a SequenceFile whose record format is determined by setOutputKeyClass and setOutputValueClass, so a SequenceFileInputFormat reading the file back must use the same key/value types (see the sketch after this list).
MultipleSequenceFileOutputFormat, MultipleTextOutputFormat: route records to different output files based on the key; the routing can be overridden.
DBInputFormat and DBOutputFormat: read from and write to a database.
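To make the "read format must match write format" point concrete, here is a hypothetical mapper (the name ReadBackMapper is invented) for a follow-up job that reads the SequenceFile produced by the job above. Its input key/value types must be the Text/BytesWritable pair the first job wrote, and the follow-up driver would call job.setInputFormatClass(SequenceFileInputFormat.class).

import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReadBackMapper extends Mapper<Text, BytesWritable, Text, Text> {
    private Text info = new Text();

    @Override
    protected void map(Text key, BytesWritable value, Context context) throws IOException, InterruptedException {
        // key = original file path, value = raw file bytes, exactly as written above
        info.set(value.getLength() + " bytes");
        context.write(key, info);
    }
}

If the declared key or value class differed from what was written, the job would fail at runtime with a type mismatch.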
