Using DistributedCache

How to use it:

1. Import the required classes (the reader code in step 3 also needs the Configuration, Path, StringUtils and java.io classes)

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.StringUtils;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

2. Add the file to the cache (in the job driver)

DistributedCache.addCacheFile(new Path(args[++i]).toUri(), job.getConfiguration());
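For context, a minimal driver-side sketch; the KipuDriver class name and the flag-parsing loop are illustrative assumptions, only the addCacheFile call itself comes from the post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class KipuDriver {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "kipu");
        job.setJarByClass(KipuDriver.class);

        // Register each side file named on the command line in the DistributedCache;
        // -pathways and -head mirror the submit command shown further down.
        for (int i = 0; i < args.length; i++) {
            if ("-pathways".equals(args[i]) || "-head".equals(args[i])) {
                DistributedCache.addCacheFile(new Path(args[++i]).toUri(),
                        job.getConfiguration());
            }
        }
        // ... set mapper/reducer classes, input/output paths, then job.waitForCompletion(true) ...
    }
}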

3. Read the cached file in the Mapper or Reducer

Configuration conf = context.getConfiguration();

// Local paths of every file that was added to the DistributedCache
Path[] pathwaysFiles = new Path[0];
try {
    pathwaysFiles = DistributedCache.getLocalCacheFiles(conf);
} catch (IOException ioe) {
    System.err.println("Caught exception while getting cached files: "
            + StringUtils.stringifyException(ioe));
}

for (Path pathwaysFile : pathwaysFiles) {
    try {
        // The cached copies live on the task's local file system, so plain java.io can read them
        BufferedReader fis = new BufferedReader(new FileReader(pathwaysFile.toString()));
        String pathway = null;
        while ((pathway = fis.readLine()) != null) {
            String[] p = pathway.split(" ");
            pathways.add(p);               // pathways: a List<String[]> field of the Mapper/Reducer
        }
        fis.close();
    } catch (IOException ioe) {
        System.err.println("Caught exception while reading cached file: "
                + StringUtils.stringifyException(ioe));
    }
}
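For context, a minimal sketch of where this reading code usually lives; the KipuMapper class name and the pathways field are assumptions, with the cached files read once in setup():

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class KipuMapper extends Mapper<LongWritable, Text, Text, Text> {

    // Filled once per task from the cached pathway files, one String[] per line
    private final List<String[]> pathways = new ArrayList<String[]>();

    @Override
    protected void setup(Context context) {
        // the cache-reading code from step 3 goes here
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        // use the pathways list while processing each input record
    }
}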

The local path where DistributedCache stores the file on the task node:

/ifshk4/HDFS/hadoop/hadoop12/tmp/mapred/local/taskTracker/archive/compute-7-0.local/user/hadoop/kipu/expression_head.txt/expression_head.txt

Command used to submit the job:

/tmp/hadoop/bin/hadoop jar kipu.jar org.bgi.kipu.kipu /user/hadoop/kipu/expression_final.txt /user/hadoop/kipu/output/ -pathways /user/hadoop/kipu/pathways_final.txt -head /user/hadoop/kipu/expression_head.txt


This local path is the current map/reduce task's local cache directory with the file's HDFS path appended, so of course it does not match the argument that was passed when the job was submitted.

Therefore, when the DistributedCache holds several files, you can tell which local copy corresponds to which argument as follows (a combined sketch follows the two steps):

1. In the driver, store the argument value in the conf:

job.getConfiguration().set("kipu.head.path", args[i]);

2. In the Mapper/Reducer, use contains() to match each local cache path against that value:

pathwaysFile.toString().contains(conf.get("kipu.head.path"))
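Putting the two steps together, a minimal sketch: the kipu.head.path key is from the post, while findCachedFile is a hypothetical helper added here for illustration:

// Driver side: remember which HDFS path was passed for the header file
job.getConfiguration().set("kipu.head.path", args[i]);   // e.g. /user/hadoop/kipu/expression_head.txt

// Mapper/Reducer side, e.g. called from setup():
// Path headFile = findCachedFile(context.getConfiguration(), context.getConfiguration().get("kipu.head.path"));
private Path findCachedFile(Configuration conf, String hdfsPathFragment) throws IOException {
    Path[] cachedFiles = DistributedCache.getLocalCacheFiles(conf);
    if (cachedFiles == null) {
        return null;                       // nothing was added to the cache
    }
    for (Path cached : cachedFiles) {
        // The local copy's path ends with the original HDFS path, so contains() identifies it
        if (cached.toString().contains(hdfsPathFragment)) {
            return cached;
        }
    }
    return null;                           // no cached file matched
}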


Reprinted from gushengchang.iteye.com/blog/1315332