How to handle the "lease recovery is in progress" error when calling append through the Hadoop Java API

If the cluster has fewer than 3 DataNodes, the append call throws this exception at run time. The fix is to set dfs.client.block.write.replace-datanode-on-failure.policy=NEVER on the client, so it does not try to replace a failed DataNode in the write pipeline:

Configuration conf = new Configuration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");    // set the client-side property

The full code is as follows:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public void appendByAPI() throws IOException {
        Configuration conf = new Configuration();
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");    // do not replace a failed DataNode in the write pipeline
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/spaceQuota/hello.txt");
        FSDataOutputStream out = fs.append(file);
        out.writeChars("aaaa");
        out.close();
}
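
If append still fails with "lease recovery is in progress" because a previous writer did not close the file cleanly, one option (a sketch that is not part of the original post; the path, retry count, and sleep interval are illustrative) is to ask the NameNode to recover the lease before appending, using DistributedFileSystem.recoverLease:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public void recoverLeaseBeforeAppend() throws IOException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/spaceQuota/hello.txt");    // same file as above, illustrative
        if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // recoverLease asks the NameNode to release the previous writer's lease;
                // recovery is asynchronous, so poll until it reports the file is closed.
                for (int i = 0; i < 10 && !dfs.recoverLease(file); i++) {
                        Thread.sleep(1000);    // illustrative back-off
                }
        }
        // the file can now be appended to as in appendByAPI() above
}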


Reposted from blog.csdn.net/xutao_ccu/article/details/84729640