hadoop appendToFile fails when appending to an existing file: The current failed datanode replacement policy is DEFAULT

Today, while learning the Hadoop shell client, I noticed the following behavior when appending a file to an existing file in HDFS:
1) If the target file in the cluster is empty, the append succeeds.
2) If the target file in the cluster already has content, the append fails with the error below.
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.59.13:50010,DS-7feab941-bbe9-4760-8459-08894ff8bdbe,DISK], DatanodeInfoWithStorage[192.168.59.14:50010,DS-c43301df-a9f0-4801-b985-6f0cad6f03b5,DISK]], original=[DatanodeInfoWithStorage[192.168.59.13:50010,DS-7feab941-bbe9-4760-8459-08894ff8bdbe,DISK], DatanodeInfoWithStorage[192.168.59.14:50010,DS-c43301df-a9f0-4801-b985-6f0cad6f03b5,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1281)
at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1353)
at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1568)
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:708)
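
For context, a minimal shell-client reproduction might look like the following (the local files and the HDFS path are illustrative, not taken from the original post):

echo "first line"  > local1.txt
echo "second line" > local2.txt
hadoop fs -put local1.txt /user/test/data.txt             # the target file now has content
hadoop fs -appendToFile local2.txt /user/test/data.txt    # on a two-datanode cluster this fails with the error above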

After some research I found that appending to an empty file is effectively the same as writing the file for the first time, which is why that case succeeds. Appending to a non-empty file fails because the cluster has fewer than three datanodes (the default replication factor is 3, yet only two datanodes appear in the pipeline above), so when the client tries to replace a datanode in the existing write pipeline there is no spare node available. Cloning one more virtual machine and adding it to the HDFS cluster as a third datanode solved the problem.
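
As the error message itself points out, there is also a client-side alternative: relax dfs.client.block.write.replace-datanode-on-failure.policy instead of adding a node. The sketch below is only an assumption about how one might do that on a small test cluster; it was not the fix used in this post:

# NEVER tells the client not to try to replace a failed or missing datanode in the write pipeline,
# which avoids this error on clusters with fewer datanodes than the replication factor,
# at the cost of lower durability for the data being written.
# The same property can also be set in the client's hdfs-site.xml.
hadoop fs -D dfs.client.block.write.replace-datanode-on-failure.policy=NEVER \
    -appendToFile local2.txt /user/test/data.txt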

Reposted from blog.csdn.net/wlk_328909605/article/details/81806868