How to fix DataNodes that fail to start on slave nodes in a fully distributed Hadoop setup

While setting up Hadoop in fully distributed mode, the DataNode on the slave nodes failed to come up.

The fix below is based on this reference: https://blog.csdn.net/u013310025/article/details/52796233

Summary: in fully distributed mode, after distributing the Hadoop directory with scp -r ~/training/hadoop-2.7.3 root@bigdata112:~/training/, you apparently also need to run hdfs namenode -format on each node; otherwise, when Hadoop starts, the DataNode on that node fails to come up with the error shown below. Judging from that error, the underlying issue is that the copied tmp/dfs carries a clusterID that no longer matches the NameNode's. (This conclusion still needs to be verified later.)
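For concreteness, the sequence the summary describes would look like this (the hostname and paths are the ones used in this cluster; formatting erases existing HDFS metadata, so only do it on a fresh setup):

    # On the master: push the configured Hadoop tree to a slave node
    scp -r ~/training/hadoop-2.7.3 root@bigdata112:~/training/

    # Then, per the (still unverified) summary above, format on the node as well
    ~/training/hadoop-2.7.3/bin/hdfs namenode -format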

Fix (I went with method 2; I tried method 1 first but it did not work):

Method 1: go into tmp/dfs and edit the VERSION file, changing the contents of the slave's VERSION file (the clusterID) to match the master's.
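A minimal sketch of method 1, assuming the tmp layout from the log below (dfs/name/current on the master, dfs/data/current on the slave); the field that has to match is clusterID:

    # On the master: read the NameNode's clusterID
    grep clusterID /root/training/hadoop-2.7.3/tmp/dfs/name/current/VERSION

    # On the failing slave: set the same clusterID in the DataNode's VERSION
    vi /root/training/hadoop-2.7.3/tmp/dfs/data/current/VERSION

Restart the DataNode afterwards (e.g. sbin/hadoop-daemon.sh start datanode).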

Method 2: simply delete tmp/dfs and then re-format HDFS (./hdfs namenode -format), which regenerates a fresh dfs directory under tmp.
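A sketch of method 2 on the failing node, again with the paths from the log below; note that this wipes any HDFS data already stored there:

    # Stop Hadoop first, then remove the stale storage directory
    rm -rf /root/training/hadoop-2.7.3/tmp/dfs

    # Re-format; this regenerates a fresh dfs directory (with a new VERSION) under tmp
    cd /root/training/hadoop-2.7.3/bin
    ./hdfs namenode -format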

The DataNode log from the failing node:

    2018-04-20 23:41:33,881 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to bigdata111/169.254.169.111:9000 starting to offer service
    2018-04-20 23:41:34,013 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
    2018-04-20 23:41:34,072 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2018-04-20 23:41:36,251 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
    2018-04-20 23:41:36,290 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /root/training/hadoop-2.7.3/tmp/dfs/data/in_use.lock acquired by nodename 48801@bigdata111
    2018-04-20 23:41:36,293 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/root/training/hadoop-2.7.3/tmp/dfs/data/
    java.io.IOException: Incompatible clusterIDs in /root/training/hadoop-2.7.3/tmp/dfs/data: namenode clusterID = CID-53071357-d7bd-4fd4-badc-b7b9851c3c82; datanode clusterID = CID-0c92e0ca-b7c2-4a66-ad48-842788bbe4d3
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
            at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
            at java.lang.Thread.run(Thread.java:745)
    2018-04-20 23:41:36,296 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to bigdata111/169.254.169.111:9000. Exiting.
    java.io.IOException: All specified directories are failed to load.
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
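Either way, after restarting HDFS you can check from the master that the slave DataNodes have registered:

    # Live DataNodes should now include the slave nodes (bigdata112, etc.)
    hdfs dfsadmin -report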


Reposted from my.oschina.net/u/2413597/blog/1798858