Hadoop HA: This node has namespaceId '***' and clusterId '**'

I had previously set up a Hadoop Federation environment, and when building the HA environment I only commented out the Federation-related settings in the configuration files. As a result, in the HA environment, killing the NameNode on the active node caused the NameNode on the standby node to die as well. First, look at the NameNode's error log:

2018-08-02 07:08:00,260 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Quorum journal URI 'qjournal://BigData11:8485;BigData12:8485;/ns1' has an even number of Journal Nodes specified. This is not recommended!
2018-08-02 07:08:00,279 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2018-08-02 07:08:00,321 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.163.11:8485, 192.168.163.12:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/2. 2 exceptions thrown:
192.168.163.11:8485: Incompatible namespaceID for journal Storage Directory /root/training/hadoop-2.7.3/journal/ns1: NameNode has nsId 1821372295 but storage has nsId 1097636865
        at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:234)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:289)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:135)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

192.168.163.12:8485: Incompatible namespaceID for journal Storage Directory /root/training/hadoop-2.7.3/journal/ns1: NameNode has nsId 1821372295 but storage has nsId 1097636865
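The "Incompatible namespaceID" error means the namespaceID in the NameNode's metadata no longer matches the one the JournalNodes recorded when the journal directory was first formatted. You can check this directly by comparing the `namespaceID` field in the two VERSION files. A minimal sketch (the helper function and the example paths below are assumptions based on this post's Hadoop 2.7.3 layout):

```shell
# get_ns_id: extract the namespaceID field from a Hadoop VERSION file
get_ns_id() { grep '^namespaceID=' "$1" | cut -d= -f2; }

# Usage on this post's (assumed) paths:
#   get_ns_id /root/training/hadoop-2.7.3/tmp/dfs/name/current/VERSION
#   get_ns_id /root/training/hadoop-2.7.3/journal/ns1/current/VERSION
# If the two values differ, the JournalNode rejects the NameNode with
# "Incompatible namespaceID", exactly as in the log above.
```

The same check applies to the `clusterID` field, which the second error message below also complains about.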






Response message:
This node has namespaceId '527227651' and clusterId 'CID-c28c68e2-bf9c-4997-84ca-820a7d303f2a' but the requesting node expected '1197518331' and 'CID-e89c22bd-aaa1-477f-ba17-aec50b3a61b8'
	at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:468)




2018-08-02 21:18:25,151 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Got error reading edit log input stream
 http://BigData11:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-63%3A1197518331%3A0%3ACID-e89c22bd-aaa1-477f-ba17-aec50b3a61b8;
 failing over to edit log http://BigData12:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-63%3A1197518331%3A0%3ACID-e89c22bd-aaa1-477f-ba17-aec50b3a61b8
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 0; expected file to go up to 2


After much fiddling, I finally cleared the Hadoop logs directory, the directory configured by hadoop.tmp.dir (in core-site.xml), and the directory configured by dfs.journalnode.edits.dir (in hdfs-site.xml), then restarted the HA environment, which solved the problem.
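The fix boils down to wiping the stale metadata directories so that the NameNode and JournalNodes start again from one consistent namespaceID. A minimal sketch of the cleanup (the helper function, the paths, and the restart sequence are assumptions based on this post's Hadoop 2.7.3 setup; check them against your own core-site.xml and hdfs-site.xml before deleting anything):

```shell
# Wipe and recreate each metadata directory passed as an argument.
# CAUTION: this destroys all HDFS metadata; only for a scratch/test cluster.
reset_ha_metadata() {
  for d in "$@"; do
    rm -rf "$d"
    mkdir -p "$d"
  done
}

# Assumed paths from this post's configuration:
#   reset_ha_metadata /root/training/hadoop-2.7.3/tmp \
#                     /root/training/hadoop-2.7.3/journal \
#                     /root/training/hadoop-2.7.3/logs
# Then re-initialize and restart (Hadoop 2.7.x commands, on each relevant node):
#   hadoop-daemon.sh start journalnode   # on every JournalNode, before formatting
#   hdfs namenode -format                # on the active NameNode
#   hdfs namenode -bootstrapStandby      # on the standby NameNode
#   start-dfs.sh
```

Starting the JournalNodes before `hdfs namenode -format` matters: in an HA setup the format step writes the new namespaceID into the shared journal, which is what keeps the two VERSION files consistent.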


Reposted from blog.csdn.net/u013985879/article/details/81380384