
Common Hadoop Errors

發(fā)布時(shí)間:2021-12-08 11:11:11 來(lái)源:億速云 閱讀:232 作者:小新 欄目:云計(jì)算

This article walks through several common Hadoop errors. The content is concise and clearly organized; hopefully it helps resolve your doubts. Let's work through them together.

1. Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:

2016-01-05 23:03:32,967 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.10.31:8485, 192.168.10.32:8485, 192.168.10.33:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.10.31:8485: Call From bdata4/192.168.10.34 to bdata1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.33:8485: Call From bdata4/192.168.10.34 to bdata3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.32:8485: Call From bdata4/192.168.10.34 to bdata2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1394)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1151)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1658)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
        at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1536)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1335)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
        at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

2016-01-05 23:03:32,968 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

錯(cuò)誤原因:

我們?cè)趫?zhí)行start-dfs.sh的時(shí)候,默認(rèn)啟動(dòng)順序是namenode>datanode>journalnode>zkfc,如果journalnode和namenode不在一臺(tái)機(jī)器啟動(dòng)的話,很容易因?yàn)榫W(wǎng)絡(luò)延遲問(wèn)題導(dǎo)致NN無(wú)法連接JN,無(wú)法實(shí)現(xiàn)選舉,最后導(dǎo)致剛剛啟動(dòng)的namenode會(huì)突然掛掉一個(gè)主的,留下一個(gè)standy的,雖然有NN啟動(dòng)時(shí)有重試機(jī)制等待JN的啟動(dòng),但是由于重試次數(shù)限制,可能網(wǎng)絡(luò)情況不好,導(dǎo)致重試次數(shù)用完了,也沒(méi)有啟動(dòng)成功,

A: Manually restart the active NameNode. This skips the wait on network latency for the JournalNodes; once both NameNodes have connected to the JournalNodes and the election has completed, the failure no longer occurs.

B: Start the JournalNodes first, then run start-dfs.sh.

C: Raise the NameNode's retry count (or interval) toward the JournalNodes to a larger value, so that it can tolerate JournalNode startup delay and network latency.
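Option B can be sketched as a small shell script. This is a dry run that only prints the commands; the hostnames bdata1-bdata3 come from the quorum list in the log above, and hadoop-daemon.sh is assumed to be on the PATH on each node (drop the leading `echo` to actually execute):

```shell
# JournalNode hosts, taken from the quorum list in the log above.
JN_HOSTS="bdata1 bdata2 bdata3"

# Bring up every JournalNode first so the NameNodes can reach a
# 2/3 quorum before the HA transition, then start the rest of HDFS.
for host in $JN_HOSTS; do
    # Dry run: the leading "echo" only prints the command.
    echo "ssh $host hadoop-daemon.sh start journalnode"
done
echo "start-dfs.sh"
```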

Add the following to hdfs-site.xml. It sets the number of times the NameNode retries its connection to the JournalNodes; the default is 10 retries at 1000 ms each, so on a flaky network it should be increased. Here it is set to 30:

    <property>
        <name>ipc.client.connect.max.retries</name>
        <value>30</value>
    </property>
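The wait between retries is controlled by a separate sibling property. Assuming the stock Hadoop defaults (1000 ms per retry), the interval can be raised alongside the count; for example, 30 retries at 3000 ms would let the NameNode tolerate roughly 90 seconds of JournalNode startup delay:

```xml
    <property>
        <name>ipc.client.connect.retry.interval</name>
        <value>3000</value>
    </property>
```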

2. org.apache.hadoop.security.AccessControlException: Permission denied

On the master node, add the following to hdfs-site.xml:

    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

This disables HDFS permission checking. I added it to fix the error reported when configuring an Eclipse Map/Reduce connection to the Hadoop cluster from a Windows machine.
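A less drastic alternative (an assumption, not part of the original fix): rather than disabling permission checks cluster-wide, a simple-auth Hadoop client can identify itself as an HDFS user that already has write access by setting the HADOOP_USER_NAME environment variable, which Hadoop client libraries honor:

```shell
# Hypothetical: "bdata" is assumed to be a user with the needed HDFS
# permissions; substitute the owner of your target directories.
export HADOOP_USER_NAME=bdata

# The Hadoop client now acts as that user, e.g.:
#   hdfs dfs -put local.txt /user/bdata/
```

In Eclipse, the same effect can be achieved by setting HADOOP_USER_NAME in the run configuration's Environment tab.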

3、運(yùn)行報(bào):[org.apache.hadoop.security.ShellBasedUnixGroupsMapping]-[WARN] got exception trying to get groups for user bdata

On the master node, add the following to hdfs-site.xml:

    <property>
        <name>dfs.web.ugi</name>
        <value>bdata,supergroup</value>
    </property>
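To confirm which groups the cluster resolves for the user after this change, the `hdfs groups` command can be used. Sketched here as a dry run that only prints the command, since it needs a live cluster:

```shell
# Hypothetical check on a running cluster; "bdata" is the user from
# the warning above.
CMD="hdfs groups bdata"
echo "$CMD"    # dry run: prints the command instead of running it
```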

That covers the common Hadoop errors above. Thanks for reading! Hopefully the content has been helpful; to learn more, follow the 億速云 industry news channel.
