This article looks at how to resolve the "no namenode to stop" error that can appear when restarting a Hadoop cluster, walking through the cause and the fix. Hopefully it is a useful reference.
After changing the Hadoop cluster's configuration files, the cluster needed to be restarted, but stop-dfs.sh failed as follows:
[hadoop@master ~]$ stop-dfs.sh
Stopping namenodes on [master]
master1: no namenode to stop
master2: no namenode to stop
slave2: no datanode to stop
slave1: no datanode to stop
The cause: when stopping, Hadoop's scripts locate the namenode, datanode and journalnode daemons through their pid files. By default those pid files are written to /tmp, and Linux periodically cleans old files out of that directory (typically on a cycle of about 7 to 30 days, e.g. via tmpwatch or systemd-tmpfiles).
So once hadoop-hadoop-journalnode.pid and hadoop-hadoop-datanode.pid have been purged from /tmp, the stop script can no longer find those processes on the datanodes, and it reports "no ... to stop" even though the daemons are still running.
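The failure mode can be sketched in a few lines of shell. This is a simplified, illustrative model of what the stop path of hadoop-daemon.sh does, not its real code; the pid-file name and the `report_stop` helper are made up for the example, and the real script runs `kill` instead of printing.

```shell
# Simplified sketch: the stop script finds a daemon only via its pid file
# under $HADOOP_PID_DIR (default /tmp). If the file was purged, it gives up.
report_stop() {
  pid_file="$1/hadoop-hadoop-namenode.pid"
  if [ -f "$pid_file" ]; then
    echo "stopping namenode, pid $(cat "$pid_file")"  # real script: kill "$(cat ...)"
  else
    echo "no namenode to stop"                        # the error from the log above
  fi
}

# A pid directory that tmp cleanup has emptied behaves like this:
report_stop "/tmp/surely-missing-$$"   # prints "no namenode to stop"
```

The daemon process itself is untouched; only the script's bookkeeping is gone, which is why the cluster keeps running while stop-dfs.sh claims there is nothing to stop.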
Setting export HADOOP_PID_DIR in the configuration file hadoop-env.sh solves this (the change could also be made in hadoop-daemon.sh, which sources hadoop-env.sh). Point HADOOP_PID_DIR at "/var/hadoop_pid", and remember to create the hadoop_pid folder under "/var" by hand and assign ownership to the hadoop user.
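Concretely, the change amounts to the following. This is a config fragment, not a script to paste blindly: the /var/hadoop_pid path and the hadoop:hadoop owner come from this cluster's setup, and the hadoop-env.sh location assumes a typical $HADOOP_HOME layout; adjust all three to your environment.

```shell
# Create a pid directory that survives /tmp cleanup (run as root),
# and hand it to the user the daemons run as:
mkdir -p /var/hadoop_pid
chown hadoop:hadoop /var/hadoop_pid

# Then add to hadoop-env.sh (e.g. $HADOOP_HOME/etc/hadoop/hadoop-env.sh),
# on every node, so all daemons write their pid files there:
export HADOOP_PID_DIR=/var/hadoop_pid
```

Note the setting must be distributed to every node in the cluster, since each daemon writes its pid file locally.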
[hadoop@slave3 ~]$ ls /var/hadoop_pid/
hadoop-hadoop-datanode.pid  hadoop-hadoop-journalnode.pid
Then manually kill the stale DataNode processes on the affected slaves (kill -9 <pid>) and run start-dfs.sh again; this time neither "no datanode to stop" nor "no namenode to stop" appears, and the problem is solved.
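That manual cleanup on each affected slave can be sketched as below. The jps/awk pattern is illustrative (jps ships with the JDK and lists Java processes by class name); review the list before killing anything.

```shell
# List stale Hadoop daemons still running on this slave:
stale_pids() { jps 2>/dev/null | awk '/DataNode|JournalNode/ {print $1}'; }

stale_pids                        # inspect the pids first
# stale_pids | xargs -r kill -9   # then kill them, as done above
```

After the stale daemons are gone, start-dfs.sh starts fresh processes whose pid files land in the new HADOOP_PID_DIR.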
[hadoop@master1 ~]$ start-dfs.sh
16/04/13 17:20:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master1: starting namenode, logging to /data/usr/hadoop/logs/hadoop-hadoop-namenode-master1.out
master2: starting namenode, logging to /data/usr/hadoop/logs/hadoop-hadoop-namenode-master2.out
slave4: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave4.out
slave3: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave3.out
slave2: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /data/usr/hadoop/logs/hadoop-hadoop-datanode-slave1.out
Starting journal nodes [master1 master2 slave1 slave2 slave3]
slave3: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave3.out
master1: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-master1.out
slave1: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave1.out
master2: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-master2.out
slave2: starting journalnode, logging to /data/usr/hadoop/logs/hadoop-hadoop-journalnode-slave2.out
16/04/13 17:20:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master1: starting zkfc, logging to /data/usr/hadoop/logs/hadoop-hadoop-zkfc-master1.out
master2: starting zkfc, logging to /data/usr/hadoop/logs/hadoop-hadoop-zkfc-master2.out