
A ZooKeeper-Based High-Availability Hadoop Cluster

Published: 2020-07-23 22:43:53  Source: Web  Author: 素顏豬  Category: Big Data

1. Prepare the ZooKeeper servers

#node1,node2,node3
#For installation, see http://suyanzhu.blog.51cto.com/8050189/1946580
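The install guide linked above covers the details; for reference, here is a minimal sketch of the ensemble portion of zoo.cfg. The dataDir path is an assumption, and each server also needs a matching myid file under that directory.

#conf/zoo.cfg (identical on node1, node2, node3; dataDir is illustrative)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

#On node1 (write 2 on node2, 3 on node3)
echo 1 > /opt/zookeeper/data/myid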


2. Prepare the NameNode nodes

#node1,node4


3. Prepare the JournalNode nodes

#node2,node3,node4


4. Prepare the DataNode nodes

#node2,node3,node4
#Command to start a single DataNode: hadoop-daemon.sh start datanode


5. Edit Hadoop's hdfs-site.xml configuration file

<configuration>
        <property>
                <name>dfs.nameservices</name>
                <value>yunshuocluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.yunshuocluster</name>
                <value>nn1,nn2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn1</name>
                <value>node1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.yunshuocluster.nn2</name>
                <value>node4:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn1</name>
                <value>node1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.yunshuocluster.nn2</name>
                <value>node4:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://node2:8485;node3:8485;node4:8485/yunshuocluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.yunshuocluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_dsa</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/opt/journalnode/</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
</configuration>
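Note that the sshfence method configured above only works if the NameNode hosts can SSH into each other without a password using the key named in dfs.ha.fencing.ssh.private-key-files. A minimal sketch, assuming both NameNodes run as root and the DSA key does not exist yet (run on node1, then repeat in the other direction on node4):

#Generate a passphrase-less DSA key matching the path in the config
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
#Authorize it on the peer NameNode
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node4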


6. Edit Hadoop's core-site.xml configuration file

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yunshuocluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.5</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
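To confirm the configuration is actually picked up, hdfs getconf can echo individual keys back; the following check should print the logical nameservice URI on every node:

hdfs getconf -confKey fs.defaultFS
#Expected output: hdfs://yunshuocluster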


7. Configure the slaves file

node2
node3
node4
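The same core-site.xml, hdfs-site.xml and slaves files have to be present on every node. A sketch of distributing them with scp, assuming Hadoop is installed under /home/hadoop-2.5.1 on all nodes (the path used for the logs in step 10):

cd /home/hadoop-2.5.1/etc/hadoop
for n in node2 node3 node4; do
    scp core-site.xml hdfs-site.xml slaves root@$n:/home/hadoop-2.5.1/etc/hadoop/
done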


8. Start ZooKeeper (node1, node2, node3)

zkServer.sh start
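Once all three servers are up, zkServer.sh status confirms the ensemble has elected a leader; one node should report leader and the other two follower:

zkServer.sh status
#Mode: leader (on one node) / Mode: follower (on the other two)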


9. Start the JournalNodes (run the following command on node2, node3, and node4)

#Start command below; the stop command is hadoop-daemon.sh stop journalnode
hadoop-daemon.sh start journalnode
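A quick sanity check before reading the logs in the next step: jps on node2, node3 and node4 should now list a JournalNode process.

jps
#The output should include a JournalNode entry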


10. Check the JournalNodes by inspecting their logs

cd /home/hadoop-2.5.1/logs
ls
tail -200 hadoop-root-journalnode-node2.log


11. Format the NameNode (only one of the two; here node4 is formatted)

hdfs namenode -format

cd /opt/hadoop-2.5
#Copy the metadata so the two NameNodes are in sync
scp -r /opt/hadoop-2.5/* root@node1:/opt/hadoop-2.5/
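As an alternative to copying the metadata directory with scp, the standby NameNode can pull it over the wire with hdfs namenode -bootstrapStandby; this sketch assumes the freshly formatted NameNode on node4 has already been started:

#On node4, after formatting
hadoop-daemon.sh start namenode
#On node1, fetch the formatted metadata from the running NameNode
hdfs namenode -bootstrapStandby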


12. Initialize the ZKFC

hdfs zkfc -formatZK
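hdfs zkfc -formatZK creates a znode for the nameservice under /hadoop-ha in ZooKeeper; this can be verified from any ZooKeeper node with zkCli.sh:

zkCli.sh -server node1:2181
#Inside the ZooKeeper shell:
ls /hadoop-ha
#Expected output: [yunshuocluster]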


13. Start the services

start-dfs.sh
#stop-dfs.sh stops the services
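After everything is up, the HA state of the two NameNodes and the registered DataNodes can be checked as follows; one NameNode should report active and the other standby, and killing the active one should make the standby take over automatically:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
#Confirm that node2, node3 and node4 are registered as DataNodes
hdfs dfsadmin -report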

