Hadoop HA Dual-NameNode Setup


Machine layout

hadoop1 192.168.56.121

hadoop2 192.168.56.122

hadoop3 192.168.56.123

Prepare the installation packages

jdk-7u71-linux-x64.tar.gz

zookeeper-3.4.9.tar.gz

hadoop-2.9.2.tar.gz

Upload the packages to the /usr/local directory on all three machines and extract them.
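
A minimal sketch of the extraction step (assuming the three tarballs were uploaded to /usr/local):

cd /usr/local
tar -zxf jdk-7u71-linux-x64.tar.gz
tar -zxf zookeeper-3.4.9.tar.gz
tar -zxf hadoop-2.9.2.tar.gz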

Configure hosts

echo "192.168.56.121 hadoop1" >> /etc/hosts
echo "192.168.56.122 hadoop2" >> /etc/hosts
echo "192.168.56.123 hadoop3" >> /etc/hosts

Configure environment variables

/etc/profile

export HADOOP_PREFIX=/usr/local/hadoop-2.9.2
export JAVA_HOME=/usr/local/jdk1.7.0_71
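
A quick way to apply and check the variables in the current shell (assuming /etc/profile was edited on each machine as above):

source /etc/profile
echo $HADOOP_PREFIX $JAVA_HOME    # both paths should be printed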

Deploy ZooKeeper

Create the zoo user

useradd zoo
passwd zoo

Change the owner of the ZooKeeper directory to zoo

chown zoo:zoo -R /usr/local/zookeeper-3.4.9

Edit the ZooKeeper configuration file

Go to the /usr/local/zookeeper-3.4.9/conf directory

cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.9
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

Create a myid file in the /usr/local/zookeeper-3.4.9 directory. The file contains only a number between 1 and 255, and it must match the id in the corresponding server.id line of zoo.cfg.

myid on hadoop1 is 1

myid on hadoop2 is 2

myid on hadoop3 is 3
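
For example, on hadoop1 (a minimal sketch; use 2 and 3 on the other two hosts):

echo 1 > /usr/local/zookeeper-3.4.9/myid
chown zoo:zoo /usr/local/zookeeper-3.4.9/myid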

Start the ZooKeeper service on all three machines

[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh start

Verify ZooKeeper

[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
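
Run the same status command on hadoop2 and hadoop3; exactly one node should report Mode: leader and the other two Mode: follower. You can also connect a client against the whole ensemble (a quick sketch, using the client port 2181 configured above):

bin/zkCli.sh -server hadoop1:2181,hadoop2:2181,hadoop3:2181
# inside the client shell, list the root znode
ls /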

Configure Hadoop

Create the user

useradd hadoop
passwd hadoop

Change the owner of the Hadoop directory to hadoop

chown hadoop:hadoop -R /usr/local/hadoop-2.9.2

Create data directories

mkdir /hadoop1 /hadoop2 /hadoop3
chown hadoop:hadoop /hadoop1
chown hadoop:hadoop /hadoop2
chown hadoop:hadoop /hadoop3

Set up passwordless SSH (mutual trust) for the hadoop user

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop3
# Test the trust with the following commands
ssh hadoop1 date
ssh hadoop2 date
ssh hadoop3 date

Configure environment variables

/home/hadoop/.bash_profile

export PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH
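
To apply the variables and confirm the tools are on the PATH (assuming /etc/profile and ~/.bash_profile were edited as above):

source /etc/profile
source ~/.bash_profile
hadoop version    # should report Hadoop 2.9.2
java -version     # should report 1.7.0_71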

Configure parameters

etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.7.0_71

etc/hadoop/core-site.xml

<!-- Set the HDFS nameservice to ns -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
</property>
<!-- Hadoop temporary data directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.9.2/temp</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>
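
hadoop.tmp.dir above points to a directory that does not exist yet; Hadoop will usually create it on demand, but a minimal sketch of creating it up front on each node (assuming the path above) is:

mkdir -p /usr/local/hadoop-2.9.2/temp
chown hadoop:hadoop /usr/local/hadoop-2.9.2/temp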


etc/hadoop/hdfs-site.xml

<!-- Set the HDFS nameservice to ns; it must match core-site.xml -->
<property>
    <name>dfs.nameservices</name>
    <value>ns</value>
</property>
<!-- The ns nameservice has two NameNodes: nn1 and nn2 -->
<property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>hadoop1:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>hadoop1:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
    <name>dfs.namenode.rpc-address.ns.nn2</name>
    <value>hadoop2:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
    <name>dfs.namenode.http-address.ns.nn2</name>
    <value>hadoop2:50070</value>
</property>
<!-- Where the NameNodes' shared edit log is stored on the JournalNodes -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns</value>
</property>
<!-- Local disk path where each JournalNode stores its data -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop1/hdfs/journal</value>
</property>
<!-- Enable automatic failover when the active NameNode fails -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<!-- Failover proxy provider used by HDFS clients to find the active NameNode -->
<property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method; if SSH listens on the default port 22, plain sshfence is enough -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<!-- sshfence requires passwordless SSH with this private key -->
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop1/hdfs/name,file:/hadoop2/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop1/hdfs/data,file:/hadoop2/hdfs/data,file:/hadoop3/hdfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<!-- Enable WebHDFS (REST API) on the NameNodes and DataNodes; optional -->
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<!-- List of permitted/excluded DataNodes -->
<property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/hadoop-2.9.2/etc/hadoop/excludes</value>
</property>
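
dfs.hosts.exclude above points at a file; the NameNode may refuse to start if that file is missing, so it is safest to create it (empty is fine) before the first start. A minimal sketch, assuming the path above:

touch /usr/local/hadoop-2.9.2/etc/hadoop/excludes
chown hadoop:hadoop /usr/local/hadoop-2.9.2/etc/hadoop/excludes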


etc/hadoop/mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

etc/hadoop/yarn-site.xml

<!-- Have the NodeManager load the MapReduce shuffle service at startup -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager host -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
</property>

etc/hadoop/slaves

hadoop1
hadoop2
hadoop3

First-time startup commands

1. First start ZooKeeper on every node; on each node run:
bin/zkServer.sh start
2. On one of the NameNode nodes, initialize the HA state (znode) in ZooKeeper:
hdfs zkfc -formatZK
3. On every JournalNode node, start the JournalNode daemon:
sbin/hadoop-daemon.sh start journalnode
4. On the primary NameNode node, format the NameNode and JournalNode directories:
hdfs namenode -format ns
5. On the primary NameNode node, start the NameNode process:
sbin/hadoop-daemon.sh start namenode
6. On the standby NameNode node, run the first command below. It formats the standby's directories and copies the metadata over from the primary NameNode, without reformatting the JournalNode directories. Then start the standby NameNode with the second command:
hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
7. On both NameNode nodes, start the ZKFC:
sbin/hadoop-daemon.sh start zkfc
8. On every DataNode node, start the DataNode:
sbin/hadoop-daemon.sh start datanode
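
At this point one NameNode should be active and the other standby. A quick check (a sketch using the nn1/nn2 service ids defined in hdfs-site.xml):

hdfs haadmin -getServiceState nn1    # e.g. active
hdfs haadmin -getServiceState nn2    # e.g. standby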

Daily start/stop commands

# Start script: start the services on all nodes
sbin/start-dfs.sh
# Stop script: stop the services on all nodes
sbin/stop-dfs.sh
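
start-dfs.sh and stop-dfs.sh only manage the HDFS-side daemons. Since yarn-site.xml is configured above, the YARN daemons can be started separately when you want to run MapReduce jobs; a minimal sketch (run on hadoop1, the ResourceManager host configured above):

# Start the ResourceManager and NodeManagers
sbin/start-yarn.sh
# Stop them again
sbin/stop-yarn.sh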

Verification

Check the processes with jps

(screenshot: jps output)

Open the web UIs of the two NameNodes:

http://192.168.56.122:50070

(screenshot: NameNode web UI)

http://192.168.56.121:50070

(screenshot: NameNode web UI)

Test file upload and download

# Create a directory
[hadoop@hadoop1 ~]$ hadoop fs -mkdir /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-04-12 12:16 /test
# Upload a file
[hadoop@hadoop1 ~]$ hadoop fs -put /usr/local/hadoop-2.9.2/LICENSE.txt /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /test
Found 1 items
-rw-r--r--   2 hadoop supergroup     106210 2019-04-12 12:17 /test/LICENSE.txt
# Download the file to /tmp
[hadoop@hadoop1 ~]$ hadoop fs -get /test/LICENSE.txt /tmp
# Verify
[hadoop@hadoop1 ~]$ ls -l /tmp/LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 106210 Apr 12 12:19 /tmp/LICENSE.txt
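
Finally, it is worth exercising the automatic failover itself. One way to do it (a sketch; assuming nn1 on hadoop1 is currently active, check with hdfs haadmin -getServiceState first):

# On hadoop1: stop the active NameNode
sbin/hadoop-daemon.sh stop namenode
# Within a few seconds the ZKFC should promote the other NameNode
hdfs haadmin -getServiceState nn2    # expected: active
# Bring nn1 back; it rejoins as standby
sbin/hadoop-daemon.sh start namenode
hdfs haadmin -getServiceState nn1    # expected: standby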


Reference: https://blog.csdn.net/Trigl/article/details/55101826
