Roles in a Hadoop cluster:
HDFS:
    NameNode (NN)
    SecondaryNameNode (SNN)
    DataNode (DN)
YARN:
    ResourceManager
    NodeManager
Notes on deploying Hadoop in distributed mode in production:
HDFS cluster:
    Deploy the NameNode and the SecondaryNameNode on separate hosts, so that a single machine failure cannot take out both and leave the cluster unrecoverable.
    Run at least 3 DataNodes, since the default replication factor keeps 3 copies of every block.
YARN cluster:
    Deploy the ResourceManager on a dedicated node.
    Run a NodeManager on every DataNode.
(Figure: Hadoop cluster architecture)
For this distributed deployment in my test environment, the NameNode, SecondaryNameNode, and ResourceManager roles all run on one master server (node1), while the three slave nodes each run a DataNode and a NodeManager.
1. Configure the hosts file
Append the following to /etc/hosts on node1, node2, node3, and node4:
172.16.2.3   node1.hadooptest.com node1 master
172.16.2.14  node2.hadooptest.com node2
172.16.2.60  node3.hadooptest.com node3
172.16.2.61  node4.hadooptest.com node4
2. Create the hadoop user and group
To be able to start or stop the whole cluster from the master node, the accounts that run the services (e.g. hdfs and yarn) must also be able to connect from the master to every slave node over ssh with key-based authentication.
Run the following on node1, node2, node3, and node4:
useradd hadoop
echo 'p@ssw0rd' | passwd --stdin hadoop
Log in to node1 and generate a key pair:
su - hadoop
ssh-keygen -t rsa
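If you prefer to skip the interactive prompts, the same key pair can be generated in one shot (empty passphrase and default path, matching what the steps below assume):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa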
Copy the public key from node1 to node2, node3, and node4:
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node2
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node3
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node4
Note: the master node must also copy the public key to its own hadoop account; otherwise start-dfs.sh will prompt for a password when it connects to 0.0.0.0 to start the secondarynamenode.
[hadoop@node1 hadoop]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@0.0.0.0
Test logging in from node1 to node2, node3, and node4:
[hadoop@OPS01-LINTEST01 ~]$ ssh node2 'date'
Tue Mar 27 14:26:10 CST 2018
[hadoop@OPS01-LINTEST01 ~]$ ssh node3 'date'
Tue Mar 27 14:26:13 CST 2018
[hadoop@OPS01-LINTEST01 ~]$ ssh node4 'date'
Tue Mar 27 14:26:17 CST 2018
3. Configure the Hadoop environment
Do this on node1, node2, node3, and node4:
vim /etc/profile.d/hadoop.sh
export HADOOP_PREFIX=/bdapps/hadoop
export PATH=$PATH:${HADOOP_PREFIX}/bin:${HADOOP_PREFIX}/sbin
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
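These exports only take effect for new login shells; to pick them up in the current session, source the file and sanity-check one variable:

. /etc/profile.d/hadoop.sh
echo $HADOOP_PREFIX    # should print /bdapps/hadoop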
Configure node1
Create the directories:
[root@OPS01-LINTEST01 ~]# mkdir -pv /bdapps /data/hadoop/hdfs/{nn,snn,dn}
mkdir: created directory `/bdapps'
mkdir: created directory `/data/hadoop'
mkdir: created directory `/data/hadoop/hdfs'
mkdir: created directory `/data/hadoop/hdfs/nn'
mkdir: created directory `/data/hadoop/hdfs/snn'
mkdir: created directory `/data/hadoop/hdfs/dn'
Set permissions:
chown -R hadoop:hadoop /data/hadoop/hdfs/
cd /bdapps/
[root@OPS01-LINTEST01 bdapps]# ls
hadoop-2.7.5
[root@OPS01-LINTEST01 bdapps]# ln -sv hadoop-2.7.5 hadoop
[root@OPS01-LINTEST01 bdapps]# cd hadoop
[root@OPS01-LINTEST01 hadoop]# mkdir logs
Change the owner and group of everything under the hadoop directory to hadoop, and make the logs directory group-writable:
[root@OPS01-LINTEST01 hadoop]# chown -R hadoop:hadoop ./*
[root@OPS01-LINTEST01 hadoop]# ll
total 140
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 bin
drwxr-xr-x 3 hadoop hadoop  4096 Dec 16 09:12 etc
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 include
drwxr-xr-x 3 hadoop hadoop  4096 Dec 16 09:12 lib
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 libexec
-rw-r--r-- 1 hadoop hadoop 86424 Dec 16 09:12 LICENSE.txt
drwxr-xr-x 2 hadoop hadoop  4096 Mar 27 14:51 logs
-rw-r--r-- 1 hadoop hadoop 14978 Dec 16 09:12 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop  1366 Dec 16 09:12 README.txt
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 sbin
drwxr-xr-x 4 hadoop hadoop  4096 Dec 16 09:12 share
[root@OPS01-LINTEST01 hadoop]# chmod g+w logs
Configure core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
        <final>true</final>
    </property>
</configuration>
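fs.defaultFS is what clients and worker daemons use to locate the NameNode; once the configuration is in place, the effective value can be read back on any node with getconf:

hdfs getconf -confKey fs.defaultFS    # expected: hdfs://master:8020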
Configure yarn-site.xml
Note: yarn-site.xml configures the ResourceManager role. In production this role should be deployed separately from the NameNode, so the master in this file and the master in core-site.xml would not be the same machine. Since this is a test environment simulating a distributed deployment, the NameNode and ResourceManager run on the same machine, which is why this file is configured on the NameNode server as well.
<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>
</configuration>
Configure hdfs-site.xml
Set dfs.replication, the number of block replicas to keep:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/hdfs/dn</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>fs.checkpoint.edits.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
</configuration>
Configure mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure the slaves file (the hosts on which start-dfs.sh and start-yarn.sh will start the worker daemons):
node2
node3
node4
Configure hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151
Configure node2, node3, and node4
### Create the Hadoop install directory, the data directories, and the logs directory, then fix permissions
mkdir -pv /bdapps /data/hadoop/hdfs/{nn,snn,dn}
chown -R hadoop:hadoop /data/hadoop/hdfs/
tar zxf hadoop-2.7.5.tar.gz -C /bdapps/
cd /bdapps
ln -sv hadoop-2.7.5 hadoop
cd hadoop
mkdir logs
chmod g+w logs
chown -R hadoop:hadoop ./*
Update the configuration files
Since the Hadoop configuration files were already edited on the master node (node1), they can simply be pushed to node2, node3, and node4:
scp /bdapps/hadoop/etc/hadoop/* node2:/bdapps/hadoop/etc/hadoop/
scp /bdapps/hadoop/etc/hadoop/* node3:/bdapps/hadoop/etc/hadoop/
scp /bdapps/hadoop/etc/hadoop/* node4:/bdapps/hadoop/etc/hadoop/
Start the Hadoop services
On the master node
As in pseudo-distributed mode, the directory the NameNode uses to store its data must be initialized before the HDFS cluster's NN can start. If the directory named by dfs.namenode.name.dir in hdfs-site.xml does not exist, the format command will create it; if it already exists, make sure its permissions are set correctly, because formatting will then wipe all data inside it and build a fresh file system. Run the following on the master node as the user that runs HDFS (hadoop here):
hdfs namenode -format
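If formatting succeeds, the name directory is populated with the new file system's metadata; a quick way to confirm (exact fsimage file names will differ):

ls /data/hadoop/hdfs/nn/current/    # VERSION, fsimage_*, seen_txid, ...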
There are two ways to start the cluster:
1. Log in to each node and start its services by hand.
2. Start the whole cluster from the master node.
On a large cluster, starting every service on every node one by one gets tedious, so Hadoop provides start-dfs.sh and stop-dfs.sh to start and stop the whole HDFS cluster, and start-yarn.sh and stop-yarn.sh to do the same for the YARN cluster.
[hadoop@node1 hadoop]$ start-dfs.sh
Starting namenodes on [master]
hadoop@master's password:
master: starting namenode, logging to /bdapps/hadoop/logs/hadoop-hadoop-namenode-node1.out
node4: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node4.out
node2: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node2.out
node3: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node3.out
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /bdapps/hadoop/logs/hadoop-hadoop-secondarynamenode-node1.out
[hadoop@node1 hadoop]$ jps
69127 NameNode
69691 Jps
69566 SecondaryNameNode
Log in to the DataNodes and check the processes:
[root@node2 ~]# jps
66968 DataNode
67436 Jps

[root@node3 ~]# jps
109281 DataNode
109991 Jps

[root@node4 ~]# jps
108753 DataNode
109674 Jps
Stop all services in the cluster:
[hadoop@node1 hadoop]$ stop-dfs.sh
Stopping namenodes on [master]
master: stopping namenode
node4: stopping datanode
node2: stopping datanode
node3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
Testing
Upload a file from the master node:
[hadoop@node1 ~]$ hdfs dfs -mkdir /test
[hadoop@node1 ~]$ hdfs dfs -put /etc/fstab /test/fstab
[hadoop@node1 ~]$ hdfs dfs -ls -R /test
-rw-r--r--   2 hadoop supergroup        223 2018-03-27 16:48 /test/fstab
Log in to node2:
[hadoop@node2 ~]$ ls /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/
[hadoop@node2 ~]$
There is no fstab block here.
Log in to node3; the fstab file is there:
[hadoop@node3 ~]$ cat /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/subdir0/subdir0/blk_1073741825
UUID=dbcbab6c-2836-4ecd-8d1b-2da8fd160694 /    ext4    defaults    1 1
tmpfs        /dev/shm    tmpfs   defaults        0 0
devpts       /dev/pts    devpts  gid=5,mode=620  0 0
sysfs        /sys        sysfs   defaults        0 0
proc         /proc       proc    defaults        0 0
/dev/vdb1    none        swap    sw              0 0
Log in to node4; the fstab file is there as well:
[hadoop@node4 root]$ cat /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/subdir0/subdir0/blk_1073741825
UUID=dbcbab6c-2836-4ecd-8d1b-2da8fd160694 /    ext4    defaults    1 1
tmpfs        /dev/shm    tmpfs   defaults        0 0
devpts       /dev/pts    devpts  gid=5,mode=620  0 0
sysfs        /sys        sysfs   defaults        0 0
proc         /proc       proc    defaults        0 0
/dev/vdb1    none        swap    sw              0 0
Conclusion: because dfs.replication is set to 2 (also visible as the "2" in the hdfs dfs -ls output above), the file's blocks are replicated on only two of the three DataNodes (node3 and node4 here); node2 holds no copy.
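The same placement can be confirmed from the NameNode instead of poking around in DataNode directories: hdfs fsck reports each block of a file along with the DataNodes holding its replicas:

[hadoop@node1 ~]$ hdfs fsck /test/fstab -files -blocks -locations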
Start the YARN cluster
Log in to node1 (master) and run start-yarn.sh:
[hadoop@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-resourcemanager-node1.out
node4: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node4.out
node2: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node3.out
[hadoop@node1 ~]$ jps
78115 ResourceManager
71574 NameNode
71820 SecondaryNameNode
78382 Jps
Log in to node2 and run jps; the NodeManager service is up:
[ansible@node2 ~]$ sudo su - hadoop
[hadoop@node2 ~]$ jps
68800 DataNode
75400 Jps
74856 NodeManager
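With HDFS and YARN both up, a quick end-to-end check is to submit one of the example jobs shipped with the distribution from the master node. A minimal sketch (the jar path follows the Hadoop 2.7.5 layout used here; pi's two arguments are the number of map tasks and the samples per map):

[hadoop@node1 ~]$ yarn jar /bdapps/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 10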
Check the web UI consoles
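For Hadoop 2.x the consoles are served on the NameNode's default HTTP port (50070) and on the yarn.resourcemanager.webapp.address configured earlier:

http://master:50070    # HDFS NameNode web UI
http://master:8088     # YARN ResourceManager web UI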
Other references:
http://www.codeceo.com/understand-hadoop-hbase-hive-spark-distributed-system-architecture.html