1. Environment
OS: CentOS 7.0, 64-bit
namenode01 192.168.0.220
namenode02 192.168.0.221
datanode01 192.168.0.222
datanode02 192.168.0.223
datanode03 192.168.0.224
2. Configure the base environment
Add local hosts-file entries on every machine:
[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220 namenode01
192.168.0.221 namenode02
192.168.0.222 datanode01
192.168.0.223 datanode02
192.168.0.224 datanode03
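Since the same five entries must land in /etc/hosts on every node, it can help to generate them from one place. A minimal sketch; the `gen_hosts` helper name is ours, and the entries match the table in section 1 (append them on each node with `gen_hosts >> /etc/hosts`):

```shell
#!/bin/sh
# Emit the cluster's /etc/hosts entries (hypothetical helper; host names
# and IPs are the ones listed in section 1).
gen_hosts() {
cat <<'EOF'
192.168.0.220 namenode01
192.168.0.221 namenode02
192.168.0.222 datanode01
192.168.0.223 datanode02
192.168.0.224 datanode03
EOF
}
gen_hosts
```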
Create a hadoop user on all 5 machines and set its password to hadoop; only namenode01 is shown as an example.
[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
Configure passwordless SSH logins between the hadoop users on all 5 machines.
# On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
(randomart image omitted)
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03

# On namenode02
[root@namenode02 ~]# su - hadoop
[hadoop@namenode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a9:f5:0d:cb:c9:88:7b:71:f5:71:d8:a9:23:c6:85:6a hadoop@namenode02
The key's randomart image is:
(randomart image omitted)
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify
[hadoop@namenode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode01
[root@datanode01 ~]# su - hadoop
[hadoop@datanode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:72:20:69:64:e7:81:b7:03:64:41:5e:fa:88:db:5e hadoop@datanode01
The key's randomart image is:
(randomart image omitted)
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify
[hadoop@datanode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode01 ~]$ ssh datanode03 hostname
datanode03

# On datanode02
[hadoop@datanode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
32:aa:88:fa:ce:ec:51:6f:de:f4:06:c9:4e:9c:10:31 hadoop@datanode02
The key's randomart image is:
(randomart image omitted)
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify
[hadoop@datanode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode03
[root@datanode03 ~]# su - hadoop
[hadoop@datanode03 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f3:f3:3c:85:61:c6:e4:82:58:10:1f:d8:bf:71:89:b4 hadoop@datanode03
The key's randomart image is:
(randomart image omitted)
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify
[hadoop@datanode03 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode03 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode03 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode03 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode03 ~]$ ssh datanode03 hostname
datanode03
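The ssh-copy-id runs above are identical on every node except for the target name, so one loop can drive them all. A dry-run sketch (`print_copy_cmds` is our name; it only prints the commands — drop the `echo` wrapper to execute them on a node that already has its key pair):

```shell
#!/bin/sh
# Print the ssh-copy-id command for each cluster node (dry run).
print_copy_cmds() {
  for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub $h"
  done
}
print_copy_cmds
```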
3. Install the JDK
[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/
# Create the environment-variable profile script
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
# Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java
# Test
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Copy the unpacked JDK and the profile script to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/
# Test the result, using namenode02 as an example
[root@namenode02 ~]# source /etc/profile.d/java.sh
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
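The JDK distribution step is the same pair of scp commands for each of the four remaining nodes. Sketched below as a dry run, using the paths from above (`print_jdk_dist` is our name; remove the `echo` wrappers to copy for real):

```shell
#!/bin/sh
# Print the scp commands that push the JDK and profile script to each node.
print_jdk_dist() {
  for h in namenode02 datanode01 datanode02 datanode03; do
    echo "scp -r /usr/local/jdk1.8.0_74 $h:/usr/local/"
    echo "scp /etc/profile.d/java.sh $h:/etc/profile.d/"
  done
}
print_jdk_dist
```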
4. Install Hadoop
# Download Hadoop
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
# Add the Hadoop environment-variable profile script
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
# Switch to the hadoop user and check that the JDK works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Edit the Hadoop configuration files
# Set JAVA_HOME in the Hadoop environment file
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74
# Edit core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
# Edit hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <!-- NameNode metadata directory -->
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs/dfs/name</value>
  </property>
  <property>
    <!-- DataNode data directory -->
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <!-- must match the nameservice used in core-site.xml -->
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <!-- the NameNode IDs -->
    <name>dfs.ha.namenodes.mycluster</name>
    <value>namenode01,namenode02</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode01</name>
    <value>namenode01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.namenode02</name>
    <value>namenode02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode01</name>
    <value>namenode01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.namenode02</name>
    <value>namenode02:50070</value>
  </property>
  <property>
    <!-- where the NameNodes write their edits: list every JournalNode -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
  </property>
  <property>
    <!-- JournalNode directory -->
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hdfs/journal</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <!-- how fencing is performed -->
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <!-- key used for SSH authentication between the hosts -->
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>6000</value>
  </property>
  <property>
    <!-- automatic failover is off for now; ZooKeeper will handle switching later -->
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>false</value>
  </property>
  <property>
    <!-- replication factor; the default is 3, and fewer replicas than this raises errors -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
# Edit yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>namenode01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>namenode01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>namenode01:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>namenode01:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>namenode01:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>15360</value>
  </property>
</configuration>
# Edit mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>namenode01:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>namenode01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>namenode01:19888</value>
  </property>
</configuration>
# Edit the slaves file
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
datanode01
datanode02
datanode03
# On namenode01, switch back to root and create the data directory
[root@namenode01 ~]# mkdir -p /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/
# Copy the hadoop profile script to the other 4 machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/
# Copy the Hadoop installation to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
# Fix ownership, using namenode02 as an example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx 1 root root 24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x 9 hadoop hadoop 139 Apr 28 17:16 hadoop-2.5.2
# Create the data directory
[root@namenode02 ~]# mkdir -p /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/
# Check the JDK environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop
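A property-name typo in hdfs-site.xml (for instance in dfs.ha.fencing.methods) fails silently, so it is worth grepping for the exact HA keys this setup relies on before starting anything. A sketch; `check_ha_keys` is our hypothetical helper, and the HDFS_SITE variable may override the default path used above:

```shell
#!/bin/sh
# Check that the critical HA property names appear in hdfs-site.xml.
check_ha_keys() {
  f=$1
  for key in dfs.nameservices \
             dfs.ha.namenodes.mycluster \
             dfs.client.failover.proxy.provider.mycluster \
             dfs.ha.fencing.methods; do
    if grep -qF "<name>$key</name>" "$f" 2>/dev/null; then
      echo "ok: $key"
    else
      echo "MISSING: $key"
    fi
  done
}
check_ha_keys "${HDFS_SITE:-/usr/local/hadoop/etc/hadoop/hdfs-site.xml}"
```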
5. Start Hadoop
# Run hadoop-daemon.sh start journalnode on all 5 servers, as the hadoop user
# (only namenode01's output is shown)
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
# On namenode01
[hadoop@namenode01 ~]$ hadoop namenode -format
# Note: only a first-time start needs `hadoop namenode -format`; otherwise run `hdfs namenode -initializeSharedEdits`.
# A first-time start means HA was configured at installation time and HDFS holds no data yet; namenode01 must be formatted.
# Otherwise, an HDFS without HA is already running and holds data, and HA is being added along with a second NameNode;
# in that case namenode01 runs initializeSharedEdits to initialize the JournalNodes and share its edits files with them.
# Start the NameNodes
# On namenode01
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
# On namenode02, first sync the metadata from namenode01, then start the standby NameNode
[hadoop@namenode02 ~]$ hdfs namenode -bootstrapStandby
[hadoop@namenode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
# Start the DataNodes
[hadoop@datanode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode03 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
# Verify
# Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode    # NameNode role
2270 JournalNode
2702 Jps
# Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2680 Jps
# Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2466 Jps
2358 DataNode    # DataNode role
2267 JournalNode
# Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2691 Jps
2612 DataNode    # DataNode role
2265 JournalNode
# Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode    # DataNode role
12067 Jps
11895 JournalNode
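The per-host jps checks above can be folded into a small helper that, given a node's jps output, confirms the daemons that node should be running at this stage. A sketch; `has_daemons` is a hypothetical name, and the sample input is namenode01's output from above:

```shell
#!/bin/sh
# Given a node's jps output and a list of expected daemon names, confirm
# that each daemon appears in the output.
has_daemons() {
  jps_out=$1; shift
  for d in "$@"; do
    echo "$jps_out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons present"
}
# On a live cluster: has_daemons "$(ssh datanode01 jps)" DataNode JournalNode
has_daemons "2467 NameNode
2270 JournalNode
2702 Jps" NameNode JournalNode   # prints "all daemons present"
```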
6. Set up ZooKeeper for high availability
# Download the software; install it as root
[root@namenode01 ~]# wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
# Unpack into /usr/local and fix ownership
[root@namenode01 ~]# tar xf zookeeper-3.4.6.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
# Edit the ZooKeeper configuration file
[root@namenode01 ~]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.6/conf/zoo.cfg
[root@namenode01 ~]# egrep -v "^#|^$" /usr/local/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/hdfs/zookeeper/data
dataLogDir=/data/hdfs/zookeeper/logs
clientPort=2181
server.1=namenode01:2888:3888
server.2=namenode02:2888:3888
server.3=datanode01:2888:3888
server.4=datanode02:2888:3888
server.5=datanode03:2888:3888
# Configure the ZooKeeper environment variables
[root@namenode01 ~]# cat /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
# On namenode01, create the directories and the myid file
[root@namenode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode01 ~]# tree /data/hdfs/zookeeper
/data/hdfs/zookeeper
├── data
└── logs
[root@namenode01 ~]# echo "1" >/data/hdfs/zookeeper/data/myid
[root@namenode01 ~]# cat /data/hdfs/zookeeper/data/myid
1
[root@namenode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode01 ~]# ll /data/hdfs/
total 0
drwxrwxr-x 3 hadoop hadoop 17 Apr 29 10:05 dfs
drwxrwxr-x 3 hadoop hadoop 22 Apr 29 10:05 journal
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:42 zookeeper
# Copy the ZooKeeper installation and profile script to the remaining machines; namenode02 is shown as an example
[root@namenode01 ~]# scp -r /usr/local/zookeeper-3.4.6 namenode02:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/zookeeper.sh namenode02:/etc/profile.d/
# On namenode02, create the directories and the myid file, and fix ownership
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@namenode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:47 zookeeper-3.4.6
[root@namenode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode02 ~]# echo "2" >/data/hdfs/zookeeper/data/myid
[root@namenode02 ~]# cat /data/hdfs/zookeeper/data/myid
2
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:50 zookeeper
# On datanode01, create the directories and the myid file, and fix ownership
[root@datanode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode01 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:48 zookeeper-3.4.6
[root@datanode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode01 ~]# echo "3" >/data/hdfs/zookeeper/data/myid
[root@datanode01 ~]# cat /data/hdfs/zookeeper/data/myid
3
[root@datanode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@datanode01 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:54 zookeeper
# On datanode02, create the directories and the myid file, and fix ownership
[root@datanode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 10:49 zookeeper-3.4.6
[root@datanode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode02 ~]# echo "4" >/data/hdfs/zookeeper/data/myid
[root@datanode02 ~]# cat /data/hdfs/zookeeper/data/myid
4
[root@datanode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@datanode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:56 zookeeper
# On datanode03, create the directories and the myid file, and fix ownership
[root@datanode03 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@datanode03 ~]# ll /usr/local/ |grep zook
drwxr-xr-x 10 hadoop hadoop 4096 Apr 29 18:49 zookeeper-3.4.6
[root@datanode03 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@datanode03 ~]# echo "5" >/data/hdfs/zookeeper/data/myid
[root@datanode03 ~]# cat /data/hdfs/zookeeper/data/myid
5
[root@datanode03 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
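Each node's myid must agree with its server.N line in zoo.cfg, which is easy to get wrong when writing five files by hand. Deriving both from one ordered host list keeps them consistent; a dry-run sketch (`print_zk_ids` is our name, and the host order matches zoo.cfg above):

```shell
#!/bin/sh
# Print each zoo.cfg server line together with the myid its host must hold.
print_zk_ids() {
  i=1
  for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo "server.$i=$h:2888:3888  (myid on $h: $i)"
    i=$((i + 1))
  done
}
print_zk_ids
```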
[root@datanode03 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 18:57 zookeeper
# Start ZooKeeper as the hadoop user on all 5 machines
# Start on namenode01
[hadoop@namenode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on namenode02
[hadoop@namenode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode01
[hadoop@datanode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode02
[hadoop@datanode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start on datanode03
[hadoop@datanode03 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode
3348 QuorumPeerMain    # ZooKeeper process
3483 Jps
2270 JournalNode
[hadoop@namenode01 ~]$ zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2888 QuorumPeerMain
2936 Jps
[hadoop@namenode01 ~]$ ssh namenode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2881 QuorumPeerMain
2358 DataNode
2267 JournalNode
2955 Jps
[hadoop@namenode01 ~]$ ssh datanode01 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2849 QuorumPeerMain
2612 DataNode
2885 Jps
2265 JournalNode
[hadoop@namenode01 ~]$ ssh datanode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode
12276 Jps
12213 QuorumPeerMain
11895 JournalNode
[hadoop@namenode01 ~]$ ssh datanode03 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
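With five servers in zoo.cfg, a healthy ensemble shows exactly one leader and four followers, as in the status output above. A small helper (`check_quorum`, our name) can assert that from the concatenated `zkServer.sh status` outputs:

```shell
#!/bin/sh
# Count leader/follower modes in collected zkServer.sh status output.
check_quorum() {
  leaders=$(echo "$1" | grep -c "Mode: leader" || true)
  followers=$(echo "$1" | grep -c "Mode: follower" || true)
  if [ "$leaders" -eq 1 ] && [ "$followers" -eq 4 ]; then
    echo "quorum healthy"
  else
    echo "unexpected: $leaders leader(s), $followers follower(s)"
  fi
}
check_quorum "Mode: follower
Mode: follower
Mode: follower
Mode: follower
Mode: leader"   # prints "quorum healthy"
```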