
Hadoop HA + ZooKeeper + HBase

Published: 2020-07-06 19:41:37 · Source: Web · Author: Leyin · Category: Relational Databases

I. Environment

1. OS: Red Hat Enterprise Linux Server release 6.4


2. Required packages

    hadoop-2.2.0.tar.gz  

    hbase-0.98.2-hadoop2-bin.tar.gz  

    jdk-7u67-linux-x64.tar.gz  

    zookeeper-3.4.6.tar.gz


3. Services running on each host

192.168.10.40 master1 namenode resourcemanager   ZKFC   hmaster  

192.168.10.41 master2 namenode                   ZKFC   hmaster(backup)

192.168.10.42 slave1  datanode nodemanager  journalnode  hregionserver  zookeeper

192.168.10.43 slave2  datanode nodemanager  journalnode  hregionserver  zookeeper

192.168.10.44 slave3  datanode nodemanager  journalnode  hregionserver  zookeeper


II. Installation steps (to make syncing easier, operations are generally performed on master1)

1. Passwordless SSH login

(create ~/.ssh first if it does not exist: mkdir -m 700 ~/.ssh)
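
The article only hints at this step, so here is a minimal sketch, assuming the richmail user and the five hostnames from the table above (run it on master1, and again on master2 so either master can fence the other over SSH):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair with an empty passphrase
for h in master1 master2 slave1 slave2 slave3; do
  ssh-copy-id richmail@$h                   # append the public key to each node's authorized_keys
done

The private key path must match dfs.ha.fencing.ssh.private-key-files in hdfs-site.xml below.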


2. JDK installation (on every node)

1) Unpack

tar zxf jdk-7u67-linux-x64.tar.gz 

ln -sf jdk1.7.0_67 jdk


2) Configure

sudo vim /etc/profile

export JAVA_HOME=/home/richmail/jdk

export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar


3) Apply the changes

source /etc/profile
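
A quick check that the new JDK is the one on the PATH:

java -version    # should report something like: java version "1.7.0_67"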


3. ZooKeeper installation

1) Unpack

tar zxf zookeeper-3.4.6.tar.gz 

ln -sf zookeeper-3.4.6 zookeeper


2) Configure

vim zookeeper/bin/zkEnv.sh

ZOO_LOG_DIR="/home/richmail/zookeeper/logs"


cd zookeeper/conf

cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/home/richmail/zookeeper/data

dataLogDir=/home/richmail/zookeeper/logs

clientPort=2181

server.1=slave1:2888:3888

server.2=slave2:2888:3888

server.3=slave3:2888:3888


mkdir -p /home/richmail/zookeeper/{data,logs}


3) Copy to slave1, slave2, and slave3

cd

scp -rv zookeeper slave1:~/
ssh slave1 'echo 1 > /home/richmail/zookeeper/data/myid'

scp -rv zookeeper slave2:~/
ssh slave2 'echo 2 > /home/richmail/zookeeper/data/myid'

scp -rv zookeeper slave3:~/
ssh slave3 'echo 3 > /home/richmail/zookeeper/data/myid'
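
A quick sanity check that each myid matches its server.N line in zoo.cfg:

for h in slave1 slave2 slave3; do ssh $h cat /home/richmail/zookeeper/data/myid; done
# expected output: 1, 2, 3 (one line per host)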


4) Start ZooKeeper

On each of slave1, slave2, and slave3, start ZooKeeper:

cd ~/zookeeper/bin

./zkServer.sh start
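
Once all three have started, each node should report its role; with a three-node ensemble there is exactly one leader:

./zkServer.sh status
# Mode: leader      (on one node)
# Mode: follower    (on the other two)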


4. Hadoop installation

1) Unpack

tar zxf hadoop-2.2.0.tar.gz

ln -sf hadoop-2.2.0 hadoop


2) Configure

cd /home/richmail/hadoop/etc/hadoop

vim core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/richmail/hadoop/storage/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>slave1:2181,slave2:2181,slave3:2181</value>
  </property>
</configuration>


mkdir -p /home/richmail/hadoop/storage/tmp

vim hadoop-env.sh 

export JAVA_HOME=/home/richmail/jdk

export HADOOP_PID_DIR=/var/hadoop/pids    # defaults to /tmp


vim hdfs-site.xml 

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>master1,master2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.master1</name>
    <value>master1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.master2</name>
    <value>master2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.master1</name>
    <value>master1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.master2</name>
    <value>master2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://slave1:8485;slave2:8485;slave3:8485/cluster</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/richmail/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/richmail/hadoop/storage/journal</value>
  </property>
</configuration>


mkdir -p /home/richmail/hadoop/storage/journal

vim mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>


vim yarn-env.sh

export YARN_PID_DIR=/var/hadoop/pids


vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>


vim slaves

slave1

slave2

slave3


3) Copy to the other machines

cd

scp -rv hadoop master2:~/

scp -rv hadoop slave1:~/

scp -rv hadoop slave2:~/

scp -rv hadoop slave3:~/


4) Start Hadoop

1) On slave1, slave2, and slave3, start the JournalNode:

cd ~/hadoop/sbin

./hadoop-daemon.sh start journalnode
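
jps (bundled with the JDK) is a quick way to confirm the daemon is up on each slave:

jps    # the list should include a JournalNode process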


2) On master1:

cd ~/hadoop/bin

./hdfs zkfc -formatZK

./hdfs namenode -format

cd ../sbin

./hadoop-daemon.sh start namenode

./start-all.sh


3) On master2:

cd ~/hadoop/bin

./hdfs namenode -bootstrapStandby

cd ../sbin

./hadoop-daemon.sh start namenode
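
If everything came up cleanly, jps on the masters should show something like the following HA daemons (process names as of Hadoop 2.2):

jps
# master1: NameNode, ResourceManager, DFSZKFailoverController
# master2: NameNode, DFSZKFailoverController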


5) Verify

Browse to 192.168.10.40:50070 and 192.168.10.41:50070; you should see both NameNodes, one active and one standby.

Or run these commands on a NameNode:

hdfs haadmin -getServiceState master1

hdfs haadmin -getServiceState master2

Running hdfs haadmin -failover --forceactive master1 master2 swaps the active and standby roles between the two nodes.
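
Sample output while master1 holds the active role (the roles reverse after the forced failover above):

$ hdfs haadmin -getServiceState master1
active
$ hdfs haadmin -getServiceState master2
standby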


5. HBase installation

1) Unpack

tar zxf hbase-0.98.2-hadoop2-bin.tar.gz 

ln -sf hbase-0.98.2-hadoop2 hbase 


2) Configure

cd ~/hbase/conf

vim hbase-env.sh

export JAVA_HOME=/home/richmail/jdk
export HBASE_MANAGES_ZK=false
export HBASE_PID_DIR=/var/hadoop/pids


vim regionservers

slave1

slave2

slave3


vim hbase-site.xml 

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cluster/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/richmail/hbase/zkdata</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/richmail/hbase/data</value>
  </property>
</configuration>


mkdir ~/hbase/{zkdata,data}

HBase fails at startup unless Hadoop's hdfs-site.xml is copied into hbase/conf; without it HBase cannot resolve the HA nameservice behind hdfs://cluster. Do this before the copy step below so every node gets the file.
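
For example, with this guide's directory layout (run on master1):

cp ~/hadoop/etc/hadoop/hdfs-site.xml ~/hbase/conf/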


3) Copy to the other machines

cd

scp -rv hbase master2:~/

scp -rv hbase slave1:~/

scp -rv hbase slave2:~/

scp -rv hbase slave3:~/


4) Start HBase

On master1:

cd ~/hbase/bin

./start-hbase.sh


On master2:

cd ~/hbase
./bin/hbase-daemon.sh start master --backup

At this point the cluster deployment is complete.
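
As a final smoke test, the HBase shell's status command should report the three region servers (exact wording varies by version):

cd ~/hbase/bin
echo "status" | ./hbase shell
# e.g.: 3 servers, 0 dead, 0.0000 average load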
