
Hadoop + Hive (MySQL) + HBase + ZooKeeper


I. Hadoop Installation

Virtual machines (CentOS 7):

Master: 192.168.0.228

Slave: 192.168.0.207

Software:

apache-hive-1.2.1-bin.tar.gz

hadoop-2.6.0-cdh5.4.8.tar.gz

jdk-8u65-linux-x64.tar.gz

mysql-connector-java-5.1.31-bin.jar

hbase-0.98.15-hadoop2-bin.tar

zookeeper-3.4.6.tar

1. Disable the firewall and SELinux

  systemctl disable firewalld.service

  systemctl stop firewalld.service

  setenforce 0

  vim /etc/selinux/config   # disable permanently

  Change SELINUX=enforcing to SELINUX=disabled
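A quick sanity check that both changes took effect:

  getenforce                      # prints Permissive now, Disabled after a reboot
  systemctl is-active firewalld   # prints inactive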

2. Set the hostnames

  On 192.168.0.228: echo "master" > /etc/hostname

  On 192.168.0.207: echo "slave" > /etc/hostname

3. Hostname resolution

  On both machines, add the IP addresses and hostnames to the /etc/hosts file, as shown below.
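For example, append these two lines on each node:

  192.168.0.228 master

  192.168.0.207 slave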

   

4. Set up SSH trust

  On master:

  yum -y install sshpass

  ssh-keygen   # press Enter at every prompt

  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.207

On slave:

  yum -y install sshpass

  ssh-keygen   # press Enter at every prompt

  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.228

 

If you can now ssh to the other node without a password prompt, the trust is in place.

5. Install the JDK

  Both machines need this step.

  tar zxvf jdk-8u65-linux-x64.tar.gz

  mv jdk1.8.0_65 /usr/jdk

  Set the environment variables:

  vim /etc/profile

export JAVA_HOME=/usr/jdk

export JRE_HOME=/usr/jdk/jre

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Run source /etc/profile

Test it:

java -version   # should report java version "1.8.0_65"


6. Install Hadoop

  tar zxvf hadoop-2.6.0-cdh5.4.8.tar.gz

  mv hadoop-2.6.0-cdh5.4.8 /usr/hadoop

  cd /usr/hadoop

  mkdir -p dfs/name

  mkdir -p dfs/data

  mkdir -p tmp

 6.1 Edit the configuration files

     slaves

     yarn-env.sh

     yarn-site.xml

     mapred-site.xml

     hdfs-site.xml

     core-site.xml

     hadoop-env.sh

 

cd /usr/hadoop/etc/hadoop

 vim slaves

 192.168.0.207   # add the slave's IP

 

vim hadoop-env.sh / vim yarn-env.sh

export JAVA_HOME=/usr/jdk   # add the Java variable to both files

 

vim core-site.xml

<configuration>

        <property>

                <name>fs.defaultFS</name>

                <value>hdfs://192.168.0.228:9000</value>

        </property>

        <property>

                <name>io.file.buffer.size</name>

                <value>131702</value>

        </property>

        <property>

                <name>hadoop.tmp.dir</name>

                <value>file:/usr/hadoop/tmp</value>

        </property>

        <property>

                <name>hadoop.proxyuser.hadoop.hosts</name>

                <value>*</value>

        </property>

        <property>

                <name>hadoop.proxyuser.hadoop.groups</name>

                <value>*</value>

        </property>

</configuration>

 

 

vim hdfs-site.xml

<configuration>

        <property>

                <name>dfs.namenode.name.dir</name>

                <value>file:/usr/hadoop/dfs/name</value>

        </property>

        <property>

                <name>dfs.datanode.data.dir</name>

                <value>file:/usr/hadoop/dfs/data</value>

        </property>

        <property>

                <name>dfs.replication</name>

                <value>2</value>

        </property>

        <property>

                <name>dfs.namenode.secondary.http-address</name>

                <value>192.168.0.228:9001</value>

        </property>

        <property>

                <name>dfs.webhdfs.enabled</name>

                <value>true</value>

        </property>

</configuration>

 

vim mapred-site.xml

<configuration>

        <property>

                <name>mapreduce.framework.name</name>

                <value>yarn</value>

        </property>

        <property>

                <name>mapreduce.jobhistory.address</name>

                <value>192.168.0.228:10020</value>

        </property>

        <property>

                <name>mapreduce.jobhistory.webapp.address</name>

                <value>192.168.0.228:19888</value>

        </property>

</configuration>

 

 

 

vim yarn-site.xml

<configuration>

        <property>

                <name>yarn.nodemanager.aux-services</name>

                <value>mapreduce_shuffle</value>

        </property>

        <property>

                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

                <value>org.apache.hadoop.mapred.ShuffleHandler</value>

        </property>

        <property>

                <name>yarn.resourcemanager.address</name>

                <value>192.168.0.228:8032</value>

        </property>

        <property>

                <name>yarn.resourcemanager.scheduler.address</name>

                <value>192.168.0.228:8030</value>

        </property>

        <property>

                <name>yarn.resourcemanager.resource-tracker.address</name>

                <value>192.168.0.228:8031</value>

        </property>

        <property>

                <name>yarn.resourcemanager.admin.address</name>

                <value>192.168.0.228:8033</value>

        </property>

        <property>

                <name>yarn.resourcemanager.webapp.address</name>

                <value>192.168.0.228:8088</value>

        </property>

        <property>

                <name>yarn.nodemanager.resource.memory-mb</name>

                <value>768</value>

        </property>

</configuration>

Copy the directory to the slave machine:

scp -r /usr/hadoop root@192.168.0.207:/usr/

 

Format the namenode:

./bin/hdfs namenode -format

Start HDFS and YARN:

./sbin/start-dfs.sh
./sbin/start-yarn.sh

Verify with jps.
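If everything started cleanly, jps output should include roughly the following daemons (PIDs will differ):

master: NameNode, SecondaryNameNode, ResourceManager

slave:  DataNode, NodeManager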


Browse to http://192.168.0.228:50070 (HDFS NameNode web UI) and http://192.168.0.228:8088 (YARN ResourceManager web UI).

II. Installing MySQL and Hive

Local metastore mode stores Hive's metadata in a local database (usually MySQL), which allows multiple users and concurrent sessions.

MySQL

wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm

rpm -ivh mysql-community-release-el7-5.noarch.rpm

yum -y install mysql-community-server

systemctl start mysqld   # start MySQL

mysqladmin -uroot password 'password'   # set a password for root

mysql -uroot -ppassword

create database hive;   -- create the hive metastore database

grant all on hive.* to 'hive'@'localhost' identified by 'hive';   -- grant privileges
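A quick check that the new account works (using the password from the grant above):

mysql -uhive -phive -e 'show databases;'   # should list the hive database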

 

Hive

tar zxf apache-hive-1.2.1-bin.tar.gz

mv apache-hive-1.2.1-bin/ /usr/hadoop/hive

Configure the environment variables:

vim /etc/profile

export HIVE_HOME=/usr/hadoop/hive

export PATH=$HIVE_HOME/bin:$HIVE_HOME/conf:$PATH

Run source /etc/profile

 

mv mysql-connector-java-5.1.31-bin.jar /usr/hadoop/hive/lib   # copy the MySQL JDBC driver into Hive's lib directory

 

 

cd /usr/hadoop/hive/conf

 

cp hive-default.xml.template hive-site.xml

 

vim hive-site.xml   # edit the configuration; the key settings are sketched below
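The key settings point the metastore at the MySQL database created earlier. A minimal sketch, using the standard Hive 1.2 metastore properties, with the user name and password from the grant statement above:

<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>hive</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>hive</value>
        </property>
</configuration>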


cd /usr/hadoop/hive/bin/

 Start Hive:
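The launcher is the hive script in this directory; a one-line query makes a quick smoke test:

./hive

hive> show databases;   -- should print at least "default" if the metastore connection works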


III. Installing ZooKeeper and HBase

1. ZooKeeper

   Configure the master as follows:

   tar zxf zookeeper-3.4.6.tar

   mv zookeeper-3.4.6 /usr/hadoop/zookeeper

   Change the file ownership:

   chown -R root:root /usr/hadoop/zookeeper

   cd /usr/hadoop/zookeeper

   mkdir data   # create the zookeeper data directory

  

   Configure the environment variables: vim /etc/profile

   Add: export ZOOKEEPER_HOME=/usr/hadoop/zookeeper

        export PATH=$PATH:$ZOOKEEPER_HOME/bin

   Run source /etc/profile

   The configuration files live in the conf/ directory. Copy zoo_sample.cfg to zoo.cfg and configure it as follows:

   cp zoo_sample.cfg zoo.cfg

vim zoo.cfg


tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/hadoop/zookeeper/data

clientPort=2181

Enter the IP addresses (or hostnames) of master and slave:

server.1=192.168.0.228:2888:3888

server.2=192.168.0.207:2888:3888

 

Create the myid file in the data directory; it contains the digit after "server." for this machine's IP in zoo.cfg:

echo 1 > /usr/hadoop/zookeeper/data/myid

 

Copy the directory to the slave node:

scp -r /usr/hadoop/zookeeper/ root@192.168.0.207:/usr/hadoop/

 

 

Configure the slave:

   Configure the environment variables: vim /etc/profile

   Add: export ZOOKEEPER_HOME=/usr/hadoop/zookeeper

        export PATH=$PATH:$ZOOKEEPER_HOME/bin

   Run source /etc/profile

 

Create the myid file, again using the digit after "server." for this machine's IP in zoo.cfg:

  echo 2 > /usr/hadoop/zookeeper/data/myid

 

Start it on both nodes:

  [root@master bin]# /usr/hadoop/zookeeper/bin/zkServer.sh start

  Run jps to check; a QuorumPeerMain process should be listed.
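zkServer.sh can also report each node's role directly; with both nodes up, one should be the leader and the other a follower:

  /usr/hadoop/zookeeper/bin/zkServer.sh status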

     


  2. Install HBase

      1. Unpack the hbase tarball with tar.

      2. Configure hbase:

      a. conf/hbase-env.sh

      export JAVA_HOME=/usr/jdk

      export HBASE_MANAGES_ZK=false   # true would use HBase's bundled zookeeper, so a separate install would be unnecessary; since zookeeper was installed separately here, set it to false

    b. conf/hbase-site.xml

   The first configuration below uses HBase's bundled zookeeper:

<configuration> 

<property> 

<name>hbase.rootdir</name> 

<value>hdfs://master:9000/hbase</value> 

</property> 

<property> 

<name>hbase.cluster.distributed</name> 

<value>true</value> 

</property> 

<property> 

<name>hbase.zookeeper.quorum</name> 

<value>master,slave</value> 

</property> 

<property> 

<name>dfs.replication</name> 

<value>2</value> 

<description> 

</description> 

</property> 

</configuration> 

With a separately installed zookeeper, use the following configuration instead:

<configuration> 

<property> 

<name>hbase.rootdir</name> 

<value>hdfs://master:9000/hbase</value> 

</property> 

<property> 

<name>hbase.cluster.distributed</name> 

<value>true</value> 

</property> 

<property> 

<name>hbase.zookeeper.quorum</name> 

<value>master,slave</value> 

</property> 

<property> 

<name>dfs.replication</name> 

<value>2</value> 

<description> 

</description> 

</property> 

 

<property> 

<name>hbase.zookeeper.property.dataDir</name> 

<value>/home/hadoop/zk</value> 

<description> 

</description> 

</property> 

 

 

   

</configuration>

Note that the hbase.rootdir setting must agree with Hadoop's fs.defaultFS configuration. 

cconf/regionservers 

slave1 

slave2 

slave3 

到此hbase的配置已完成,用scp命令復(fù)制到slave1~salve3中。 
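The scp command itself isn't spelled out; assuming hbase was unpacked to /usr/hadoop/hbase (the path is an assumption, chosen to match the layout above), it would be:

scp -r /usr/hadoop/hbase root@192.168.0.207:/usr/hadoop/   # /usr/hadoop/hbase is an assumed path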

 

Start hbase:

start-hbase.sh 

Check with jps that HMaster is running (and HRegionServer on the slave), or open the web UI at master:60010.
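A quick functional test from the hbase shell (the table and column-family names are arbitrary examples):

hbase shell

create 'test', 'cf'                    # table with one column family
put 'test', 'row1', 'cf:a', 'value1'   # write one cell
scan 'test'                            # should return the row just written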

