
Installing and Configuring Hive


This article is a detailed walkthrough of installing and configuring Hive. Many people are still unsure how to do this, so the steps are summarized below; without further ado, let's get started.

1. Prepare the Hive installation package

Download the Hive installation package from the official website.

1.1 Download Hive

Open the Apache Hive download page and click apache-hive-1.2.2-bin.tar.gz to download it (screenshot omitted).

1.2 Upload the Hive installation package

From our earlier environment setup we know that node1 already runs the important NameNode, ResourceManager, and SecondaryNameNode processes, and that node3 hosts the CentOS desktop and IDEA, both of which consume a good deal of system resources. We therefore plan to install Hive on node2, so upload the Hive package to node2 with the Xftp tool in Xshell.

Once the transfer finishes, the package sits in root's home directory on node2 (upload screenshots omitted).

1.3 Extract the Hive installation package

#1. Extract the Hive tarball into /opt/bigdata/
[root@node2 ~]# tar -xzvf apache-hive-1.2.2-bin.tar.gz -C /opt/bigdata/   # press Enter after typing the command
#2. Change to the bigdata directory
[root@node2 ~]# cd /opt/bigdata/
#3. Change the owner and group of the Hive installation directory to hadoop:hadoop
[root@node2 bigdata]# chown -R hadoop:hadoop apache-hive-1.2.2-bin/
#4. Set the permissions on the Hive installation directory
[root@node2 bigdata]# chmod -R 755  apache-hive-1.2.2-bin/  

2. Embedded mode

In embedded mode, Hive's metadata is stored in the embedded Derby database. Only one session can be connected at a time; attempting to open additional sessions produces an error.

2.1 Start Hive

[root@node2 bigdata]# chmod -R 755  apache-hive-1.2.2-bin/
You have new mail in /var/spool/mail/root
#1. With ownership and permissions fixed, switch to the hadoop user with su - hadoop
[root@node2 bigdata]# su - hadoop
Last login: Mon Jul  8 17:44:54 CST 2019 on pts/0
#2. Change to the Hive installation directory
[hadoop@node2 ~]$ cd /opt/bigdata/apache-hive-1.2.2-bin/
#3. First list the layout of the Hive installation directory with ll:
[hadoop@node2 apache-hive-1.2.2-bin]$ ll
total 84
drwxr-xr-x 3 hadoop hadoop   119 Jul 11 10:57 bin
drwxr-xr-x 2 hadoop hadoop   212 Jul 11 10:57 conf
-rw-rw-r-- 1 hadoop hadoop 21099 Jul 11 11:29 derby.log
drwxr-xr-x 4 hadoop hadoop    34 Jul 11 10:57 examples
drwxr-xr-x 7 hadoop hadoop    68 Jul 11 10:57 hcatalog
drwxr-xr-x 4 hadoop hadoop  8192 Jul 11 10:57 lib
-rwxr-xr-x 1 hadoop hadoop 24754 Mar 31  2017 LICENSE
-rwxr-xr-x 1 hadoop hadoop   397 Mar 31  2017 NOTICE
-rwxr-xr-x 1 hadoop hadoop  4374 Apr  1  2017 README.txt
-rwxr-xr-x 1 hadoop hadoop  4255 Apr  1  2017 RELEASE_NOTES.txt
drwxr-xr-x 3 hadoop hadoop    23 Jul 11 10:57 scripts
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive  # start hive
# Ignore the following log line; it does not affect Hive
ls: cannot access /opt/spark-2.4.3-bin-hadoop2.7/lib/spark-assembly-*.jar: No such file or directory

#4. Hive initializes its logging
Logging initialized using configuration in jar:file:/opt/bigdata/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
#5. The Hive command prompt appears: hive>
#6. On a first install, list the databases; only the default database exists.
hive> show databases;
OK
default
Time taken: 0.56 seconds, Fetched: 1 row(s)
hive>
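With this session still open, the single-session limit of embedded Derby is easy to observe. A minimal sketch (the exact error text varies by Derby version):

# terminal 2: while the hive session above is running, a second session
# started from the same directory cannot boot the same metastore_db;
# Derby allows only one JVM to hold the database at a time, so hive fails
# with an error like "Another instance of Derby may have already booted the database"
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive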

After starting Hive, a file (derby.log) and a directory (metastore_db) appear in the Hive installation directory. Right-click the current node2 tab in Xshell and choose "Duplicate SSH channel" to open a second connection to node2 (screenshot omitted).
derby.log holds the background logs produced while running hive and executing SQL commands, while metastore_db stores the metadata for the databases and tables created in Hive (database names, table names, column names, column types, and so on).
We can also see that starting Hive with ./bin/hive creates a hive directory under /tmp on HDFS:

[hadoop@node3 ~]$ hdfs dfs -ls /tmp
Found 2 items
drwx------   - hadoop supergroup          0 2019-07-05 21:03 /tmp/hadoop-yarn
#1. The hive directory has been created
drwx-wx-wx   - hadoop supergroup          0 2019-07-11 17:46 /tmp/hive
[hadoop@node3 ~]$ hdfs dfs -ls /tmp/hive
Found 1 items
drwx------   - hadoop supergroup          0 2019-07-11 17:46 /tmp/hive/hadoop
[hadoop@node3 ~]$ hdfs dfs -ls /tmp/hive/hadoop
Found 1 items
drwx------   - hadoop supergroup          0 2019-07-11 17:46 /tmp/hive/hadoop/ebeadee8-6034-4beb-bc2b-35a632aa31ef
[hadoop@node3 ~]$ 
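This /tmp/hive directory is Hive's scratch (staging) directory on HDFS, controlled by the hive.exec.scratchdir property (default /tmp/hive in Hive 1.x). A sketch of overriding it for a single session; the path /tmp/hive_scratch is just an illustrative choice:

# start hive with a different HDFS scratch directory for this session only
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive --hiveconf hive.exec.scratchdir=/tmp/hive_scratch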

2.2 Use Hive

2.2.1 Create a table
hive> create table pokes(foo int,bar string);
OK
Time taken: 0.808 seconds
#1. show tables; lists all tables in the default database
hive> show tables;
OK
#2. The table we just created
pokes
Time taken: 0.072 seconds, Fetched: 1 row(s)
hive>
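To double-check the definition, describe the table (standard HiveQL; output omitted here):

hive> describe pokes;    -- should list the two columns: foo (int) and bar (string)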
2.2.2 Insert data
hive> insert into pokes(foo,bar) values(1,'first bar');

hive> 

Another option is to load a file into a Hive table with LOAD DATA; see the following steps.

#1. Use Hive's LOAD DATA command to load the sample data that ships with Hive into the pokes table
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Loading data to table default.pokes
Table default.pokes stats: [numFiles=1, numRows=0, totalSize=5812, rawDataSize=0]
OK
Time taken: 2.269 seconds
hive> select count(*) from pokes;
Query ID = hadoop_20190712141621_989a3ed0-b9cc-4546-8e20-941614789bdc
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1562912119783_0001, Tracking URL = http://node1:18088/proxy/application_1562912119783_0001/
Kill Command = /opt/bigdata/hadoop-2.7.3/bin/hadoop job  -kill job_1562912119783_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-07-12 14:16:32,167 Stage-1 map = 0%,  reduce = 0%
2019-07-12 14:16:46,940 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.65 sec
2019-07-12 14:16:53,221 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.13 sec
MapReduce Total cumulative CPU time: 6 seconds 130 msec
Ended Job = job_1562912119783_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.13 sec   HDFS Read: 12221 HDFS Write: 4 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 130 msec
OK
500
Time taken: 33.884 seconds, Fetched: 1 row(s)
2.2.3 Query table data
#1. Use a SELECT to fetch three rows and check the results
hive> select * from pokes limit 3;
OK
#2. There they are; done.
238 val_238
86  val_86
311 val_311
Time taken: 0.07 seconds, Fetched: 3 row(s)
hive>
2.2.4 Create a database
#1. Create a database with the statement create database kkb;
hive>  create database kkb;
OK
Time taken: 0.591 seconds
hive> show databases;
OK
default
#2. The database we just created
kkb
Time taken: 0.015 seconds, Fetched: 2 row(s)
hive>
#1. List /user/hive/warehouse/
[root@node3 ~]# hdfs dfs -ls /user/hive/warehouse/
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2019-07-12 14:21 /user/hive/warehouse/kkb.db
drwxr-xr-x   - hadoop supergroup          0 2019-07-12 14:15 /user/hive/warehouse/pokes
[root@node3 ~]# 

Looking at Hive's warehouse directory on HDFS, we can see that creating a database really just creates a directory on HDFS to hold that database's tables, and each table's data files are then stored under the corresponding database directory, as the sketch below shows.
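A quick sketch to confirm this; the table name t1 is a hypothetical choice:

# create a table inside the kkb database (t1 is an illustrative name)
[hadoop@node2 ~]$ hive -e "create table kkb.t1(id int);"
# the new table gets its own subdirectory under the database directory
[hadoop@node2 ~]$ hdfs dfs -ls /user/hive/warehouse/kkb.db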

2.3 Troubleshooting errors when starting Hive

2.3.1 Name node is in safe mode
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive
ls: cannot access /opt/spark-2.4.3-bin-hadoop2.7/lib/spark-assembly-*.jar: No such file or directory

Logging initialized using configuration in jar:file:/opt/bigdata/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: 
#1. The NameNode is in safe mode; while in safe mode HDFS rejects writes, so Hive cannot use HDFS to store data.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 0 needs additional 6 blocks to reach the threshold 0.9990 of total blocks 6.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3895)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 0 needs additional 6 blocks to reach the threshold 0.9990 of total blocks 6.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3895)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3678)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:597)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 8 more
#2. When this happens, don't panic; check the safe mode state with hadoop dfsadmin -safemode get
[hadoop@node2 apache-hive-1.2.2-bin]$ hadoop dfsadmin -safemode get
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
#3. Safe mode is on, so we have to turn it off
Safe mode is ON
#4. Leave safe mode with hadoop dfsadmin -safemode leave
[hadoop@node2 apache-hive-1.2.2-bin]$ hadoop dfsadmin -safemode leave
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
#5. Safe mode is now off
Safe mode is OFF
#6. Double-check that safe mode is off
[hadoop@node2 apache-hive-1.2.2-bin]$ hadoop dfsadmin -safemode get
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
#7. Safe mode is off
Safe mode is OFF
#8. With safe mode off, start Hive again
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive  # start hive
# Ignore the following log line; it does not affect Hive
ls: cannot access /opt/spark-2.4.3-bin-hadoop2.7/lib/spark-assembly-*.jar: No such file or directory

#9. Hive initializes its logging
Logging initialized using configuration in jar:file:/opt/bigdata/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
#10. The Hive command prompt appears: hive>
#11. List the databases again; only the default database exists.
hive> show databases;
OK
default
Time taken: 0.56 seconds, Fetched: 1 row(s)
hive>
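Incidentally, the DEPRECATED notices above mean the hadoop dfsadmin script has been superseded; the same safe mode commands in their preferred form are:

# non-deprecated form of the safe mode commands
[hadoop@node2 apache-hive-1.2.2-bin]$ hdfs dfsadmin -safemode get
[hadoop@node2 apache-hive-1.2.2-bin]$ hdfs dfsadmin -safemode leave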
2.3.2 Execution Error, return code 1
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Loading data to table default.pokes
Failed with exception Unable to move source file:/opt/bigdata/apache-hive-1.2.2-bin/examples/files/kv1.txt to destination hdfs://node1:9000/user/hive/warehouse/pokes/kv1.txt
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Loading data to table default.pokes
Failed with exception Call From node2/192.168.200.12 to node1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

Problem analysis:

1. The error messages show that HDFS is unavailable (the DataNode is down and the connection to node1:9000 is refused), which makes the Hive command fail.

Solution:

1. Restart the Hadoop cluster; see the sketch below.
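A minimal restart sketch, assuming the standard Hadoop sbin scripts and that they are run on node1, where the NameNode and ResourceManager live:

# stop YARN and HDFS, then bring them back up (run as the hadoop user on node1)
[hadoop@node1 ~]$ /opt/bigdata/hadoop-2.7.3/sbin/stop-yarn.sh
[hadoop@node1 ~]$ /opt/bigdata/hadoop-2.7.3/sbin/stop-dfs.sh
[hadoop@node1 ~]$ /opt/bigdata/hadoop-2.7.3/sbin/start-dfs.sh
[hadoop@node1 ~]$ /opt/bigdata/hadoop-2.7.3/sbin/start-yarn.sh
# verify that the DataNodes are back before retrying the hive command
[hadoop@node1 ~]$ hdfs dfsadmin -report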

3. Install MySQL

3.1 Install the wget tool

#1. Install wget
[root@node2 ~]# yum install -y wget
Loaded plugins: fastestmirror
Determining fastest mirrors
# ... log output omitted ...
Complete!
#2. Download the MySQL yum repository rpm (only about 25 KB); once it is installed, MySQL itself can be installed directly with yum.
[root@node2 ~]# wget -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
# ... log output omitted ...

100%[======================================================>] 25,548       120KB/s   in 0.2s   
# ... log output omitted ...
Total wall clock time: 3.2s
Downloaded: 1 files, 25K in 0.2s (120 KB/s)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ll
total 88764
-rw-------. 1 root root     1259 Jul  2 04:54 anaconda-ks.cfg
-rw-r--r--  1 root root 90859180 Jul 10 20:58 apache-hive-1.2.2-bin.tar.gz
-rw-r--r--  1 root root    25548 Apr  7  2017 mysql57-community-release-el7-10.noarch.rpm
#3. Install the MySQL yum repository configuration
[root@node2 ~]# yum -y install mysql57-community-release-el7-10.noarch.rpm 
Loaded plugins: fastestmirror

Installed:
  mysql57-community-release.noarch 0:el7-10                                                     
Complete!
You have new mail in /var/spool/mail/root
[root@node2 ~]# 

3.2 Install MySQL with yum

This step takes a while; once it completes, MySQL replaces the previously installed MariaDB.

[root@node2 ~]# yum -y install mysql-community-server
# This step can take some time; when it completes, MySQL replaces the previously installed MariaDB.
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.jdcloud.com
 * extras: mirror.jdcloud.com
 * updates: mirror.bit.edu.cn
mysql-connectors-community                                               | 2.5 kB  00:00:00     
# ... log output omitted ...

# A Complete! message indicates the install succeeded
Complete!
You have new mail in /var/spool/mail/root
[root@node2 ~]# 

3.3 Start MySQL

[root@node2 ~]# systemctl start  mysqld.service
[root@node2 ~]# systemctl status mysqld.service
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   # A status of active (running) means the MySQL service started successfully
   Active: active (running) since Wed 2019-07-17 15:48:44 CST; 1min 57s ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 109515 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 109430 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 109518 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─109518 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

Jul 17 15:48:31 node2 systemd[1]: Starting MySQL Server...
Jul 17 15:48:44 node2 systemd[1]: Started MySQL Server.
You have new mail in /var/spool/mail/root
[root@node2 ~]# 


3.4 Log in to MySQL

At this point MySQL is installed and running, but to log in we need its password. During installation MySQL generates a temporary password for the root user; we log in with root and that password, and then change it (for the sake of database security).

#1. Look up the temporary password in the MySQL log
[root@node2 ~]# grep "password" /var/log/mysqld.log
2019-07-17T07:48:39.339873Z 1 [Note] A temporary password is generated for 
#2. The temporary password is: n8Ae>q70PeHG
root@localhost: n8Ae>q70PeHG
You have new mail in /var/spool/mail/root
#3. Log in with mysql -uroot -p
[root@node2 ~]# mysql -uroot -p     # you will be prompted for the password
Enter password: #
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.26

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
#4. Run show databases; to list all databases
mysql> show databases;
#5. Because we have not changed the temporary password yet, MySQL insists that we reset it first
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
#6. Change the password to !Qaz123456 with the following statement
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '!Qaz123456';
#7. Done
Query OK, 0 rows affected (0.00 sec)

#8. Run show databases; again
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> 

Note that depending on the MySQL build and version, the temporary password line in mysqld.log may appear in a slightly different format (screenshots omitted).

4. Local mode

Local mode simply means that MySQL and Hive are installed on the same machine, with the locally installed MySQL replacing Derby as the store for Hive's metadata.

4.1 Copy the Hive configuration file

Open a new Xshell connection to node2, switch to the hadoop user, and change to the conf directory under the Hive installation directory.

#1. In the conf directory, copy hive-default.xml.template to hive-site.xml;
#   hive-site.xml will hold the MySQL connection settings
[hadoop@node2 conf]$ cp hive-default.xml.template hive-site.xml
#2. Edit hive-site.xml with vi and enable line numbers: type :set number and press Enter
[hadoop@node2 conf]$ vi hive-site.xml 

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
--><configuration>
  <!-- WARNING!!! This file is auto generated for documentation purposes ONLY! -->
  <!-- WARNING!!! Any changes you make to this file will be ignored by Hive.   -->
  <!-- WARNING!!! You must make your changes in hive-site.xml instead.         -->
:set number

Line numbering is now on (screenshot omitted); let's keep going. Press : again (the :set number text disappears, which means vi is ready for a new command) and type 18,3913d, then press Enter: the generated content between the <configuration> and </configuration> tags is deleted.
Now put the following inside the <configuration></configuration> tags:

<property>
  <!-- The warehouse directory where Hive stores table data. -->
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive_remote/warehouse</value>
</property>
<!-- Whether Hive connects to a remote metastore server or starts the metastore in the local client JVM; default true. This setting was removed in Hive 0.10. -->
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
</property>
<!-- JDBC connection string; default jdbc:derby:;databaseName=metastore_db;create=true -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/hive_remote?createDatabaseIfNotExist=true</value>
</property>
<!-- JDBC driver class; default org.apache.derby.jdbc.EmbeddedDriver -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<!-- Connection username; default APP -->
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<!-- Connection password; default mine -->
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>!Qaz123456</value>
</property>
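If you would rather not edit the roughly 3,900-line template in vi, here is a sketch that writes an equivalent minimal hive-site.xml directly (same values as above; hive.metastore.local is omitted since it was removed in Hive 0.10):

[hadoop@node2 conf]$ cat > hive-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_remote/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/hive_remote?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>!Qaz123456</value>
  </property>
</configuration>
EOF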

4.2 Download the MySQL JDBC driver

Download the MySQL JDBC driver (mysql-connector-java-5.1.46.jar). Then open a new Xshell connection to node2 and upload the jar to root's home directory with Xftp (screenshots omitted).
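If you prefer to skip the manual upload, the jar can also be fetched straight onto node2; the Maven Central URL below is an assumption about where this version is hosted:

# download mysql-connector-java 5.1.46 directly (URL assumed; verify before use)
[root@node2 ~]# wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.46/mysql-connector-java-5.1.46.jar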

Copy the downloaded driver jar into Hive's lib directory:

[root@node2 ~]# cp mysql-connector-java-5.1.46.jar  /opt/bigdata/apache-hive-1.2.2-bin/lib/
You have new mail in /var/spool/mail/root
[root@node2 ~]#

4.3 Start Hive

Switch to the hadoop user and start Hive.

#1. Change to the Hive installation directory and list its contents with ll
[hadoop@node2 apache-hive-1.2.2-bin]$ ll
total 84
drwxr-xr-x 3 hadoop hadoop   119 Jul 11 10:57 bin
drwxr-xr-x 2 hadoop hadoop   233 Jul 17 16:55 conf
-rw-rw-r-- 1 hadoop hadoop 21099 Jul 12 14:15 derby.log
drwxr-xr-x 4 hadoop hadoop    34 Jul 11 10:57 examples
drwxr-xr-x 7 hadoop hadoop    68 Jul 11 10:57 hcatalog
drwxr-xr-x 4 hadoop hadoop  8192 Jul 17 17:18 lib
-rwxr-xr-x 1 hadoop hadoop 24754 Mar 31  2017 LICENSE
drwxrwxr-x 5 hadoop hadoop   133 Jul 12 14:15 metastore_db
-rwxr-xr-x 1 hadoop hadoop   397 Mar 31  2017 NOTICE
-rwxr-xr-x 1 hadoop hadoop  4374 Apr  1  2017 README.txt
-rwxr-xr-x 1 hadoop hadoop  4255 Apr  1  2017 RELEASE_NOTES.txt
drwxr-xr-x 3 hadoop hadoop    23 Jul 11 10:57 scripts
#2. Delete the log file and metastore directory generated in embedded Derby mode; metadata is now managed by MySQL, so they are no longer needed
[hadoop@node2 apache-hive-1.2.2-bin]$ rm -rf derby.log 
[hadoop@node2 apache-hive-1.2.2-bin]$ rm -rf metastore_db/
#3. Start Hive again
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive
ls: cannot access /opt/spark-2.4.3-bin-hadoop2.7/lib/spark-assembly-*.jar: No such file or directory
19/07/17 17:23:15 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/opt/bigdata/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
Wed Jul 17 17:23:19 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
# ... the same SSL warning is printed several more times ...
#4. Run show databases; to list all databases; Hive's local MySQL configuration is now working.
hive> show databases;
OK
default
Time taken: 0.902 seconds, Fetched: 1 row(s)
#5. Quit Hive
hive> quit;
#6. List the Hive installation directory again: no new files were generated this time, because the metadata now lives in MySQL, as the next steps will show.
[hadoop@node2 apache-hive-1.2.2-bin]$ ll
total 60
drwxr-xr-x 3 hadoop hadoop   119 Jul 11 10:57 bin
drwxr-xr-x 2 hadoop hadoop   233 Jul 17 16:55 conf
drwxr-xr-x 4 hadoop hadoop    34 Jul 11 10:57 examples
drwxr-xr-x 7 hadoop hadoop    68 Jul 11 10:57 hcatalog
drwxr-xr-x 4 hadoop hadoop  8192 Jul 17 17:18 lib
-rwxr-xr-x 1 hadoop hadoop 24754 Mar 31  2017 LICENSE
-rwxr-xr-x 1 hadoop hadoop   397 Mar 31  2017 NOTICE
-rwxr-xr-x 1 hadoop hadoop  4374 Apr  1  2017 README.txt
-rwxr-xr-x 1 hadoop hadoop  4255 Apr  1  2017 RELEASE_NOTES.txt
drwxr-xr-x 3 hadoop hadoop    23 Jul 11 10:57 scripts
[hadoop@node2 apache-hive-1.2.2-bin]$ 

4.4 Inspect Hive's metadata

4.4.1 Metadata in MySQL

After the Hive configuration above, what has changed in MySQL? By now we have three Xshell connections to node2 open; switch to the one that is logged in to MySQL (screenshot omitted) and continue exploring.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive_remote        |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

--1. Switch to the hive_remote database; it is the database Hive created automatically, as configured in hive-site.xml.
mysql> use hive_remote;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
--2. The following are all of Hive's metadata tables (TBLS, for example, stores the names of the tables created in Hive)
mysql> show tables;
+---------------------------+
| Tables_in_hive_remote     |
+---------------------------+
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| DATABASE_PARAMS           |
| DBS                       |
| FUNCS                     |
| FUNC_RU                   |
| GLOBAL_PRIVS              |
| PARTITIONS                |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_STATS            |
| ROLES                     |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| VERSION                   |
+---------------------------+
29 rows in set (0.00 sec)

mysql> 

Switch to the Xshell window running Hive, start Hive, and create a table:

[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive

hive> show tables;
OK
Time taken: 0.672 seconds
hive> create table pokes(foo int,bar string);
OK
Time taken: 0.347 seconds
hive> 

Now that the table has been created in Hive (screenshot omitted), what has changed in MySQL? Let's keep exploring:

--1. Query the TBLS table; the pokes table we just created in Hive shows up.
mysql> select TBL_NAME from TBLS;
+----------+
| TBL_NAME |
+----------+
| pokes    |
+----------+
1 row in set (0.00 sec)

mysql> 
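To see which Hive database each table belongs to, you can join TBLS with DBS on the DB_ID key, a sketch against the standard Hive 1.x metastore schema:

# run from a shell; joins table metadata with database metadata via DB_ID
[root@node2 ~]# mysql -uroot -p -e "select d.NAME, t.TBL_NAME, t.TBL_TYPE from hive_remote.TBLS t join hive_remote.DBS d on t.DB_ID = d.DB_ID;"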


4.4.2 Hive data on HDFS
[hadoop@node2 ~]$ hdfs dfs -ls /user
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2019-07-13 15:00 /user/hadoop
drwxr-xr-x   - hadoop supergroup          0 2019-07-11 17:56 /user/hive
#1. When we run a create-table statement in the Hive CLI, the /user/hive_remote directory is also created on HDFS.
drwxr-xr-x   - hadoop supergroup          0 2019-07-17 17:42 /user/hive_remote
[hadoop@node2 ~]$ 
[hadoop@node2 ~]$ 

This completes the local-mode installation, with MySQL managing Hive's metadata.

5. Remote mode

In remote mode, MySQL and Hive are installed on different machines. The only real difference from local mode is that the MySQL host (and port, if non-default) must be changed in hive-site.xml, but two points need attention:

  1. hive-site.xml must point at the IP address and port of the host where MySQL is installed.
  2. The MySQL user must be granted permission to connect from remote hosts.

5.1 Install MySQL on the remote host

Continuing on: we already have MySQL on node2, so for remote mode we install MySQL on node3 (following section 3, "Install MySQL") and then grant the root user on node2 permission to access the MySQL instance on node3.

After installing MySQL per section 3 and changing the initial password, the database's remote access permissions must be updated ("remote" meaning the client issuing SQL statements or mysql commands is not on the machine where MySQL is installed):

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '!Qaz123456';
Query OK, 0 rows affected (0.00 sec)

--Grant the root user (password !Qaz123456) access from any host; the % wildcard matches all hosts
mysql> grant all on *.* to root@'%' identified by '!Qaz123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
--Make the grant take effect
mysql> flush privileges;
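To verify the grant, connect from node2 to the MySQL server on node3 (192.168.200.13, the address used in the configuration below):

# from node2, connect to node3's MySQL as root; enter !Qaz123456 when prompted
[root@node2 ~]# mysql -h 192.168.200.13 -uroot -p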

5.2 Update the hive-site.xml configuration

因?yàn)槲覀冎皇歉淖兞薽ysql的安裝位置,hive的安裝位置不需要改變,我們只需要將hive安裝目錄conf目錄下的hive-site.xml文件中的mysql的連接ip修改成node3的ip即可,這樣就完成了遠(yuǎn)程模式的配置.

<property>  
  <name>hive.metastore.warehouse.dir</name>  
  <value>/user/hive_remote/warehouse</value>  
</property>  

<property>  
  <name>hive.metastore.local</name>  
  <value>true</value>  
</property>  

<property>  
  <name>javax.jdo.option.ConnectionURL</name>  
  <value>jdbc:mysql://192.168.200.13/hive_remote?createDatabaseIfNotExist=true</value>  
</property>  

<property>  
  <name>javax.jdo.option.ConnectionDriverName</name>  
  <value>com.mysql.jdbc.Driver</value>  
</property>  

<property>  
  <name>javax.jdo.option.ConnectionUserName</name>  
  <value>root</value>  
</property>  

<property>  
  <name>javax.jdo.option.ConnectionPassword</name>  
  <value>!Qaz123456</value>  
</property>  

5.3 Start Hive

Since only MySQL's location changed, start Hive in exactly the same way as before.

# start hive
[hadoop@node2 apache-hive-1.2.2-bin]$ ./bin/hive

# list the Hive tables
hive> show tables;
OK
Time taken: 0.713 seconds
hive> 

Checking the MySQL instance on node3, we find that hive_remote, the database holding Hive's metadata, has been created (screenshot omitted).

In the same way as before, create a table in Hive and then look for the table's directory on HDFS (screenshot omitted):

 create table pokes(foo int,bar string);


That wraps up installing and configuring Hive. Hopefully the content above is of some help and you can learn more from it; if you found the article worthwhile, share it so more people can see it.
