Here I'd like to share the basic operations on databases, tables, columns, and interactive queries in Hive. I hope you get something out of this article; let's dig in.
Basic CLI and interactive-query settings
(1) To print column headers with query results, you need to set explicitly:
set hive.cli.print.header=true;
(2) Everything after "--" is treated as a comment, but the CLI does not parse comments. A script that contains comments can only be executed this way:
hive -f script_name
(3) -e takes a quoted Hive statement or query; -S suppresses extraneous output:
hive -S -e "select * FROM mytable LIMIT 3" > /tmp/myquery
(4) A query that sweeps every partition turns into one enormous MapReduce job when you have many partitions and directories, so it is advisable to use strict mode: once a table is partitioned, queries must carry a WHERE clause on the partition column (a quick illustration follows the setting below).
hive> set hive.mapred.mode=strict;
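A minimal illustration, reusing the ext_trackflow table (partitioned by statDate) that appears later in this article:
-- rejected under strict mode: no partition predicate
SELECT * FROM ext_trackflow;
-- accepted: the partition column is filtered in the WHERE clause
SELECT * FROM ext_trackflow WHERE statDate = '20140529';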
(5) Show the current database in the prompt:
set hive.cli.print.current.db=true;
(6) Set the Hive job priority:
set mapred.job.priority=VERY_HIGH | HIGH | NORMAL | LOW | VERY_LOW
(VERY_LOW=1,LOW=2500,NORMAL=5000,HIGH=7500,VERY_HIGH=10000)
set mapred.job.map.capacity=M runs at most M map tasks concurrently
set mapred.job.reduce.capacity=N runs at most N reduce tasks concurrently
(7) The number of mappers in a Hive job is determined by the following parameters:
mapred.min.split.size, mapred.max.split.size, dfs.block.size
splitSize = Math.max(minSize, Math.min(maxSize, blockSize));
The number of maps also depends on the number of input files: with 2 input files, 2 maps are produced even if their combined size is below the block size. mapred.reduce.tasks sets the number of reducers. A worked example of the split-size formula follows.
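For instance, under the illustrative (assumed) settings mapred.min.split.size = 1 byte, mapred.max.split.size = 256 MB, and dfs.block.size = 64 MB:
splitSize = max(1 B, min(256 MB, 64 MB)) = 64 MB
so a single 1 GB input file is carved into roughly 1 GB / 64 MB = 16 map tasks.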
Table operations
(1) List all partitions of a table:
SHOW PARTITIONS ext_trackflow
To query for one specific partition:
SHOW PARTITIONS ext_trackflow PARTITION(statDate='20140529');
(2) View the complete, formatted table schema:
desc formatted ext_trackflow;
DESCRIBE EXTENDED ext_trackflow;
(3) Drop a partition: the partition's metadata and data are deleted together, but for an external table only the metadata is removed:
ALTER TABLE ext_trackflow DROP PARTITION (statDate='20140529');
(4) Check whether a table is external or managed (internal):
DESCRIBE EXTENDED tablename
(5) Copy a table schema:
CREATE EXTERNAL TABLE IF NOT EXISTS mydb.employees3
LIKE mydb.employees
LOCATION '/path/to/data';
Note: if you omit the EXTERNAL keyword, the new table is external or managed according to whatever employees is; if you include it, the new table is always external, and a LOCATION should be supplied as well.
(6) Load data into a partition of a managed (internal) table; Hive creates the directory and copies the data into the partition:
LOAD DATA LOCAL INPATH '${env:HOME}/california-employees'
INTO TABLE employees
PARTITION (country = 'US', state = 'CA');
(7) Add data to a partition of an external table:
ALTER TABLE log_messages ADD IF NOT EXISTS PARTITION(year = 2012, month = 1, day = 2)
LOCATION 'hdfs://master_server/data/log_messages/2012/01/02';
Note: Hive does not check whether the partition's directory exists or holds any data; if it does not, queries will simply come back empty.
(8) Altering a table: you can alter a table at any time, but you are only ever changing its metadata; the actual data is unaffected. For example, to point a partition at a new location (this command does not delete the old data):
ALTER TABLE log_messages PARTITION(year = 2011, month = 12, day = 2)
SET LOCATION 's3n://ourbucket/logs/2011/01/02';
(9) Change table properties:
ALTER TABLE log_messages SET TBLPROPERTIES (
'notes' = 'The process id is no longer captured; this column is always NULL'
);
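To inspect a table's current properties afterwards, reasonably recent Hive versions provide:
SHOW TBLPROPERTIES log_messages;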
(10) Change storage properties:
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1)
SET FILEFORMAT SEQUENCEFILE;
Note: if the table is partitioned, the PARTITION clause is required.
(11) Specify a new SerDe:
ALTER TABLE table_using_JSON_storage
SET SERDE 'com.example.JSONSerDe'
WITH SERDEPROPERTIES (
'prop1' = 'value1',
'prop2' = 'value2'
);
Note: SERDEPROPERTIES is a set of key/value pairs handed to the SerDe; both names and values are strings, which makes it a convenient place to configure the SerDe and document what format it applies to.
To set properties on the SerDe the table is already using:
ALTER TABLE table_using_JSON_storage
SET SERDEPROPERTIES (
'prop3' = 'value3',
'prop4' = 'value4'
);
(12) Change the bucketing layout:
ALTER TABLE stocks
CLUSTERED BY (exchange, symbol)
SORTED BY (symbol)
INTO 48 BUCKETS;
(13) More involved ALTER TABLE statements: ALTER TABLE … TOUCH adds a hook point for various operations:
ALTER TABLE log_messages TOUCH
PARTITION(year = 2012, month = 1, day = 1);
The typical use case is to fire the hook when a partition has been modified:
hive -e 'ALTER TABLE log_messages TOUCH PARTITION(year = 2012, month = 1, day = 1);'
(14) ALTER TABLE … ARCHIVE PARTITION packs a partition's files into a Hadoop archive file (HAR):
ALTER TABLE log_messages ARCHIVE
PARTITION(year = 2012, month = 1, day = 1);    (usable only on partitioned tables)
(15) Protect a partition from being dropped or queried:
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1) ENABLE NO_DROP;
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1) ENABLE OFFLINE;
Note: the counterpart of ENABLE is DISABLE; neither can be applied to an unpartitioned table.
(16) List tables whose names match a regular expression:
hive> SHOW TABLES '.*s';
(17) Convert between external and managed tables:
alter table tablePartition set TBLPROPERTIES ('EXTERNAL'='TRUE');    -- managed -> external
alter table tablePartition set TBLPROPERTIES ('EXTERNAL'='FALSE');    -- external -> managed
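To confirm the conversion took effect, the formatted description should report the new table type:
desc formatted tablePartition;    -- look for Table Type: EXTERNAL_TABLE or MANAGED_TABLE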
(18) Partitions vs. buckets:
partition (files are stored one directory per partition), for example:
CREATE EXTERNAL TABLE table1 (
    column1 STRING,
    column2 STRING,
    column3 STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20090105);
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20090102);
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20081231);
bucket (rows are hashed on the specified column; each bucket maps to one file):
CREATE TABLE VT_NEW_DATA (
    column1 STRING,
    column2 STRING,
    column3 STRING
)
CLUSTERED BY (column1) SORTED BY (column1) INTO 48 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
STORED AS RCFILE;
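One detail worth adding (a standard Hive setting, not from the original text; from Hive 2.0 on it is enforced automatically): when populating a bucketed table with INSERT, turn on bucketing enforcement first so Hive launches one reducer per bucket. source_table below is a hypothetical staging table:
set hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE VT_NEW_DATA
SELECT column1, column2, column3 FROM source_table;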
Column operations
(1) Rename a column and change its position, type, and comment:
ALTER TABLE log_messages
CHANGE COLUMN hms hours_minutes_seconds INT
COMMENT 'The hours, minutes, and seconds part of the timestamp'
AFTER severity;
The rename syntax is CHANGE COLUMN old_name new_name type.
The COMMENT is optional; you can use it to annotate the column.
AFTER moves the column to a new position.
Only the metadata is modified; the underlying data is not touched.
(2) Add new columns:
ALTER TABLE log_messages ADD COLUMNS (
app_name STRING COMMENT 'Application name',
session_id BIGINT COMMENT 'The current session id');
(3) Remove and replace columns; use with extreme caution!
ALTER TABLE table_name ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
ADD appends a new column after all existing columns (but before any partition columns).
REPLACE replaces every column in the table.
REPLACE COLUMNS removes all existing columns and adds the new set of columns.
REPLACE COLUMNS can also be used to drop columns. For example:
"ALTER TABLE test_change REPLACE COLUMNS (a int, b int);" will remove column `c' from test_change's schema. Note that this does not delete underlying data, it just changes the schema.
(4) REGEX Column Specification
A SELECT statement can use a regular expression to pick columns. The following query selects every column except ds and hr:
SELECT `(ds|hr)?+.+` FROM test
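One caveat worth noting (behaviour of later Hive releases, to the best of my knowledge): from Hive 0.13 on, back-quoted names are treated as literal identifiers by default, so the regex form requires:
set hive.support.quoted.identifiers=none;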
Variables
hive> set;
…
hive> set -v;
… even more output!…
'set' prints all variables in the hivevar, hiveconf, system, and env namespaces.
'set -v' additionally prints every variable Hadoop defines.
hive> set hivevar:foo=hello;
hive> set hivevar:foo;
hivevar:foo=hello
Using a variable:
hive> create table toss1(i int, ${hivevar:foo} string);
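After substitution, the statement Hive actually parses is: create table toss1(i int, hello string). The variable's value is spliced in as raw text before the DDL is compiled.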
Database operations
-- Create a database
create database ecdata WITH DBPROPERTIES ('creator' = 'June', 'date' = '2014-06-01');
-- or use the COMMENT keyword instead
-- Describe a database
DESCRIBE DATABASE ecdata;
DESCRIBE DATABASE EXTENDED ecdata;
-- Switch databases
use ecdata;
-- Drop a table
drop table ext_trackflow;
-- Create a table
create EXTERNAL table IF NOT EXISTS ext_trackflow (
    cookieId string COMMENT '05dvOVC6Il6INhYABV6LAg==',
    cate1 string COMMENT '4',
    area1 string COMMENT '102',
    url string COMMENT 'http://cd.ooxx.com/jinshan-mingzhan-1020',
    trackTime string COMMENT '2014-05-25 23:03:36',
    trackURLMap map<string,string> COMMENT '{"area":"102","cate":"4,29,14052"}'
)
PARTITIONED BY (statDate STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
COLLECTION ITEMS TERMINATED BY '\002'
MAP KEYS TERMINATED BY '\003'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/DataWarehouse/ods/TrackFlowTable';
-- Add a partition
ALTER TABLE ext_trackflow ADD PARTITION (statDate='20140525')
LOCATION '/DataWarehouse/ods/TrackFlowTable/20140525';
-- Create yesterday's partition every day
yesterday=`date -d '1 days ago' +'%Y%m%d'`
hive -e "use ecdata; ALTER TABLE ext_trackflow ADD PARTITION (statDate='$yesterday') LOCATION '/DataWarehouse/ods/TrackFlowTable/$yesterday';"
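To run the partition-creation snippet above every day, you could wrap it in a shell script and schedule it with cron; the paths below are hypothetical:
30 0 * * * /home/work/bin/add_partition.sh >> /home/work/log/add_partition.log 2>&1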
Queries
(1) Count page views (pv) by page type:
select pageType, count(pageType) from ext_trackflow where statDate = '20140521' group by pageType;
Note: a plain SELECT normally scans the whole table; create the table with a PARTITIONED BY clause and queries can exploit partition pruning (input pruning).
In Hive's current implementation, pruning only applies when the partition predicate appears in the WHERE clause closest to the FROM clause.
(2) Two ways to export query results to the local file system:
INSERT OVERWRITE LOCAL DIRECTORY '/home/jun06/tmp/110.112.113.115' select area1, count(area1) from ext_trackflow where statDate = '20140521' group by area1 having count(area1) > 1000;
hive -e "use ecdata; select area1, count(area1) from ext_trackflow where statDate = '20140521' group by area1 having count(area1) > 1000;" > a.txt
(3) Querying and using the map data type:
select trackURLMap, extField, unitParamMap, queryParamMap from ext_trackflow where statDate = '20140525' and size(unitParamMap)!=0 limit 10;
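Individual values inside a Hive map column are read with bracket syntax, e.g.:
select trackURLMap['area'], trackURLMap['cate'] from ext_trackflow where statDate = '20140525' limit 10;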
(4) The following query returns the 5 sales representatives with the largest sales:
SET mapred.reduce.tasks = 1;
SELECT * FROM test SORT BY amount DESC LIMIT 5;
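Since SORT BY only orders rows within each reducer, the mapred.reduce.tasks = 1 setting is what makes this a total order. The same result can be obtained with ORDER BY, which Hive always funnels through a single reducer:
SELECT * FROM test ORDER BY amount DESC LIMIT 5;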
(5) Insert data from one table into several tables or paths in a single pass:
FROM src
INSERT OVERWRITE TABLE dest1 SELECT src.* WHERE src.key < 100
INSERT OVERWRITE TABLE dest2 SELECT src.key, src.value WHERE src.key >= 100 and src.key < 200
INSERT OVERWRITE TABLE dest3 PARTITION(ds='2008-04-08', hr='12') SELECT src.key WHERE src.key >= 200 and src.key < 300
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/dest4.out' SELECT src.value WHERE src.key >= 300;
(6) Stream rows through an external program and insert its output directly into a table:
hive> FROM invites a INSERT OVERWRITE TABLE events SELECT TRANSFORM(a.foo, a.bar) AS (oof, rab) USING '/bin/cat' WHERE a.ds > '2008-08-09';
(7) Hive supports only equality joins (equi-joins), outer joins, and left semi joins. It supports no non-equality joins, because they are very hard to translate into map/reduce jobs.
The LEFT, RIGHT, and FULL OUTER keywords control how null (unmatched) records are handled in a join.
LEFT SEMI JOIN is a more efficient implementation of an IN/EXISTS subquery.
During a join, each map/reduce stage works like this: the reducer buffers the records of every table in the join sequence except the last one, then streams the last table through while serializing results to the file system.
In practice, you should therefore write the largest table last (see the hint sketch right below for when you cannot).
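When the largest table cannot be written last, HiveQL's STREAMTABLE hint tells Hive which table to stream instead of buffering:
SELECT /*+ STREAMTABLE(a) */ a.val, b.val, c.val
FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2);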
(8) A few points to watch in join queries:
only equality joins are supported, e.g.
SELECT a.* FROM a JOIN b ON (a.id = b.id)
SELECT a.* FROM a JOIN b
ON (a.id = b.id AND a.department = b.department)
More than two tables can be joined, for example:
SELECT a.val, b.val, c.val FROM a JOIN b
ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
Note: if all the tables in a join share the same join key, the join is compiled into a single map/reduce job (see the contrasting example below).
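The query above joins on two different keys of b (b.key1 and b.key2), so it compiles into two map/reduce jobs; keeping a single shared key holds it to one:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1);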
(9) LEFT, RIGHT, and FULL OUTER:
SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)
If you want to restrict a join's output, put the filter in the WHERE clause, or else in the join's ON clause.
The case that trips people up is partitioned tables:
SELECT c.val, d.val FROM c LEFT OUTER JOIN d ON (c.key=d.key)
WHERE c.ds='2010-07-07' AND d.ds='2010-07-07'
When no row in d matches a row from c, every column of d comes back NULL, including d.ds; the WHERE clause therefore throws away exactly the c rows that have no match in d, the rows LEFT OUTER was supposed to preserve, rendering the outer join meaningless.
The fix is to move the partition predicates into the ON clause:
SELECT c.val, d.val FROM c LEFT OUTER JOIN d
ON (c.key=d.key AND d.ds='2009-07-07' AND c.ds='2009-07-07')
(10) LEFT SEMI JOIN
The restriction on LEFT SEMI JOIN is that the right-hand table may only be referenced inside the ON clause; filtering it in the WHERE clause, the SELECT clause, or anywhere else is not allowed.
SELECT a.key, a.value
FROM a
WHERE a.key in
(SELECT b.key
FROM B);
can be rewritten as:
SELECT a.key, a.val
FROM a LEFT SEMI JOIN b on (a.key = b.key)
(11) Habits to change when moving from SQL to HiveQL
① Hive does not support the traditional implicit (comma-separated) join syntax.
In SQL, an inner join of two tables can be written as:
select * from dual a, dual b where a.key = b.key;
In Hive it must be:
select * from dual a join dual b on a.key = b.key;
rather than the traditional form:
SELECT t1.a1 as c1, t2.b1 as c2
FROM t1, t2
WHERE t1.a2 = t2.b2
② The semicolon character
A semicolon terminates a statement in SQL and in HiveQL alike, but HiveQL's recognition of semicolons is less clever: it trips over semicolons inside string literals. For example:
select concat(key,concat(';',key)) from dual;
fails to parse, with HiveQL reporting:
FAILED: Parse Error: line 0:-1 mismatched input '<EOF>' expecting ) in function specification
The workaround is to escape the semicolon with its octal ASCII code, so the statement above should be written as:
select concat(key,concat('\073',key)) from dual;
Having read this far, you should have a working grasp of the basic operations on Hive databases, tables, columns, and interactive queries. Thanks for reading!