
# Hive Common Query Commands and Usage


This article walks through common Hive query commands and how to use them, working through a web-server access log as the running example.

1. Upload the log file to HDFS
```bash
hdfs dfs -mkdir /user/hive/warehouse/original_access_logs_0104

hdfs dfs -put access.log /user/hive/warehouse/original_access_logs_0104
```

Check that the file was copied correctly:
```bash
hdfs dfs -ls /user/hive/warehouse/original_access_logs_0104
```

2. Create a Hive external table over the log file
```sql
DROP TABLE IF EXISTS original_access_logs;
CREATE EXTERNAL TABLE original_access_logs (
    ip STRING,
    request_time STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'input.regex' = '([^ ]*) - - \\[([^\\]]*)\\] "([^\ ]*) ([^\ ]*) ([^\ ]*)" (\\d*) (\\d*) "([^"]*)" "([^"]*)"',
    'output.format.string' = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s")
LOCATION '/user/hive/warehouse/original_access_logs_0104';
```
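Before moving on, a quick sanity check that the SerDe parses the raw lines as expected; if `input.regex` does not match the log format, the columns come back as NULL (a minimal sketch):

```sql
-- Expect parsed values rather than NULLs in every column
SELECT ip, request_time, method, url, code1
FROM original_access_logs
LIMIT 5;
```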

3. Convert the text table to a Parquet table
```sql
DROP TABLE IF EXISTS pq_access_logs;
CREATE TABLE pq_access_logs (
    ip STRING,
    request_time STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING,
    `timestamp` int)
STORED AS PARQUET;

-- The INSERT below reads original_access_logs through the contrib RegexSerDe,
-- so if that class is not on the classpath, add the hive-contrib JAR first, e.g.:
-- ADD JAR /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar;
-- ADD JAR /opt/cloudera/parcels/CDH/lib/hive/contrib/hive-contrib-2.1.1-cdh7.3.2.jar;

INSERT OVERWRITE TABLE pq_access_logs
SELECT 
  ip,
  from_unixtime(unix_timestamp(request_time, 'dd/MMM/yyyy:HH:mm:ss z'), 'yyyy-MM-dd HH:mm:ss z'),
  method,
  url,
  http_version,
  code1,
  code2,
  dash,
  user_agent,
  unix_timestamp(request_time, 'dd/MMM/yyyy:HH:mm:ss z')
FROM original_access_logs;
```
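A quick way to verify the conversion is to compare row counts between the two tables (a minimal sketch):

```sql
-- Both counts should match if every row was converted
SELECT COUNT(*) FROM original_access_logs;
SELECT COUNT(*) FROM pq_access_logs;
```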

4. Find the 5 most frequent client IPs
```sql
SELECT ip, COUNT(*) AS cnt
FROM pq_access_logs
GROUP BY ip
ORDER BY cnt DESC
LIMIT 5;
```

Note how Hive splits the query into one or more MapReduce jobs and executes them in stages.

How do you view the logs of a Hive job? A sketch follows below.
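A minimal sketch covering both points, assuming YARN log aggregation is enabled; the application ID is a placeholder to be replaced with the one printed in the query's console output:

```bash
# Show the execution plan, including the MapReduce stages the query compiles to
hive -e "EXPLAIN SELECT ip, COUNT(*) AS cnt FROM pq_access_logs GROUP BY ip ORDER BY cnt DESC LIMIT 5;"

# Fetch the aggregated logs of a finished job; replace the placeholder
# application ID with the one printed while the query ran
yarn logs -applicationId application_1234567890123_0001 | less
```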

## Demo - Partitioned Tables

### Steps

1. Create a partitioned table
```sql
DROP TABLE IF EXISTS partitioned_access_logs;
CREATE EXTERNAL TABLE partitioned_access_logs (
    ip STRING,
    request_time STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING,
    `timestamp` int)
PARTITIONED BY (request_date STRING)
STORED AS PARQUET
;
```

2. Load the log table into the partitioned table, using a dynamic partition insert

```sql
set hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE partitioned_access_logs 
PARTITION (request_date)
SELECT ip, request_time, method, url, http_version, code1, code2, dash, user_agent, `timestamp`, to_date(request_time) as request_date
FROM pq_access_logs
;
```
Default partition: `__HIVE_DEFAULT_PARTITION__`; records whose partition value does not resolve are placed in this partition.
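To see how the rows were distributed, list the partitions (a minimal sketch; the date literal is a placeholder, pick one from the `SHOW PARTITIONS` output):

```sql
SHOW PARTITIONS partitioned_access_logs;

-- Count rows in a single partition; '2014-01-10' is a placeholder date
SELECT COUNT(*) FROM partitioned_access_logs WHERE request_date = '2014-01-10';
```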


3. Inspect the partitioned table's directory structure
```bash
hdfs dfs -ls /user/hive/warehouse/partitioned_access_logs
```

## Demo - Bucketed Tables

### Steps

1. Create a bucketed log table
    Bucket by the first octet of the IP address, then sort by request time

```sql
DROP TABLE IF EXISTS bucketed_access_logs;
CREATE TABLE bucketed_access_logs (
    first_ip_addr INT,
    request_time STRING)
CLUSTERED BY (first_ip_addr) 
SORTED BY (request_time) 
INTO 10 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
;

-- If DISTRIBUTE BY and SORT BY are omitted from the INSERT below, the following
-- parameters must be set instead (unnecessary since Hive 2.0, where they default to true):
SET hive.enforce.sorting = true;
SET hive.enforce.bucketing = true;

INSERT OVERWRITE TABLE bucketed_access_logs 
SELECT cast(split(ip, '\\.')[0] as int) as first_ip_addr, request_time
FROM pq_access_logs
--DISTRIBUTE BY first_ip_addr
--SORT BY request_time
;
```

2. 觀察分桶表的物理存儲結構
```bash
hdfs dfs -ls /user/hive/warehouse/bucketed_access_logs/
# Guess: how many files are there?

hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000000_0 | head

hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000001_0 | head

hdfs dfs -cat /user/hive/warehouse/bucketed_access_logs/000009_0 | head

# Can you work out the bucketing rule?

```
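The rule follows from how Hive assigns buckets: a row goes to bucket `pmod(hash(col), num_buckets)`, and for an integer column Hive's default hash is the value itself, so each row should land in bucket file `pmod(first_ip_addr, 10)`. A minimal sketch to check this against the data:

```sql
-- expected_bucket should match the numeric suffix of the bucket file holding the row
SELECT first_ip_addr,
       pmod(first_ip_addr, 10) AS expected_bucket
FROM bucketed_access_logs
LIMIT 10;
```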

## Demo - ORC Table Compression

1. Create a new ORC table for the access logs, enabling compression when inserting data
```sql
DROP TABLE IF EXISTS compressed_access_logs;
CREATE TABLE compressed_access_logs (
    ip STRING,
    request_time STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING,
    `timestamp` int)
STORED AS ORC
TBLPROPERTIES ("orc.compression"="SNAPPY");

--SET hive.exec.compress.intermediate=true;
--SET mapreduce.map.output.compress=true;

INSERT OVERWRITE TABLE compressed_access_logs
SELECT * FROM pq_access_logs;

DESCRIBE FORMATTED compressed_access_logs;
```
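To confirm the codec actually written to the files, Hive's ORC file dump utility prints the file metadata (a minimal sketch; the file name is a placeholder taken from the table's warehouse directory):

```bash
# Prints ORC metadata for one data file, including a "Compression: SNAPPY" line
hive --orcfiledump /user/hive/warehouse/compressed_access_logs/000000_0
```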

2. Compare with the original, uncompressed Parquet table

Sizes

The original text log is 38 MB.

```bash
hdfs dfs -ls /user/hive/warehouse/pq_access_logs/
```
Parquet, uncompressed: 4,158,592 bytes (4.1 MB)

```bash
hdfs dfs -ls /user/hive/warehouse/compressed_access_logs/
```
ORC with Snappy compression: 1,074,404 bytes (1.0 MB)

Compression ratio: roughly 4:1 (uncompressed Parquet to Snappy-compressed ORC).
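For a direct side-by-side check, `hdfs dfs -du` sums each table directory in one command (a minimal sketch):

```bash
# -s sums each directory; -h prints human-readable sizes
hdfs dfs -du -s -h /user/hive/warehouse/pq_access_logs /user/hive/warehouse/compressed_access_logs
```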

Note: enabling compression is recommended for data backups. For read-heavy workloads, compression does not necessarily improve query performance, since the CPU cost of decompression can offset the I/O savings.

