This article walks through local MR mode in apache-hive-1.2.1 with examples: when Hive falls back to local execution, and which parameters control it.
Many of the SQL statements run in Hive are small: little data, little computation. Executing such small queries in full distributed mode is not worth the cost, because the query itself may take only 10 s to run while generating and scheduling the distributed job can take a minute. For such small jobs Hive can use local MR mode, i.e. local execution: the input data is pulled back to the client and the query is executed there.
Three parameters decide whether local mode is used:
hive.exec.mode.local.auto=true — whether local MR mode may be selected automatically
hive.exec.mode.local.auto.input.files.max=4 — maximum number of input files (default 4)
hive.exec.mode.local.auto.inputbytes.max=134217728 — maximum total input size (default 128 MB)
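In a Hive session these can be set per query, as in the transcripts later in this article. The values below simply restate the 1.2.1 defaults described above (a config sketch, not output from a live cluster):

```sql
-- Enable automatic local-mode selection (off by default)
set hive.exec.mode.local.auto=true;
-- Limits that a query must stay within to qualify for local execution
-- (these are the defaults; raise or lower them as needed)
set hive.exec.mode.local.auto.input.files.max=4;
set hive.exec.mode.local.auto.inputbytes.max=134217728;  -- 128 MB
```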
Note:
hive.exec.mode.local.auto is the precondition: only when it is set to true can local MR mode be enabled at all.
hive.exec.mode.local.auto.input.files.max and hive.exec.mode.local.auto.inputbytes.max are combined with AND: local MR is used only when both limits are satisfied.
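The decision rule above can be summarized as a small sketch. This is a simplified model of the check, not Hive's actual implementation (which lives in the query planner and considers further details); the function name and parameters here are illustrative:

```python
# Simplified model of Hive's local-mode eligibility check.
# auto must be on, AND both the file-count and byte-size limits must hold.
def can_run_local(auto_enabled, num_input_files, input_bytes,
                  files_max=4, bytes_max=128 * 1024 * 1024):
    if not auto_enabled:
        # hive.exec.mode.local.auto=false rules out local mode entirely
        return False
    # AND relationship between the two limits
    return num_input_files <= files_max and input_bytes <= bytes_max
```

This reproduces the walkthrough below: t_2 (2 files) qualifies, t_1 (5 files) does not until files_max is raised to 5.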
Test tables:
t_1 ==> 5 files
t_2 ==> 2 files
With local mode disabled, even the small query on t_2 runs as a distributed job:

hive> set hive.exec.mode.local.auto=false;
hive> select * from t_2 order by id;
Query ID = hadoop_20160125132157_d767beb0-f674-4962-ac3c-8fbdd2949d01
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1453706740954_0006, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0006/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0006
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:22:19,210 Stage-1 map = 0%, reduce = 0%
2016-01-25 13:22:26,497 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.47 sec
2016-01-25 13:22:40,207 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.68 sec
MapReduce Total cumulative CPU time: 3 seconds 680 msec
Ended Job = job_1453706740954_0006
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.68 sec HDFS Read: 5465 HDFS Write: 32 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 680 msec
OK
... ...
With local mode enabled, the same query on t_2 (2 files) is automatically selected for local execution:

hive> set hive.exec.mode.local.auto=true;
hive> select * from t_2 order by id;
Automatically selecting local only mode for query    ==> local mode was selected
Query ID = hadoop_20160125132322_9649b904-ad87-47fa-89ad-5e5f67315ac8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2016-01-25 13:23:27,192 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local1850780899_0002
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1464 HDFS Write: 1618252652 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
... ...

On t_1, however, the 5 input files exceed the default limit of 4, so Hive still falls back to distributed execution:

hive> set hive.exec.mode.local.auto=true;
hive> select * from t_1 order by id;
Query ID = hadoop_20160125132411_3ecd7ee9-8ccb-4bcc-8582-6d797c13babd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Cannot run job locally: Number of Input Files (= 5) is larger than hive.exec.mode.local.auto.input.files.max(= 4)    ==> 5 > 4, so distributed mode is still used
Starting Job = job_1453706740954_0007, Tracking URL = http://hftest0001.webex.com:8088/proxy/application_1453706740954_0007/
Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1453706740954_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-01-25 13:24:38,775 Stage-1 map = 0%, reduce = 0%
2016-01-25 13:24:52,115 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.55 sec
2016-01-25 13:24:59,548 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.84 sec
MapReduce Total cumulative CPU time: 3 seconds 840 msec
Ended Job = job_1453706740954_0007
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.84 sec HDFS Read: 5814 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 840 msec
OK
... ...

After raising the file-count limit to 5, the same query on t_1 runs locally:

hive> set hive.exec.mode.local.auto=true;
hive> set hive.exec.mode.local.auto.input.files.max=5;    ==> raise the input-file limit to 5
hive> select * from t_1 order by id;
Automatically selecting local only mode for query    ==> local mode was selected
Query ID = hadoop_20160125132558_db2f4fca-f6bf-4b91-9569-c779a3b13386
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2016-01-25 13:26:03,232 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local264155444_0003
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 1920 HDFS Write: 1887961792 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK