【Hadoop】The Number of Map and Reduce Tasks

發(fā)布時間:2020-08-05 23:17:51 來源:網(wǎng)絡(luò) 閱讀:1363 作者:符敦輝 欄目:大數(shù)據(jù)

In Hadoop, when a job does not set these values explicitly, the number of map tasks it runs is determined by the size of the job's input data (the exact calculation is explained below), while the number of reduce tasks defaults to 1. Why 1? Because the number of output files a job produces is determined by the number of reduce tasks, and a job's result is normally written to a single output file, so the default is 1. The question, then, is how to adjust the number of map and reduce tasks if we want to speed up job execution.

Before going into the details, let's first look at how the official Hadoop documentation describes this.

Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
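As a quick check of the 82k figure quoted above: 10 TB of input at 128 MB per split gives 10 × 1024 × 1024 MB ÷ 128 MB = 81,920 splits, i.e. roughly 82k map tasks.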

Number of Reduces
The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).


The documentation above explains how the number of map and reduce tasks is determined. The number of maps is obtained by dividing the amount of input data read by the job by the block size (64 MB by default). The number of reduces defaults to 1, but there is a recommended range that depends on the number of nodes: roughly number of nodes × the maximum number of reduce tasks per TaskTracker (2 by default) × a factor between 0.95 and 1.75. Note that this figure is only a recommended upper bound; the number actually used at runtime still depends on your specific job settings.
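As a concrete example with an assumed cluster size: on 10 nodes where each TaskTracker allows at most 2 reduce tasks (the default), the suggested range is 10 × 2 × 0.95 = 19 up to 10 × 2 × 1.75 = 35 reduce tasks.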


If you want to set the number of map and reduce tasks for a job, you can use the following methods.

map: To change the number of map tasks, you can adjust the block size in the configuration file to increase or decrease the number of maps, or call JobConf's conf.setNumMapTasks(int num). However, even if you set a number this way, the number used at runtime will never be smaller than the number of splits actually produced from the input. In other words, if your program sets the number of maps to 2 but the input data is split into 3 pieces, the job will run with 3 map tasks rather than the 2 you set.
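A minimal sketch of these two knobs, using the old org.apache.hadoop.mapred API that the quoted documentation refers to (the class name, job name, paths and split size below are illustrative placeholders, not values from this article):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MapCountExample {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MapCountExample.class);
            conf.setJobName("map-count-example");   // placeholder job name

            // Hint to the InputFormat: ask for roughly 100 map tasks.
            // The actual count never drops below the number of input splits.
            conf.setNumMapTasks(100);

            // Alternatively, raise the lower bound on split size (here 128 MB)
            // so that fewer, larger splits (and therefore fewer maps) are created.
            conf.set("mapred.min.split.size", String.valueOf(128L * 1024 * 1024));

            // Mapper and reducer are left at the identity defaults for brevity.
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }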

reduce: To change the number of reduce tasks, you can do it in either of the following ways:

In your program, you can declare a Job object and call job.setNumReduceTasks(tasks), or set it on the configuration with conf.setStrings("mapred.reduce.tasks", values);
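A minimal sketch combining both approaches, assuming the newer org.apache.hadoop.mapreduce API in Hadoop 2.x style (the class name, job name and paths are illustrative placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ReduceCountExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Option 1: set the property on the configuration before the Job is created.
            conf.setStrings("mapred.reduce.tasks", "10");

            Job job = Job.getInstance(conf, "reduce-count-example");  // placeholder name
            job.setJarByClass(ReduceCountExample.class);
            // Option 2: set it on the Job object (this overrides the property above).
            job.setNumReduceTasks(10);

            // Mapper and reducer are left at the identity defaults for brevity.
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }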

When the job is submitted from the command line, you can instead pass the values as runtime parameters:

bin/hadoop jar examples.jar job_name -Dmapred.map.tasks=nums -Dmapred.reduce.tasks=nums INPUT OUTPUT
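Note that the -D options are only picked up automatically when the job's driver parses them with GenericOptionsParser, typically by implementing Tool and being launched through ToolRunner.run(). As a hypothetical concrete invocation (jar name, class name and paths are placeholders):

    bin/hadoop jar my-job.jar com.example.MyJob -Dmapred.map.tasks=20 -Dmapred.reduce.tasks=10 /user/input /user/output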
