Lu Chunli's work notes: who says programmers can't have a literary streak?
Hadoop is the storage and compute platform for big data processing: HDFS provides the data storage, while MapReduce performs the computation.
MapReduce already encapsulates the distributed-computing machinery internally. To develop business logic, a user only needs to extend the Mapper and Reducer classes and override their map() and reduce() methods.
1. The Map phase
Reads data from HDFS and normalizes the raw records, converting them into a form convenient for subsequent processing.
2. The Reduce phase
Receives the data emitted by the map phase, aggregates it, and writes the result back to HDFS.
Both map() and reduce() receive their input as <key, value> pairs.
In Hadoop 1, the runtime daemons are the JobTracker and TaskTrackers.
In Hadoop 2, YARN provides the ResourceManager and NodeManagers.
Mapper side
// The Mapper class provided by Hadoop; a custom Mapper must extend this class.
package org.apache.hadoop.mapreduce;

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

  /**
   * Called once at the beginning of the task.
   */
  protected void setup(Context context) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Called once for each key/value pair in the input split.
   * Most applications should override this, but the default is the identity function.
   */
  @SuppressWarnings("unchecked")
  protected void map(KEYIN key, VALUEIN value, Context context) throws IOException, InterruptedException {
    context.write((KEYOUT) key, (VALUEOUT) value);
  }

  /**
   * Called once at the end of the task.
   */
  protected void cleanup(Context context) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Expert users can override this method for more complete control over the
   * execution of the Mapper.
   *
   * @param context
   * @throws IOException
   */
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    try {
      while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
      }
    } finally {
      cleanup(context);
    }
  }
}
Reducer side
// The Reducer class provided by Hadoop; a custom Reducer must extend this class.
package org.apache.hadoop.mapreduce;

public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

  /**
   * Called once at the start of the task.
   */
  protected void setup(Context context) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * This method is called once for each key.
   * Most applications will define their reduce class by overriding this method.
   * The default implementation is an identity function.
   */
  @SuppressWarnings("unchecked")
  protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context) throws IOException, InterruptedException {
    for (VALUEIN value : values) {
      context.write((KEYOUT) key, (VALUEOUT) value);
    }
  }

  /**
   * Called once at the end of the task.
   */
  protected void cleanup(Context context) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Advanced application writers can use the
   * {@link #run(org.apache.hadoop.mapreduce.Reducer.Context)} method to
   * control how the reduce task works.
   */
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    try {
      while (context.nextKey()) {
        reduce(context.getCurrentKey(), context.getValues(), context);
        // If a back up store is used, reset it
        Iterator<VALUEIN> iter = context.getValues().iterator();
        if (iter instanceof ReduceContext.ValueIterator) {
          ((ReduceContext.ValueIterator<VALUEIN>) iter).resetBackupStore();
        }
      }
    } finally {
      cleanup(context);
    }
  }
}
The Map process
A custom Mapper class extends this Mapper class. Of the four type parameters of Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>, the first two are the input types of the map() function and the last two are its output types.
1. Read the contents of the input file and parse them into <key, value> pairs; the map() function is called once for each <key, value> pair;
2. Implement your own business logic inside map(): process the incoming <key, value> pair and emit the result as a <key, value> pair through the context object;
3. Partition the emitted <key, value> pairs;
4. Within each partition, sort and group the data by key, collecting the values that share a key into a single collection;
5. Merge the grouped data.
Notes:
The user specifies the input path, and the framework reads the file contents automatically, typically from text files (other formats are possible too). map() is invoked once per line, with the line's byte offset passed in as the key and the line's content as the value.
MapReduce is a distributed computing framework, so multiple map and reduce tasks may be running at once. The purpose of partitioning is to determine which reduce task receives and processes which map outputs.
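To make the partitioning step concrete, here is a minimal sketch of a custom partitioner for the <Text, LongWritable> map output used in the word count program later in this post. The class name CustomizePartitioner is hypothetical (it is not part of that program); Hadoop's default HashPartitioner already uses essentially this hash-modulo scheme, so a custom partitioner is only needed when the default key distribution does not suit the job.

package com.lucl.hadoop.mapreduce;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical custom partitioner for <Text, LongWritable> map output
public class CustomizePartitioner extends Partitioner<Text, LongWritable> {
    @Override
    public int getPartition(Text key, LongWritable value, int numPartitions) {
        // Mask off the sign bit so the result is non-negative, then
        // spread keys across the reduce tasks by modulo
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered on the job with job.setPartitionerClass(CustomizePartitioner.class); the number of partitions equals the number of reduce tasks (job.setNumReduceTasks(n)).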
The map-side shuffle process will be filled in later as my study progresses.
Word count example:
[hadoop@nnode hadoop2.6.0]$ hdfs dfs -cat /data/file1.txt
hello world
hello markhuang
hello hadoop
[hadoop@nnode hadoop2.6.0]$
The input is read line by line, so successive calls to map() receive <0, hello world>, <12, hello markhuang>, and <28, hello hadoop>.
In each call to map(), the key is of type LongWritable and needs no processing; only the received value needs handling. Since the goal is to count words, the value must be split, and each resulting word is recorded once (an occurrence count of 1).
KEYIN, VALUEIN, KEYOUT, VALUEOUT => LongWritable, Text, Text, LongWritable
The Reduce process
A custom Reducer class extends Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>, analogous to the Mapper class, and overrides the reduce() method to implement its own business logic.
1. The outputs of the multiple map tasks are copied across the network to different reduce nodes according to their partitions;
2. The outputs of the multiple tasks are merged and sorted, then processed by your custom business logic;
3. The reduce output is written to the specified location in HDFS.
Notes:
The input values received by reduce are grouped by key, and the groups are ordered by key, forming a <key, <collection of values>> structure.
Word count example:
There are four groups: <hello, {1, 1, 1}>, <world, {1}>, <markhuang, {1}>, <hadoop, {1}>.
reduce() is called once for each group, receiving the group's key and values, and the business logic is applied inside reduce().
KEYIN, VALUEIN, KEYOUT, VALUEOUT => Text, LongWritable, Text, LongWritable
Word count program code:
Map side
package com.lucl.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map side: emit <word, 1> for every token in the input line
public class CustomizeMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        LongWritable one = new LongWritable(1);
        Text word = new Text();
        // Split the line into whitespace-separated tokens
        StringTokenizer token = new StringTokenizer(value.toString());
        while (token.hasMoreTokens()) {
            String v = token.nextToken();
            word.set(v);
            context.write(word, one);   // each occurrence counts once
        }
    }
}
Reduce side
package com.lucl.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Reduce side: sum the counts for each word
public class CustomizeReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        context.write(key, new LongWritable(sum));
    }
}
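Note that the sum is accumulated in a long, which matches the LongWritable output type and avoids overflow if a word's count ever exceeded the int range on a large input.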
Driver class
package com.lucl.hadoop.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.log4j.Logger;

/**
 * @author lucl
 */
public class MyWordCountApp extends Configured implements Tool {
    private static final Logger logger = Logger.getLogger(MyWordCountApp.class);

    public static void main(String[] args) {
        try {
            ToolRunner.run(new MyWordCountApp(), args);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            logger.info("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }

        // Create the job; the name defaults to this class's simple name
        Job job = Job.getInstance(conf, this.getClass().getSimpleName());
        job.setJarByClass(MyWordCountApp.class);

        // Specify the input file or directory
        FileInputFormat.addInputPaths(job, otherArgs[0]);

        // Map-side settings
        job.setMapperClass(CustomizeMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // Reduce-side settings
        job.setReducerClass(CustomizeReducer.class);
        job.setCombinerClass(CustomizeReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Specify the output directory
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }
}
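A note on the setCombinerClass call: reusing CustomizeReducer as the combiner is valid here because summing counts is associative and commutative, so partial sums computed on the map side do not change the final result. The effect is visible in the job counters below (Combine input records=12 versus Combine output records=8). Also keep in mind that the output directory passed as the second argument must not already exist, or FileOutputFormat will fail the job.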
Running the word count program:
[hadoop@nnode code]$ hadoop jar WCApp.jar /data /wc-201511290101
15/11/29 00:20:37 INFO client.RMProxy: Connecting to ResourceManager at nnode/192.168.137.117:8032
15/11/29 00:20:38 INFO input.FileInputFormat: Total input paths to process : 2
15/11/29 00:20:39 INFO mapreduce.JobSubmitter: number of splits:2
15/11/29 00:20:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1448694510754_0004
15/11/29 00:20:39 INFO impl.YarnClientImpl: Submitted application application_1448694510754_0004
15/11/29 00:20:39 INFO mapreduce.Job: The url to track the job: http://nnode:8088/proxy/application_1448694510754_0004/
15/11/29 00:20:39 INFO mapreduce.Job: Running job: job_1448694510754_0004
15/11/29 00:21:10 INFO mapreduce.Job: Job job_1448694510754_0004 running in uber mode : false
15/11/29 00:21:10 INFO mapreduce.Job:  map 0% reduce 0%
15/11/29 00:21:41 INFO mapreduce.Job:  map 100% reduce 0%
15/11/29 00:22:01 INFO mapreduce.Job:  map 100% reduce 100%
15/11/29 00:22:02 INFO mapreduce.Job: Job job_1448694510754_0004 completed successfully
15/11/29 00:22:02 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=134
		FILE: Number of bytes written=323865
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=271
		HDFS: Number of bytes written=55
		HDFS: Number of read operations=9
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=2
		Launched reduce tasks=1
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=55944
		Total time spent by all reduces in occupied slots (ms)=17867
		Total time spent by all map tasks (ms)=55944
		Total time spent by all reduce tasks (ms)=17867
		Total vcore-seconds taken by all map tasks=55944
		Total vcore-seconds taken by all reduce tasks=17867
		Total megabyte-seconds taken by all map tasks=57286656
		Total megabyte-seconds taken by all reduce tasks=18295808
	Map-Reduce Framework
		Map input records=6
		Map output records=12
		Map output bytes=170
		Map output materialized bytes=140
		Input split bytes=188
		Combine input records=12
		Combine output records=8
		Reduce input groups=7
		Reduce shuffle bytes=140
		Reduce input records=8
		Reduce output records=7
		Spilled Records=16
		Shuffled Maps =2
		Failed Shuffles=0
		Merged Map outputs=2
		GC time elapsed (ms)=315
		CPU time spent (ms)=2490
		Physical memory (bytes) snapshot=510038016
		Virtual memory (bytes) snapshot=2541662208
		Total committed heap usage (bytes)=257171456
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=83
	File Output Format Counters
		Bytes Written=55
[hadoop@nnode code]$
Word count program output:
[hadoop@nnode ~]$ hdfs dfs -ls /wc-201511290101
Found 2 items
-rw-r--r--   2 hadoop hadoop          0 2015-11-29 00:22 /wc-201511290101/_SUCCESS
-rw-r--r--   2 hadoop hadoop         55 2015-11-29 00:21 /wc-201511290101/part-r-00000
[hadoop@nnode ~]$ hdfs dfs -text /wc-201511290101/part-r-00000
2.3	1
fail	1
hadoop	4
hello	3
markhuang	1
ok	1
world	1
[hadoop@nnode ~]$
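The counts cover more than file1.txt: the job log above shows "Total input paths to process : 2", so the /data input directory contained a second file, which is where words such as 2.3, fail, and ok (and the extra occurrences of hadoop) come from.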