1) Start the environment
start-all.sh
2) Check the status
jps
0613 NameNode
10733 DataNode
3455 NodeManager
15423 Jps
11082 ResourceManager
10913 SecondaryNameNode
3) Write the jar in Eclipse
1. Write the MapCal class
package com.mp;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapCal extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] peps = line.split("-");
        // Emit two tagged key-value pairs for each "child-parent" line:
        // the child keyed with an "s"-tagged parent, and the parent keyed
        // with a "g"-tagged child.
        context.write(new Text(peps[0]), new Text("s" + peps[1]));
        context.write(new Text(peps[1]), new Text("g" + peps[0]));
    }
}
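The tagging step above can be sketched without Hadoop. The following plain-Java sketch (class and method names are hypothetical, not part of the original project) assumes input lines of the form child-parent, e.g. "Tom-Lucy" meaning Lucy is Tom's parent:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MapTagDemo {
    // For a line "child-parent", emit two tagged records, mirroring MapCal:
    //   (child,  "s" + parent) -> the key person's parent
    //   (parent, "g" + child)  -> the key person's child
    static List<Map.Entry<String, String>> mapLine(String line) {
        String[] peps = line.split("-");
        List<Map.Entry<String, String>> out = new ArrayList<>();
        out.add(new SimpleEntry<>(peps[0], "s" + peps[1]));
        out.add(new SimpleEntry<>(peps[1], "g" + peps[0]));
        return out;
    }

    public static void main(String[] args) {
        for (Map.Entry<String, String> e : mapLine("Tom-Lucy")) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}
```

Because each person appears once as a key with their parent and once with their child, the reducer later sees, for any middle-generation person, both their parents and their children under one key.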
2. Write the ReduceCal class
package com.mp;

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReduceCal extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        ArrayList<Text> grands = new ArrayList<Text>();
        ArrayList<Text> sons = new ArrayList<Text>();
        // Sort the values into the two lists by tag. Hadoop reuses the same
        // Text object across iterations, so copy it before storing.
        for (Text text : values) {
            String str = text.toString();
            if (str.startsWith("g")) {
                grands.add(new Text(str));
            } else {
                sons.add(new Text(str));
            }
        }
        // Output the cross product of the two lists.
        for (int i = 0; i < grands.size(); i++) {
            for (int j = 0; j < sons.size(); j++) {
                context.write(grands.get(i), sons.get(j));
            }
        }
    }
}
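The join performed by the reducer can also be simulated in plain Java. This sketch (hypothetical names, no Hadoop dependency) crosses each "g"-tagged value (a child of the key person) with each "s"-tagged value (a parent of the key person); unlike the reducer above it strips the tags for readability:

```java
import java.util.ArrayList;
import java.util.List;

public class ReduceJoinDemo {
    // Each (g-value, s-value) pair is a grandchild-grandparent relation
    // linked through the key person.
    static List<String> reduceValues(Iterable<String> values) {
        List<String> grands = new ArrayList<>();
        List<String> sons = new ArrayList<>();
        for (String v : values) {
            if (v.startsWith("g")) {
                grands.add(v.substring(1));
            } else {
                sons.add(v.substring(1));
            }
        }
        List<String> out = new ArrayList<>();
        for (String g : grands) {
            for (String s : sons) {
                out.add(g + "\t" + s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Key person "Lucy": Tom is her child, Mary is her parent.
        System.out.println(reduceValues(List.of("gTom", "sMary")));
    }
}
```

If a key person has only children or only parents, one list is empty and the cross product is empty, so no pair is emitted for that key.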
3. Write the RunJob class
package com.mp;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RunJob {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // For local multi-threaded simulation these can be set instead:
        // conf.set("fs.defaultFS", "hdfs://node3:8020");
        // conf.set("mapred.jar", "C:\\Users\\Administrator\\Desktop\\wc.jar");
        try {
            FileSystem fs = FileSystem.get(conf);
            Job job = Job.getInstance(conf);
            job.setJobName("shuju");
            job.setJarByClass(RunJob.class);
            job.setMapperClass(MapCal.class);
            job.setReducerClass(ReduceCal.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(Text.class);
            // Input and output directories for the job, taken from the
            // command line (see the hadoop jar invocation below).
            FileInputFormat.addInputPath(job, new Path(args[0]));
            // The output directory must not exist before the job runs.
            Path outPath = new Path(args[1]);
            if (fs.exists(outPath)) {
                fs.delete(outPath, true);
            }
            FileOutputFormat.setOutputPath(job, outPath);
            boolean f = job.waitForCompletion(true);
            if (f) {
                System.out.println("Job completed successfully!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
4) Export the jar.
5) Upload the jar to a Linux directory via FTP
6) Run the jar
hadoop jar shuju.jar com.mp.RunJob / /outg
7) If both map and reduce reach 100% and the log ends with counters like
Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
File Input Format Counters
        Bytes Read=45
File Output Format Counters
        Bytes Written=18
then the job ran successfully!
8) Check the result
hadoop fs -tail /outg/part-r-00000