Before diving into any specific Spark topic, it helps to first understand Spark as a whole; see the companion article Understanding Spark Correctly (正確理解spark).
This article walks through Spark's key-value (pair) RDD Java API in detail.
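All of the snippets below assume an already-created JavaSparkContext named sc together with the imports they use. A minimal setup sketch follows; the app name and the local[2] master are placeholders, not part of the original examples:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;
    import org.apache.spark.api.java.function.Function2;

    import scala.Tuple2;

    // placeholder app name and master; adjust for your own environment
    SparkConf conf = new SparkConf().setAppName("pair-rdd-examples").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);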
I. Ways to create a key-value RDD
1. sparkContext.parallelizePairs
    JavaPairRDD<String, Integer> javaPairRDD = sc.parallelizePairs(
            Arrays.asList(new Tuple2<>("test", 3), new Tuple2<>("kkk", 3)));
    // result: [(test,3), (kkk,3)]
    System.out.println("javaPairRDD = " + javaPairRDD.collect());
2. Using keyBy
    public class User implements Serializable {
        private String userId;
        private Integer amount;

        public User(String userId, Integer amount) {
            this.userId = userId;
            this.amount = amount;
        }

        // getter used by keyBy below
        public String getUserId() {
            return userId;
        }

        @Override
        public String toString() {
            return "User{" + "userId='" + userId + '\'' + ", amount=" + amount + '}';
        }
    }

    JavaRDD<User> userJavaRDD = sc.parallelize(Arrays.asList(new User("u1", 20)));
    JavaPairRDD<String, User> userJavaPairRDD = userJavaRDD.keyBy(new Function<User, String>() {
        @Override
        public String call(User user) throws Exception {
            return user.getUserId();
        }
    });
    // result: [(u1,User{userId='u1', amount=20})]
    System.out.println("userJavaPairRDD = " + userJavaPairRDD.collect());
3. Using zip
    JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 1, 2, 3, 5, 8, 13));
    // zipping two RDDs is another way to create a key-value RDD
    JavaPairRDD<Integer, Integer> zipPairRDD = rdd.zip(rdd);
    // result: [(1,1), (1,1), (2,2), (3,3), (5,5), (8,8), (13,13)]
    System.out.println("zipPairRDD = " + zipPairRDD.collect());
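One caveat: zip requires the two RDDs to have the same number of partitions and the same number of elements in each partition (trivially true here, since the RDD is zipped with itself); otherwise the job fails at runtime.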
4. Using groupBy
    JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 1, 2, 3, 5, 8, 13));
    Function<Integer, Boolean> isEven = new Function<Integer, Boolean>() {
        @Override
        public Boolean call(Integer x) throws Exception {
            return x % 2 == 0;
        }
    };
    // group the elements into evens and odds, producing a key-value RDD
    JavaPairRDD<Boolean, Iterable<Integer>> oddsAndEvens = rdd.groupBy(isEven);
    // result: [(false,[1, 1, 3, 5, 13]), (true,[2, 8])]
    System.out.println("oddsAndEvens = " + oddsAndEvens.collect());
    // result: 1
    System.out.println("oddsAndEvens.partitions.size = " + oddsAndEvens.partitions().size());

    oddsAndEvens = rdd.groupBy(isEven, 2);
    // result: [(false,[1, 1, 3, 5, 13]), (true,[2, 8])]
    System.out.println("oddsAndEvens = " + oddsAndEvens.collect());
    // result: 2
    System.out.println("oddsAndEvens.partitions.size = " + oddsAndEvens.partitions().size());
II. combineByKey
    JavaPairRDD<String, Integer> javaPairRDD = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("coffee", 1), new Tuple2<>("coffee", 2),
            new Tuple2<>("panda", 3), new Tuple2<>("coffee", 9)), 2);

    // applied to the value the first time a key is seen within a partition
    Function<Integer, Tuple2<Integer, Integer>> createCombiner =
            new Function<Integer, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Integer value) throws Exception {
            return new Tuple2<>(value, 1);
        }
    };
    // applied to each further value of a key that already has a combiner
    // (created by createCombiner above) in the current partition
    Function2<Tuple2<Integer, Integer>, Integer, Tuple2<Integer, Integer>> mergeValue =
            new Function2<Tuple2<Integer, Integer>, Integer, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> acc, Integer value) throws Exception {
            return new Tuple2<>(acc._1() + value, acc._2() + 1);
        }
    };
    // applied when a key's per-partition combiners are merged across partitions
    Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>> mergeCombiners =
            new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> acc1, Tuple2<Integer, Integer> acc2) throws Exception {
            return new Tuple2<>(acc1._1() + acc2._1(), acc1._2() + acc2._2());
        }
    };
    JavaPairRDD<String, Tuple2<Integer, Integer>> combineByKeyRDD =
            javaPairRDD.combineByKey(createCombiner, mergeValue, mergeCombiners);
    // result: [(coffee,(12,3)), (panda,(3,1))]
    System.out.println("combineByKeyRDD = " + combineByKeyRDD.collect());
The data flow of combineByKey works as follows:
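A text trace of that flow for the example above; the even split of the four pairs across the two partitions is an assumption about how parallelizePairs distributes them, not something guaranteed by the API:

    partition 0: (coffee,1), (coffee,2)
        (coffee,1)  first time "coffee" is seen here -> createCombiner(1)    = (1,1)
        (coffee,2)  "coffee" already has a combiner  -> mergeValue((1,1), 2) = (3,2)
    partition 1: (panda,3), (coffee,9)
        (panda,3)   first time "panda" is seen here  -> createCombiner(3)    = (3,1)
        (coffee,9)  first time "coffee" is seen here -> createCombiner(9)    = (9,1)
    after the shuffle, per key:
        coffee: mergeCombiners((3,2), (9,1)) = (12,3)
        panda:  (3,1)  (a single combiner, nothing to merge)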
For a detailed explanation of how combineByKey works internally, see: Spark Core RDD API Principles Explained (spark core RDD api原理詳解).
III. aggregateByKey
    JavaPairRDD<String, Tuple2<Integer, Integer>> aggregateByKeyRDD =
            javaPairRDD.aggregateByKey(new Tuple2<>(0, 0), mergeValue, mergeCombiners);
    // result: [(coffee,(12,3)), (panda,(3,1))]
    System.out.println("aggregateByKeyRDD = " + aggregateByKeyRDD.collect());

    // aggregateByKey is implemented on top of combineByKey;
    // the aggregateByKey call above is equivalent to the combineByKey call below
    Function<Integer, Tuple2<Integer, Integer>> createCombinerAggregateByKey =
            new Function<Integer, Tuple2<Integer, Integer>>() {
        @Override
        public Tuple2<Integer, Integer> call(Integer value) throws Exception {
            return mergeValue.call(new Tuple2<>(0, 0), value);
        }
    };
    // result: [(coffee,(12,3)), (panda,(3,1))]
    System.out.println(javaPairRDD.combineByKey(createCombinerAggregateByKey, mergeValue, mergeCombiners).collect());
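For contrast, the same sum-and-count can be written much more compactly with Java 8 lambdas. This is only a sketch of the call above, assuming the same javaPairRDD and a Java 8+ build:

    JavaPairRDD<String, Tuple2<Integer, Integer>> sumAndCount = javaPairRDD.aggregateByKey(
            new Tuple2<>(0, 0),
            // within a partition: fold each value into the (sum, count) accumulator
            (acc, value) -> new Tuple2<>(acc._1() + value, acc._2() + 1),
            // across partitions: merge two (sum, count) accumulators
            (acc1, acc2) -> new Tuple2<>(acc1._1() + acc2._1(), acc1._2() + acc2._2()));
    // result: [(coffee,(12,3)), (panda,(3,1))]
    System.out.println("sumAndCount = " + sumAndCount.collect());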
IV. reduceByKey
    JavaPairRDD<String, Integer> reduceByKeyRDD = javaPairRDD.reduceByKey(
            new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer value1, Integer value2) throws Exception {
            return value1 + value2;
        }
    });
    // result: [(coffee,12), (panda,3)]
    System.out.println("reduceByKeyRDD = " + reduceByKeyRDD.collect());

    // reduceByKey is also implemented on top of combineByKey;
    // the reduceByKey call above is equivalent to the combineByKey call below
    Function<Integer, Integer> createCombinerReduce = new Function<Integer, Integer>() {
        @Override
        public Integer call(Integer integer) throws Exception {
            return integer;
        }
    };
    Function2<Integer, Integer, Integer> mergeValueReduce = new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer integer, Integer integer2) throws Exception {
            return integer + integer2;
        }
    };
    // result: [(coffee,12), (panda,3)]
    System.out.println(javaPairRDD.combineByKey(createCombinerReduce, mergeValueReduce, mergeValueReduce).collect());
V. foldByKey
    JavaPairRDD<String, Integer> foldByKeyRDD = javaPairRDD.foldByKey(0,
            new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer integer, Integer integer2) throws Exception {
            return integer + integer2;
        }
    });
    // result: [(coffee,12), (panda,3)]
    System.out.println("foldByKeyRDD = " + foldByKeyRDD.collect());

    // foldByKey is also implemented on top of combineByKey;
    // the foldByKey call above is equivalent to the combineByKey call below
    Function2<Integer, Integer, Integer> mergeValueFold = new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer integer, Integer integer2) throws Exception {
            return integer + integer2;
        }
    };
    Function<Integer, Integer> createCombinerFold = new Function<Integer, Integer>() {
        @Override
        public Integer call(Integer integer) throws Exception {
            return mergeValueFold.call(0, integer);
        }
    };
    // result: [(coffee,12), (panda,3)]
    System.out.println(javaPairRDD.combineByKey(createCombinerFold, mergeValueFold, mergeValueFold).collect());
VI. groupByKey
    JavaPairRDD<String, Iterable<Integer>> groupByKeyRDD = javaPairRDD.groupByKey();
    // result: [(coffee,[1, 2, 9]), (panda,[3])]
    System.out.println("groupByKeyRDD = " + groupByKeyRDD.collect());

    // groupByKey is also implemented on top of combineByKey;
    // the groupByKey call above is equivalent to the combineByKey call below
    Function<Integer, List<Integer>> createCombinerGroup = new Function<Integer, List<Integer>>() {
        @Override
        public List<Integer> call(Integer integer) throws Exception {
            List<Integer> list = new ArrayList<>();
            list.add(integer);
            return list;
        }
    };
    Function2<List<Integer>, Integer, List<Integer>> mergeValueGroup = new Function2<List<Integer>, Integer, List<Integer>>() {
        @Override
        public List<Integer> call(List<Integer> integers, Integer integer) throws Exception {
            integers.add(integer);
            return integers;
        }
    };
    Function2<List<Integer>, List<Integer>, List<Integer>> mergeCombinersGroup = new Function2<List<Integer>, List<Integer>, List<Integer>>() {
        @Override
        public List<Integer> call(List<Integer> integers, List<Integer> integers2) throws Exception {
            integers.addAll(integers2);
            return integers;
        }
    };
    // result: [(coffee,[1, 2, 9]), (panda,[3])]
    System.out.println(javaPairRDD.combineByKey(createCombinerGroup, mergeValueGroup, mergeCombinersGroup).collect());
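Note that groupByKey materializes every value for a key, and unlike the combineByKey-based aggregations above it does not combine values map-side before the shuffle. So when the end goal is an aggregate per key, prefer reduceByKey, foldByKey, aggregateByKey, or combineByKey over groupByKey followed by a per-key reduction.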
The internals of these APIs are hard to convey fully in prose; for a deeper, more thorough understanding of how they work, see: Spark Core RDD API Principles Explained (spark core RDD api原理詳解).