Today let's talk about how to convert an RDD to a DataFrame in Spark 2.2.0. Many people may not be familiar with this, so I have put together the following walkthrough; I hope you find it useful.
Spark SQL supports converting existing RDDs into Datasets.

Approach: use the programmatic interface, which lets you construct a schema and then apply it to an existing RDD. This method is more verbose, but it allows you to construct a Dataset even when the columns and their types are not known until runtime.
Data preparation: studentData.txt

1001,20,zhangsan
1002,17,lisi
1003,24,wangwu
1004,16,zhaogang
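Each record in studentData.txt is a comma-separated triple of id, age, and name. As a minimal stand-alone sketch, the per-line parsing that the Spark map function below performs looks like this in plain Java (the class and method names here are illustrative, not part of the Spark code):

```java
// Illustrative stand-alone parser mirroring the split logic used in the Spark map function.
public class StudentLineParser {

    // Simple holder for one parsed record.
    public static class Student {
        public final int id;
        public final int age;
        public final String name;

        public Student(int id, int age, String name) {
            this.id = id;
            this.age = age;
            this.name = name;
        }
    }

    // Split one CSV line "id,age,name" into typed fields,
    // matching the column order of the schema built later.
    public static Student parseLine(String line) {
        String[] parts = line.split(",");
        return new Student(Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           parts[2]);
    }

    public static void main(String[] args) {
        Student s = parseLine("1001,20,zhangsan");
        System.out.println(s.id + " " + s.age + " " + s.name);
    }
}
```

If a line were malformed (fewer than three fields, or a non-numeric id/age), `Integer.parseInt` or the array access would throw; in the Spark job such a failure would surface as a task error at action time, since `map` is lazy.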
Code example:
package com.unicom.ljs.spark220.study;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.*;
import org.apache.spark.sql.types.*;

import java.util.ArrayList;
import java.util.List;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-01-21 13:42
 * @version: v1.0
 * @description: com.unicom.ljs.spark220.study
 */
public class RDD2DataFrameProgramatically {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("RDD2DataFrameProgramatically");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);
        // SQLContext still works in 2.2.0; SparkSession.builder() is the preferred entry point in 2.x.
        SQLContext sqlContext = new SQLContext(sc);

        // Step 1: load the raw text file and convert each line into a Row.
        JavaRDD<String> lineRDD = sc.textFile("C:\\Users\\Administrator\\Desktop\\studentData.txt");
        JavaRDD<Row> rowJavaRDD = lineRDD.map(new Function<String, Row>() {
            @Override
            public Row call(String line) throws Exception {
                String[] splitLine = line.split(",");
                // Field order must match the schema built below: id, age, name.
                return RowFactory.create(Integer.valueOf(splitLine[0]),
                        Integer.valueOf(splitLine[1]),
                        splitLine[2]);
            }
        });

        // Step 2: build the schema (StructType) programmatically.
        List<StructField> structFields = new ArrayList<StructField>();
        structFields.add(DataTypes.createStructField("id", DataTypes.IntegerType, true));
        structFields.add(DataTypes.createStructField("age", DataTypes.IntegerType, true));
        structFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
        StructType structType = DataTypes.createStructType(structFields);

        // Step 3: apply the schema to the RDD of Rows to get a DataFrame.
        Dataset<Row> dataFrame = sqlContext.createDataFrame(rowJavaRDD, structType);

        // registerTempTable is deprecated in Spark 2.x; createOrReplaceTempView is the equivalent replacement.
        dataFrame.createOrReplaceTempView("studentInfo");
        Dataset<Row> resultDataSet = sqlContext.sql("select * from studentInfo where age > 17");

        List<Row> collect = resultDataSet.javaRDD().collect();
        for (Row row : collect) {
            System.out.println(row);
        }
        sc.close();
    }
}
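The query `select * from studentInfo where age > 17` keeps only the records for zhangsan (age 20) and wangwu (age 24); lisi (17) and zhaogang (16) are filtered out, since 17 is not greater than 17. As a quick sanity check, the same filter can be expressed over the raw lines in plain Java (this sketch is illustrative and independent of Spark):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative plain-Java equivalent of the SQL filter "age > 17".
public class AgeFilterDemo {

    // Keep only lines whose second field (age) is strictly greater than 17.
    public static List<String> filterAdults(List<String> lines) {
        return lines.stream()
                .filter(line -> Integer.parseInt(line.split(",")[1]) > 17)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "1001,20,zhangsan",
                "1002,17,lisi",
                "1003,24,wangwu",
                "1004,16,zhaogang");
        // Survivors: 1001 (age 20) and 1003 (age 24).
        System.out.println(filterAdults(lines));
    }
}
```

The Spark job produces the same two logical records, printed as Row objects by the final loop.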
Key pom.xml dependencies:
<spark.version>2.2.0</spark.version>
<scala.version>2.11.8</scala.version>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
Having read the above, do you now have a better understanding of how to convert an RDD to a DataFrame in Spark 2.2.0? Thank you all for your support.