This article introduces Flink DataSet programming in Apache Flink. The content is fairly detailed; interested readers can use it as a reference, and hopefully it will be of some help.
DataSet programming in Flink follows a very conventional pattern: you simply implement transformations over data sets (for example filtering, mapping, joining, grouping). A data set is initially created from a data source (for example by reading a file or loading a local collection), and the results of the transformations are returned through a sink to a local (or distributed) file system or to the terminal. Flink programs can run in a variety of environments, standalone or embedded in other programs, and execution can happen in a local JVM or on a cluster.
Source ===> Flink (transformation) ===> Sink
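As a concrete illustration of this flow, here is a minimal Scala sketch (the word-count logic is only an example; the file path matches the hello.txt used later in this article): the source creates a DataSet, a few transformations reshape it, and print() serves as the sink and triggers execution.

import org.apache.flink.api.scala._

object DataSetFlowSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Source: read a local text file into a DataSet[String]
    val text = env.readTextFile("E:/test/input/hello.txt")

    // Transformations: split into words, drop empties, count per word
    val counts = text
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(_.nonEmpty)
      .map((_, 1))
      .groupBy(0)
      .sum(1)

    // Sink: print to the terminal (print() also triggers execution)
    counts.print()
  }
}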
readTextFile(path) / TextInputFormat - Reads files line wise and returns them as Strings.
readTextFileWithValue(path) / TextValueInputFormat - Reads files line wise and returns them as StringValues. StringValues are mutable strings.
readCsvFile(path) / CsvInputFormat - Parses files of comma (or another char) delimited fields. Returns a DataSet of tuples or POJOs. Supports the basic java types and their Value counterparts as field types.
readFileOfPrimitives(path, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer.
readFileOfPrimitives(path, delimiter, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer using the given delimiter.
fromCollection(Collection)
fromCollection(Iterator, Class)
fromElements(T ...)
fromParallelCollection(SplittableIterator, Class)
generateSequence(from, to)
Collection-based data sources are mostly used in development environments or while learning, because they make it very easy to fabricate whatever data we need. Below we use a collection as the data source in both Java and Scala; the data is simply the numbers 1 to 10.
import org.apache.flink.api.java.ExecutionEnvironment;

import java.util.ArrayList;
import java.util.List;

public class JavaDataSetSourceApp {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        fromCollection(executionEnvironment);
    }

    public static void fromCollection(ExecutionEnvironment env) throws Exception {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 1; i <= 10; i++) {
            list.add(i);
        }
        env.fromCollection(list).print();
    }
}
import org.apache.flink.api.scala.ExecutionEnvironment

object DataSetSourceApp {

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    fromCollection(env)
  }

  def fromCollection(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val data = 1 to 10
    env.fromCollection(data).print()
  }
}
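fromCollection is demonstrated above; the other collection-based sources listed earlier, such as fromElements and generateSequence, work in much the same way. A minimal Scala sketch with made-up values:

import org.apache.flink.api.scala._

object OtherCollectionSourcesApp {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // fromElements: build a DataSet directly from a few literal values
    env.fromElements("hello", "world", "welcome").print()

    // generateSequence: a DataSet[Long] with the values 1, 2, ..., 10
    env.generateSequence(1, 10).print()
  }
}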
In the local folder E:\test\input there is a file hello.txt with the following content:
hello world welcome hello welcome
def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  textFile(env)
}

def textFile(env: ExecutionEnvironment): Unit = {
  val filePathFilter = "E:/test/input/hello.txt"
  env.readTextFile(filePathFilter).print()
}
The readTextFile method takes parameter 1: the file path (local, or something like hdfs://host:port/file/path), and parameter 2: the character set (UTF-8 by default if omitted).
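For example, when the input file is not UTF-8 encoded, the charset can be passed explicitly as the second argument. A small sketch (the GBK-encoded file path is made up for illustration):

def textFileWithCharset(env: ExecutionEnvironment): Unit = {
  // the second argument is the charset name; it defaults to UTF-8 when omitted
  env.readTextFile("E:/test/input/hello-gbk.txt", "GBK").print()
}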
Can we specify a directory instead of a single file? Let's pass the directory path directly:
def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  textFile(env)
}

def textFile(env: ExecutionEnvironment): Unit = {
  //val filePathFilter = "E:/test/input/hello.txt"
  val filePathFilter = "E:/test/input"
  env.readTextFile(filePathFilter).print()
}
The program runs and produces the expected output, so passing a directory to readTextFile also works: it traverses all files under that directory.
public static void main(String[] args) throws Exception {
    ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
    // fromCollection(executionEnvironment);
    textFile(executionEnvironment);
}

public static void textFile(ExecutionEnvironment env) throws Exception {
    String filePath = "E:/test/input/hello.txt";
    // String filePath = "E:/test/input";
    env.readTextFile(filePath).print();
}
The same applies in Java: readTextFile accepts either a file or a directory, and when a directory is given, all files under it are traversed.
Create a CSV file with the following content:
name,age,job
Tom,26,cat
Jerry,24,mouse
sophia,30,developer
The readCsvFile method for reading CSV files takes the following parameters:
filePath: String,
lineDelimiter: String = "\n",
fieldDelimiter: String = ",",        // field delimiter
quoteCharacter: Character = null,
ignoreFirstLine: Boolean = false,    // whether to skip the first line
ignoreComments: String = null,
lenient: Boolean = false,
includedFields: Array[Int] = null,   // which columns of the file to read
pojoFields: Array[String] = null
The code to read the CSV file is as follows:
def csvFile(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  val filePath = "E:/test/input/people.csv"
  env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
}
To read only the first two columns, you need to specify includedFields:
env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
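The remaining parameters, such as fieldDelimiter and lenient, can be combined in the same call. A hypothetical sketch, placed inside the same csvFile method, assuming a semicolon-separated variant of the file (people-semicolon.csv is made up for illustration):

env.readCsvFile[(String, Int, String)](
  "E:/test/input/people-semicolon.csv",  // hypothetical semicolon-separated file
  fieldDelimiter = ";",
  ignoreFirstLine = true,
  lenient = true                         // skip lines that cannot be parsed instead of failing
).print()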
Above we used a Tuple to specify the types; how do we use a custom case class instead?
def csvFile(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  val filePath = "E:/test/input/people.csv"
  // env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
  // env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
  env.readCsvFile[MyCaseClass](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
}

case class MyCaseClass(name: String, age: Int)
How do we specify a POJO? Create a POJO class, People:
public class People {

    private String name;
    private int age;
    private String job;

    @Override
    public String toString() {
        return "People{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", job='" + job + '\'' +
                '}';
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }
}
env.readCsvFile[People](filePath, ignoreFirstLine = true, pojoFields = Array("name", "age", "job")).print()
public static void csvFile(ExecutionEnvironment env) throws Exception {
    String filePath = "E:/test/input/people.csv";
    DataSource<Tuple2<String, Integer>> types = env.readCsvFile(filePath)
            .ignoreFirstLine()
            .includeFields("11")
            .types(String.class, Integer.class);
    types.print();
}
This takes only the data from the first and second columns.
Reading the data into a POJO:
env.readCsvFile(filePath).ignoreFirstLine().pojoType(People.class, "name", "age", "job").print();
To read files recursively from nested subdirectories, enable recursive.file.enumeration through a Configuration parameter (note the import of org.apache.flink.configuration.Configuration):

import org.apache.flink.configuration.Configuration

def main(args: Array[String]): Unit = {
  val env = ExecutionEnvironment.getExecutionEnvironment
  //fromCollection(env)
  // textFile(env)
  // csvFile(env)
  readRecursiveFiles(env)
}

def readRecursiveFiles(env: ExecutionEnvironment): Unit = {
  val filePath = "E:/test/nested"
  val parameter = new Configuration()
  parameter.setBoolean("recursive.file.enumeration", true)
  env.readTextFile(filePath).withParameters(parameter).print()
}
Reading a compressed file directly:

def readCompressionFiles(env: ExecutionEnvironment): Unit = {
  val filePath = "E:/test/my.tar.gz"
  env.readTextFile(filePath).print()
}
Compressed files can be read directly. Compression improves space utilization but increases CPU load, so it is a trade-off that needs tuning: choose the more suitable option for each situation, because not every optimization brings the result you want. If the cluster's CPU is already under heavy pressure, reading compressed files is not a good choice.
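One final note: every example in this article uses print() as the sink. Results can also be written back to a file system with a sink such as writeAsText. A minimal sketch (the output path is made up for illustration):

import org.apache.flink.core.fs.FileSystem.WriteMode

def writeToTextSink(env: ExecutionEnvironment): Unit = {
  val data = env.generateSequence(1, 10)
  // write each element as one line of text; OVERWRITE replaces an existing result
  data.writeAsText("E:/test/output", WriteMode.OVERWRITE)
  // unlike print(), writeAsText is lazy, so the job has to be triggered explicitly
  env.execute("write-as-text sketch")
}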
That concludes this walk-through of Flink DataSet programming in Apache Flink. Hopefully the content above is of some help and lets you learn something new. If you found the article useful, feel free to share it so more people can see it.