The Spark framework is written in Scala and runs on the Java Virtual Machine (JVM). Client applications can be written in Python, Java, Scala, or R.
Visit http://spark.apache.org/downloads.html and download a pre-built package.
Open a terminal, change into the directory containing the downloaded Spark archive, and extract it with the following commands:
cd ~
tar -xf spark-2.2.2-bin-hadoop2.7.tgz -C /opt/module/
cd /opt/module/spark-2.2.2-bin-hadoop2.7
ls
Note: in the tar command, the x flag tells tar to extract the archive, and the f flag specifies the archive file name.
The extracted directory contains, among other things:
README.md: brief instructions for getting started with Spark
bin: executables for interacting with Spark in various ways
core, streaming, python, ...: source code for Spark's main components
examples: Spark programs you can read and run, which are very helpful for learning the Spark API
# Run an example program that estimates Pi
./bin/run-example SparkPi 10
./bin/spark-shell --master local[2]
# The --master option sets the run mode: local runs locally with a single thread; local[N] runs locally with N threads.
./bin/pyspark --master local[2]
./bin/sparkR --master local[2]
# Applications written in any supported language are submitted the same way
./bin/spark-submit examples/src/main/python/pi.py 10
./bin/spark-submit examples/src/main/r/dataframe.R
...
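The --master value shown above can also be set in code when the session is created; here is a minimal Scala sketch (the application name is illustrative, not taken from the examples above):
import org.apache.spark.sql.SparkSession

// Equivalent to passing --master local[2] on the command line
val spark = SparkSession.builder
  .appName("MasterExample")  // illustrative name
  .master("local[2]")        // run locally with 2 threads
  .getOrCreate()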
Use the spark-shell script for interactive analysis (Scala).
scala> val textFile = spark.read.textFile("README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]
scala> textFile.count() // Number of items in this Dataset
res0: Long = 126 // May be different from yours as README.md will change over time, similar to other outputs
scala> textFile.first() // First item in this Dataset
res1: String = # Apache Spark
# Use the filter transformation to return a subset of the original Dataset
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string]
# Chain together transformations and actions
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
# Use Dataset transformations and actions to find the line with the most words
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Long = 15
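The comparison inside reduce can also be written with Math.max, which reads a little more clearly; the result is the same:
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))  // same maximum as above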
# Count how many times each word occurs
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count()
wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint]
scala> wordCounts.collect()
res6: Array[(String, Long)] = Array((means,1), (under,2), (this,3), (Because,1), (Python,2), (agree,1), (cluster.,1), ...)
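Since linesWithSpark was defined above, it can also be cached in memory so that repeated actions reuse the computed result; a short sketch in the same shell session:
scala> linesWithSpark.cache()   // mark the Dataset to be kept in memory
scala> linesWithSpark.count()   // first action computes and caches the data
scala> linesWithSpark.count()   // later actions read from the cache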
Use the pyspark script for interactive analysis (Python).
>>> textFile = spark.read.text("README.md")
>>> textFile.count() # Number of rows in this DataFrame
126
>>> textFile.first() # First row in this DataFrame
Row(value=u'# Apache Spark')
# Select matching rows with filter
>>> linesWithSpark = textFile.filter(textFile.value.contains("Spark"))
# Chain together transformations and actions
>>> textFile.filter(textFile.value.contains("Spark")).count() # How many lines contain "Spark"?
15
# Find the line with the most words
>>> from pyspark.sql.functions import *
>>> textFile.select(size(split(textFile.value, "\s+")).name("numWords")).agg(max(col("numWords"))).collect()
[Row(max(numWords)=15)]
# Count how many times each word occurs
>>> wordCounts = textFile.select(explode(split(textFile.value, "\s+")).alias("word")).groupBy("word").count()
>>> wordCounts.collect()
[Row(word=u'online', count=1), Row(word=u'graphs', count=1), ...]
Besides running interactively, Spark can also be linked into standalone applications in Java, Scala, or Python.
The main difference between a standalone application and the shell is that the application must initialize its own SparkSession (the shell creates one for you).
The following Scala application counts the number of lines containing the letter a and the letter b, respectively.
/* SimpleApp.scala */
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
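To produce the jar used by spark-submit below, a minimal sbt build definition along these lines is assumed (name and versions are chosen to match the path target/scala-2.11/simple-project_2.11-1.0.jar and the 2.2.2 download above; adjust them to your environment):
// build.sbt (sketch); run `sbt package` to build the jar
name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.2"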
Run the application:
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
--class "SimpleApp" \
--master local[4] \
target/scala-2.11/simple-project_2.11-1.0.jar
...
Lines with a: 46, Lines with b: 23
The same application in Java, again counting the lines containing a and b:
/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();
    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();
    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
    spark.stop();
  }
}
Run the application:
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
--class "SimpleApp" \
--master local[4] \
target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
The same application in Python, again counting the lines containing a and b.
If you package the application with setup.py, add pyspark to its dependencies:
install_requires=[
    'pyspark=={site.SPARK_VERSION}'
]
"""SimpleApp.py"""
from pyspark.sql import SparkSession
logFile = "YOUR_SPARK_HOME/README.md" # Should be some file on your system
spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
logData = spark.read.text(logFile).cache()
numAs = logData.filter(logData.value.contains('a')).count()
numBs = logData.filter(logData.value.contains('b')).count()
print("Lines with a: %i, lines with b: %i" % (numAs, numBs))
spark.stop()
Run the application:
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
--master local[4] \
SimpleApp.py
...
Lines with a: 46, Lines with b: 23
Devoted to technology and passionate about sharing. Follow the WeChat official account java大數(shù)據(jù)編程 for more technical content.