How do you run KafkaWordCount? This article walks through the analysis and the answer in detail, in the hope of giving anyone facing the same problem a simpler, more practical path.
Overview
Developing Spark applications is a highly hands-on affair: much of your time can easily disappear into setting up and running the environment, so good guidance shortens the development cycle considerably. Spark Streaming integrates with many third-party systems, and the documentation on how to actually get the bundled examples running is sparse and thin.
This post explains how to run KafkaWordCount. That requires standing up a Kafka cluster first, so the steps below are spelled out in as much detail as possible.
Setting up a Kafka cluster
Step 1: Download and unpack Kafka 0.8.1
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.1.1/kafka_2.10-0.8.1.1.tgz
tar zvxf kafka_2.10-0.8.1.1.tgz
cd kafka_2.10-0.8.1.1
Step 2: Start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
Step 3: Edit config/server.properties and add the following
host.name=localhost
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=localhost
Step 4: Start the Kafka server
bin/kafka-server-start.sh config/server.properties
Step 5: Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Verify that the topic was created:
bin/kafka-topics.sh --list --zookeeper localhost:2181
If everything is fine, this returns test.
Step 6: Start a producer and send messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
## Once the producer is up, type the following lines to test
This is a message
This is another message
Step 7: Start a consumer and receive messages
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
### Once the consumer is up, if everything is working it will echo what was typed on the producer side
This is a message
This is another message
Running KafkaWordCount
The KafkaWordCount source lives at examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala.
It does include usage notes (reproduced below), but without some prior knowledge of Kafka it is hard to tell what the parameters mean or how to fill them in.
/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 *
 * Example:
 *    `$ bin/run-example \
 *      org.apache.spark.examples.streaming.KafkaWordCount zoo01,zoo02,zoo03 \
 *      my-consumer-group topic1,topic2 1`
 */
object KafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(zkQuorum, group, topics, numThreads) = args
    val sparkConf = new SparkConf().setAppName("KafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")

    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Minutes(10), Seconds(2), 2)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
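The topic map built near the top of main simply pairs every topic name with the same consumer thread count. A minimal Python sketch of that one transformation (the topic string and thread count below are illustrative values, not from the article):

```python
def build_topic_map(topics: str, num_threads: int) -> dict:
    """Mimic topics.split(",").map((_, numThreads.toInt)).toMap
    from the Scala example: each topic gets num_threads threads."""
    return {topic: num_threads for topic in topics.split(",")}

print(build_topic_map("topic1,topic2", 1))
```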
Now let's see how to actually run KafkaWordCount.
Step 1: Stop the kafka-console-producer and kafka-console-consumer started earlier
Step 2: Run KafkaWordCountProducer
bin/run-example org.apache.spark.examples.streaming.KafkaWordCountProducer localhost:9092 test 3 5
To explain the parameters: localhost:9092 is the address and port of the Kafka broker the producer sends to, test is the topic, 3 is the number of messages sent per second, and 5 is the number of words per message.
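As a rough illustration of what those last two numbers control, here is a hedged Python sketch of the producer's message generation. The real KafkaWordCountProducer sends random-digit "words" to Kafka; this sketch only builds the message strings, and the function name is mine:

```python
import random

def make_messages(messages_per_sec: int, words_per_message: int) -> list:
    """Build one second's worth of messages: each message is
    words_per_message random single-digit 'words' joined by spaces,
    mirroring the shape of KafkaWordCountProducer's output."""
    return [
        " ".join(str(random.randint(0, 9)) for _ in range(words_per_message))
        for _ in range(messages_per_sec)
    ]

# 3 messages per second, 5 words each, as in the example run above
for msg in make_messages(3, 5):
    print(msg)
```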
Step 3: Run KafkaWordCount
bin/run-example org.apache.spark.examples.streaming.KafkaWordCount localhost:2181 test-consumer-group test 1
To explain the parameters: localhost:2181 is the ZooKeeper listen address; test-consumer-group is the consumer group name, which must match the group.id setting in $KAFKA_HOME/config/consumer.properties; test is the topic; and 1 is the number of consumer threads.
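KafkaWordCount counts words over a sliding 10-minute window that advances every 2 seconds, and the inverse function (_ - _) passed to reduceByKeyAndWindow lets Spark subtract the counts of batches leaving the window instead of recomputing the whole window each time. A simplified Python sketch of that incremental-window idea (the batch data is made up, and this is not Spark's implementation):

```python
from collections import Counter, deque

def windowed_counts(batches, window_len):
    """Maintain word counts over the last window_len batches incrementally:
    fold each arriving batch in, and subtract the batch that falls out of
    the window, like reduceByKeyAndWindow(_ + _, _ - _)."""
    window = deque()
    counts = Counter()
    results = []
    for batch in batches:
        counts.update(batch.split())      # _ + _ : add the new batch's counts
        window.append(batch)
        if len(window) > window_len:
            expired = window.popleft()
            counts.subtract(expired.split())  # _ - _ : remove the expired batch
        results.append({w: c for w, c in counts.items() if c > 0})
    return results

print(windowed_counts(["a b a", "b c", "a a"], window_len=2))
```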