If you hit an exception while consuming messages with Spark 2.3.1 + Kafka 0.9 in Direct mode, you may be at a loss for what to do. This article summarizes the cause of the problem and its solution; hopefully it helps you resolve the issue.
Spark 2.3.1 + Kafka: consuming messages in Direct mode

Maven dependencies:

```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.3.1</version>
</dependency>
```
Here `2.3.1` is the Spark version.
Direct mode code:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object Test {
  val zkQuorum = "mirrors.mucang.cn:2181"
  val groupId = "nginx-cg"
  val topic = Map("nginx-log" -> 1)
  val KAFKA_INTERVAL = 10

  case class NginxInof(domain: String, ip: String)

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("NginxLogAnalyze").setMaster("local[*]")
    val sparkContext = new SparkContext(sparkConf)
    val streamContext = new StreamingContext(sparkContext, Seconds(KAFKA_INTERVAL))

    val kafkaParam = Map[String, String](
      "bootstrap.servers" -> "xx.xx.cn:9092",
      "group.id" -> "nginx-cg",
      "auto.offset.reset" -> "largest"
    )
    val topic = Set("nginx-log")

    // Note: no explicit type parameters here -- this is what triggers
    // the exception discussed below.
    val kafkaStream = KafkaUtils.createDirectStream(streamContext, kafkaParam, topic)

    val counter = kafkaStream
      .map(_.toString().split(" "))
      .map(item => (item(0).split(",")(1) + "-" + item(2), 1))
      .reduceByKey((x, y) => (x + y))

    counter.foreachRDD(rdd => {
      rdd.foreach(println)
    })

    streamContext.start()
    streamContext.awaitTermination()
  }
}
```
`largest` is used because this Kafka version is too old to support `latest`.
Running the job fails at runtime with:

```
Caused by: java.lang.NoSuchMethodException: scala.runtime.Nothing$.<init>(kafka.utils.VerifiableProperties)
    at java.lang.Class.getConstructor0(Class.java:3082)
    at java.lang.Class.getConstructor(Class.java:1825)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.<init>(KafkaRDD.scala:153)
    at org.apache.spark.streaming.kafka.KafkaRDD.compute(KafkaRDD.scala:136)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    ... 3 more
```
When Kafka validates its properties, Scala's default inferred type (`Nothing`) cannot be used; you must explicitly specify Kafka's own decoder classes by writing `createDirectStream[String, String, StringDecoder, StringDecoder]`, where `StringDecoder` must be `kafka.serializer.StringDecoder`.
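Applying the fix to the stream-creation step above, the call looks like the following minimal sketch (the broker address, group id, and topic name are the placeholders from the original code, and running it still requires a reachable Kafka 0.9 broker):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.SparkConf

object TestFixed {
  def main(args: Array[String]): Unit = {
    val streamContext = new StreamingContext(
      new SparkConf().setAppName("NginxLogAnalyze").setMaster("local[*]"),
      Seconds(10))

    val kafkaParam = Map[String, String](
      "bootstrap.servers" -> "xx.xx.cn:9092",
      "group.id" -> "nginx-cg",
      "auto.offset.reset" -> "largest")

    // The four type parameters pin down the key/value types and their
    // decoders. Without them Scala infers Nothing, and KafkaRDD later
    // fails reflectively with NoSuchMethodException:
    // scala.runtime.Nothing$.<init>(kafka.utils.VerifiableProperties)
    val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      streamContext, kafkaParam, Set("nginx-log"))

    // Each record is a (key, value) pair; keep only the message value.
    kafkaStream.map(_._2).print()

    streamContext.start()
    streamContext.awaitTermination()
  }
}
```

The rest of the job (the split/`reduceByKey` aggregation) is unchanged; only the type parameters on `createDirectStream` differ from the original code.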