This article walks through a pseudo-distributed installation of Spark 1.0.0. It should be a useful reference for anyone interested; I hope you get something out of it.
Software to prepare:

spark-1.0.0-bin-hadoop1.tgz (download: spark1.0.0)
scala-2.10.4.tgz (download: Scala 2.10.4)
hadoop-1.2.1-bin.tar.gz (download: hadoop-1.2.1-bin.tar.gz)
jdk-7u60-linux-i586.tar.gz (download from the official site; any 1.7.x release works)

For the hadoop-1.2.1 installation steps, see: http://my.oschina.net/dataRunner/blog/292584
1. Extract the packages:

tar -zxvf scala-2.10.4.tgz
mv scala-2.10.4 scala
tar -zxvf spark-1.0.0-bin-hadoop1.tgz
mv spark-1.0.0-bin-hadoop1 spark
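The environment variables configured in the next step assume everything lives under /home/big_data. If you extracted the tarballs somewhere else, move the renamed directories there first; a minimal sketch (the /home/big_data prefix is just this walkthrough's convention, adjust to taste):

mkdir -p /home/big_data
mv scala spark /home/big_data/
ls /home/big_data      # expect: hadoop, jdk, scala, spark (and hive, if installed)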
2. Configure environment variables:

vim /etc/profile (append the following at the end)

export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/home/big_data/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASS_PATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export HADOOP_HOME=/home/big_data/hadoop
export HIVE_HOME=/home/big_data/hive
export SCALA_HOME=/home/big_data/scala
export SPARK_HOME=/home/big_data/spark
export PATH=.:$SPARK_HOME/bin:$SCALA_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
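After saving, reload the profile so the new variables take effect in the current shell, then spot-check them (a quick sanity check using only standard commands):

source /etc/profile
echo $SPARK_HOME      # should print /home/big_data/spark
java -version
scala -version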
3.修改spark的spark-env.sh文件
cd spark/conf cp spark-env.sh.template spark-env.sh vim spark-env.sh (在最后一行加入以下內(nèi)容就行) export JAVA_HOME=/home/big_data/jdk export SCALA_HOME=/home/big_data/scala export SPARK_MASTER_IP=192.168.80.100 export SPARK_WORKER_MEMORY=200m export HADOOP_CONF_DIR=/home/big_data/hadoop/conf
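Before starting any daemons, it is worth confirming that every path referenced above actually exists; a missing JAVA_HOME or HADOOP_CONF_DIR is a common reason the Master or Worker fails to start. A small sanity-check sketch using this walkthrough's paths:

for d in /home/big_data/jdk /home/big_data/scala /home/big_data/hadoop/conf; do
  [ -d "$d" ] || echo "missing: $d"
done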
And that's it, configuration is done! (It really is this simple; plenty of people know it, but too few share it.)
For the hadoop-1.2.1 testing steps, see: http://my.oschina.net/dataRunner/blog/292584
1. Verify Scala:

[root@master ~]# scala -version
Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL

[root@master big_data]# scala
Welcome to Scala version 2.10.4 (Java HotSpot(TM) Client VM, Java 1.7.0_60).
Type in expressions to have them evaluated.
Type :help for more information.

scala> 1+1
res0: Int = 2

scala> :q
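If you prefer a one-shot check without entering the REPL, scala can evaluate an expression straight from the command line (a trivial smoke test, nothing specific to this setup):

[root@master ~]# scala -e 'println(1 + 1)'
2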
2. Verify Spark (start HDFS first with start-dfs.sh):

[root@master big_data]# cd spark
[root@master spark]# sbin/start-all.sh

(You can also start the daemons separately:
[root@master spark]$ sbin/start-master.sh                       then check http://master:8080/
[root@master spark]$ sbin/start-slaves.sh spark://master:7077   then check http://master:8081/)

[root@master spark]# jps
4629 NameNode            (Hadoop)
5007 Master              (Spark)
6150 Jps
4832 SecondaryNameNode   (Hadoop)
5107 Worker              (Spark)
4734 DataNode            (Hadoop)

The Master web UI is at http://192.168.80.100:8080/.

[root@master big_data]# spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/20 21:41:04 INFO spark.SecurityManager: Changing view acls to: root
14/07/20 21:41:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
14/07/20 21:41:04 INFO spark.HttpServer: Starting HTTP Server
14/07/20 21:41:05 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/20 21:41:05 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:43343
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) Client VM, Java 1.7.0_60)
...
scala>

The application web UI is at http://192.168.80.100:4040/.

(First upload a text file containing some English words to HDFS, e.g. as /input.)

scala> val file=sc.textFile("hdfs://master:9000/input")
14/07/20 21:51:05 INFO storage.MemoryStore: ensureFreeSpace(608) called with curMem=31527, maxMem=311387750
14/07/20 21:51:05 INFO storage.MemoryStore: Block broadcast_1 stored as values to memory (estimated size 608.0 B, free 296.9 MB)
file: org.apache.spark.rdd.RDD[String] = MappedRDD[5] at textFile at <console>:12

scala> val count=file.flatMap(line=>line.split(" ")).map(word=>(word,1)).reduceByKey(_+_)
14/07/20 21:51:14 INFO mapred.FileInputFormat: Total input paths to process : 1
count: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[10] at reduceByKey at <console>:14

scala> count.collect()
14/07/20 21:51:48 INFO spark.SparkContext: Job finished: collect at <console>:17, took 2.482381535 s
res0: Array[(String, Int)] = Array((previously-registered,1), (this,3), (Spark,1), (it,3), (original,1), (than,1), (its,1), (previously,1), (have,2), (upon,1), (order,2), (whenever,1), (it’s,1), (could,3), (Configuration,1), (Master's,1), (SPARK_DAEMON_JAVA_OPTS,1), (This,2), (which,2), (applications,2), (register,,1), (doing,1), (for,3), (just,2), (used,1), (any,1), (go,1), ((equivalent,1), (Master,4), (killing,1), (time,1), (availability,,1), (stop-master.sh,1), (process.,1), (Future,1), (node,1), (the,9), (Workers,1), (however,,1), (up,2), (Details,1), (not,3), (recovered,1), (process,1), (enable,3), (spark-env,1), (enough,1), (can,4), (if,3), (While,2), (provided,1), (be,5), (mode.,1), (minute,1), (When,1), (all,2), (written,1), (store,1), (enter,1), (then,1), (as,1), (officially,1)...

scala> count.saveAsTextFile("hdfs://master:9000/output")   (saves the result under /output on HDFS)

scala> :q
Stopping spark context.

[root@master ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2014-07-18 21:10 /home
-rw-r--r--   1 root supergroup       1722 2014-07-18 06:18 /input
drwxr-xr-x   - root supergroup          0 2014-07-20 21:53 /output

[root@master ~]# hadoop fs -cat /output/p*
...
(mount,1)
(production-level,1)
(recovery).,1)
(Workers/applications,1)
(perspective.,1)
(so,2)
(and,1)
(ZooKeeper,2)
(System,1)
(needs,1)
(property Meaning,1)
(solution,1)
(seems,1)
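Once the interactive word count works, you can also confirm that job submission works outside the REPL. Spark 1.0.0 ships example programs plus the spark-submit script; below is a minimal sketch (the exact examples-jar filename under lib/ depends on the build, hence the glob):

[root@master spark]# bin/run-example SparkPi 10
[root@master spark]# bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.80.100:7077 lib/spark-examples-*.jar 10

Both should print an approximation of Pi; if the second run also shows up on the Master web UI at http://192.168.80.100:8080/, the standalone cluster is accepting applications.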
Thanks for reading this far. I hope this walkthrough of a pseudo-distributed Spark 1.0.0 installation was helpful; there is plenty more to learn on the 億速云 industry news channel.