This post walks through setting up a Hadoop environment on SUSE; I hope you find something useful in it. Let's dig in!
【Environment】:
I have often been burned by version mismatches between dependencies, and this time I got careless: assuming Java would not be a problem, I went ahead with the OpenJDK 1.6 that had been installed through YaST. Predictably, things broke in many places. After repeated debugging and searching, a friend's suggestion prompted me to switch JDK versions, which fixed the problem, so here is the environment I ended up with:
Java: java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
OS: openSUSE 11.2 (x86_64)
Hadoop: Hadoop-1.1.2.tar.gz
【Step1:】Create the hadoop user and group
Group: hadoop
User: hadoop -> /home/hadoop
Grant sudo rights: vi /etc/sudoers and add the line hadoop ALL=(ALL:ALL) ALL
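As a concrete sketch (the exact commands are my assumption; the original only lists the end result), the group and user can be created as root like this:
groupadd hadoop                               # create the hadoop group
useradd -m -g hadoop -d /home/hadoop hadoop   # create the user with a home directory
passwd hadoop                                 # set a password for the new user
Using visudo instead of plain vi is safer for the sudoers change, since it syntax-checks the file before saving.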
【Step2:】Install Hadoop
After unpacking with tar xf, my directory layout looked like this (for reference):
/home/hadoop/hadoop-home/[bin|conf]
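A minimal sketch of the unpack step, assuming the tarball sits in /home/hadoop (the download location and the rename are my assumptions):
cd /home/hadoop
tar xf hadoop-1.1.2.tar.gz      # unpack the distribution
mv hadoop-1.1.2 hadoop-home     # rename to match the layout above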
【Step3:】Configure SSH (so starting Hadoop does not prompt for passwords)
Install ssh (details omitted)
ssh-keygen -t rsa -P "" [press Enter through all prompts]
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Try ssh localhost [verify that no password is required]
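If ssh still asks for a password, overly permissive file modes are a common culprit: sshd refuses keys whose files are group- or world-writable. Tightening the permissions (my addition, not in the original) usually fixes it:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys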
【Step4:】Install Java
See the 【Environment】 section for the version
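One way to install it, sketched under the assumption of the Oracle JDK 7u51 tarball (the filename and the /usr/java install path are my assumptions):
mkdir -p /usr/java
tar xf jdk-7u51-linux-x64.tar.gz -C /usr/java   # unpack the JDK
/usr/java/jdk1.7.0_51/bin/java -version         # verify the version string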
【Step5:】Configure conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17xxx #[your JDK directory]
export HADOOP_INSTALL=/home/hadoop/hadoop-home
export PATH=$PATH:$HADOOP_INSTALL/bin #[directory containing the hadoop scripts]
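Note that hadoop-env.sh is sourced by the Hadoop launch scripts, not by your login shell, so to run hadoop directly from the command line (as the next step does) you may also want the same exports in your shell profile. A sketch assuming bash:
echo 'export HADOOP_INSTALL=/home/hadoop/hadoop-home' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_INSTALL/bin' >> ~/.bashrc
source ~/.bashrc
hadoop version   # should now work from any directory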
【Step6:】Run in standalone mode
hadoop version
mkdir input
man find > input/test.txt
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
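In standalone mode the job runs against the local filesystem, so the result can be inspected directly (part-r-00000 is the usual reducer output filename, an assumption on my part):
ls output                  # should contain _SUCCESS and part-r-00000
head output/part-r-00000   # word counts from the man page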
【Step7:】Pseudo-distributed mode (namenode, datanode, jobtracker, tasktracker, etc. all on a single machine)
Edit conf/[core-site.xml, hdfs-site.xml, mapred-site.xml]
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
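The configs above point at directories under /usr/local/hadoop that do not exist yet; creating them up front, owned by the hadoop user, avoids permission errors at startup. The exact commands are my assumption:
sudo mkdir -p /usr/local/hadoop/tmp /usr/local/hadoop/datalog1 /usr/local/hadoop/datalog2 /usr/local/hadoop/data1 /usr/local/hadoop/data2
sudo chown -R hadoop:hadoop /usr/local/hadoop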
【Step8:】Start it up
Format the namenode: hadoop namenode -format
cd bin
sh start-all.sh
hadoop@linux-peterguo:~/hadoop-home/bin> sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-namenode-linux-peterguo.out
localhost: starting datanode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-datanode-linux-peterguo.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-secondarynamenode-linux-peterguo.out
starting jobtracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-jobtracker-linux-peterguo.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-tasktracker-linux-peterguo.out
Use jps to check that all five Java processes are running: jobtracker / tasktracker / namenode / datanode / secondarynamenode
You can also verify that the services are healthy through Hadoop's built-in web interfaces for monitoring cluster status:
http://localhost:50030/ - Hadoop management interface (JobTracker)
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS status
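For a quick headless check that the UIs respond (the curl invocation is my addition):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/   # expect 200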
【Step9:】Work with files on DFS
hadoop dfs -mkdir input
hadoop dfs -copyFromLocal input/test.txt input
hadoop dfs -ls input
【Step10:】Run the MapReduce job against DFS
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
hadoop dfs -cat output/*
【Step11:】Shut down
stop-all.sh
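Afterwards, jps should show none of the five daemons; a quick check:
jps   # only the Jps process itself should remain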