This article walks through installing and configuring Hadoop on a single machine. The steps are short and concrete; follow them in order.
1. This is a single-machine setup for now: the NameNode and DataNode run on the same host. The Hadoop version is 2.7.2; the JDK package is jdk-8u131-linux-64.rpm.
2. Install the JDK:
rpm -ivh jdk-8u111-linux-x64.rpm
3. Generate SSH keys:
ssh-keygen -t rsa
The .ssh directory is created automatically under root's home directory.
4. Append the public key to authorized_keys.
5. Fix the permissions on the key files.
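Steps 3 through 5 can be carried out with the commands below. This is a sketch assuming the tutorial's single-node setup, where root sets up passwordless SSH to its own machine; the key file names are the ssh-keygen defaults.

```shell
# Generate an RSA key pair with an empty passphrase (files go to ~/.ssh).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Append the public key so the node accepts passwordless logins from itself.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# sshd ignores keys with loose permissions: directory 700, key files 600.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# This should now log in without a password prompt.
ssh localhost exit
```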
6. Turn off the firewall.
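On a CentOS/RHEL 7 host (which the rpm-based JDK install suggests), the firewall can be stopped and kept off across reboots as follows; on older releases the service is named iptables instead. This is a sketch of one common approach, not the only one.

```shell
# Stop the firewall for the current session.
systemctl stop firewalld
# Keep it from starting again at boot.
systemctl disable firewalld
# Confirm it is no longer running.
systemctl status firewalld
```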
7. Unpack the Hadoop tarball:
tar zxf hadoop-2.7.2.tar.gz
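The paths later in this guide assume the tarball ends up under /hadoop_soft (the tutorial's chosen install root), so the extraction can target that directory directly:

```shell
# Create the install root used throughout this tutorial and extract into it.
mkdir -p /hadoop_soft
tar zxf hadoop-2.7.2.tar.gz -C /hadoop_soft
# The result is /hadoop_soft/hadoop-2.7.2.
ls /hadoop_soft
```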
8. Edit /etc/profile and append:

#java
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
#hadoop
export HADOOP_HOME=/hadoop_soft/hadoop-2.7.2
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
##export LD_LIBRARY_PATH=/hadoop_soft/hadoop-2.7.2/lib/native/:$LD_LIBRARY_PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
9. Edit the configuration files under hadoop-2.7.2/etc/hadoop/.
(1) core-site.xml: fs.defaultFS is the NameNode's name and address.

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.120:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>4320</value>
  </property>
</configuration>
(2) hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>staff</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
(3) yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.115</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.120:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.120:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.120:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.120:18141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.120:18088</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>
(4) Copy mapred-site.xml.template to mapred-site.xml, then edit it:

cp mapred-site.xml.template mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>192.168.1.120:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.1.120:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.1.120:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/jobhistory/done</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/jobhistory/done_intermediate</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>
(5) Edit slaves and add the host's IP:
192.168.1.120
(6) In hadoop-env.sh, find the JAVA_HOME line and set it to the JDK install path.
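One way to set it non-interactively; this sketch assumes the stock hadoop-env.sh line `export JAVA_HOME=${JAVA_HOME}` and the JDK path from step 2:

```shell
# Point hadoop-env.sh at the JDK explicitly; Hadoop daemons started over SSH
# do not always inherit JAVA_HOME from the login environment.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/default|' \
    /hadoop_soft/hadoop-2.7.2/etc/hadoop/hadoop-env.sh
# Verify the change took effect.
grep '^export JAVA_HOME=' /hadoop_soft/hadoop-2.7.2/etc/hadoop/hadoop-env.sh
```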
10. Format the HDFS filesystem:
hdfs namenode -format
11. Start all daemons: hadoop-2.7.2/sbin/start-all.sh
12. Verify with jps; you should see processes like the following:
6433 NameNode
6532 DataNode
7014 NodeManager
6762 SecondaryNameNode
6910 ResourceManager
7871 Jps
13. Basic Hadoop commands:
hadoop fs -mkdir /hadoop-test
hadoop fs -find / -name hadoop-test
hadoop fs -put NOTICE.txt /hadoop-test/
hadoop fs -rm -R <path>
Thanks for reading. That covers the Hadoop installation and configuration tutorial; working through the steps should give you a solid grasp of the setup, but verify each one in your own environment.