This article describes how to configure and start a Hadoop environment. It walks through the main configuration files in detail and then covers a recovery procedure for Hadoop and HBase, so it should be a useful reference.
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-root/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
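As a sanity check on the two checkpoint triggers above: fs.checkpoint.period is given in seconds (300 s = 5 minutes) and fs.checkpoint.size in bytes (67108864 bytes = 64 MB), so a checkpoint fires every 5 minutes, or sooner once the edit log passes 64 MB. The unit conversions can be confirmed from the shell:

```shell
# fs.checkpoint.period: 300 seconds -> minutes
echo $((300 / 60))                 # prints 5
# fs.checkpoint.size: 67108864 bytes -> MB
echo $((67108864 / 1024 / 1024))   # prints 64
```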
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:50090</value>
  </property>
</configuration>
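One consequence of dfs.replication above worth keeping in mind when sizing disks: every block is stored on three DataNodes, so raw capacity consumed is three times the logical file size. A quick illustration (the 2048 MB file size here is a hypothetical figure, not from the article):

```shell
logical_mb=2048   # hypothetical 2 GB file
replication=3     # dfs.replication from hdfs-site.xml above
# raw cluster capacity consumed = logical size * replication factor
echo $((logical_mb * replication))   # prints 6144 (MB)
```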
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.maps</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.http.threads</name>
    <value>50</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx400m</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
</configuration>
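Two of the sort settings above interact: mapreduce.task.io.sort.mb gives each map task a 200 MB in-memory sort buffer, and mapreduce.map.sort.spill.percent of 0.8 means a background spill to disk begins once that buffer is 80% full, i.e. at 160 MB. (Note the 200 MB buffer must fit inside the -Xmx400m task heap set by mapred.child.java.opts.) The spill threshold works out as:

```shell
# spill threshold = sort buffer size * spill percent (values from mapred-site.xml above)
awk 'BEGIN { print 200 * 0.8 " MB" }'   # prints "160 MB"
```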
I. Recovering Hadoop
1. Stop all Hadoop services.
2. Delete the data and name directories under /home/hadoop/hadoop-root/dfs, then recreate them.
3. Delete the files under /home/hadoop/hadoop-root/tmp.
4. On the NameNode, run hadoop namenode -format.
5. Start the Hadoop services.
----- Hadoop is now recovered -----
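Steps 1–5 can be sketched as a script run on the NameNode. This is only a sketch under assumptions: it presumes the stock stop-all.sh/start-all.sh wrappers are on the PATH and uses the directory layout from the configuration files above. It destroys all HDFS metadata and data, so adapt the paths and double-check before running anything like it.

```shell
#!/bin/sh
# WARNING: destructive sketch -- wipes all HDFS metadata and data
stop-all.sh                                         # 1. stop all Hadoop services
rm -rf /home/hadoop/hadoop-root/dfs/data \
       /home/hadoop/hadoop-root/dfs/name            # 2. delete the data and name dirs...
mkdir -p /home/hadoop/hadoop-root/dfs/data \
         /home/hadoop/hadoop-root/dfs/name          #    ...and recreate them
rm -rf /home/hadoop/hadoop-root/tmp/*               # 3. clear the tmp directory
hadoop namenode -format                             # 4. reformat (on the NameNode only)
start-all.sh                                        # 5. restart Hadoop
```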
6. Stop the HBase services; if they will not stop, kill the processes.
7. On every node, delete all files under /tmp/hbase-root/zookeeper.
8. Start the HBase services.
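Steps 6–8 can likewise be sketched, assuming the stock stop-hbase.sh/start-hbase.sh scripts and the /tmp/hbase-root/zookeeper path from step 7; the kill is only a last resort for daemons that hang on shutdown.

```shell
#!/bin/sh
# Sketch of the HBase side of the recovery
stop-hbase.sh                        # 6. stop HBase...
# ...if it hangs, kill the remaining daemons as a last resort:
# kill -9 $(jps | awk '/HMaster|HRegionServer/ { print $1 }')
rm -rf /tmp/hbase-root/zookeeper/*   # 7. clear stale ZooKeeper state (run on every node)
start-hbase.sh                       # 8. start HBase again
```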