Prerequisites:
Make sure iptables is turned off and SELinux is disabled on all machines.
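On a RHEL/CentOS-style system, for example, this can be done as follows (adjust for your distribution; setenforce 0 only lasts until reboot, the config edit makes it permanent):
service iptables stop
chkconfig iptables off
setenforce 0
vim /etc/selinux/config
SELINUX=disabled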
1. Prepare the hardware
One namenode and three datanodes:
namenode 192.168.137.100
datanode1 192.168.137.101
datanode2 192.168.137.102
datanode3 192.168.137.103
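The configuration files below refer to the machines by hostname, so every node must be able to resolve these names. A simple way (assuming no DNS) is to add the mapping to /etc/hosts on all 4 machines:
vim /etc/hosts
192.168.137.100 namenode
192.168.137.101 datanode1
192.168.137.102 datanode2
192.168.137.103 datanode3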
2. Create a hadoop user on all 4 machines (any other username works too)
useradd hadoop
3. Install JDK 1.6 on all 4 machines
After installation, JAVA_HOME is /jdk
Configure the environment variable:
vim /etc/bashrc
export JAVA_HOME=/jdk
scp -r /jdk* datanode1:/
scp -r /jdk* datanode2:/
scp -r /jdk* datanode3:/
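To pick up the new variable and confirm the JDK landed on each node, something like:
source /etc/bashrc
/jdk/bin/java -version
ssh datanode1 /jdk/bin/java -version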
4. Set up passwordless SSH trust among the 4 machines
Be sure to set /home/hadoop/.ssh and everything under it to permission mode 700 on every node.
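A minimal sketch of the trust setup, run as the hadoop user on the namenode (the start scripts need at least namenode-to-every-node trust, including to itself; repeat on the other nodes if you want full mutual trust):
su - hadoop
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id hadoop@namenode
ssh-copy-id hadoop@datanode1
ssh-copy-id hadoop@datanode2
ssh-copy-id hadoop@datanode3
chmod -R 700 ~/.ssh
ssh datanode1 date
(the last command should print the date without asking for a password)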
5. Install hadoop
tar zxvf hadoop-1.0.4.tar.gz
Install into /hadoop
Set the permissions of /hadoop to 755
vim /hadoop/conf/hadoop-env.sh
export JAVA_HOME=/jdk
vim /hadoop/conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:9000</value>
</property>
vim /hadoop/conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>namenode:9001</value>
</property>
vim /hadoop/conf/hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
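The three local paths above (hadoop.tmp.dir, dfs.name.dir, dfs.data.dir) should exist and be writable by the hadoop user on every node before the first format; for example:
su - hadoop
mkdir -p /home/hadoop/tmp /home/hadoop/name /home/hadoop/data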
vim /hadoop/conf/masters
192.168.137.100
vim /hadoop/conf/slaves
192.168.137.101
192.168.137.102
192.168.137.103
Copy the configured hadoop to each datanode:
cd /
scp -r hadoop datanode1:/hadoop
scp -r hadoop datanode2:/hadoop
scp -r hadoop datanode3:/hadoop
6. Install zookeeper
tar zxvf zookeeper-3.3.4.tar.gz
Install into /zookeeper
cd /zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Add the following:
dataDir=/zookeeper-data
dataLogDir=/zookeeper-log
server.1=namenode:2888:3888
server.2=datanode1:2888:3888
server.3=datanode2:2888:3888
server.4=datanode3:2888:3888
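For reference, zoo_sample.cfg already supplies the base settings (in 3.3.4 these are tickTime=2000, initLimit=10, syncLimit=5, clientPort=2181, plus a dataDir=/tmp/zookeeper line that must be removed or overridden), so the resulting zoo.cfg should look roughly like:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/zookeeper-data
dataLogDir=/zookeeper-log
server.1=namenode:2888:3888
server.2=datanode1:2888:3888
server.3=datanode2:2888:3888
server.4=datanode3:2888:3888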
Create /zookeeper-data
mkdir /zookeeper-data
Create /zookeeper-log
mkdir /zookeeper-log
Create the file /zookeeper-data/myid
vim /zookeeper-data/myid
1
(on datanode1 write 2 instead)
(on datanode2 write 3)
(on datanode3 write 4)
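ZooKeeper still has to be started by hand; once zoo.cfg and myid are in place on all 4 machines, run on each node:
/zookeeper/bin/zkServer.sh start
/zookeeper/bin/zkServer.sh status
(status should report one leader and three followers across the cluster)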
10. Install hive
tar zxvf hive-0.8.0.tar.gz
Install into /hive
vim /hive/bin/hive-config.sh
export HADOOP_HOME=/hadoop
export PATH=.:$HADOOP_HOME/bin:$PATH
export HIVE_HOME=/hive
export PATH=$HIVE_HOME/bin:$PATH
export JAVA_HOME=/jdk
export JRE_HOME=/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=.:$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
vim /etc/bashrc
export HIVE_HOME=/hive
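Once HDFS is up (step 11 below), a quick smoke test of the Hive CLI could be:
source /etc/bashrc
hive -e "show tables;"
(by default this uses the embedded Derby metastore, creating a metastore_db directory in the current working directory)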
11. Start hadoop
Format HDFS and start the daemons:
su hadoop
cd /hadoop/bin
./hadoop namenode -format
./start-dfs.sh
./start-mapred.sh
View the HDFS NameNode at http://192.168.137.100:50070
View the MapReduce JobTracker at http://192.168.137.100:50030
View the TaskTracker on datanode1 at http://192.168.137.101:50060
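Besides the web UIs, the running daemons can be checked with jps (shipped with the JDK): the namenode should show NameNode, SecondaryNameNode and JobTracker; each datanode should show DataNode and TaskTracker.
/jdk/bin/jps
ssh datanode1 /jdk/bin/jps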
12. Common HDFS commands
hadoop fs -mkdir direc
hadoop fs -ls
hadoop fs -cp file:///tmp/test.file /user/hadoop/direc
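A short round trip built from the same commands, copying a local file into HDFS and reading it back (the file name is only an example):
hadoop fs -put /tmp/test.file /user/hadoop/direc/
hadoop fs -cat /user/hadoop/direc/test.file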