This article explains how to set up a fully distributed Hadoop and HBase cluster on Ubuntu. The walkthrough is kept simple and step by step, so you can follow along from beginning to end.
Environment
This article uses VMware® Workstation 12 Pro to create and install three Ubuntu 16.04 virtual machines, named master, slave1, and slave2, which serve as the NameNode, a DataNode, and a DataNode respectively.
During installation the three systems are configured almost identically, apart from a few per-host settings (for example, the node's hostname):
192.168.190.128 master
192.168.190.129 slave1
192.168.190.131 slave2
Installing and configuring Hadoop on the Linux virtual machines
Note that every configuration step below must be applied to all three Ubuntu systems, and the settings are essentially the same on each. To keep them consistent, configure one machine first and then scp the relevant files to the other machines.
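For example, once /etc/hosts is finished on master, it can be pushed out to the other two nodes. A minimal sketch, not from the original write-up (it assumes the hadoop user exists everywhere with sudo rights, and stages the file through /tmp because scp cannot write to /etc directly):

for host in slave1 slave2; do
  scp /etc/hosts hadoop@$host:/tmp/hosts                 # stage in a writable location
  ssh -t hadoop@$host 'sudo mv /tmp/hosts /etc/hosts'    # move into place (prompts for the sudo password)
done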
Installing the virtual machines themselves is not the focus of this article, so it is not covered here.
Before installing Hadoop on Linux, two pieces of software are required:
1) JDK 1.6 or later; this article uses JDK 1.7. Hadoop is a Java program: compiling Hadoop and running MapReduce both need a JDK, so JDK 1.6 or later must be installed first.
2) SSH (secure shell); OpenSSH is recommended. Hadoop starts the daemons on each machine in its slave list over SSH, so SSH is required even for a pseudo-distributed installation (Hadoop does not really distinguish between cluster and pseudo-distributed modes). In pseudo-distributed mode Hadoop behaves exactly as on a cluster, starting the processes on the hosts recorded in conf/slaves in order; the only difference is that the slave is localhost (the machine itself), so SSH is needed there as well.
Deployment steps
Add a hadoop user and grant it the necessary rights. All of the Hadoop and HBase installation work below is done as the hadoop user, so the Hadoop files' permissions and ownership must be assigned to the hadoop user.
1. On every virtual machine, add the hadoop user and add it to sudoers:
sudo adduser hadoop
sudo gedit /etc/sudoers
Find the user privilege section and add the hadoop line as follows:
# User privilege specification
root    ALL=(ALL:ALL) ALL
hadoop  ALL=(ALL:ALL) ALL
2. Switch to the hadoop user:
su hadoop
3. Change the hostname in /etc/hostname to master
Naturally this applies to the master VM; set the other two VMs to slave1 and slave2 respectively.
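On the master VM, for instance, the change looks like this (a sketch; /etc/hostname holds nothing but the machine's name, and the new name takes effect after a reboot):

sudo gedit /etc/hostname    # replace the contents with: master
sudo reboot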
4. Edit /etc/hosts
127.0.0.1       localhost
127.0.1.1       localhost.localdomain localhost
192.168.190.128 master
192.168.190.129 slave1
192.168.190.131 slave2

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
5. Install JDK 1.7
(1) Download and install JDK 1.7
jdk-7u76-linux-x64.tar.gz
Extract it with tar:
tar -zxvf jdk-7u76-linux-x64.tar.gz
Move the extracted files to the JDK installation directory; in this article that directory is /usr/lib/jvm/jdk1.7.0_76.
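A sketch of that move, assuming the archive was unpacked in the current directory:

sudo mkdir -p /usr/lib/jvm
sudo mv jdk1.7.0_76 /usr/lib/jvm/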
(2) Configure the environment variables
Run:
sudo gedit /etc/profile
Enter your password to open the profile file, then add the following at the very end:
# set Java environment
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:/home/hadoop/hadoop-2.7.1/bin:/home/hadoop/hadoop-2.7.1/sbin:/home/hadoop/hbase-1.2.4/bin:$PATH
Note that /etc/profile may currently be read-only. You can make it writable with:

sudo chmod 777 /etc/profile

The lines above already include the Hadoop and HBase paths that will be used later in this article.
The point of this step is to configure the environment variables so that the system can find the JDK.
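Note that /etc/profile is only read by newly started login shells; to load the new variables into the current session, run:

source /etc/profile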
(3) Verify that the JDK installed successfully
Run:
java -version
You should see JDK version information like the following:
java version "1.7.0_76"
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
Even when the version information above appears (the PATH set in /etc/profile takes care of that), the JDK you installed may not yet be registered as the Ubuntu system default, so the next step is to set it as the default by hand.
(4) Manually set the system default JDK
In a terminal, run the following commands one after another:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_76/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.7.0_76/bin/javac 300
sudo update-alternatives --config java
After that, java -version shows the version information of the JDK you installed.
Install VMware Tools on all three virtual machines to make copying and pasting between host and guests easier.
6. Configure passwordless SSH login
(1) Make sure the machine is connected to the Internet, then run:
sudo apt-get install ssh
(2) Configure the master, slave1, and slave2 nodes so they can reach each other over SSH without a password
Note that all of the following is done as the hadoop user.
First check whether a .ssh folder exists under the hadoop user's home directory (note the leading ".": it is a hidden folder) by running:
ls -a -l
You should get something like:
drwxr-xr-x  9 root   root 4096 Feb  1 02:41 .
drwxr-xr-x  4 root   root 4096 Jan 27 01:50 ..
drwx------  3 root   root 4096 Jan 31 03:35 .cache
drwxr-xr-x  5 root   root 4096 Jan 31 03:35 .config
drwxrwxrwx 11 hadoop root 4096 Feb  1 00:18 hadoop-2.7.1
drwxrwxrwx  8 hadoop root 4096 Feb  1 02:47 hbase-1.2.4
drwxr-xr-x  3 root   root 4096 Jan 31 03:35 .local
drwxr-xr-x  2 root   root 4096 Jan 31 14:47 software
drwxr-xr-x  2 hadoop root 4096 Feb  1 00:01 .ssh
Installing SSH normally creates this hidden folder for the current user automatically; if it is missing, create it by hand:
sudo mkdir .ssh
Note that .ssh must be owned by the hadoop user; if it belongs to root, fix it with:
sudo chown -R hadoop .ssh
Next, run:
ssh-keygen -t rsa
(prefix it with sudo if you run into a permission error).
When it finishes you will see the key's randomart image, and two files are created under .ssh: id_rsa and id_rsa.pub. Append the public key to the authentication file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
In Ubuntu, ~ stands for the current user's home directory, here /home/hadoop.
This command appends the public key to the file used for authentication; authorized_keys is that authentication file.
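One caveat the original steps do not mention: sshd silently ignores keys whose files are too permissive. If passwordless login keeps prompting for a password later on, tightening the permissions is a common fix:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys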
Then open the file with:
sudo gedit authorized_keys
and paste in the keys generated on the other virtual machines. For example, once the master host's hadoop user has generated its key, append the keys generated on the other hosts to the end of master's authorized_keys file, so that master holds the keys of slave1's and slave2's hadoop users as well.
The result looks like this:
Do not copy the keys below — they are only an example and will not work for you. Use the keys generated on your own three virtual machines.
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqc743ocp2voa3dehbka+n7cyjc4jv2tj8z6tgvwcxg0njl3ykwyifgc9riyfyrwcl5byi34oe7dytf+9utvh85hca1/idp1m02nlpxsijmcps4ungmlfswg/f/c3bqut7i4t6ehwo/frhjeibu5o/9ghoxk/ykhgjibyh8hhalcke6jtt80i63r2+3dnlhlnzw1sqrjp2qfrgyv61j5dfuyrhfd+/etkftxc7izlvckc7x6hmo4qimq0gbsx9iqto0to1skgylhcx3cbo3hf4i19rukt168eg/x2l1qivf+vgxqudm3lza9/pxdiek5p8c8xupcaor67jmflwll3eub hadoop@master
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqdq1jf6ds9y+klqnihq+pdgxm1osf+rsxcglddlzw+qgk7nt28brk6qucm3kjqa/ekekqdhdwegtiqvriosy4a2fabkrsjiornc4qyq/rqb06juvshwtob91qwmv/j/o3mgsentjlfmbupsyw8rrxqv+tytqq+gipl7x0wgubrqyrhjjzkaxqglge3md/siyjn8ge4g31rrtcx9qdvcftcthkvqca0b0f98y+u9fu6w4ari28olxftlzucsebipmze4uwquxt+2kmz0hunpejsdrlkrfqo1okus0pezruvrmyby5flt4tnv0xoqbyclzxieev/ppgh8aeb4qs/zxb25 hadoop@slave1
ssh-rsa aaaab3nzac1yc2eaaaadaqabaaabaqdi8ppgxt94saetuhvt2jmlo4ed11r1wlon1eha5vi3qqm7cgt4ys7lvxl53dc5g7r0n4jwsf2htvd9jf77veixp5g3xqga7hafbimzqupucyahqy+v0rtepabungkfz0ukv+nq8bzjfsuv4hgrorw7yzqaa0ljevhii8uvza7dcz6ba1on/tlkvvzz3mdzulcn7+azjtptg8hpqaelqqws1uuiyiuanosqfpcadart/pjpazgkqek0lbrsvi+u+p0osrz9ax3wvouqknheinm4tmuo3tgyionjev1jqrocxbbzaeqllwnpa0yzbl/zmnjhkesitypmgzwszh3ylc8p hadoop@slave2
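As an alternative to pasting keys around by hand, ssh-copy-id can append them over the network. A sketch under the same assumptions (hostnames and hadoop users as above): run the first command on slave1 and on slave2, then the two scp commands on master to push the merged file back out:

ssh-copy-id hadoop@master                           # on each slave: append its key to master's authorized_keys
scp ~/.ssh/authorized_keys hadoop@slave1:~/.ssh/    # on master: distribute the merged key file
scp ~/.ssh/authorized_keys hadoop@slave2:~/.ssh/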
At this point passwordless login between the hosts is configured.
(3) Verify that SSH is installed and that passwordless login to the local host works.
Run:
ssh -V
which prints:
OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g  1 Mar 2016
Then run:
ssh localhost
You should see something like:
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

458 packages can be updated.
171 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Last login: Wed Feb  1 00:02:53 2017 from 127.0.0.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
This shows the installation succeeded. On the first login you will be asked whether to continue connecting; type yes to proceed.
Strictly speaking, passwordless login is not essential for installing Hadoop, but without it every Hadoop start-up would ask for a password to log in to each machine's DataNode. Since real Hadoop clusters commonly have hundreds or even thousands of machines, passwordless SSH is configured virtually always.
The master node can now reach the slave1 and slave2 nodes without a password:
ssh slave1
Result:
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-59-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

312 packages can be updated.
10 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Last login: Wed Feb  1 00:03:30 2017 from 192.168.190.131
No password is needed. If you are asked for one, the configuration did not succeed; go back and check which step went wrong.
Installing and running Hadoop
Before describing the installation, a word about the roles Hadoop assigns to its nodes.
Hadoop divides the hosts into two roles from three different angles. First, the most basic division is into master and slave. Second, from the HDFS point of view, hosts are divided into NameNode and DataNode (in a distributed file system the management of the directory tree is crucial, and the NameNode is that directory manager). Third, from the MapReduce point of view, hosts are divided into JobTracker and TaskTracker (a job is usually split into several tasks, which makes the relationship between the two easy to see).
Hadoop has three run modes: standalone, pseudo-distributed, and fully distributed. At first glance the first two do not show off the advantages of distributed computing, but they make testing and debugging programs easy, so they still have their uses.
My blog already covers the standalone and pseudo-distributed modes, so this article concentrates on the fully distributed configuration.
(1) In the hadoop user's home directory, unpack the downloaded hadoop-2.7.1.tar.gz
Extract it with:
tar -zxvf hadoop-2.7.1.tar.gz
Note that everything below is done as the hadoop user; in other words the owner of hadoop-2.7.1 is hadoop, as shown here:
total 120
drwxr-xr-x 19 hadoop hadoop 4096 Feb  1 02:28 .
drwxr-xr-x  4 root   root   4096 Jan 31 14:24 ..
-rw-------  1 hadoop hadoop 1297 Feb  1 03:37 .bash_history
-rw-r--r--  1 hadoop hadoop  220 Jan 31 14:24 .bash_logout
-rw-r--r--  1 hadoop hadoop 3771 Jan 31 14:24 .bashrc
drwx------  3 root   root   4096 Jan 31 22:49 .cache
drwx------  5 root   root   4096 Jan 31 23:59 .config
drwx------  3 root   root   4096 Jan 31 23:59 .dbus
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Desktop
-rw-r--r--  1 hadoop hadoop   25 Feb  1 00:55 .dmrc
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Documents
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Downloads
-rw-r--r--  1 hadoop hadoop 8980 Jan 31 14:24 examples.desktop
drwx------  2 hadoop hadoop 4096 Feb  1 00:56 .gconf
drwx------  3 hadoop hadoop 4096 Feb  1 00:55 .gnupg
drwxrwxrwx 11 hadoop hadoop 4096 Feb  1 00:30 hadoop-2.7.1
drwxrwxrwx  8 hadoop hadoop 4096 Feb  1 02:44 hbase-1.2.4
-rw-------  1 hadoop hadoop  318 Feb  1 00:56 .ICEauthority
drwxr-xr-x  3 root   root   4096 Jan 31 22:49 .local
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Music
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Pictures
-rw-r--r--  1 hadoop hadoop  675 Jan 31 14:24 .profile
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Public
drwx------  2 hadoop hadoop 4096 Feb  1 00:02 .ssh
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Templates
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Videos
-rw-------  1 hadoop hadoop   51 Feb  1 00:55 .Xauthority
-rw-------  1 hadoop hadoop 1492 Feb  1 00:58 .xsession-errors
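If the directory came out owned by root instead, it can be handed over to the hadoop user (a sketch):

sudo chown -R hadoop:hadoop ~/hadoop-2.7.1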
(2) Configure the Hadoop environment variables
sudo gedit /etc/profile
Configure them as follows:
# set Java environment
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:/home/hadoop/hadoop-2.7.1/bin:/home/hadoop/hadoop-2.7.1/sbin:/home/hadoop/hbase-1.2.4/bin:$PATH
(3) Configure the Hadoop files on all three hosts with the following contents.
conf/hadoop-env.sh:
/home/hadoop/hadoop-2.7.1/etc/hadoop
You can locate this file with Ubuntu's file search tool; it lives in the directory shown above. Set the following in it:
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:/home/hadoop/hadoop-2.7.1/bin
conf/core-site.xml
/home/hadoop/hadoop-2.7.1/etc/hadoop
<?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- licensed under the apache license, version 2.0 (the "license"); you may not use this file except in compliance with the license. you may obtain a copy of the license at http://www.apache.org/licenses/license-2.0 unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. see the license for the specific language governing permissions and limitations under the license. see accompanying license file. --> <!-- put site-specific property overrides in this file. --> <configuration> <property> <name>fs.default.name</name> <value>hdfs://master:9000</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/tmp</value> </property> </configuration>
conf/hdfs-site.xml
/home/hadoop/hadoop-2.7.1/etc/hadoop
<?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- licensed under the apache license, version 2.0 (the "license"); you may not use this file except in compliance with the license. you may obtain a copy of the license at http://www.apache.org/licenses/license-2.0 unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. see the license for the specific language governing permissions and limitations under the license. see accompanying license file. --> <!-- put site-specific property overrides in this file. --> <configuration> <property> <name>dfs.replication</name> <value>2</value> </property> </configuration>
conf/mapred-site.xml
/home/hadoop/hadoop-2.7.1/etc/hadoop
Searching turns up no such file; you need to copy the contents of mapred-site.xml.template into mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
Then configure it as follows:
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- licensed under the apache license, version 2.0 (the "license"); you may not use this file except in compliance with the license. you may obtain a copy of the license at http://www.apache.org/licenses/license-2.0 unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. see the license for the specific language governing permissions and limitations under the license. see accompanying license file. --> <!-- put site-specific property overrides in this file. --> <configuration> <property> <name>mapred.job.tracker</name> <value>master:9001</value> </property> </configuration>
conf/masters
/home/hadoop/hadoop-2.7.1/etc/hadoop
The file does not exist by default; create a masters file by hand with the following content:
master
conf/slaves:
slave1
slave2
(4) Copy the entire hadoop-2.7.1 directory to the same location on the slave1 and slave2 nodes
From the hadoop user's home directory on the hadoop@master node, run:
scp -r hadoop-2.7.1 hadoop@slave1:~/
scp -r hadoop-2.7.1 hadoop@slave2:~/
(5) Start Hadoop
On the hadoop@master node, run:
hadoop@master:~$ hadoop namenode -format
If you get the message:
hadoop: command not found
you need to source the environment file:
source /etc/profile
The result looks like this (shown abridged):
hadoop@master:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/02/02 02:59:44 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.190.128
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.1/etc/hadoop:... [long list of bundled jars omitted]
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/02/02 02:59:44 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/02/02 02:59:44 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-ef219bd8-5622-49d9-b501-6370f3b5fc73
17/02/02 03:00:03 INFO namenode.FSNamesystem: No KeyProvider found.
17/02/02 03:00:03 INFO namenode.FSNamesystem: fsLock is fair:true
17/02/02 03:00:04 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/02/02 03:00:04 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/02/02 03:00:04 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/02/02 03:00:04 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Feb 02 03:00:04
17/02/02 03:00:04 INFO util.GSet: Computing capacity for map BlocksMap
17/02/02 03:00:04 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:04 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/02/02 03:00:04 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/02/02 03:00:04 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: defaultReplication         = 2
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxReplication             = 512
17/02/02 03:00:04 INFO blockmanagement.BlockManager: minReplication             = 1
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/02/02 03:00:04 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/02/02 03:00:04 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/02/02 03:00:04 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/02/02 03:00:04 INFO namenode.FSNamesystem: supergroup          = supergroup
17/02/02 03:00:04 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/02/02 03:00:04 INFO namenode.FSNamesystem: HA Enabled: false
17/02/02 03:00:04 INFO namenode.FSNamesystem: Append Enabled: true
17/02/02 03:00:05 INFO util.GSet: Computing capacity for map INodeMap
17/02/02 03:00:05 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:05 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/02/02 03:00:05 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/02/02 03:00:05 INFO namenode.FSDirectory: ACLs enabled? false
17/02/02 03:00:05 INFO namenode.FSDirectory: XAttrs enabled? true
17/02/02 03:00:05 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/02/02 03:00:05 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/02/02 03:00:05 INFO util.GSet: Computing capacity for map cachedBlocks
17/02/02 03:00:05 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:05 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/02/02 03:00:05 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/02/02 03:00:05 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/02/02 03:00:05 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/02/02 03:00:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/02/02 03:00:06 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:06 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/02/02 03:00:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /tmp/dfs/name ? (Y or N) Y
17/02/02 03:00:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1867851271-192.168.190.128-1485975628037
17/02/02 03:00:28 INFO common.Storage: Storage directory /tmp/dfs/name has been successfully formatted.
17/02/02 03:00:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/02/02 03:00:29 INFO util.ExitUtil: Exiting with status 0
17/02/02 03:00:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.190.128
************************************************************/
This means the file system was formatted successfully.
Starting Hadoop
Note that Hadoop is started by running the command on the master node only; the other nodes need nothing done, because the master starts the slave nodes automatically according to the configuration files.
hadoop@master:~$ start-all.sh
The output looks like this:
hadoop@master:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-slave2.out
You can check whether everything started correctly by listing the processes running on each node with the jps command.
On the master node:
hadoop@master:~$ jps
11012 Jps
10748 ResourceManager
10594 SecondaryNameNode
On the slave1 node:
hadoop@slave1:~$ jps
7227 Jps
7100 NodeManager
6977 DataNode
On the slave2 node:
hadoop@slave2:~$ jps
6654 Jps
6496 NodeManager
6373 DataNode
You can check the cluster status with the following command, or through http://master:50070.
hadoop dfsadmin -report
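To spot-check all three nodes in one pass, a small loop over ssh works too — a sketch that relies on the passwordless logins configured earlier and on jps being on each node's PATH:

for host in master slave1 slave2; do
  echo "== $host =="
  ssh $host jps
done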
That completes the Hadoop installation and configuration.
Installing HBase
HBase has three run modes. Standalone mode is very simple to configure: it works with almost no changes to the installation files. Running distributed mode, however, requires Hadoop. Also, before configuring certain HBase files, the following prerequisites (already covered in the Hadoop part above) must be in place:
(1) JDK
(2) Hadoop
(3) SSH
Fully distributed installation
For a fully distributed HBase installation, hbase-site.xml configures the HBase properties of the local machine, while hbase-env.sh configures properties global to the whole HBase cluster; in other words, every machine can learn the cluster-wide settings from hbase-env.sh. In addition, the HBase instances communicate with one another through ZooKeeper, so we also need to maintain a ZooKeeper ensemble.
First check the owner and permissions of the HBase files:
ls -a -l
which gives:
total 36
drwxr-xr-x  9 root   root 4096 Feb  1 02:41 .
drwxr-xr-x  4 root   root 4096 Jan 27 01:50 ..
drwx------  3 root   root 4096 Jan 31 03:35 .cache
drwxr-xr-x  5 root   root 4096 Jan 31 03:35 .config
drwxrwxrwx 11 hadoop root 4096 Feb  1 00:18 hadoop-2.7.1
drwxrwxrwx  8 hadoop root 4096 Feb  1 02:47 hbase-1.2.4
drwxr-xr-x  3 root   root 4096 Jan 31 03:35 .local
drwxr-xr-x  2 root   root 4096 Jan 31 14:47 software
drwxr-xr-x  2 hadoop root 4096 Feb  1 00:01 .ssh
(1) Configuring conf/hbase-site.xml
The hbase.rootdir and hbase.cluster.distributed parameters are mandatory for HBase. hbase.rootdir specifies where this machine's HBase data is stored; hbase.cluster.distributed declares the run mode (true for fully distributed, false for standalone or pseudo-distributed). In addition, hbase.master gives the location of the HBase master, and hbase.zookeeper.quorum gives the location of the ZooKeeper ensemble. An example configuration file is shown below.
As before, locate hbase-site.xml with Ubuntu's file search; it is in:
/home/hadoop/hbase-1.2.4/conf
Configure it as follows:
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- /** * * licensed to the apache software foundation (asf) under one * or more contributor license agreements. see the notice file * distributed with this work for additional information * regarding copyright ownership. the asf licenses this file * to you under the apache license, version 2.0 (the * "license"); you may not use this file except in compliance * with the license. you may obtain a copy of the license at * * http://www.apache.org/licenses/license-2.0 * * unless required by applicable law or agreed to in writing, software * distributed under the license is distributed on an "as is" basis, * without warranties or conditions of any kind, either express or implied. * see the license for the specific language governing permissions and * limitations under the license. */ --> <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://master:9000/hbase</value> <description>hbase data storge directory</description> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> <description>assign hbase run mode</description> </property> <property> <name>hbase.master</name> <value>hdfs://master:60000</value> <description>assign master position</description> </property> <property> <name>hbase.zookeeper.quorum</name> <value>master,slave1,slave2</value> <description>assign zookeeper cluster</description> </property> </configuration>
(2) Configuring conf/regionservers
The regionservers file lists all machines that run an HBase region server (HRegionServer). Its configuration is very similar to Hadoop's slaves file: one machine per line. When HBase starts, it starts the machines listed in this file; likewise, when HBase shuts down, it reads the file again and shuts them all down.
In our configuration, the HBase master and the HDFS NameNode run on the host named master, and HBase region servers run on master, slave1, and slave2. Given that, we just set the contents of conf/regionservers under the HBase installation directory on every machine to:
/home/hadoop/hbase-1.2.4/conf
master
slave1
slave2
Alternatively, the HBase master and the HRegionServers can be kept on separate machines; in that case simply delete the master line from the file above.
(3) ZooKeeper configuration
A fully distributed HBase cluster requires running ZooKeeper instances, and every HBase node must be able to communicate with them. By default HBase maintains a set of ZooKeeper instances itself, but you can configure an independent ZooKeeper ensemble instead, which makes the system more robust.
In conf/hbase-env.sh the variable HBASE_MANAGES_ZK defaults to true, which means HBase uses its bundled ZooKeeper instance — but that instance can only serve HBase in standalone or pseudo-distributed mode. For a fully distributed installation you configure your own ZooKeeper ensemble. Once the hbase.zookeeper.quorum property is set in hbase-site.xml, the system will preferentially use the ZooKeeper hosts it lists. If HBASE_MANAGES_ZK is true, HBase runs ZooKeeper as part of itself when it starts, and the corresponding process is HQuorumPeer; if it is false, you must start the ZooKeeper ensemble named by hbase.zookeeper.quorum yourself before starting HBase, and the corresponding process shows up as QuorumPeerMain. When ZooKeeper runs as part of HBase it is shut down automatically when HBase stops; otherwise the ZooKeeper service has to be stopped manually.
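For the setup in this article the relevant lines of conf/hbase-env.sh would look roughly like this (a sketch matching the JDK path used above; with HBASE_MANAGES_ZK=true, HBase itself runs the quorum listed in hbase.zookeeper.quorum):

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true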
Running HBase
Before running it, create the HBase directory in the HDFS file system:
hdfs dfs -mkdir hdfs://master:9000/hbase
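An optional sanity check before starting HBase — list the file system root to confirm the directory exists:

hdfs dfs -ls hdfs://master:9000/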
Run start-hbase.sh:
hadoop@master:~$ start-hbase.sh
slave1: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-slave2.out
master: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-master.out
starting master, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-master-master.out
master: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-master.out
slave2: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave2.out
slave1: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave1.out
Once HBase has started, you can enter the HBase shell with:
hbase shell
On success you will see something like:
hadoop@master:~$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016

hbase(main):001:0>
Inside the HBase shell, type the status command; if you see a result like the following, HBase is installed successfully.
hbase(main):009:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
Type list:
hbase(main):010:0> list
TABLE
0 row(s) in 0.3250 seconds

=> []
This concludes the walkthrough of building a fully distributed cluster on Ubuntu. Working through it should give you a much more concrete picture of the process; the finer points will still need to be verified in practice on your own machines.