Package downloads
http://archive.cloudera.com/cdh5/cdh/4/
http://apache.fayea.com/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz
http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz
http://apache.opencas.org/hbase/1.2.0/hbase-1.2.0-bin.tar.gz
http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.tar.gz
Environment
10.200.140.58 hadoop-308.99bill.com # physical machine: datanode, zookeeper, regionserver
10.200.140.59 hadoop-309.99bill.com # physical machine: datanode, zookeeper, regionserver
10.200.140.60 hadoop-310.99bill.com # physical machine: datanode, zookeeper, regionserver
10.200.140.45 hadoop-311.99bill.com # virtual machine: master
10.200.140.46 hadoop-312.99bill.com # virtual machine: secondary HMaster
Set the hostname on each node and disable IPv6.
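The IPv6 part boils down to two sysctl keys. A minimal sketch; the keys assume a RHEL/CentOS-era kernel, and the helper name is our own:

```shell
# Hypothetical helper: append the two IPv6-disable keys to a sysctl config file.
disable_ipv6_conf() {
  conf="$1"    # e.g. /etc/sysctl.conf
  printf '%s\n' \
    'net.ipv6.conf.all.disable_ipv6 = 1' \
    'net.ipv6.conf.default.disable_ipv6 = 1' >> "$conf"
}
# As root: disable_ipv6_conf /etc/sysctl.conf && sysctl -p
```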
cat /etc/profile
export JAVA_HOME=/opt/jdk1.7.0_80/
PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export JAVA_HOME
export PATH
export CLASSPATH
HADOOP_BASE=/opt/oracle/hadoop
HADOOP_HOME=/opt/oracle/hadoop
YARN_HOME=/opt/oracle/hadoop
PATH=$HADOOP_BASE/bin:$PATH
export HADOOP_BASE PATH
Configure passwordless SSH login from 10.200.140.45 to every node.
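Passwordless login from the master can be set up roughly as below. The host list and the `oracle` user come from the environment table above; the `DRY_RUN` preview switch is our own addition:

```shell
# Sketch: generate a key on the master once, then push it to every other node.
setup_ssh() {
  hosts="hadoop-308.99bill.com hadoop-309.99bill.com hadoop-310.99bill.com hadoop-312.99bill.com"
  # Create an RSA key pair (empty passphrase) if none exists yet.
  [ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
  # Copy the public key out; set DRY_RUN=1 to only print the commands.
  for h in $hosts; do
    ${DRY_RUN:+echo} ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "oracle@$h"
  done
}
# On 10.200.140.45: setup_ssh
```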
[oracle@hadoop-311 hadoop]$ cat core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-311.99bill.com:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>16384</value>
</property>
</configuration>
[oracle@hadoop-311 hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop/data/dfs</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>150</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>64m</value>
</property>
<property>
<name>dfs.datanode.du.reserved</name>
<value>1073741824</value>
<final>true</final>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/opt/oracle/hadoop/etc/hadoop/slave-deny-list</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop-311.99bill.com:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-312.99bill.com:50090</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
[oracle@hadoop-311 hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>4000</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>4000</value>
</property>
</configuration>
Define the DataNodes:
[oracle@hadoop-311 hadoop]$ cat slaves
hadoop-308.99bill.com
hadoop-309.99bill.com
hadoop-310.99bill.com
hadoop-env.sh
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
export HADOOP_PID_DIR=/opt/oracle/hadoop
export HADOOP_SECURE_DN_PID_DIR=/opt/oracle/hadoop
export JAVA_HOME=/opt/jdk1.7.0_80/
export HADOOP_HEAPSIZE=6000
exec_time=`date +'%Y%m%d-%H%M%S'`
export HADOOP_NAMENODE_OPTS="-Xmx6g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx6g ${HADOOP_SECONDARYNAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -Xmx6000m -Xms6000m -Xmn1000m -XX:PermSize=128M -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HADOOP_LOG_DIR/gc-$(hostname)-datanode-${exec_time}.log -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=10 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=20"
[oracle@hadoop-311 hadoop]$ cat yarn-site.xml
<?xml version="1.0"?>
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-311.99bill.com:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-311.99bill.com:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-311.99bill.com:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-311.99bill.com:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-311.99bill.com:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Start the Hadoop cluster
On the first run only, format the NameNode (this step is not needed on later starts):
hadoop/bin/hdfs namenode -format
Then start Hadoop:
hadoop/sbin/start-all.sh
Once startup completes without errors, run jps to list the current Java processes. On this node you should see NameNode (the Hadoop master process) and ResourceManager; the SecondaryNameNode runs on hadoop-312.
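That sanity check can be scripted. The helper below is our own illustration, not part of the distribution:

```shell
# Hypothetical helper: confirm the expected master daemons appear in `jps` output.
check_master_daemons() {
  out="$1"
  for d in NameNode ResourceManager; do
    case "$out" in
      *"$d"*) ;;
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "master daemons running"
}
# Usage: check_master_daemons "$(jps)"
```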
[oracle@hadoop-311 hadoop]$ jps
13332 Jps
5430 NameNode
5719 ResourceManager
3. ZooKeeper Cluster Installation
1. Unpack zookeeper-3.4.8.tar.gz and rename the directory to zookeeper. In zookeeper/conf, run cp zoo_sample.cfg zoo.cfg and edit it:
[oracle@hadoop-308 conf]$ cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
maxClientCnxns=0
# The number of ticks that the initial
# synchronization phase can take
initLimit=50
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# Number of snapshots to retain
autopurge.snapRetainCount=2
# Purge task interval in hours
autopurge.purgeInterval=84
dataDir=/opt/hadoop/zookeeperdata
# the port at which the clients will connect
clientPort=2181
server.1=hadoop-308:2888:3888
server.2=hadoop-309:2888:3888
server.3=hadoop-310:2888:3888
2. Create and edit the myid file:
mkdir /opt/hadoop/zookeeperdata
echo "1" > /opt/hadoop/zookeeperdata/myid
3. Then sync the zookeeper directory to the other two nodes, and on each node change myid to the matching number.
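Instead of editing myid by hand on each node, the id can be derived from zoo.cfg. A sketch; the awk helper is ours, and it assumes the short `server.N=host:port:port` lines shown above:

```shell
# Hypothetical helper: look up this host's server id in zoo.cfg.
myid_from_cfg() {
  # $1 = short hostname (e.g. hadoop-309), $2 = path to zoo.cfg
  awk -F'[.=:]' -v h="$1" '$1 == "server" && $3 == h { print $2 }' "$2"
}
# mkdir -p /opt/hadoop/zookeeperdata
# myid_from_cfg "$(hostname -s)" /opt/oracle/zookeeper/conf/zoo.cfg \
#   > /opt/hadoop/zookeeperdata/myid
```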
Start zookeeper:
cd /opt/oracle/zookeeper
./bin/zkServer.sh start
[oracle@hadoop-308 tools]$ jps
11939 Jps
4373 DataNode
8579 HRegionServer
4. HBase Cluster Installation and Configuration
1. Unpack hbase-1.2.0-bin.tar.gz, rename it to hbase, and edit hbase/conf/hbase-env.sh:
export HBASE_MANAGES_ZK=false
export HBASE_HEAPSIZE=4000
export JAVA_HOME=/opt/jdk1.7.0_80/
[oracle@hadoop-311 conf]$ cat hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
* Copyright 2010 The Apache Software Foundation
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop-311:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
<property>
<name>hbase.master</name>
<value>hadoop-312</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop-308,hadoop-309,hadoop-310</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>300</value>
</property>
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>70</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000</value>
</property>
<property>
<name>hbase.regionserver.restart.on.zk.expire</name>
<value>true</value>
<description>
Zookeeper session expired will force regionserver exit.
Enable this will make the regionserver restart.
</description>
</property>
<property>
<name>hbase.replication</name>
<value>false</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.4</value>
</property>
<property>
<name>hbase.regionserver.global.memstore.upperLimit</name>
<value>0.35</value>
</property>
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>8</value>
</property>
<property>
<name>hbase.server.thread.wakefrequency</name>
<value>100</value>
</property>
<property>
<name>hbase.master.distributed.log.splitting</name>
<value>false</value>
</property>
<property>
<name>hbase.regionserver.hlog.splitlog.writer.threads</name>
<value>3</value>
</property>
<property>
<name>hbase.client.scanner.caching</name>
<value>10</value>
</property>
<property>
<name>hbase.hregion.memstore.flush.size</name>
<value>134217728</value>
</property>
<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.user.region.classes</name>
<value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>2096</value>
<description>PRIVATE CONFIG VARIABLE</description>
</property>
</configuration>
Distribute hbase to the other 4 nodes.
5. Starting the Cluster
1. Start zookeeper:
zookeeper/bin/zkServer.sh start
2. Start Hadoop:
$ hadoop/sbin/start-all.sh
Edit hbase/conf/hbase-site.xml (identical to the listing in section 4 above).
hbase-env.sh
export JAVA_HOME=/opt/jdk1.7.0_80/
export HBASE_CLASSPATH=/opt/oracle/hadoop/conf
export HBASE_HEAPSIZE=4000
export HBASE_OPTS="-XX:PermSize=512M -XX:MaxPermSize=512M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=10 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=20"
exec_time=`date +'%Y%m%d-%H%M%S'`
export HBASE_MASTER_OPTS="-Xmx4096m -Xms4096m -Xmn128m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-$(hostname)-master-${exec_time}.log"
export HBASE_REGIONSERVER_OPTS="-Xmx8192m -Xms8192m -Xmn512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-$(hostname)-regionserver-${exec_time}.log"
export HBASE_MANAGES_ZK=false
[oracle@hadoop-311 conf]$ cat regionservers
hadoop-308
hadoop-309
hadoop-310
Distribute to the other four nodes, then start HBase:
cd /opt/oracle/hbase
sh bin/start-hbase.sh
[oracle@hadoop-311 bin]$ ./hbase shell
16/03/23 20:20:47 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.15-cdh5.7.1, r, Tue Nov 18 08:42:59 PST 2014
hbase(main):001:0> status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/oracle/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/oracle/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/03/23 20:20:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
3 servers, 0 dead, 0.6667 average load
10. Common Problems
10.1. Abnormal NameNode shutdown
On every machine in the Hadoop environment, use jps to list all processes and kill them, then start everything again in the normal startup order.
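The "list and kill everything" step can be sketched as follows; the daemon-name filter is our assumption about what runs on these nodes:

```shell
# Hypothetical helper: print the PIDs of Hadoop/HBase/ZooKeeper JVMs from `jps`
# output, which has the form "<pid> <MainClass>".
hadoop_pids() {
  jps | awk '$2 ~ /NameNode|DataNode|ResourceManager|NodeManager|HMaster|HRegionServer|QuorumPeerMain/ { print $1 }'
}
# On each node: hadoop_pids | xargs -r kill
# Then restart in order: zookeeper -> HDFS/YARN -> HBase.
```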
10.2. Abnormal DataNode shutdown
Start HDFS on the namenode:
run hadoop/sbin/start-all.sh
If the DataNode also hosts zookeeper, start zookeeper as well:
on that datanode run zookeeper/bin/zkServer.sh start.
Start HBase on the namenode:
run hbase/bin/start-hbase.sh
http://10.200.140.46:60010/master-status
10.3. Stopping a non-master server
Run on that server:
hadoop/sbin/hadoop-daemon.sh stop datanode
hadoop/sbin/yarn-daemon.sh stop nodemanager
hbase/bin/hbase-daemon.sh stop regionserver
Check http://10.200.140.45:50070/dfshealth.jsp until the node appears under dead nodes; once it does, the server can be powered off.
(Screenshots omitted: one taken just after the services stop, one after the node is fully marked dead.)
After the server reboots, run the following on hadoop001 to bring the services back:
hadoop/sbin/start-all.sh
hbase/bin/start-hbase.sh
11. Ports to Monitor
11.1. NameNode ports (hadoop001):
60010,60000,50070,50030,9000,9001,10000
11.2. ZooKeeper ports (hadoop003, hadoop004, hadoop005):
2181
11.3. DataNode ports (hadoop003, hadoop004, hadoop005, hadoop006, hadoop007):
60030,50075
12. Uneven HDFS uploads and a slow Balancer
The master node ships with start-balancer.sh for rebalancing blocks across datanodes.
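Two knobs usually make the balancer usable: raise the per-datanode balancing bandwidth (the default is only 1 MB/s) and tighten the threshold. A sketch, with values that are only suggestions:

```shell
# Convert MB/s to the bytes/s figure dfsadmin expects.
mb_to_bytes() { echo $(( $1 * 1024 * 1024 )); }

# Allow each datanode to spend 10 MB/s on balancing, then run the balancer
# until every datanode is within 5% of the cluster-average utilization:
# hdfs dfsadmin -setBalancerBandwidth "$(mb_to_bytes 10)"
# hadoop/sbin/start-balancer.sh -threshold 5
```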
########### Migration plan
First prepare a new hadoop environment in the new data center.
### hadoop migration - hbase
1. Confirm the new hbase runs correctly and that machines in the two clusters can reach each other by hostname. ok
2. Stop the new hbase. ok
3. On any hadoop machine in either cluster, run:
./hadoop distcp -bandwidth 10 -m 3 hdfs://hadoop001.99bill.com:9000/hbase/if_fss_files hdfs://hadoop-312.99bill.com:9000/hbase/if_fss_files
4. Using the attached script, run:
hbase org.jruby.Main ~/add_table.rb /hbase/if_fss_files
5. Start the new hbase.
### hadoop migration - hadoop data
######## Tidy the hadoop files; re-archive any day whose packing failed.
For example, for 2014-07-24:
./hdfs dfs -rm -r /fss/2014-07-24
./hdfs dfs -rm -r /fss/2014-07-24.har
./hdfs dfs -mv /fss/2014-07-24a.har /fss/2014-07-24.har
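The three commands generalize to any day. A small wrapper (our own, following the /fss/YYYY-MM-DD naming above; it assumes hdfs is on PATH, whereas the listing runs ./hdfs from the bin directory):

```shell
# Hypothetical wrapper: replace a failed day's archive with its re-packed copy.
repack_day() {
  d="$1"   # e.g. 2014-07-24
  hdfs dfs -rm -r "/fss/$d"
  hdfs dfs -rm -r "/fss/$d.har"
  hdfs dfs -mv "/fss/${d}a.har" "/fss/$d.har"
}
# repack_day 2014-07-24
```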
## Sync from the remote fss system to the new data center's local disk:
./hdfs dfs -copyToLocal hdfs://hadoop001.99bill.com:9000/fss/2015-04-08.har /opt/sdb/hadoop/tmp/
#### Import into the fss system from the new data center's local disk:
./hdfs dfs -copyFromLocal /opt/sdb/hadoop/tmp/2015-04-08.har /fss/
sleep 5
./hdfs dfs -copyFromLocal /opt/sdb/hadoop/tmp/2015-06/03-30.har /fss/2015-06
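The copyToLocal/copyFromLocal pair likewise generalizes per day. A sketch; hdfs is assumed on PATH, and the staging path plus old-namenode URL are taken from the commands above:

```shell
# Hypothetical wrapper: stage one day's archive locally, then load it into
# the new cluster's /fss tree.
migrate_day() {
  d="$1"   # e.g. 2015-04-08
  hdfs dfs -copyToLocal "hdfs://hadoop001.99bill.com:9000/fss/$d.har" /opt/sdb/hadoop/tmp/ &&
  hdfs dfs -copyFromLocal "/opt/sdb/hadoop/tmp/$d.har" /fss/
}
# migrate_day 2015-04-08
```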