Servers | bigdata121/192.168.50.121, bigdata122/192.168.50.122, bigdata123/192.168.50.123 |
---|---|
ZooKeeper version | 3.4.10 |
OS | CentOS 7.2 |
(1) Install ZooKeeper
[root@bigdata121 modules]# cd /opt/modules/zookeeper-3.4.10
[root@bigdata121 zookeeper-3.4.10]# mkdir zkData
[root@bigdata121 zookeeper-3.4.10]# mv conf/zoo_sample.cfg conf/zoo.cfg
(2) Edit the zoo.cfg configuration
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
# directory where ZooKeeper stores its data
dataDir=/opt/modules/zookeeper-3.4.10/zkData
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# the cluster section below is the key part
#############cluster#############################
server.1=bigdata121:2888:3888
server.2=bigdata122:2888:3888
server.3=bigdata123:2888:3888
How to read the cluster entries, which have the form server.A=B:C:D:
A is a number identifying the server within the ensemble, i.e. its sid;
B is the server's hostname or IP address;
C is the port this server uses to exchange data with the ensemble's Leader; it is not the client-facing port (which defaults to 2181);
D is the port the servers use to talk to each other during leader election, i.e. when the current Leader goes down and a new one must be chosen.
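As a quick illustration of that format, the sketch below splits a server line into its four parts (ServerLineDemo and parse are made-up names for this example, not ZooKeeper API):

```java
// Split a "server.A=B:C:D" line into sid, host, peer port and election port.
public class ServerLineDemo {
    static String[] parse(String line) {
        String[] kv = line.split("=");                      // "server.1" / "bigdata121:2888:3888"
        String sid = kv[0].substring("server.".length());   // the number after "server."
        String[] hostPorts = kv[1].split(":");              // host, peer port, election port
        return new String[] { sid, hostPorts[0], hostPorts[1], hostPorts[2] };
    }

    public static void main(String[] args) {
        String[] p = parse("server.1=bigdata121:2888:3888");
        System.out.println("sid=" + p[0] + " host=" + p[1]
                + " peerPort=" + p[2] + " electionPort=" + p[3]);
    }
}
```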
Copy the whole configured directory to the other machines; scp or rsync both work, whichever you prefer.
(3) Set the server id
In the directory pointed to by dataDir above, create a file named "myid" whose content is the current server's id. This id is the server's unique identifier within the ensemble and must match the number used in the server.X entries of the config file, otherwise the server will fail with an error.
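For example, on bigdata121 (the path assumes the dataDir configured above; write 2 on bigdata122 and 3 on bigdata123):

```shell
# Assumed dataDir from zoo.cfg above; run on bigdata121 (use 2/3 on the other hosts)
ZK_DATA_DIR=/opt/modules/zookeeper-3.4.10/zkData
mkdir -p "$ZK_DATA_DIR"
echo 1 > "$ZK_DATA_DIR/myid"
cat "$ZK_DATA_DIR/myid"
```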
(4) Configure environment variables
vim /etc/profile.d/zookeeper.sh
#!/bin/bash
export ZOOKEEPER_HOME=/opt/modules/zookeeper-3.4.10
export PATH=${ZOOKEEPER_HOME}/bin:$PATH
Then load it:
source /etc/profile.d/zookeeper.sh
(5) Start the cluster
Run on all three machines:
Start: zkServer.sh start
Check the local ZooKeeper's status: zkServer.sh status
[root@bigdata121 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
Use zkCli.sh to connect to the local ZooKeeper server.
The following commands are available:
Command | Function |
---|---|
help | show help for all commands |
ls path [watch] | list the children of the given znode; the optional watch registers a watch on changes to the node's children. Note that a watch fires only once; to keep watching you must re-register it after every trigger |
ls2 path [watch] | list the node's children together with its stat (update counts etc.), similar to ls -l on Linux |
create path data | create a persistent znode; with -s a sequence number is appended to the node name (useful when names would otherwise collide); with -e an ephemeral node is created instead |
get path [watch] | read a node's data; the optional watch registers a one-time watch on changes to the node's value |
set path value | write a node's data |
stat path | show a node's status |
rmr path | delete a node and its children recursively |
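The one-shot watch behavior noted above can be sketched with plain JDK types; the class below only simulates the semantics (it is not the ZooKeeper API, and all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stdlib-only simulation of ZooKeeper's one-shot watch semantics:
// each registered watcher fires at most once and is then dropped, so a watcher
// that wants to keep watching must re-register itself inside its own callback.
public class OneShotWatchDemo {
    private final List<Consumer<String>> watchers = new ArrayList<>();

    public void watch(Consumer<String> w) {
        watchers.add(w);
    }

    public void fire(String event) {
        // every pending watch is consumed by this single event
        List<Consumer<String>> current = new ArrayList<>(watchers);
        watchers.clear();
        for (Consumer<String> w : current) {
            w.accept(event);
        }
    }

    public static void main(String[] args) {
        OneShotWatchDemo node = new OneShotWatchDemo();
        List<String> seen = new ArrayList<>();
        node.watch(new Consumer<String>() {
            public void accept(String e) {
                seen.add(e);
                node.watch(this); // re-register after each trigger, as with ls/get watches
            }
        });
        node.fire("child-added");
        node.fire("child-removed"); // observed only because the watcher re-registered
        System.out.println(seen);
    }
}
```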
1、Maven dependency
<dependencies>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.10</version>
    </dependency>
</dependencies>
2、Create a ZooKeeper client
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.util.List;
public class ZkTest {
    public static String connectString = "bigdata121:2181,bigdata122:2181,bigdata123:2181";
    public static int sessionTimeout = 2000;
    public ZooKeeper zkClient = null;

    @Before
    public void init() throws IOException {
        // create the ZooKeeper client
        zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            // callback invoked when a watched event fires; every watch is one-shot
            public void process(WatchedEvent watchedEvent) {
                System.out.println(watchedEvent.getState() + "," + watchedEvent.getType() + "," + watchedEvent.getPath());
                try {
                    // re-register the watch, since the previous one was consumed by this event
                    zkClient.getChildren("/", true);
                } catch (KeeperException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
3、Create a node
@Test
public void create() {
    // create a node; the arguments are: path, data, ACL, node type
    // here: /wangjin with data "tao", open ACL, persistent node
    try {
        String s = zkClient.create("/wangjin", "tao".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException e) {
        System.out.println("node exists!!!");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
4、Get child nodes
zkClient.getChildren(path, watch)
Returns the list of child node names.
Example:
@Test
public void getChildNode() {
    try {
        List<String> children = zkClient.getChildren("/", false);
        for (String node : children) {
            System.out.println(node);
        }
    } catch (KeeperException e) {
        System.out.println("node not exists!!!");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
5、Check whether a node exists
zkClient.exists(path, watch)
Returns the node's Stat; null means the node does not exist.
Example:
@Test
public void nodeExist() {
    // exists() returns the node's Stat; null means the node does not exist
    try {
        Stat stat = zkClient.exists("/king", false);
        System.out.println(stat == null ? "not exists" : "exists");
    } catch (KeeperException e) {
        System.out.println("node not exists");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
1、Maven dependencies
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>16.0.1</version>
</dependency>
2、The scenario
Simulate a flash-sale purchase: the shared stock count may only be changed while holding a lock.
3、Code
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class TestDistributedLock {
    // shared resource
    private static int count = 10;

    // decrement the stock count
    private static void printCountNumber() {
        System.out.println("***********" + Thread.currentThread().getName() + "**********");
        System.out.println("current count: " + count);
        count--;
        // sleep 500 ms to widen the window for contention
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("***********" + Thread.currentThread().getName() + "**********");
    }

    public static void main(String[] args) {
        // retry policy for the client
        RetryPolicy policy = new ExponentialBackoffRetry(1000, // base sleep time between retries
                10); // maximum number of retries
        // build a ZooKeeper client
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("bigdata121:2181")
                .retryPolicy(policy)
                .build();
        // connect to ZooKeeper
        client.start();
        // create the inter-process mutex; under the hood this creates a znode
        final InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
        // start 10 threads that contend for the shared resource
        for (int i = 0; i < 10; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // acquire the lock
                        lock.acquire();
                        // access the shared resource
                        printCountNumber();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    } finally {
                        // release the lock
                        try {
                            lock.release();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}
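InterProcessMutex gives mutual exclusion across JVMs and machines through the /mylock znode. For comparison, the same acquire/decrement/release pattern confined to a single JVM can be sketched with java.util.concurrent's ReentrantLock (LocalLockDemo is an illustrative name, not Curator code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Single-JVM analogue of the Curator example above: ReentrantLock plays the role
// of InterProcessMutex, so the decrements of the shared count never interleave.
public class LocalLockDemo {
    static int count = 10;
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(() -> {
                lock.lock();          // acquire, like lock.acquire()
                try {
                    count--;          // critical section
                } finally {
                    lock.unlock();    // release, like lock.release()
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();                 // wait for all workers to finish
        }
        System.out.println("final count: " + count);
    }
}
```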