Lu Chunli's work notes. Who says programmers can't have a bit of literary flair?
Kafka's main shell scripts are:
[hadoop@nnode kafka0.8.2.1]$ ll
total 80
-rwxr-xr-x 1 hadoop hadoop  943 2015-02-27 kafka-console-consumer.sh
-rwxr-xr-x 1 hadoop hadoop  942 2015-02-27 kafka-console-producer.sh
-rwxr-xr-x 1 hadoop hadoop  870 2015-02-27 kafka-consumer-offset-checker.sh
-rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-consumer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  860 2015-02-27 kafka-mirror-maker.sh
-rwxr-xr-x 1 hadoop hadoop  884 2015-02-27 kafka-preferred-replica-election.sh
-rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-producer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-reassign-partitions.sh
-rwxr-xr-x 1 hadoop hadoop  866 2015-02-27 kafka-replay-log-producer.sh
-rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-replica-verification.sh
-rwxr-xr-x 1 hadoop hadoop 4185 2015-02-27 kafka-run-class.sh
-rwxr-xr-x 1 hadoop hadoop 1333 2015-02-27 kafka-server-start.sh
-rwxr-xr-x 1 hadoop hadoop  891 2015-02-27 kafka-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  868 2015-02-27 kafka-simple-consumer-shell.sh
-rwxr-xr-x 1 hadoop hadoop  861 2015-02-27 kafka-topics.sh
drwxr-xr-x 2 hadoop hadoop 4096 2015-02-27 windows
-rwxr-xr-x 1 hadoop hadoop 1370 2015-02-27 zookeeper-server-start.sh
-rwxr-xr-x 1 hadoop hadoop  875 2015-02-27 zookeeper-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop  968 2015-02-27 zookeeper-shell.sh
[hadoop@nnode kafka0.8.2.1]$
Note: Kafka also ships .bat scripts for running on Windows, located in the bin/windows directory.
ZooKeeper scripts
Every Kafka component depends on ZooKeeper, so a ZooKeeper environment must be available before Kafka can be used. You can either set up a full ZooKeeper cluster, or use the ZooKeeper scripts bundled with Kafka to start a single standalone-mode ZooKeeper node.
# Start the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-start.sh
USAGE: bin/zookeeper-server-start.sh zookeeper.properties
# The configuration file is config/zookeeper.properties; its main setting is
# ZooKeeper's local storage path (dataDir).
# Internally the script calls:
#   exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain $@

# Stop the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-stop.sh
# Internally the script calls:
#   ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT

# ZooKeeper shell usage
[hadoop@nnode kafka0.8.2.1]$ zookeeper-shell.sh
USAGE: bin/zookeeper-shell.sh zookeeper_host:port[/path] [args...]
# Internally the script calls:
#   exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server "$@"

# The ZooKeeper shell is used to inspect ZooKeeper's node (znode) information
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh nnode:2181,dnode1:2181,dnode2:2181/
Connecting to nnode:2181,dnode1:2181,dnode2:2181/
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
ls /
[hbase, hadoop-ha, admin, zookeeper, consumers, config, zk-book, brokers, controller_epoch]
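The start scripts above are thin wrappers that `exec` kafka-run-class.sh, forwarding every argument via `$@`. A minimal sketch of that delegation pattern (the wrapper and target paths here are hypothetical, not part of the Kafka distribution):

```shell
#!/bin/sh
# A tiny "target" script that just prints each argument on its own line,
# standing in for kafka-run-class.sh.
cat > /tmp/target.sh <<'EOF'
#!/bin/sh
printf '%s\n' "$@"
EOF
chmod +x /tmp/target.sh

# The wrapper, in the style of zookeeper-server-start.sh: exec replaces the
# wrapper process with the target, and the quoted "$@" preserves arguments
# containing spaces exactly as passed in.
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/sh
exec /tmp/target.sh "$@"
EOF
chmod +x /tmp/wrapper.sh

/tmp/wrapper.sh first "second arg" third
# first
# second arg
# third
```

Note that Kafka's own scripts pass an unquoted `$@`; quoting it, as above, is the safer idiom when arguments may contain whitespace.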
Note: in a shell script, $@ expands to the full list of positional parameters, and $# is the number of positional parameters passed to the script.
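A quick self-contained illustration of these two special parameters (the script path is illustrative):

```shell
# args-demo.sh: show $# (the argument count) and "$@" (the arguments themselves).
cat > /tmp/args-demo.sh <<'EOF'
#!/bin/sh
echo "count: $#"
for arg in "$@"; do
  echo "arg: $arg"
done
EOF
chmod +x /tmp/args-demo.sh

/tmp/args-demo.sh foo "bar baz"
# count: 2
# arg: foo
# arg: bar baz
```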
Starting and stopping Kafka
# Start the Kafka server
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-start.sh
USAGE: bin/kafka-server-start.sh [-daemon] server.properties
# Internally the script calls:
#   exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka $@

# Omitted
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-run-class.sh

# Stop the Kafka server
[hadoop@nnode kafka0.8.2.1]$ kafka-server-stop.sh
# Internally the script calls:
#   ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
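The stop script is just a ps/grep/awk/xargs pipeline that finds the broker's PID and sends it SIGTERM. The same pattern can be tried safely against a throwaway background process instead of `kafka.Kafka` (everything below is a local demonstration, not a Kafka command):

```shell
#!/bin/sh
# Start a harmless background process with a recognizable command line.
# Using a variable for the duration keeps this script's own text from
# matching the grep pattern.
dur=31427
sleep "$dur" &
bgpid=$!
sleep 1   # give the child a moment to exec

# Same pipeline shape as kafka-server-stop.sh, with our sleep as the target.
# 'grep -v grep' drops the grep process itself, whose command line also
# contains the pattern; awk picks the PID (first column of 'ps ax').
ps ax | grep "sleep $dur" | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM

wait "$bgpid" 2>/dev/null || true   # reap the terminated child
echo "stopped"
# stopped
```

SIGTERM asks the process to shut down cleanly, which is why kafka-server-stop.sh uses it rather than SIGKILL: the broker gets a chance to flush and close its logs.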
Note: on startup Kafka reads its configuration from config/server.properties. The three core settings for a Kafka server are:
broker.id         : the broker's unique identifier, a non-negative integer (the last octet of the
                    host's IP address is a common choice)
port              : the port the server listens on for client connections (default 9092)
zookeeper.connect : the ZooKeeper connection string, in the form
                    hostname1:port1[,hostname2:port2,hostname3:port3]

# Optional
log.dirs          : where Kafka stores its data (default /tmp/kafka-logs); a comma-separated list of
                    one or more directories. When a new partition is created, it is placed in
                    whichever directory currently holds the fewest partitions.
num.partitions    : the number of partitions per topic (default 1); can be overridden when creating
                    a topic

# For other settings see http://kafka.apache.org/documentation.html#configuration
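Putting those settings together, a minimal server.properties might look like this (the host names follow the nnode/dnode examples above; all values are illustrative, not a recommended production configuration):

```
broker.id=0
port=9092
zookeeper.connect=nnode:2181,dnode1:2181,dnode2:2181

# optional
log.dirs=/tmp/kafka-logs
num.partitions=1
```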
Kafka messages
# Message producer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.   # reads data from the console
Option                        Description
------                        -----------
--broker-list <broker-list>   REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--topic <topic>               REQUIRED: The topic id to produce messages to.
# These two options are required; run the command with no arguments to see the remaining optional ones.

# Message consumer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option               Description
------               -----------
--zookeeper <urls>   REQUIRED: The connection string for the zookeeper connection, in the form
                     host:port. (Multiple URLS can be given to allow fail-over.)
--topic <topic>      The topic id to consume on.
--from-beginning     If the consumer does not already have an established offset to consume from,
                     start with the earliest message present in the log rather than the latest
                     message.
# Only --zookeeper is required; run the command with no arguments to see the full help.

# Topic management
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option               Description
------               -----------
--zookeeper <urls>   REQUIRED: The connection string for the zookeeper connection, in the form
                     host:port. (Multiple URLS can be given to allow fail-over.)
--create             Create a new topic.
--delete             Delete a topic.
--alter              Alter the configuration for the topic.
--list               List all available topics.
--describe           List details for the given topics.
--topic <topic>      The topic to be created, altered or described. Can also accept a regular
                     expression, except with the --create option.
--help               Print usage information.
# Only --zookeeper is required; run the command with no arguments to see the full help.
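Since a live broker is not always at hand, here is a small sketch that merely composes the typical end-to-end command lines (create a topic, produce to it, consume from it) without executing them. The hosts, topic name, and helper function names are all hypothetical:

```shell
#!/bin/sh
# Illustrative connection details, following the nnode/dnode examples above.
ZK="nnode:2181,dnode1:2181,dnode2:2181"
BROKERS="nnode:9092"
TOPIC="demo-topic"

# Each helper echoes the command it would run rather than running it,
# so the option shapes can be checked without a cluster.
create_cmd() {
  echo "bin/kafka-topics.sh --zookeeper $ZK --create --topic $TOPIC --partitions 1 --replication-factor 1"
}
produce_cmd() {
  echo "bin/kafka-console-producer.sh --broker-list $BROKERS --topic $TOPIC"
}
consume_cmd() {
  echo "bin/kafka-console-consumer.sh --zookeeper $ZK --topic $TOPIC --from-beginning"
}

create_cmd
produce_cmd
consume_cmd
```

Note how the producer addresses brokers directly (--broker-list), while the 0.8.x consumer and topic tool go through ZooKeeper (--zookeeper).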
The remaining scripts are omitted for now.