This article explains how to read from and write to Kafka with Python, covering both the producer and the consumer side in detail. The examples below use the kafka-python client.
Producer
A crawler usually acts as the producer side. After a message is sent, it is best to record which partition it went to and at what offset; in many situations these records help you locate problems quickly. So attach callbacks to send(), one for success and one for failure.
```python
# -*- coding: utf-8 -*-
'''
Callbacks also preserve partition order: if message a is sent before
message b to the same partition, a's callback fires before b's.
'''
import json

from kafka import KafkaProducer

topic = 'demo'


def on_send_success(record_metadata):
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)


def on_send_error(excp):
    print('I am an errback: {}'.format(excp))


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092'
    )
    producer.send(topic, value=b'{"test_msg":"hello world"}') \
        .add_callback(on_send_success).add_errback(on_send_error)
    # close() blocks until all pending sends complete, then closes the producer
    producer.close()


def main2():
    '''
    Send JSON-formatted messages
    '''
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    producer.send(topic, value={"test_msg": "hello world"}) \
        .add_callback(on_send_success).add_errback(on_send_error)
    # close() blocks until all pending sends complete, then closes the producer
    producer.close()


if __name__ == '__main__':
    # main()
    main2()
```
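The value_serializer passed to KafkaProducer in main2 is just a plain callable, so you can check its behavior without a broker. A minimal sketch (the deserialize helper is illustrative, not part of kafka-python; it is what you would pass as value_deserializer on the consumer side):

```python
import json

# Same serializer passed to KafkaProducer(value_serializer=...) in main2
serialize = lambda m: json.dumps(m).encode('utf-8')

# Matching deserializer for KafkaConsumer(value_deserializer=...)
deserialize = lambda b: json.loads(b.decode('utf-8'))

payload = {"test_msg": "hello world"}
raw = serialize(payload)
print(raw)                          # the bytes Kafka stores on the wire
print(deserialize(raw) == payload)  # True: the dict round-trips cleanly
```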
Consumer
Kafka's consumption model is fairly complex, so I will break it down into the following cases.
1. Without a consumer group (group_id=None)
Without a consumer group you can start as many consumers as you like; you are no longer limited by the partition count, and every consumer receives every message even when there are more consumers than partitions.
```python
# -*- coding: utf-8 -*-
'''
Consumer: group_id=None
'''
from kafka import KafkaConsumer

topic = 'demo'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        # auto_offset_reset='earliest',
    )
    for msg in consumer:
        print(msg)
        print(msg.value)
    consumer.close()


if __name__ == '__main__':
    main()
```
2. With a consumer group (specify group_id)
The example below pulls messages with the poll() method.

A single poll() only pulls messages from one partition; with 2 partitions and 1 consumer, it takes 2 polls to cover both. poll() returns as soon as messages are available, or an empty dict if nothing arrives within timeout_ms, so one call may return a single message while another returns max_records.
```python
# -*- coding: utf-8 -*-
'''
Consumer: with a group_id
'''
from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id,
    )
    while True:
        try:
            # poll() returns a dict: {TopicPartition: [ConsumerRecord, ...]}
            batch_msgs = consumer.poll(timeout_ms=1000, max_records=2)
            if not batch_msgs:
                continue
            '''
            {TopicPartition(topic='demo', partition=0): [ConsumerRecord(topic='demo', partition=0, offset=42, timestamp=1576425111411, timestamp_type=0, key=None, value=b'74', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=2, serialized_header_size=-1)]}
            '''
            for tp, msgs in batch_msgs.items():
                print('topic: {}, partition: {}, receive length: {}'.format(
                    tp.topic, tp.partition, len(msgs)))
                for msg in msgs:
                    print(msg.value)
        except KeyboardInterrupt:
            break
    consumer.close()


if __name__ == '__main__':
    main()
```
About consumer groups

The behavior breaks down by configuration as follows.

group_id=None:
- auto_offset_reset='latest': every start consumes from the newest offset, so messages produced while the consumer was down are lost
- auto_offset_reset='earliest': every start re-consumes the full data from the beginning, regardless of what has already been consumed

A fresh group_id (no commit records for it in the Kafka cluster yet):
- auto_offset_reset='latest': only consumes data received after startup; after a restart it resumes from the last committed offset
- auto_offset_reset='earliest': consumes the full data from the beginning

An existing group_id (the Kafka cluster still holds commit records for it):
- auto_offset_reset='latest': resumes from the last committed offset
- auto_offset_reset='earliest': resumes from the last committed offset (the setting only matters when no committed offset exists)
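The cases above can be condensed into a small pure function. This is a hedged summary of the behavior described in this article, not an API of kafka-python; the function name and return strings are illustrative:

```python
def effective_start(group_id, auto_offset_reset, has_committed_offset=False):
    """Where consumption starts, per the cases described above.

    group_id=None never commits offsets, so auto_offset_reset applies on
    every start; with a group_id, a committed offset wins over the setting.
    """
    if group_id is not None and has_committed_offset:
        return 'committed offset'
    return 'beginning' if auto_offset_reset == 'earliest' else 'latest'


print(effective_start(None, 'latest'))         # latest
print(effective_start('test_id', 'earliest'))  # beginning (fresh group)
print(effective_start('test_id', 'latest', has_committed_offset=True))
```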
Performance testing
The tests below were run locally; if you plan to use Kafka in production, run your own performance tests first.
producer
```python
# -*- coding: utf-8 -*-
'''
producer performance

environment:
    mac
    python3.7
    broker 1
    partition 2
'''
import json
import time

from kafka import KafkaProducer

topic = 'demo'
nums = 1000000


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    st = time.time()
    cnt = 0
    for _ in range(nums):
        producer.send(topic, value=_)
        cnt += 1
        if cnt % 10000 == 0:
            print(cnt)
    producer.flush()
    et = time.time()
    cost_time = et - st
    print('send nums: {}, cost time: {}, rate: {}/s'.format(
        nums, cost_time, nums // cost_time))


if __name__ == '__main__':
    main()

'''
send nums: 1000000, cost time: 61.89236712455749, rate: 16157.0/s
send nums: 1000000, cost time: 61.29534196853638, rate: 16314.0/s
'''
```
consumer
```python
# -*- coding: utf-8 -*-
'''
consumer performance
'''
import time

from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main1():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    for msg in consumer:
        nums += 1
        if nums >= 500000:
            break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('one_by_one: consume nums: {}, cost time: {}, rate: {}/s'.format(
        nums, cost_time, nums // cost_time))


def main2():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    running = True
    batch_pool_nums = 1
    while running:
        batch_msgs = consumer.poll(timeout_ms=1000, max_records=batch_pool_nums)
        if not batch_msgs:
            continue
        for tp, msgs in batch_msgs.items():
            nums += len(msgs)
            if nums >= 500000:
                running = False
                break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('batch_pool: max_records: {} consume nums: {}, cost time: {}, rate: {}/s'.format(
        batch_pool_nums, nums, cost_time, nums // cost_time))


if __name__ == '__main__':
    # main1()
    main2()

'''
one_by_one: consume nums: 500000, cost time: 8.018627166748047, rate: 62354.0/s
one_by_one: consume nums: 500000, cost time: 7.698841094970703, rate: 64944.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 17.975456953048706, rate: 27815.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 16.711708784103394, rate: 29919.0/s
batch_pool: max_records: 500 consume nums: 500369, cost time: 6.654940843582153, rate: 75187.0/s
batch_pool: max_records: 500 consume nums: 500183, cost time: 6.854053258895874, rate: 72976.0/s
batch_pool: max_records: 1000 consume nums: 500485, cost time: 6.504687070846558, rate: 76942.0/s
batch_pool: max_records: 1000 consume nums: 500775, cost time: 7.047331809997559, rate: 71058.0/s
'''
```
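One way to read the numbers above: each poll() round-trip carries a fixed overhead, which larger max_records values amortize across more messages. A toy cost model (the constants are made up for illustration, not measured against a real broker) reproduces the trend that max_records=1 is slowest and large batches converge:

```python
def model_rate(max_records, total_msgs=500_000,
               per_poll_overhead=3e-5, per_msg_cost=1e-5):
    # Each poll pays a fixed overhead plus a per-message processing cost.
    polls = -(-total_msgs // max_records)  # ceiling division
    total_time = polls * per_poll_overhead + total_msgs * per_msg_cost
    return total_msgs / total_time


print(round(model_rate(1)))     # slowest: overhead paid once per message
print(round(model_rate(500)))
print(round(model_rate(1000)))  # large batches amortize the overhead away
```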