The Kafka bin directory contains a large number of scripts, two of which are the performance-testing (load-testing) scripts that ship with Kafka itself. They are used to measure produce and consume throughput and to find out what is limiting it.
- kafka-producer-perf-test.sh: producer load-test script
- kafka-consumer-perf-test.sh: consumer load-test script
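Both scripts ship with the Kafka distribution itself, so nothing extra needs to be installed. A quick way to confirm they are present (the /opt/kafka path below is an assumed install location; adjust it to your own layout):

```bash
# List the built-in perf-test scripts in the Kafka bin directory
# (/opt/kafka is an assumed install path)
ls /opt/kafka/bin/ | grep perf-test
```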
| Test item | Messages (×10,000) | Command |
|---|---|---|
| Producer MQ messages | 10 | `./kafka-producer-perf-test.sh --topic test_abcdocker_perf --num-records 100000 --record-size 1000 --throughput 2000 --producer-props bootstrap.servers=1.1.1.1:9092` |
| | 100 | `./kafka-producer-perf-test.sh --topic test_abcdocker_perf --num-records 1000000 --record-size 2000 --throughput 5000 --producer-props bootstrap.servers=1.1.1.1:9092` |
| | 1000 | `./kafka-producer-perf-test.sh --topic test_abcdocker_perf --num-records 10000000 --record-size 2000 --throughput 5000 --producer-props bootstrap.servers=1.1.1.1:9092` |
| Consumer MQ messages | 10 | `./kafka-consumer-perf-test.sh --broker-list 1.1.1.1:9092 --topic test_abcdocker_perf --fetch-size 1048576 --messages 100000 --threads 1` |
| | 100 | `./kafka-consumer-perf-test.sh --broker-list 1.1.1.1:9092 --topic test_abcdocker_perf --fetch-size 1048576 --messages 1000000 --threads 1` |
| | 1000 | `./kafka-consumer-perf-test.sh --broker-list 1.1.1.1:9092 --topic test_abcdocker_perf --fetch-size 1048576 --messages 10000000 --threads 1` |
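Depending on the broker's auto.create.topics.enable setting, the test topic may be created automatically on the first write (the LEADER_NOT_AVAILABLE warning in the producer output further down is typical of that). To control partitions and replication explicitly, the topic can be created up front with kafka-topics.sh; the partition and replication-factor values below are assumptions, not part of the original test:

```bash
# Create the benchmark topic ahead of time
# (partition and replication-factor values are assumptions)
./kafka-topics.sh --create \
  --topic test_abcdocker_perf \
  --partitions 3 \
  --replication-factor 3 \
  --bootstrap-server 1.1.1.1:9092
```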
When SASL is enabled on the cluster, the perf-test scripts must be pointed at a client configuration file:

```bash
# Producer test: 100,000 (10W) messages
./kafka-producer-perf-test.sh --topic test_abcdocker_perf --num-records 100000 --record-size 1000 --throughput 2000 --producer-props bootstrap.servers=192.168.31.70:9092,192.168.31.71:9092,192.168.31.72:9092 --producer.config /ssl/kafka.config

# Consumer test: 100,000 (10W) messages
./kafka-consumer-perf-test.sh --broker-list 192.168.31.70:9092,192.168.31.71:9092,192.168.31.72:9092 --topic test_abcdocker_perf --fetch-size 1048576 --messages 100000 --threads 1 --consumer.config /ssl/kafka.config
```
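What goes into /ssl/kafka.config depends on how SASL was configured on the cluster. A minimal sketch assuming SASL/SCRAM-SHA-256 over plaintext (the mechanism, username, and password are placeholders, not values from the original setup):

```bash
# Sketch of a client-side SASL config; mechanism and credentials are assumptions
cat > /ssl/kafka.config <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="perf-test" password="changeme";
EOF
```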
Producer command parameters

Parameters of the kafka-producer-perf-test.sh command (using the 1,000,000-message (100W) run from the table above as the example):

- `--topic`: topic name, here test_abcdocker_perf
- `--num-records`: total number of messages to send, here 1,000,000
- `--record-size`: size of each record in bytes, here 2,000
- `--throughput`: throttle the send rate to this many records per second, here 5,000; set it to -1 to disable throttling
- `--producer-props bootstrap.servers=localhost:9092`: broker connection string (multiple brokers can be separated by commas)
- `--producer.config /ssl/kafka.config`: path to the producer configuration file when SASL authentication is enabled
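To measure the cluster's ceiling rather than hold a fixed rate, the throttle can be disabled with --throughput -1, and ordinary producer properties can be passed through --producer-props. The acks/linger.ms/batch.size values below are illustrative assumptions, not tuned recommendations:

```bash
# Unthrottled producer run with a few assumed producer tunables
./kafka-producer-perf-test.sh \
  --topic test_abcdocker_perf \
  --num-records 1000000 \
  --record-size 1000 \
  --throughput -1 \
  --producer-props bootstrap.servers=1.1.1.1:9092 acks=1 linger.ms=10 batch.size=65536
```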
Consumer command parameters

Parameters of the kafka-consumer-perf-test.sh command:

- `--broker-list`: Kafka connection string, here localhost:9092
- `--topic`: topic name, here test_abcdocker_perf, i.e. the topic the producer test wrote to
- `--fetch-size`: size of each fetch request in bytes, here 1048576 (1 MB)
- `--messages`: total number of messages to consume, here 1,000,000 (100W)
- `--consumer.config /ssl/kafka.config`: path to the consumer configuration file when SASL authentication is enabled
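For longer runs it can be helpful to print throughput per reporting interval instead of a single summary row. Both flags below are part of kafka-consumer-perf-test.sh; the 5-second interval is an assumption:

```bash
# Print per-interval statistics every 5 seconds instead of one summary row
./kafka-consumer-perf-test.sh \
  --broker-list 1.1.1.1:9092 \
  --topic test_abcdocker_perf \
  --fetch-size 1048576 \
  --messages 1000000 \
  --show-detailed-stats \
  --reporting-interval 5000
```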
Running the load test

Test environment:

- kafka_broker: kafka_2.13-3.9.0
- cmak: 3.10
Producer load test

Simulate a load test with 100,000 (10W) messages:
```
[root@zook01 bin]# ./kafka-producer-perf-test.sh --topic test_abcdocker_perf --num-records 100000 --record-size 1000 --throughput 2000 --producer-props bootstrap.servers=192.168.31.70:9092,192.168.31.71:9092,192.168.31.72:9092 --producer.config /ssl/kafka.config
[2024-12-25 17:20:34,018] WARN [Producer clientId=perf-producer-client] The metadata response from the cluster reported a recoverable issue with correlation id 1 : {test_abcdocker_perf=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
9979 records sent, 1995.4 records/sec (1.90 MB/sec), 76.5 ms avg latency, 867.0 ms max latency.
10015 records sent, 2003.0 records/sec (1.91 MB/sec), 4.0 ms avg latency, 40.0 ms max latency.
9733 records sent, 1943.9 records/sec (1.85 MB/sec), 2.9 ms avg latency, 35.0 ms max latency.
10293 records sent, 2057.8 records/sec (1.96 MB/sec), 34.0 ms avg latency, 762.0 ms max latency.
9992 records sent, 1998.0 records/sec (1.91 MB/sec), 2.4 ms avg latency, 42.0 ms max latency.
9782 records sent, 1942.4 records/sec (1.85 MB/sec), 1.7 ms avg latency, 72.0 ms max latency.
10302 records sent, 2060.0 records/sec (1.96 MB/sec), 18.8 ms avg latency, 774.0 ms max latency.
10002 records sent, 2000.0 records/sec (1.91 MB/sec), 1.9 ms avg latency, 45.0 ms max latency.
9880 records sent, 1975.2 records/sec (1.88 MB/sec), 1.5 ms avg latency, 12.0 ms max latency.
100000 records sent, 1998.800720 records/sec (1.91 MB/sec), 16.11 ms avg latency, 867.00 ms max latency, 2 ms 50th, 39 ms 95th, 400 ms 99th, 740 ms 99.9th.
```
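A single run like the one above only exercises one message size. A simple way to compare sizes is to repeat the run in a loop; the record sizes and record count below are assumptions chosen for illustration:

```bash
# Compare producer throughput across several assumed record sizes
for size in 500 1000 2000 4000; do
  echo "=== record-size ${size} bytes ==="
  ./kafka-producer-perf-test.sh \
    --topic test_abcdocker_perf \
    --num-records 100000 \
    --record-size "${size}" \
    --throughput -1 \
    --producer-props bootstrap.servers=192.168.31.70:9092 \
    --producer.config /ssl/kafka.config
done
```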
Interpreting the producer results

This run was performed with SASL authentication enabled.
100000 records sent, 1998.800720 records/sec (1.91 MB/sec), 16.11 ms avg latency, 867.00 ms max latency, 2 ms 50th, 39 ms 95th, 400 ms 99th, 740 ms 99.9th.
- 100000 records sent: a total of 100,000 messages were written.
- 1998.800720 records/sec (1.91 MB/sec): throughput; on average 1,998.8 messages (1.91 MB) were written to Kafka per second.
- 16.11 ms avg latency: the average write latency was 16.11 ms.
- 867.00 ms max latency, 2 ms 50th, 39 ms 95th, 400 ms 99th, 740 ms 99.9th: the maximum latency was 867 ms, and the 50th/95th/99th/99.9th percentile latencies were 2 ms, 39 ms, 400 ms, and 740 ms respectively.
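The MB/sec figure follows directly from the record rate and the record size: 1,998.8 records/sec times 1,000 bytes per record is about 1.9988 MB of payload per second, which matches the reported 1.91 MB/sec once converted with 1 MB = 1,048,576 bytes. A one-liner to reproduce the conversion:

```bash
# Sanity check: records/sec * record-size in bytes, converted to MiB/sec
awk 'BEGIN { printf "%.2f MB/sec\n", 1998.800720 * 1000 / (1024 * 1024) }'
# prints: 1.91 MB/sec
```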
Consumer load test

Simulate consuming 100,000 (10W) messages:
```
[root@zook01 bin]# ./kafka-consumer-perf-test.sh --broker-list 192.168.31.70:9092,192.168.31.71:9092,192.168.31.72:9092 --topic test_abcdocker_perf --fetch-size 1048576 --messages 100000 --threads 1 --consumer.config /ssl/kafka.config
WARNING: option [threads] and [num-fetch-threads] have been deprecated and will be ignored by the test
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2024-12-25 17:37:21:369, 2024-12-25 17:37:25:517, 95.3674, 22.9912, 100000, 24108.0039, 3589, 559, 170.6036, 178890.8766
```
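After the run, the consumer group created by the test can be inspected with kafka-consumer-groups.sh. The group name below is a placeholder: unless group.id is set in the consumer config file, the perf-test tool generates its own group id, which --list will reveal. With SASL enabled, the same client config file is passed via --command-config:

```bash
# List consumer groups on the cluster
./kafka-consumer-groups.sh \
  --bootstrap-server 192.168.31.70:9092 \
  --command-config /ssl/kafka.config \
  --list

# Describe one group to see per-partition offsets and lag
# (replace perf-consumer-12345 with a group name reported by --list)
./kafka-consumer-groups.sh \
  --bootstrap-server 192.168.31.70:9092 \
  --command-config /ssl/kafka.config \
  --describe --group perf-consumer-12345
```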
Interpreting the consumer results

This run was performed with the consumer authenticating via SASL.
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2024-12-25 17:37:21:369, 2024-12-25 17:37:25:517, 95.3674, 22.9912, 100000, 24108.0039, 3589, 559, 170.6036, 178890.8766
- `start.time, end.time`: start and end time of the run, 2024-12-25 17:37:21:369 to 2024-12-25 17:37:25:517
- `data.consumed.in.MB`: total amount of data consumed, 95.3674 MB
- `MB.sec`: overall throughput, 22.9912 MB/sec
- `data.consumed.in.nMsg`: total number of messages consumed, 100,000
- `nMsg.sec`: messages consumed per second, 24,108.0039
- `rebalance.time.ms`: time spent on consumer group rebalancing, 3,589 ms
- `fetch.time.ms`: time spent actually fetching data, 559 ms
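The gap between MB.sec and fetch.MB.sec is explained by the rebalance time: the 3,589 ms spent rebalancing plus the 559 ms spent fetching account for the full run of roughly 4,148 ms. MB.sec divides the 95.3674 MB by the whole run, while fetch.MB.sec divides it only by the fetch time. Both figures can be reproduced directly:

```bash
# Overall throughput: total MB over the full run time (~4.148 s including rebalance)
awk 'BEGIN { printf "%.4f MB/sec overall\n", 95.3674 / 4.148 }'
# prints: 22.9912 MB/sec overall

# Fetch-only throughput: total MB over fetch.time.ms (559 ms)
awk 'BEGIN { printf "%.4f MB/sec fetch-only\n", 95.3674 / 0.559 }'
# prints: 170.6036 MB/sec fetch-only
```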