In this chapter we will actually start the Kafka cluster. First, here is a list of Broker configuration options. Copy it three times and use it as the Broker configuration file on each of the three Alibaba Cloud ECS instances:
############################# Server Basics #############################
broker.id=0
delete.topic.enable=true
auto.create.topics.enable=true

############################# Socket Server Settings #############################
listeners=EXTERNAL://<ECS internal IP>:9092,INTERNAL://<ECS internal IP>:9093
listener.security.protocol.map=EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
advertised.listeners=EXTERNAL://<ECS public IP>:9092,INTERNAL://<ECS internal IP>:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

############################# Log Basics #############################
log.dirs=/root/kafka_2.12-2.0.0/data/kafka
num.partitions=1
num.recovery.threads.per.data.dir=1
default.replication.factor=3
min.insync.replicas=2
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Retention Policy #############################
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.segment.ms=604800000

############################# Zookeeper #############################
zookeeper.connect=zookeeper.server.1:2181,zookeeper.server.2:2181,zookeeper.server.3:2181
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0

############################# Message #############################
message.max.bytes=1048576
fetch.message.max.bytes=1048576
Two things in the list above must be changed for each Broker:
broker.id — it must be unique within the cluster, so set it to 0, 1, and 2 on the three instances respectively.
The IP placeholders in listeners and advertised.listeners — fill in each instance's own internal IP, and its public IP for the EXTERNAL advertised listener.
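Rather than editing each of the three copies by hand, the two per-broker edits can be scripted. Below is a minimal sketch; the template contents and the IP addresses are placeholders invented for illustration, not the article's real addresses:

```shell
#!/bin/sh
# Sketch: generate one server.properties per broker from a shared template.
# The template here is trimmed to the per-broker lines only; in practice it
# would hold the full configuration listed above.
set -e

cat > server.properties.template <<'EOF'
broker.id=BROKER_ID
listeners=EXTERNAL://INTERNAL_IP:9092,INTERNAL://INTERNAL_IP:9093
advertised.listeners=EXTERNAL://PUBLIC_IP:9092,INTERNAL://INTERNAL_IP:9093
EOF

i=0
# Each entry is "internal_ip public_ip" for one broker (example addresses).
for ips in "172.16.0.1 203.0.113.1" "172.16.0.2 203.0.113.2" "172.16.0.3 203.0.113.3"; do
  internal=${ips% *}   # part before the space
  public=${ips#* }     # part after the space
  sed -e "s/BROKER_ID/$i/" \
      -e "s/INTERNAL_IP/$internal/g" \
      -e "s/PUBLIC_IP/$public/g" \
      server.properties.template > "server-$i.properties"
  i=$((i + 1))
done
```

Each generated file (server-0.properties, server-1.properties, server-2.properties) can then be copied to its ECS instance as config/server.properties.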
Then start each Kafka Broker with the following command:
kafka_2.12-2.0.0/bin/kafka-server-start.sh kafka_2.12-2.0.0/config/server.properties &
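Besides watching the startup logs, broker registration can be checked directly in ZooKeeper. A quick sketch against a running cluster, using the zookeeper-shell.sh tool shipped with Kafka:

```shell
# List the broker ids currently registered with the ZooKeeper ensemble.
# Requires the cluster from this tutorial to be up and reachable.
kafka_2.12-2.0.0/bin/zookeeper-shell.sh zookeeper.server.1:2181 ls /brokers/ids
```

If all three Brokers came up, the last line of output should list the ids [0, 1, 2].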
If none of the three Brokers reports any errors, the Kafka cluster has most likely been deployed successfully. Let's verify that. First, create a Topic:
kafka_2.12-2.0.0/bin/kafka-topics.sh --zookeeper zookeeper.server.1:2181 --topic my_topic_in_cluster --create --partitions 3 --replication-factor 2
The command above conveys several pieces of information:
The Topic is named my_topic_in_cluster.
--partitions 3 creates the Topic with three Partitions.
--replication-factor 2 gives each Partition two replicas.
--zookeeper points the tool at our ZooKeeper ensemble.
If the Kafka cluster is working, then in theory these six Partition replicas (3 Partitions × 2 replicas each) should be evenly distributed, two per Broker.
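Before inspecting each machine's data directory by hand (as we do next), the assignment can also be read directly from Kafka with the same CLI tool; this requires the cluster to be running:

```shell
# Ask Kafka where each Partition (leader and replicas) of the new Topic lives.
kafka_2.12-2.0.0/bin/kafka-topics.sh --zookeeper zookeeper.server.1:2181 \
    --topic my_topic_in_cluster --describe
```

Each of the three Partition lines in the output shows a Leader broker id and a Replicas set of size 2, which should match the directory listings we examine below.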
Connect to the ECS instance hosting Broker-0 and enter Kafka's data directory:
cd /kafka_2.12-2.0.0/data/kafka
/kafka_2.12-2.0.0/data/kafka# ls
__consumer_offsets-0  __consumer_offsets-1  …  __consumer_offsets-49
cleaner-offset-checkpoint
configured-topic-0  configured-topic-1  configured-topic-2
first_topic-0  first_topic-1  first_topic-2
log-start-offset-checkpoint
meta.properties
my_topic_in_cluster-0  my_topic_in_cluster-2
recovery-point-offset-checkpoint
replication-offset-checkpoint
with_keys_topic-0  with_keys_topic-1  with_keys_topic-2
As you can see, Broker-0 was assigned Partition-0 and Partition-2 of my_topic_in_cluster.
Likewise, connect to the ECS instance hosting Broker-1 and enter Kafka's data directory:
cd /kafka_2.12-2.0.0/data/kafka
/kafka_2.12-2.0.0/data/kafka# ls
cleaner-offset-checkpoint
log-start-offset-checkpoint
meta.properties
my_topic_in_cluster-0  my_topic_in_cluster-1
recovery-point-offset-checkpoint
replication-offset-checkpoint
As you can see, Broker-1 was assigned Partition-0 and Partition-1 of my_topic_in_cluster.
Likewise, connect to the ECS instance hosting Broker-2 and enter Kafka's data directory:
cd /kafka_2.12-2.0.0/data/kafka
/kafka_2.12-2.0.0/data/kafka# ls
cleaner-offset-checkpoint
log-start-offset-checkpoint
meta.properties
my_topic_in_cluster-1  my_topic_in_cluster-2
recovery-point-offset-checkpoint
replication-offset-checkpoint
As you can see, Broker-2 was assigned Partition-1 and Partition-2 of my_topic_in_cluster.
These results confirm that our Kafka cluster was deployed successfully.
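As a final end-to-end check, we can send a message through the cluster and read it back using the console producer and consumer, connecting through any Broker's EXTERNAL listener. A sketch against the running cluster; substitute one instance's actual public IP for the placeholder:

```shell
# Produce one test message to the new Topic.
echo "hello cluster" | kafka_2.12-2.0.0/bin/kafka-console-producer.sh \
    --broker-list <ECS public IP>:9092 --topic my_topic_in_cluster

# Consume it back from the beginning of the Topic, stopping after one message.
kafka_2.12-2.0.0/bin/kafka-console-consumer.sh \
    --bootstrap-server <ECS public IP>:9092 --topic my_topic_in_cluster \
    --from-beginning --max-messages 1
```

If "hello cluster" comes back, produce, replicate, and consume all work across the three Brokers.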