I covered a single-node Kafka deployment earlier; this time I am testing a Kafka + ZooKeeper cluster locally. The current node layout is as follows:
192.168.128.134 master (kafka+zookeeper)
192.168.128.135 node1 (kafka+zookeeper)
192.168.128.137 node2 (kafka+zookeeper)
ZooKeeper must run on every node so that the nodes can monitor and communicate with one another.
Download the ZooKeeper and Kafka release packages:
wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz
Configure the ZooKeeper cluster on the master node
tar -zxvf zookeeper-3.4.12.tar.gz
mv zookeeper-3.4.12 /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
# edit the ZooKeeper configuration file
vi zoo.cfg
# the settings that need to be changed:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=192.168.128.134:2888:3888
server.2=192.168.128.135:2888:3888
server.3=192.168.128.137:2888:3888
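For reference, `tickTime` is ZooKeeper's base time unit in milliseconds, and `initLimit` and `syncLimit` are multiples of it, so the values above translate into concrete timeouts like this (a quick check, using the numbers from the config):

```shell
# tickTime is the base unit (ms); initLimit and syncLimit are tick multiples.
tickTime=2000
initLimit=10
syncLimit=5
echo "follower init timeout: $(( tickTime * initLimit )) ms"   # 20000 ms
echo "leader-follower sync timeout: $(( tickTime * syncLimit )) ms"   # 10000 ms
```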
Create the dataDir directory /tmp/zookeeper and its myid file:
mkdir /tmp/zookeeper
touch /tmp/zookeeper/myid
echo 1 > /tmp/zookeeper/myid
Copy the master's ZooKeeper directory and files to the other two nodes.
On node1 and node2, also create the dataDir directory /tmp/zookeeper. Everything else in the configuration is identical; only the myid value differs:
echo 2 > /tmp/zookeeper/myid    # myid on node1 (server.2)
echo 3 > /tmp/zookeeper/myid    # myid on node2 (server.3)
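The dataDir/myid steps are identical on every node except for the id, so they can be wrapped in a small helper; a minimal sketch (the function name is mine, the path matches the dataDir above, and the id passed in must match that host's server.N entry in zoo.cfg):

```shell
# Hypothetical helper: create the ZooKeeper dataDir and write its myid.
# The id must match the server.N line for this host in zoo.cfg.
setup_zk_datadir() {
  local dir=$1 id=$2
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"
}

setup_zk_datadir /tmp/zookeeper 1   # on master (server.1)
```

On node1 you would call it with id 2, and on node2 with id 3.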
Start ZooKeeper on each of the three nodes:
# start zookeeper
./bin/zkServer.sh start
# check the zookeeper status
./bin/zkServer.sh status
# the node that shows "Mode: leader" is the leader
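A three-node ensemble stays available as long as a majority of the servers is up; the quorum size is floor(n/2)+1, which is why this cluster can survive one node failure:

```shell
# Quorum for an n-server ZooKeeper ensemble: floor(n/2) + 1 servers.
n=3
quorum=$(( n / 2 + 1 ))
echo "quorum: $quorum"   # 2, so one of the three servers may fail
```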
Configuring the Kafka cluster
tar -zxvf kafka_2.11-1.1.0.tgz
mv kafka_2.11-1.1.0 /usr/local/kafka
Edit the configuration file:
cd /usr/local/kafka/config
vi server.properties
broker.id=0
listeners=PLAINTEXT://192.168.128.134:9092
advertised.listeners=PLAINTEXT://192.168.128.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka/logs/kafka
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# ZooKeeper connection
zookeeper.connect=192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# allow topics to be deleted
delete.topic.enable=true
Copy the master's Kafka directory and files to the other two nodes.
Each node then only needs a few values changed:
# changes on node1
broker.id=1
listeners=PLAINTEXT://192.168.128.135:9092
advertised.listeners=PLAINTEXT://192.168.128.135:9092
# changes on node2
broker.id=2
listeners=PLAINTEXT://192.168.128.137:9092
advertised.listeners=PLAINTEXT://192.168.128.137:9092
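Rather than editing each copy by hand, the per-node changes can be applied with a GNU sed one-liner; a minimal sketch (the helper name is mine, the property names come from server.properties above, and a stand-in file is used here for illustration):

```shell
# Hypothetical helper: set broker.id and the listener IP in a node's
# copy of server.properties (GNU sed -i).
patch_kafka_config() {
  local file=$1 id=$2 ip=$3
  sed -i \
    -e "s|^broker.id=.*|broker.id=$id|" \
    -e "s|^listeners=.*|listeners=PLAINTEXT://$ip:9092|" \
    -e "s|^advertised.listeners=.*|advertised.listeners=PLAINTEXT://$ip:9092|" \
    "$file"
}

# Demonstrate against a minimal stand-in file, using node1's values:
printf 'broker.id=0\nlisteners=PLAINTEXT://192.168.128.134:9092\nadvertised.listeners=PLAINTEXT://192.168.128.134:9092\n' > /tmp/server.properties
patch_kafka_config /tmp/server.properties 1 192.168.128.135
grep '^broker.id' /tmp/server.properties   # broker.id=1
```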
Then start Kafka on all three machines:
./bin/kafka-server-start.sh config/server.properties &
The cluster is now configured, but we still need to verify that the ZooKeeper + Kafka cluster actually works.
Create a topic
./bin/kafka-topics.sh --create --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --replication-factor 3 --partitions 3 --topic test
# describe the topic
./bin/kafka-topics.sh --describe --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
# list all topics
./bin/kafka-topics.sh --list --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
test
Create a producer
Test producing messages on the master node:
./bin/kafka-console-producer.sh --broker-list 192.168.128.134:9092 --topic test
>hello world
[2018-04-03 12:18:25,545] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
this is example ...
[2018-04-03 12:19:16,342] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-2. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china
[2018-04-03 12:20:53,141] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
Create a consumer
Test consuming on node1:
./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
this is example ...
hello world
[2018-04-03 12:20:53,145] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china
Test consuming on node2:
./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
welcome to china
hello world
this is example ...
Now type messages into the producer; the same content shows up in the consumers, which confirms that consumption works.
Delete the topic and shut down the services
./bin/kafka-topics.sh --delete --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
Original post: https://sulao.cn/post/516