Deploying a ZooKeeper + Kafka Cluster on Linux

I previously wrote up a single-node Kafka deployment; this time I'm testing a Kafka + ZooKeeper cluster locally. The current nodes are:

192.168.128.134 master (kafka+zookeeper)
192.168.128.135 node1 (kafka+zookeeper)
192.168.128.137 node2 (kafka+zookeeper)

ZooKeeper needs to be deployed on every node so the nodes can monitor and communicate with each other.

Download the ZooKeeper and Kafka release packages

wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz

Configure the ZooKeeper cluster on the master node

tar -zxvf zookeeper-3.4.12.tar.gz
mv zookeeper-3.4.12 /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
#Edit the ZooKeeper configuration file
vi zoo.cfg
#The settings to modify are as follows
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=192.168.128.134:2888:3888
server.2=192.168.128.135:2888:3888
server.3=192.168.128.137:2888:3888
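In the server.N lines, N is the server id, 2888 is the quorum port, and 3888 is the leader-election port. As a quick illustration of how these entries fit together, here is a small Python sketch (a hypothetical helper, not part of ZooKeeper) that parses them:

```python
# Parse the server.N entries from a zoo.cfg (hypothetical helper for illustration).
def parse_ensemble(cfg_text):
    servers = {}
    for line in cfg_text.splitlines():
        line = line.strip()
        if line.startswith("server."):
            key, value = line.split("=", 1)
            sid = int(key.split(".")[1])              # server id (must match myid)
            host, quorum_port, election_port = value.split(":")
            servers[sid] = (host, int(quorum_port), int(election_port))
    return servers

cfg = """\
server.1=192.168.128.134:2888:3888
server.2=192.168.128.135:2888:3888
server.3=192.168.128.137:2888:3888
"""
print(parse_ensemble(cfg)[2])  # ('192.168.128.135', 2888, 3888)
```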

Create the dataDir directory /tmp/zookeeper and the myid file:

mkdir /tmp/zookeeper
touch /tmp/zookeeper/myid
echo 1 > /tmp/zookeeper/myid

Copy the ZooKeeper directory and files from master to the other two nodes.

On node1 and node2, also create the dataDir directory /tmp/zookeeper. All other configuration stays the same; only the myid value differs:

echo 2 > /tmp/zookeeper/myid #myid on node1 (server.2)
echo 3 > /tmp/zookeeper/myid #myid on node2 (server.3)
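The value written to myid must match the N of the server.N entry whose address is that node's own IP. The mapping can be sketched like this (illustration only; the real file is created with `echo N > /tmp/zookeeper/myid` as shown above):

```python
# Map each node's IP to the myid it must write (illustrative helper).
ENSEMBLE = {
    1: "192.168.128.134",  # master
    2: "192.168.128.135",  # node1
    3: "192.168.128.137",  # node2
}

def myid_for(ip):
    for sid, host in ENSEMBLE.items():
        if host == ip:
            return sid
    raise ValueError(f"{ip} is not in the ensemble")

print(myid_for("192.168.128.137"))  # 3
```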

Start ZooKeeper on each of the three nodes:

#Start ZooKeeper
./bin/zkServer.sh start
#Check ZooKeeper status
./bin/zkServer.sh status
#The node that shows "Mode: leader" is the leader
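Exactly one node should report leader; the others report follower. A small sketch that classifies a node from the `zkServer.sh status` output (the sample string is an assumption based on typical ZooKeeper 3.4.x output):

```python
# Classify a node from the output of `zkServer.sh status`
# (output format assumed from ZooKeeper 3.4.x; illustration only).
def zk_mode(status_output):
    for line in status_output.splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()  # "leader" or "follower"
    return None  # no Mode line found (e.g. server not running)

sample = "ZooKeeper JMX enabled by default\nMode: leader"
print(zk_mode(sample))  # leader
```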

Configure the Kafka cluster

tar -zxvf kafka_2.11-1.1.0.tgz
mv kafka_2.11-1.1.0 /usr/local/kafka

Modify the configuration file

cd /usr/local/kafka/config
vi server.properties
broker.id=0
listeners=PLAINTEXT://192.168.128.134:9092
advertised.listeners=PLAINTEXT://192.168.128.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka/logs/kafka
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# ZooKeeper connection
zookeeper.connect=192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# Allow topic deletion
delete.topic.enable=true

Copy the Kafka directory and files from master to the other two nodes.

Only a few settings need to change on each node:

#Changes on node1
broker.id=1
listeners=PLAINTEXT://192.168.128.135:9092
advertised.listeners=PLAINTEXT://192.168.128.135:9092
#Changes on node2
broker.id=2
listeners=PLAINTEXT://192.168.128.137:9092
advertised.listeners=PLAINTEXT://192.168.128.137:9092
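broker.id must be unique per broker, and the listeners must advertise that node's own address; everything else in server.properties is copied verbatim. The per-node overrides above can be sketched as (hypothetical helper for illustration):

```python
# Render the per-node Kafka overrides; the rest of server.properties
# is identical on all three brokers (illustrative sketch).
NODES = [
    (0, "192.168.128.134"),  # master
    (1, "192.168.128.135"),  # node1
    (2, "192.168.128.137"),  # node2
]

def overrides(broker_id, ip):
    return (f"broker.id={broker_id}\n"
            f"listeners=PLAINTEXT://{ip}:9092\n"
            f"advertised.listeners=PLAINTEXT://{ip}:9092\n")

print(overrides(*NODES[1]))
```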

Then start Kafka on all three machines:

./bin/kafka-server-start.sh config/server.properties &

The cluster is now configured, but we still need to verify that the ZooKeeper + Kafka cluster actually works.

Create a topic

./bin/kafka-topics.sh --create --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --replication-factor 3 --partitions 3 --topic test
Describe the topic
./bin/kafka-topics.sh --describe --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
List topics
./bin/kafka-topics.sh --list --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
test
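Note that the --zookeeper connect string must be a single comma-separated argument with no spaces; a space after a comma splits it into separate shell arguments. A tiny sketch of building it correctly:

```python
# Build a ZooKeeper connect string: comma-separated, no spaces
# (a space after a comma would split the shell argument).
def zk_connect(hosts, port=2181):
    return ",".join(f"{h}:{port}" for h in hosts)

print(zk_connect(["192.168.128.134", "192.168.128.135", "192.168.128.137"]))
# 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
```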

Create a producer

Test producing messages on the master node:

./bin/kafka-console-producer.sh --broker-list 192.168.128.134:9092 --topic test
>hello world
[2018-04-03 12:18:25,545] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
this is example ...
[2018-04-03 12:19:16,342] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-2. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china
[2018-04-03 12:20:53,141] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)

Create a consumer

Test consumption on node1:

./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
this is example ...
hello world
[2018-04-03 12:20:53,145] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china

Test consumption on node2:

./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
welcome to china
hello world
this is example ...

Then type messages into the producer; the same content appears in the consumers, which shows that consumption works.

Delete the topic and shut down the services

./bin/kafka-topics.sh --delete --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
#Stop Kafka, then ZooKeeper, on each node
./bin/kafka-server-stop.sh
./bin/zkServer.sh stop


Copyright notice: unless otherwise stated, articles on this site are original.

Please credit the source when reposting: https://sulao.cn/post/519.html