Deploying a Zookeeper + Kafka Cluster on Linux

I previously covered a single-node Kafka deployment; this time I set up a local Kafka + Zookeeper cluster for testing. The current nodes are as follows:

192.168.128.134 master (kafka+zookeeper)
192.168.128.135 node1 (kafka+zookeeper)
192.168.128.137 node2 (kafka+zookeeper)

Zookeeper must be deployed on every node so that the nodes can check on and communicate with each other.

Download the Kafka and Zookeeper release packages

wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz

Configure the Zookeeper cluster on the master node

tar -zxvf zookeeper-3.4.12.tar.gz
mv zookeeper-3.4.12 /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
# edit the zookeeper configuration file
vi zoo.cfg
# the settings to configure are as follows
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
# server.N=host:quorumPort:leaderElectionPort
server.1=192.168.128.134:2888:3888
server.2=192.168.128.135:2888:3888
server.3=192.168.128.137:2888:3888

Create the dataDir directory /tmp/zookeeper and the myid file

mkdir /tmp/zookeeper
touch /tmp/zookeeper/myid
echo 1 > /tmp/zookeeper/myid

Copy the Zookeeper directory and files from master to the other two nodes.
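For example, a minimal copy with scp, assuming root SSH access to the other nodes:

# copy the configured zookeeper installation from master to node1 and node2
scp -r /usr/local/zookeeper root@192.168.128.135:/usr/local/
scp -r /usr/local/zookeeper root@192.168.128.137:/usr/local/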

On node1 and node2 you must also create the dataDir directory /tmp/zookeeper. All other configuration stays the same; only the content of myid differs:

echo 2 > /tmp/zookeeper/myid # myid on node1 (server.2)
echo 3 > /tmp/zookeeper/myid # myid on node2 (server.3)

Start Zookeeper on each of the three nodes

# start zookeeper
./bin/zkServer.sh start
# check zookeeper status
./bin/zkServer.sh status
# the node that prints "Mode: leader" is the leader
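You can also query each server directly with the stat four-letter command; a quick sketch assuming nc is installed (these commands are allowed by default on 3.4.x):

# each server should answer with its mode (leader/follower) and connection stats
echo stat | nc 192.168.128.134 2181
echo stat | nc 192.168.128.135 2181
echo stat | nc 192.168.128.137 2181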

Configure the Kafka cluster

tar -zxvf kafka_2.11-1.1.0.tgz
mv kafka_2.11-1.1.0 /usr/local/kafka

Edit the configuration file

cd /usr/local/kafka/config
vi server.properties
broker.id=0
listeners=PLAINTEXT://192.168.128.134:9092
advertised.listeners=PLAINTEXT://192.168.128.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka/logs/kafka
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# zookeeper connection addresses
zookeeper.connect=192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# allow topics to be deleted
delete.topic.enable=true
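Kafka normally creates the log.dirs path on startup, but creating it up front surfaces permission problems early; a small optional step:

# pre-create the data directory referenced by log.dirs
mkdir -p /usr/local/kafka/logs/kafka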

Copy the Kafka directory and files from master to the other two nodes.
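As with Zookeeper, scp works; a sketch that loops over the remaining nodes, again assuming root SSH access:

# copy the configured kafka installation to node1 and node2
for host in 192.168.128.135 192.168.128.137; do
  scp -r /usr/local/kafka root@$host:/usr/local/
done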

On each node only a few settings then need to change:

# changes on node1
broker.id=1
listeners=PLAINTEXT://192.168.128.135:9092
advertised.listeners=PLAINTEXT://192.168.128.135:9092
# changes on node2
broker.id=2
listeners=PLAINTEXT://192.168.128.137:9092
advertised.listeners=PLAINTEXT://192.168.128.137:9092
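If you prefer to script those per-node edits, a hypothetical sed sketch for node1 (the patterns assume the master values shown above; use broker.id=2 and 192.168.128.137 on node2):

# bump the broker id and rewrite the listener address in place
sed -i 's/^broker.id=0/broker.id=1/' /usr/local/kafka/config/server.properties
sed -i 's/192.168.128.134:9092/192.168.128.135:9092/g' /usr/local/kafka/config/server.properties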

Then start Kafka on all three machines

./bin/kafka-server-start.sh config/server.properties &
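To confirm that all three brokers registered with Zookeeper, a quick check with zkCli (a sketch; /brokers/ids is where Kafka registers its broker ids):

# should list the configured broker ids, e.g. [0, 1, 2]
/usr/local/zookeeper/bin/zkCli.sh -server 192.168.128.134:2181 ls /brokers/ids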

With that, the cluster is configured, but we still need to test whether the Zookeeper + Kafka cluster actually works.

Create a topic

./bin/kafka-topics.sh --create --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --replication-factor 3 --partitions 3 --topic test
# describe the topic
./bin/kafka-topics.sh --describe --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
# list topics
./bin/kafka-topics.sh --list --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181
test

Create a producer

On the master node, test producing messages:

./bin/kafka-console-producer.sh --broker-list 192.168.128.134:9092 --topic test
>hello world
[2018-04-03 12:18:25,545] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
this is example ...
[2018-04-03 12:19:16,342] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-2. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china
[2018-04-03 12:20:53,141] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)

Create a consumer

On node1, test consuming:

./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
this is example ...
hello world
[2018-04-03 12:20:53,145] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset:-1} for Partition: test-1. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
welcome to china

On node2, test consuming:

./bin/kafka-console-consumer.sh --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
welcome to china
hello world
this is example ...

When you type messages into the producer, the same content shows up in the consumers, which confirms that consumption works. The two consumers print the messages in different orders because the topic has three partitions and Kafka only guarantees ordering within a partition.
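The deprecation warning above suggests the new consumer; the equivalent test going through a broker instead of Zookeeper would be:

./bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.134:9092 --topic test --from-beginning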

Delete the topic and shut down the services

# works because delete.topic.enable=true was set in server.properties above
./bin/kafka-topics.sh --delete --zookeeper 192.168.128.134:2181,192.168.128.135:2181,192.168.128.137:2181 --topic test
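To shut down the services, stop Kafka before Zookeeper on each node:

# stop the kafka broker, then the local zookeeper server
/usr/local/kafka/bin/kafka-server-stop.sh
/usr/local/zookeeper/bin/zkServer.sh stop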

