Kafka Cluster Setup [Hungry? Let me feed you]

May everyone one day take off from what they've learned by example.

Environment Setup Overview

Host Planning

  • 192.168.198.131 - swarm01
  • 192.168.198.132 - swarm02
  • 192.168.198.133 - swarm03

Download links [the page can be slow to load; you can also download directly from the address in the notes]

  • kafka: http://kafka.apache.org/downloads
  • If you know what you're doing, just follow the official documentation
    • http://kafka.apache.org/documentation/

Install Location

  • 192.168.198.131 /opt/kafka
  • 192.168.198.132 /opt/kafka
  • 192.168.198.133 /opt/kafka

Configuration Changes

  • Broker ID changes [recommended: reuse your ZooKeeper IDs to avoid conflicts or duplicates]

Host 131

[root@swarm01 config]# pwd
/opt/kafka/config
[root@swarm01 config]# ls
connect-console-sink.properties    connect-file-source.properties  log4j.properties        trogdor.conf
connect-console-source.properties  connect-log4j.properties        producer.properties     zookeeper.properties
connect-distributed.properties     connect-standalone.properties   server.properties
connect-file-sink.properties       consumer.properties             tools-log4j.properties
[root@swarm01 config]# vi server.properties
---------------------------------------------------------------
broker.id=1
log.dirs=/opt/kafka/logs
zookeeper.connect=swarm01:2181,swarm02:2181,swarm03:2181
---------------------------------------------------------------


Host 132
---------------------------------------------------------------
broker.id=2
log.dirs=/opt/kafka/logs
zookeeper.connect=swarm01:2181,swarm02:2181,swarm03:2181
---------------------------------------------------------------

Host 133
---------------------------------------------------------------
broker.id=3
log.dirs=/opt/kafka/logs
zookeeper.connect=swarm01:2181,swarm02:2181,swarm03:2181
---------------------------------------------------------------

Note: all three settings above must be adjusted to match your own environment
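The note above can be turned into a quick sanity check: every broker.id must be unique across the cluster, and zookeeper.connect should be identical on all hosts. A minimal sketch (the helper name and the idea of feeding it the three files' contents are mine, not part of the Kafka distribution):

```python
def check_broker_configs(configs):
    """configs: list of server.properties contents, one string per host.

    Verifies that broker.id is unique on every host and that
    zookeeper.connect is the same everywhere.
    """
    ids, zk = [], set()
    for text in configs:
        props = dict(
            line.split("=", 1)
            for line in text.splitlines()
            if line and not line.startswith("#") and "=" in line
        )
        ids.append(props.get("broker.id"))
        zk.add(props.get("zookeeper.connect"))
    return {"unique_ids": len(ids) == len(set(ids)), "same_zk": len(zk) == 1}
```

Actually collecting the files from each host (e.g. over ssh) is left out; the point is just the uniqueness rule.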

Environment Variables

[root@swarm01 opt]# vi /etc/profile


# kafka home
export KAFKA_HOME=/opt/kafka
export PATH=$PATH:$KAFKA_HOME/bin

Reload the profile so the variables take effect in the current shell:

[root@swarm01 opt]# source /etc/profile

Startup

  • Double-check the environment configuration
  • Check that the ZooKeeper cluster is healthy
    Kafka depends on ZooKeeper, so the ZooKeeper cluster must be started first
[root@swarm01 config]# jps
1856 QuorumPeerMain
2094 Jps
[root@swarm01 config]# 
  • Start Kafka
    be sure to start all three brokers
[root@swarm01 bin]# ./kafka-server-start.sh -daemon ../config/server.properties 


If this environment has never been touched before, the output will not contain the first topic:

[root@swarm01 bin]# ./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list
__consumer_offsets
first
[root@swarm01 bin]# pwd
/opt/kafka/bin
[root@swarm01 bin]# 
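__consumer_offsets shows up even in a fresh cluster because Kafka creates it for its own bookkeeping; only topics like first are user-created. To separate the two in the --list output, a small sketch (the helper name is mine; internal topics are conventionally prefixed with two underscores):

```python
def user_topics(list_output):
    """Filter internal topics (double-underscore prefix) out of the
    output of `kafka-topics.sh --list`."""
    topics = [t.strip() for t in list_output.splitlines() if t.strip()]
    return [t for t in topics if not t.startswith("__")]
```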
  • Use jps to check the running processes
[root@swarm01 bin]# jps
1856 QuorumPeerMain
4321 Jps
4252 Kafka
[root@swarm01 bin]# 

Common Admin Commands

  • List topics
[root@swarm01 bin]# ./kafka-topics.sh --zookeeper localhost:2181 --list

__consumer_offsets
first
[root@swarm01 bin]# 
  • Create a new topic
3 partitions, 3 replicas
[root@swarm01 bin]# kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --replication-factor 3 --partitions 3

Created topic my-topic.
[root@swarm01 bin]#  
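Creation fails if the requested replication factor exceeds the number of live brokers, so with this three-node cluster 3 is the maximum. A sketch of that precondition (the function is illustrative, not a Kafka API):

```python
def validate_topic_params(partitions, replication_factor, broker_count):
    """Pre-flight check mirroring the broker's own validation: Kafka
    rejects a topic whose replication factor exceeds the number of
    available brokers."""
    if partitions < 1 or replication_factor < 1:
        raise ValueError("partitions and replication factor must be >= 1")
    if replication_factor > broker_count:
        raise ValueError(
            "replication factor %d exceeds available brokers (%d)"
            % (replication_factor, broker_count)
        )
    return True
```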
  • Topic details
[root@swarm01 bin]# ./kafka-topics.sh --zookeeper localhost:2181 --describe

Topic:__consumer_offsets        PartitionCount:50       ReplicationFactor:1     Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
        Topic: __consumer_offsets       Partition: 0    Leader: -1      Replicas: 0     Isr: 0
        Topic: __consumer_offsets       Partition: 1    Leader: 1       Replicas: 1     Isr: 1
        Topic: __consumer_offsets       Partition: 2    Leader: 2       Replicas: 2     Isr: 2
        Topic: __consumer_offsets       Partition: 3    Leader: -1      Replicas: 0     Isr: 0
        Topic: __consumer_offsets       Partition: 4    Leader: 1       Replicas: 1     Isr: 1
        Topic: __consumer_offsets       Partition: 5    Leader: 2       Replicas: 2     Isr: 2
        Topic: __consumer_offsets       Partition: 6    Leader: -1      Replicas: 0     Isr: 0
        
       ......
       
Describe one specific topic
[root@swarm01 bin]# kafka-topics.sh --zookeeper localhost:2181 --describe  --topic my-topic

Topic:my-topic  PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: my-topic Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 1,2,3
        Topic: my-topic Partition: 1    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: my-topic Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 2,1,3
[root@swarm01 bin]# 
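In the healthy output above, every partition has a leader and its Isr (in-sync replicas) matches its Replicas list; a Leader of -1, as in the single-replica __consumer_offsets partitions earlier, means the partition is currently unavailable (its only replica is offline). A sketch that scans --describe output for such problems (helper name is mine):

```python
def partition_health(describe_output):
    """Scan `kafka-topics.sh --describe` output for partitions with no
    leader (Leader: -1) or with fewer in-sync replicas than replicas."""
    problems = []
    for line in describe_output.splitlines():
        tokens = line.split()
        if "Partition:" not in tokens:
            continue  # skip the per-topic summary lines
        info = dict(zip(tokens[::2], tokens[1::2]))
        replicas = set(info["Replicas:"].split(","))
        isr = set(info["Isr:"].split(","))
        where = (info["Topic:"], int(info["Partition:"]))
        if info["Leader:"] == "-1":
            problems.append(where + ("no leader",))
        elif isr != replicas:
            problems.append(where + ("under-replicated",))
    return problems
```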

  • Delete a topic
[root@swarm01 bin]# kafka-topics.sh --zookeeper localhost:2181 --delete  --topic my-topic

Topic my-topic is marked for deletion.
Note the hint below: the topic is only fully deleted when delete.topic.enable is set to true
Note: This will have no impact if delete.topic.enable is not set to true.
[root@swarm01 bin]# 
  • List existing consumer groups
[root@swarm01 bin]# ./kafka-consumer-groups.sh --bootstrap-server swarm01:9092 --list
  or
[root@swarm01 bin]# ./kafka-consumer-groups.sh --bootstrap-server swarm01:9092 --describe --group groupName
[root@swarm01 bin]# 

  • List consumer groups with the legacy --new-consumer flag (deprecated in newer Kafka releases, where --bootstrap-server alone implies the new consumer)
kafka-consumer-groups.sh --new-consumer --bootstrap-server 172.21.50.162:9092 --list
 or
kafka-consumer-groups.sh --new-consumer --bootstrap-server 172.21.50.162:9092 --describe --group groupName
  • Check the number of messages in a topic
[root@swarm01 bin]# ./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list swarm01:9092 --topic first --time -1
first:0:0
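GetOffsetShell with --time -1 prints the latest (log-end) offset per partition as topic:partition:offset, so first:0:0 means the topic is still empty. Summing across partitions gives a message count, assuming nothing has been removed by retention or compaction (otherwise subtract the earliest offsets from a --time -2 run). A sketch (parser name is mine):

```python
def total_messages(offset_lines):
    """Sum the per-partition offsets printed by GetOffsetShell
    (format: topic:partition:offset)."""
    return sum(
        int(line.rsplit(":", 1)[1])
        for line in offset_lines.splitlines()
        if line.count(":") >= 2
    )
```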
  • Inspect log file contents
Where the logs live:
[root@swarm01 first-0]# pwd
/opt/kafka/logs/first-0
[root@swarm01 first-0]# ls
00000000000000000000.log  00000000000000000000.timeindex  leader-epoch-checkpoint
[root@swarm01 first-0]# 

kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000000.log --print-data-log
  • Producer and consumer actions
Producer side
[root@swarm01 bin]# ls
connect-distributed.sh        kafka-dump-log.sh                    kafka-topics.sh
connect-standalone.sh         kafka-log-dirs.sh                    kafka-verifiable-consumer.sh
kafka-acls.sh                 kafka-mirror-maker.sh                kafka-verifiable-producer.sh
kafka-broker-api-versions.sh  kafka-preferred-replica-election.sh  trogdor.sh
kafka-configs.sh              kafka-producer-perf-test.sh          windows
kafka-console-consumer.sh     kafka-reassign-partitions.sh         zookeeper-security-migration.sh
kafka-console-producer.sh     kafka-replica-verification.sh        zookeeper-server-start.sh
kafka-consumer-groups.sh      kafka-run-class.sh                   zookeeper-server-stop.sh
kafka-consumer-perf-test.sh   kafka-server-start.sh                zookeeper-shell.sh
kafka-delegation-tokens.sh    kafka-server-stop.sh
kafka-delete-records.sh       kafka-streams-application-reset.sh
[root@swarm01 bin]# ./kafka-console-producer.sh --broker-list swarm01:9092 --topic first
>1
>2
>3
>4
>5
>


Consumer side
[root@swarm02 bin]# ./kafka-console-consumer.sh --bootstrap-server swarm01:9092 --topic first --from-beginning
1
2
3
4
5


Reposted from blog.csdn.net/qq_32112175/article/details/105330901