Kafka Installation Notes

1. Overview

What is Kafka? Kafka is a distributed log system. The "log" here simply means a persistent record of data. Thanks to several notable properties (high throughput, durable on-disk storage, and easy horizontal scaling through partitioning), Kafka stands out among distributed log systems.

2. Download the Package

Download link: http://kafka.apache.org/downloads

Pick a suitable version. Kafka is developed in Java and Scala (the "2.11" in the package name below is the Scala version). After extracting the archive, the directory looks like this:

[root@localhost kafka_2.11-1.0.0]# ll
total 72
drwxr-xr-x. 3 root root  4096 Oct 27  2017 bin
drwxr-xr-x. 2 root root  4096 Jun 15 06:57 config
drwxr-xr-x. 2 root root  4096 Apr 20 13:31 libs
-rw-r--r--. 1 root root 28824 Oct 27  2017 LICENSE
drwxr-xr-x. 2 root root 12288 Jun 15 23:29 logs
-rw-r--r--. 1 root root   336 Oct 27  2017 NOTICE
drwxr-xr-x. 2 root root  4096 Oct 27  2017 site-docs
-rw-r--r--. 1 root root    31 Apr 22 04:58 test.sink.txt
-rw-r--r--. 1 root root    31 Apr 22 04:58 test.txt

Enter the config directory:

[root@localhost kafka_2.11-1.0.0]# cd config
[root@localhost config]# ll
total 64
-rw-r--r--. 1 root root  906 Oct 27  2017 connect-console-sink.properties
-rw-r--r--. 1 root root  909 Oct 27  2017 connect-console-source.properties
-rw-r--r--. 1 root root 5807 Oct 27  2017 connect-distributed.properties
-rw-r--r--. 1 root root  884 Apr 22 04:41 connect-file-sink.properties
-rw-r--r--. 1 root root  882 Apr 22 04:41 connect-file-source.properties
-rw-r--r--. 1 root root 1111 Oct 27  2017 connect-log4j.properties
-rw-r--r--. 1 root root 2730 Apr 22 04:45 connect-standalone.properties
-rw-r--r--. 1 root root 1221 Oct 27  2017 consumer.properties
-rw-r--r--. 1 root root 4727 Oct 27  2017 log4j.properties
-rw-r--r--. 1 root root 1919 Oct 27  2017 producer.properties
-rw-r--r--. 1 root root 6852 Oct 27  2017 server.properties
-rw-r--r--. 1 root root 1032 Oct 27  2017 tools-log4j.properties
-rw-r--r--. 1 root root 1023 Oct 27  2017 zookeeper.properties

Edit the server.properties file:

[root@localhost config]# vi server.properties 

Modify the zookeeper.connect parameter in the ZooKeeper section. This parameter specifies which ZooKeeper node(s) of the ensemble the broker connects to. Since this is a single-machine setup, just set zookeeper.connect=localhost:2181.
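
For reference, the relevant lines of server.properties should end up looking roughly like this (broker.id, listeners, and log.dirs are shown with their 1.0.0 defaults; only zookeeper.connect needs changing for this setup):

broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181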

3. Test the Installation

step 01:
Start the local ZooKeeper node first. Command: bin/zkServer.sh start conf/zoo.cfg
Key point: the Kafka server requires a running ZooKeeper, but you do not have to pass its address on the command line, because the connection is already configured in server.properties.
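
If no standalone ZooKeeper is installed, Kafka also ships with a bundled one; a minimal alternative, run from the Kafka root directory, is:

bin/zookeeper-server-start.sh config/zookeeper.properties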

step 02:
Then start the local Kafka node. Command: bin/kafka-server-start.sh config/server.properties
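
Note that this runs the broker in the foreground and occupies the terminal. To run it in the background instead, the start script accepts a -daemon flag:

bin/kafka-server-start.sh -daemon config/server.properties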

step 03:
Create the topic first: kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic dblab
Key point: Kafka connects here to the ZooKeeper at port 2181 on localhost.
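
To inspect the partition and replica assignment of the new topic, kafka-topics.sh also supports --describe:

kafka-topics.sh --describe --zookeeper localhost:2181 --topic dblab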

step 04:
Verify that the topic was created successfully: kafka-topics.sh --list --zookeeper localhost:2181
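
If the topic was created, its name should appear in the output, roughly:

[root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
dblab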

step 05:
Use the producer to write messages into the topic:

[root@localhost bin]# ./kafka-console-producer.sh --broker-list localhost:9092 --topic dblab
>The writer is LittleLawson
>Could you follow me?

Key point: producing data, in contrast, does not involve the ZooKeeper server; the producer writes data to the broker, not to ZooKeeper.
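
Since the console producer reads from stdin, you can also pipe data into it non-interactively instead of typing at the > prompt, e.g.:

echo "hello kafka" | ./kafka-console-producer.sh --broker-list localhost:9092 --topic dblab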

step 06:
Start the consumer and begin consuming messages:

[root@localhost bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic dblab --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
The writer is LittleLawson
Could you follow me?

Key point: there is a question here: why does this consumer depend on ZooKeeper rather than the broker? Because the old consumer stores its offsets in ZooKeeper, it connects to ZooKeeper first; ZooKeeper then hands the broker addresses back to the consumer. In recent versions, the new consumer can get the same information from the broker itself.
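
As the deprecation warning above suggests, on this version (1.0.0) the same messages can also be consumed through the broker directly with the new consumer:

[root@localhost bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic dblab --from-beginning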

Port 9092 is the broker's listener port, which the producer connects to; the old console consumer goes through ZooKeeper on port 2181 for offsets and broker metadata, while the message data itself is still fetched from the broker.

4. Other Commands

1. Delete a topic in Kafka: bin/kafka-topics.sh --delete --zookeeper [zookeeper]:2181 --topic [topicName1,topicName2...]
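
One caveat: deletion only takes real effect when delete.topic.enable=true in server.properties; otherwise the topic is merely marked for deletion. (In Kafka 1.0.0 this setting defaults to true; older versions default to false.)

delete.topic.enable=true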

Reposted from blog.csdn.net/liu16659/article/details/80712565