Kafka install with Ansible
Date: 2022-07-22
Introduction to Kafka
Kafka is a high-throughput distributed publish-subscribe messaging system capable of handling all the activity-stream data of a consumer-scale website, and it supports both offline and online log processing. Kafka groups stored messages by topic; a message sender is called a producer and a message receiver is called a consumer.
Kafka terminology
- broker: a Kafka cluster consists of one or more servers; each such server is called a broker
- topic: every message published to the cluster belongs to a category called a topic
- partition: each topic consists of one or more partitions
- producer: a message producer; publishes messages to Kafka brokers
- consumer: a message consumer; a client that reads messages from Kafka brokers
Introduction to message queues
A message queue is an asynchronous communication protocol: the sender and receiver of a message do not need to interact with the queue at the same time. The sender places messages in the queue, and the receiver fetches them later.
Terminology
- Producer: the sender of a message
- Consumer: the receiver of a message
- Asynchronous processing: business logic with weak real-time requirements is handled asynchronously
- Decoupling: the queue separates message production from subscription, decoupling the applications involved
- Peak shaving: a queue in front of the application absorbs request bursts; requests beyond the queue length are simply rejected and redirected to an error page
Use cases
- Asynchronous processing
- Decoupling
- Peak shaving
- Speed-up
- Broadcast
E-commerce platforms often run flash-sale events, and a message queue is used there to:
- Cap the number of participants in the event
- Keep a short burst of high traffic from overwhelming the application
- Have the server write each incoming user request to the message queue first
- Let the flash-sale service handle the requests it reads from the queue afterwards
Deploying a Kafka cluster with Ansible
Directory layout
$ tree roles/kafka
roles/kafka
├── tasks
│   ├── download.yml
│   ├── kafka.yml
│   ├── main.yml
│   └── zookeeper.yml
├── templates
│   ├── kafka.service.j2
│   ├── server.properties.j2
│   ├── zoo.cfg.j2
│   └── zookeeper.service.j2
└── vars
    └── main.yml
Kafka depends on a running ZooKeeper ensemble, so ZooKeeper is installed first.
Task files
First, download the ZooKeeper and Kafka tarballs:
$ cat download.yml
---
- name: Check if kafka tar file exists
  stat:
    path: "/tmp/{{ kafka_tar_file }}"
  register: kafka_tar
- name: Check if zookeeper tar file exists
  stat:
    path: "/tmp/{{ zookeeper_tar_file }}"
  register: zookeeper_tar
- name: Download zookeeper
  get_url:
    url: "{{ zookeeper_download_url }}"
    dest: /tmp
    mode: 0755
  when: not zookeeper_tar.stat.exists
  become: true
- name: Download kafka
  get_url:
    url: "{{ kafka_download_url }}"
    dest: /tmp
    mode: 0755
  when: not kafka_tar.stat.exists
  become: true
- name: Check kafka untar dir
  stat:
    path: "{{ kafka_base_dir }}"
  register: kafka_untar
- name: Unarchive kafka file
  unarchive:
    src: "/tmp/{{ kafka_tar_file }}"
    dest: /tmp
    remote_src: yes
    owner: root
    group: root
  when: not kafka_untar.stat.exists
  become: true
- name: Check zookeeper untar dir
  stat:
    path: "{{ zookeeper_base_dir }}"
  register: zookeeper_untar
- name: Unarchive zookeeper file
  unarchive:
    src: "/tmp/{{ zookeeper_tar_file }}"
    dest: /tmp
    remote_src: yes
    owner: root
    group: root
  when: not zookeeper_untar.stat.exists
  become: true
- name: Move kafka to /usr/local
  command: "mv {{ kafka_untar_dir }} {{ kafka_base_dir }}"
  args:
    chdir: /tmp
  when: not kafka_untar.stat.exists
  become: true
- name: Move zookeeper to /usr/local
  command: "mv {{ zookeeper_untar_dir }} {{ zookeeper_base_dir }}"
  args:
    chdir: /tmp
  when: not zookeeper_untar.stat.exists
  become: true
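The stat/register/when pattern keeps re-runs idempotent. For the unarchive steps, the same guard can also be expressed with the module's own creates argument (an equivalent shorthand, not part of the original role; the mv tasks would still need their when guards):

- name: Unarchive kafka file
  unarchive:
    src: "/tmp/{{ kafka_tar_file }}"
    dest: /tmp
    remote_src: yes
    owner: root
    group: root
    creates: "{{ kafka_base_dir }}"  # skipped once the final install dir exists
  become: true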
Install ZooKeeper:
$ cat zookeeper.yml
---
- name: Create zookeeper dir
  file:
    path: "{{ item.path }}"
    state: directory
    mode: 0755
  with_items:
    - path: "{{ zookeeper_data_dir }}"
    - path: "{{ zookeeper_log_dir }}"
  become: true
- name: Create myid file
  file:
    path: "{{ zookeeper_data_dir }}/myid"
    state: touch
    mode: 0644
  become: true
- name: Set myid
  lineinfile:
    path: "{{ zookeeper_data_dir }}/myid"
    line: "{{ zookeeper_myid }}"
  become: true
- name: Copy zoo file
  template:
    src: zoo.cfg.j2
    dest: "{{ zookeeper_base_dir }}/conf/zoo.cfg"
  become: true
- name: Delete zoo_sample file
  file:
    path: "{{ zookeeper_base_dir }}/conf/zoo_sample.cfg"
    state: absent
  become: true
- name: Set hosts
  lineinfile:
    path: /etc/hosts
    line: "{{ item.host }} {{ item.domain }}"
  with_items: "{{ zookeeper_hosts }}"
  become: true
- name: Copy service file
  template:
    src: zookeeper.service.j2
    dest: /lib/systemd/system/zookeeper.service
  become: true
- name: Start zookeeper
  systemd:
    name: zookeeper
    state: restarted
    enabled: yes
    daemon_reload: yes
    masked: no
  become: true
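Once this play has run on all nodes, each member of the ensemble can be checked (a quick sanity check, assuming the paths from the vars file below); exactly one node should report Mode: leader and the rest Mode: follower:

$ systemctl status zookeeper
$ /usr/local/zookeeper/bin/zkServer.sh status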
Install Kafka:
$ cat kafka.yml
---
- name: Create log dir
  file:
    path: "{{ kafka_log_dir }}"
    state: directory
    mode: 0755
  become: true
- name: Copy server.properties file
  template:
    src: server.properties.j2
    dest: "{{ kafka_base_dir }}/config/server.properties"
  become: true
- name: Copy service file
  template:
    src: kafka.service.j2
    dest: /lib/systemd/system/kafka.service
  become: true
- name: Start kafka
  systemd:
    name: kafka
    state: restarted
    enabled: yes
    daemon_reload: yes
    masked: no
  become: true
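The role's tasks/main.yml is not shown in the original article; assuming it simply runs the three task files in order, a minimal version would be (import_tasks needs Ansible >= 2.4; older setups would use include):

$ cat main.yml
---
- import_tasks: download.yml
- import_tasks: zookeeper.yml
- import_tasks: kafka.yml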
Template files
Register Kafka as a systemd service:
$ cat kafka.service.j2
[Unit]
Description=Kafka
After=zookeeper.service
[Service]
Type=simple
Environment=LOG_DIR={{ kafka_log_dir }}
ExecStart={{ kafka_base_dir }}/bin/kafka-server-start.sh {{ kafka_base_dir }}/config/server.properties
ExecStop={{ kafka_base_dir }}/bin/kafka-server-stop.sh
Restart=always
[Install]
WantedBy=multi-user.target
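Note that After= only orders startup against zookeeper.service; it does not pull ZooKeeper in when Kafka starts. To make the dependency explicit, the unit could also declare it (an optional hardening, not in the original template):

[Unit]
Description=Kafka
Wants=zookeeper.service
After=zookeeper.service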
Register ZooKeeper as a systemd service:
$ cat zookeeper.service.j2
[Unit]
Description=zookeeper.service
After=network.target
[Service]
Type=forking
Environment=ZOO_LOG_DIR={{ zookeeper_log_dir }}
ExecStart={{ zookeeper_base_dir }}/bin/zkServer.sh start
ExecStop={{ zookeeper_base_dir }}/bin/zkServer.sh stop
ExecReload={{ zookeeper_base_dir }}/bin/zkServer.sh restart
[Install]
WantedBy=multi-user.target
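Type=forking matches zkServer.sh start, which daemonizes. ZooKeeper 3.4.x also ships a start-foreground mode, which would pair with Type=simple if systemd should supervise the JVM directly (an alternative, not what this role uses):

[Service]
Type=simple
Environment=ZOO_LOG_DIR={{ zookeeper_log_dir }}
ExecStart={{ zookeeper_base_dir }}/bin/zkServer.sh start-foreground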
Kafka configuration file:
$ cat server.properties.j2
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
# Set from the broker_id host variable (ids conventionally start at 0). The comment stays on
# its own line: a trailing '#' comment would be read as part of the value by the properties parser.
broker.id={{ broker_id }}
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs={{ kafka_log_dir }}
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# Set from the zookeeper_connect variable (an inline comment here would end up inside the value).
zookeeper.connect={{ zookeeper_connect }}
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
ZooKeeper configuration file:
$ cat zoo.cfg.j2
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir={{ zookeeper_data_dir }}
# the zookeeper log dir
dataLogDir={{ zookeeper_log_dir }}
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
#
# add the cluster server addresses
{% for zookeeper_cluster in zookeeper_cluster_server %}
server.{{ zookeeper_cluster.id }}={{ zookeeper_cluster.host }}:2888:3888
{% endfor %}
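With the three-node zookeeper_cluster_server list from the vars file below, this loop renders to:

server.1=zoo1.tianchiapi.com:2888:3888
server.2=zoo2.tianchiapi.com:2888:3888
server.3=zoo3.tianchiapi.com:2888:3888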
Variables file
$ cat main.yml
kafka_untar_dir: kafka_2.11-2.1.0
zookeeper_untar_dir: zookeeper-3.4.12
kafka_tar_file: "{{ kafka_untar_dir }}.tgz"
zookeeper_tar_file: "{{ zookeeper_untar_dir }}.tar.gz"
kafka_download_url: "http://mirrors.hust.edu.cn/apache/kafka/2.1.0/{{ kafka_tar_file }}"
zookeeper_download_url: "http://mirrors.hust.edu.cn/apache/zookeeper/stable/{{ zookeeper_tar_file }}"
zookeeper_base_dir: /usr/local/zookeeper
kafka_base_dir: /usr/local/kafka
zookeeper_data_dir: "{{ zookeeper_base_dir }}/data"
kafka_data_dir: "{{ kafka_base_dir }}/data"
zookeeper_log_dir: "{{ zookeeper_base_dir }}/logs"
kafka_log_dir: "{{ kafka_base_dir }}/logs"
zookeeper_cluster_server:
  - id: 1
    host: zoo1.tianchiapi.com
  - id: 2
    host: zoo2.tianchiapi.com
  - id: 3
    host: zoo3.tianchiapi.com
zookeeper_hosts:
  - host: 10.0.3.150
    domain: zoo1.tianchiapi.com
  - host: 10.0.3.115
    domain: zoo2.tianchiapi.com
  - host: 10.0.3.116
    domain: zoo3.tianchiapi.com
zookeeper_connect: "zoo1.tianchiapi.com:2181,zoo2.tianchiapi.com:2181,zoo3.tianchiapi.com:2181"
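A caveat on the download URLs: Apache mirrors such as mirrors.hust.edu.cn only carry current releases, so these pinned versions will eventually 404, and the zookeeper URL mixes a pinned 3.4.12 tarball with a stable/ path that tracks whatever is newest. If the downloads fail, the permanent archive hosts every release (a substitution, not from the original):

kafka_download_url: "https://archive.apache.org/dist/kafka/2.1.0/{{ kafka_tar_file }}"
zookeeper_download_url: "https://archive.apache.org/dist/zookeeper/zookeeper-3.4.12/{{ zookeeper_tar_file }}"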
Inventory and entry-point playbook
$ cat setup.yml
- hosts: kafka
  roles:
    - role: kafka
$ cat hosts
[kafka]
10.0.3.150 zookeeper_myid=1 broker_id=0
10.0.3.115 zookeeper_myid=2 broker_id=1
10.0.3.116 zookeeper_myid=3 broker_id=2
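To roll the cluster out, run the playbook against this inventory and then create a test topic to confirm the brokers registered (a smoke test, not part of the original article; kafka-topics.sh in 2.1.0 still takes --zookeeper rather than --bootstrap-server):

$ ansible-playbook -i hosts setup.yml
$ /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper zoo1.tianchiapi.com:2181 --replication-factor 3 --partitions 3 --topic smoke-test
$ /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper zoo1.tianchiapi.com:2181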
That completes the Kafka installation; how to use the cluster is beyond the scope of this article.