Upgraded role #1

Merged
merged 1 commit on Nov 14, 2023
29 changes: 3 additions & 26 deletions README.md
@@ -3,36 +3,13 @@
## Purpose
This Ansible role installs a Kafka cluster or a standalone instance based on the KRaft protocol.

## Configuration

| Variable | Description | Default |
| ------ | ------ | ------ |
| kafka_version | Kafka version | 3.4.0 |
| kafka_scala_version | Kafka scala version | 2.13 |
| kafka_openjdk_version | Kafka OpenJDK version | 17 |
| kafka_download_url | Kafka archive download URL | `https://dlcdn.apache.org/kafka/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz` |
| kafka_hosts_group | Kafka hosts group in ansible inventory | kafka |
| kafka_user | Kafka user | kafka |
| kafka_group | Kafka group | kafka |
| kafka_config_directory | Kafka config directory | `/etc/kafka` |
| kafka_data_directory | Kafka data directory | `/var/lib/kafka` |
| kafka_log_directory | Kafka log directory | `/var/log/kafka` |
| kafka_extra_files | Defines extra files and their content; files are placed in `kafka_config_directory` | {} |
| kafka_extra_envs | Defines extra environment variables | {} |
| kafka_server_properties | Defines extra parameters in server.properties config | {} |
| kafka_log4j_properties | Overrides default log4j.properties | "" |
| kafka_sasl_enabled | Enables/Disables SASL | false |
| kafka_password | Password of the kafka user (if SASL is enabled) | "changeMe" |
| kafka_users | Defines additional users if required (if SASL is enabled) | `{admin.password: "changeMe"}` |
| kafka_opts | Defines KAFKA_OPTS environment variable | "" |

## Example of inventory and playbook
1) Inventory file
```ini
[kafka]
kafka-1.example.com kafka_node_id=1
kafka-2.example.com kafka_node_id=2
kafka-3.example.com kafka_node_id=3
kafka-1.example.com kafka_node_id=1 kafka_process_roles=broker,controller
kafka-2.example.com kafka_node_id=2 kafka_process_roles=broker,controller
kafka-3.example.com kafka_node_id=3 kafka_process_roles=broker,controller
```
2) Playbook
```yaml
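# The playbook body was collapsed in this diff view. The lines below are a
# hypothetical sketch only; the role and group names are assumptions.
- hosts: kafka
  become: true
  roles:
    - kafka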
```
202 changes: 40 additions & 162 deletions defaults/main.yaml
@@ -1,9 +1,8 @@
---
kafka_version: 3.4.0
kafka_version: 3.6.0
kafka_scala_version: 2.13
kafka_openjdk_version: 17
kafka_download_url: https://dlcdn.apache.org/kafka/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz

# must match the Ansible inventory group name
kafka_hosts_group: kafka
kafka_user: kafka
@@ -12,173 +11,52 @@ kafka_config_directory: /etc/kafka
kafka_data_directory: /var/lib/kafka
kafka_log_directory: /var/log/kafka
kafka_extra_files: {}
# prometheus-jmx-exporter.yml: |
# ---
# lowercaseOutputName: true
# lowercaseOutputLabelNames: true
# whitelistObjectNames: ["kafka.controller:*", "kafka.log:*", "kafka.network:*", "kafka.server:*"]
# rules:
# # kafka.controller:*
# - pattern: kafka.controller<type=ControllerStats, name=(\w+)(PerSec|RateAndTimeMs)><>Count
# name: jmx_kafka_controller_$1_total
# type: COUNTER
# help: ""
# - pattern: kafka.controller<type=KafkaController, name=ActiveControllerCount><>Value
# name: jmx_kafka_controller_activecontroller_total
# type: GAUGE
# help: ""
# # kafka.log:*
# - pattern: kafka.log<type=LogFlushStats, name=LogFlushRateAndTimeMs><>Count
# name: jmx_kafka_log_flush_total
# type: COUNTER
# help: ""
# - pattern: kafka.log<type=LogManager, name=OfflineLogDirectoryCount><>Value
# name: jmx_kafka_log_offlinedirectory_total
# type: GAUGE
# help: ""
# # kafka.network:*
# - pattern: kafka.network<type=RequestMetrics, name=ErrorsPerSec, request=(\w+), error=(.+)><>Count
# name: jmx_kafka_network_errors_total
# type: COUNTER
# labels:
# request: "$1"
# error: "$2"
# help: ""
# - pattern: kafka.network<type=RequestMetrics, name=RequestsPerSec, request=(Produce|FetchConsumer|FetchFollower), version=(\w+)><>Count
# name: jmx_kafka_network_requests_total
# type: COUNTER
# labels:
# request: "$1"
# version: "$2"
# help: ""
# - pattern: kafka.network<type=RequestMetrics, name=(Request|TemporaryMemory)Bytes, request=(\w+)><>Count
# name: jmx_kafka_network_$1_total
# type: COUNTER
# labels:
# request: "$2"
# help: ""
# - pattern: kafka.network<type=RequestMetrics, name=(Local|Remote|Total|RequestQueue|ResponseQueue|ResponseSend|MessageConversions)TimeMs, request=(Produce|Fetch|FetchConsumer|FetchFollower)><>Count
# name: jmx_kafka_network_$1time_seconds
# valueFactor: 0.001
# type: COUNTER
# labels:
# request: "$2"
# help: ""
# - pattern: kafka.network<type=SocketServer, name=(ExpiredConnectionsKilled|NetworkProcessorAvgIdle)(Count|Percent)><>Value
# name: jmx_kafka_network_$1_$2
# type: GAUGE
# help: ""
# #kafka.server:*
# - pattern: kafka.server<type=KafkaServer, name=BrokerState><>Value
# name: jmx_kafka_server_broker_state
# type: GAUGE
# help: "Broker state: 1=Starting, 2=RecoveringFromUncleanShutdown, 3=RunningAsBroker, 4=RunningAsController, 6=PendingControlledShutdown, 7=BrokerShuttingDown"
# - pattern: kafka.server<type=BrokerTopicMetrics, name=(\w+)PerSec><>Count
# name: jmx_kafka_server_brokertopic_$1_total
# type: COUNTER
# help: ""
# - pattern: kafka.server<type=BrokerTopicMetrics, name=(\w+)PerSec, topic=(.+)><>Count
# name: jmx_kafka_server_brokertopic_$1_total
# type: COUNTER
# labels:
# topic: $2
# help: ""
# - pattern: kafka.server<type=DelayedOperationPurgatory, name=PurgatorySize, delayedOperation=(\w+)><>Value
# name: jmx_kafka_server_delayedoperation_total
# type: GAUGE
# labels:
# purgatory: $1
# help: ""
# - pattern: kafka.server<type=ReplicaManager, name=(\w+)PerSec><>Count
# name: jmx_kafka_server_$1_total
# type: COUNTER
# help: ""
# - pattern: kafka.server<type=ReplicaManager, name=(\w+)(Count|Partitions)><>Value
# name: jmx_kafka_server_$1_total
# type: GAUGE
# help: ""
# - pattern: kafka.server<type=(Fetch|Produce|Request)><>queue-size
# name: jmx_kafka_server_queue_total
# type: GAUGE
# labels:
# type: $1
# help: ""
# - pattern: kafka.server<type=FetcherLagMetrics, name=ConsumerLag, clientId=ReplicaFetcherThread-(\d+-\d+), topic=(.+), partition=(\d+)><>Value
# name: jmx_kafka_server_replicafetcherlag_total
# type: GAUGE
# labels:
# thread: $1
# topic: $2
# partition: $3
# help: "Lag in messages per follower replica"
# - pattern: kafka.server<type=KafkaRequestHandlerPool, name=RequestHandlerAvgIdlePercent><>OneMinuteRate
# name: jmx_kafka_server_requesthandleravgidle_seconds
# type: GAUGE
# valueFactor: 0.000000001
# help: "The average fraction of time the request handler threads are idle"
# - pattern: kafka.server<type=ReplicaFetcherManager, name=MaxLag, clientId=Replica><>Value
# name: jmx_kafka_server_replicafetcherlag_max
# type: GAUGE
# help: "Max lag in messages btw follower and leader replicas"
# - pattern: kafka.server<type=SessionExpireListener, name=ZooKeeper(\w+)PerSec><>Count
# name: jmx_kafka_server_zookeeper_connection_total
# type: COUNTER
# labels:
# event_type: $1
# help: "ZooKeeper connection statuses"
# - pattern: kafka.server<type=ZooKeeperClientMetrics, name=ZooKeeperRequestLatencyMs><>Count
# name: jmx_kafka_server_zookeeper_requests_total
# type: COUNTER
# help: "ZooKeeper client requests"
# - pattern: kafka.server<type=socket-server-metrics, listener=(\w+), networkProcessor=(\d+)><>(.+)-count
# name: jmx_kafka_server_socket_$3_total
# type: COUNTER
# labels:
# listener: $1
# thread: $2
# help: ""
# - pattern: kafka.server<type=socket-server-metrics, listener=(\w+), networkProcessor=(\d+)><>(.+)-total
# name: jmx_kafka_server_socket_$3_total
# type: GAUGE
# labels:
# listener: $1
# thread: $2
# help: ""

kafka_extra_envs: {}
# KAFKA_HEAP_OPTS: -Xms4G -Xmx4G
# KAFKA_JVM_PERFORMANCE_OPTS: -server -XX:MetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -Djava.awt.headless=true -Duser.timezone=Europe/Moscow

#KAFKA_HEAP_OPTS: -Xms4G -Xmx4G
#KAFKA_JVM_PERFORMANCE_OPTS: -server -XX:MetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Duser.timezone=Europe/Moscow
# https://kafka.apache.org/documentation/#brokerconfigs_process.roles
kafka_process_roles: broker,controller
kafka_server_properties: {}
# controller.quorum.election.backoff.max.ms: 2000
# auto.create.topics.enable: 'false'
# default.replication.factor: 2
# min.insync.replicas: 2
# message.max.bytes: 10485760
# num.network.threads: 128
# num.io.threads: 256
# num.partitions: 1
# num.replica.fetchers: 3
# num.recovery.threads.per.data.dir: 2
# socket.send.buffer.bytes: -1
# socket.receive.buffer.bytes: -1
# socket.request.max.bytes: 104857600
# offsets.topic.num.partitions: 20
# offsets.topic.replication.factor: 3
# transaction.state.log.num.partitions: 20
# transaction.state.log.replication.factor: 3
# log.retention.hours: 168
# log.retention.bytes: -1
# log.segment.bytes: 104857600
# log.retention.check.interval.ms: 60000
# log.flush.scheduler.interval.ms: 1000

#auto.create.topics.enable: 'false'
#default.replication.factor: 2
#min.insync.replicas: 2
#message.max.bytes: 10485760
#num.network.threads: 128
#num.io.threads: 256
#num.partitions: 1
#num.replica.fetchers: 3
#num.recovery.threads.per.data.dir: 2
#socket.send.buffer.bytes: -1
#socket.receive.buffer.bytes: -1
#socket.request.max.bytes: 104857600
#offsets.topic.num.partitions: 20
#offsets.topic.replication.factor: 3
#transaction.state.log.num.partitions: 20
#transaction.state.log.replication.factor: 3
#log.retention.hours: 168
#log.retention.bytes: -1
#log.segment.bytes: 104857600
#log.retention.check.interval.ms: 60000
#log.flush.scheduler.interval.ms: 1000
kafka_log4j_properties: ""
# enables SASL authentication
kafka_sasl_enabled: false
# enables ACLs
# https://kafka.apache.org/documentation/#security_authz
kafka_acl_enabled: false
# defines the password of the default kafka superuser
kafka_password: "changeMe"
# defines extra users
kafka_users:
admin:
superuser: true
password: changeMe
foo:
superuser: false
password: bar
# kafka_opts defines the KAFKA_OPTS environment variable
kafka_opts: "" # -javaagent:/opt/prometheus/jmx_javaagent.jar=9071:/etc/kafka/prometheus-jmx-exporter.yml
kafka_opts: ""
# enables MirrorMaker when you need to replicate data between two or more Kafka clusters
# https://kafka.apache.org/documentation/#mirrormakerconfigs
kafka_mirror_maker_enabled: false
kafka_mirror_maker_properties: {}
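A minimal sketch of how the new MirrorMaker variables might be populated (the values below are assumptions for illustration; the property keys are standard MirrorMaker 2 settings rather than anything this role defines):

```yaml
# Illustrative values only; not part of this PR.
kafka_mirror_maker_enabled: true
kafka_mirror_maker_properties:
  clusters: "source, target"
  source.bootstrap.servers: "src-kafka-1.example.com:9092"
  target.bootstrap.servers: "dst-kafka-1.example.com:9092"
  "source->target.enabled": "true"
  "source->target.topics": ".*"
  replication.factor: 3
```

Each key/value pair is rendered verbatim as `key=value` into `{{ kafka_config_directory }}/connect-mirror-maker.properties` by the new template below, and the `kafka-mirror-maker` unit is only templated and started when `kafka_mirror_maker_enabled` is true.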
7 changes: 7 additions & 0 deletions handlers/main.yaml
@@ -5,3 +5,10 @@
daemon_reload: true
enabled: true
state: restarted

- name: Restart kafka-mirror-maker
ansible.builtin.systemd:
name: kafka-mirror-maker
daemon_reload: true
enabled: true
state: restarted
2 changes: 1 addition & 1 deletion meta/main.yaml
@@ -1,7 +1,7 @@
---
dependencies: []
galaxy_info:
description: A purpose of a role is to deploy kafka cluster/standalone instance
description: This Ansible role installs a Kafka cluster or a standalone instance based on the KRaft protocol.
min_ansible_version: 2.13
platforms:
- name: Ubuntu
30 changes: 29 additions & 1 deletion tasks/install_kafka.yaml
@@ -22,7 +22,7 @@
owner: "{{ kafka_user }}"
group: "{{ kafka_group }}"
mode: "0750"
with_items:
loop:
- "{{ kafka_config_directory }}"
- "{{ kafka_data_directory }}"
- "{{ kafka_log_directory }}"
@@ -105,6 +105,16 @@
mode: "0644"
notify: Restart kafka

- name: Kafka | Create mirror maker config
ansible.builtin.template:
src: connect-mirror-maker.properties.j2
dest: "{{ kafka_config_directory }}/connect-mirror-maker.properties"
owner: "{{ kafka_user }}"
group: "{{ kafka_group }}"
mode: "0640"
notify: Restart kafka-mirror-maker
when: kafka_mirror_maker_enabled

- name: Kafka | Check Cluster UUID
ansible.builtin.stat:
path: "{{ kafka_config_directory }}/cluster_uuid"
@@ -154,9 +164,27 @@
mode: "0644"
notify: Restart kafka

- name: Kafka | Create mirror maker systemd service
ansible.builtin.template:
src: kafka-mirror-maker.service.j2
dest: /etc/systemd/system/kafka-mirror-maker.service
owner: root
group: root
mode: "0640"
notify: Restart kafka-mirror-maker
when: kafka_mirror_maker_enabled

- name: Kafka | Start service
ansible.builtin.systemd:
name: kafka
daemon_reload: true
enabled: true
state: started

- name: Kafka | Start mirror maker systemd service
ansible.builtin.systemd:
name: kafka-mirror-maker
daemon_reload: true
enabled: true
state: started
when: kafka_mirror_maker_enabled
6 changes: 4 additions & 2 deletions tasks/main.yaml
@@ -1,8 +1,10 @@
---
- ansible.builtin.import_tasks: install_openjdk.yaml
- name: Install openjdk
ansible.builtin.import_tasks: install_openjdk.yaml
tags:
- java

- ansible.builtin.import_tasks: install_kafka.yaml
- name: Install kafka
ansible.builtin.import_tasks: install_kafka.yaml
tags:
- kafka
5 changes: 5 additions & 0 deletions templates/connect-mirror-maker.properties.j2
@@ -0,0 +1,5 @@
{% if kafka_mirror_maker_properties is defined and kafka_mirror_maker_properties %}
{% for key, value in kafka_mirror_maker_properties.items() %}
{{ key }}={{ value }}
{% endfor %}
{% endif %}
17 changes: 17 additions & 0 deletions templates/kafka-mirror-maker.service.j2
@@ -0,0 +1,17 @@
[Unit]
Description=Apache Kafka Mirror Maker
Wants=network.target
After=network.target

[Service]
Type=simple
User={{ kafka_user }}
Group={{ kafka_group }}
ExecStart=/opt/kafka/bin/connect-mirror-maker.sh {{ kafka_config_directory }}/connect-mirror-maker.properties
ExecStop=/bin/kill $MAINPID
Restart=always
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target
4 changes: 2 additions & 2 deletions templates/kafka.service.j2
@@ -8,7 +8,7 @@ Type=simple
User={{ kafka_user }}
Group={{ kafka_group }}
Environment="LOG_DIR={{ kafka_log_directory }}"
Environment="KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:{{ kafka_config_directory }}/log4j.properties
Environment="KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:{{ kafka_config_directory }}/log4j.properties"
{% if kafka_sasl_enabled or kafka_opts %}
Environment="KAFKA_OPTS={% if kafka_sasl_enabled %}-Djava.security.auth.login.config={{ kafka_config_directory }}/jaas.conf{% endif %} {% if kafka_opts %}{{ kafka_opts }}{% endif %}"
{% endif %}
@@ -24,4 +24,4 @@ RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target
WantedBy=multi-user.target
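For context on the `KAFKA_OPTS`/`jaas.conf` branch in this unit, here is a minimal sketch of the variables that drive it, using the names from `defaults/main.yaml` above (the values are illustrative only):

```yaml
# Illustrative values; variable names come from defaults/main.yaml in this PR.
kafka_sasl_enabled: true
kafka_acl_enabled: true
kafka_password: "S3cretAdminPass"       # password of the default superuser
kafka_users:
  metrics:
    superuser: false
    password: "S3cretMetricsPass"
```

With `kafka_sasl_enabled: true`, the unit adds `-Djava.security.auth.login.config={{ kafka_config_directory }}/jaas.conf` to `KAFKA_OPTS`, and any value set in `kafka_opts` is appended after it.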