feat: support send error-log to kafka brokers #8693
Conversation
```diff
@@ -19,6 +19,7 @@ local core = require("apisix.core")
 local errlog = require("ngx.errlog")
 local batch_processor = require("apisix.utils.batch-processor")
 local plugin = require("apisix.plugin")
+local producer = require ("resty.kafka.producer")
 local timers = require("apisix.timers")
 local http = require("resty.http")
```
Let's group the resty module together.
apisix/plugins/error-log-logger.lua (outdated)
```lua
local function send_to_kafka(log_message)
    core.log.info("sending a batch logs to kafka brokers: ", core.json.encode(config.kafka.brokers))
```
Can we use delay_encode?
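For context: `core.json.delay_encode` defers serialization until the log line is actually written, so the JSON cost is skipped whenever the log level filters the entry out. A dependency-free Lua sketch of the idea (the `delay_encode` name and the `__tostring` trick here are illustrative; APISIX's real helper also reuses a cached wrapper table and produces real JSON):

```lua
-- Lazy encoding: nothing is serialized until the logger stringifies
-- the value, which only happens if the log entry is actually emitted.
local function delay_encode(data)
    return setmetatable({}, {
        __tostring = function()
            -- a real implementation would JSON-encode `data`
            return "[" .. table.concat(data, ",") .. "]"
        end,
    })
end

local brokers = delay_encode({"127.0.0.1:9092", "127.0.0.2:9092"})
-- still unencoded at this point; tostring() triggers the actual work
print("brokers: " .. tostring(brokers))
```

Note this is also why such a value belongs only in log-call arguments: plain string concatenation with `..` would not invoke `__tostring`.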
apisix/plugins/error-log-logger.lua (outdated)
```lua
if not (metadata and metadata.value and metadata.modifiedIndex) then
    core.log.info("please set the correct plugin_metadata for ", plugin_name)
    return
else
```
We can use `end` here; no need to nest the code.
apisix/plugins/error-log-logger.lua (outdated)
```lua
        create_producer, config.kafka.brokers, broker_config,
        config.kafka.cluster_name)
if not prod then
    return false, "get kafka producer failed " .. err
```
Suggested change:

```diff
-    return false, "get kafka producer failed " .. err
+    return false, "get kafka producer failed: " .. err
```
| Name | Type | Required | Default | Valid values | Description |
|------|------|----------|---------|--------------|-------------|
| kafka.kafka_topic | string | True | | | Target topic to push the logs for organisation. |
| kafka.producer_type | string | False | async | ["async", "sync"] | Message sending mode of the producer. |
| kafka.required_acks | integer | False | 1 | [0, 1, -1] | Number of acknowledgements the leader needs to receive for the producer to consider the request complete. This controls the durability of the sent records. The attribute follows the same configuration as the Kafka `acks` attribute. See [Apache Kafka documentation](https://kafka.apache.org/documentation/#producerconfigs_acks) for more. |
| kafka.key | string | False | | | Key used for allocating partitions for messages. |
cluster_name is not documented?
apisix/plugins/error-log-logger.lua (outdated)
```lua
    return false, "failed to send data to Kafka topic: " .. err ..
           ", brokers: " .. core.json.encode(config.kafka.brokers)
end
core.log.info("send data to kafka: ", core.json.encode(log_message[i]))
```
Can we use delay_encode?
```lua
core.log.info("sending a batch logs to kafka brokers: ", core.json.encode(config.kafka.brokers))

local broker_config = {}
broker_config["request_timeout"] = config.timeout * 1000
```
Where is the config from? This function doesn't have an argument called config.
It comes from:

```lua
timeout = {type = "integer", minimum = 1, default = 3},
```

apisix/apisix/plugins/error-log-logger.lua, line 110 in d852953:

```lua
local config = {}
```

and is set by apisix/apisix/plugins/error-log-logger.lua, line 305 in d852953:

```lua
config, err = lrucache(plugin_name, metadata.modifiedIndex, update_filter, metadata.value)
```
apisix/plugins/error-log-logger.lua (outdated)
```lua
    return
else
    -- reuse producer via lrucache to avoid unbalanced partitions of messages in kafka
    prod, err = lrucache(plugin_name .. "#kafka", metadata.modifiedIndex,
```
Better to use a separate lrucache to cache different data.
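The point of a dedicated cache is that config entries and Kafka producers are then versioned and evicted independently, instead of competing for slots in one cache. A dependency-free sketch of the version-keyed pattern (this mimics, but is not, APISIX's `core.lrucache` API):

```lua
-- minimal version-keyed cache: an entry is reused only while its
-- version (here standing in for metadata.modifiedIndex) is unchanged
local function new_cache()
    local store = {}
    return function(key, version, create, ...)
        local item = store[key]
        if item and item.version == version then
            return item.value
        end
        local value, err = create(...)
        if not value then
            return nil, err
        end
        store[key] = {version = version, value = value}
        return value
    end
end

-- separate caches: refreshing one never disturbs the other
local config_cache = new_cache()
local producer_cache = new_cache()

local calls = 0
local function create_producer(brokers)
    calls = calls + 1
    return {brokers = brokers}
end

local p1 = producer_cache("error-log-logger", 7, create_producer, {"127.0.0.1:9092"})
local p2 = producer_cache("error-log-logger", 7, create_producer, {"127.0.0.1:9092"})
-- same version: the cached producer is reused, create_producer ran once
local p3 = producer_cache("error-log-logger", 8, create_producer, {"127.0.0.1:9092"})
-- bumped version: the entry is transparently rebuilt
```

With separate caches the `"#kafka"` key suffix also becomes unnecessary, which is what the comment below points out.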
t/plugin/error-log-logger-kafka.t (outdated)
```
        core.log.error("this is a error message for test2.")
    }
}
--- response_body
```
The response_body in these tests is unnecessary as we don't provide responses at all.
t/plugin/error-log-logger-kafka.t (outdated)
```
        core.log.error("this is a error message for test3.")
    }
}
--- response_body
```
Ditto
I will change the code and update the PR later.
apisix/plugins/error-log-logger.lua (outdated)
```lua
local metadata = plugin.plugin_metadata(plugin_name)
if not (metadata and metadata.value and metadata.modifiedIndex) then
    core.log.info("please set the correct plugin_metadata for ", plugin_name)
    return
```
Should return a boolean here?
apisix/plugins/error-log-logger.lua (outdated)
```lua
local function send_to_kafka(log_message)
    core.log.info("sending a batch logs to kafka brokers: ",
        core.json.delay_encode(config.kafka.brokers))
```
Suggested change: adjust the indentation of `core.json.delay_encode(config.kafka.brokers))` (whitespace only).
apisix/plugins/error-log-logger.lua (outdated)
```lua
end

-- reuse producer via kafka_prod_lrucache to avoid unbalanced partitions of messages in kafka
local prod, err = kafka_prod_lrucache(plugin_name .. "#kafka", metadata.modifiedIndex,
```
We don't need to add the `"#kafka"` suffix, as this cache is individual.
apisix/plugins/error-log-logger.lua (outdated)
```lua
    config.kafka.key, core.json.encode(log_message[i]))
if not ok then
    return false, "failed to send data to Kafka topic: " .. err ..
           ", brokers: " .. core.json.delay_encode(config.kafka.brokers)
```
`delay_encode` is only for log arguments; when building an error string like this, use `core.json.encode`.
apisix/plugins/error-log-logger.lua (outdated)
```lua
    return false, "get kafka producer failed: " .. err
end
core.log.info("kafka cluster name ", config.kafka.cluster_name, ", broker_list[1] port ",
    prod.client.broker_list[1].port)
```
Suggested change: adjust the indentation of `prod.client.broker_list[1].port)` (whitespace only).
apisix/plugins/error-log-logger.lua (outdated)
```lua
broker_config["producer_type"] = config.kafka.producer_type
broker_config["required_acks"] = config.kafka.required_acks

local metadata = plugin.plugin_metadata(plugin_name)
```
It seems that using the config passed from `process` here might create a race: suppose we have c1 (config) and m1 (modifiedIndex) in `process`, and c2/m2 in `send`. It looks like we might use m2 as the key and c1 as the value in the cache below.
> It looks like we might use m2 as key and c1 as value in the cache below.

Do you mean that we need to clone `config` before we use it, like below? Then we can use m2 and c2 in `send`.

```lua
local config = core.table.clone(config)
```
A clone of c1 still has c1's value. Maybe we can get the c2 in `send` like what we have done in `process`.
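In other words, `send` would re-read the plugin metadata itself, so the cache key (modifiedIndex) and the cached value (config) always come from the same snapshot. A self-contained sketch of that fix; the stub helpers below stand in for the real APISIX modules and are assumptions, while the control flow follows the snippets above:

```lua
-- stubs standing in for the real APISIX helpers
local plugin_name = "error-log-logger"
local metadata_store = {value = {timeout = 3}, modifiedIndex = 7}

local function plugin_metadata(_)  -- stand-in for plugin.plugin_metadata
    return metadata_store
end

local cache = {}
local function lrucache(key, version, create, arg)  -- stand-in for core.lrucache
    local hit = cache[key]
    if hit and hit.version == version then
        return hit.value
    end
    local value = create(arg)
    cache[key] = {version = version, value = value}
    return value
end

local function update_filter(value)
    return value
end

-- send re-reads the metadata itself, so the cache key (modifiedIndex)
-- and the cached value (config) derive from the same read: m2 pairs with c2
local function send()
    local metadata = plugin_metadata(plugin_name)
    if not (metadata and metadata.value and metadata.modifiedIndex) then
        return false, "plugin_metadata not set for " .. plugin_name
    end
    local config = lrucache(plugin_name, metadata.modifiedIndex,
                            update_filter, metadata.value)
    return true, config
end

local ok, config = send()
print(ok, config.timeout)
```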
OK, I will change the code and update the PR later.
@spacewander Could you please take a look, thanks.
@soulbird Could you please take a look, thanks.
* upstream/master:
  * feat(elasticsearch-logger): support multi elasticsearch endpoints (apache#8604)
  * chore: use operator `#` instead of string.len (apache#8751)
  * chore: hi 2023 (apache#8748)
  * refactor(admin): stream_routes/upstreams/protos/services/global_rules/consumer_groups/plugin_configs (apache#8661)
  * feat: support send error-log to kafka brokers (apache#8693)
  * chore: upgrade `casbin` to `1.41.5` (apache#8744)
Description

Fixes #8678

Support sending error logs to Kafka brokers. The producer is cached with `plugin_name .. "#kafka"` as the key and `metadata.modifiedIndex` as the version.

Checklist