Add sample "input" packages #325
@@ -0,0 +1,18 @@
paths:
{{#each paths}}
  - {{this}}
{{/each}}

{{#if tags}}
tags:
{{#each tags as |tag i|}}
  - {{tag}}
{{/each}}
{{/if}}

{{#if pipeline}}
pipeline: {{pipeline}}
{{/if}}
I think there are a few minimum additions that need to be in every input package, and many of them are already in other integrations as well:

- Pipelines
- Tags (they are slightly wrong here and should follow the usual structure)
- Support for removing the host fields, since an input can ingest data both from a local host and from an external host
- And custom processors

(See the sketch after this file's diff for the kind of snippets meant here.)

Custom processing has been intentionally left out of the MVP; it would also require custom mappings. Also, depending on how this is implemented, this wouldn't require anything in the spec: the pipelines and the mappings could be configured in Fleet and installed by it.

I added a few of your ideas to the sample package.
data_stream:
  dataset: {{data_stream.dataset}}
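A hedged sketch of the additions the reviewer lists above, following patterns used in other Elastic integrations (the `{{#contains "forwarded" tags}}` handling also appears in the httpjson template later in this PR; the `processors` variable name is an assumption, not part of this package):

```handlebars
{{!-- Optional ingest pipeline selection (already present in this template) --}}
{{#if pipeline}}
pipeline: {{pipeline}}
{{/if}}

{{!-- User-defined tags --}}
{{#if tags}}
tags:
{{#each tags as |tag i|}}
  - {{tag}}
{{/each}}
{{/if}}

{{!-- Drop the local host fields when events are forwarded from another host --}}
{{#contains "forwarded" tags}}
publisher_pipeline.disable_host: true
{{/contains}}

{{!-- Custom processors passed through as raw YAML (assumed var name) --}}
{{#if processors}}
processors:
{{processors}}
{{/if}}
```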
@@ -0,0 +1,6 @@
# newer versions go on top
- version: "0.0.1"
  changes:
    - description: Initial draft of the package
      type: enhancement
      link: https://github.com/elastic/package-spec/pull/325
@@ -0,0 +1 @@
# Custom Logs
@@ -0,0 +1,4 @@
- name: input.name
  type: constant_keyword
  description: Sample field to be added.
  value: logs
@@ -0,0 +1,46 @@
format_version: 1.0.0
name: custom_logs
title: Custom Logs
description: >-
  Read lines from active log files with Elastic Agent.
type: input
version: 1.2.3
release: ga
license: basic
categories:
  - custom
policy_templates:
Do policy templates make sense for input packages? It is really convenient that we can reuse the same logic we already have, so I'm not necessarily asking to change anything here, but let's have a discussion.

It looks like we can improve on this. These are the properties available in integrations' policy templates; clearly, we don't need all of them, so I assumed that this reduced set is enough. Good question, but I believe the answer might be the same as for integration packages.

Does the spec allow using the same name but, for example, when it is defined in an input package, not allowing certain params?

There will be totally new spec files here, so yes. We may consider extracting some common parts or referring to the existing definitions. Once we agree on the look, I will write down the spec files. I don't want to iterate on both at the same time, as it usually means more work.
  - name: first_policy_template
I didn't put all of the options in here.

Yeah, I wouldn't see a reason not to support the same options. Unless there is something we want to deprecate from the data stream manifests.

So, we have the options coming from the data stream manifests. As we don't know in advance what the target index/ingest pipeline would be, we can't set those properties. Am I right?

Umm, I don't see these used in the current integrations repo. What are the use cases for these options in integration packages? Could they apply to input packages? In any case, maybe you are right in not adding them.

If you can find any package here, then most likely it's APM or Endpoint.
    type: logs
    title: Custom log file
    description: Collect your custom log files.
    input: logfile
    template_path: input.yml.hbs
    vars:
Since we have all the settings here, does that mean we won't have any data stream at all?

No data stream directories, but with the Kibana UI you will be able to select/create the target data stream.
      - name: paths
        type: text
        title: Paths
        multi: true
        required: true
        show_user: true
      - name: tags
        type: text
        title: Tags
        multi: true
        required: true
        show_user: false
      - name: ignore_older
        type: text
        title: Ignore events older than
        required: false
        default: 72h
icons:
  - src: "/img/sample-logo.svg"
    type: "image/svg+xml"
screenshots:
  - src: "/img/sample-screenshot.png"
    title: "Sample screenshot"
    size: "600x600"
    type: "image/png"
owner:
  github: elastic/integrations
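To illustrate how this manifest and the `input.yml.hbs` template above fit together, assume a policy created from this package with `paths` set to `/var/log/my-app/*.log`, `tags` set to `my-app`, and `custom_logs.my_app` chosen as the dataset in the Kibana UI (all sample values, not part of the PR). The template would then render to roughly:

```yaml
paths:
  - /var/log/my-app/*.log

tags:
  - my-app

data_stream:
  dataset: custom_logs.my_app
```

With `type: logs`, the events would land in the `logs-custom_logs.my_app-<namespace>` data stream (standard Fleet naming, assumed here rather than stated in the PR).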
@@ -0,0 +1,139 @@
config_version: 2
data_stream:
  dataset: {{data_stream.dataset}}
interval: {{request_interval}}
{{#unless oauth_id}}
{{#if username}}
auth.basic.user: {{username}}
{{/if}}
{{#if password}}
auth.basic.password: {{password}}
{{/if}}
{{/unless}}

{{#if oauth_id}}
{{#if oauth_id}}
auth.oauth2.client.id: {{oauth_id}}
{{/if}}
{{#if oauth_secret}}
auth.oauth2.client.secret: {{oauth_secret}}
{{/if}}
{{#if oauth_token_url}}
auth.oauth2.token_url: {{oauth_token_url}}
{{/if}}
{{#if oauth_provider}}
auth.oauth2.provider: {{oauth_provider}}
{{/if}}
{{#if oauth_scopes}}
auth.oauth2.scopes:
{{#each oauth_scopes as |scope i|}}
  - {{scope}}
{{/each}}
{{/if}}
{{#if oauth_google_credentials_file}}
auth.oauth2.google.credentials_file: {{oauth_google_credentials_file}}
{{/if}}
{{#if oauth_google_credentials_json}}
auth.oauth2.google.credentials_json: '{{oauth_google_credentials_json}}'
{{/if}}
{{#if oauth_google_jwt_file}}
auth.oauth2.google.jwt_file: {{oauth_google_jwt_file}}
{{/if}}
{{#if oauth_azure_tenant_id}}
auth.oauth2.azure.tenant_id: {{oauth_azure_tenant_id}}
{{/if}}
{{#if oauth_azure_resource}}
auth.oauth2.azure.resource: {{oauth_azure_resource}}
{{/if}}
{{#if oauth_endpoint_params}}
auth.oauth2.endpoint_params:
  {{oauth_endpoint_params}}
{{/if}}
{{/if}}

request.url: {{request_url}}
request.method: {{request_method}}
{{#if request_body}}
request.body:
  {{request_body}}
{{/if}}
{{#if request_transforms}}
request.transforms:
  {{request_transforms}}
{{/if}}
{{#if request_ssl}}
request.ssl:
  {{request_ssl}}
{{/if}}
{{#if request_encode_as}}
request.encode_as: {{request_encode_as}}
{{/if}}
{{#if request_timeout}}
request.timeout: {{request_timeout}}
{{/if}}
{{#if request_proxy_url}}
request.proxy_url: {{request_proxy_url}}
{{/if}}
{{#if request_retry_max_attempts}}
request.retry.max_attempts: {{request_retry_max_attempts}}
{{/if}}
{{#if request_retry_wait_min}}
request.retry.wait_min: {{request_retry_wait_min}}
{{/if}}
{{#if request_retry_wait_max}}
request.retry.wait_max: {{request_retry_wait_max}}
{{/if}}
{{#if request_redirect_forward_headers}}
request.redirect.forward_headers: {{request_redirect_forward_headers}}
{{/if}}
{{#if request_redirect_headers_ban_list}}
request.redirect.headers_ban_list:
{{#each request_redirect_headers_ban_list as |item i|}}
  - {{item}}
{{/each}}
{{/if}}
{{#if request_redirect_max_redirects}}
request.redirect.max_redirects: {{request_redirect_max_redirects}}
{{/if}}
{{#if request_rate_limit_limit}}
request.rate_limit.limit: {{request_rate_limit_limit}}
{{/if}}
{{#if request_rate_limit_reset}}
request.rate_limit.reset: {{request_rate_limit_reset}}
{{/if}}
{{#if request_rate_limit_remaining}}
request.rate_limit.remaining: {{request_rate_limit_remaining}}
{{/if}}

{{#if response_transforms}}
response.transforms:
  {{response_transforms}}
{{/if}}
{{#if response_split}}
response.split:
  {{response_split}}
{{/if}}
{{#if response_pagination}}
response.pagination: {{response_pagination}}
{{/if}}
{{#if response_decode_as}}
response.decode_as: {{response_decode_as}}
{{/if}}
{{#if response_request_body_on_pagination}}
response.request_body_on_pagination: {{response_request_body_on_pagination}}
{{/if}}

{{#if cursor}}
cursor:
  {{cursor}}
{{/if}}

{{#if tags}}
tags:
{{#each tags as |tag i|}}
  - {{tag}}
{{/each}}
{{/if}}
{{#contains "forwarded" tags}}
publisher_pipeline.disable_host: true
{{/contains}}
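As a worked example of the template above (all values are samples, not part of the PR), a policy that sets only `request_url`, `request_method`, `request_interval`, a dataset, and a `tags` list containing `forwarded` would render to roughly:

```yaml
config_version: 2
data_stream:
  dataset: httpjson.generic
interval: 1m

request.url: https://api.example.com/v1/events
request.method: GET

tags:
  - forwarded
publisher_pipeline.disable_host: true
```

The last line comes from the `{{#contains "forwarded" tags}}` block: when events are forwarded from another system, the local host fields are not added.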
@@ -0,0 +1,6 @@
# newer versions go on top
- version: "0.0.1"
  changes:
    - description: Initial draft of the package
      type: enhancement
      link: https://github.com/elastic/package-spec/pull/325
@@ -0,0 +1 @@
# HTTPJSON input
@@ -0,0 +1,6 @@
- name: input.type
  description: Type of Filebeat input.
  type: keyword
- name: tags
  type: keyword
  description: User defined tags
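For illustration only (sample values, assuming the usual Beats event shape), an event ingested through this input would carry the fields mapped above along these lines:

```yaml
# Relevant fields on an ingested event (sample values):
input.type: httpjson
tags:
  - forwarded
data_stream.dataset: httpjson.generic
```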
I would remove this from here. OK, this is only an example in the config, but it may look like we support custom processing using pipelines, which is something we are not doing in the MVP.
But we already support both processors and ingest pipelines in the current input packages; are we planning on removing them, @jsoriano?
We have settings in input-like packages that happen to support custom processing, but this is not completely integrated with the solution, and it is not enforced to be coherent with other inputs, or to require field mappings for possible new fields generated by this processing.

If we follow this path, every input package needs to implement its own way to support custom processing and custom fields. Now that we start with a clean slate for this new package type, we would like to provide a more integrated solution for this.

Custom pipelines support is still an open discussion for packages in general.

We consider that inputs without any custom processing already provide value in use cases of centralized log collection, and we are currently planning to go in this direction for the initial MVP. The next steps after that would be towards supporting custom processing.

It is, though, a good question how we are going to migrate users from the current input-like integration packages to the new input packages. Depending on the answer, we may need to maintain these settings for backwards compatibility, maybe deprecating them, and eventually removing them when we have a complete solution for this.
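For context on the settings discussed here, existing input-like integration packages typically expose custom processing through a `processors` variable of type `yaml`; a hedged sketch of such a definition (an assumption about those packages, not something this PR adds) looks like:

```yaml
- name: processors
  type: yaml
  title: Processors
  multi: false
  required: false
  show_user: false
  description: >-
    Processors are used to reduce the number of exported fields or to
    enhance events with metadata. Processing happens in the agent,
    before the data is shipped.
```

On the template side, such a variable is usually passed through verbatim, as in the sketch shown earlier after the custom logs template.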