[Metricbeat] gcp: add dataproc metricset (#30008)
* Add dataproc metricset

* Update metricbeat docs

(cherry picked from commit e61173f)

# Conflicts:
#	metricbeat/docs/fields.asciidoc
#	metricbeat/docs/modules/gcp.asciidoc
#	metricbeat/docs/modules_list.asciidoc
#	x-pack/metricbeat/metricbeat.reference.yml
#	x-pack/metricbeat/module/gcp/_meta/config.yml
#	x-pack/metricbeat/module/gcp/constants.go
#	x-pack/metricbeat/module/gcp/fields.go
#	x-pack/metricbeat/module/gcp/metrics/metrics_requester.go
#	x-pack/metricbeat/module/gcp/metrics/response_parser.go
#	x-pack/metricbeat/module/gcp/module.yml
#	x-pack/metricbeat/modules.d/gcp.yml.disabled
gpop63 authored and mergify-bot committed Feb 10, 2022
1 parent a13c934 commit f1011d6
Showing 20 changed files with 649 additions and 0 deletions.
1 change: 1 addition & 0 deletions CHANGELOG-developer.next.asciidoc
Original file line number Diff line number Diff line change
@@ -131,6 +131,7 @@ The list below covers the major changes between 7.0.0-rc2 and master only.
- Add support for `credentials_json` in `gcp` module, all metricsets {pull}29584[29584]
- Add gcp firestore metricset. {pull}29918[29918]
- Added TESTING_FILEBEAT_FILEPATTERN option for filebeat module pytests {pull}30103[30103]
- Add gcp dataproc metricset. {pull}30008[30008]

==== Deprecated

240 changes: 240 additions & 0 deletions metricbeat/docs/fields.asciidoc
@@ -34394,6 +34394,246 @@ type: long
--

[float]
=== dataproc

Google Cloud Dataproc metrics


*`gcp.dataproc.batch.spark.executors.count`*::
+
--
Indicates the number of Batch Spark executors.

type: long

--

*`gcp.dataproc.cluster.hdfs.datanodes.count`*::
+
--
Indicates the number of HDFS DataNodes that are running inside a cluster.

type: long

--

*`gcp.dataproc.cluster.hdfs.storage_capacity.value`*::
+
--
Indicates the capacity of the HDFS system running on the cluster, in GB.

type: double

--

*`gcp.dataproc.cluster.hdfs.storage_utilization.value`*::
+
--
The percentage of HDFS storage currently used.

type: double

--

*`gcp.dataproc.cluster.hdfs.unhealthy_blocks.count`*::
+
--
Indicates the number of unhealthy blocks inside the cluster.

type: long

--

*`gcp.dataproc.cluster.job.completion_time.value`*::
+
--
The time jobs take to complete, measured from the time the user submits a job to the time Dataproc reports it as completed.

type: long

--

*`gcp.dataproc.cluster.job.duration.value`*::
+
--
The time jobs have spent in a given state.

type: long

--

*`gcp.dataproc.cluster.job.failed.count`*::
+
--
Indicates the number of jobs that have failed on a cluster.

type: long

--

*`gcp.dataproc.cluster.job.running.count`*::
+
--
Indicates the number of jobs that are running on a cluster.

type: long

--

*`gcp.dataproc.cluster.job.submitted.count`*::
+
--
Indicates the number of jobs that have been submitted to a cluster.

type: long

--

*`gcp.dataproc.cluster.operation.completion_time.value`*::
+
--
The time operations take to complete, measured from the time the user submits an operation to the time Dataproc reports it as completed.

type: long

--

*`gcp.dataproc.cluster.operation.duration.value`*::
+
--
The time operations have spent in a given state.

type: long

--

*`gcp.dataproc.cluster.operation.failed.count`*::
+
--
Indicates the number of operations that have failed on a cluster.

type: long

--

*`gcp.dataproc.cluster.operation.running.count`*::
+
--
Indicates the number of operations that are running on a cluster.

type: long

--

*`gcp.dataproc.cluster.operation.submitted.count`*::
+
--
Indicates the number of operations that have been submitted to a cluster.

type: long

--

*`gcp.dataproc.cluster.yarn.allocated_memory_percentage.value`*::
+
--
The percentage of YARN memory that is allocated.

type: double

--

*`gcp.dataproc.cluster.yarn.apps.count`*::
+
--
Indicates the number of active YARN applications.

type: long

--

*`gcp.dataproc.cluster.yarn.containers.count`*::
+
--
Indicates the number of YARN containers.

type: long

--

*`gcp.dataproc.cluster.yarn.memory_size.value`*::
+
--
Indicates the YARN memory size in GB.

type: double

--

*`gcp.dataproc.cluster.yarn.nodemanagers.count`*::
+
--
Indicates the number of YARN NodeManagers running inside the cluster.

type: long

--

*`gcp.dataproc.cluster.yarn.pending_memory_size.value`*::
+
--
The current memory request, in GB, that is pending to be fulfilled by the scheduler.

type: double

--

*`gcp.dataproc.cluster.yarn.virtual_cores.count`*::
+
--
Indicates the number of virtual cores in YARN.

type: long

--
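The field names above follow a mechanical convention: a Stackdriver metric type such as `dataproc.googleapis.com/cluster/yarn/nodemanagers` is flattened under a `gcp.<service>.` prefix, with counter-style metrics taking a `.count` suffix and gauge-style readings a `.value` suffix. A minimal sketch of that flattening — `fieldName` is a hypothetical helper for illustration, not the module's actual logic in `response_parser.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// fieldName flattens a GCP metric type into an ECS-style field name.
// Counter-like metrics get a ".count" suffix, gauge-like metrics ".value".
// Illustrative sketch only; the real Metricbeat mapping code differs.
func fieldName(metricType string, isCounter bool) string {
	// "dataproc.googleapis.com/cluster/yarn/nodemanagers"
	//   -> service "dataproc", path "cluster/yarn/nodemanagers"
	service := strings.SplitN(metricType, ".", 2)[0]
	path := metricType[strings.Index(metricType, "/")+1:]
	field := "gcp." + service + "." + strings.ReplaceAll(path, "/", ".")
	if isCounter {
		return field + ".count"
	}
	return field + ".value"
}

func main() {
	// -> gcp.dataproc.cluster.yarn.nodemanagers.count
	fmt.Println(fieldName("dataproc.googleapis.com/cluster/yarn/nodemanagers", true))
	// -> gcp.dataproc.cluster.hdfs.storage_utilization.value
	fmt.Println(fieldName("dataproc.googleapis.com/cluster/hdfs/storage_utilization", false))
}
```

Running the sketch reproduces the documented names for the YARN NodeManagers counter and the HDFS storage-utilization gauge.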

[float]
=== firestore

Google Cloud Firestore metrics


*`gcp.firestore.document.delete.count`*::
+
--
The number of successful document deletes.

type: long

--

*`gcp.firestore.document.read.count`*::
+
--
The number of successful document reads from queries or lookups.

type: long

--

*`gcp.firestore.document.write.count`*::
+
--
The number of successful document writes.

type: long

--

[float]
=== gke

`gke` contains metrics scraped from the GCP Stackdriver API, covering monitoring metrics for GCP GKE
19 changes: 19 additions & 0 deletions metricbeat/docs/modules/gcp.asciidoc
@@ -275,6 +275,11 @@ metricbeat.modules:
metricsets:
- pubsub
- loadbalancing
- firestore
- dataproc
zone: "us-central1-a"
project_id: "your project id"
credentials_file_path: "your JSON credentials file path"
@@ -339,6 +344,13 @@ The following metricsets are available:

* <<metricbeat-metricset-gcp-compute,compute>>

* <<metricbeat-metricset-gcp-dataproc,dataproc>>
* <<metricbeat-metricset-gcp-firestore,firestore>>
* <<metricbeat-metricset-gcp-gke,gke>>
* <<metricbeat-metricset-gcp-loadbalancing,loadbalancing>>
@@ -353,6 +365,13 @@ include::gcp/billing.asciidoc[]
include::gcp/compute.asciidoc[]
include::gcp/dataproc.asciidoc[]

include::gcp/firestore.asciidoc[]

include::gcp/gke.asciidoc[]

include::gcp/loadbalancing.asciidoc[]
24 changes: 24 additions & 0 deletions metricbeat/docs/modules/gcp/dataproc.asciidoc
@@ -0,0 +1,24 @@
////
This file is generated! See scripts/mage/docs_collector.go
////

[[metricbeat-metricset-gcp-dataproc]]
[role="xpack"]
=== Google Cloud Platform dataproc metricset

beta[]

include::../../../../x-pack/metricbeat/module/gcp/dataproc/_meta/docs.asciidoc[]


==== Fields

For a description of each field in the metricset, see the
<<exported-fields-gcp,exported fields>> section.

Here is an example document generated by this metricset:

[source,json]
----
include::../../../../x-pack/metricbeat/module/gcp/dataproc/_meta/data.json[]
----
7 changes: 7 additions & 0 deletions metricbeat/docs/modules_list.asciidoc
@@ -116,8 +116,15 @@ This file is generated! See scripts/mage/docs_collector.go
|<<metricbeat-metricset-etcd-self,self>>
|<<metricbeat-metricset-etcd-store,store>>
|<<metricbeat-module-gcp,Google Cloud Platform>> beta[] |image:./images/icon-yes.png[Prebuilt dashboards are available] |
.9+| .9+| |<<metricbeat-metricset-gcp-billing,billing>> beta[]
|<<metricbeat-metricset-gcp-compute,compute>> beta[]
|<<metricbeat-metricset-gcp-dataproc,dataproc>> beta[]
|<<metricbeat-metricset-gcp-firestore,firestore>> beta[]
|<<metricbeat-metricset-gcp-gke,gke>> beta[]
|<<metricbeat-metricset-gcp-loadbalancing,loadbalancing>> beta[]
|<<metricbeat-metricset-gcp-metrics,metrics>> beta[]
5 changes: 5 additions & 0 deletions x-pack/metricbeat/metricbeat.reference.yml
@@ -539,6 +539,11 @@ metricbeat.modules:
metricsets:
- pubsub
- loadbalancing
- firestore
- dataproc
zone: "us-central1-a"
project_id: "your project id"
credentials_file_path: "your JSON credentials file path"
5 changes: 5 additions & 0 deletions x-pack/metricbeat/module/gcp/_meta/config.yml
@@ -11,6 +11,11 @@
metricsets:
- pubsub
- loadbalancing
- firestore
- dataproc
zone: "us-central1-a"
project_id: "your project id"
credentials_file_path: "your JSON credentials file path"
5 changes: 5 additions & 0 deletions x-pack/metricbeat/module/gcp/constants.go
@@ -22,6 +22,11 @@ const (
ServiceLoadBalancing = "loadbalancing"
ServicePubsub = "pubsub"
ServiceStorage = "storage"
ServiceFirestore = "firestore"
ServiceDataproc = "dataproc"
)

// Paths within the GCP monitoring.TimeSeries response, if converted to JSON, where you can find each ECS field required for the output event
Expand Down
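The new `ServiceDataproc` constant is consumed by the metrics requester (one of the conflicted files, `metrics_requester.go`), which needs the Stackdriver metric-type prefix for each service. A hedged sketch of how such a prefix might be derived — `metricTypePrefix` is a hypothetical helper, not the module's real code:

```go
package main

import "fmt"

// Service name constants, as added to constants.go in this commit.
const (
	ServiceFirestore = "firestore"
	ServiceDataproc  = "dataproc"
)

// metricTypePrefix returns the Stackdriver metric-type prefix for a GCP
// service name, e.g. "dataproc" -> "dataproc.googleapis.com/".
// Illustrative only; the real requester logic differs.
func metricTypePrefix(service string) string {
	return service + ".googleapis.com/"
}

func main() {
	// -> dataproc.googleapis.com/cluster/yarn/apps
	fmt.Println(metricTypePrefix(ServiceDataproc) + "cluster/yarn/apps")
}
```

With a prefix like this, a metricset only has to append the metric path (for example `cluster/yarn/apps`) to form the full metric type it requests from the Cloud Monitoring API.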