Dataproc: fix client_info bug, update docstrings. #6408

Merged 1 commit on Nov 6, 2018
261 changes: 192 additions & 69 deletions dataproc/google/cloud/dataproc_v1/gapic/cluster_controller_client.py
@@ -146,9 +146,10 @@ def __init__(self,
        )

        if client_info is None:
-            client_info = (
-                google.api_core.gapic_v1.client_info.DEFAULT_CLIENT_INFO)
-        client_info.gapic_version = _GAPIC_LIBRARY_VERSION
+            client_info = google.api_core.gapic_v1.client_info.ClientInfo(
+                gapic_version=_GAPIC_LIBRARY_VERSION, )
+        else:
+            client_info.gapic_version = _GAPIC_LIBRARY_VERSION
        self._client_info = client_info

        # Parse out the default settings for retry and timeout for each RPC
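The hunk above is the client_info bug fix named in the PR title: the old branch mutated the module-level DEFAULT_CLIENT_INFO, so setting gapic_version leaked into every other client sharing that default object; the new code builds a private ClientInfo instead. A minimal sketch of the difference, assuming google-api-core is installed (the version strings below are illustrative, not from this PR):

    import google.api_core.gapic_v1.client_info as ci

    DEFAULT = ci.DEFAULT_CLIENT_INFO

    # Old behavior: the shared default was mutated in place, so two clients
    # constructed without an explicit client_info stomped on each other.
    DEFAULT.gapic_version = "0.2.0"   # e.g. set by dataproc's client
    DEFAULT.gapic_version = "1.6.0"   # overwritten by some other GAPIC client

    # New behavior: a fresh ClientInfo is created, leaving the default alone.
    info = ci.ClientInfo(gapic_version="0.2.0")
    assert info is not DEFAULT
    print(info.to_user_agent())       # user-agent now includes "gapic/0.2.0"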
@@ -180,13 +181,13 @@ def create_cluster(self,
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
-            >>> # TODO: Initialize ``cluster``:
+            >>> # TODO: Initialize `cluster`:
            >>> cluster = {}
            >>>
            >>> response = client.create_cluster(project_id, region, cluster)
@@ -205,6 +206,7 @@ def create_cluster(self,
                belongs to.
            region (str): Required. The Cloud Dataproc region in which to handle the request.
            cluster (Union[dict, ~google.cloud.dataproc_v1.types.Cluster]): Required. The cluster to create.
+
                If a dict is provided, it must be of the same form as the protobuf
                message :class:`~google.cloud.dataproc_v1.types.Cluster`
            retry (Optional[google.api_core.retry.Retry]): A retry object used
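As the docstring notes, ``cluster`` may be passed as a dict mirroring the Cluster proto. A hedged sketch of a minimal request; the project, region, and field values are placeholders, not taken from this PR:

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient()
    # Minimal dict form of the Cluster message; only a few fields are shown.
    cluster = {
        'project_id': 'my-project',           # placeholder
        'cluster_name': 'example-cluster',    # placeholder
        'config': {'worker_config': {'num_instances': 2}},
    }
    # create_cluster returns a long-running operation; result() blocks until
    # the cluster is actually created.
    operation = client.create_cluster('my-project', 'global', cluster)
    result = operation.result()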
@@ -268,19 +270,19 @@ def update_cluster(self,
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
-            >>> # TODO: Initialize ``cluster_name``:
+            >>> # TODO: Initialize `cluster_name`:
            >>> cluster_name = ''
            >>>
-            >>> # TODO: Initialize ``cluster``:
+            >>> # TODO: Initialize `cluster`:
            >>> cluster = {}
            >>>
-            >>> # TODO: Initialize ``update_mask``:
+            >>> # TODO: Initialize `update_mask`:
            >>> update_mask = {}
            >>>
            >>> response = client.update_cluster(project_id, region, cluster_name, cluster, update_mask)
@@ -300,51 +302,172 @@ def update_cluster(self,
            region (str): Required. The Cloud Dataproc region in which to handle the request.
            cluster_name (str): Required. The cluster name.
            cluster (Union[dict, ~google.cloud.dataproc_v1.types.Cluster]): Required. The changes to the cluster.
+
                If a dict is provided, it must be of the same form as the protobuf
                message :class:`~google.cloud.dataproc_v1.types.Cluster`
-            update_mask (Union[dict, ~google.cloud.dataproc_v1.types.FieldMask]): Required. Specifies the path, relative to ``Cluster``, of
-                the field to update. For example, to change the number of workers
-                in a cluster to 5, the ``update_mask`` parameter would be
-                specified as ``config.worker_config.num_instances``,
-                and the ``PATCH`` request body would specify the new value, as follows:
+            update_mask (Union[dict, ~google.cloud.dataproc_v1.types.FieldMask]): Required. Specifies the path, relative to ``Cluster``, of the field to
+                update. For example, to change the number of workers in a cluster to 5,
+                the ``update_mask`` parameter would be specified as
+                ``config.worker_config.num_instances``, and the ``PATCH`` request body
+                would specify the new value, as follows:

::

-                    {
-                      \"config\":{
-                        \"workerConfig\":{
-                          \"numInstances\":\"5\"
-                        }
-                      }
-                    }
+                    {
+                      "config":{
+                        "workerConfig":{
+                          "numInstances":"5"
+                        }
+                      }
+                    }

-                Similarly, to change the number of preemptible workers in a cluster to 5,
-                the ``update_mask`` parameter would be
-                ``config.secondary_worker_config.num_instances``, and the ``PATCH`` request
-                body would be set as follows:
+                Similarly, to change the number of preemptible workers in a cluster to
+                5, the ``update_mask`` parameter would be
+                ``config.secondary_worker_config.num_instances``, and the ``PATCH``
+                request body would be set as follows:

::

-                    {
-                      \"config\":{
-                        \"secondaryWorkerConfig\":{
-                          \"numInstances\":\"5\"
-                        }
-                      }
-                    }
+                    {
+                      "config":{
+                        "secondaryWorkerConfig":{
+                          "numInstances":"5"
+                        }
+                      }
+                    }

-                .. note::
-
-                    Currently, only the following fields can be updated:
-
-                     * ``labels``: Update labels
-                     * ``config.worker_config.num_instances``: Resize primary
-                       worker group
-                     * ``config.secondary_worker_config.num_instances``: Resize
-                       secondary worker group
-
-                If a dict is provided, it must be of the same form as the protobuf
-                message :class:`~google.cloud.dataproc_v1.types.FieldMask`
+                Note: Currently, only the following fields can be updated:
+
+                .. raw:: html
+
+                  <table>
+
+                .. raw:: html
+
+                  <tbody>
+
+                .. raw:: html
+
+                  <tr>
+
+                .. raw:: html
+
+                  <td>
+
+                Mask
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  <td>
+
+                Purpose
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  </tr>
+
+                .. raw:: html
+
+                  <tr>
+
+                .. raw:: html
+
+                  <td>
+
+                labels
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  <td>
+
+                Update labels
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  </tr>
+
+                .. raw:: html
+
+                  <tr>
+
+                .. raw:: html
+
+                  <td>
+
+                config.worker\_config.num\_instances
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  <td>
+
+                Resize primary worker group
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  </tr>
+
+                .. raw:: html
+
+                  <tr>
+
+                .. raw:: html
+
+                  <td>
+
+                config.secondary\_worker\_config.num\_instances
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  <td>
+
+                Resize secondary worker group
+
+                .. raw:: html
+
+                  </td>
+
+                .. raw:: html
+
+                  </tr>
+
+                .. raw:: html
+
+                  </tbody>
+
+                .. raw:: html
+
+                  </table>
+
+                If a dict is provided, it must be of the same form as the protobuf
+                message :class:`~google.cloud.dataproc_v1.types.FieldMask`
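The dict forms described in the docstring map directly onto the client call. A hedged sketch of resizing the primary worker group to 5 instances; identifiers are placeholders, and the FieldMask dict uses the proto's ``paths`` field:

    update_mask = {'paths': ['config.worker_config.num_instances']}
    cluster = {'config': {'worker_config': {'num_instances': 5}}}
    # update_cluster also returns a long-running operation.
    operation = client.update_cluster(
        'my-project', 'global', 'example-cluster', cluster, update_mask)
    operation.result()  # block until the resize completes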
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
@@ -406,13 +529,13 @@ def delete_cluster(self,
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
-            >>> # TODO: Initialize ``cluster_name``:
+            >>> # TODO: Initialize `cluster_name`:
            >>> cluster_name = ''
            >>>
            >>> response = client.delete_cluster(project_id, region, cluster_name)
@@ -490,13 +613,13 @@ def get_cluster(self,
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
-            >>> # TODO: Initialize ``cluster_name``:
+            >>> # TODO: Initialize `cluster_name`:
            >>> cluster_name = ''
            >>>
            >>> response = client.get_cluster(project_id, region, cluster_name)
@@ -559,10 +682,10 @@ def list_clusters(self,
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
            >>> # Iterate over all results
@@ -574,7 +697,7 @@ def list_clusters(self,
            >>> # Alternatively:
            >>>
            >>> # Iterate over results one page at a time
-            >>> for page in client.list_clusters(project_id, region, options=CallOptions(page_token=INITIAL_PAGE)):
+            >>> for page in client.list_clusters(project_id, region).pages:
            ...     for element in page:
            ...         # process element
            ...         pass
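The updated example drives pagination through the returned iterator's ``pages`` property instead of a ``CallOptions`` page token. A hedged sketch of working with that iterator; the attribute names follow google.api_core.page_iterator and are assumed here, not shown in this diff:

    iterator = client.list_clusters(project_id, region)
    first_page = next(iterator.pages)   # fetch one page of Cluster messages
    token = iterator.next_page_token    # None once all pages are consumed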
@@ -588,20 +711,21 @@ def list_clusters(self,

field = value [AND [field = value]] ...

-                where **field** is one of ``status.state``, ``clusterName``, or ``labels.[KEY]``,
-                and ``[KEY]`` is a label key. **value** can be ``*`` to match all values.
-                ``status.state`` can be one of the following: ``ACTIVE``, ``INACTIVE``,
-                ``CREATING``, ``RUNNING``, ``ERROR``, ``DELETING``, or ``UPDATING``. ``ACTIVE``
-                contains the ``CREATING``, ``UPDATING``, and ``RUNNING`` states. ``INACTIVE``
-                contains the ``DELETING`` and ``ERROR`` states.
-                ``clusterName`` is the name of the cluster provided at creation time.
-                Only the logical ``AND`` operator is supported; space-separated items are
-                treated as having an implicit ``AND`` operator.
+                where **field** is one of ``status.state``, ``clusterName``, or
+                ``labels.[KEY]``, and ``[KEY]`` is a label key. **value** can be ``*``
+                to match all values. ``status.state`` can be one of the following:
+                ``ACTIVE``, ``INACTIVE``, ``CREATING``, ``RUNNING``, ``ERROR``,
+                ``DELETING``, or ``UPDATING``. ``ACTIVE`` contains the ``CREATING``,
+                ``UPDATING``, and ``RUNNING`` states. ``INACTIVE`` contains the
+                ``DELETING`` and ``ERROR`` states. ``clusterName`` is the name of the
+                cluster provided at creation time. Only the logical ``AND`` operator is
+                supported; space-separated items are treated as having an implicit
+                ``AND`` operator.

                Example filter:

-                status.state = ACTIVE AND clusterName = mycluster
-                AND labels.env = staging AND labels.starred = *
+                status.state = ACTIVE AND clusterName = mycluster AND labels.env =
+                staging AND labels.starred = \*
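A hedged sketch of passing such a filter to the client. The parameter name ``filter_`` (trailing underscore, per GAPIC's convention for names shadowing Python builtins) is assumed here rather than shown in this hunk:

    flt = 'status.state = ACTIVE AND labels.env = staging AND labels.starred = *'
    for cluster in client.list_clusters('my-project', 'global', filter_=flt):
        print(cluster.cluster_name)  # each element is a Cluster message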
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
@@ -668,22 +792,21 @@ def diagnose_cluster(self,
            timeout=google.api_core.gapic_v1.method.DEFAULT,
            metadata=None):
        """
-        Gets cluster diagnostic information.
-        After the operation completes, the Operation.response field
-        contains ``DiagnoseClusterOutputLocation``.
+        Gets cluster diagnostic information. After the operation completes, the
+        Operation.response field contains ``DiagnoseClusterOutputLocation``.

        Example:
            >>> from google.cloud import dataproc_v1
            >>>
            >>> client = dataproc_v1.ClusterControllerClient()
            >>>
-            >>> # TODO: Initialize ``project_id``:
+            >>> # TODO: Initialize `project_id`:
            >>> project_id = ''
            >>>
-            >>> # TODO: Initialize ``region``:
+            >>> # TODO: Initialize `region`:
            >>> region = ''
            >>>
-            >>> # TODO: Initialize ``cluster_name``:
+            >>> # TODO: Initialize `cluster_name`:
            >>> cluster_name = ''
            >>>
            >>> response = client.diagnose_cluster(project_id, region, cluster_name)
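A hedged sketch of consuming the long-running operation this method returns; the future API follows google.api_core.operation, and the exact response and metadata shapes are assumptions, not shown in this diff:

    future = client.diagnose_cluster('my-project', 'global', 'example-cluster')
    future.result()  # block until the diagnose operation finishes
    # Per the docstring above, the completed operation's response carries the
    # DiagnoseClusterOutputLocation, i.e. where the diagnostic output landed.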