
fix(automl): pass params to underlying client #9794

Merged
merged 9 commits into from
Nov 15, 2019

Conversation

sirtorry
Contributor

passes params from tables_client to gapic prediction_service_client
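The change described above can be sketched as follows. This is a hypothetical illustration of the pattern, not the actual google-cloud-automl code: the wrapper's predict method forwards its params argument to the underlying GAPIC client instead of dropping it. All class and method bodies here are illustrative stand-ins.

```python
class PredictionServiceClient:
    """Stand-in for the generated (GAPIC) prediction client."""

    def predict(self, name, payload, params=None):
        # Echo the arguments so forwarding can be observed.
        return {"name": name, "payload": payload, "params": params}


class TablesClient:
    """Stand-in for the hand-written wrapper client."""

    def __init__(self, prediction_client):
        self._prediction_client = prediction_client

    def predict(self, model_name, inputs, params=None):
        payload = {"row": {"values": inputs}}
        # Before a fix like this PR's, `params` was silently ignored here;
        # now it is passed through to the underlying client.
        return self._prediction_client.predict(model_name, payload, params)
```

With this wiring, domain-specific parameters such as a feature_importance flag reach the backend rather than being lost in the wrapper layer.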

@sirtorry sirtorry requested a review from busunkim96 as a code owner November 14, 2019 00:48
@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label Nov 14, 2019
Contributor

@busunkim96 busunkim96 left a comment


LGTM with some nits. Is this something to be watching out for every time there's an update to AutoML?

Co-Authored-By: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>
@sirtorry
Contributor Author

> LGTM with some nits. Is this something to be watching out for every time there's an update to AutoML?

Looking at previous PRs, there's certainly precedent at this point; #9779 and #9491 are examples.

Additional domain-specific parameters, any string must be up to
25000 characters long.
``feature_importance`` - (boolean) Whether
[feature\_importance][[google.cloud.automl.v1beta1.TablesModelColumnInfo.feature\_importance]
Contributor

@helinwang helinwang Nov 14, 2019


Non-blocking: curious if there is documentation / a spec about how to write docstrings with links.
E.g.,

  • why the \ in feature\_importance is needed,
  • what (-s) means, and
  • it seems different from markdown: [text][[link] is used instead of [text][link].

Contributor


This formatting is from the proto comments. I believe it's some flavor of markdown that makes the comments render nicely on cloud.google.com. See https://github.com/googleapis/googleapis/tree/master/google/cloud/automl/v1 and https://cloud.google.com/automl/docs/reference/rpc/google.cloud.automl.v1.

When you need to add docstrings, please follow the Google Python Style Guide
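As a quick illustration of the style guide's docstring conventions mentioned above (the function name and parameters here are hypothetical, not from this library):

```python
def predict(model_name, inputs, params=None):
    """Makes a prediction against a deployed model.

    Example of Google-style docstring formatting: a one-line summary,
    then Args/Returns sections with indented descriptions.

    Args:
        model_name (str): The resource name of the model.
        inputs (list): The row values to predict on.
        params (Optional[dict]): Additional domain-specific parameters.

    Returns:
        dict: A placeholder prediction response.
    """
    return {"model": model_name, "inputs": inputs, "params": params}
```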

@helinwang
Contributor

LGTM

@helinwang
Contributor

Regarding "Is this something to be watching out for every time there's an update to AutoML?":
Any recommendation for how to keep the backend and the Python SDK in sync?
The current process is to manually try our best to make sure they're in sync.

@busunkim96
Contributor

@helinwang Would it be possible to write some tests to help catch these incompatibilities?

@helinwang
Contributor

helinwang commented Nov 14, 2019

@busunkim96 I think it's hard to do because:
A new backend feature is added in piper first. Tests for these features should be added during feature development, so they have to be in piper too. If we add the test on GitHub, then by the time we're adding it we already know the new feature is coming, so the test does not help catch new features. It only catches regressions: still good to have, but it does not solve this problem.
However, the Python SDK lives on GitHub, not in piper, and piper code can't reference the Python SDK, so there is no way to test it there.
Does that make sense?

Do you know if there is a plan to mirror this repo into piper? I think that would solve this problem. We could set up integration tests that reference the piper mirror code with TAP.

@sirtorry
Contributor Author

@busunkim96 please merge if it looks good to you

@busunkim96
Contributor

You're free to set up a mirror if you'd like, but that is not something the client libraries team is currently staffed to do.

I'll make sure to tag both of you in future PRs so you can manually inspect changes.

@busunkim96
Contributor

@sirtorry Could you do nox -s blacken to appease the linter?

@helinwang
Contributor

Thanks, @busunkim96 .

@busunkim96
Contributor

I missed this earlier, but it looks like the unit tests are failing.

Contributor

@helinwang helinwang left a comment


LGTM. Torry, could you update the unit test?
The test is failing with errors like:

value = AssertionError("expected call not found.\nExpected: predict('my_model', {'row'..., {'row': {'values': [{'string_value': '1'}, {'string_value': '2'}]}}, None)",)
from_value = None

    def raise_from(value, from_value):
>       raise value
E       AssertionError: expected call not found.
E       Expected: predict('my_model', {'row': {'values': [{'string_value': '1'}, {'string_value': '2'}]}})
E       Actual: predict('my_model', {'row': {'values': [{'string_value': '1'}, {'string_value': '2'}]}}, None)
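The failure above is the usual mock-assertion mismatch: once predict() forwards an extra argument, the test's expected call must include it too. A minimal sketch of the pattern with unittest.mock (the call shape mirrors the trace above; the surrounding code is illustrative, not the library's actual test):

```python
from unittest import mock

# Stand-in for the mocked prediction client used by the test.
client = mock.Mock()

# The code under test now forwards params (None by default) as a
# third positional argument, matching the "Actual" line in the trace.
client.predict("my_model", {"row": {"values": ["1", "2"]}}, None)

# The old assertion omitted the third argument and raised
# AssertionError ("expected call not found"). The updated assertion
# must match the new call signature:
client.predict.assert_called_with(
    "my_model", {"row": {"values": ["1", "2"]}}, None
)
```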

@sirtorry
Contributor Author

sirtorry commented Nov 15, 2019

could also add unit tests for non-empty param passing if y'all want

@helinwang
Contributor

I think that's fine, since the method just forwards the param to predict.
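For completeness, the optional test discussed above (non-empty param passing) could look something like this sketch. The delegating function and names are hypothetical stand-ins, not the real google-cloud-automl test code:

```python
from unittest import mock


def test_predict_forwards_params():
    """Verify a non-empty params dict is forwarded untouched."""
    gapic_client = mock.Mock()
    params = {"feature_importance": "true"}

    # Imagine this is the wrapper's predict() delegating internally:
    def predict(model, payload, params=None):
        return gapic_client.predict(model, payload, params)

    predict("my_model", {"row": {"values": ["1"]}}, params)

    # The underlying client must receive the same params object.
    gapic_client.predict.assert_called_with(
        "my_model", {"row": {"values": ["1"]}}, params
    )
```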
