feat: add debug logging of repo sub processes #1735
Conversation
Debugging what the repo rules are up to can be difficult because Bazel doesn't provide many facilities to inspect what they're up to. This adds an environment variable, `RULES_PYTHON_REPO_DEBUG`, that, when set, will make our repo rules print out detailed information about the subprocesses they are running. This also makes failed commands dump much more comprehensive information.
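As a rough illustration of the behavior described above (a Python sketch, not the actual Starlark implementation; the helper names and the exact truthiness semantics are assumptions), the gating works along these lines:

```python
import os

# Hypothetical sketch of how a repo rule might gate verbose subprocess
# logging on the RULES_PYTHON_REPO_DEBUG environment variable.
# Assumption: any non-empty value enables debug output.
def debug_enabled():
    return bool(os.environ.get("RULES_PYTHON_REPO_DEBUG"))

def run_subprocess(cmd):
    if debug_enabled():
        print("running: {}".format(" ".join(cmd)))
    # ... actually execute the command here ...

os.environ["RULES_PYTHON_REPO_DEBUG"] = "1"
run_subprocess(["python", "-m", "pip", "--version"])
```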
Thanks, this is really great!
```
args,
repo_utils.execute_checked(
    rctx,
    op = "whl_library.ResolveRequirement({}, {})".format(rctx.attr.name, rctx.attr.requirement),
```
nit: we could also have these semantics for the op, where the first item is the function name and the remaining items are parameters, formatted as: `op[0] + "(" + ",".join(op[1:]) + ")"`
Suggested change:
```
- op = "whl_library.ResolveRequirement({}, {})".format(rctx.attr.name, rctx.attr.requirement),
+ op = ["whl_library.ResolveRequirement", rctx.attr.name, rctx.attr.requirement],
```
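The suggested list semantics would render like this (a minimal Python sketch; `format_op` and the argument values are illustrative, not rules_python API):

```python
def format_op(op):
    # Suggested semantics: a plain string passes through unchanged; a list
    # formats as op[0] + "(" + ",".join(op[1:]) + ")".
    if isinstance(op, str):
        return op
    return op[0] + "(" + ",".join([str(p) for p in op[1:]]) + ")"

# Both spellings render the same operation description:
print(format_op(["whl_library.ResolveRequirement", "pip_numpy", "numpy"]))
# → whl_library.ResolveRequirement(pip_numpy,numpy)
```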
I like the idea. It also gave me another idea: automatically add the "whl_library" part (the repo rule name) by looking for a `_repo_name` attribute we set on all our repo rules.
But, I'm strapped for time today, so I'm going to merge this as-is.
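That follow-up idea could look roughly like this (a hedged Python sketch; `_repo_name`, the `Attrs` container, and `make_op` are hypothetical names, not rules_python API):

```python
class Attrs:
    # Hypothetical private attribute stamped on every repo rule.
    _repo_name = "whl_library"
    name = "pip_numpy"
    requirement = "numpy"

def make_op(attrs, func, *params):
    # Prefix the op with the repo rule name taken from _repo_name,
    # so callers only spell the function name and its parameters.
    return "{}.{}({})".format(attrs._repo_name, func, ",".join(params))

print(make_op(Attrs(), "ResolveRequirement", Attrs.name, Attrs.requirement))
# → whl_library.ResolveRequirement(pip_numpy,numpy)
```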
I tried writing some tests for this but it started to spiral out of control pretty quickly (I tried mocking `rctx`, `fail`, and `print` 😵💫) and didn't test very well. I manually ran `bazel build ...` with the env set and forcing failures instead.
This was driven by the recent report of failures on Windows during a repo rule.