proposal: cmd/go: allow test binaries to identify as uncacheable #23799
If a test opens a file that is not under GOPATH or GOROOT, then its result will not be cached. Does that seem sufficient? |
As in, I could open |
No, wait, sorry, I got it wrong. If a test opens a file that is under GOPATH or GOROOT, then if that file changes, the cache is ignored. If a test opens a file that is not under GOPATH or GOROOT, that does not affect caching. |
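To make the file-tracking behavior concrete, here is a minimal sketch (the package name, path, and file name are invented for illustration) of a test whose cached result is tied to a file it opens:

```go
package foo_test

import (
	"os"
	"testing"
)

func TestGolden(t *testing.T) {
	// Because the test opens this file, the go command records it as an
	// input to the cached result; editing the file forces the test to re-run.
	want, err := os.ReadFile("testdata/golden.txt")
	if err != nil {
		t.Fatal(err)
	}
	if len(want) == 0 {
		t.Fatal("golden file is empty")
	}
}
```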
Similar problem: we test an executable out of process by building it with github.com/onsi/gomega/gexec and running it against golden files. The test result is cached even if the executable's sources were changed. |
https://tip.golang.org/doc/go1.11#gocache says this:
This is one such issue. It'd be nice if there was something formally supported by Go 1.11 in advance of a major workaround being removed in Go 1.12. I included a few suggested approaches in my original message. Who's the decider for this? |
I'm not convinced this is a property of test cases. The test is rerun if the binary has changed. If the binary has not changed, then I think it is reasonable to say "this binary passed the last time" and leave it out, especially if you've changed one line in one source file and are running go test ./... across many packages.

I realize that in some situations you do really want the test to run even if there was a successful run with exactly the same binary in the past. When that's the case, be explicit about it and say go test -count=1.

Leaving open to look at again in Go 1.12 cycle but very likely the answer is no. |
I don't understand your response. This is only for a small minority of test cases, which would have to opt in to this. If a test case has said "don't cache my result", then I do want to run that test every time, because that's exactly what the test case has indicated. Now if that's a problem, it's a problem because the test has external dependencies (or whatever), not because it has opted out of the test caching.

The use case of Using I see this analogous to |
I think for people working on dependencies down in the leaves it is critical that running large sets of tests not re-run everything whose inputs haven't changed.

It would be OK to have a 'mark this as depending on external things' as long as that result was still cached by default, and then we could also add a 'rerun all the tests that depend on external things' as an explicit flag. Not sure what that would look like exactly.

Leaving for Go 1.13. |
Or cmd/go could automatically re-run such tests in the current module, but not in dependencies. |
Or, more uniformly, it could re-run tests that matched patterns ( |
As a workaround, I suppose you could always get the intended effect by using a package external whose contents change on every go generate run:

```go
package external

//go:generate bash -c 'echo -e "package external\nconst S = \"$(head -c 8 /dev/random | base64)\"" > random.go'
```

importing it (blank import) from the test package:

```go
package foo_test

import _ "example.com/external"
```

and then running:

go generate external && go test ./... |
I don't know if this comment here should be considered a distinct issue, or if a solution for "identify tests as uncacheable" would also be the solution for this. In short, it seems

Based on your total count of direct/indirect dependencies, the chances start to climb relatively high that at least someone in your chain will have a test that is "expected" to fail unless you follow the steps in some README or set up some external system. Ideally, there would be some way to mark a test as requiring additional setup or otherwise "not expected to succeed" as part of an ordinary test run.

After raising this as a concern, @bcmills pointed me to this issue.

edit: #31310 and #30595 are different than this issue here, but they might at least be partially related depending on approach. |
In #30595 (comment), Ian wrote:
The context there was discussion about separating integration and unit tests. Perhaps that flag-based approach could be part of the solution here for tests that rely on external dependencies or are otherwise uncacheable, as well as to help with the concern expressed in #23799 (comment) about

If that is going to be part of the path forward, it would help to get some type of convention established and promoted. It could be tied to the desire to increase the usefulness of

The further you get away from your direct dependencies, the less likely you are to know about some quirk of how to avoid running tests that rely on external resources or manual setup. It would be nice to make it easier to avoid those, without the solution being "read 50 READMEs". |
This may be true in the majority of cases, but if the binary's purpose is to integrate with implicitly-linked dependencies, then previous test results only tell us that the code still compiles and worked at some point. By skipping the tests, go's test machinery is assuming, on behalf of the developer running the tests, that the external dependencies didn't change.

The go toolset already has strong mechanisms that can be used to prevent 'integration' tests from running as part of the default suite of tests:

```go
// +build integration

package myservice_test

func Test123(t *testing.T) {
	....
}
```

Using build tags we can work around this issue by running the tests twice:
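For example (the exact invocation is an assumption based on the tag above; -count=1 is the documented way to bypass the test cache):

```sh
# Unit tests only: fast and cacheable, integration tests excluded by the build tag.
go test ./...

# Integration tests: force execution by bypassing the cache.
go test -tags=integration -count=1 ./...
```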
This method works but requires extra developer communication on how to properly run a project's test suite. Whereas, if the test developer could disable the cache for specific tests:

```go
// +build integration

package myservice_test

func Test123(t *testing.T) {
	t.NoCache()
	....
}
```

the go tool would never assume that implicitly-linked dependencies went unchanged. Running all tests by default would still work as intended, and developers could opt into the integration tests via the build tag without having to use -count=1. |
@dudleycodes, note that the cache key for a test result includes the values of all environment variables read by the test. If the test enables the “integration” tests based on an environment variable, then no special tag is needed (and the compile step will be easier to cache). |
@bcmills interesting solution - but it still seems like a workaround, as it would require telling downstream developers to set the env var and change its value between test runs. Whereas if we could explicitly and selectively disable the cache for specific tests, this intent/knowledge would be expressed in the test code, with no further human intervention needed. |
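A minimal sketch of the environment-variable approach described above (the variable name RUN_INTEGRATION is invented for illustration); because the test reads the variable, its value becomes part of the cache key, and exporting a different value forces the test to run again:

```go
package myservice_test

import (
	"os"
	"testing"
)

func TestIntegration(t *testing.T) {
	// Reading the variable makes it part of this test's cache key.
	if os.Getenv("RUN_INTEGRATION") == "" {
		t.Skip("set RUN_INTEGRATION=1 to run integration tests")
	}
	// ... exercise the external service here ...
}
```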
There is still definitely a need for this. Any additional thoughts? |
@stevenh What is your use case? Thanks. |
In our case we have a test which builds and runs a Docker container with components from the repo and then runs tests against it. We hit a case today where local tests were all passing because the result of the test was cached: go didn't realise that it needed to be re-run. It happened to get caught by CI, but that was more luck than anything, due to the different cache contents.

Being able to either disable test caching for this test or add dependencies manually would allow this case to be handled. Does that help? |
Thanks. |
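Regarding the container use case above: given that the go command tracks files a test opens within its own module (discussed further below), one possible workaround is to have the test explicitly open the sources the container is built from, so that a change to any of them invalidates the cached result. A minimal sketch, with the path invented for illustration:

```go
package container_test

import (
	"os"
	"path/filepath"
	"testing"
)

// trackInputs opens every file under root so 'go test' records them as
// inputs and re-runs the test when any of them changes.
func trackInputs(t *testing.T, root string) {
	t.Helper()
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		return f.Close()
	})
	if err != nil {
		t.Fatalf("walking %s: %v", root, err)
	}
}

func TestContainer(t *testing.T) {
	trackInputs(t, "../cmd/myservice") // hypothetical path to the sources baked into the image
	// ... build the image, start the container, and run tests against it ...
}
```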
I'll add our use case: We have some Go code that verifies our Helm charts. The Helm charts and this Go code live in the same repository, effectively under

The problem is that you can change the Helm charts, run the tests, and receive a cached pass.

Some thoughts on workarounds and ideas:

I tried various hacks involving

I understand the concern about how this could affect libraries, and it seems smart to consider that, but it's also a bummer that there's no workaround (other than documenting -count=1). |
I wonder...if you add to your tests something like:
does it now automatically pick up on changed files? |
@abuchanan-airbyte Note that if a test opens a file in the same Go module, then changing the file will invalidate the cache, forcing the test to run again (technically it just checks the file modification time and size, not the actual contents). So it sounds like you are describing a case where the files are in the same repository but are not in the same Go module. |
@ianlancetaylor hmm. Does that work when files are discovered via globbing, and new files are added? |
Yes, it should. It tracks each opened file. |
Ah ok, interesting. I hadn't come across any notes of that behavior yet. The helm charts and the Go test code are currently in sibling directories, but maybe I can rearrange that by moving the |
The docs are buried in a lot of other detail, but they are at https://pkg.go.dev/cmd/go#hdr-Test_packages. |
Right. But suppose the text files that exist are

FWIW, that's why I suggested the embed directive as a way to explicitly depend on what the globs resolve to. |
From what I recall, directory contents are also part of the cache key, so a glob that does a ReadDir call would result in the entire directory listing becoming part of the cached inputs. |
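For reference, a rough sketch of the embed-based approach suggested above (the directory and pattern are invented for illustration). Because the matched files are compiled into the test binary, adding, removing, or editing any of them produces a new binary, which is never served from the test cache:

```go
package charts_test

import (
	"embed"
	"testing"
)

// chartFiles makes every matching chart an input to the test binary itself.
//
//go:embed testdata/charts/*.yaml
var chartFiles embed.FS

func TestCharts(t *testing.T) {
	entries, err := chartFiles.ReadDir("testdata/charts")
	if err != nil {
		t.Fatal(err)
	}
	for _, e := range entries {
		t.Log("validating", e.Name())
		// ... validate each chart here ...
	}
}
```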
The new test caching stuff is neat, except when a test has an external dependency (e.g. it is testing code that hits a web service), and we don't want the test's result to be cached (so that we're always exercising the code against the real world).

There are ways to disable the test caching from the user's perspective (e.g. passing -count=1), but not from the test itself, as far as I can tell. It'd be nice if tests in this position could do something to indicate to the go tool that its result and output should not be cached.

Some ideas:
- A method on *testing.T that can be invoked to signal this.
- Writing something to a known location (e.g. $GOCACHE/something).
- A test naming convention (e.g. the test name contains an External substring).