tidy up some correctness issues reported by go vet #1

Merged
1 commit merged into cockroachdb:master on Feb 14, 2014

Conversation

andybons
Contributor

I use go vet (http://golang.org/cmd/go/#hdr-Run_go_tool_vet_on_packages) as a presubmit hook on almost all of my Go code. It definitely helps with finding correctness problems.

```
@@ -200,10 +200,10 @@ func TestGroups100Keys(t *testing.T) {

	for i := 0; i < 100; i++ {
		if infos[i].Key != minInfos[i].Key {
			t.Errorf("key %d (%f != %f)", i, infos[i].Val, minInfos[i].Val)
```
Member

Wow, that's pretty smart of it.
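
go vet statically checks, among other things, that the verbs in a Printf-style format string match the types of the arguments. A minimal, hypothetical illustration of the kind of call it flags (not code from this diff):

```
package main

import "fmt"

func main() {
	key := "a"
	// go vet flags this call: the %d verb expects an integer, but key is a string.
	fmt.Printf("key %d\n", key)
}
```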

@spencerkimball
Member

How do I set up a git presubmit hook?

spencerkimball added a commit that referenced this pull request Feb 14, 2014
tidy up some correctness issues reported by go vet
spencerkimball merged commit 1e95477 into cockroachdb:master Feb 14, 2014
@andybons
Contributor Author

Take a look at https://github.com/andybons/hipchat, specifically https://github.com/andybons/hipchat/blob/master/githooks/pre-commit and https://github.com/andybons/hipchat/blob/master/initrepo.sh

I’ll include something like this when I send my next pull fixing the lint issues.
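
A git pre-commit hook is just an executable placed at `.git/hooks/pre-commit`; if it exits non-zero, the commit is aborted. The linked files show the actual shell-based setup, but as a minimal sketch of the same idea (hypothetical, not the contents of those files), the hook could be a small Go program that fails the commit whenever go vet reports problems:

```
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Run go vet over every package in the repository being committed.
	cmd := exec.Command("go", "vet", "./...")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// Exiting non-zero from the hook makes git abort the commit.
		fmt.Fprintln(os.Stderr, "pre-commit: go vet reported problems; aborting commit")
		os.Exit(1)
	}
}
```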

tbg added a commit that referenced this pull request May 11, 2016
```
go test -benchtime 1s -run - -bench TrackChoices1280_Cockroach -timeout 5m ./sql -benchmem
10000 127583 ns/op 27432 B/op 145 allocs/op
go test -benchtime 10s -run - -bench TrackChoices1280_Cockroach -timeout 5m ./sql -benchmem
100000 181085 ns/op 27421 B/op 144 allocs/op
go test -benchtime 20s -run - -bench TrackChoices1280_Cockroach -timeout 5m ./sql -benchmem
300000 166564 ns/op 27673 B/op 142 allocs/op
```

```
2016/05/10 15:31:53.900328	0.025948	node 1
15:31:53.900331	 .     3	... node 1
15:31:53.900338	 .     7	... read has no clock uncertainty
15:31:53.900426	 .    87	... executing 1282 requests
15:31:53.900706	 .   280	... read-write path
15:31:53.900707	 .     1	... command queue
15:31:53.901148	 .   441	... left command queue
15:31:53.901151	 .     3	... request leader lease (attempt #1)
15:31:53.901392	 .   240	... prep for ts cache
15:31:53.904028	 .  2637	... applied ts cache
15:31:53.904613	 .   584	... proposed to Raft
15:31:53.905698	 .  1085	... applying batch
15:31:53.905769	 .    72	... checked aborted txn
15:31:53.905959	 .   189	... checked leadership
15:31:53.906425	 .   466	... 1280 blind puts
15:31:53.914285	 .  7860	... executed batch
15:31:53.914340	 .    55	... prep for commit
15:31:53.915122	 .   782	... committed
15:31:53.915177	 .    55	... processed async intents
15:31:53.915178	 .     1	... applied batch
15:31:53.915186	 .     9	... response obtained
```

out.base
tbg added a commit that referenced this pull request May 11, 2016
The insert workload is randomized, so the blind puts were computed but nothing
was actually written blindly.

benchmark                                 old ns/op     new ns/op     delta
BenchmarkTrackChoices1280_Cockroach-8     171532        65505         -61.81%

benchmark                                 old allocs     new allocs     delta
BenchmarkTrackChoices1280_Cockroach-8     142            130            -8.45%

benchmark                                 old bytes     new bytes     delta
BenchmarkTrackChoices1280_Cockroach-8     27241         26018         -4.49%

```
2016/05/10 20:15:44.886244	0.006944	node 1
20:15:44.886246	 .     3	... node 1
20:15:44.886253	 .     6	... read has no clock uncertainty
20:15:44.886334	 .    81	... executing 1282 requests
20:15:44.886543	 .   209	... read-write path
20:15:44.886544	 .     1	... command queue
20:15:44.886963	 .   420	... left command queue
20:15:44.886966	 .     3	... request leader lease (attempt #1)
20:15:44.887265	 .   298	... prep for ts cache
20:15:44.887266	 .     1	... applied ts cache
20:15:44.887268	 .     2	... start marshal
20:15:44.887772	 .   504	... end marshal
20:15:44.887835	 .    63	... begin prop
20:15:44.887852	 .    17	... done prop
20:15:44.887855	 .     3	... proposed to Raft
20:15:44.888949	 .  1094	... applying batch
20:15:44.889015	 .    67	... checked aborted txn
20:15:44.889197	 .   181	... checked leadership
20:15:44.889199	 .     2	... executing as 1PC txn
20:15:44.889644	 .   445	... 1280 blind puts
20:15:44.892245	 .  2601	... executeBatch returns
20:15:44.892308	 .    63	... executed batch
20:15:44.892360	 .    52	... prep for commit
20:15:44.893101	 .   741	... committed
20:15:44.893154	 .    53	... processed async intents
20:15:44.893155	 .     1	... applied batch
20:15:44.893173	 .    18	... response obtained
20:15:44.893174	 .     1	... endCmds begins
20:15:44.893175	 .     1	... begin update tsCache
20:15:44.893176	 .     1	... end update tsCache
20:15:44.893177	 .     2	... removed from cmdQ
20:15:44.893178	 .     1	... endCmds ends
20:15:44.893180	 .     1	... Send returns
```
cuongdo pushed a commit to cuongdo/cockroach that referenced this pull request May 19, 2016
cuongdo pushed a commit to cuongdo/cockroach that referenced this pull request May 19, 2016
cuongdo mentioned this pull request Jul 11, 2017
craig bot pushed a commit that referenced this pull request Apr 7, 2022
craig bot pushed a commit that referenced this pull request Apr 29, 2022
79911: opt: refactor and test lookup join key column and expr generation r=mgartner a=mgartner

#### opt: simplify fetching outer column in CustomFuncs.findComputedColJoinEquality

Previously, `CustomFuncs.findComputedColJoinEquality` used
`CustomFuncs.OuterCols` to retrieve the outer columns of computed column
expressions. `CustomFuncs.OuterCols` returns the cached outer columns in
the expression if it is a `memo.ScalarPropsExpr`, and falls back to
calculating the outer columns with `memo.BuildSharedProps` otherwise.
Computed column expressions are never `memo.ScalarPropsExpr`s, so we just
use `memo.BuildSharedProps` directly.

Release note: None

#### opt: make RemapCols a method on Factory instead of CustomFuncs

Release note: None

#### opt: use partial-index-reduced filters when building lookup expressions

This commit makes a minor change to `generateLookupJoinsImpl`.
Previously, equality filters were extracted from the original `ON`
filters. Now they are extracted from filters that have been reduced by
partial index implication. This has no effect on behavior because
equality filters that reference columns in two tables cannot exist in
partial index predicates, so they will never be eliminated during
partial index implication.

Release note: None

#### opt: moves some lookup join generation logic to lookup join package

This commit adds a new `lookupjoin` package. Logic for determining the
key columns and lookup expressions for lookup joins has been moved to
`lookupjoin.ConstraintBuilder`. The code was moved with as few changes
as possible, and the behavior does not change in any way. This move will
make it easier to test this code in isolation in the future, and allow
for further refactoring.

Release note: None

#### opt: generalize lookupjoin.ConstraintBuilder API

This commit makes the lookupjoin.ConstraintBuilder API more general to
make unit testing easier in a future commit.

Release note: None

#### opt: add data-driven tests for lookupjoin.ConstraintBuilder

Release note: None

#### opt: add lookupjoin.Constraint struct

The `lookupjoin.Constraint` struct has been added to encapsulate
multiple data structures that represent a strategy for constraining a
lookup join.

Release note: None

80511: pkg/cloud/azure: Support specifying Azure environments in storage URLs r=adityamaru a=nlowe-sx

The Azure Storage cloud provider learned a new parameter, AZURE_ENVIRONMENT,
which specifies which Azure environment the storage account in question
belongs to. This allows CockroachDB to back up and restore data to Azure
Storage Accounts outside the main Azure Public Cloud. For backwards
compatibility, this defaults to "AzurePublicCloud" if AZURE_ENVIRONMENT
is not specified.
 
Fixes #47163
 
## Verification Evidence
 
I spun up a single node cluster:
 
```
nlowe@nlowe-z4l:~/projects/github/cockroachdb/cockroach [feat/47163-azure-storage-support-multiple-environments L|✚ 2] [🗓  2022-04-22 08:25:49]
$ bazel run //pkg/cmd/cockroach:cockroach -- start-single-node --insecure
WARNING: Option 'host_javabase' is deprecated
WARNING: Option 'javabase' is deprecated
WARNING: Option 'host_java_toolchain' is deprecated
WARNING: Option 'java_toolchain' is deprecated
INFO: Invocation ID: 11504a98-f767-413a-8994-8f92793c2ecf
INFO: Analyzed target //pkg/cmd/cockroach:cockroach (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //pkg/cmd/cockroach:cockroach up-to-date:
  _bazel/bin/pkg/cmd/cockroach/cockroach_/cockroach
INFO: Elapsed time: 0.358s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
*
* WARNING: ALL SECURITY CONTROLS HAVE BEEN DISABLED!
*
* This mode is intended for non-production testing only.
*
* In this mode:
* - Your cluster is open to any client that can access any of your IP addresses.
* - Intruders with access to your machine or network can observe client-server traffic.
* - Intruders can log in without password and read or write any data in the cluster.
* - Intruders can consume all your server's resources and cause unavailability.
*
*
* INFO: To start a secure server without mandating TLS for clients,
* consider --accept-sql-without-tls instead. For other options, see:
*
* - https://go.crdb.dev/issue-v/53404/dev
* - https://www.cockroachlabs.com/docs/dev/secure-a-cluster.html
*
*
* WARNING: neither --listen-addr nor --advertise-addr was specified.
* The server will advertise "nlowe-z4l" to other nodes, is this routable?
*
* Consider using:
* - for local-only servers:  --listen-addr=localhost
* - for multi-node clusters: --advertise-addr=<host/IP addr>
*
*
CockroachDB node starting at 2022-04-22 15:25:55.461315977 +0000 UTC (took 2.1s)
build:               CCL unknown @  (go1.17.6)
webui:               http://nlowe-z4l:8080/
sql:                 postgresql://root@nlowe-z4l:26257/defaultdb?sslmode=disable
sql (JDBC):          jdbc:postgresql://nlowe-z4l:26257/defaultdb?sslmode=disable&user=root
RPC client flags:    /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach <client cmd> --host=nlowe-z4l:26257 --insecure
logs:                /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/logs
temp dir:            /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/cockroach-temp4100501952
external I/O path:   /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/extern
store[0]:            path=/home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data
storage engine:      pebble
clusterID:           bb3942d7-f241-4d26-aa4a-1bd0d6556e4d
status:              initialized new cluster
nodeID:              1
```
 
I was then able to view the contents of a backup hosted in an Azure
government storage account:
 
```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***&AZURE_ENVIRONMENT=AzureUSGovernmentCloud'] WHERE object_type = 'database';
               object_name
------------------------------------------
  example_database
  ...
(17 rows)
 
Time: 5.859632889s
```
 
Omitting the `AZURE_ENVIRONMENT` parameter, we can see CockroachDB defaults
to the public cloud, where my storage account does not exist:
 
```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***'] WHERE object_type = 'database';
ERROR: reading previous backup layers: unable to list files for specified blob: Get "https://account.blob.core.windows.net/container?comp=list&delimiter=path%2Fto%2Fbackup&restype=container&timeout=61": dial tcp: lookup account.blob.core.windows.net on 8.8.8.8:53: no such host
```
 
## Tests
 
Two new tests are added to verify that the storage account URL is correctly
built from the provided Azure Environment name, and that the Environment
defaults to the Public Cloud if unspecified for backwards compatibility. I
verified the existing tests pass against a government storage account after
specifying `AZURE_ENVIRONMENT` as `AzureUSGovernmentCloud` in the backup URL
query parameters:
 
```
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:38:26]
$ export AZURE_ACCOUNT_NAME=account
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:38:42]
$ export AZURE_ACCOUNT_KEY=***
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:39:25]
$ export AZURE_CONTAINER=container
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:39:48]
$ export AZURE_ENVIRONMENT=AzureUSGovernmentCloud
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:40:15]
$ bazel test --test_output=streamed --test_arg=-test.v --action_env=AZURE_ACCOUNT_NAME --action_env=AZURE_ACCOUNT_KEY --action_env=AZURE_CONTAINER --action_env=AZURE_ENVIRONMENT //pkg/cloud/azure:azure_test
INFO: Invocation ID: aa88a942-f3c7-4df6-bade-8f5f0e18041f
WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time
INFO: Build option --action_env has changed, discarding analysis cache.
INFO: Analyzed target //pkg/cloud/azure:azure_test (468 packages loaded, 16382 targets configured).
INFO: Found 1 test target...
initialized metamorphic constant "span-reuse-rate" with value 28
=== RUN   TestAzure
=== RUN   TestAzure/simple_round_trip
=== RUN   TestAzure/exceeds-4mb-chunk
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#00
    cloud_test_helpers.go:226: read 3345 of file at 4778744
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#1
    cloud_test_helpers.go:226: read 7228 of file at 226589
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#2
    cloud_test_helpers.go:226: read 634 of file at 256284
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#3
    cloud_test_helpers.go:226: read 7546 of file at 3546208
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#4
    cloud_test_helpers.go:226: read 24123 of file at 4821795
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#5
    cloud_test_helpers.go:226: read 16899 of file at 403428
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#6
    cloud_test_helpers.go:226: read 29467 of file at 4886370
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#7
    cloud_test_helpers.go:226: read 11700 of file at 1876920
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#8
    cloud_test_helpers.go:226: read 2928 of file at 489781
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#9
    cloud_test_helpers.go:226: read 19933 of file at 1483342
=== RUN   TestAzure/read-single-file-by-uri
=== RUN   TestAzure/write-single-file-by-uri
=== RUN   TestAzure/file-does-not-exist
=== RUN   TestAzure/List
=== RUN   TestAzure/List/root
=== RUN   TestAzure/List/file-slash-numbers-slash
=== RUN   TestAzure/List/root-slash
=== RUN   TestAzure/List/file
=== RUN   TestAzure/List/file-slash
=== RUN   TestAzure/List/slash-f
=== RUN   TestAzure/List/nothing
=== RUN   TestAzure/List/delim-slash-file-slash
=== RUN   TestAzure/List/delim-data
--- PASS: TestAzure (34.81s)
    --- PASS: TestAzure/simple_round_trip (9.66s)
    --- PASS: TestAzure/exceeds-4mb-chunk (16.45s)
        --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats (6.41s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#00 (0.15s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#1 (0.64s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#2 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#3 (0.60s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#4 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#5 (0.80s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#6 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#7 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#8 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#9 (0.77s)
    --- PASS: TestAzure/read-single-file-by-uri (0.60s)
    --- PASS: TestAzure/write-single-file-by-uri (0.60s)
    --- PASS: TestAzure/file-does-not-exist (1.05s)
    --- PASS: TestAzure/List (2.40s)
        --- PASS: TestAzure/List/root (0.30s)
        --- PASS: TestAzure/List/file-slash-numbers-slash (0.30s)
        --- PASS: TestAzure/List/root-slash (0.30s)
        --- PASS: TestAzure/List/file (0.30s)
        --- PASS: TestAzure/List/file-slash (0.30s)
        --- PASS: TestAzure/List/slash-f (0.30s)
        --- PASS: TestAzure/List/nothing (0.15s)
        --- PASS: TestAzure/List/delim-slash-file-slash (0.15s)
        --- PASS: TestAzure/List/delim-data (0.30s)
=== RUN   TestAntagonisticAzureRead
--- PASS: TestAntagonisticAzureRead (103.90s)
=== RUN   TestParseAzureURL
=== RUN   TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset
=== RUN   TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT
--- PASS: TestParseAzureURL (0.00s)
    --- PASS: TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset (0.00s)
    --- PASS: TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT (0.00s)
=== RUN   TestMakeAzureStorageURLFromEnvironment
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud
--- PASS: TestMakeAzureStorageURLFromEnvironment (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud (0.00s)
PASS
Target //pkg/cloud/azure:azure_test up-to-date:
  _bazel/bin/pkg/cloud/azure/azure_test_/azure_test
INFO: Elapsed time: 159.865s, Critical Path: 152.35s
INFO: 66 processes: 2 internal, 64 darwin-sandbox.
INFO: Build completed successfully, 66 total actions
//pkg/cloud/azure:azure_test                                             PASSED in 139.9s
 
INFO: Build completed successfully, 66 total actions
```

80705: kvclient: fix gRPC stream leak in rangefeed client r=tbg,srosenberg a=erikgrinaker

When the DistSender rangefeed client received a `RangeFeedError` message
and propagated a retryable error up the stack, it would fail to close
the existing gRPC stream, causing stream/goroutine leaks.

Release note (bug fix): Fixed a goroutine leak when internal rangefeed
clients received certain kinds of retriable errors.
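
One standard way to avoid this kind of leak in a gRPC client is to give each stream its own cancellable context and cancel it on every return path, including the one that surfaces a retryable error. A generic sketch of that pattern (illustrative only, not the DistSender code):

```
package main

import (
	"context"
	"errors"
	"fmt"
)

var errRetryable = errors.New("retryable rangefeed error")

// runOneStream stands in for a single rangefeed attempt against one range.
func runOneStream(ctx context.Context) error {
	streamCtx, cancel := context.WithCancel(ctx)
	// Cancelling the per-stream context releases the underlying gRPC stream
	// and its goroutines even when we bail out early with a retryable error.
	defer cancel()

	// ... open the gRPC stream with streamCtx and consume events here ...
	_ = streamCtx
	return errRetryable
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		if err := runOneStream(context.Background()); errors.Is(err, errRetryable) {
			fmt.Printf("attempt %d: retrying after: %v\n", attempt, err)
			continue
		}
		break
	}
}
```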

80762: joberror: add ConnectionReset/ConnectionRefused to retryable err allow list r=miretskiy a=adityamaru

Bulk jobs will no longer treat errors matching `sysutil.IsErrConnectionReset`
and `sysutil.IsErrConnectionRefused` as permanent. IMPORT,
RESTORE, and BACKUP will treat these errors as transient and retry.

Release note: None
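
A hypothetical stand-in for this kind of classification (the real predicates are in CockroachDB's `sysutil` package; this sketch matches plain `syscall` errnos instead):

```
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// isTransientNetworkErr reports whether err looks like a connection reset or
// connection refused, which a bulk job could retry rather than fail on.
func isTransientNetworkErr(err error) bool {
	return errors.Is(err, syscall.ECONNRESET) || errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	err := fmt.Errorf("exporting backup file: %w", syscall.ECONNRESET)
	fmt.Println(isTransientNetworkErr(err)) // true: treat as transient and retry
}
```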

80773: backupccl: break dependency to testcluster r=irfansharif a=irfansharif

Noticed we were building testing library packages when building CRDB
binaries.

    $ bazel query "somepath(//pkg/cmd/cockroach-short, //pkg/testutils/testcluster)"
    //pkg/cmd/cockroach-short:cockroach-short
    //pkg/cmd/cockroach-short:cockroach-short_lib
    //pkg/ccl:ccl
    //pkg/ccl/backupccl:backupccl
    //pkg/testutils/testcluster:testcluster

Release note: None

Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Co-authored-by: Nathan Lowe <nathan.lowe@spacex.com>
Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Co-authored-by: Aditya Maru <adityamaru@gmail.com>
Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>
stevendanna added a commit to msbutler/cockroach that referenced this pull request Mar 5, 2024
pav-kv pushed a commit to pav-kv/cockroach that referenced this pull request Mar 5, 2024
itsbilal added a commit to itsbilal/cockroach that referenced this pull request May 2, 2024
For some reason, `StopServiceForVirtualCluster` fails with this error on
drt clusters:

```
20:23:41 node_kill.go:51: operation status: killing node 1  with signal 15
20:23:41 cluster.go:2148: stoping virtual cluster
20:23:41 operation_impl.go:128: operation failure cockroachdb#1: no service for virtual cluster ""
```

The debug message has a bug: the virtual cluster is set to "system", but it
seems like the service discovery process isn't able to determine the cockroach
process based on DNS settings in the drt project. This change makes the
node-kill operation more DNS-agnostic by looking for the cockroach process.

Epic: none

Release note: None
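
A minimal sketch of a pkill-based kill along these lines (a hypothetical helper, not the roachtest operation itself): match the cockroach process by its command line and deliver the requested signal, with no dependency on service discovery:

```
package main

import (
	"fmt"
	"os/exec"
)

// killCockroach sends the given signal (e.g. 15 for SIGTERM) to any process
// whose full command line matches "cockroach", mirroring `pkill -15 -f cockroach`.
func killCockroach(signal int) error {
	cmd := exec.Command("pkill", fmt.Sprintf("-%d", signal), "-f", "cockroach")
	return cmd.Run()
}

func main() {
	if err := killCockroach(15); err != nil {
		// pkill exits non-zero when no process matched.
		fmt.Println("pkill failed:", err)
	}
}
```
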
craig bot pushed a commit that referenced this pull request May 2, 2024
123517: roachtest: move node-kill operation to pkill/pgrep-based kill approach r=renatolabs a=itsbilal

For some reason, `StopServiceForVirtualCluster` fails with this error on drt clusters:

```
20:23:41 node_kill.go:51: operation status: killing node 1  with signal 15
20:23:41 cluster.go:2148: stoping virtual cluster
20:23:41 operation_impl.go:128: operation failure #1: no service for virtual cluster ""
```

The debug message has a bug: the virtual cluster is set to "system", but it seems like the service discovery process isn't able to determine the cockroach process based on DNS settings in the drt project. This change makes the node-kill operation more DNS-agnostic by looking for the cockroach process.

Epic: none

Release note: None

Co-authored-by: Bilal Akhtar <bilal@cockroachlabs.com>