
Fix: duplicated pd_servers.name in the topology before truly deploy #922

Merged: 4 commits merged into pingcap:master on Nov 23, 2020

Conversation

@anywhy (Contributor) commented Nov 19, 2020

What problem does this PR solve?

Fix #764: duplicated pd_servers.name values in the topology were not rejected before deploy.

What is changed and how it works?
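
The PR adds a validation step that rejects a topology in which two pd_servers entries share a name. A minimal self-contained sketch of the idea, adapted from the diff reviewed below (the struct shapes and the error text are illustrative assumptions, not tiup's exact code):

package main

import "fmt"

// Illustrative stand-ins for the real spec types in pkg/cluster/spec.
type PDSpec struct{ Name string }
type Specification struct{ PDServers []PDSpec }

// validatePdPodNames mirrors the counting approach in the diff below:
// the topology is rejected as soon as a non-empty name repeats.
func (s *Specification) validatePdPodNames() error {
	cnt := make(map[string]int)
	for _, pd := range s.PDServers {
		if pd.Name == "" {
			continue // unnamed instances are skipped, as in the review thread below
		}
		cnt[pd.Name]++
		if cnt[pd.Name] > 1 {
			return fmt.Errorf("pd_servers.name %q is duplicated in the topology", pd.Name)
		}
	}
	return nil
}

func main() {
	s := Specification{PDServers: []PDSpec{{Name: "pd-1"}, {Name: "pd-1"}}}
	fmt.Println(s.validatePdPodNames())
}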

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Code changes

  • Has exported function/method change
  • Has exported variable/fields change
  • Has interface methods change
  • Has persistent data change

Side effects

  • Possible performance regression
  • Increased code complexity
  • Breaking backward compatibility

Related changes

  • Need to cherry-pick to the release branch
  • Need to update the documentation

Release notes:

NONE

@CLAassistant commented Nov 19, 2020

CLA assistant check
All committers have signed the CLA.

@codecov-io commented Nov 19, 2020

Codecov Report

Merging #922 (4124d33) into master (abfd109) will increase coverage by 0.00%.
The diff coverage is 77.77%.


@@           Coverage Diff           @@
##           master     #922   +/-   ##
=======================================
  Coverage   55.22%   55.22%           
=======================================
  Files         261      261           
  Lines       19253    19262    +9     
=======================================
+ Hits        10632    10638    +6     
- Misses       6917     6919    +2     
- Partials     1704     1705    +1     
Flag         Coverage Δ
cluster      43.50% <33.33%> (+<0.01%) ⬆️
dm           24.08% <0.00%>  (-0.03%) ⬇️
integrate    50.04% <33.33%> (-0.02%) ⬇️
playground   20.23% <ø>      (ø)
tiup         17.12% <0.00%>  (-0.05%) ⬇️
unittest     21.67% <77.77%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                 Coverage Δ
pkg/cluster/spec/validate.go   92.73% <77.77%> (-0.31%) ⬇️
pkg/repository/store/txn.go    59.37% <0.00%>  (-2.35%) ⬇️
pkg/cluster/api/binlog.go      37.16% <0.00%>  (-1.77%) ⬇️
pkg/cluster/manager.go         67.60% <0.00%>  (-0.15%) ⬇️
pkg/cluster/api/pdapi.go       61.30% <0.00%>  (+1.23%) ⬆️
pkg/utils/http_client.go       72.22% <0.00%>  (+5.55%) ⬆️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update abfd109...4124d33.

@anywhy force-pushed the validate-podname branch 2 times, most recently from 9918e4b to ec88bb4, on November 20, 2020 03:02
@lucklove (Member) left a comment

Rest LGTM

@@ -794,6 +794,22 @@ func (s *Specification) validateTLSEnabled() error {
	return nil
}

func (s *Specification) validatePdPodNames() error {
	// check pdserver pod name
	if len(s.PDServers) > 1 {
@lucklove (Member): This check is redundant; prefer to remove the if statement.

@@ -794,6 +794,22 @@ func (s *Specification) validateTLSEnabled() error {
	return nil
}

func (s *Specification) validatePdPodNames() error {
@lucklove (Member): Prefer the name validatePDNames: "Pod" is a Kubernetes term, and TiDB really can be deployed to Kubernetes, so we'd better rename this to avoid misunderstanding.

func (s *Specification) validatePdPodNames() error {
	// check pdserver pod name
	if len(s.PDServers) > 1 {
		cnt := make(map[string]int)
@lucklove (Member): Prefer to use StringSet:

names := set.NewStringSet()

for _, pd := range s.PDServers {
    if pd.Name == "" {
        continue
    }
    if names.Exist(pd.Name) {
        return errors.Errorf("xxxxxx")
    }
    names.Insert(pd.Name)
}
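
Putting the three suggestions together, a self-contained sketch of how the final function could look (the StringSet stand-in below mimics the NewStringSet/Exist/Insert calls from the comment above; tiup's real type lives in pkg/set, and the error text is an assumption):

package main

import "fmt"

// Minimal stand-in for tiup's set.StringSet (pkg/set), exposing the
// NewStringSet/Exist/Insert calls used in the review comment above.
type StringSet map[string]struct{}

func NewStringSet() StringSet           { return StringSet{} }
func (s StringSet) Exist(k string) bool { _, ok := s[k]; return ok }
func (s StringSet) Insert(k string)     { s[k] = struct{}{} }

type PDSpec struct{ Name string }
type Specification struct{ PDServers []PDSpec }

// validatePDNames applies all three suggestions: the function is renamed
// (avoiding the Kubernetes term "Pod"), the redundant len() check is gone,
// and a set lookup replaces the counter map. Error text is illustrative.
func (s *Specification) validatePDNames() error {
	names := NewStringSet()
	for _, pd := range s.PDServers {
		if pd.Name == "" {
			continue
		}
		if names.Exist(pd.Name) {
			return fmt.Errorf("component pd_servers.name %q is duplicated", pd.Name)
		}
		names.Insert(pd.Name)
	}
	return nil
}

func main() {
	s := Specification{PDServers: []PDSpec{{Name: "pd-1"}, {Name: "pd-2"}, {Name: "pd-1"}}}
	fmt.Println(s.validatePDNames())
}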

@anywhy (Contributor, Author) replied:

got it

@ti-srebot added the status/LGT1 label (Indicates that a PR has LGTM 1.) on Nov 23, 2020
@AstroProfundis added the good-first-issue and type/enhancement labels (Categorizes issue or PR as related to an enhancement.) on Nov 23, 2020
@AstroProfundis (Contributor) commented:

/merge

@ti-srebot added the status/can-merge label (Indicates a PR has been approved by a committer.) on Nov 23, 2020
@ti-srebot (Contributor) commented:

Your auto merge job has been accepted, waiting for:

  • 860
  • 916

@ti-srebot (Contributor) commented:

/run-all-tests

@ti-srebot (Contributor) commented:

@anywhy merge failed.

@AstroProfundis merged commit e8ecfc5 into pingcap:master on Nov 23, 2020
@anywhy deleted the validate-podname branch on November 24, 2020 09:08
AstroProfundis added a commit that referenced this pull request Nov 27, 2020
…e cluster(#764) (#922)

Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
Co-authored-by: Allen Zhong <zhongbenli@pingcap.com>
@lucklove lucklove added this to the v1.2.5 milestone Nov 30, 2020
Labels
first-time-contributor · status/can-merge · status/LGT1 · type/enhancement
Projects
None yet
Development

Successfully merging this pull request may close these issues.

check duplicated pd_servers.name in the topology before truly deploy the cluster
6 participants