fix the data structure of Annotations and Labels
Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
LiZhenCheng9527 committed Oct 24, 2023
1 parent 748073b commit e07b11f
Showing 5 changed files with 53 additions and 125 deletions.
20 changes: 8 additions & 12 deletions docs/content/en/references/fleet_v1alpha1_types.html
@@ -910,25 +910,24 @@ <h3 id="fleet.kurator.dev/v1alpha1.MgrSpec">MgrSpec
<td>
<code>annotations</code><br>
<em>
-github.com/rook/rook/pkg/apis/ceph.rook.io/v1.AnnotationsSpec
+map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
-<p>Use Annotations/labels to achieve the goal of placing two managers on different nodes.
-The annotations-related configuration to add/set on each Pod related object.</p>
+<p>The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.</p>
</td>
</tr>
<tr>
<td>
<code>labels</code><br>
<em>
-github.com/rook/rook/pkg/apis/ceph.rook.io/v1.LabelsSpec
+map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
-<p>The labels-related configuration to add/set on each Pod related object.</p>
+<p>The label-related configuration to add/set on each Pod related object. Including Pod, Deployment.</p>
</td>
</tr>
<tr>
@@ -980,28 +979,25 @@ <h3 id="fleet.kurator.dev/v1alpha1.MonSpec">MonSpec
<td>
<code>annotations</code><br>
<em>
-github.com/rook/rook/pkg/apis/ceph.rook.io/v1.AnnotationsSpec
+map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
-<p>In a ceph cluster, it is recommended that the monitor pod be deployed on a different node in order to ensure high availability of data.
-In practice, you can label the node where the monitor pod is deployed with Annotation/Labels.
-Then use kubernetes node affinity rules to achieve the goal of deploying the monitor to different nodes.
-The annotations-related configuration to add/set on each Pod related object.</p>
+<p>The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.</p>
</td>
</tr>
<tr>
<td>
<code>labels</code><br>
<em>
-github.com/rook/rook/pkg/apis/ceph.rook.io/v1.LabelsSpec
+map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Similar to Annotation, but more graphical than Annotation.
-The labels-related configuration to add/set on each Pod related object.</p>
+The label-related configuration to add/set on each Pod related object. Including Pod, Deployment.</p>
</td>
</tr>
<tr>
24 changes: 10 additions & 14 deletions docs/proposals/distributedstorage/distributedstorage.md
@@ -244,25 +244,22 @@ When the number of monitors is 3, it requires 2 active monitors to work properly
type MonSpec struct {
// Count is the number of Ceph monitors.
// Default is three and preferably an odd number.
-// +kubebuilder:validation:Minimum=0
+// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=9
// +optional
Count *int `json:"count,omitempty"`
-// In a ceph cluster, it is recommended that the monitor pod be deployed on a different node in order to ensure high availability of data.
-// In practice, you can label the node where the monitor pod is deployed with Annotation/Labels.
-// Then use kubernetes node affinity rules to achieve the goal of deploying the monitor to different nodes.
-// The annotations-related configuration to add/set on each Pod related object.
+// The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +nullable
// +optional
-Annotations rookv1.AnnotationsSpec `json:"annotations,omitempty"`
+Annotations map[string]string `json:"annotations,omitempty"`

// Similar to Annotation, but more graphical than Annotation.
-// The labels-related configuration to add/set on each Pod related object.
+// The label-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +kubebuilder:pruning:PreserveUnknownFields
// +nullable
// +optional
-Labels rookv1.LabelsSpec `json:"labels,omitempty"`
+Labels map[string]string `json:"labels,omitempty"`

// The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).
// +kubebuilder:pruning:PreserveUnknownFields
@@ -274,22 +271,21 @@ type MonSpec struct {
type MgrSpec struct {
// Count is the number of manager to run
// Default is two, one for use and one for standby.
-// +kubebuilder:validation:Minimum=0
+// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=2
// +optional
Count *int `json:"count,omitempty"`

-// Use Annotations/labels to achieve the goal of placing two managers on different nodes.
-// The annotations-related configuration to add/set on each Pod related object.
+// The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +nullable
// +optional
-Annotations rookv1.AnnotationsSpec `json:"annotations,omitempty"`
+Annotations map[string]string `json:"annotations,omitempty"`

-// The labels-related configuration to add/set on each Pod related object.
+// The labels-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +kubebuilder:pruning:PreserveUnknownFields
// +nullable
// +optional
-Labels rookv1.LabelsSpec `json:"labels,omitempty"`
+Labels map[string]string `json:"labels,omitempty"`

// The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).
// +kubebuilder:pruning:PreserveUnknownFields
54 changes: 17 additions & 37 deletions manifests/charts/fleet-manager/crds/fleet.kurator.dev_fleet.yaml
@@ -254,31 +254,23 @@ spec:
properties:
annotations:
additionalProperties:
-additionalProperties:
-type: string
-description: Annotations are annotations
-type: object
-description: Use Annotations/labels to achieve the
-goal of placing two managers on different nodes.
-The annotations-related configuration to add/set
-on each Pod related object.
+type: string
+description: The annotation-related configuration
+to add/set on each Pod related object. Including
+Pod, Deployment.
nullable: true
type: object
x-kubernetes-preserve-unknown-fields: true
count:
description: Count is the number of manager to run
Default is two, one for use and one for standby.
maximum: 2
-minimum: 0
+minimum: 1
type: integer
labels:
additionalProperties:
-additionalProperties:
-type: string
-description: Labels are label for a given daemons
-type: object
-description: The labels-related configuration to add/set
-on each Pod related object.
+type: string
+description: The label-related configuration to add/set
+on each Pod related object. Including Pod, Deployment.
nullable: true
type: object
x-kubernetes-preserve-unknown-fields: true
@@ -1663,37 +1655,25 @@ spec:
properties:
annotations:
additionalProperties:
-additionalProperties:
-type: string
-description: Annotations are annotations
-type: object
-description: In a ceph cluster, it is recommended
-that the monitor pod be deployed on a different
-node in order to ensure high availability of data.
-In practice, you can label the node where the monitor
-pod is deployed with Annotation/Labels. Then use
-kubernetes node affinity rules to achieve the goal
-of deploying the monitor to different nodes. The
-annotations-related configuration to add/set on
-each Pod related object.
+type: string
+description: The annotation-related configuration
+to add/set on each Pod related object. Including
+Pod, Deployment.
nullable: true
type: object
x-kubernetes-preserve-unknown-fields: true
count:
description: Count is the number of Ceph monitors.
Default is three and preferably an odd number.
maximum: 9
-minimum: 0
+minimum: 1
type: integer
labels:
additionalProperties:
-additionalProperties:
-type: string
-description: Labels are label for a given daemons
-type: object
+type: string
description: Similar to Annotation, but more graphical
-than Annotation. The labels-related configuration
-to add/set on each Pod related object.
+than Annotation. The label-related configuration
+to add/set on each Pod related object. Including
+Pod, Deployment.
nullable: true
type: object
x-kubernetes-preserve-unknown-fields: true
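With the flattened schema above, a Fleet manifest supplies annotations and labels as plain string maps rather than maps of maps. A hypothetical MonSpec fragment (the annotation key and values are invented for illustration; only the field shapes come from the schema in this file):

```yaml
# Illustrative MonSpec fragment. annotations/labels are now flat string maps,
# matching "additionalProperties: type: string" in the CRD above.
count: 3
annotations:
  example.kurator.dev/zone: zone-a
labels:
  app: ceph-mon
```

Before this commit, the same fields expected a nested map keyed by daemon type (the rook `AnnotationsSpec`/`LabelsSpec` shape), which the generated schema rendered as `additionalProperties: additionalProperties: type: string`.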
24 changes: 10 additions & 14 deletions pkg/apis/fleet/v1alpha1/types.go
@@ -389,25 +389,22 @@ type DistributedStorage struct {
type MonSpec struct {
// Count is the number of Ceph monitors.
// Default is three and preferably an odd number.
-// +kubebuilder:validation:Minimum=0
+// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=9
// +optional
Count *int `json:"count,omitempty"`

-// In a ceph cluster, it is recommended that the monitor pod be deployed on a different node in order to ensure high availability of data.
-// In practice, you can label the node where the monitor pod is deployed with Annotation/Labels.
-// Then use kubernetes node affinity rules to achieve the goal of deploying the monitor to different nodes.
-// The annotations-related configuration to add/set on each Pod related object.
+// The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +nullable
// +optional
-Annotations rookv1.AnnotationsSpec `json:"annotations,omitempty"`
+Annotations map[string]string `json:"annotations,omitempty"`

// Similar to Annotation, but more graphical than Annotation.
-// The labels-related configuration to add/set on each Pod related object.
+// The label-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +kubebuilder:pruning:PreserveUnknownFields
// +nullable
// +optional
-Labels rookv1.LabelsSpec `json:"labels,omitempty"`
+Labels map[string]string `json:"labels,omitempty"`

// The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).
// +kubebuilder:pruning:PreserveUnknownFields
@@ -419,22 +416,21 @@ type MonSpec struct {
type MgrSpec struct {
// Count is the number of manager to run
// Default is two, one for use and one for standby.
-// +kubebuilder:validation:Minimum=0
+// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=2
// +optional
Count *int `json:"count,omitempty"`

-// Use Annotations/labels to achieve the goal of placing two managers on different nodes.
-// The annotations-related configuration to add/set on each Pod related object.
+// The annotation-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +nullable
// +optional
-Annotations rookv1.AnnotationsSpec `json:"annotations,omitempty"`
+Annotations map[string]string `json:"annotations,omitempty"`

-// The labels-related configuration to add/set on each Pod related object.
+// The label-related configuration to add/set on each Pod related object. Including Pod, Deployment.
// +kubebuilder:pruning:PreserveUnknownFields
// +nullable
// +optional
-Labels rookv1.LabelsSpec `json:"labels,omitempty"`
+Labels map[string]string `json:"labels,omitempty"`

// The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).
// +kubebuilder:pruning:PreserveUnknownFields
56 changes: 8 additions & 48 deletions pkg/apis/fleet/v1alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.
