diff --git a/docs/book/src/component-config-tutorial/api-changes.md b/docs/book/src/component-config-tutorial/api-changes.md
index edd39223edd..d12974131d3 100644
--- a/docs/book/src/component-config-tutorial/api-changes.md
+++ b/docs/book/src/component-config-tutorial/api-changes.md
@@ -97,11 +97,11 @@ leaderElection:
# leaderElectionReleaseOnCancel defines if the leader should step down volume
# when the Manager ends. This requires the binary to immediately end when the
# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
-# speeds up voluntary leader transitions as the new leader don't have to wait
-# LeaseDuration time first.
+# speeds up voluntary leader transitions as the new leader doesn't have to wait
+# the LeaseDuration time first.
# In the default scaffold provided, the program ends immediately after
-# the manager stops, so would be fine to enable this option. However,
-# if you are doing or is intended to do any operation such as perform cleanups
+# the manager stops, so it would be fine to enable this option. However,
+# if you are doing, or intend to do, any operation such as performing cleanups
# after the manager stops then its usage might be unsafe.
# leaderElectionReleaseOnCancel: true
```
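
The comment block touched above describes `LeaderElectionReleaseOnCancel` in words; for readers following the tutorial, here is a minimal sketch of how the option is enabled in a scaffolded `cmd/main.go`, assuming the standard controller-runtime manager setup (the `LeaderElectionID` value and the logger wiring are illustrative, not part of this change):

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())
	setupLog := ctrl.Log.WithName("setup")

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "80807133.example.com", // illustrative ID
		// Safe here only because this program exits as soon as the Manager
		// stops; leave it unset if you run cleanups after mgr.Start returns.
		LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
```
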
diff --git a/docs/book/src/cronjob-tutorial/testdata/project/cmd/main.go b/docs/book/src/cronjob-tutorial/testdata/project/cmd/main.go
index 14317caacd8..86c2f38af63 100644
--- a/docs/book/src/cronjob-tutorial/testdata/project/cmd/main.go
+++ b/docs/book/src/cronjob-tutorial/testdata/project/cmd/main.go
@@ -127,12 +127,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/docs/book/src/getting-started.md b/docs/book/src/getting-started.md
index 6af3298e229..198ed9257d9 100644
--- a/docs/book/src/getting-started.md
+++ b/docs/book/src/getting-started.md
@@ -16,7 +16,7 @@ We will create a sample project to let you know how it works. This sample will:
- Not allow more instances than the size defined in the CR which will be applied
- Update the Memcached CR status
-Following the steps.
+Use the following steps.
## Create a project
@@ -33,7 +33,7 @@ kubebuilder init --domain=example.com
Next, we'll create a new API responsible for deploying and managing our Memcached solution. In this instance, we will utilize the [Deploy Image Plugin][deploy-image] to get a comprehensive code implementation for our solution.
```
-kubebuilder create api --group example.com --version v1alpha1 --kind Memcached --image=memcached:1.4.36-alpine --image-container-command="memcached,-m=64,-o,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha" --make=false
+kubebuilder create api --group cache --version v1alpha1 --kind Memcached --image=memcached:1.4.36-alpine --image-container-command="memcached,-m=64,-o,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha" --make=false
```
### Understanding APIs
@@ -45,7 +45,7 @@ This command's primary aim is to produce the Custom Resource (CR) and Custom Res
Consider a typical scenario where the objective is to run an application and its database on a Kubernetes platform. In this context, one object might represent the Frontend App, while another denotes the backend Data Base. If we define one CRD for the App and another for the DB, we uphold essential concepts like encapsulation, the single responsibility principle, and cohesion. Breaching these principles might lead to complications, making extension, reuse, or maintenance challenging.
-In essence, the App CRD and the DB CRD will have their controller. Let's say, for instance, that the application requires a Deployment and Service to run. In this example, the App’s Controller will cater to these needs. Similarly, the DB’s controller will manage the business logic of its items.
+In essence, the App CRD and the DB CRD will each have their own controller. Let's say, for instance, that the application requires a Deployment and Service to run. In this example, the App’s Controller will cater to these needs. Similarly, the DB’s controller will manage the business logic of its items.
Therefore, for each CRD, there should be one distinct controller, adhering to the design outlined by the [controller-runtime][controller-runtime]. For further information see [Groups and Versions and Kinds, oh my!][group-kind-oh-my].
@@ -94,14 +94,14 @@ type MemcachedStatus struct {
}
```
-Thus, when we introduce new specifications to this file and execute the `make generate` command, we utilize [controller-gen][controller-gen] to generate the CRD manifest, which is located under the `config/crds` directory.
+Thus, when we introduce new specifications to this file and execute the `make generate` command, we utilize [controller-gen][controller-gen] to generate the CRD manifest, which is located under the `config/crd/bases` directory.
#### Markers and validations
-Moreover, it's important to note that we're employing `markers`, such as `+kubebuilder:validation:Minimum=1`. These markers help in defining validations and criteria, ensuring that data provided by users—when they create or edit a Custom Resource for the Memcached Kind—is properly validated. For a comprehensive list and details of available markers, refer [here][markers].
+Moreover, it's important to note that we're employing `markers`, such as `+kubebuilder:validation:Minimum=1`. These markers help in defining validations and criteria, ensuring that data provided by users — when they create or edit a Custom Resource for the Memcached Kind — is properly validated. For a comprehensive list and details of available markers, refer to [the Markers documentation][markers].
Observe the validation schema within the CRD; this schema ensures that the Kubernetes API properly validates the Custom Resources (CRs) that are applied:
-From: `config/crd/bases/example.com.testproject.org_memcacheds.yaml`
+From: `config/crd/bases/cache.example.com_memcacheds.yaml`
```yaml
description: MemcachedSpec defines the desired state of Memcached
properties:
@@ -115,8 +115,8 @@ From: `config/crd/bases/example.com.testproject.org_memcacheds.yaml`
markers will use OpenAPI v3 schema to validate the value More info:
https://book.kubebuilder.io/reference/markers/crd-validation.html'
format: int32
- maximum: 3 ## See here from the marker +kubebuilder:validation:Maximum=3
- minimum: 1 ## See here from the marker +kubebuilder:validation:Minimum=1
+ maximum: 3 ## Generated from the marker +kubebuilder:validation:Maximum=3
+ minimum: 1 ## Generated from the marker +kubebuilder:validation:Minimum=1
type: integer
type: object
@@ -127,10 +127,10 @@ From: `config/crd/bases/example.com.testproject.org_memcacheds.yaml`
The manifests located under the "config/samples" directory serve as examples of Custom Resources that can be applied to the cluster.
In this particular example, by applying the given resource to the cluster, we would generate a Deployment with a single instance size (see `size: 1`).
-From: `config/samples/example.com_v1alpha1_memcached.yaml`
+From: `config/samples/cache_v1alpha1_memcached.yaml`
```shell
-apiVersion: example.com.testproject.org/v1alpha1
+apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
name: memcached-sample
@@ -144,7 +144,7 @@ spec:
### Reconciliation Process
-The reconciliation function plays a pivotal role in ensuring synchronization between resources and their specifications based on the business logic embedded within them. Essentially, it operates like a loop, continuously checking conditions and performing actions until all conditions align with its implementation. Here's a pseudo-code to illustrate this:
+The reconciliation function plays a pivotal role in ensuring synchronization between resources and their specifications based on the business logic embedded within them. Essentially, it operates like a loop, continuously checking conditions and performing actions until all conditions align with its implementation. Here's pseudo-code to illustrate this:
```go
reconcile App {
@@ -206,9 +206,9 @@ return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil
#### In the context of our example
-When a Custom Resource is applied to the cluster, there's a designated controller to manage the Memcached Kind. You can check its reconciliation implemented:
+When a Custom Resource is applied to the cluster, there's a designated controller to manage the Memcached Kind. You can check how its reconciliation is implemented:
-From `testdata/project-v4-with-deploy-image/internal/controller/memcached_controller.go`:
+From `internal/controller/memcached_controller.go`:
```go
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
@@ -221,7 +221,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
err := r.Get(ctx, req.NamespacedName, memcached)
if err != nil {
if apierrors.IsNotFound(err) {
- // If the custom resource is not found then, it usually means that it was deleted or not created
+ // If the custom resource is not found then it usually means that it was deleted or not created
// In this way, we will stop the reconciliation
log.Info("memcached resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
@@ -231,7 +231,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Let's just set the status as Unknown when no status are available
+ // Let's just set the status as Unknown when no status is available
if memcached.Status.Conditions == nil || len(memcached.Status.Conditions) == 0 {
meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
if err = r.Status().Update(ctx, memcached); err != nil {
@@ -239,9 +239,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Let's re-fetch the memcached Custom Resource after update the status
+ // Let's re-fetch the memcached Custom Resource after updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
// if we try to update it again in the following operations
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
@@ -251,7 +251,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
// Let's add a finalizer. Then, we can define some operations which should
- // occurs before the custom resource to be deleted.
+ // occur before the custom resource can be deleted.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
if !controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("Adding Finalizer for Memcached")
@@ -273,7 +273,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
if controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("Performing Finalizer Operations for Memcached before delete CR")
- // Let's add here an status "Downgrade" to define that this resource begin its process to be terminated.
+			// Let's add here a status "Downgrade" to reflect that this resource has begun its termination process.
meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
Status: metav1.ConditionUnknown, Reason: "Finalizing",
Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", memcached.Name)})
@@ -283,7 +283,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Perform all operations required before remove the finalizer and allow
+ // Perform all operations required before removing the finalizer and allow
// the Kubernetes API to remove the custom resource.
r.doFinalizerOperationsForMemcached(memcached)
@@ -291,9 +291,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
// then you need to ensure that all worked fine before deleting and updating the Downgrade status
// otherwise, you should requeue here.
- // Re-fetch the memcached Custom Resource before update the status
+ // Re-fetch the memcached Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "Failed to re-fetch memcached")
@@ -374,9 +374,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
log.Error(err, "Failed to update Deployment",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
- // Re-fetch the memcached Custom Resource before update the status
+ // Re-fetch the memcached Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "Failed to re-fetch memcached")
@@ -458,9 +458,9 @@ manifest files present in `config/rbac/`. These markers can be found (and should
how it is implemented in our example:
```go
-//+kubebuilder:rbac:groups=example.com.testproject.org,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
-//+kubebuilder:rbac:groups=example.com.testproject.org,resources=memcacheds/status,verbs=get;update;patch
-//+kubebuilder:rbac:groups=example.com.testproject.org,resources=memcacheds/finalizers,verbs=update
+//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
+//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
+//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
@@ -473,9 +473,9 @@ After making the necessary changes, run the `make generate` command. This will p
@@ -495,12 +495,12 @@ If you inspect the `cmd/main.go` file, you'll come across the following:
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
@@ -516,7 +516,7 @@ that are produced for your operator's APIs.
### Checking the Project running in the cluster
-At this point, you can primarily execute the commands highlighted in the [quick-start][quick-start].
+At this point, you can execute the commands highlighted in the [quick-start][quick-start].
By executing `make build IMG=myregistry/example:1.0.0`, you'll build the image for your project. For testing purposes, it's recommended to publish this image to a
public registry. This ensures easy accessibility, eliminating the need for additional configurations. Once that's done, you can deploy the image
to the cluster using the `make deploy IMG=myregistry/example:1.0.0` command.
@@ -537,4 +537,4 @@ to the cluster using the `make deploy IMG=myregistry/example:1.0.0` command.
[manager]: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager
[options-manager]: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager#Options
[quick-start]: ./quick-start.md
-[best-practices]: ./reference/good-practices.md
\ No newline at end of file
+[best-practices]: ./reference/good-practices.md
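
The marker discussion above refers to the Go types that drive CRD generation. As a rough sketch (field names follow the deploy-image scaffold, but treat them and the package layout as illustrative), the `api/v1alpha1/memcached_types.go` definitions that controller-gen turns into the `minimum`/`maximum` schema shown would look like:

```go
// Package v1alpha1 sketch: roughly the Spec/Status pair behind the CRD excerpt
// shown in the getting-started changes above.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Size defines the number of Memcached instances.
	// controller-gen turns these markers into the "minimum: 1" and
	// "maximum: 3" constraints seen under config/crd/bases.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=3
	Size int32 `json:"size,omitempty"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
	// Conditions store the status conditions of the Memcached instances
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```
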
diff --git a/docs/book/src/multiversion-tutorial/testdata/project/cmd/main.go b/docs/book/src/multiversion-tutorial/testdata/project/cmd/main.go
index 404a0efec35..0d9e93bff06 100644
--- a/docs/book/src/multiversion-tutorial/testdata/project/cmd/main.go
+++ b/docs/book/src/multiversion-tutorial/testdata/project/cmd/main.go
@@ -90,12 +90,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/pkg/plugins/common/kustomize/v1/scaffolds/internal/templates/config/manager/controller_manager_config.go b/pkg/plugins/common/kustomize/v1/scaffolds/internal/templates/config/manager/controller_manager_config.go
index 31735256005..36ed86e4d8e 100644
--- a/pkg/plugins/common/kustomize/v1/scaffolds/internal/templates/config/manager/controller_manager_config.go
+++ b/pkg/plugins/common/kustomize/v1/scaffolds/internal/templates/config/manager/controller_manager_config.go
@@ -64,14 +64,14 @@ webhook:
leaderElection:
leaderElect: true
resourceName: {{ hashFNV .Repo }}.{{ .Domain }}
-# leaderElectionReleaseOnCancel defines if the leader should step down volume
+# leaderElectionReleaseOnCancel defines if the leader should step down voluntarily
# when the Manager ends. This requires the binary to immediately end when the
# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
-# speeds up voluntary leader transitions as the new leader don't have to wait
-# LeaseDuration time first.
+# speeds up voluntary leader transitions as the new leader doesn't have to wait
+# the LeaseDuration time first.
# In the default scaffold provided, the program ends immediately after
-# the manager stops, so would be fine to enable this option. However,
-# if you are doing or is intended to do any operation such as perform cleanups
+# the manager stops, so it would be fine to enable this option. However,
+# if you are doing, or intend to do, any operation such as performing cleanups
# after the manager stops then its usage might be unsafe.
# leaderElectionReleaseOnCancel: true
`
diff --git a/pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/manager/controller_manager_config.go b/pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/manager/controller_manager_config.go
index 31735256005..533ac61cefc 100644
--- a/pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/manager/controller_manager_config.go
+++ b/pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/manager/controller_manager_config.go
@@ -64,14 +64,14 @@ webhook:
leaderElection:
leaderElect: true
resourceName: {{ hashFNV .Repo }}.{{ .Domain }}
-# leaderElectionReleaseOnCancel defines if the leader should step down volume
-# when the Manager ends. This requires the binary to immediately end when the
-# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
-# speeds up voluntary leader transitions as the new leader don't have to wait
-# LeaseDuration time first.
-# In the default scaffold provided, the program ends immediately after
-# the manager stops, so would be fine to enable this option. However,
-# if you are doing or is intended to do any operation such as perform cleanups
-# after the manager stops then its usage might be unsafe.
-# leaderElectionReleaseOnCancel: true
+# leaderElectionReleaseOnCancel defines if the leader should step down voluntarily
+# when the Manager ends. This requires the binary to immediately end when the
+# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
+# speeds up voluntary leader transitions as the new leader doesn't have to wait
+# the LeaseDuration time first.
+# In the default scaffold provided, the program ends immediately after
+# the manager stops, so it would be fine to enable this option. However,
+# if you are doing, or intend to do, any operation such as performing cleanups
+# after the manager stops then its usage might be unsafe.
+# leaderElectionReleaseOnCancel: true
`
diff --git a/pkg/plugins/golang/deploy-image/v1alpha1/scaffolds/internal/templates/controllers/controller.go b/pkg/plugins/golang/deploy-image/v1alpha1/scaffolds/internal/templates/controllers/controller.go
index d7e6f9262af..9ae18ab927b 100644
--- a/pkg/plugins/golang/deploy-image/v1alpha1/scaffolds/internal/templates/controllers/controller.go
+++ b/pkg/plugins/golang/deploy-image/v1alpha1/scaffolds/internal/templates/controllers/controller.go
@@ -113,7 +113,7 @@ const {{ lower .Resource.Kind }}Finalizer = "{{ .Resource.Group }}.{{ .Resource.
const (
// typeAvailable{{ .Resource.Kind }} represents the status of the Deployment reconciliation
typeAvailable{{ .Resource.Kind }} = "Available"
- // typeDegraded{{ .Resource.Kind }} represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
+ // typeDegraded{{ .Resource.Kind }} represents the status used when the custom resource is deleted and the finalizer operations are yet to occur.
typeDegraded{{ .Resource.Kind }} = "Degraded"
)
@@ -125,7 +125,7 @@ type {{ .Resource.Kind }}Reconciler struct {
}
// The following markers are used to generate the rules permissions (RBAC) on config/rbac using controller-gen
-// when the command is executed.
+// when the command is executed.
// To know more about markers see: https://book.kubebuilder.io/reference/markers.html
//+kubebuilder:rbac:groups={{ .Resource.QualifiedGroup }},resources={{ .Resource.Plural }},verbs=get;list;watch;create;update;patch;delete
@@ -137,10 +137,10 @@ type {{ .Resource.Kind }}Reconciler struct {
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
-// It is essential for the controller's reconciliation loop to be idempotent. By following the Operator
+// It is essential for the controller's reconciliation loop to be idempotent. By following the Operator
// pattern you will create Controllers which provide a reconcile function
-// responsible for synchronizing resources until the desired state is reached on the cluster.
-// Breaking this recommendation goes against the design principles of controller-runtime.
+// responsible for synchronizing resources until the desired state is reached on the cluster.
+// Breaking this recommendation goes against the design principles of controller-runtime
// and may lead to unforeseen consequences such as resources becoming stuck and requiring manual intervention.
// For further info:
// - About Operator Pattern: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
@@ -156,8 +156,8 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
err := r.Get(ctx, req.NamespacedName, {{ lower .Resource.Kind }})
if err != nil {
if apierrors.IsNotFound(err) {
- // If the custom resource is not found then, it usually means that it was deleted or not created
- // In this way, we will stop the reconciliation
+ // If the custom resource is not found then it usually means that it was deleted or not created
+ // In this way, we will stop the reconciliation
log.Info("{{ lower .Resource.Kind }} resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
}
@@ -166,7 +166,7 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
- // Let's just set the status as Unknown when no status are available
+ // Let's just set the status as Unknown when no status is available
if {{ lower .Resource.Kind }}.Status.Conditions == nil || len({{ lower .Resource.Kind }}.Status.Conditions) == 0 {
meta.SetStatusCondition(&{{ lower .Resource.Kind }}.Status.Conditions, metav1.Condition{Type: typeAvailable{{ .Resource.Kind }}, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
if err = r.Status().Update(ctx, {{ lower .Resource.Kind }}); err != nil {
@@ -174,9 +174,9 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
- // Let's re-fetch the {{ lower .Resource.Kind }} Custom Resource after update the status
+ // Let's re-fetch the {{ lower .Resource.Kind }} Custom Resource after updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
// if we try to update it again in the following operations
if err := r.Get(ctx, req.NamespacedName, {{ lower .Resource.Kind }}); err != nil {
@@ -186,7 +186,7 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
}
// Let's add a finalizer. Then, we can define some operations which should
- // occurs before the custom resource to be deleted.
+ // occur before the custom resource is deleted.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
if !controllerutil.ContainsFinalizer({{ lower .Resource.Kind }}, {{ lower .Resource.Kind }}Finalizer) {
log.Info("Adding Finalizer for {{ .Resource.Kind }}")
@@ -208,7 +208,7 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
if controllerutil.ContainsFinalizer({{ lower .Resource.Kind }}, {{ lower .Resource.Kind }}Finalizer) {
log.Info("Performing Finalizer Operations for {{ .Resource.Kind }} before delete CR")
- // Let's add here an status "Downgrade" to define that this resource begin its process to be terminated.
+			// Let's add here a status "Downgrade" to reflect that this resource has begun its termination process.
meta.SetStatusCondition(&{{ lower .Resource.Kind }}.Status.Conditions, metav1.Condition{Type: typeDegraded{{ .Resource.Kind }},
Status: metav1.ConditionUnknown, Reason: "Finalizing",
Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", {{ lower .Resource.Kind }}.Name)})
@@ -218,17 +218,17 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
- // Perform all operations required before remove the finalizer and allow
+ // Perform all operations required before removing the finalizer and allow
// the Kubernetes API to remove the custom resource.
r.doFinalizerOperationsFor{{ .Resource.Kind }}({{ lower .Resource.Kind }})
- // TODO(user): If you add operations to the doFinalizerOperationsFor{{ .Resource.Kind }} method
+ // TODO(user): If you add operations to the doFinalizerOperationsFor{{ .Resource.Kind }} method
// then you need to ensure that all worked fine before deleting and updating the Downgrade status
// otherwise, you should requeue here.
- // Re-fetch the {{ lower .Resource.Kind }} Custom Resource before update the status
+ // Re-fetch the {{ lower .Resource.Kind }} Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, {{ lower .Resource.Kind }}); err != nil {
log.Error(err, "Failed to re-fetch {{ lower .Resource.Kind }}")
@@ -280,7 +280,7 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
- log.Info("Creating a new Deployment",
+ log.Info("Creating a new Deployment",
"Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
if err = r.Create(ctx, dep); err != nil {
log.Error(err, "Failed to create new Deployment",
@@ -288,30 +288,30 @@ func (r *{{ .Resource.Kind }}Reconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
- // Deployment created successfully
+ // Deployment created successfully
// We will requeue the reconciliation so that we can ensure the state
// and move forward for the next operations
return ctrl.Result{RequeueAfter: time.Minute}, nil
} else if err != nil {
log.Error(err, "Failed to get Deployment")
- // Let's return the error for the reconciliation be re-trigged again
+		// Let's return the error so that the reconciliation can be re-triggered again
return ctrl.Result{}, err
}
- // The CRD API is defining that the {{ .Resource.Kind }} type, have a {{ .Resource.Kind }}Spec.Size field
- // to set the quantity of Deployment instances is the desired state on the cluster.
- // Therefore, the following code will ensure the Deployment size is the same as defined
+	// The CRD API defines that the {{ .Resource.Kind }} type has a {{ .Resource.Kind }}Spec.Size field
+	// to set the desired quantity of Deployment instances on the cluster.
+ // Therefore, the following code will ensure the Deployment size is the same as defined
// via the Size spec of the Custom Resource which we are reconciling.
size := {{ lower .Resource.Kind }}.Spec.Size
if *found.Spec.Replicas != size {
found.Spec.Replicas = &size
if err = r.Update(ctx, found); err != nil {
- log.Error(err, "Failed to update Deployment",
+ log.Error(err, "Failed to update Deployment",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
- // Re-fetch the {{ lower .Resource.Kind }} Custom Resource before update the status
+ // Re-fetch the {{ lower .Resource.Kind }} Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, {{ lower .Resource.Kind }}); err != nil {
log.Error(err, "Failed to re-fetch {{ lower .Resource.Kind }}")
@@ -357,9 +357,9 @@ func (r *{{ .Resource.Kind }}Reconciler) doFinalizerOperationsFor{{ .Resource.Ki
// of finalizers include performing backups and deleting
// resources that are not owned by this CR, like a PVC.
- // Note: It is not recommended to use finalizers with the purpose of delete resources which are
- // created and managed in the reconciliation. These ones, such as the Deployment created on this reconcile,
- // are defined as depended of the custom resource. See that we use the method ctrl.SetControllerReference.
+ // Note: It is not recommended to use finalizers with the purpose of deleting resources which are
+ // created and managed in the reconciliation. These ones, such as the Deployment created on this reconcile,
+	// are defined as dependents of the custom resource. Note that we use the method ctrl.SetControllerReference
// to set the ownerRef which means that the Deployment will be deleted by the Kubernetes API.
// More info: https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/
@@ -476,7 +476,7 @@ func imageFor{{ .Resource.Kind }}() (string, error) {
}
// SetupWithManager sets up the controller with the Manager.
-// Note that the Deployment will be also watched in order to ensure its
+// Note that the Deployment will also be watched in order to ensure its
// desirable state on the cluster
func (r *{{ .Resource.Kind }}Reconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
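
The template comments above mention `ctrl.SetControllerReference` and the watched Deployment. Here is a minimal sketch of both pieces, using Memcached as a stand-in for `{{ .Resource.Kind }}`, an illustrative module path, and the usual controller-runtime APIs; the reconcile logic itself is elided:

```go
package controller

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	cachev1alpha1 "example.com/project/api/v1alpha1" // illustrative module path
)

// MemcachedReconciler mirrors the scaffolded reconciler shape.
type MemcachedReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// Reconcile is elided here; the full logic is what the template above scaffolds.
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

// buildDeployment shows the ownerRef step the finalizer comments rely on:
// because the CR owns the Deployment, it is removed through cascading deletion
// by the Kubernetes API rather than by a finalizer.
func (r *MemcachedReconciler) buildDeployment(memcached *cachev1alpha1.Memcached) (*appsv1.Deployment, error) {
	dep := &appsv1.Deployment{}
	// ... populate dep.ObjectMeta and dep.Spec from the CR here ...
	if err := ctrl.SetControllerReference(memcached, dep, r.Scheme); err != nil {
		return nil, err
	}
	return dep, nil
}

// SetupWithManager registers the primary resource and the owned Deployment,
// which is what "the Deployment will also be watched" refers to.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1alpha1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}
```
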
diff --git a/pkg/plugins/golang/v3/scaffolds/internal/templates/main.go b/pkg/plugins/golang/v3/scaffolds/internal/templates/main.go
index e5fdbbf76e1..4720a25a6d2 100644
--- a/pkg/plugins/golang/v3/scaffolds/internal/templates/main.go
+++ b/pkg/plugins/golang/v3/scaffolds/internal/templates/main.go
@@ -272,12 +272,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/pkg/plugins/golang/v4/scaffolds/internal/templates/main.go b/pkg/plugins/golang/v4/scaffolds/internal/templates/main.go
index 82e06942395..a277543de14 100644
--- a/pkg/plugins/golang/v4/scaffolds/internal/templates/main.go
+++ b/pkg/plugins/golang/v4/scaffolds/internal/templates/main.go
@@ -283,12 +283,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v3/main.go b/testdata/project-v3/main.go
index 5586c891121..9d0f32fd751 100644
--- a/testdata/project-v3/main.go
+++ b/testdata/project-v3/main.go
@@ -99,12 +99,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v4-multigroup-with-deploy-image/cmd/main.go b/testdata/project-v4-multigroup-with-deploy-image/cmd/main.go
index e387e978fc8..e3dd1a78467 100644
--- a/testdata/project-v4-multigroup-with-deploy-image/cmd/main.go
+++ b/testdata/project-v4-multigroup-with-deploy-image/cmd/main.go
@@ -133,12 +133,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v4-multigroup/cmd/main.go b/testdata/project-v4-multigroup/cmd/main.go
index eaf0f69d2e1..8a5b8caa024 100644
--- a/testdata/project-v4-multigroup/cmd/main.go
+++ b/testdata/project-v4-multigroup/cmd/main.go
@@ -133,12 +133,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v4-with-deploy-image/cmd/main.go b/testdata/project-v4-with-deploy-image/cmd/main.go
index 319c3d8605f..9be51d6c556 100644
--- a/testdata/project-v4-with-deploy-image/cmd/main.go
+++ b/testdata/project-v4-with-deploy-image/cmd/main.go
@@ -108,12 +108,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v4-with-deploy-image/internal/controller/busybox_controller.go b/testdata/project-v4-with-deploy-image/internal/controller/busybox_controller.go
index d8d299a90fa..95a3a1807b0 100644
--- a/testdata/project-v4-with-deploy-image/internal/controller/busybox_controller.go
+++ b/testdata/project-v4-with-deploy-image/internal/controller/busybox_controller.go
@@ -45,7 +45,7 @@ const busyboxFinalizer = "example.com.testproject.org/finalizer"
const (
// typeAvailableBusybox represents the status of the Deployment reconciliation
typeAvailableBusybox = "Available"
- // typeDegradedBusybox represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
+ // typeDegradedBusybox represents the status used when the custom resource is deleted and the finalizer operations are yet to occur.
typeDegradedBusybox = "Degraded"
)
@@ -88,7 +88,7 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
err := r.Get(ctx, req.NamespacedName, busybox)
if err != nil {
if apierrors.IsNotFound(err) {
- // If the custom resource is not found then, it usually means that it was deleted or not created
+ // If the custom resource is not found then it usually means that it was deleted or not created
// In this way, we will stop the reconciliation
log.Info("busybox resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
@@ -98,7 +98,7 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
return ctrl.Result{}, err
}
- // Let's just set the status as Unknown when no status are available
+ // Let's just set the status as Unknown when no status is available
if busybox.Status.Conditions == nil || len(busybox.Status.Conditions) == 0 {
meta.SetStatusCondition(&busybox.Status.Conditions, metav1.Condition{Type: typeAvailableBusybox, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
if err = r.Status().Update(ctx, busybox); err != nil {
@@ -106,9 +106,9 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
return ctrl.Result{}, err
}
- // Let's re-fetch the busybox Custom Resource after update the status
+ // Let's re-fetch the busybox Custom Resource after updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
// if we try to update it again in the following operations
if err := r.Get(ctx, req.NamespacedName, busybox); err != nil {
@@ -118,7 +118,7 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
}
// Let's add a finalizer. Then, we can define some operations which should
- // occurs before the custom resource to be deleted.
+ // occur before the custom resource is deleted.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
if !controllerutil.ContainsFinalizer(busybox, busyboxFinalizer) {
log.Info("Adding Finalizer for Busybox")
@@ -140,7 +140,7 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
if controllerutil.ContainsFinalizer(busybox, busyboxFinalizer) {
log.Info("Performing Finalizer Operations for Busybox before delete CR")
- // Let's add here an status "Downgrade" to define that this resource begin its process to be terminated.
+			// Let's add here a status "Downgrade" to reflect that this resource has begun its termination process.
meta.SetStatusCondition(&busybox.Status.Conditions, metav1.Condition{Type: typeDegradedBusybox,
Status: metav1.ConditionUnknown, Reason: "Finalizing",
Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", busybox.Name)})
@@ -150,7 +150,7 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
return ctrl.Result{}, err
}
- // Perform all operations required before remove the finalizer and allow
+ // Perform all operations required before removing the finalizer and allow
// the Kubernetes API to remove the custom resource.
r.doFinalizerOperationsForBusybox(busybox)
@@ -158,9 +158,9 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
// then you need to ensure that all worked fine before deleting and updating the Downgrade status
// otherwise, you should requeue here.
- // Re-fetch the busybox Custom Resource before update the status
+ // Re-fetch the busybox Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, busybox); err != nil {
log.Error(err, "Failed to re-fetch busybox")
@@ -230,8 +230,8 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
return ctrl.Result{}, err
}
- // The CRD API is defining that the Busybox type, have a BusyboxSpec.Size field
- // to set the quantity of Deployment instances is the desired state on the cluster.
+	// The CRD API defines that the Busybox type has a BusyboxSpec.Size field
+	// to set the desired quantity of Deployment instances on the cluster.
// Therefore, the following code will ensure the Deployment size is the same as defined
// via the Size spec of the Custom Resource which we are reconciling.
size := busybox.Spec.Size
@@ -241,9 +241,9 @@ func (r *BusyboxReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
log.Error(err, "Failed to update Deployment",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
- // Re-fetch the busybox Custom Resource before update the status
+ // Re-fetch the busybox Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, busybox); err != nil {
log.Error(err, "Failed to re-fetch busybox")
@@ -289,9 +289,9 @@ func (r *BusyboxReconciler) doFinalizerOperationsForBusybox(cr *examplecomv1alph
// of finalizers include performing backups and deleting
// resources that are not owned by this CR, like a PVC.
- // Note: It is not recommended to use finalizers with the purpose of delete resources which are
+ // Note: It is not recommended to use finalizers with the purpose of deleting resources which are
// created and managed in the reconciliation. These ones, such as the Deployment created on this reconcile,
- // are defined as depended of the custom resource. See that we use the method ctrl.SetControllerReference.
+	// are defined as dependents of the custom resource. Note that we use the method ctrl.SetControllerReference
// to set the ownerRef which means that the Deployment will be deleted by the Kubernetes API.
// More info: https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/
diff --git a/testdata/project-v4-with-deploy-image/internal/controller/memcached_controller.go b/testdata/project-v4-with-deploy-image/internal/controller/memcached_controller.go
index c662113e13a..593332c308f 100644
--- a/testdata/project-v4-with-deploy-image/internal/controller/memcached_controller.go
+++ b/testdata/project-v4-with-deploy-image/internal/controller/memcached_controller.go
@@ -45,7 +45,7 @@ const memcachedFinalizer = "example.com.testproject.org/finalizer"
const (
// typeAvailableMemcached represents the status of the Deployment reconciliation
typeAvailableMemcached = "Available"
- // typeDegradedMemcached represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
+ // typeDegradedMemcached represents the status used when the custom resource is deleted and the finalizer operations are yet to occur.
typeDegradedMemcached = "Degraded"
)
@@ -88,7 +88,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
err := r.Get(ctx, req.NamespacedName, memcached)
if err != nil {
if apierrors.IsNotFound(err) {
- // If the custom resource is not found then, it usually means that it was deleted or not created
+ // If the custom resource is not found then it usually means that it was deleted or not created
// In this way, we will stop the reconciliation
log.Info("memcached resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
@@ -98,7 +98,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Let's just set the status as Unknown when no status are available
+ // Let's just set the status as Unknown when no status is available
if memcached.Status.Conditions == nil || len(memcached.Status.Conditions) == 0 {
meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
if err = r.Status().Update(ctx, memcached); err != nil {
@@ -106,9 +106,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Let's re-fetch the memcached Custom Resource after update the status
+ // Let's re-fetch the memcached Custom Resource after updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
// if we try to update it again in the following operations
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
@@ -118,7 +118,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
// Let's add a finalizer. Then, we can define some operations which should
- // occurs before the custom resource to be deleted.
+ // occur before the custom resource is deleted.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
if !controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("Adding Finalizer for Memcached")
@@ -140,7 +140,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
if controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("Performing Finalizer Operations for Memcached before delete CR")
- // Let's add here an status "Downgrade" to define that this resource begin its process to be terminated.
+			// Let's add here a status "Downgrade" to reflect that this resource has begun its termination process.
meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
Status: metav1.ConditionUnknown, Reason: "Finalizing",
Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", memcached.Name)})
@@ -150,7 +150,7 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // Perform all operations required before remove the finalizer and allow
+ // Perform all operations required before removing the finalizer and allow
// the Kubernetes API to remove the custom resource.
r.doFinalizerOperationsForMemcached(memcached)
@@ -158,9 +158,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
// then you need to ensure that all worked fine before deleting and updating the Downgrade status
// otherwise, you should requeue here.
- // Re-fetch the memcached Custom Resource before update the status
+ // Re-fetch the memcached Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "Failed to re-fetch memcached")
@@ -230,8 +230,8 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, err
}
- // The CRD API is defining that the Memcached type, have a MemcachedSpec.Size field
- // to set the quantity of Deployment instances is the desired state on the cluster.
+	// The CRD API defines that the Memcached type has a MemcachedSpec.Size field
+	// to set the desired quantity of Deployment instances on the cluster.
// Therefore, the following code will ensure the Deployment size is the same as defined
// via the Size spec of the Custom Resource which we are reconciling.
size := memcached.Spec.Size
@@ -241,9 +241,9 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
log.Error(err, "Failed to update Deployment",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
- // Re-fetch the memcached Custom Resource before update the status
+ // Re-fetch the memcached Custom Resource before updating the status
// so that we have the latest state of the resource on the cluster and we will avoid
- // raise the issue "the object has been modified, please apply
+ // raising the error "the object has been modified, please apply
// your changes to the latest version and try again" which would re-trigger the reconciliation
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "Failed to re-fetch memcached")
@@ -289,9 +289,9 @@ func (r *MemcachedReconciler) doFinalizerOperationsForMemcached(cr *examplecomv1
// of finalizers include performing backups and deleting
// resources that are not owned by this CR, like a PVC.
- // Note: It is not recommended to use finalizers with the purpose of delete resources which are
+ // Note: It is not recommended to use finalizers with the purpose of deleting resources which are
// created and managed in the reconciliation. These ones, such as the Deployment created on this reconcile,
- // are defined as depended of the custom resource. See that we use the method ctrl.SetControllerReference.
+	// are defined as dependents of the custom resource. Note that we use the method ctrl.SetControllerReference
// to set the ownerRef which means that the Deployment will be deleted by the Kubernetes API.
// More info: https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/
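
The re-fetch-before-status-update comments repeated above can be condensed into one helper. This is a sketch assuming the standard controller-runtime client; the type and module path are shown only for illustration:

```go
package controller

import (
	"context"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	examplecomv1alpha1 "example.com/project/api/v1alpha1" // illustrative module path
)

type MemcachedReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// setCondition condenses the pattern the comments describe: re-fetch the CR so
// we hold its latest resourceVersion, then set the condition and update status.
// Skipping the re-fetch is what triggers "the object has been modified, please
// apply your changes to the latest version and try again".
func (r *MemcachedReconciler) setCondition(ctx context.Context, req ctrl.Request,
	memcached *examplecomv1alpha1.Memcached, cond metav1.Condition) error {

	if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
		return err
	}
	meta.SetStatusCondition(&memcached.Status.Conditions, cond)
	return r.Status().Update(ctx, memcached)
}
```
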
diff --git a/testdata/project-v4-with-grafana/cmd/main.go b/testdata/project-v4-with-grafana/cmd/main.go
index 38dd02fbb01..7a8bb26cb35 100644
--- a/testdata/project-v4-with-grafana/cmd/main.go
+++ b/testdata/project-v4-with-grafana/cmd/main.go
@@ -104,12 +104,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
diff --git a/testdata/project-v4/cmd/main.go b/testdata/project-v4/cmd/main.go
index a8606888c6a..a9c3064851e 100644
--- a/testdata/project-v4/cmd/main.go
+++ b/testdata/project-v4/cmd/main.go
@@ -108,12 +108,12 @@ func main() {
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader don't have to wait
- // LeaseDuration time first.
+ // speeds up voluntary leader transitions as the new leader doesn't have to wait
+ // the LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
- // the manager stops, so would be fine to enable this option. However,
- // if you are doing or is intended to do any operation such as perform cleanups
+ // the manager stops, so it would be fine to enable this option. However,
+		// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})