
minor changes
Signed-off-by: Etai Lev Ran <etai@il.ibm.com>
Etai Lev Ran committed Dec 20, 2021
1 parent dec367e commit 05250d2
Showing 1 changed file with 14 additions and 14 deletions.
28 changes: 14 additions & 14 deletions submariner/centralized-control-plane.md
@@ -10,14 +10,14 @@ Current Submariner service management follows the model and guidelines
- Services are exported and imported using the same namespace and name; and
- Service exports are managed from workload clusters.

-The above work well in an environment where clusters are used by a single
+The above works well in an environment where clusters are used by a single
administrative domain and services are commonly shared across all clusters.
-For example, when a company runs clusters as IaaS and developers may deploy
+For example, when a company runs clusters as the runtime infrastructure and developers can deploy
to any cluster for availability, redundancy, or geographic proximity. Another
common set-up is clusters under different administrative domains (e.g., separated
by team). In those environments, service naming and sharing may be controlled
differently.
-We would like to propose a different approach for Service management, that allows
+We would like to propose a different approach for Service management that allows:

1. Independent service naming (i.e., allow use of different names in different clusters).
1. Selective imports (i.e., import services into a subset of clusters in the ClusterSet).
@@ -55,8 +55,8 @@ The design proposal attempts to achieve the above with minimal changes to workload
The Service object model is built around three new CRDs, defined only in the Broker
cluster. The new CRDs are used to generate the corresponding MCS CRDs, which are
then replicated to the workload clusters, as today. An `axon` tag is used as the
-API `Group` for the [k8s API](https://book.kubebuilder.io/cronjob-tutorial/gvks.html)
-to differentiate from the Kubernetes and MCS objects with the same name):
+[K8s API `Group`](https://book.kubebuilder.io/cronjob-tutorial/gvks.html)
+to differentiate from the Kubernetes and MCS objects with the same name:

1. `axon:Service` defines a service that can be consumed by and/or provided from
multiple clusters. The Service object represents some API deployed
@@ -69,7 +69,7 @@ The Service object model is built around three new CRDs, defined only in the Broker
extending the MCS definition, where and when needed.

The following CRDs are proposed to support the new design. The CRD definitions
-are partial and capture only essential parts needed to illustrate the design.
+below are partial and capture only essential parts needed to illustrate the design.
For brevity, the standard Kubernetes top level CRD definition (e.g., `TypeMeta`,
`ObjectMeta`, `Spec` and `Status`) is omitted and left as an exercise for the
implementor... Similarly, all `Status` types are assumed to contain a `Conditions`
@@ -100,10 +100,8 @@ type ServiceStatus struct {
	BackendClusters []string `json:"backends,omitempty"`
}

-// ObjectRef defines a reference to another k8s object
-// Caveats:
-// - objects can be Namespaced or cluster scoped. Current Namespace is assumed if undefined.
-// - Kind might not be unique if considering sub-resources (argued over in SIG apimachinery).
+// ObjectRef defines a reference to another k8s object - this is shown for completeness
+// and we may be able to use the corev1.ObjectReference or similar built-in object instead.
type ObjectRef struct {
	Group string `json:"group,omitempty"`
	Kind string `json:"kind,omitempty"`
@@ -137,7 +135,7 @@ type ServiceImportSpec struct {
}
```
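
The new comment above suggests that the custom `ObjectRef` could be replaced by the built-in `corev1.ObjectReference`. A minimal sketch of what that substitution might look like; the `ExampleBindingSpec` type and its `Target` field are hypothetical illustrations, not definitions from this proposal:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
)

// ExampleBindingSpec is a hypothetical illustration (not part of this
// proposal) of a spec reusing the built-in corev1.ObjectReference, which
// already carries APIVersion, Kind, Namespace, and Name - covering the
// fields of the custom ObjectRef above.
type ExampleBindingSpec struct {
	// Target references another object, e.g., an mcs:ServiceImport.
	Target corev1.ObjectReference `json:"target"`
}
```

One consideration: `corev1.ObjectReference` also carries fields this design does not need (e.g., `UID`, `FieldPath`), which may argue for keeping the narrower custom type.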

-The management of the above CRDs is accomplished by a new Controller, running on the Broker
+The above CRDs are managed by a new Controller, running on the Broker
cluster. Logically, the controller operates within the context of a ClusterSet (i.e., a single
Broker namespace) and watches the newly defined CRDs as well as existing Cluster objects. It
reconciles desired and actual state based on the following logic:
@@ -162,22 +160,24 @@ The management of the above CRDs is accomplished by a new Controller, running on the Broker
cluster. Similar `Status.Conditions` interaction may be used between the workload cluster
agent and the Broker controller.
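
To make the reconciliation flow concrete, here is a minimal controller-runtime sketch of what such a Broker controller might look like. This is an illustration under stated assumptions, not the proposed implementation: the `axonv1` package and its types, the `Spec.Cluster` field, the label key, and the per-cluster object naming are all hypothetical placeholders.

```go
package controllers

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	mcsv1a1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"

	axonv1 "example.com/axon/api/v1alpha1" // hypothetical generated API package
)

// ServiceReconciler sketches the Broker-side controller. It runs only in
// the Broker cluster and reconciles axon:Service objects into MCS objects.
type ServiceReconciler struct {
	client.Client
}

func (r *ServiceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the axon:Service under reconciliation.
	var svc axonv1.Service
	if err := r.Get(ctx, req.NamespacedName, &svc); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Collect the ServiceBindings that provide backends for this Service.
	var bindings axonv1.ServiceBindingList
	if err := r.List(ctx, &bindings, client.InNamespace(req.Namespace)); err != nil {
		return ctrl.Result{}, err
	}

	// For each backing cluster, ensure a corresponding mcs:ServiceExport in
	// the Broker namespace, labeled with the cluster identity so that each
	// workload cluster agent can efficiently filter for its own objects.
	for i := range bindings.Items {
		b := &bindings.Items[i]
		export := &mcsv1a1.ServiceExport{
			ObjectMeta: metav1.ObjectMeta{
				// Assumed per-cluster naming to keep objects distinct.
				Name:      svc.Name + "-" + b.Spec.Cluster,
				Namespace: req.Namespace,
				Labels: map[string]string{
					// Assumed label key; see the notes below on labeling.
					"axon.submariner.io/cluster": b.Spec.Cluster,
				},
			},
		}
		if err := r.Create(ctx, export); err != nil && !apierrors.IsAlreadyExists(err) {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
```

A real controller would additionally need watches on `ServiceBinding` and Cluster objects, deletion handling, and the `Status.Conditions` updates described above.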

-Some notes
+Optional implementation aspects and alternatives:

- The Broker controller may add a label based on the cluster identity (e.g., `ClusterID`
-or cluster name) to allow each cluster agent to efficiently filter for its own objects.
+  or cluster name) to allow each cluster agent to efficiently filter for its own objects.
- `mcs:ServiceExport` and `mcs:ServiceImport` are references for objects in the same namespace
and thus cannot be used directly for independent naming. A workaround (barring changes to
the MCS specification) is to replicate the equivalent `axon` objects to the workload clusters
and create the MCS objects locally in each. A better (short term?) alternative would be to use
the current Submariner workaround, which uses predefined labels and annotations to communicate
this information.
- Full reconciliation is required but not detailed above. For example, `ServiceBinding` status
-may change over time, Cluster objects might be deleted, etc.
+may change over time, as Cluster or Service objects might be deleted, etc.
- Since we don't propose to leverage any of the Lighthouse `ServiceExport` functionality,
we could create a `GlobalIngressIP` object instead of creating `ServiceExport` objects. This
requires decoupling GlobalNet behavior from `ServiceExport`s (which may already be
sufficiently decoupled).
- Is a new workload cluster agent required, or can we simply tweak the behavior
of an existing controller, such as Lighthouse?
- Currently (and in this proposal as well), workload cluster agents, such as Lighthouse, have
access permissions on all objects in their Broker namespace. This allows them to read and,
possibly, write objects belonging to other clusters. Running the agents in an administrator
