If you are a project maintainer and are considering mentoring during the GSoC 2023 cycle, please submit your ideas below using the template.
Google Summer of Code timeline.
### CNCF Project Name
#### Title
- Description: (2-5+ sentences)
- Expected outcome:
- Recommended Skills:
- Mentor(s): (Name, github, email)
- Expected project size: (175 or 350 Hours)
- Difficulty: (Easy, Medium, or Hard)
- Upstream Issue (URL):
- Ideas
  - Armada
  - Cilium
  - Cloud Native Buildpacks
  - CoreDNS
  - CRI-O
  - Curve
  - Falco
  - in-toto
  - Jaeger
  - Keptn
  - Knative
    - Multiple Knative Eventing control planes
    - Eventing Sender Identity
    - NetworkPolicy support for Knative Serving
    - Improving Observability in Knative Eventing
    - Dataplane migration for Apache Kafka communications: From Vert.x to Project Loom
    - Porting Knative Serving to Microshift
    - Self-Balancing Knative Kafka Broker partitions
  - KubeArmor
  - Kubebuilder
  - Kubescape
  - KubeVela
  - Meshery
  - OpenFeature
  - Prometheus
  - Strimzi
  - The Update Framework (TUF)
  - WasmEdge
- Description: Armada is a multi-cluster batch scheduling platform for running high-throughput jobs. Armada provides a CLI for interacting with jobs, queues, and watching events. A common way that users interact with Kubernetes is via kubectl. This project is to build a kubectl plugin that allows users to submit jobs to Armada using kubectl; a sketch follows this entry.
- Expected outcomes:
- Publish a Krew Plugin for Armada
- Users can submit jobs, create queues, watch jobs, etc., using kubectl.
- Recommended Skills: Go, Kubernetes, Kubectl
- Mentor(s): Kevin Hannon, @kannon92, kannon1992@gmail.com
- Expected project size: 175 Hours
- Difficulty: Medium
- Upstream Issue (URL): armadaproject/armada#2120
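
A minimal sketch of how such a plugin could look. kubectl discovers any executable on `PATH` named `kubectl-<name>`, so a `kubectl-armada` binary built with cobra yields `kubectl armada submit ...`. The Armada submission call itself is a placeholder here, since the real client API lives in the Armada repo.

```go
// Hypothetical sketch of a kubectl plugin entry point; the Armada client
// call is a placeholder, not the project's actual API.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "kubectl-armada", Short: "Interact with Armada from kubectl"}

	var queue string
	submit := &cobra.Command{
		Use:   "submit JOB_SPEC_FILE",
		Short: "Submit a job to Armada",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			// A real implementation would parse the job spec and call
			// Armada's submit API here.
			fmt.Printf("would submit %s to queue %q\n", args[0], queue)
			return nil
		},
	}
	submit.Flags().StringVar(&queue, "queue", "default", "Armada queue to submit to")
	root.AddCommand(submit)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```

Publishing to Krew then mostly means packaging this binary with a plugin manifest.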
- Description: Open source projects should not be hard-coded to a particular database. Armada currently only allows users to use Postgres. This project is to build interfaces around our connections to Postgres so that we can allow other databases; a sketch of the idea follows this entry.
- Expected outcomes:
- An interface is created that allows Armada to interact with any SQL database without exposing implementation details of Postgres
- With an interface, mocking and testing become much easier, as mocks can be implemented for DB access. Currently we require a Postgres DB for unit testing.
- Test coverage could increase
- Recommended Skills: Go, SQL
- Mentor(s):
- Kevin Hannon, @kannon92, kannon1992@gmail.com
- Geoffrey Wilson, @suprjinx, geoff@gr-oss.com
- Expected project size: 350 Hours
- Difficulty: Hard
- Upstream Issue (URL): armadaproject/armada#2121
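
One possible shape for such an interface, with illustrative names rather than Armada's actual types; the point is that callers depend on the interface, so Postgres becomes one implementation and unit tests can use an in-memory one.

```go
// Illustrative sketch of a storage interface; names are hypothetical.
package store

import (
	"context"
	"fmt"
)

// JobStore abstracts job persistence so Postgres is just one implementation.
type JobStore interface {
	SaveJob(ctx context.Context, id string, spec []byte) error
	GetJob(ctx context.Context, id string) ([]byte, error)
}

// memStore is an in-memory implementation, useful for unit tests that
// today require a running Postgres instance.
type memStore struct{ jobs map[string][]byte }

func NewMemStore() JobStore { return &memStore{jobs: map[string][]byte{}} }

func (m *memStore) SaveJob(_ context.Context, id string, spec []byte) error {
	m.jobs[id] = spec
	return nil
}

func (m *memStore) GetJob(_ context.Context, id string) ([]byte, error) {
	spec, ok := m.jobs[id]
	if !ok {
		return nil, fmt.Errorf("job %q not found", id)
	}
	return spec, nil
}
```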
- Description: Tetragon can run both with and without Cilium on the same node. Some functionality, however, still depends on the Cilium agent being present. Specifically, Tetragon uses Cilium to retrieve the pod information for destination IPs of pods which are not local to the node. The goal of this project is to introduce this functionality in Tetragon. One approach would be for the Tetragon agent to keep information about all pods in the cluster, but this approach does not scale well because the Kubernetes API server would need to propagate all pod information to all nodes. Instead, the plan is to introduce a new custom resource (CR) which is maintained by the Tetragon operator and provides a mapping from IPs to the small subset of pod information that Tetragon needs. The Tetragon operator will monitor pod information and update the resource as needed. Tetragon agents will watch this CR to provide pod information for destination IPs. A sketch of such a CR follows this entry.
- Expected outcome: Cilium dependency is removed from Tetragon
- Recommended Skills: Go, Kubernetes
- Mentor(s): Michi Mutsuzaki, michi-covalent, michi@isovalent.com; Kornilios Kourtis, kkourt, kornilios@isovalent.com
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): cilium/tetragon#794
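
To make the proposed mapping concrete, here is an illustrative Go shape for such a custom resource; the type and field names are hypothetical, not Tetragon's final API.

```go
// Hypothetical CRD types for the proposed IP-to-pod-info mapping.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PodInfo is the small subset of pod state Tetragon needs per IP.
type PodInfo struct {
	Namespace string            `json:"namespace"`
	PodName   string            `json:"podName"`
	Labels    map[string]string `json:"labels,omitempty"`
}

// PodIPMap maps pod IPs to PodInfo. The operator keeps it up to date;
// agents watch it instead of watching all Pods via the API server.
type PodIPMap struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// Entries keyed by pod IP address.
	Entries map[string]PodInfo `json:"entries,omitempty"`
}
```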
- Description: Cloud Native Buildpacks are integral to the release and deployment of multiple platforms, which means speed of execution is a feature our users depend on. However, there is currently no suite of benchmark tests to catch regressions in speed, nor have we done systematic profiling to look for easy wins or small changes that could decrease the runtime of common operations. This is an excellent skill to build early in one's career, as it tends to be portable between projects and is often appreciated. A benchmark sketch follows this entry.
- Expected outcome: Existence of one or more benchmark tests in the lifecycle repository that can run as part of the Pull Request suite. After creating benchmarks, we can apply profiling to look for speedups, so the best possible outcome will also include documented user-facing performance improvements.
- Recommended Skills: Golang, software development literacy. We will provide mentorship regarding benchmarking, profiling, and integrating results with GitHub Actions.
- Mentor(s): Natalie Arellano (@natalieparellano); Joe Kimmel (@joe-kimmel-vmw)
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): buildpacks/lifecycle#993
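
For illustration, a Go benchmark in the standard `testing` style the suite could adopt; `detectOrder` is a stand-in for a real lifecycle operation such as buildpack detection or layer export.

```go
// Sketch of a benchmark as it could appear in the lifecycle repository.
package lifecycle_test

import "testing"

// detectOrder is a placeholder for an operation worth benchmarking.
func detectOrder(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i
	}
	return total
}

func BenchmarkDetectOrder(b *testing.B) {
	for i := 0; i < b.N; i++ {
		detectOrder(1000)
	}
}

// Run with: go test -bench=. -benchmem ./...
// CI can compare results across commits (e.g. with benchstat) to catch regressions.
```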
- Description: Build out samples and workflows showing how to use Dockerfiles in harmony with a cloud native buildpacks platform.
- Expected outcome: updating the `pack` implementation to be more performant by taking advantage of the available daemon, and updating documentation and sample workflows to reflect the changes
- Recommended Skills: Golang, software development literacy, familiarity with the Cloud Native Buildpacks spec, familiarity with Docker and Dockerfiles.
- Mentor(s): Natalie Arellano (@natalieparellano)
- Expected project size: 175 Hours
- Difficulty: Medium
- Upstream Issue (URL): buildpacks/pack#1623
- Description: DNS-over-QUIC (DoQ) and DNS-over-HTTP/3 (DoH3) are relatively new protocols for transmitting DNS queries with security and privacy. Additionally, DoQ and DoH3 also offer other benefits such as improved latency and better error detection. The goal of this proposal is to add DoQ and/or DoH3 support to CoreDNS; a server-loop sketch follows this entry.
- Expected outcome: An implementation of DoQ or DoH3 for CoreDNS. A stretch goal of adding both DoQ and DoH3 is also within scope.
- Recommended Skills: Golang, DNS.
- Mentor(s): Yong Tang @yongtang (yong.tang.github@outlook.com); Chris O'Haver @chrisohaver (cohaver@infoblox.com)
- Expected project size: 175 Hours
- Difficulty: Medium
- Upstream Issue (URL): coredns/coredns#5583, coredns/coredns#5539
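
A rough sketch of the DoQ server loop (RFC 9250: ALPN `doq`, one query per bidirectional stream, 2-byte length prefix) using the quic-go library. Import path and API details vary by quic-go version, and `handleQuery` stands in for handing the message to CoreDNS's plugin chain.

```go
// Sketch of a DNS-over-QUIC accept loop; error handling is trimmed.
package main

import (
	"context"
	"crypto/tls"
	"encoding/binary"
	"io"

	"github.com/quic-go/quic-go"
)

func serveDoQ(tlsConf *tls.Config) error {
	tlsConf.NextProtos = []string{"doq"} // ALPN identifier from RFC 9250
	ln, err := quic.ListenAddr(":853", tlsConf, nil)
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept(context.Background())
		if err != nil {
			return err
		}
		go func(conn quic.Connection) {
			for {
				// Each DNS query arrives on its own bidirectional stream,
				// prefixed with a 2-byte length (RFC 9250, section 4.2).
				stream, err := conn.AcceptStream(context.Background())
				if err != nil {
					return
				}
				go func(s quic.Stream) {
					defer s.Close()
					var n uint16
					if err := binary.Read(s, binary.BigEndian, &n); err != nil {
						return
					}
					msg := make([]byte, n)
					if _, err := io.ReadFull(s, msg); err != nil {
						return
					}
					resp := handleQuery(msg) // hand off to the resolver
					binary.Write(s, binary.BigEndian, uint16(len(resp)))
					s.Write(resp)
				}(stream)
			}
		}(conn)
	}
}

func handleQuery(msg []byte) []byte { return msg } // placeholder echo
```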
- Description: conmon-rs is a container monitor written in Rust, used by CRI-O to monitor a container's lifecycle. Part of its responsibilities is log forwarding: taking the output of the container and writing that output to various places. Currently, conmon-rs supports one format: the one required by the Kubernetes CRI. The goal of this proposal is to add new log formats from the list of standardized ones, like JSON, Splunk, and Journald.
- Expected outcome: A JSON log driver and Journald log driver are added to conmon-rs. A stretch goal of adding a Splunk log driver is also within scope.
- Recommended Skills: Rust, familiarity with containers
- Mentor(s): Peter Hunt, haircommander, pehunt@redhat.com; Sascha Grunert, saschagrunert, mail@saschagrunert.de
- Expected project size: 175 Hours
- Difficulty: Medium
- Upstream Issue (URL): containers/conmon-rs#1126
- Description: conmon-rs is a container monitor written in Rust, used by CRI-O to monitor a container's lifecycle. In order to declare stability of conmon-rs for CRI-O, many features have to be added for its sibling project: podman. The goal of this proposal is to add additional features that are required by podman to conmon-rs, so conmon-rs can be stabilized.
- Expected outcome: Additional CreateContainer request options are added and supported in conmon-rs to finalize support for podman. Further, the RestoreContainer RPC should be added and supported.
- Recommended Skills: Rust, familiarity with containers
- Mentor(s): Peter Hunt, haircommander, pehunt@redhat.com; Matt Heon, mheon, mheon@redhat.com
- Expected project size: 350 Hours
- Difficulty: Hard
- Upstream Issue (URL): containers/conmon-rs#1127
- Description: Currently, CurveBS metadata is stored in etcd. However, etcd has limited scalability, and the amount of metadata that can be stored is limited. In order to support larger clusters, we hope to add a SQL database such as MySQL, MariaDB, or PostgreSQL as one of the storage engines, adding a new SQL storage client for metadata.
- Expected outcome: Users can choose, through the configuration file, whether metadata is stored in the KV engine or the SQL engine.
- Recommended Skills: C++, SQL
- Mentor(s): Xiaocui Li @ilixiaocui (ilixiaocui@163.com)
- Expected project size: 175h
- Difficulty: Medium
- Upstream Issue (URL): opencurve/curve#2224
- Description: Curve snapshots are currently uploaded to S3 without compression. Using compression can greatly reduce the space occupied by snapshots, thereby saving storage costs.
- Expected outcome: 1) detailed design documentation; 2) implementation of compressed snapshot upload and decompressed download; 3) unit tests and integration test cases; 4) your PR merged into the opencurve repo
- Recommended Skills: C++
- Mentor(s): Chaojie Xu @xu-chaojie (xuchaojie@outlook.com)
- Expected project size: 175 hours
- Difficulty: Medium
- Upstream Issue (URL): opencurve/curve#2223
- Description: At present, Curve has logging and metrics, both of which can be used to analyze performance and locate problems. While they improve the observability of the system, their granularity is coarse and does not allow for precise analysis of how long requests take at each stage. Tracing is a powerful tool that links the invocation relationships between services and records invocation time per request, preserving essential information and connecting dispersed log events, helping us better understand system behavior and assisting in debugging and troubleshooting performance issues.
- Expected outcome: Design the solution and implement it, introduce it into CurveBS, and analyze the latency of IO requests. The implementation needs to be well scalable and can be applied to other modules.
- Recommended Skills: C++, OpenTracing
- Mentor(s): Hanqing Wu @wu-hanqing (wuhanqing@corp.netease.com)
- Expected project size: 350 hours
- Difficulty: Hard
- Upstream Issue (URL): opencurve/curve#2229
- Description: Falco provides an intuitive and highly expressive rule language for configuring its powerful runtime security engine. However, the community still lacks an official and frictionless IDE solution for writing and testing Falco rules. Over the last few releases, the Falco libraries increased support for multiple architectures and platforms, and the integrated rules validator added a new output in machine-readable JSON format. The idea for this project is to add WebAssembly as a new officially supported compilation target for Falco by leveraging the Emscripten toolchain, and creating a new development environment for security rules in the form of a web single-page application by running Falco right inside the browser. The end result is envisioned to be similar to the Go Playground, but without the need for any backend. The beauty of this idea is the opportunity of experiencing very different technologies of the cloud-native landscape all in a single project: low-level system code close to the Linux kernel, the fast-growing WebAssembly world, and frontend development for a web application.
- Expected outcome: The contributor will collaborate with the Falco maintainers to introduce WebAssembly as a new supported compilation target, and will work on the frontend development of the web application. The feasibility of the project has already been assessed. The rules editor playground will dramatically benefit the learning curve and the development experience of security practitioners writing Falco rules, and will be the basis on which new educational content could be created for the community. A stretch goal would be to provide reusable groundwork for future integrations with other IDEs supporting WebAssembly, such as Visual Studio Code.
- Recommended Skills: Familiarity with C/C++, JavaScript or TypeScript, and a frontend framework of choice (React, Vue, Angular, Svelte, etc.)
- Mentor(s): Jason Dellaluce, @jasondellaluce, jasondellaluce@gmail.com
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): falcosecurity/evolution#262
- Description: in-toto's Python implementation currently does not support the generation of the new-style in-toto Attestations. The implementation must be updated with the schemas for attestations and some commonly used predicate types such as SLSA Provenance; a sketch of the Statement schema follows this entry.
- Expected outcome: in-toto-python includes support for attestations and some predicate types.
- Recommended Skills: Python
- Mentor(s): Aditya Sirish A Yelgundhalli (@adityasaky, aditya DOT sirish AT nyu DOT edu), Santiago Torres-Arias (@SantiagoTorres, santiagotorres AT purdue DOT edu)
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): in-toto/in-toto#562
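
For reference, a new-style attestation wraps a predicate in a Statement. A small rendering of that schema, following the in-toto Attestation Framework spec (shown in Go only for consistency with the other sketches in this document):

```go
// Sketch of the in-toto Statement layout the implementation must support.
package main

import (
	"encoding/json"
	"fmt"
)

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"`
}

type Statement struct {
	Type          string    `json:"_type"`
	Subject       []Subject `json:"subject"`
	PredicateType string    `json:"predicateType"`
	Predicate     any       `json:"predicate"`
}

func main() {
	st := Statement{
		Type:          "https://in-toto.io/Statement/v0.1",
		Subject:       []Subject{{Name: "app.tar.gz", Digest: map[string]string{"sha256": "abc123"}}},
		PredicateType: "https://slsa.dev/provenance/v0.2",
		Predicate:     map[string]any{"builder": map[string]string{"id": "https://example.com/builder"}},
	}
	out, _ := json.MarshalIndent(st, "", "  ")
	fmt.Println(string(out))
}
```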
- Description: in-toto maintains four implementations written in Python, Go, Java, and Rust. While all of them were written using the same specification, they are not currently tested against each other to ensure compatibility. Part of this task, therefore, is to implement cross-implementation testing for common in-toto workflows such as generating metadata with one implementation and verifying it with another. Additionally, in-toto's implementation releases are not tested against releases of dependencies that have breaking changes. In some situations, users who interact with in-toto's tooling may update a dependency that breaks in-toto. The other part of this task is to implement this testing framework.
- Expected outcome: in-toto implementations are better tested against each other and their dependencies.
- Recommended Skills: Python, Go, optionally some Rust and Java
- Mentor(s): Aditya Sirish A Yelgundhalli (@adityasaky, aditya DOT sirish AT nyu DOT edu), Santiago Torres-Arias (@SantiagoTorres, santiagotorres AT purdue DOT edu)
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): in-toto/in-toto#563
- Description: Jaeger (https://jaegertracing.io) is a popular platform for distributed tracing. ClickHouse is a powerful columnar database, utilized in many commercial and some open source monitoring products. Jaeger currently has a community-provided (unofficial) plugin that implements a Jaeger storage backend on top of ClickHouse. This project is to build first-class support for ClickHouse in Jaeger as core functionality; a writer/schema sketch follows this entry.
- Expected outcomes:
- Design table schema, document trade-offs
- Implement ClickHouse support as core storage backend
- Address schema creation problem / tooling
- Add relevant documentation to the Jaeger website
- Stretch goals:
- Use Kafka Connector for ClickHouse for bulk inserts
- Support Monitoring tab in Jaeger directly from ClickHouse
- Recommended Skills: Go, SQL
- Mentor(s): Yuri Shkuro, @yurishkuro
- Expected project size: 350 Hours
- Difficulty: Medium
- Upstream Issue (URL): jaegertracing/jaeger#4196
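
An illustrative reduction of what the core write path could look like. The schema in the comment is one possible starting point for the trade-off discussion, not a settled design, and the `Span`/`Writer` types are stand-ins for Jaeger's actual spanstore interfaces.

```go
// Sketch of a ClickHouse-backed span writer; names are illustrative.
package clickhousestore

import (
	"context"
	"database/sql"
)

// One candidate schema (trade-offs to be documented by the project):
//
//   CREATE TABLE spans (
//     trace_id    FixedString(32),
//     span_id     FixedString(16),
//     service     LowCardinality(String),
//     operation   LowCardinality(String),
//     start_time  DateTime64(6),
//     duration_us UInt64,
//     tags        String  -- JSON-encoded; could also be Map(String, String)
//   ) ENGINE = MergeTree ORDER BY (service, start_time);

type Span struct {
	TraceID, SpanID, Service, Operation string
	StartTimeMicros, DurationMicros     uint64
	TagsJSON                            string
}

type Writer struct{ db *sql.DB }

func (w *Writer) WriteSpan(ctx context.Context, s Span) error {
	_, err := w.db.ExecContext(ctx,
		`INSERT INTO spans (trace_id, span_id, service, operation, start_time, duration_us, tags)
		 VALUES (?, ?, ?, ?, ?, ?, ?)`,
		s.TraceID, s.SpanID, s.Service, s.Operation, s.StartTimeMicros, s.DurationMicros, s.TagsJSON)
	return err
}
```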
- Description: Jaeger (https://jaegertracing.io) is a popular platform for distributed tracing. Critical path analysis is an important tool in latency investigations. This project aims to add support for critical path analysis to the Jaeger UI; an algorithm sketch follows this entry.
- Expected outcomes:
- Implement critical path determination algorithm
- Enhance Trace Timeline view to overlay critical path on top of the trace.
- Add relevant documentation to the Jaeger website
- Author a blog post on Jaeger blog explaining the new feature
- Stretch goals:
- Add critical path visualization to other trace views (graph, table, flamechart)
- Recommended Skills: Javascript, Typescript
- Mentor(s): Yuri Shkuro, @yurishkuro
- Expected project size: 175 Hours
- Difficulty: Medium
- Upstream Issue (URL): jaegertracing/jaeger-ui#1288
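
The determination step is well suited to a "last finishing child" walk over the span tree. A sketch of that algorithm (in Go for consistency with the other sketches here; the real feature is TypeScript in Jaeger UI):

```go
// Sketch of critical-path determination over a span tree.
package main

import "fmt"

type Span struct {
	ID         string
	Start, End int64 // e.g. microseconds since trace start
	Children   []*Span
}

// criticalPath appends the IDs of spans on the critical path of the
// subtree rooted at s, using the "last finishing child" rule.
func criticalPath(s *Span, out *[]string) {
	*out = append(*out, s.ID)
	cursor := s.End
	for {
		// Among children finishing at or before the cursor, take the one
		// that finishes last; the parent is on the critical path until then.
		var next *Span
		for _, c := range s.Children {
			if c.End <= cursor && (next == nil || c.End > next.End) {
				next = c
			}
		}
		if next == nil {
			return
		}
		criticalPath(next, out)
		// Before this child started, the critical path is back in the
		// parent; keep scanning earlier siblings.
		cursor = next.Start
	}
}

func main() {
	db := &Span{ID: "db", Start: 10, End: 40}
	cache := &Span{ID: "cache", Start: 45, End: 60}
	root := &Span{ID: "root", Start: 0, End: 70, Children: []*Span{db, cache}}
	var path []string
	criticalPath(root, &path)
	fmt.Println(path) // [root cache db]
}
```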
- Description: Backstage is an open platform for building developer portals. As the website states, "powered by a centralized software catalog, Backstage restores order to your infrastructure and enables your product teams to ship high-quality code quickly — without compromising autonomy". It would be nice to provide a Keptn integration for this portal so that Keptn can connect with as many other tools as possible. See all Backstage Plugins. A possible solution could be a specialized plugin for Keptn using the Keptn REST API and/or CLI. It might also be possible to create a unified plugin for Keptn and other CI/CD tools, e.g. implemented on top of the Cloud Events / CDEvents standard.
- Expected outcome: A Backstage plugin for Keptn (or a unified CI/CD plugin covering Keptn) available to the community.
- Recommended Skills: Typescript, Javascript, React, Backstage
- Mentor(s): Brad McCoy bradmccoydev@gmail.com
- Expected project size: 175h
- Difficulty: Hard
- Upstream Issue (URL): keptn/integrations#19
- Description: Keptn Lifecycle Toolkit (KLT) can run pre and post deployment tasks. Currently KLT supports the Deno runtime. KeptnTasks can be used for almost anything that an individual can dream up and script in JavaScript. However, users (especially new adopters) usually need a useful set of example tasks that they can use and / or modify to suit their requirements.
This project would build such a library of re-usable tasks for common needs such as:
- Notifying Chat tools
- Triggering external tools (eg. a pipeline)
- Integrations with Systems of Record (i.e. to update a CMDB)
This project lends itself to GSoC due to the modular nature of the tasks, which almost guarantees that something will be delivered: the task delivery can be planned and organised easily into sprints across the project lifecycle. It is also relatively approachable: given that the tasks are simply JavaScript, it should be easy for most first-time contributors.
- Expected outcome: A catalogue of useful, re-usable `KeptnTaskDefinitions` that will serve as useful reference material for adopters
- Recommended Skills: Typescript and / or JavaScript, basic understanding of Keptn Lifecycle Toolkit, Kubernetes basic knowledge, knowledge of CRDs
- Mentor(s): Adam Gardner (adam at agardner dot net)
- Expected project size: 175h
- Difficulty: Easy
- Upstream Issue (URL): keptn-contrib/klt-tasks#3
- Description: At the moment, Keptn Metrics only supports Prometheus and Dynatrace as metric providers, but there are many other metric providers out there. The goal of this project is to add support for additional metric providers; an interface sketch follows this entry.
- Expected outcome: Create at least 3-5 additional metrics providers for the lifecycle toolkit, document their usage, and provide examples
- Recommended Skills: Go, Kubernetes, Prometheus, Observability in General
- Mentor(s): Thomas Schuetz @thschue (thomas.schuetz at t-sc dot eu), Anna Reale @RealAnna (anna.reale at dynatrace dot com)
- Expected project size: 175h
- Difficulty: Easy
- Upstream Issue (URL): keptn/lifecycle-toolkit#745
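
An illustrative shape of what a provider abstraction could look like; the metrics-operator's real interface may differ, so treat these names as assumptions.

```go
// Hypothetical provider abstraction for Keptn Metrics backends.
package providers

import (
	"context"
	"errors"
)

// KeptnMetricQuery is a stand-in for the fields a provider needs from the
// KeptnMetric / KeptnMetricsProvider resources.
type KeptnMetricQuery struct {
	Query    string
	Endpoint string
	APIToken string
}

// Provider is what each backend (Prometheus, Dynatrace, Datadog, ...) would
// implement: run the query and return a single value plus the raw response.
type Provider interface {
	EvaluateQuery(ctx context.Context, q KeptnMetricQuery) (value string, raw []byte, err error)
}

var ErrUnsupported = errors.New("unsupported provider type")

// NewProvider is the factory a new backend gets registered in.
func NewProvider(kind string) (Provider, error) {
	switch kind {
	// case "prometheus": return &prometheusProvider{}, nil
	// case "datadog":    return &datadogProvider{}, nil
	default:
		return nil, ErrUnsupported
	}
}
```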
- Description: The main idea of this proposal is to make it possible to define timeframes for metrics and get standardized aggregated results for this timeframe. For instance, it should be possible to query the metrics for the last n minutes, hours, days, weeks, months, years, etc. and get the aggregated results for this timeframe.
- Expected outcome: Propose a solution for this issue, Implement the solution and provide docs and examples, provide tests
- Recommended Skills: Go, Kubernetes, Prometheus, Observability in General
- Mentor(s): Thomas Schuetz @thschue (thomas.schuetz at t-sc dot eu), Florian Bacher @bacherfl (florian.bacher at dynatrace dot com)
- Expected project size: 175h
- Difficulty: Hard
- Upstream Issue (URL): keptn/lifecycle-toolkit#859
- Description: We see more users asking for complete isolation in Knative Eventing deployments. While there are Knative Eventing components that support isolated data planes, we are interested in seeing isolated control planes as well. There were discussions about this in the community before, and many of the asks were left unaddressed. However, we have tools that support multitenancy today, such as kcp. We see this project as an experiment.
- Expected outcome: Finding issues blocking multiple control planes, and possibly fixing them.
- Recommended Skills: Golang, Kubernetes, Knative, Kubernetes Controllers
- Mentor(s): Ali Ok @aliok (aliok AT redhat DOT com), Christoph Stäbler @creydr (cstaebler AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): knative/eventing#6601
- Description: Leveraging Kubernetes Service Account identity and OAuth audiences, users should be able to configure Knative Eventing components to authenticate CloudEvent deliveries using the identity of the Subscription, Trigger, or Source. Additionally, Brokers and Channels can leverage the OAuth identity associated with incoming CloudEvents to implement policy. A token-attachment sketch follows this entry.
- Expected outcome: At least the following components are able to use service accounts for authentication: in-memory-channel, multitenant broker, apiserver source, ping source. Ideally, the primitives from this project could be re-used by other channel and broker implementations.
- Recommended Skills: Golang, Kubernetes, OAuth or OIDC
- Mentor(s): Evan Anderson @evankanderson
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): knative/eventing#5047
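
A minimal sketch of the sender side, assuming a projected, audience-scoped service account token is mounted at an illustrative path:

```go
// Sketch: attach the sender's service account identity as a Bearer token
// on an outgoing CloudEvent HTTP delivery. The token path is an assumption.
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
)

const tokenPath = "/var/run/secrets/tokens/oidc-token" // hypothetical projected volume

func authenticatedSend(req *http.Request) (*http.Response, error) {
	tok, err := os.ReadFile(tokenPath)
	if err != nil {
		return nil, fmt.Errorf("reading identity token: %w", err)
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(string(tok)))
	// The receiving Broker/Channel can validate this JWT against the
	// cluster's OIDC discovery endpoint and apply per-identity policy.
	return http.DefaultClient.Do(req)
}
```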
- Description: Implement port-level network routing for Knative Serving internal addresses. This will be an extension of knative-extensions/net-kourier#852 to other Knative Ingress implementations, along with implementation of default NetworkPolicies for the activator and Knative Revisions to enforce that requests are routed through the Knative Ingress.
- Expected outcome: Users will be able to use NetworkPolicy to restrict access to their Knative Services at a network (L4 / TCP) level.
- Recommended Skills: Golang, Kubernetes networking, familiarity with Envoy or the HTTP protocol
- Mentor(s): Evan Anderson @evankanderson
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): knative/serving#13780
- Description: We see users struggling to observe what happens inside Knative Eventing, and we want to improve that by providing easy-to-use tools. The idea is divided into stages. The first stage is to write code (Python and/or Golang) that implements a plugin for the Knative command-line interface (kn CLI). The plugin takes output from observability data gathered by Knative and answers simple questions, such as: where did my event go, given its event ID? Can it show clusters/groups of common traces and show exceptions? The plugin should be able to verify that the necessary Knative configuration for observability is enabled and to access server-side components. Possible next stages may be to create active probes that add to the CLI the capability to send probe events to specific Knative components (such as a Kafka source or broker) and report whether they work as expected (health checks). Another area to explore is using conversational AI to improve the plugin, using AI to parse natural language input and/or process available observability data for common Knative questions. As part of the work, fixes and improvements may be proposed for gaps in Knative observability discovered while testing the plugin.
- Expected outcome: Improved Knative Eventing observability, improved documentation and published one or more blog posts
- Recommended Skills: Python, data science skills, Golang, Kubernetes
- Mentor(s): Aleksander Slominski @aslom (aslomins AT redhat DOT com), Ansu Varghese @aavarghese (anvarghe AT redhat DOT com), and Lionel Villard @lionelvillard (lvillard AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): knative/eventing#6247
- Description: The Knative Broker's data-plane communication with Apache Kafka for consuming and producing records is currently done via the Vert.x Kafka client library. The library is basically a wrapper for communication with Apache Kafka inside of the Vert.x threading model. This project requires an evaluation of OpenJDK 19's "Project Loom" itself and of how to leverage it for efficient communication with an Apache Kafka cluster. The goal of the project is to migrate the usage of Vert.x for any communication with Apache Kafka to pure OpenJDK 19 Loom, and reduce the number of 3rd-party frameworks, such as Vert.x, for Apache Kafka communication.
- Expected outcome: OpenJDK 19 gets evaluated for our use case, Knative Kafka Broker gets migrated to use Loom for Apache Kafka communication.
- Recommended Skills: Java, Apache Kafka, understanding of Kubernetes
- Mentor(s): Matthias Wessendorf @matzew (matzew AT redhat DOT com), Pierangelo Di Pilato @pierDipi (pierdipi AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): knative-extensions/eventing-kafka-broker#2928
- Description: More and more workload is moving towards running on the edge. We have seen experiments running Kubernetes on vehicles, fighter jets, 5G antennas, and various far-edge, near-edge, and fat-edge environments. We would like to see what the challenges are when Knative is run in a resource-limited environment. While there are multiple edge-friendly Kubernetes distributions, we would like to see Microshift used as the base platform. Knative consists of Serving and Eventing modules, but focusing on Knative Serving as a first step should be more approachable. The project consists of 2 stages. The first is to run Knative on Microshift with minimal resources, which requires finding the problems there and solving them. A stretch goal is to find out what happens with architectures other than x86_64.
- Expected outcome: Having Knative Serving with an ingress layer running on top of Microshift. Having a Hello-World Knative Service running on top. Finding issues blocking the edge setup, and possibly fixing them.
- Recommended Skills: Golang, Kubernetes, Knative, good understanding of networking, good understanding of CI/CD
- Mentor(s): Reto Lehmann @ReToCode (rlehmann AT redhat DOT com), Stavros Kontopoulos @skonto (skontopo AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): knative/serving#12718
- Description: Creating a Knative Kafka Broker requires developers to specify the number of partitions the backing Kafka topic should have; however, this is not an easy decision to make, and it requires planning based on the expected load the Knative Broker could receive. This project aims to remove this configuration setting by having an autoscaler that is responsible for adding or removing partitions based on the effective load the Knative Kafka Broker receives, while preserving ordered and unordered delivery features for Triggers. A scale-up sketch follows this entry.
- Expected outcome: A Knative Kafka Broker can be created without setting the number of partitions and the number of partitions for a topic backing the Knative Kafka Broker increases or decreases based on the observed load received.
- Recommended Skills: Kubernetes, Apache Kafka, Go, Java
- Mentor(s): Pierangelo Di Pilato @pierDipi (pierdipi AT redhat DOT com), Ali Ok @aliok (aliok AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): knative-extensions/eventing-kafka-broker#2917
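
A sketch of the scale-up action using the Sarama admin client (Kafka can only grow a topic's partition count). The `desiredPartitions` heuristic is invented for illustration; the hard part of the project is choosing it safely while preserving ordered delivery.

```go
// Sketch of the core scaling action for a partition autoscaler.
package main

import "github.com/Shopify/sarama"

func scaleUp(brokers []string, topic string, current, desired int32) error {
	if desired <= current {
		return nil // Kafka cannot shrink a topic's partition count
	}
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_8_0_0 // admin APIs need a recent protocol version
	admin, err := sarama.NewClusterAdmin(brokers, cfg)
	if err != nil {
		return err
	}
	defer admin.Close()
	// A nil assignment lets the broker place the new partitions.
	return admin.CreatePartitions(topic, desired, nil, false)
}

// desiredPartitions is a toy heuristic: one partition per perPartitionCap
// msgs/sec of observed ingress, never below the current count.
func desiredPartitions(current int32, msgsPerSec, perPartitionCap float64) int32 {
	want := int32(msgsPerSec/perPartitionCap) + 1
	if want < current {
		return current
	}
	return want
}
```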
- Description: Build a GitHub Action to allow the usage of KubeArmor in CI. KubeArmor should be able to identify changes in the application posture early in the dev life cycle. If an app change results in new app behavior, such as new process invocations, new file system accesses, or new network connections, then this has to be highlighted early in the application life cycle so that the security posture changes can be handled accordingly.
- Expected outcome: `karmor summary` provides a way to verify the application behavior. The aim here would be to baseline the application behavior and check for any deviation during subsequent application updates. It should then look for any potential security gaps and recommend policies based on that. The action should be able to generate a summary using the baseline benchmark and then show the application-based changes in graphical mode.
- Mentor(s): Ankur Kothiwal (Ankurk99, ankur DOT kothiwal99 AT gmail DOT com), Anurag Kumar (kranurag7, contact DOT anurag7 AT gmail DOT com), Barun Acharya (daemon1024, barun1024 AT gmail DOT com)
- Expected project size: 175 Hours
- Recommended Skills: Kubernetes, GitHub Actions
- Difficulty: Medium
- Upstream Issue (URL): kubearmor/KubeArmor#1128
- Description: Store KubeArmor policies & host policies as OCI artifacts. OCI artifacts are a set of conventions that allow us to store assets other than container images, like Helm charts, inside an OCI registry. Artifact Hub is a website where users can find, install, and publish packages and configurations for CNCF projects. The idea is to use Artifact Hub for pushing, pulling, and verifying the authenticity of KubeArmor policies using cosign.
- Expected outcome: The contributor is expected to create a subcommand for `karmor` to interact with OCI registries for pushing, pulling, and verifying policies based on the OCI artifacts specification.
- Mentor(s): Ankur Kothiwal (Ankurk99, ankur DOT kothiwal99 AT gmail DOT com), Anurag Kumar (kranurag7, contact DOT anurag7 AT gmail DOT com), Barun Acharya (daemon1024, barun1024 AT gmail DOT com)
- Expected project size: 175 Hours
- Recommended Skills: Go, Containers
- Difficulty: Medium
- Upstream Issue (URL): kubearmor/KubeArmor#1130
- Description: Kubebuilder introduced Phase 2 Plugins to make itself more extensible as a CLI and as a library. This enables the community to develop, use, maintain, and release their own plugins independently. Phase 2 Plugins have been available for a few releases, and in order to further increase and help facilitate adoption of this functionality, we need to work on creating an extensive sample, documentation, and tutorial on how to build and use external plugins. To achieve this goal, we can use Go, Ansible, Python, or Helm to create the sample external plugin that will be used as a central reference plugin, serving as an example of an external plugin that leverages Phase 2 Plugins. A sketch of the plugin contract follows this entry.
- Expected outcome:
  - Finish the remaining tasks in the meta task (the big task that tracks various subtasks for the feature): kubernetes-sigs/kubebuilder#2600
  - Ensure the quality of the sample and Phase 2 Plugins API by:
    - Adding e2e tests which should validate that the sample can be built and used with the Kubebuilder CLI.
    - Implementing a maintainable solution to ensure that the sample will not get outdated.
      - We might need to do something like: kubernetes-sigs/kubebuilder#3191
      - However, a dependabot for the sample might suffice in this case, just to keep the dependencies up to date.
  - Review and improve the plugin documentation to ensure that the community can easily leverage the Phase 2 Plugins API: https://book.kubebuilder.io/plugins/plugins.html
  - Verify that we are generating Go docs for the Phase 2 Plugins API.
  - Create a tutorial with the sample to let the community and users know how to build and use their own external plugins.
  - Add an extra sample using any language other than Golang.
  - Create a video demo to showcase the Phase 2 Plugin working with the sample. This will demonstrate the achievements and will be shared with the community. We will also add the link to the demo in the docs.
- Recommended Skills: Go
- Mentor(s): Rashmi Gottipati (rgottipa AT redhat DOT com), Bryce Palmer (bpalmer AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): kubernetes-sigs/kubebuilder#2600
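
For orientation, external (Phase 2) plugins are separate binaries that speak JSON over stdin/stdout with the Kubebuilder CLI. The field set below is abbreviated and should be checked against the Kubebuilder book's PluginRequest/PluginResponse definitions.

```go
// Condensed sketch of an external plugin; field names are approximate.
package main

import (
	"encoding/json"
	"os"
)

type PluginRequest struct {
	APIVersion string            `json:"apiVersion"`
	Command    string            `json:"command"` // e.g. "init", "create api"
	Args       []string          `json:"args"`
	Universe   map[string]string `json:"universe"` // filename -> contents
}

type PluginResponse struct {
	APIVersion string            `json:"apiVersion"`
	Command    string            `json:"command"`
	Universe   map[string]string `json:"universe"`
	Error      bool              `json:"error,omitempty"`
	ErrorMsgs  []string          `json:"errorMsgs,omitempty"`
}

func main() {
	var req PluginRequest
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1)
	}
	resp := PluginResponse{APIVersion: req.APIVersion, Command: req.Command, Universe: req.Universe}
	if req.Command == "init" {
		// Scaffold a file by adding it to the universe.
		resp.Universe["PLUGIN.md"] = "scaffolded by the sample external plugin\n"
	}
	json.NewEncoder(os.Stdout).Encode(resp)
}
```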
- Description: Things change, and we constantly grow Kubebuilder, providing new features and bug fixes. Also, sometimes it is required to address incompatible changes via new plugin versions. However, all changes and growth bring some complexity for users keeping their solutions upgraded and adopting all that is new. The primary motivation of this project is to provide a helper via a CLI command that will automate a common and manual part of this process and try to make it less painful. This project will also add a lot of value for Kubebuilder and its maintainers, since it can encourage users to move forward more frequently. We might be able to use this feature to create lovely automation using git and provide GitHub Actions in the future. Note that we have a design proposal to develop the initial version of this feature, which is expected in this project. However, your ideas and input to solve this challenge will be very welcome!
- Expected outcome:
- Develop a helper to achieve the goal, making it easier for users to upgrade their projects.
- Interact with the community: make sure the roadmap is generally expected, and continuously receive feedback to move forward with the project.
- Ensure the helpers are adequately covered with tests and documentation
- Demonstrate the feature in the community meeting
- Recommended Skills: Go
- Mentor(s): Camila Macedo (camilamacedo86 AT gmail DOT com), Varsha (vnarsing AT redhat DOT com), Tony (TianYi) (kavinjsir AT gmail DOT com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): kubernetes-sigs/kubebuilder#3244
- Description: Kubescape has the ability to submit scan results to a backend service, which can then store them and provide UIs to give insight over time into a user's security posture. There is an implementation of this API by the commercial service run by ARMO, Kubescape's original creators. We would like to fully document this API and improve it for future use cases, which may eventually have a wider scope than just Kubescape itself. If the student wants to extend the project, they could also build a sample backend to store data in-cluster using this API. (Please specify in your application if you are looking to work half or full time.)
- Expected outcome: An improved and fully-documented API for how to send scan results from a Kubescape client to a security backend service.
- Recommended Skills: Go, OpenAPI
- Mentor(s): Craig Box, Ben Hirschberg
- Expected project size: 175h for API only, 350h for API + sample backend
- Difficulty: Medium
- Upstream Issue (URL): kubescape/kubescape#1172
- Description: Kubescape is a platform that can scan clusters for configuration and vulnerability issues. Some work has been done to integrate Kubescape with Backstage, to present the data from a cluster. This project will build on that work by making Backstage a fully-supported method for interacting with Kubescape's in-cluster components and API server. If the student is interested the project could be extended to add a Grafana dashboard to display information in a similar fashion. (Please specify in your application if you are looking to work half or full time.)
- Expected outcome: A production-ready Backstage plugin for Kubescape in the plugin marketplace
- Recommended Skills: TypeScript, React, Go
- Mentor(s): Craig Box, Ben Hirschberg
- Expected project size: 175h for Backstage only, 350h for Backstage + Grafana
- Difficulty: Medium
- Upstream Issue (URL): kubescape/kubescape#1173
- Description: As more and more developers start writing KubeVela (OAM) application YAML files and CUE-based X-Definitions, it is increasingly important to provide tools that help users make correct configurations. An IDE plugin is one way to achieve that. We can make VSCode/JetBrains plugins for formatting and previewing the written applications and KubeVela's X-Definitions. The plugin should be able to sync environment data (including installed X-Definitions) and intelligently make dry-run previews of the rendering result of applications. It can also provide a preview capability for the selected clusters when a topology policy is written. Live-diff between the current application and the remote application configuration is useful as well. In addition to applications, when users write X-Definitions based on CUE, automatic validation and previewing of rendering results under mock inputs can also support them.
- Expected outcome: IDE Plugins for KubeVela, or standalone desktop apps
- Recommended Skills: Golang, Kubernetes, NodeJS, KubeVela, CUELang
- Mentor(s): Da Yin @Somefive
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issues (URL): kubevela/kubevela#5366
- Description: Some developers have started to integrate KubeVela into their enterprise systems. It is getting increasingly important to provide API SDKs to help developers start faster. Java and TypeScript are currently the two most in-demand languages. The KubeVela API server follows the OpenAPI schema, which means OpenAPI generator tools could provide some help.
- Expected outcomes:
- Commit high-quality SDK code, covering the TypeScript and Java languages.
- Commit the GitHub action config and tools to auto-generate SDK.
- Recommended Skills: Java, TypeScript, NodeJS, KubeVela, GithubAction
- Mentor(s): QingGuo Zeng @barnettZQG (barnett.zqg AT gmail.com)
- Expected project size: 175h
- Difficulty: Medium
- Upstream Issues (URL): kubevela/kubevela#5428
- Description: The KubeVela API server needs to save metadata to a database to obtain more efficient queries and larger storage space. Currently we only support MongoDB, but some users don't have MongoDB infrastructure; MySQL and PostgreSQL drivers are needed by many users. Most users install KubeVela without a database; when they want to use it in production, the existing data needs to be migrated to a database, so the migration tool is important. A driver-interface sketch follows this entry.
- Expected outcomes:
- Commit the MySQL driver code.
- Commit the PostgreSQL driver code.
- Commit the data migration tool code.
- Recommended Skills: Go, KubeVela, MySQL, PostgreSQL
- Mentor(s): QingGuo Zeng @barnettZQG (barnett.zqg AT gmail.com)
- Expected project size: 175h
- Difficulty: Medium
- Upstream Issues (URL): kubevela/kubevela#5426
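
An illustrative reduction of the datastore abstraction a new driver would satisfy; KubeVela's actual interface has more methods, so this is a shape, not the real contract.

```go
// Hypothetical reduction of a pluggable datastore interface.
package datastore

import "context"

type Entity interface {
	TableName() string
	PrimaryKey() string
}

// DataStore is what each driver (MongoDB, MySQL, PostgreSQL) implements.
type DataStore interface {
	Add(ctx context.Context, entity Entity) error
	Get(ctx context.Context, entity Entity) error
	Put(ctx context.Context, entity Entity) error
	Delete(ctx context.Context, entity Entity) error
	IsExist(ctx context.Context, entity Entity) (bool, error)
}

// A MySQL driver would implement DataStore on top of database/sql,
// serializing each entity into a row keyed by PrimaryKey(); the migration
// tool then just reads every entity from one DataStore and Adds it to another.
```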
- Description: KubeVela can be deployed in a single management cluster. The KubeVela management cluster is a single point of risk: when a KubeVela management cluster disaster occurs, business continuity cannot be guaranteed. In enterprise production environments (e.g. the financial industry), KubeVela needs to meet higher availability requirements. Consider these three enhancements to achieve this goal: cluster meta backup & recovery, master redundancy, and failover support. Cluster meta backup & recovery is the baseline feature for vela HA.
- Expected outcomes:
- Implement velero as a KubeVela addon to install velero in the KubeVela cluster.
- Implement `vela ha enable` to enable cluster meta backup on a configured schedule (cron).
- Implement `vela ha restore` to restore cluster meta.
- Recommended Skills: Golang, Kubernetes, KubeVela, CUELang
- Mentor(s): Jiahang Xu @jefree-cat(jiahang_xu@cmbchina.com), Xiangbo Ma @fourierr(maxiangboo@gmail.com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issues (URL): kubevela/kubevela#5483
- Description: Meshery's highly dynamic infrastructure configuration capabilities require real-time evaluation of complex policies. Policies of various types and with a high number of parameters need to be evaluated client-side. With policies expressed in Rego, the goal of this project is to incorporate use of the https://github.com/open-policy-agent/golang-opa-wasm project into Meshery UI, so that a powerful, real-time user experience is possible.
- Expected outcome:
- Recommended Skills: Golang, Open Policy Agent, WASM
- Mentor(s): Lee Calcote @leecalcote (leecalcote@gmail.com), Abhishek Kumar @Abhishek-kumar09 (abhimait1909@gmail.com)
- Expected project size: 350 hours
- Difficulty: Hard
- Upstream Issue (URL): meshery/meshery#7019
- Description: Network topologies and graph databases go hand-in-hand. The OpenAPI specifications for Kubernetes provide taxonomy, but augmenting a graph data model with formalized ontologies enables any number of capabilities; one of the more straightforward is the inferencing requisite for natural language processing, and consequently, a human-centric query / response interaction becomes possible. More importantly, more advanced systems can be built when a graph data model of connected systems is upgraded to a semantic knowledge graph.
- Expected outcome:
- Web-based MeshModel capabilities browser
- Modeling in graph database
- Augmentation of cuelang-based component generator
- Stretch: Import/export of MeshModel models and components as OCI images
- Recommended Skills: Reactjs, Golang, Cuelang, GraphQL, OpenAPI Schema
- Mentor(s): Lee Calcote @leecalcote (leecalcote@gmail.com), Abhishek Kumar @Abhishek-kumar09 (abhimait1909@gmail.com)
- Expected project size: 350 hours
- Difficulty: Hard
- Upstream Issue (URL): meshery/meshery#7465
- Description: Meshery MeshModels represent a schema-based description of cloud native infrastructure. MeshModels need to be portable between Meshery deployments as well as easily versionable in external repositories.
- Expected outcome:
- Meshery clients (mesheryctl and Meshery UI) should be able to import/export MeshModels as OCI images.
- Meshery clients (mesheryctl and Meshery UI) should be able to push/pull from OCI-compatible registries.
- Stretch Goal: OCI image signing; Verify the authenticity of MeshModels using cosign.
- Target registries: Meshery Catalog (https://meshery.io/catalog), Artifact Hub.
- Recommended Skills: Reactjs, Golang, GraphQL
- Mentor(s): Lee Calcote @leecalcote (leecalcote@gmail.com), Abhishek Kumar @Abhishek-kumar09 (abhimait1909@gmail.com)
- Expected project size: 175 hours
- Difficulty: Medium
- Upstream Issue (URL): meshery/meshery#6447
- Description: It’s important that non-trivial changes in OpenFeature are agreed upon and documented. The OpenFeature Enhancement Proposal process should be used to gather community feedback and ensure overall alignment with the project. This process should be streamlined to encourage community participation by creating documentation, scripts, and automated workflows.
- Expected outcome: Create an end-to-end workflow for creating, reviewing, approving, and publishing OpenFeature Enhancement Proposals.
- Recommended Skills: TypeScript, Python, GitHub Actions
- Mentor(s): Michael Beemer @beeme1mr (michael.beemer at dynatrace dot com), David Hirsch @DavidPHirsch (david.hirsch at dynatrace dot com)
- Expected project size: 175 Hours
- Difficulty: Easy
- Upstream Issue (URL): open-feature/ofep#48
- Description: OpenMetrics specifies the text and protobuf formats for metrics that applications expose to systems like Prometheus. One aspect of the OpenMetrics format is the exposition of the created time for some metrics like counters, summaries, and histograms. It allows greater accuracy of rates (counter resets and initial values), compatibility with the OpenTelemetry data model, and more. Currently, Prometheus does not handle created timestamps. Thus, this project aims to design and implement initial support for created timestamps in Prometheus (within its APIs and database storage). Note that we don't expect a detailed proposal as part of the application process; we will work on the final design document together. However, initial thoughts and ideas are welcome and will help the selection process. Mentors are in the EMEA and NA time zones. A small exposition demo follows this entry.
- Expected outcome: Prometheus when scraping metrics in OpenMetrics format stores the created timestamp for export and querying purposes.
- Recommended Skills: Go, protobuf, Prometheus
- Mentor(s): Bartlomiej Plotka @bwplotka (bwplotka@gmail.com), Daniel Hrabovcak @TheSpiritXIII (thespiritxiii@gmail.com), Max Amin @macxamin (maxamin25@gmail.com)
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): prometheus/prometheus#6541
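
As a small demo of where created timestamps surface today: client_golang can expose a `_created` sample when the OpenMetrics format is negotiated (exact behavior varies by client version). The project is about Prometheus ingesting and storing that timestamp instead of dropping it.

```go
// Sketch: expose a counter in OpenMetrics format, where a created
// timestamp can accompany it as a demo_events_created sample.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	c := promauto.With(reg).NewCounter(prometheus.CounterOpts{
		Name: "demo_events_total",
		Help: "Demo counter.",
	})
	c.Inc()

	// With OpenMetrics negotiated, recent client_golang versions can emit
	// a demo_events_created sample alongside demo_events_total; Prometheus
	// currently drops it on ingestion, which this project would change.
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{
		EnableOpenMetrics: true,
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```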
- Description: A really common use case we have been seeing is enabling an IoT scenario with MQTT-based devices, using an Apache Kafka cluster as the events and storage platform running on Kubernetes via Strimzi. In order to do that, there is a need to map the MQTT protocol to the custom Apache Kafka one and bridge from one to the other. This project idea is about designing such a mapping and developing a pure Netty-based MQTT server component (not a full MQTT broker) able to accept MQTT client connections and handle the corresponding communication based on the MQTT 3.1.1 specification, and finally developing the Kafka producer part to get messages from MQTT clients and send them to an Apache Kafka cluster.
- Expected outcome: POC source code for an MQTT to Apache Kafka bridge
- Recommended Skills: Java, MQTT protocol (not mandatory but to be learned)
- Mentor(s): Paolo Patierno @ppatierno (ppatiern AT redhat DOT com), Kyle Liberti @kyguy (kliberti AT redhat DOT com)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): strimzi/strimzi-kafka-operator#8030
- Description: TUF’s specification was written with artifacts stored in traditional file systems in mind. As such, it specifies explicitly how artifacts must be hashed in order to guarantee their integrity. Since TUF was first created, however, content addressable systems for storage and data transmission have become more prominent. Some examples of these systems are Git, the InterPlanetary File System (IPFS), and OSTree. All of these can present a file-like interface for artifacts they store, and have built-in mechanisms for ensuring the integrity of artifacts. When TUF is used with these systems, it is redundant for it to also ensure artifact integrity. Instead, TUF can delegate these guarantees to the underlying content addressable system, and focus on higher level security properties the specification provides. As part of this GSoC project, the participant will add support to an existing TUF implementation to delegate artifact integrity verification to the underlying content addressable system, specifically IPFS.
- Expected Outcome: Prototype of support in TUF for artifacts stored in IPFS
- Recommended Skills: Python
- Mentor(s): Aditya Sirish A Yelgundhalli (@adityasaky, aditya DOT sirish AT nyu DOT edu), John Ericson (@Ericson2314, gsoc AT johnericson DOT me), Marina Moore (@mnm678, mnm678 AT gmail DOT com)
- Expected Project Size: 350 Hours
- Difficulty: Hard
- Upstream Issue (URL): theupdateframework/python-tuf#2325
- Description: WasmEdge is a WebAssembly runtime with both interpreter and ahead-of-time modes. However, WasmEdge only supports the binary format for the input WebAssembly file. To support the text-format WebAssembly loader feature in the future, the implementation of serializing a WebAssembly module is necessary. In this mentorship, we hope the mentee will complete the serialization functions already in the `dev/serialize` branch of the `WasmEdge` repo. An encoding sketch follows this entry.
- Expected outcome: Complete the serialization functions of WebAssembly modules, such as the element segment and data segment encoding. Complete the WebAssembly instructions encoding. Generate the unit test data and pass the unit tests. >80% code coverage for serialization.
- Recommended Skills: C/C++, WebAssembly
- Mentors: Yi-Ying He @q82419 (yiying at secondstate dot io), Hung-Ying Tai @hydai (hydai at secondstate dot io)
- Expected project size: 350h
- Difficulty: Medium
- Upstream Issue (URL): WasmEdge/WasmEdge#2262
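
A toy encoder for one piece of the task, serializing an active data segment (data section id 11 in the WebAssembly binary format); Go is used only to make the byte layout concrete, since the real implementation is C++.

```go
// Sketch: encode a WebAssembly data section with one active data segment.
package main

import "fmt"

// uleb128 encodes an unsigned integer as LEB128.
func uleb128(v uint32) []byte {
	var out []byte
	for {
		b := byte(v & 0x7f)
		v >>= 7
		if v != 0 {
			b |= 0x80
		}
		out = append(out, b)
		if v == 0 {
			return out
		}
	}
}

// encodeActiveDataSegment emits: flags(0), offset expr `i32.const off; end`,
// then the byte vector. (i32.const uses signed LEB128; this sketch only
// handles small non-negative offsets where the encodings coincide.)
func encodeActiveDataSegment(off uint32, data []byte) []byte {
	seg := []byte{0x00, 0x41}          // mode 0 (active, memory 0), i32.const
	seg = append(seg, uleb128(off)...) // offset (small values only, see above)
	seg = append(seg, 0x0B)            // end of offset expression
	seg = append(seg, uleb128(uint32(len(data)))...)
	return append(seg, data...)
}

func main() {
	seg := encodeActiveDataSegment(8, []byte("hi"))
	body := append(uleb128(1), seg...) // section body: segment count, segments
	section := append([]byte{0x0B}, append(uleb128(uint32(len(body))), body...)...)
	fmt.Printf("% x\n", section) // data section id 11, size, body
}
```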
- Description: zlib is required for compiling and running many existing C / C++ / Rust apps in Wasm. Most noticeably, it is needed by the Python port to Wasm. The VMware Wasm Labs team is using a zlib port from Singlestore in their Python Wasm runtime. In WasmEdge, we could support the zlib host functions through our plugin system. This way, any existing zlib app can be compiled to Wasm and run inside WasmEdge.
- Expected outcome: Create a new WasmEdge plugin that exports all public functions in `zlib`. Implement an SDK (in C/Rust) that uses the C ABI to generate corresponding headers for the above plugin. Generate the unit tests and pass the unit tests. >80% code coverage for verification.
- Recommended Skills: C/C++, Rust, WebAssembly
- Mentors: Yi-Ying He @q82419 (yiying at secondstate dot io), Hung-Ying Tai @hydai (hydai at secondstate dot io)
- Expected project size: 350h
- Difficulty: Hard
- Upstream Issue (URL): WasmEdge/WasmEdge#2244