feat(formatting): 🚀 pre-commit, pre-commit config, import and code fixes #111

Closed · wants to merge 6 commits
20 changes: 14 additions & 6 deletions .github/workflows/build.yml
@@ -15,30 +15,38 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']
python-version: ['3.8', '3.9', '3.10', '3.11']

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies

- name: Set up a virtual environment for Python ${{ matrix.python-version }}
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
pip install -e .
python -m pip install --upgrade pip
python -m pip install --upgrade virtualenv
virtualenv venv
source venv/bin/activate
python -m pip install --upgrade poetry
poetry install

- name: Lint with flake8
run: |
source venv/bin/activate
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test
run: |
source venv/bin/activate
kfk
cd tests && python -m pytest
- name: Build
run: |
python setup.py sdist bdist_wheel
source venv/bin/activate
python -m build
twine check --strict dist/*
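The updated workflow can be exercised locally before pushing. A rough equivalent of its lint, test, and build steps, assuming a checkout of this project with Poetry-managed dependencies as the workflow uses (tool names are taken from the workflow itself, not verified beyond it):

```shell
# Rough local equivalent of the CI steps above; assumes network access
# and a project checkout in the current directory.
python -m pip install --upgrade pip virtualenv
virtualenv venv
source venv/bin/activate
python -m pip install --upgrade poetry
poetry install

# Lint: fail hard on syntax errors/undefined names, then report style as warnings.
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

# Test, then build and check the distributions.
(cd tests && python -m pytest)
python -m build
twine check --strict dist/*
```

Running the same commands locally that CI runs keeps the feedback loop short and makes workflow failures reproducible.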
2 changes: 0 additions & 2 deletions .gitignore
@@ -136,5 +136,3 @@ dmypy.json

#macOs
.DS_Store


74 changes: 74 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,74 @@

ci:
autofix_prs: true
autoupdate_schedule: weekly
autofix_commit_msg: "fix(pre_commit): 🎨 auto format pre-commit hooks"
autoupdate_commit_msg: "chore(pre_commit): ⬆ pre_commit autoupdate"

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-yaml
- id: check-docstring-first
- id: check-executables-have-shebangs
- id: check-toml
- id: check-case-conflict
- id: check-added-large-files
args: ['--maxkb=2048']
exclude: ^logo/
- id: detect-private-key
- id: forbid-new-submodules
- id: pretty-format-json
args: ['--autofix', '--no-sort-keys', '--indent=4']
- id: mixed-line-ending


- repo: https://github.com/PyCQA/flake8
rev: 6.1.0
hooks:
- id: flake8
name: Flake8 Checks
entry: flake8
additional_dependencies: [Flake8-pyproject]


- repo: https://github.com/PyCQA/bandit
rev: '1.7.5'
hooks:
- id: bandit
args: ["-c", "pyproject.toml"]
additional_dependencies: ["bandit[toml]"]

- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
name: isort (python)
- id: isort
name: isort (cython)
types: [cython]
- id: isort
name: isort (pyi)
types: [pyi]


- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black

- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.290
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
2 changes: 1 addition & 1 deletion Dockerfile
@@ -3,4 +3,4 @@ USER root
RUN adduser -D kfkuser
RUN pip install strimzi-kafka-cli==0.1.0a68
USER kfkuser
RUN kfk --version
2 changes: 1 addition & 1 deletion examples/.gitignore
@@ -1,3 +1,3 @@
*.jks
*.p12
*.bck
2 changes: 1 addition & 1 deletion examples/2_tls_authentication/client.properties
@@ -2,4 +2,4 @@ security.protocol=SSL
ssl.truststore.location=./truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=./user.p12
ssl.keystore.password=123456
2 changes: 1 addition & 1 deletion examples/2_tls_authentication/get_keys.sh
@@ -9,4 +9,4 @@ oc extract secret/my-cluster-cluster-ca-cert -n kafka --keys=ca.crt --to=- > ca.
echo "yes" | keytool -import -trustcacerts -file ca.crt -keystore truststore.jks -storepass 123456
RANDFILE=/tmp/.rnd openssl pkcs12 -export -in user.crt -inkey user.key -name my-user -password pass:123456 -out user.p12

rm user.crt user.key ca.crt
8 changes: 4 additions & 4 deletions examples/2_tls_authentication/readme.md
@@ -8,7 +8,7 @@ First let's list the clusters and see our cluster list.
```shell
kfk clusters --list
```

---
**IMPORTANT**

Expand Down Expand Up @@ -210,7 +210,7 @@ user.password: 12 bytes
In order create the truststore and keystore files just run the get_keys.sh file in the [example directory](https://github.com/systemcraftsman/strimzi-kafka-cli/blob/master/examples/2_tls_authentication/get_keys.sh):

```shell
chmod a+x ./get_keys.sh;./get_keys.sh
```

This will generate two files:
@@ -220,7 +220,7 @@ This will generate two files:

TLS authentication is performed with a bidirectional TLS handshake. To support this, apart from a truststore that has the public key imported, a keystore file that holds both the public and private keys has to be created and defined in the client configuration file.

So let's create our client configuration file.

Our client configuration should have a few definitions like:

@@ -254,7 +254,7 @@ Be careful to run producer and consumer commands from the example's directory. Other
```shell
kfk console-producer --topic my-topic -n kafka -c my-cluster --producer.config client.properties
```
The console producer seems to be working just fine since we can produce messages.

```
>message1
18 changes: 8 additions & 10 deletions examples/3_simple_acl_authorization/readme.md
@@ -3,7 +3,7 @@
In the previous example we implemented TLS authentication on Strimzi Kafka cluster with Strimzi Kafka CLI. In this example, we will be continuing with enabling the ACL authorization, so that we will be able to restrict access to our topics and only allow the users or groups we want to.


Let's first see our cluster list.

```shell
kfk clusters --list
@@ -16,7 +16,7 @@ kafka my-cluster 3 3
---
**IMPORTANT**

You should have a cluster called `my-cluster` on the namespace `kafka` we created before. If you don't have the cluster and haven't yet done the authentication part, please go back to the previous example and complete it first, since authorization requires authentication to be set up.

Also, please copy the `truststore.jks` and `user.p12` files, or recreate them as explained in the previous example, and put them in the example folder, which we ignore in git.

@@ -47,7 +47,7 @@ my-user tls

As you can see we have the `my-user` user that we created and authenticated in the previous example.

Now let's configure our cluster to enable ACL authorization. We have to alter our cluster for this:

```shell
kfk clusters --alter --cluster my-cluster -n kafka
@@ -89,7 +89,7 @@ Processed a total of 0 messages

As you might also observe, both the producer and the consumer returned a `TopicAuthorizationException` saying `Not authorized to access topics: [my-topic]`. So let's define authorization access to this topic for the user `my-user`.

In order to enable the user's authorization, we have to both set the user's authorization type to `simple`, so that it uses Apache Kafka's `SimpleAclAuthorizer`, and add the ACL definitions for the relevant topic (in this case `my-topic`). To do this, we need to alter the user with the following command options:

```shell
kfk users --alter --user my-user --authorization-type simple --add-acl --resource-type topic --resource-name my-topic -n kafka -c my-cluster
@@ -122,7 +122,7 @@ So in this case we used the defaults of `type:allow`, `host:*` and `operation:Al
kfk users --alter --user my-user --authorization-type simple --add-acl --resource-type topic --resource-name my-topic --type allow --host * --operation All -n kafka -c my-cluster
```
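For reference, such an alter is reflected in the user's `KafkaUser` custom resource roughly as follows. This is a sketch based on Strimzi's `KafkaUser` CRD field names, not output taken from this PR:

```yaml
# Hypothetical KafkaUser spec after the alter above (Strimzi KafkaUser CRD shape).
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: All
        host: "*"
```

Describing the user in YAML, as done below, should print an equivalent `acls` section.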

In order to see the ACL that is defined for allowing all operations on `my-topic` for the user `my-user`, let's describe it, this time in YAML format:

```shell
kfk users --describe --user my-user -n kafka -c my-cluster -o yaml
@@ -177,7 +177,7 @@ org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to ac
Processed a total of 0 messages
```

Whoops! It did not work like the producer did. But why? Because the consumer group that was randomly generated for us (we did not define it anywhere) doesn't have at least `read` permission on the `my-topic` topic.

---
**IMPORTANT**
@@ -188,7 +188,7 @@ In Apache Kafka, if you want to consume messages you have to do it via a consume

Ok then. Now let's add the ACL for a group in order to give `read` permission on the `my-topic` topic. Let's call this group `my-group`, which we will also use as the group id in our consumer client configuration. This time let's use the `kfk acls` command, which works like the `kfk users --alter --add-acl` command. In order to give the most familiar experience to Strimzi CLI users, the `kfk acls` command works mostly the same as the traditional `bin/kafka-acls.sh` command.

With the following command, we give the `my-group` group the `read` right for consuming the messages.

```shell
kfk acls --add --allow-principal User:my-user --group my-group --operation Read -n kafka -c my-cluster
@@ -260,7 +260,7 @@ ssl.keystore.password=123456
group.id=my-group
```

Running the consumer again with the updated client configuration (this time consuming from the beginning), let's see the previously produced messages:

```shell
kfk console-consumer --topic my-topic -n kafka -c my-cluster --consumer.config client.properties --from-beginning
@@ -275,5 +275,3 @@ message3
Voilà!

We are able to configure the Strimzi cluster for ACL authorization, define ACLs easily with different methods and use the client configurations successfully with Strimzi Kafka CLI.