Previously, when a Kafka client or broker connected to the authorization server during authentication or token validation, no connect timeout and no read timeout were applied. As a result, if a reverse proxy was in front of the authorization server, or a network component glitch prevented normal connectivity, the authentication request could stall for a long time. To address this, the default connect timeout and read timeout are now both set to 60 seconds, and they are configurable via `oauth.connect.timeout.seconds` and `oauth.read.timeout.seconds`.
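For example, to fail faster than the 60 second default, you could set (the values here are illustrative):

oauth.connect.timeout.seconds=10
oauth.read.timeout.seconds=30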
Added a mechanism on the broker where a JsonPath query can be configured to extract a set of groups from a JWT token during authentication. A custom authorizer can then retrieve this information through the `OAuthKafkaPrincipal` object available during the `authorize()` call.
When writing a custom authorizer you may need access to the already parsed JWT token, or to the map of claims returned by the introspection endpoint. A `getJSON()` method has been added to `BearerTokenWithPayload` for this purpose.
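Below is a minimal sketch of how a custom authorizer might use this. The class package, the `getGroups()` and `getJwt()` accessors, and the `kafka-admins` group name are assumptions for illustration; the remaining `Authorizer` interface methods are omitted:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.kafka.server.authorizer.Action;
import org.apache.kafka.server.authorizer.AuthorizableRequestContext;
import org.apache.kafka.server.authorizer.AuthorizationResult;

import io.strimzi.kafka.oauth.server.OAuthKafkaPrincipal;

public class GroupsBasedAuthorizer {

    // Fragment of an authorize() implementation - the other Authorizer methods are omitted
    public List<AuthorizationResult> authorize(AuthorizableRequestContext context, List<Action> actions) {
        boolean allowed = false;
        if (context.principal() instanceof OAuthKafkaPrincipal) {
            OAuthKafkaPrincipal principal = (OAuthKafkaPrincipal) context.principal();

            // Groups extracted by the configured JsonPath query at authentication time
            // (accessor name assumed for illustration)
            Set<String> groups = principal.getGroups();

            // The parsed JWT token or introspection endpoint response as JSON,
            // shown here only to illustrate the getJSON() accessor
            Object json = principal.getJwt().getJSON();

            allowed = groups != null && groups.contains("kafka-admins");
        }
        AuthorizationResult result = allowed ? AuthorizationResult.ALLOWED : AuthorizationResult.DENIED;
        return actions.stream().map(a -> result).collect(Collectors.toList());
    }
}
```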
The authorizers have been migrated to the Authorizer API introduced in Kafka 2.4.0. As a result the authorizer no longer works with Kafka 2.3.x and earlier versions.
The logging to `.grant` and `.deny` logs now takes into account hints from Kafka about whether the authorization decision for a specific action should be logged or not.
Fixed a bug in parsing the `kafka-cluster` section of a Keycloak Authorization Services resource pattern. If grants were targeted using a pattern ending with `*`, as in `kafka-cluster:*,Topic:my_topic` or `kafka-cluster:prod_*,Topic:my_topic`, the parsing was invalid and matching of the grant rule would always fail (authorization was denied). Using just `Topic:my_topic` would correctly match any cluster, and `kafka-cluster:my-cluster,Topic:my_topic` would match only if `my-cluster` was set as the cluster name.
If multiple producers or consumers were used concurrently with the same credentials, there was a high likelihood of the principal presenting as `KafkaPrincipal` rather than `OAuthKafkaPrincipal` after successful authentication. As a result, a custom authorizer would not recognise and properly match such a session during the authorization check. Depending on the custom authorizer, this could result in authorization decisions being delegated to the ACL authorizer, or in permissions being denied.
When preparing an HTTPS connection to the authorization server, the reported error would say that the URL was malformed, while the actual cause was not logged.
The `oauth.check.access.token.type` option, enabled by default, triggers a token type check which verifies that the value of the `typ` claim in the JWT token is set to `Bearer`. If the `typ` claim is not present, the check now falls back to verifying that a `token_type` claim with the value `Bearer` is present in the access token.
PEM certificates can now be used directly, without being converted to Java Keystore or PKCS12 formats. To use PEM certificates, set the `oauth.ssl.truststore.type` option to `PEM`, and either specify the location of the PEM file in `oauth.ssl.truststore.location` or set the certificates directly in `oauth.ssl.truststore.certificates`.
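For example (the file path here is illustrative):

oauth.ssl.truststore.type=PEM
oauth.ssl.truststore.location=/var/run/secrets/ca.pem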
JWT token validation now uses a different third-party library. As a result, ECDSA support no longer requires the BouncyCastle library. Also, some JWT tokens that would previously fail validation can now be handled, widening support for different authorization servers.
Some authorization servers may require an `audience` option to be passed when authenticating to the token endpoint.

The configured `oauth.scope` option on the Kafka broker is now passed as `scope` when performing clientId + secret authentication on the broker. While the option has existed before, it was only used for inter-broker authentication, not for OAuth over PLAIN.
It is now possible to omit the `oauth.token.endpoint.uri` parameter when configuring OAuth over PLAIN on the listener. This results in `username` always being treated as an account id, and `password` always being treated as a raw access token without any prefix. In this mode the client can't authenticate using client credentials (clientId + secret) - even if the client sends them, the server will always interpret them as an account id and an access token, and authentication will fail due to an invalid token.
The initial implementation of OAuth over PLAIN was based on invalid assumptions about how threads are assigned to handle messages from different connections in Kafka. An internal assertion was often triggered during concurrent usage, causing authentication failures.
A breaking change in how `$accessToken` is used had to be introduced in 0.7.0.

Before, you could authenticate with an access token by setting `username` to `$accessToken` and setting the `password` to the access token string.

Now, you have to set `username` to the same value as the principal name that the broker will extract from the given access token (for example, the value of the claim configured by `oauth.username.claim`), and set the `password` to `$accessToken:` followed by the access token string.

So, compared to before, you now prefix the access token string with `$accessToken:` in the `password` parameter, and you have to be careful to set the `username` parameter to the principal name matching that in the access token.
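For illustration, a client's `sasl.jaas.config` for the PLAIN mechanism might change as follows, where `alice` stands for the principal name in the token and `eyJhbG...` for the access token string (both are placeholders):

Before:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$accessToken" password="eyJhbG...";

Now:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="$accessToken:eyJhbG...";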
SASL/PLAIN can now be used to perform authentication, using either a service account clientId and secret or a long-lived access token.
When configuring OAuth authentication you should configure the custom principal builder factory:

principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder

This is needed for OAuth over PLAIN to function correctly, and is also required by `KeycloakRBACAuthorizer`, so it is best to always configure it.
See [README.md] for instructions on how to set up the brokers and the clients.
An additional server-side configuration option was added to enable / disable audience checking during authentication:

`oauth.check.audience` (e.g. `"true"`)

See [README.md] for more information.
Another token validation mechanism was added, which allows using JSONPath filter queries to express additional conditions that the token has to match during authentication in order to pass validation. To enable it, set the following option to a valid JSONPath filter query:

`oauth.custom.claim.check` (e.g. `"'kafka-user' in @.roles"`)

See [README.md] for more information.
Redundant multiple instances of validators were being instantiated, resulting in service components that were supposed to be singletons being instantiated multiple times. This was fixed through internal refactoring.
Improvements have been made to logging for easier tracking of requests, and for better error reporting.
Many improvements have been made to address problems with access tokens expiring or becoming invalid.
To prevent active sessions from operating beyond the access token lifetime, Kafka 2.2 and later have re-authentication support, which has to be explicitly enabled.
Use the following `server.properties` configuration to enable re-authentication, and force clients to re-authenticate within one hour:
connections.max.reauth.ms=3600000
If the access token expires before that, the re-authentication will be enforced within the access token lifetime. Any non-authentication operation after token expiry will cause the connection to be terminated by the broker.
Re-authentication should be enabled if you want to prevent authenticated sessions from continuing beyond access token expiry.
Also, without re-authentication enabled, the `KeycloakRBACAuthorizer` will now deny further access as soon as the access token expires. It would in any case fail fairly quickly, as it relies on a valid access token when refreshing the list of grants for the current session.
In previous versions the re-authentication support was broken due to bug #60. That is now fixed.
Re-authentication forces active sessions whose access tokens have expired to immediately become invalid. However, if for some reason you don't want to, or can't, use re-authentication, and you don't use `KeycloakRBACAuthorizer`, but you still want to enforce session expiry - or you simply want better logging when token expiry occurs - the new authorizer denies all actions after the access token of the session has expired. The client will receive an `org.apache.kafka.common.errors.AuthorizationException`.
Usage of `OAuthSessionAuthorizer` is optional. It 'wraps' itself around another authorizer and delegates all calls to it after determining that the current session still contains a valid token. This authorizer should not be used together with `KeycloakRBACAuthorizer`, since the latter already performs the same checks.
Two configuration options have been added for use by this authorizer:

- `strimzi.authorizer.delegate.class.name`
  Specifies the delegate authorizer class name to be used.
- `strimzi.authorizer.grant.when.no.delegate`
  Enables this authorizer to work without the delegate.
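For illustration, a configuration with a delegate might look like this (the authorizer and delegate class names are assumptions - check [README.md] for the exact values):

authorizer.class.name=io.strimzi.kafka.oauth.server.OAuthSessionAuthorizer
strimzi.authorizer.delegate.class.name=kafka.security.auth.SimpleAclAuthorizer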
To support the newly added `OAuthSessionAuthorizer`, the `JwtKafkaPrincipalBuilder` had to be moved to the `oauth-server` module, which called for a different package. We took the opportunity to also give the class a better name. The old class still exists, but simply extends the new class.

To use the new class, your `server.properties` configuration should contain:
principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder
- Added session expiry enforcement to `KeycloakRBACAuthorizer`. If the access token used during authentication expires, all authorizations will automatically be denied. Note that if re-authentication is enabled and functioning properly, the access token should be refreshed on the server before it ever expires.
- The `KeycloakRBACAuthorizer` will now regularly refresh the list of grants for every active session, allowing any permissions changes made at the authorization server (Keycloak / RH-SSO) to take effect on the Kafka brokers. For example, if the access token expires in 30 minutes, and grants are refreshed every minute, the revocation of grants will be detected within one minute, and immediately enforced.
Additional `server.properties` configuration options have been introduced:

- `strimzi.authorization.grants.refresh.period.seconds`
  The time between two grants refresh job runs. The default value is 60 seconds. If this value is set to 0 or less, refreshing of grants is turned off.
- `strimzi.authorization.grants.refresh.pool.size`
  The number of threads that can fetch grants in parallel. The default value is 5.
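For example, to refresh grants every two minutes using up to 10 parallel fetches (values here are illustrative):

strimzi.authorization.grants.refresh.period.seconds=120
strimzi.authorization.grants.refresh.pool.size=10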
When using fast local signature validation with JWKS endpoint keys, if the signing keys are suddenly revoked at the authorization server, it takes a while for the Kafka broker to become aware of this change. Until then, the Kafka broker keeps successfully authenticating new connections with access tokens signed using the old keys, while rejecting newly issued access tokens that are properly signed with the new signing keys the broker doesn't yet know about. In order to shorten this mismatch period as much as possible, the broker will now trigger a JWKS keys refresh as soon as it detects a new signing key, and it will keep retrying on failure using a so-called exponential back-off: after an unsuccessful attempt it waits one second, then two, then four, eight, sixteen, thirty-two, and so on. It will not flood the server with requests, always pausing for a minimum time between two consecutive keys refresh attempts.
While a few `invalid token` errors may still occur, clients that are correctly coded to reinitialise the KafkaProducer / KafkaConsumer in order to force an access token refresh should quickly recover.
The following new configuration option has been introduced:

- `oauth.jwks.refresh.min.pause.seconds`
  The minimum pause between two consecutive reload attempts. This prevents flooding the authorization server. The default value is 1 second.
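As an illustration of the back-off pattern described above (this is not the library's actual internal code - `refreshJwksKeys()` is a hypothetical stand-in):

```java
public class JwksRefreshBackoffSketch {

    static int attempts = 0;

    // Hypothetical stand-in for the real JWKS keys refresh call;
    // here it simply succeeds on the fifth attempt
    static boolean refreshJwksKeys() {
        return ++attempts >= 5;
    }

    public static void main(String[] args) throws InterruptedException {
        final long minPauseMillis = 1000;  // oauth.jwks.refresh.min.pause.seconds
        long backoffMillis = 1000;         // 1s, then 2s, 4s, 8s, 16s, 32s, ...

        while (!refreshJwksKeys()) {
            // Always pause at least the configured minimum between two attempts
            Thread.sleep(Math.max(backoffMillis, minPauseMillis));
            backoffMillis = Math.min(backoffMillis * 2, 60_000); // cap the back-off
        }
    }
}
```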
In some cases the client would not receive an `org.apache.kafka.common.errors.AuthenticationException` when it should. Error messages were also improved to give a more precise reason for the failure.
Some claims are no longer required in the token or the Introspection Endpoint response (`iat`). Others can be configured to not be required:

- The `iss` claim is not required if `oauth.check.issuer` is set to `false`.
- The `sub` claim is no longer required if `oauth.username.claim` is configured, since it is then no longer used to extract the principal.
Additional options were added to improve interoperability with authorization servers.

The following options were added:

- `oauth.scope`
  Scope can now be specified for the Token endpoint on the Kafka clients, and on the Kafka broker for inter-broker communication.
- `oauth.check.issuer`
  Issuer check can now be disabled when configuring token validation on the Kafka broker - some authorization servers don't provide the `iss` claim.
- `oauth.fallback.username.claim`
  The principal can now be extracted from a JWT token or an Introspection endpoint response by using multiple claims. First `oauth.username.claim` is attempted (if configured). If the value is not present, the fallback claim is attempted. If neither `oauth.username.claim` nor `oauth.fallback.username.claim` is specified, or its value is not present, the `sub` claim is used.
- `oauth.fallback.username.prefix`
  If the principal is set by `oauth.fallback.username.claim`, then its value will be prefixed by the value of `oauth.fallback.username.prefix`, if specified.
- `oauth.userinfo.endpoint.uri`
  Sometimes the introspection endpoint doesn't provide any claim that could be used for the principal. In such a case the User Info Endpoint can be used, and the configuration of `oauth.username.claim`, `oauth.fallback.username.claim`, and `oauth.fallback.username.prefix` is taken into account.
- `oauth.valid.token.type`
  When using the Introspection Endpoint, some servers use custom values for `token_type`. If this configuration parameter is set, then the `token_type` attribute has to be present in the Introspection Token response, and has to have the specified value.
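For example, a broker validating tokens issued to both regular users and service accounts might be configured like this (the claim names and the prefix are illustrative - adjust them to your authorization server):

oauth.username.claim=preferred_username
oauth.fallback.username.claim=client_id
oauth.fallback.username.prefix=service-account-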
The job that refreshes the keys would be cancelled if fetching the keys failed due to a network error or an authorization server glitch.
If `token_type` was present, it was expected to be equal to `access_token`, which is not an OAuth 2.0 spec compliant value. The token type check is now disabled unless the newly introduced `oauth.valid.token.type` configuration option is set.
- Fixed an issue with `keycloak` and `hydra` containers not visible when starting services in separate shells. The instructions for running `keycloak` / `hydra` separately omitted the required `-f compose.yml` as a first compose file, resulting in a separate bridge network being used.
- Added Spring Security Authorization Server.
There is now some TRACE logging support, which should only ever be used in development / testing environments because it outputs secrets into the log. When integrating with your authorization server, enabling TRACE logging on the `io.strimzi.kafka.oauth` logger will output the authorization server responses, which can point you to how to correctly configure the `oauth.*` parameters to make the integration work.
The helper library used for JWT / JWKS handling was bumped to version 10.0.0.
The following configuration options have been deprecated:

- `oauth.tokens.not.jwt` is now called `oauth.access.token.is.jwt` and has a reverse meaning.
- `oauth.validation.skip.type.check` is now called `oauth.check.access.token.type` and has a reverse meaning.
See: Align configuration with Kafka Operator PR (#36).
The scope claim is no longer required in an access token. (#30) This improves compatibility with different authorization servers, since the attribute is not required by the OAuth 2.0 specification, nor is it used by the validation logic.
The `jackson-core` and `jackson-databind` libraries have been updated to the latest versions. (#33)
Instructions for preparing the environment, building and deploying the latest version of Strimzi Kafka OAuth library with Strimzi Kafka Operator have been added.
See: Hacking on OAuth and deploying with Strimzi Kafka Operator PR (#34)
Fixed remote debugging mode being enabled in the example `compose-authz.yml`. (#39)
It is now possible to use Keycloak Authorization Services to centrally manage access control to resources on Kafka Brokers. (#24) See the tutorial, which explains many concepts. For configuration details, also see the KeycloakRBACAuthorizer JavaDoc.
The `JWTSignatureValidator` now supports ECDSA signatures, but requires explicit enablement of the BouncyCastle security provider. (#25)
To enable BouncyCastle, set `oauth.crypto.provider.bouncycastle` to `true`.
Optionally, you may control the position at which the provider is installed by using `oauth.crypto.provider.bouncycastle.position` - by default it is installed at the end of the list of existing providers.
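For example (the position value is illustrative):

oauth.crypto.provider.bouncycastle=true
oauth.crypto.provider.bouncycastle.position=1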
A testsuite based on Arquillian Cube and using docker containers was added.
Added Ory Hydra authorization server to examples.
Support for token-based authentication that plugs into Kafka's SASL/OAUTHBEARER mechanism to provide:
- Different ways of access token retrieval for Kafka clients (clientId + secret, refresh token, or direct access token)
- Fast signature-checking token validation mechanism (using authorization server's JWKS endpoint)
- Introspection based token validation mechanism (using authorization server's introspection endpoint)
See the tutorial.