Potential Memory Leak in JwtRealm #101752
Labels: >bug, :Security/Security (Security issues without another label), Team:Security (Meta label for security team)

Comments
original-brownbear added the >bug and :Security/Security labels on Nov 2, 2023

Pinging @elastic/es-security (Team:Security)
jakelandis added a commit that referenced this issue on Nov 6, 2023
…he) (#101799)

This commit fixes a memory leak and ensures that the JWT authentication cache is actually used to short-circuit authentication. When JWT authentication succeeds we cache a hash(JWT) -> User entry; on subsequent authentication attempts this cache should let us skip some of the more expensive validation. The existing cache key is a `BytesArray` constructed from `jwtAuthenticationToken.getUserCredentialsHash()` (the hash value of the JWT). However, `jwtAuthenticationToken` comes from the security context, which is effectively a request-scoped object. When the security context goes out of scope, the value of `jwtAuthenticationToken.getUserCredentialsHash()` is zeroed out to help keep sensitive data off the heap. It is arguable whether zeroing that data is useful, especially for a hashed value, but it is in line with the internal contract/expectations.

Since the cache key is derived from `jwtAuthenticationToken.getUserCredentialsHash()`, when that object is zeroed out, so is the cache key: the key changes from a valid value to an empty byte array, leaving a junk cache entry. Subsequent authentication with the same JWT creates a new cache entry, which then follows the same pattern of getting zeroed out. The result is a useless cache containing nothing but zeroed-out keys, which negates any benefit of having a cache at all: full authentication is performed every time, and that can be expensive for JWT (especially since JWT requires role mappings, and role mappings are not cached). Fortunately the default cache size is 100k entries (by count), so the memory leak is technically capped, though the cap varies with the size of the cache values. The approximate cap is ~55MB: 3.125 MB (100k * 256 bits) for the sha256 cache keys plus ~50MB (100k * ~50 bytes) for the cache values; the cache can be larger if the values in it are larger.

The fix here is to ensure the cache key used is a copy of the value, taken before that value is zeroed out. Fixes: #101752
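The bug and the fix described in the commit message can be sketched with a plain `HashMap` and a small content-compared key class standing in for Elasticsearch's `BytesArray` (all names here are illustrative, not the actual Elasticsearch code): if the key wraps the live credential-hash buffer, zeroing that buffer breaks later lookups; copying the bytes first keeps the key stable.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for BytesArray: equality and hashCode by array content.
final class ByteKey {
    final byte[] bytes;
    ByteKey(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
        return o instanceof ByteKey && Arrays.equals(bytes, ((ByteKey) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
}

public class CacheKeyCopyDemo {
    // Buggy pattern: the cache key wraps the live credential-hash buffer,
    // which is zeroed when the request-scoped security context goes away.
    static boolean lookupHitWithSharedBuffer() {
        Map<ByteKey, String> cache = new HashMap<>();
        byte[] credentialsHash = {1, 2, 3, 4};
        cache.put(new ByteKey(credentialsHash), "user");
        Arrays.fill(credentialsHash, (byte) 0);   // request scope ends
        // A fresh key built from the same JWT hash no longer matches.
        return cache.containsKey(new ByteKey(new byte[]{1, 2, 3, 4}));
    }

    // Fixed pattern: copy the hash bytes before they can be zeroed out.
    static boolean lookupHitWithCopiedBuffer() {
        Map<ByteKey, String> cache = new HashMap<>();
        byte[] credentialsHash = {1, 2, 3, 4};
        cache.put(new ByteKey(Arrays.copyOf(credentialsHash, credentialsHash.length)), "user");
        Arrays.fill(credentialsHash, (byte) 0);   // copy is unaffected
        return cache.containsKey(new ByteKey(new byte[]{1, 2, 3, 4}));
    }

    public static void main(String[] args) {
        System.out.println("shared buffer hit: " + lookupHitWithSharedBuffer()); // false
        System.out.println("copied buffer hit: " + lookupHitWithCopiedBuffer()); // true
    }
}
```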
jakelandis added a commit to jakelandis/elasticsearch that referenced this issue on Nov 6, 2023
…he) (elastic#101799)

Same commit message as #101799 above. Fixes: elastic#101752 (cherry picked from commit cef2b80)

Conflicts: x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/jwt/JwtRealm.java
elasticsearchmachine pushed a commit that referenced this issue on Nov 6, 2023
…he) (#101799) (#101841)

Same commit message as #101799 above. Fixes: #101752 (cherry picked from commit cef2b80)

Conflicts: x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/jwt/JwtRealm.java
The logic for the cache in `JwtRealm` produces a somewhat broken `Cache` instance. We create a `BytesArray` from the user credential hash and use it as a cache key in a `Cache` instance down the line. But then we also clear the bytes in that `BytesArray` to 0 in `org.elasticsearch.xpack.security.authc.jwt.JwtAuthenticationToken#clearCredentials`. This leads to the hash map in the `Cache` getting filled with loads of `BytesArray` instances that all have the same 0-filled content but that were not equal to each other when first inserted into the map, causing the map to put them all in the same bucket.

Ref path to the broken map: