Environment Details
Helidon Version: 2.2.0
Helidon MO
JDK version: 11
OS: Linux x64
Problem Description
Helidon's default AuditProvider is used when no custom audit provider is configured. Based on what we are observing, this seems to be the common case.
The implementation of DefaultAuditProvider.audit() doesn't check the logger level before invoking the logEvent() methods. For example, events such as SUCCESS and INFO are logged at FINEST level, and most production environments do not enable FINEST logging.
The code ends up spending significant time formatting complex and large messages which ultimately do not get logged. In reviewing the JFR recording from one of the services, we see that over 50% of the time spent on the ForkJoin common pool threads is in the stack below:
java.util.Arrays.copyOf(byte[], int)
java.lang.AbstractStringBuilder.ensureCapacityInternal(int)
java.lang.AbstractStringBuilder.append(String)
java.lang.StringBuilder.append(String)
javax.security.auth.Subject.toString()
io.helidon.security.Subject.toString()
java.util.Formatter$FormatSpecifier.printString(Object, Locale)
java.util.Formatter$FormatSpecifier.print(Object, Locale)
java.util.Formatter.format(Locale, String, Object[])
java.util.Formatter.format(String, Object[])
java.lang.String.format(String, Object[])
java.util.Optional.toString()
java.util.Formatter$FormatSpecifier.printString(Object, Locale)
java.util.Formatter$FormatSpecifier.print(Object, Locale)
java.util.Formatter.format(Locale, String, Object[])
java.util.Formatter.format(String, Object[])
java.lang.String.format(String, Object[])
io.helidon.security.DefaultAuditProvider.formatMessage(AuditEvent)
io.helidon.security.DefaultAuditProvider.logEvent(String, AuditProvider$TracedAuditEvent, Level)
io.helidon.security.DefaultAuditProvider.audit(AuditProvider$TracedAuditEvent)
io.helidon.security.DefaultAuditProvider$$Lambda$1291.1693004675.accept(Object)
io.helidon.security.Security.audit(String, AuditEvent)
io.helidon.security.SecurityContextImpl.audit(AuditEvent)
io.helidon.security.AuthorizationClientImpl.lambda$submit$0(AuthorizationProvider, AuthorizationResponse)
io.helidon.security.AuthorizationClientImpl$$Lambda$2382.1521780927.apply(Object)
java.util.concurrent.CompletableFuture$UniApply.tryFire(int)
java.util.concurrent.CompletableFuture.postComplete()
java.util.concurrent.CompletableFuture$AsyncSupply.run()
java.util.concurrent.CompletableFuture$AsyncSupply.exec()
java.util.concurrent.ForkJoinTask.doExec()
java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinTask, ForkJoinPool$WorkQueue, int)
java.util.concurrent.ForkJoinPool.scan(ForkJoinPool$WorkQueue, int)
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool$WorkQueue)
java.util.concurrent.ForkJoinWorkerThread.run()
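The cost shown in the stack above can be reproduced with plain java.util.logging: a message passed eagerly to Logger.log() is formatted regardless of the logger's level, while the Supplier-based overload defers formatting until after the level check. This is a minimal standalone demo, not Helidon code; expensiveFormat() merely stands in for DefaultAuditProvider.formatMessage().

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {
    static final AtomicInteger FORMAT_CALLS = new AtomicInteger();

    // Stand-in for DefaultAuditProvider.formatMessage(): expensive String.format work.
    static String expensiveFormat() {
        FORMAT_CALLS.incrementAndGet();
        return String.format("audit event: %s", "SUCCESS");
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setLevel(Level.INFO); // FINEST disabled, as in most production setups

        // Eager: the argument is evaluated at the call site, so the message
        // is formatted even though FINEST is disabled and nothing is logged.
        logger.log(Level.FINEST, expensiveFormat());

        // Lazy: the Supplier is only invoked if FINEST is enabled,
        // so no formatting happens here.
        logger.log(Level.FINEST, () -> expensiveFormat());

        System.out.println("format calls: " + FORMAT_CALLS.get());
    }
}
```

Only the eager call pays the formatting cost; the lazy call skips it entirely.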
The ask here is to add a check based on the logger's level, so that the formatting work associated with logEvent is skipped entirely when the event is not going to be logged at the current log level.
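A minimal sketch of the requested guard is below. The class, method, and field names are illustrative stand-ins, not the actual Helidon source; the point is simply that an isLoggable() check short-circuits before any message formatting occurs.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedAuditSketch {
    private static final Logger AUDIT_LOGGER = Logger.getLogger("audit.sketch");
    static int formatCalls = 0;

    // Hypothetical equivalent of DefaultAuditProvider.logEvent() with the guard added.
    static void logEvent(String message, Level level) {
        if (!AUDIT_LOGGER.isLoggable(level)) {
            return; // skip formatting entirely when the level is disabled
        }
        AUDIT_LOGGER.log(level, formatMessage(message));
    }

    // Stand-in for DefaultAuditProvider.formatMessage(AuditEvent), which
    // builds large strings via String.format and Subject.toString().
    static String formatMessage(String message) {
        formatCalls++;
        return String.format("audit event: %s", message);
    }

    public static void main(String[] args) {
        AUDIT_LOGGER.setLevel(Level.INFO);
        logEvent("SUCCESS", Level.FINEST);  // guard short-circuits, no formatting
        logEvent("FAILURE", Level.WARNING); // WARNING is enabled, so this formats
        System.out.println("format calls: " + formatCalls);
    }
}
```

With the guard in place, only events at an enabled level reach the formatting step, which is exactly the behavior this issue requests.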