
Sqlite logging is slow #1168

Open
suprjinx opened this issue Apr 29, 2024 · 2 comments · Fixed by #1219
suprjinx commented Apr 29, 2024

Image

After some optimizations, it's looking about on-par:

Image
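(The optimizations themselves aren't spelled out in this thread. For a sense of the kind of change that typically helps SQLite-backed logging, the sketch below batches metric rows into a single transaction instead of issuing one INSERT per point. It is a hypothetical illustration using GORM; the `Metric` model and its fields are invented for the example and are not FastTrackML's actual schema.)

```go
package sketch

import (
	"gorm.io/gorm"
)

// Metric is an invented model for this illustration only.
type Metric struct {
	RunID string
	Key   string
	Value float64
	Step  int64
}

// logMetricsBatched writes all points inside one transaction, chunked into
// multi-row INSERTs, so SQLite commits once per batch rather than per point.
func logMetricsBatched(db *gorm.DB, metrics []Metric) error {
	return db.Transaction(func(tx *gorm.DB) error {
		return tx.CreateInBatches(metrics, 100).Error
	})
}
```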

suprjinx commented May 21, 2024

See below: the http_req_duration average for the logging load test goes from 96.68ms to 51.14ms with this PR. However, we're still seeing elevated timings in the benchmarks app :-/

```
(fasttrackml-m1UwZguE) vscode ➜ /workspaces/fasttrackml/docs/example (main) $ k6 run k6_load.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

 execution: local
    script: k6_load.js
    output: -

 scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
          * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

 data_received..................: 16 kB  1.7 kB/s
 data_sent......................: 1.8 MB 199 kB/s
 http_req_blocked...............: avg=6.36µs  min=2.02µs  med=3.02µs  max=279.36µs p(90)=4.24µs   p(95)=5.44µs  
 http_req_connecting............: avg=1.47µs  min=0s      med=0s      max=134.36µs p(90)=0s       p(95)=0s      
 http_req_duration..............: avg=96.68ms min=1.62ms  med=95.63ms max=176.63ms p(90)=143.03ms p(95)=153.2ms 
   { expected_response:true }...: avg=96.68ms min=1.62ms  med=95.63ms max=176.63ms p(90)=143.03ms p(95)=153.2ms 
 http_req_failed................: 0.00%  ✓ 0         ✗ 91 
 http_req_receiving.............: avg=81.71µs min=44.02µs med=75.13µs max=294.43µs p(90)=105.96µs p(95)=132.32µs
 http_req_sending...............: avg=80.92µs min=14.61µs med=74.64µs max=411.6µs  p(90)=99.17µs  p(95)=121.8µs 
 http_req_tls_handshaking.......: avg=0s      min=0s      med=0s      max=0s       p(90)=0s       p(95)=0s      
 http_req_waiting...............: avg=96.52ms min=1.56ms  med=95.51ms max=176.5ms  p(90)=142.9ms  p(95)=153.03ms
 http_reqs......................: 91     10.227085/s
 iteration_duration.............: avg=8.89s   min=8.89s   med=8.89s   max=8.89s    p(90)=8.89s    p(95)=8.89s   
 iterations.....................: 1      0.112386/s
 vus............................: 1      min=1       max=1
 vus_max........................: 1      min=1       max=1

running (00m08.9s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m08.9s/10m0s 1/1 iters, 1 per VU
(fasttrackml-m1UwZguE) vscode ➜ /workspaces/fasttrackml/docs/example (main) $ k6 run k6_load.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

 execution: local
    script: k6_load.js
    output: -

 scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
          * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)


 data_received..................: 16 kB  3.3 kB/s
 data_sent......................: 1.8 MB 371 kB/s
 http_req_blocked...............: avg=6.58µs  min=1.9µs   med=2.9µs   max=254.41µs p(90)=4.25µs   p(95)=5.43µs  
 http_req_connecting............: avg=1.27µs  min=0s      med=0s      max=116.03µs p(90)=0s       p(95)=0s      
 http_req_duration..............: avg=51.14ms min=3.83ms  med=36.81ms max=117.3ms  p(90)=79.83ms  p(95)=90.5ms  
   { expected_response:true }...: avg=51.14ms min=3.83ms  med=36.81ms max=117.3ms  p(90)=79.83ms  p(95)=90.5ms  
 http_req_failed................: 0.00%  ✓ 0         ✗ 91 
 http_req_receiving.............: avg=84.25µs min=46.9µs  med=78.4µs  max=182.95µs p(90)=122.38µs p(95)=136.24µs
 http_req_sending...............: avg=94.86µs min=11.88µs med=74.04µs max=1.86ms   p(90)=103.79µs p(95)=126.26µs
 http_req_tls_handshaking.......: avg=0s      min=0s      med=0s      max=0s       p(90)=0s       p(95)=0s      
 http_req_waiting...............: avg=50.96ms min=3.75ms  med=36.66ms max=115.36ms p(90)=79.66ms  p(95)=90.35ms 
 http_reqs......................: 91     19.113314/s
 iteration_duration.............: avg=4.76s   min=4.76s   med=4.76s   max=4.76s    p(90)=4.76s    p(95)=4.76s   
 iterations.....................: 1      0.210036/s
 vus............................: 1      min=1       max=1
 vus_max........................: 1      min=1       max=1

running (00m04.8s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m04.8s/10m0s 1/1 iters, 1 per VU
```
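(As a sanity check, the averages are consistent with the overall run times: 91 requests × 96.68 ms ≈ 8.8 s before the change and 91 × 51.14 ms ≈ 4.65 s after, matching the reported iteration_duration of 8.89 s and 4.76 s.)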

@suprjinx reopened this Jun 5, 2024
@dsuhinin self-assigned this Jul 1, 2024
dsuhinin commented Jul 3, 2024

Ok folks, I did a lot of different experiments with the code, and unfortunately I have to say that what we have now is the final result. The current performance is entirely down to the number of database calls we make per logging request:

  • fetch the namespace and check it;
  • fetch the run, joined with the namespace;
  • process the context information and save it.

When I tried to disable almost all of this additional logic, I got the following results:

Image

I didn't disable the namespace middleware check, so the result could be even better. For now I don't think there is anything left to improve in our code or in how we work with the database; all the steps we have are necessary, and they are optimal.
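(As a rough picture of the per-request work described above, the sketch below strings the three steps together with GORM. The types, table and column names are invented for illustration and do not mirror FastTrackML's actual code; the point is simply that every logging request pays for several database round trips.)

```go
package sketch

import (
	"encoding/json"

	"gorm.io/gorm"
)

// Invented models for this illustration only.
type Namespace struct {
	ID   uint
	Code string
}

type Run struct {
	ID           string
	ExperimentID uint
}

type RunContext struct {
	ID    uint
	RunID string
	JSON  string
}

// logWithContext shows the three database round trips made on each request.
func logWithContext(db *gorm.DB, nsCode, runID string, ctx map[string]string) error {
	// 1. Fetch the namespace and check it (done by middleware on every request).
	var ns Namespace
	if err := db.Where("code = ?", nsCode).First(&ns).Error; err != nil {
		return err
	}

	// 2. Fetch the run, joining through its experiment to the namespace,
	//    to make sure the run actually belongs to that namespace.
	var run Run
	if err := db.
		Joins("JOIN experiments ON experiments.id = runs.experiment_id").
		Where("runs.id = ? AND experiments.namespace_id = ?", runID, ns.ID).
		First(&run).Error; err != nil {
		return err
	}

	// 3. Process the context information and save it.
	raw, err := json.Marshal(ctx)
	if err != nil {
		return err
	}
	return db.Create(&RunContext{RunID: run.ID, JSON: string(raw)}).Error
}
```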
