Setting server.basePath causes an optimize (transpilation) cycle #10724
Comments
The ES URL should include the default port, I guess. It still takes minutes to start. |
@rokka-n Can you attempt this setup without a docker container? We'd like to pin down whether this is an issue with the docker image or with the core Kibana build. @jarpy Have you seen anything like this with the kibana-docker images? |
I've started to forget how to run things without docker |
Setting server.basePath is triggering this:
And yes. It is, indeed, taking quite a while. Edit: Almost exactly two minutes on my laptop. |
Presumably, this would only be seen on the first "boot" in a traditional installation, so it wouldn't be a problem long term. However, since each container starts with a pristine filesystem, the optimization step is running on each launch. |
Building a custom image that is pre-optimized seems like the way forward, but the existing techniques are a little hacky. Refer: #6057 |
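For illustration, a minimal sketch of that kind of pre-optimized image (the paths, image tag, and --optimize flag are assumptions, not a verified recipe):

```dockerfile
FROM docker.elastic.co/kibana/kibana:5.2.2

# Bake the base path into the config; changing it later would
# trigger a fresh optimize run at startup.
RUN echo 'server.basePath: "/kibana"' >> /usr/share/kibana/config/kibana.yml

# Run the optimizer at image build time so the bundles land in an
# image layer. The --optimize flag (run the optimizer, then exit) is
# assumed here; on versions without it, start Kibana, wait for the
# "Optimization ... complete" log line, then stop it.
RUN /usr/share/kibana/bin/kibana --optimize
```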
Setting |
Easy to reproduce outside Docker.
It's just exacerbated by Docker because of the stateless filesystem. |
@jarpy Thanks for checking. I was traveling at the time, but I really wanted to get an answer to this :) That is definitely a bug in Kibana. I'm not sure how or why that started happening, but it's a bad one. |
No problem. I was guilty of speculating without empirical evidence. Thanks for reminding me to science! :) |
@spalger Any thoughts on why basePath would be kicking off an optimize run? |
I can confirm changing the basePath has triggered an optimise cycle since at least 4.4, and from memory much earlier. |
Not sure why this is labeled a bug. It was intentional and required if we want webpack to know the URL it should be loading output from. #10854 might remove this restriction, but so far it has not been proven that webpack behaves properly when it needs to load additional files from its build output from arbitrary URLs, e.g.:
Kibana currently utilizes Option 1, because with Option 2 the relative URLs would need to be different when the bundle is loaded from |
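To make the constraint concrete, here is a minimal webpack sketch (illustrative only, not Kibana's actual build config) of how the public URL of the build output gets compiled into the bundles:

```js
// webpack.config.js -- illustrative sketch, not Kibana's real config.
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/optimize/bundles',
    filename: '[name].bundle.js',
    // "Option 1": an absolute publicPath. Lazy-loaded chunks are
    // fetched from this URL at runtime, so it is baked into the
    // emitted JavaScript at compile time. If server.basePath changes,
    // this value changes, and the bundles must be rebuilt.
    publicPath: '/some-base-path/bundles/',
  },
};
```

With a relative publicPath ("Option 2"), the correct prefix would instead depend on which URL the entry bundle happened to be loaded from, which is the ambiguity described above.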
Just to start Kibana in a container with this "optimization", 3 GB of memory and 2 GHz of CPU must be allocated in the container scheduler, and then it barely uses 5-10% of that. |
A parameter change that needs a rebuild... Why is server.basePath needed at all? That rebuild makes it useless in the setting where it's needed the most. Maybe someone can make an nginx config (with |
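For reference, a hedged sketch of the usual nginx pairing for a base path under Kibana 5.x semantics, where the proxy strips the prefix that server.basePath adds to generated links (the port, prefix, and header choices are placeholders, not a verified recipe):

```nginx
# Assumes kibana.yml contains: server.basePath: "/kibana"
location /kibana/ {
    # The trailing slash on proxy_pass strips the /kibana prefix
    # before the request reaches Kibana.
    proxy_pass http://127.0.0.1:5601/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```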
Got the same issue in v5.4.1 when setting server.basePath. |
Using the 5.4.3 docker container and having the same issue. For some reason, running it with |
This is a pretty significant breaking change. Suddenly requiring Kibana to have 5x the previous memory available to start/optimize, without printing a reason, has been a huge headache while trying to switch out 2GB containers for the new version. Please at least print the reason for the optimization run. |
I understand the reasoning behind the optimization Kibana needs at the very beginning, but this becomes annoying during initial setup. The last optimization result is: Please note the 654.36 seconds I had to wait for the server to come alive just to check if my last setting was right. Is there a way to extract the bundle or the optimization result of Kibana out of the docker container into a volume on the host disk, so that this optimization runs only once? |
Yes, we ship builds similarly pre-optimized. If you need to change any settings or install plugins that will cause a bundling cycle, you can do so on a different/host machine, and after the optimize step zip it back up if necessary and send it over. |
I'd gladly do that, but I am now trying to figure out the best settings for my project, and Kibana's optimization slows that process down terribly. I don't have anything to bundle yet, just a Any hints on this kind of setup, i.e. extracting the folder with the optimization result out of the Kibana docker container? That way we'd be able to optimize once and then deploy the containers together with the associated pre-optimized folders. Thanks for the quick answer, by the way. |
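One hedged way to do what's being asked, assuming the official image keeps its build cache under /usr/share/kibana/optimize (the path, tag, and env var are assumptions to verify against your image):

```sh
# The first run populates the host directory with the optimize
# output; later containers with the same settings reuse it instead
# of re-optimizing. Mounting an empty host dir hides any bundles
# shipped in the image, so the first start still pays the cost once.
# The container user must be able to write to the host directory.
docker run -d \
  -e SERVER_BASEPATH=/kibana \
  -v /srv/kibana-optimize:/usr/share/kibana/optimize \
  docker.elastic.co/kibana/kibana:5.4.3
```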
@rbosneag You can build your own image using the workaround here: #6057 (comment). For example, my |
Just encountered this issue with 6.0.0 Beta 2; it would be great to have a solution before 6.0 is finalised. |
So far I had started Kibana with:
But after changing
To resolve the issue I reinstalled Kibana and put off experimenting with the base path until better times (( |
Edit by @epixa: The root cause of this issue is that setting server.basePath results in an optimizer run, which it should not. See this comment.

Kibana version:
5.2.2 in official docker image
Elasticsearch version:
5.2.2
Server OS version:
linux
Description of the problem including expected versus actual behavior:
When SERVER_BASEPATH is set in the environment variables, the container takes 10x longer to start and uses much more CPU (which is restricted via cgroup). Removing SERVER_BASEPATH makes the container start almost instantly.

Steps to reproduce:
Run Kibana in Nomad.
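A docker-only equivalent of the reproduction (image tag and env var as used by the official 5.2.2 image; no Nomad required) might look like this:

```sh
# Starts almost instantly:
docker run --rm docker.elastic.co/kibana/kibana:5.2.2

# Takes minutes and pegs the CPU while the optimizer runs:
docker run --rm -e SERVER_BASEPATH=/kibana \
  docker.elastic.co/kibana/kibana:5.2.2
```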
Provide logs and/or server output (if relevant):
This request is broken (502) in the Chrome debugger.
The same 502 appears in the Kibana logs: