Smoother installation (good defaults) #14614

Closed
4 of 6 tasks
l0rd opened this issue Sep 20, 2019 · 15 comments
Labels
- area/install: Issues related to installation, including offline/air gap and initial setup
- kind/epic: A long-lived, PM-driven feature request. Must include a checklist of items that must be completed.
- severity/P1: Has a major impact to usage or development of the system.

Comments

l0rd (Contributor) commented Sep 20, 2019

Is your enhancement related to a problem?

We have observed that running Che in a real-world, multi-user scenario requires some changes to the default configuration.

To make things easier for most of the users who install Che (Kubernetes cluster admins), we should consider changing the default configuration to reflect the most common settings.

Describe the solution you'd like

These should be the defaults in che.properties, and the installers (Helm and the operator) should not override them.
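A sketch of what the proposed defaults could look like in che.properties, using only the property names and values quoted in this thread (comment wording is illustrative, not from the actual file):

```properties
# Limit the number of running workspaces per user to 1
CHE_LIMITS_USER_WORKSPACES_RUN_COUNT=1
# Idle inactive workspaces after 30 minutes (1800000 ms) of inactivity
CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT=1800000
# Use a cached thread pool for workspace operations
CHE_WORKSPACE_POOL_TYPE=cached
```

The memory/CPU request and limit values discussed below would be set at the installer (Helm/operator) level rather than in this file.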

@l0rd l0rd added kind/enhancement A feature request - must adhere to the feature request template. team/platform severity/P1 Has a major impact to usage or development of the system. labels Sep 20, 2019
@gorkem gorkem modified the milestones: Backlog - Platform, 7.4.0 Sep 20, 2019
skabashnyuk (Contributor) commented

@l0rd can you explain the target goal of this change? I worry that "hosted" Che and Che running locally may require different configuration parameters, because they have different performance requirements.

ibuziuk (Member) commented Sep 23, 2019

@skabashnyuk I do not think this is related to "hosted" Che. These are improvements to the default values for any multi-user installation, so that setup is minimal and the desired config is applied out of the box.

l0rd (Contributor, Author) commented Sep 23, 2019

That's not specific to hosted Che. We continuously get feedback from real-world usage of Che (che.osio is an example, but so are @kameshsampath's workshops, @gorkem's, and others), and I have compiled a list of default configs that IMHO should be updated. @skabashnyuk, that's a proposal; if there is a concern about any of those configurations, let's discuss it.

skabashnyuk (Contributor) commented

@l0rd my concern is that some of these values can simply kill, or make almost impossible, some Che usage scenarios. For example, "Che server memory request/limit should match osio (512Mi/4Gi)" is too much for local Che. It's already hard to run Che locally on a machine with 16G of RAM; with that change it becomes even harder.

Let's review the values one by one.

> Limit the number of running workspaces per user to 1 (CHE_LIMITS_USER_WORKSPACES_RUN_COUNT: "1")

I think this is OK for any scenario by default.

> Enable TLS by default

Are we talking about Helm and the operator?
In some cases this will make installation a bit harder; is that an acceptable compromise?

> Idle inactive workspaces after 30 minutes of inactivity (CHE_LIMITS_WORKSPACE_IDLE_TIMEOUT: "1800000")

I think this is OK for any scenario by default.

> Che server memory request/limit should match osio (512Mi/4Gi)

The value of 4Gi was set in the Che 6 period, when we had issues with JSON-RPC.
From my latest observations, 1Gi should be enough for the hosted Che load.

> Set Che server cpu request/limit as well (500m/1)

It's not clear why we need that.

> Pool type should be cached (CHE_WORKSPACE_POOL_TYPE: "cached")

Yes and no. No, because it doesn't make any sense for local use of Che. Yes, because it's an easy fix for hosted Che; however, it would only give a boost on the first X threads, and all the remaining ones would just waste resources.
These two tasks, #14601 and #14600, can give us a better understanding of what we need. Ideally, we should define what target level of concurrency we want to achieve.
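The cached-vs-fixed trade-off discussed above can be sketched with the standard `java.util.concurrent` factories. This is an illustration of the two pool behaviors, not Che's actual wiring of `CHE_WORKSPACE_POOL_TYPE`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Cached pool: creates threads on demand, reuses idle ones,
        // and has no upper bound on thread count under sustained load.
        ExecutorService cached = Executors.newCachedThreadPool();
        // Fixed pool: concurrency is capped; extra tasks wait in the queue.
        ExecutorService fixed = Executors.newFixedThreadPool(2);

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 8; i++) {
            cached.submit(() -> { completed.incrementAndGet(); });
            fixed.submit(() -> { completed.incrementAndGet(); });
        }
        cached.shutdown();
        fixed.shutdown();
        cached.awaitTermination(5, TimeUnit.SECONDS);
        fixed.awaitTermination(5, TimeUnit.SECONDS);
        // All 16 tasks finish either way; the difference is how many
        // threads were created to do it, which is the "wasting of
        // resources" concern raised above for the cached pool.
        System.out.println("completed=" + completed.get());
    }
}
```

With short bursts the cached pool is faster to absorb load, but under steady concurrency every burst can leave extra idle threads behind, which is why a measured target concurrency level matters before picking a default.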

ibuziuk (Member) commented Sep 23, 2019

@rhopp could you also prioritize redhat-developer/rh-che#1337 as part of this issue? The JSON-RPC setting must also be unified.

l0rd (Contributor, Author) commented Sep 24, 2019

@skabashnyuk I will let you and @ibuziuk decide on the memory settings. Your point of view makes perfect sense; I would just like the che.osio memory settings to match the defaults.

For TLS, that's the point: we should make it so easy to configure (with automation) that there is no reason left to disable TLS. That may be a separate issue, since it's not just a matter of changing a config.

l0rd (Contributor, Author) commented Oct 2, 2019

@skabashnyuk I have reduced the number of default configs to change in this issue and created a distinct issue for TLS: #14742.

bmicklea commented Oct 3, 2019

When we say "local" Che, are we thinking of someone running it on their laptop? If so, that's not a scenario we ever discuss with prospects and customers; we always suggest running Che on a shared dev cluster. I don't think we should be optimizing defaults for a laptop, although documenting what we'd suggest when running on a laptop would be fine.

Similarly, we do often talk to people about the value of being able to run a couple of workspaces at the same time, so seeing the default number of running workspaces set to 1 might be an issue. I'm less worried about that one, though, since it's fairly obvious how to change it.

l0rd (Contributor, Author) commented Oct 3, 2019

@bmicklea the goal of this issue is to make the default config optimized for the cloud, not for the laptop.

For the number of concurrent workspaces, I think we should be conservative. Given the amount of resources needed to start a workspace, when a user tries to start a second one without stopping the first, it usually fails. I am talking about real-world OpenShift clusters where every user has limited resources. We are still not able to dynamically figure out whether a user has enough memory to start a new workspace, so the failure happens late in the workspace bootstrap process, and the error message is an OpenShift event that may be difficult to interpret. To be safe, I would recommend limiting the number of running workspaces to 1. And of course that's only the default configuration, so it can be changed.

bmicklea commented Oct 3, 2019

Makes sense. From the thread it was unclear if we were talking about a laptop. Thanks for clarifying.

@slemeur slemeur added kind/epic A long-lived, PM-driven feature request. Must include a checklist of items that must be completed. and removed kind/enhancement A feature request - must adhere to the feature request template. labels Oct 14, 2019
skabashnyuk (Contributor) commented

@l0rd can we mark this issue as resolved, or do you want to wait for #14795?

l0rd (Contributor, Author) commented Oct 16, 2019

@skabashnyuk let's close it. We are already tracking "The default workspace namespace/project should be <username>-che" as a separate issue.

slemeur (Contributor) commented Nov 12, 2019

I'm reopening this epic, as there are subtasks that are not completed.

@slemeur slemeur reopened this Nov 12, 2019
@slemeur slemeur changed the title Change some wsmaster defaults Smoother installation (good defaults) Nov 12, 2019
@skabashnyuk skabashnyuk modified the milestones: 7.3.0, Backlog - Platform Nov 12, 2019
@skabashnyuk skabashnyuk added the area/install Issues related to installation, including offline/air gap and initial setup label Nov 12, 2019
@skabashnyuk skabashnyuk removed this from the Backlog - Platform milestone Nov 12, 2019
skabashnyuk (Contributor) commented

@slemeur @l0rd is this an umbrella issue, or is there a definition of done for this epic? Also, please take a look at #14742 (comment). It might require some non-trivial operations with certificates. Are you sure we want to enable it by default?

@skabashnyuk skabashnyuk added this to the Backlog - Platform milestone Nov 20, 2019
@skabashnyuk skabashnyuk removed this from the Backlog - Platform milestone Dec 16, 2019
slemeur (Contributor) commented Jan 8, 2020

Closing this in favor of following the work on enabling TLS by default: #14742

7 participants