Technical FAQ
https://stackoverflow.com/questions/tagged/eclipse-che+theia
It is eclipse/che-theia, which is located here.
The version of this image consists of two parts: [THEIA_VERSION]-[CHE_VERSION]
- the first part is the version of Theia inside the image
- the second part is the Che version itself (e.g. eclipse/che-theia:0.3.10-nightly, eclipse/che-theia:0.3.10-6.7.0, etc.)
You need to change the value of the THEIA_VERSION argument in the Dockerfile.
Beware that a CQ needs to be created for each Theia version upgrade.
Patches are per Theia version, so no need to remove them.
Integration tests are executed by default. Upgrading Theia may require updating integration tests.
Patches are per version. Say you want to patch version 0.3.12: put your patches in the dockerfiles/theia/src/patches/0.3.12 folder and name them like 001-this-is-my.patch, 002-another.patch. For 0.3.13, patches will go in dockerfiles/theia/src/patches/0.3.13, etc.
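The layout described above can be sketched in shell (the version number and patch names are illustrative):

```shell
#!/bin/sh
# Example layout for patching Theia 0.3.13 - version and patch names
# are illustrative, following the convention described above
THEIA_VERSION=0.3.13
PATCH_DIR="dockerfiles/theia/src/patches/${THEIA_VERSION}"
mkdir -p "${PATCH_DIR}"
# Numeric prefixes keep the patches in a predictable apply order
touch "${PATCH_DIR}/001-this-is-my.patch" "${PATCH_DIR}/002-another.patch"
ls "${PATCH_DIR}"
```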
The sources of eclipse/che-theia are located here. After the changes are made, you need to rebuild the image, either using the build script:
$ ./build.sh --build-args:GITHUB_TOKEN=$GITHUB_TOKEN,THEIA_VERSION=0.3.13 --tag:0.3.13-nightly
or using docker:
$ docker build -t eclipse/che-theia:0.3.13-nightly --build-arg GITHUB_TOKEN={your token} --build-arg THEIA_VERSION=0.3.13 .
Integration tests are launched by default during the build. You can skip them with the --skip-tests option:
./build.sh --skip-tests
You can use the yeoman generator as described here: https://www.theia-ide.org/doc/Authoring_Plugins.html
To use a custom Theia IDE Che plug-in, one needs to host a custom plugin registry and the plug-in itself.
In this example we'll use GitHub to host the needed files, but it is possible to use another solution.
First we need to create a Che plugin with the Theia IDE. It should be a che-plugin.yaml file inside some *.tar.gz file. To simplify the process, one may just fork the existing sample, make some changes in etc/che-plugin.yaml (for example, specify another Theia image, add an environment variable, etc.) and create the plugin metadata file (one may use build.sh from the repo, which just creates an archive from the config file in the etc directory).
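As a rough sketch, the edit to etc/che-plugin.yaml might look like the fragment below. The field names here are illustrative, not authoritative; follow the schema of the forked sample:

```yaml
# Hypothetical sketch of etc/che-plugin.yaml - field names are illustrative,
# keep whatever schema the forked sample uses
id: org.eclipse.che.editor.theia
version: 1.0.0
type: Che Editor
name: theia-ide
containers:
  - name: theia-ide
    # point this at your custom Theia image
    image: eclipse/che-theia:0.3.13-nightly
```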
Then create a new release in your own fork and attach (via Edit -> Attach binaries) the che-editor-plugin.tar.gz file.
The second step is to add the plugin from the first step to a custom plugin registry. Fork the official registry, and create a new plugin or edit the existing configuration plugins/org.eclipse.che.editor.theia/1.0.0/meta.yaml. In the case of a custom Theia, one should edit the url field to point to the tar file from the first step. For example, the url may look like:
https://github.com/ws-skeleton/che-editor-theia/releases/download/untagged-efe6c6ceb88545c76b94/che-editor-plugin.tar.gz
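In meta.yaml the edit is then just the url field; keep the other fields of the existing configuration as they are:

```yaml
# In plugins/org.eclipse.che.editor.theia/1.0.0/meta.yaml, only the url
# field needs editing for a custom Theia; leave the rest of the file intact
url: https://github.com/ws-skeleton/che-editor-theia/releases/download/untagged-efe6c6ceb88545c76b94/che-editor-plugin.tar.gz
```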
Finally, create a workspace, go to the workspace settings, open the Config tab and edit the editor attribute (or plugins if it is not an editor).
Instead of the plugin name, the URL of the plugin root should be specified, followed by a colon and the plugin version.
For example:
https://raw.githubusercontent.com/username/che-custom-plugins/master/plugins/org.eclipse.che.editor.theia:1.0.0
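In the Config tab, the relevant fragment of the workspace configuration might then look roughly like this (a sketch; the surrounding workspace config is elided and the attribute structure should be checked against your actual config):

```json
{
  "attributes": {
    "editor": "https://raw.githubusercontent.com/username/che-custom-plugins/master/plugins/org.eclipse.che.editor.theia:1.0.0"
  }
}
```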
Then save the changes and start the workspace.
Complete documentation is located in this Theia documentation section: https://github.com/theia-ide/theia/blob/master/packages/plugin/API.md
WIP documentation is located here: https://github.com/theia-ide/theia/blob/plugin-api-documentation/packages/plugin-ext/doc/how-to-add-new-plugin-api.md
- Frontend: https://github.com/eclipse/che/wiki/Contributing-to-Che7-and-Che-Theia#frontend-plugin
- Backend: https://github.com/eclipse/che/wiki/Contributing-to-Che7-and-Che-Theia#backend-plugin
The error in the browser console looks like:
Uncaught (in promise) Error: pods is forbidden: User "system:serviceaccount:mini-che:default" cannot list pods in the namespace "mini-che": User "system:serviceaccount:mini-che:default" cannot list pods in project "mini-che"
To fix this, grant the needed privileges to your user:
oc login -u system:admin
oc adm policy add-cluster-role-to-user admin system:serviceaccount:mini-che:default
To figure out which cluster your workspace is running in:
- Connect to the openshift.io dashboard
- Create a new space ("New space" in top left corner)
- Create a test project by clicking the "Create an application" button, and following the wizard
- Kick off a pipeline and view its progress. You should see the application in the Stage section of the application view.
- Click on the "Build #x" link for the pipeline, you will be taken to an OpenShift Online cluster
- In the OpenShift Online console, there are multiple namespaces with a common root available in a drop-down at the top left, including -che, -jenkins, -run, and -stage. Select the "username-che" namespace
You are now looking at the OpenShift project which contains your Che workspaces.
Performance when starting workspaces is affected by the number of files in your Persistent Volume in the Che namespace on OpenShift. When projects are deleted, you may end up with several workspace folders with lots of files on this volume.
You can connect directly to OpenShift, attach this volume to another container, and remove any files which you do not require any more. Here are the commands to clean them up.
To connect to OpenShift, you first need to install the oc command line tool and log in to your OpenShift instance. To log in with the oc command line:
- Install the OpenShift CLI tools
- Find the OpenShift instance your account is allocated to (by following the previous question)
- Connect to the OpenShift dashboard you found (e.g. https://console.starter-us-east-2.openshift.com/)
- Click on your username in the top right, and in the pop-up menu choose Copy Login command
- Paste and execute the line in a local terminal - it should be of the form:
oc login https://api.starter-us-east-2.openshift.com --token=<hidden>
After logging in to OpenShift (see previous question), you can attach to a container with this volume mounted, and clean up the files:
oc project xxxx-che # usually xxxx-che is username-che - selects your Che project
oc run cleanup --image=registry.access.redhat.com/rhel7 -- tail -f /dev/null # Start a long-running idle container that the volume can be mounted into
oc volume dc/cleanup --add -t pvc --name=cleanup --claim-name=claim-che-workspace --mount-path=/workspaces # Mount your persistent volume at /workspaces
oc get pods # Identify the name of the running cleanup pod
oc rsh cleanup-X-XXXXX # Use the pod name found above - this logs you in to the container with your workspaces mounted in the `/workspaces` folder
# Find and remove any orphaned workspace folders in `/workspaces` using normal Unix shell commands
oc delete all -l app=cleanup # Once the folders are removed, delete the cleanup resources