An interactive app designed for OSC OnDemand that launches Galaxy within an Owens batch job.
This Batch Connect app requires the following software be installed on the compute nodes that the batch job is intended to run on (NOT the OnDemand node):
- Lmod 6.0.1+ or any other `module restore` and `module load <modules>` based CLI used to load appropriate environments within the batch job before launching Galaxy.
The install process runs on the login node.

Use git to clone this app and check out the desired branch/version you want to use:

```sh
git clone <repo>
cd <dir>
git checkout <tag/branch>
```

Install Galaxy and its dependencies:

```sh
sh install-galaxy.sh
```
You will not need to do anything beyond this as all necessary assets are installed. You will also not need to restart this app as it isn't a Passenger app.
To update the app you would:

```sh
cd <dir>
git fetch
git checkout <tag/branch>
```
Again, you do not need to restart the app as it isn't a Passenger app.
- Fork it ( https://github.com/OSC/bc_osc_galaxy/fork )
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
See the inline comments on PR #7 for more information. The main PR description is copied below.

The Galaxy interface app runs on Owens. Users can install, manage, and run tools and workflows.
- Remove galaxy submodule
- Add install-galaxy.sh to install Galaxy 19.09
- Galaxy config files are removed and will be generated by before.sh.erb instead
- Update README
- Job Configuration. Currently, the jobs submitted via Galaxy will run on the same node as Galaxy.
- Get Data from external data sources
- Match workflow to destination
- The links in the sidebar and the main section are the same. The links in the sidebar work, but the links in the main section are broken. As the screenshot shows, `admin/roles` is not correctly appended to the URL.
- `pbs-python` is installed at run time when launching:

```
Collecting pbs_python (from -r /dev/stdin (line 1))
Installing collected packages: pbs-python
Successfully installed pbs-python-4.4.2.1
```
We may need to add something similar to this to install-galaxy.sh:

```sh
galaxy_user@galaxy_server% git clone https://github.com/ehiggs/pbs-python
galaxy_user@galaxy_server% cd pbs-python
galaxy_user@galaxy_server% source /clusterfs/galaxy/galaxy-app/.venv/bin/activate
galaxy_user@galaxy_server% python setup.py install
```
- When sharing the app, some files try to write to the Galaxy directory. The app fails to mount to `/pun/dev/galaxy` due to a 404 Not Found error.
After cloning this repo, run `sh install-galaxy.sh` to clone Galaxy release_19.09 into the `./galaxy` folder, install dependencies in the virtual environment under the `./galaxy/.venv` folder, and install `_conda` under the `./galaxy/database/dependencies` folder. This script will also build the custom visualization plugins.
After `sh install-galaxy.sh` completes, Galaxy can be launched as an interactive app. In `before.sh.erb`, `galaxy.yml` (general configuration), `job_resource_params_conf.xml` (job resource configuration for users to select), and `job_conf.xml` (job runner configuration) are generated.
Galaxy is mounted at `/node/${HOSTNAME}/8080` via the uWSGI mount option `/node/${HOSTNAME}/8080=galaxy.webapps.galaxy.buildapp:uwsgi_app()`.
Data files are stored in the user's dataroot (defaults to `~/.galaxy/`, configured in `galaxy.yml`):

```
azhu $ ls ~/.galaxy/
citations  compiled_templates  control.sqlite  files  jobs_directory  object_store_cache  pbs  tmp  universe.sqlite
```
`galaxy.yml` takes in the user's email address as the user authentication in single-user mode. User identification has to be in email format, so `${USER}@osc.edu` is passed to `galaxy.yml` as a temporary solution. Further authentication can be configured as described [here](https://galaxyproject.org/admin/config/external-user-auth/).
The user selects the tool runner before starting the app. The developer adds destinations to the job config file and assigns the user-selected runner as the default (see `bc_osc_galaxy/template/before.sh.erb`, lines 51 to 62 at commit 9afb0b7, and the sketch below).
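A minimal sketch of what such a `<destinations>` section in the generated `job_conf.xml` could look like; the destination ids and the rule function name `dynamic_cores_time` are illustrative assumptions, not the app's actual values:

```xml
<!-- Hypothetical sketch: ids and the rule function name are assumptions -->
<destinations default="local">
    <!-- Default: tool jobs run locally on the node hosting Galaxy -->
    <destination id="local" runner="local"/>
    <!-- Dynamic destination: dispatches to a Python rule function that
         builds a PBS resource list from the user's selections -->
    <destination id="dynamic_destination" runner="dynamic">
        <param id="type">python</param>
        <param id="function">dynamic_cores_time</param>
    </destination>
</destinations>
```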
The user can choose the default runner or specify resources.

If the user chooses the default runner:

- Tool jobs won't be queued and will run immediately.
- The number of concurrent jobs is limited; the maximum is the number of cores.
- When the session ends, unfinished jobs will end too.

If the user chooses to specify resources:

- When the session ends, unfinished jobs will continue to run.
- Unlimited number of concurrent jobs.
- Galaxy can only submit jobs to the same cluster Galaxy is running on. For now we run Galaxy on Owens, so it's not able to submit jobs to quick; therefore, there's a waiting time for jobs to run.
- It's very configurable and flexible: we can configure different resources for different tools.
- Because we can configure different resources for different tools, we have to specify the resources for each tool in the job conf file. If the user installs new tools, we need to find a way to also add the configuration for the new tools to the job conf file.
- Because the resource selection is part of the tool form, users can't specify resources for tools without tool forms, like the tools under the `GET DATA` section.
As an example, I configured the BED-to-GFF tool to provide resource selection fields. Steps to configure a tool to use the dynamic runner based on the resource parameters selected by the user:
- Specify the parameters in the job resource configuration file (https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/config/sample/job_resource_params_conf.xml.sample). The following example contains `cores` and `walltime`; the input field can be an input box or a dropdown with several options. (See `bc_osc_galaxy/template/before.sh.erb`, lines 67 to 76 at commit 9afb0b7, and the sketch below.)
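A minimal sketch of such a resource parameter file, modeled on the Galaxy sample linked above; the labels, ranges, and help text are assumptions:

```xml
<!-- Hypothetical job_resource_params_conf.xml; labels, ranges, and help
     text are assumptions modeled on the Galaxy sample file -->
<parameters>
    <param label="Cores" name="cores" type="integer" size="3"
           min="1" max="28" value=""
           help="Number of cores to request. Leave blank for the default." />
    <param label="Walltime" name="walltime" type="integer" size="3"
           min="1" max="168" value=""
           help="Walltime in hours. Leave blank for the default." />
</parameters>
```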
- Add rules to the https://github.com/galaxyproject/galaxy/tree/dev/lib/galaxy/jobs/rules directory to match the job resource parameters entered by the user to destinations. The following example matches the default runner to the default destination; if the user enters cores and walltime, we construct a resource list and run the tool with the pbs runner. (See `bc_osc_galaxy/install-galaxy.sh`, lines 11 to 43 at commit 9afb0b7.)
- Add the dynamic job runner to the `<plugins>` section of the job config file. The `rules_module` field indicates the location of the files we created at step 2. (See `bc_osc_galaxy/template/before.sh.erb`, lines 47 to 49 at commit 9afb0b7, and the sketch below.)
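A sketch of the `<plugins>` entry; the `galaxy.jobs.rules` module path is an assumption:

```xml
<!-- Dynamic runner plugin; the rules_module path is an assumption -->
<plugins>
    <plugin id="dynamic" type="runner">
        <param id="rules_module">galaxy.jobs.rules</param>
    </plugin>
</plugins>
```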
- Inside `<resources>` in the job config file, add a group of the parameters we defined at step 1 and define the group id. (See `bc_osc_galaxy/template/before.sh.erb`, lines 38 to 40 at commit 9afb0b7, and the sketch below.)
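A sketch of such a resource group; the group id `cores_walltime` is an illustrative assumption:

```xml
<!-- Resource parameter group; the group id is an assumption -->
<resources>
    <group id="cores_walltime">cores,walltime</group>
</resources>
```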
- Inside `<tools>` in the job config file, specify the `id="tool_id"`, `destination="destination_id"`, and `resources="resource_group_id"` attributes. (See `bc_osc_galaxy/template/before.sh.erb`, lines 41 to 43 at commit 9afb0b7, and the sketch below.)
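A sketch of the `<tools>` entry for the BED-to-GFF example above; the tool id `bed2gff1` and the destination id are assumptions:

```xml
<!-- Hypothetical tool mapping; tool and destination ids are assumptions -->
<tools>
    <tool id="bed2gff1" destination="dynamic_destination" resources="cores_walltime"/>
</tools>
```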
Tools are defined in the XML files under https://github.com/galaxyproject/galaxy/tree/dev/tools. A tool's id is defined in its `<tool>` tag, such as `<tool id="createInterval" name="Create single interval" version="1.0.0">`.