MongoDB Archive Install #244
Conversation
This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

To create a new application, login to your server and then run new-app:

    oc login https://mycluster.mycompany.com
    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
    oc logs -f bc/ruby-ex

This will create an application based on the Docker image 'centos/ruby-22-centos7' that builds the source code from GitHub. A build will start automatically, push the resulting image to the registry, and a deployment will roll that change out in your project.

Once your application is deployed, use the status, describe, and get commands to see more about the created components:

    oc status
    oc describe deploymentconfig ruby-ex
    oc get pods

To make this application visible outside of the cluster, use the expose command on the service we just created to create a 'route' (which will connect your application over the HTTP port to a public domain name):

    oc expose svc/ruby-ex
    oc status

You should now see the URL the application can be reached at. To see the full list of commands supported, run 'oc --help'.
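Putting those steps together, the whole flow (reusing the example values from the help text above, not values specific to this PR) is:

```bash
# Full sequence from the description above; the cluster URL and the ruby-ex
# sample app are the examples used in the help text, not project-specific values.
oc login https://mycluster.mycompany.com                                      # authenticate against the cluster
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git   # image + source -> new application
oc logs -f bc/ruby-ex                                                         # follow the build
oc status                                                                     # overview of created components
oc describe deploymentconfig ruby-ex                                          # deployment details
oc get pods                                                                   # running pods
oc expose svc/ruby-ex                                                         # create a route for external access
oc status                                                                     # now shows the route's public URL
```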
Can one of the admins verify this patch?
@ecwpz91 thanks for this contribution, although I must admit I'm not sure how to answer, because as far as I'm aware we've never talked about maintaining Dockerfiles that use upstream tarballs in these repos so far. If you only want to get to the MongoDB 3.4 version, I'd rather create a Dockerfile that uses Fedora packages or even copr packages. Anyway, can you explain more about what your intention was? It looks like you combined new features with a new version of MongoDB, and it's not clear what the original purpose was.
@hhorak my main intention is to share my knowledge/experience from migrating from the 3.2 to the 3.4 release. For instance, I experienced this bug and implemented changes to address it. I'm also hoping to improve the experience for newcomers such as myself, e.g. project structure, comments, and documentation.

Backstory: my efforts originated from a proof of concept. I was asked to use upstream tarballs, but copr packages are a better implementation imho. Anyway, I do not plan on maintaining this project going forward; I just wanted to raise awareness of what I'd already done, so the community can pick and choose which of the non-conventional things are worth adopting. I have no strong preference about what gets adopted. I just want to help, but I know I made a lot of changes independently that could be disruptive to the current state of the project.
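As a side note for anyone doing the same migration: the bug above isn't spelled out here, but one step the upstream 3.2 to 3.4 upgrade always requires (and that container scripts have to account for) is raising the feature compatibility version once the binaries are swapped. A minimal sketch, assuming a standalone mongod on the default port:

```bash
# Hedged sketch: after replacing 3.2 binaries with 3.4 ones, new 3.4 features
# stay disabled until the feature compatibility version is bumped explicitly.
# The host and port are assumptions (a standalone mongod on the default port).
mongo --host localhost --port 27017 \
  --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })'
```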
FWIW, I'm picking up @ecwpz91's work on my own fork: https://github.com/jbornemann/mongodb-container/tree/working_branch. I'm adding some requested features to 3.4 (sharding, config servers, mongos) and also fixing a few minor issues with elections. I plan to get it pulled upstream when I am finished, and I'd be happy to work with @ecwpz91 on it.
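For anyone unfamiliar with the pieces being added there (config servers, mongos, sharding), below is a minimal sketch of how a 3.4 sharded cluster is wired together by hand; the hostnames, ports, and data paths are illustrative assumptions, not taken from the fork:

```bash
# In 3.4 the config servers must run as a replica set (CSRS).
mongod --configsvr --replSet cfgRS --port 27019 --dbpath /data/cfg0 \
       --fork --logpath /data/cfg0.log
mongo --port 27019 --eval 'rs.initiate({_id: "cfgRS", configsvr: true, members: [{_id: 0, host: "localhost:27019"}]})'

# One shard, itself a single-member replica set for the example.
mongod --shardsvr --replSet shard0RS --port 27018 --dbpath /data/shard0 \
       --fork --logpath /data/shard0.log
mongo --port 27018 --eval 'rs.initiate({_id: "shard0RS", members: [{_id: 0, host: "localhost:27018"}]})'

# mongos is the query router; it points at the config server replica set.
mongos --configdb cfgRS/localhost:27019 --port 27017 --fork --logpath /data/mongos.log

# Register the shard with the cluster.
mongo --port 27017 --eval 'sh.addShard("shard0RS/localhost:27018")'
```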
See PR #255 for details.
Installs MongoDB 3.4 from a tarball when distribution packages are unavailable.

Although this is not a typical SCL install, and it is somewhat redundant when considering s2i, the notable points are:

- `env` variables

Also, I've got two private projects as well; hopefully they're of interest?
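For context, an archive ("tarball") install of the kind this PR describes typically looks like the sketch below; the exact version, download URL, and install prefix are my assumptions rather than values taken from the PR:

```bash
# Hedged sketch of a tarball-based MongoDB 3.4 install (version and paths assumed).
MONGODB_VERSION=3.4.9
curl -fsSL "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-${MONGODB_VERSION}.tgz" \
     -o /tmp/mongodb.tgz
tar -xzf /tmp/mongodb.tgz -C /opt
ln -s "/opt/mongodb-linux-x86_64-${MONGODB_VERSION}" /opt/mongodb
export PATH=/opt/mongodb/bin:$PATH

mongod --version   # sanity check that the extracted binaries run
```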