
[BREAK] Require OPLOG/REPLICASET to run Rocket.Chat #14227

Merged
merged 1 commit into develop from require-oplog on Apr 24, 2019

Conversation

rodrigok
Member

@rodrigok rodrigok commented Apr 23, 2019

How to use Rocket.Chat with Oplog

  1. Start your mongodb in replicaset mode:
mongod --smallfiles --oplogSize 128 --replSet rs0
  2. Initiate the replicaset via the mongodb shell:
mongo localhost/rocketchat --eval "rs.initiate({ _id: 'rs0', members: [ { _id: 0, host: 'localhost:27017' } ]})"
  3. Start your Rocket.Chat instance with the OPLOG configuration:
export PORT=3000
export ROOT_URL=http://localhost:3000
export MONGO_URL=mongodb://localhost:27017/rocketchat
export MONGO_OPLOG_URL=mongodb://localhost:27017/local
export MAIL_URL=smtp://smtp.email
node main.js
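
Once the app is up, you can sanity-check that the oplog is really available (a quick sketch; adjust host/port to your setup):

mongo --eval "rs.status().myState"         # should print 1 (PRIMARY) for the single node
mongo local --eval "db.oplog.rs.count()"   # should be greater than 0 once the set is initiated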

Docker Compose

To start a mongodb replicaset via docker-compose you need an additional job to execute the rs.initiate command, and you need to add the MONGO_OPLOG_URL env var to the rocketchat service config:

version: '2'

services:
  rocketchat:
    image: rocketchat/rocket.chat:latest
    command: bash -c 'for i in `seq 1 30`; do node main.js && s=$$? && break || s=$$?; echo "Tried $$i times. Waiting 5 secs..."; sleep 5; done; (exit $$s)'
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost:3000
      - MONGO_URL=mongodb://mongo:27017/rocketchat
      - MONGO_OPLOG_URL=mongodb://mongo:27017/local
      - MAIL_URL=smtp://smtp.email
    depends_on:
      - mongo
    ports:
      - 3000:3000

  mongo:
    image: mongo:4.0
    restart: unless-stopped
    volumes:
     - ./data/db:/data/db
    command: mongod --smallfiles --oplogSize 128 --replSet rs0 --storageEngine=mmapv1

  # this container's job is just to run the command to initialize the replica set.
  # it will run the command and remove itself (it will not stay running)
  mongo-init-replica:
    image: mongo:4.0
    command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
    depends_on:
      - mongo

@rodrigok rodrigok added this to the 1.0.0 milestone Apr 23, 2019
@rodrigok rodrigok merged commit b587351 into develop Apr 24, 2019
@engelgabriel engelgabriel deleted the require-oplog branch April 24, 2019 14:00
@rodrigok rodrigok mentioned this pull request Apr 28, 2019
@LeeThompson

Why was this done? I have no need to run this on a cluster and now I have to install all this extra crap?

@xcpdq

xcpdq commented Apr 28, 2019

This is incredible. WHY was this done? Who needs clusters and whatnot? Our entire rocket chat installation is now messed up because of this!!!! And the medium.com link is so old. Why didn't you provide installation/upgrade instructions for this? Unbelievable!

@LeeThompson

LeeThompson commented Apr 28, 2019

I managed to get replication set up on MongoDB and Rocket.Chat still won't start due to no oplog (even though there is one). My installation is completely broken!

EDIT: Working after 2.5 hours of frantic reading and trying things. I don't know why Rocket.Chat decided to do this (it's even called [BREAK]) but this was really irresponsible.

@LeeThompson

LeeThompson commented Apr 28, 2019

I got this to work again finally but it was a lot of work, I'm including my steps below in the hope that this will help someone else.

Please note, my installation is using Synology Docker and MongoDB is only there for RocketChat so security is completely turned off.

  1. I set MongoDB to be a single-server replica set. I had to recreate the container with the following startup parameters: --smallfiles --replSet rs0 --config /etc/mongo/mongod.conf (the .conf isn't really being used but it's set in case I want to make changes). If you need to re-create the container and you're using docker links (only required if the containers are using docker's private network space), you'll likely need to delete and re-create the link (named mongo in this case).

  2. In the MongoDB container, start a bash shell.

From bash, start the mongo client:

mongo

In the mongo client:

db.createUser({user: "oploguser", pwd: "password", roles: [{role: "read", db: "local"}]})
use local
rs.initiate()

NOTE: The createUser is only necessary if you're using authentication with MongoDB.

  3. In the RocketChat container, add the following environment variables (I'm using = to separate the variable name from the value):

In my case since I don't have any MongoDB security whatsoever:
MONGO_OPLOG_URL=mongodb://mongodb:27017/local

with authentication:
MONGO_OPLOG_URL=mongodb://oploguser:password@mongodb:27017/local?authSource=admin&replicaSet=rs0

@franckadil

@LeeThompson Thank you!!! My install is broken too :( I'll give your solution a try.

@franckadil

franckadil commented Apr 28, 2019

@LeeThompson Do you have any suggestions for docker-compose?

This is my file:

version: '3'
services:

 web:
   build: ./img/nginx
   container_name: web
   restart: unless-stopped
   ports:
      - "80:80"
      - "443:443"
   environment:
      - DOMAIN=test.com
      - EMAIL=test@test.com
   volumes:
      - ./letsencrypt:/etc/letsencrypt
      - ./apps:/var/www/html
      - ./conf/nginx_config.conf:/etc/nginx/conf.d/default.conf
   links:
     - rocketchat
 db:
   image: mongo
   container_name: db
   restart: unless-stopped
   volumes:
     - ./data/runtime/db:/data/db
     - ./data/dump:/dump
   command: mongod --smallfiles

 rocketchat:
   image: rocketchat/rocket.chat:latest
   container_name: rocketchat
   restart: unless-stopped
   volumes:
     - ./uploads:/uploads
     #- ./fixes/lame.min.js:/app/bundle/programs/web.browser/app/lame.min.js
     #The line above fixes mp3 recording until we get new release.
     #https://github.com/RocketChat/Rocket.Chat/issues/10530
   environment:
     - MONGO_URL=mongodb://db:27017/rocketchat
     - ROOT_URL=https://test.com
     - Accounts_UseDNSDomainCheck=True
   links:
     - db:db
   ports:
     - 3000:3000

 hubot:
   image: rocketchat/hubot-rocketchat:latest
   container_name: hubot
   restart: unless-stopped
   environment:
     - ROCKETCHAT_URL=https://test.com
     - ROCKETCHAT_ROOM=GENERAL
     - ROCKETCHAT_USER=bot
     - ROCKETCHAT_PASSWORD=xxxxx
     - LISTEN_ON_ALL_PUBLIC=true
     - BOT_NAME=bot
     - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-greetings,hubot-diagnostics,hubot-google,hubot-reddit,hubot-bofh,hubot-bookmark,hubot-shipit,hubot-maps,hubot-thesimpsons 
   links:
     - rocketchat:rocketchat
 # this is used to expose the hubot port for notifications on the host on port 3001, e.g. for hubot-jenkins-notifier
   ports:
     - 3001:8080

@LeeThompson

LeeThompson commented Apr 28, 2019

@franckadil I haven't used docker-compose but maybe this:

NOTE: Back up your data first if you care about it, in case this goes horribly wrong.

Once your MongoDB is running with the --replSet option, you will need to attach a terminal to the Mongo container and use the "mongo" command client to initiate the replset (and create the oplog.rs collection):

use local
rs.initiate()
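
If you prefer a one-liner instead of attaching a shell, something like this should work (a sketch; "db" is the container name used in the snippets below):

docker exec db mongo local --eval "rs.initiate()"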

docker-compose snippets:

db:
  image: mongo
  container_name: db
  restart: unless-stopped
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

rocketchat:
  image: rocketchat/rocket.chat:latest
  container_name: rocketchat
  restart: unless-stopped
  volumes:
    - ./uploads:/uploads
    #- ./fixes/lame.min.js:/app/bundle/programs/web.browser/app/lame.min.js
    #The line above fixes mp3 recording until we get new release.
    ##10530
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://db:27017/local
    - ROOT_URL=https://test.com
    - Accounts_UseDNSDomainCheck=True
  links:
    - db:db
  ports:
    - 3000:3000

@franckadil

franckadil commented Apr 28, 2019

@LeeThompson Thank you very much, I finally got it working !
I created an intermediate container to initialise the replica instead of doing this manually.
My data persisted.

If somebody could update the docker-compose documentation …

  1. First add this for your database:
    command: mongod --smallfiles --replSet rs0
    Ex:
 mongo:
   image: mongo
   container_name: db
   restart: unless-stopped
   volumes:
     - ./data/runtime/db:/data/db
     - ./data/dump:/dump
   command: mongod --smallfiles --replSet rs0
  2. Then add this:
    This container's job is just to run the command to initialize the replica set; it will run the command and remove itself (it will not stay running):
 mongo-init-replica:
   image: mongo
   command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
   depends_on:
     - mongo
  3. This is for the Rocket.Chat container:
    • PORT=3000
    • MONGO_URL=mongodb://mongo:27017/rocketchat
    • MONGO_OPLOG_URL=mongodb://mongo:27017/local
      Ex:
 rocketchat:
   image: rocketchat/rocket.chat:latest
   container_name: rocketchat
   restart: unless-stopped
   volumes:
     - ./uploads:/uploads
   environment:
     - PORT=3000
     - MONGO_URL=mongodb://mongo:27017/rocketchat
     - MONGO_OPLOG_URL=mongodb://mongo:27017/local
     - ROOT_URL=https://yoururl.com
     - Accounts_UseDNSDomainCheck=True
   links:
     - mongo:mongo
   ports:
     - 3000:3000

@geekgonecrazy
Contributor

Which docker-compose did you use? The one here on the repo actually has oplog configured. https://github.com/RocketChat/Rocket.Chat/blob/develop/docker-compose.yml

Just to be clear: this does not require an actual replicaset. It just requires that at least the single node be configured in replicaset mode so that the oplog is enabled.

In addition, make sure you set MONGO_OPLOG_URL so that Rocket.Chat is aware of this.
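
In other words, the minimum needed is roughly this (a sketch; names and ports are illustrative):

mongod --replSet rs0 ...                                   # single node, started in replica-set mode
mongo --eval "rs.initiate()"                               # run once, creates the oplog
export MONGO_OPLOG_URL=mongodb://localhost:27017/local     # tells Rocket.Chat where the oplog lives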

@franckadil

franckadil commented Apr 28, 2019

@geekgonecrazy I created my own stack after searching the official Rocket.Chat documentation and Google.

I had to customize docker-compose to be able to automate the letsencrypt/nginx certification by building a custom dockerfile for nginx/certbot. Are you interested in receiving the complete stack to update the official documentation for people new to docker and Rocket.Chat?

@rodrigok
Member Author

@franckadil please send us your docker-compose, we can add some parts to our own or create a new file just for a more complete stack.

Thanks

@dimm0

dimm0 commented Apr 29, 2019

Any chance you'll change your mind and make this optional? I don't even understand what this does...
Our rocketchat service was broken for a day, with 450 users.
Do you have an example for Kubernetes?

@zogith

zogith commented Apr 29, 2019

Hi,
I don't know that much about MongoDB and installed my server from a fairly simple set of "docker run" commands as per a tutorial a while back, so I was quite blindsided by this change. It wasn't clear that my mongodb container, as created per those instructions, would not have this "OPLOG/REPLICASET".

I wouldn't have expected a change between a v1.0.0 Release Candidate and v1.0.0 to completely break my simple rocketchat install. I eventually fixed it with the instructions provided by Lee Thompson above, but it would have been really great if this was communicated more clearly. Thanks.

I used the edit-docker-config.v2.json hack to restart the mongo docker container with --replSet rs0 to get back up and running.

@jekuno

jekuno commented Apr 29, 2019

The OPLOG/REPLICASET requirement also breaks Rocket.Chat deployments on Heroku. Would it be an option to keep OPLOG/REPLICASET optional?

@xcpdq

xcpdq commented Apr 29, 2019

I can't get this to work :( I initiated the replset with the following commands:
First I started the mongo container with docker-compose up db, then I entered the container and typed:
$ mongo
use local
rs.initiate()

I stopped the container and then typed docker-compose up. I'm getting the following error now...

rocketchat_1  | /app/bundle/programs/server/node_modules/fibers/future.js:313
rocketchat_1  |                                                 throw(ex);
rocketchat_1  |                                                 ^
rocketchat_1  |
rocketchat_1  | MongoError: not master and slaveOk=false
rocketchat_1  |     at queryCallback (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/cursor.js:248:25)
rocketchat_1  |     at /app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:532:18
rocketchat_1  |     at _combinedTickCallback (internal/process/next_tick.js:131:7)
rocketchat_1  |     at process._tickCallback (internal/process/next_tick.js:180:9)
db_1          | 2019-04-29T07:57:05.129+0000 I -        [conn1] end connection 172.17.0.3:60152 (3 connections now open)
db_1          | 2019-04-29T07:57:05.133+0000 I -        [conn3] end connection 172.17.0.3:60156 (2 connections now open)
rocketchat_rocketchat_1 exited with code 1

My docker-compose.yml file looks like this:

db:
  image: mongo:3.4
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

rocketchat:
  image: rocketchat/rocket.chat:latest
#  image: chat:backup
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://db:27017/local
    - ROOT_URL=https://myurl
  links:
    - db:db
  ports:
    - 3000:3000

Can someone tell me what I did wrong please?

@rockneverdies55

Even using @franckadil 's simple compose I'm still getting the same Error: $MONGO_OPLOG_URL must be set to the 'local' database of a Mongo replica set... error though. Weird.

@rockneverdies55

Doing this #6963 (comment) has worked.

Thanks @franckadil and @geekgonecrazy

@franckadil

@rockneverdies55 Yes, sorry, I forgot to mention that you should delete your old containers first. And in some rare cases doing a docker system prune can help delete the cache and reset everything for a clean run. I am happy to know you've come out on top of these errors!

@exitsoundhh

@exitsoundhh Just commenting your code here ;)

  1. Start with the database first.
    - You should be able to use the latest Mongo version as it brings more robust features and security updates.
  • So your service is called db and your container image is mongo (for future reference db:mongo)
  • Your db-init-replica service is missing the container link:
   depends_on:
     - db

The first part you want:

db:
  image: mongo
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

db-init-replica:
  image: mongo
  command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
  depends_on:
    - db
  2. Then you should start the Rocketchat service.
  • your link reference should be db:mongo

The second part:

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://mongo:27017/local
    - ROOT_URL=https://test.chat.com
  links:
    - db:mongo
  ports:
    - 3000:3000

This is what I could spot.

@franckadil big thanks for your help. I have modified the docker-compose.yml file but it didn't work for me.

db:
  image: mongo:3.2
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

db-init-replica:
  image: mongo:3.2
  command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
  links:
    - db
# Unsupported config option for db-init-replica service: 'depends_on'

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://mongo:27017/local
    - ROOT_URL=https://test.chat.com
  links:
    - db:mongo
  ports:
    - 3000:3000

my commands:

  1. docker-compose up -d db
  2. docker-compose up -d db-init-replica
  3. docker-compose up

failure code:

rocketchat_1      | MongoNetworkError: failed to connect to server [db:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND db db:27017]
rocketchat_1      |     at Pool.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/topologies/server.js:564:11)
rocketchat_1      |     at emitOne (events.js:116:13)
rocketchat_1      |     at Pool.emit (events.js:211:7)
rocketchat_1      |     at Connection.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:317:12)
rocketchat_1      |     at Object.onceWrapper (events.js:317:30)
rocketchat_1      |     at emitTwo (events.js:126:13)
rocketchat_1      |     at Connection.emit (events.js:214:7)
rocketchat_1      |     at Socket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:246:50)
rocketchat_1      |     at Object.onceWrapper (events.js:315:30)
rocketchat_1      |     at emitOne (events.js:116:13)
rocketchat_1      |     at Socket.emit (events.js:211:7)
rocketchat_1      |     at emitErrorNT (internal/streams/destroy.js:64:8)
rocketchat_1      |     at _combinedTickCallback (internal/process/next_tick.js:138:11)
rocketchat_1      |     at process._tickCallback (internal/process/next_tick.js:180:9)
rocketchat_rocketchat_1 exited with code 1

@franckadil

franckadil commented May 5, 2019

@exitsoundhh This should work, but first delete your containers: docker rm $(docker ps -a -q)

The error message you get, # Unsupported config option for db-init-replica service: 'depends_on', is because your compose file has no version (the legacy v1 format). You must specify version: '3' like in my simple compose example.

If you don't use the versioned (services) format, links: should still work, but here are the corrections:

  1. Use the latest Mongo; it is compatible with your database ;)
  2. Your MONGO_OPLOG_URL was using the wrong container reference.
    Here is the correct syntax: MONGO_OPLOG_URL=mongodb://db:27017/local
  3. Your link was also wrong.
    Here is the correct syntax: - db:mongo

Note: docker-compose up is more than enough, and to stop everything do a docker-compose stop.

When your stack launches correctly, I would recommend you stop it and then run it in the background with docker-compose up -d.
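
For reference, the versioned form would look roughly like this (just a sketch of the relevant part, not a full file):

version: '3'
services:
  db:
    image: mongo
  db-init-replica:
    image: mongo
    depends_on:
      - db

Otherwise, the corrected links-based file: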

db:
  image: mongo
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

db-init-replica:
  image: mongo
  command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
  links:
    - db:mongo

# Unsupported config option for db-init-replica service: 'depends_on'

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://db:27017/local
    - ROOT_URL=https://test.chat.com
  links:
    - db:mongo
  ports:
    - 3000:3000

@exitsoundhh


@franckadil big thanks for your help, I have modified the docker-compose.yml file:

db:
  image: mongo:3.2
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

db-init-replica:
  image: mongo:3.2
  command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
  links:
    - db

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://mongo:27017/local
    - ROOT_URL=https://test.chat.com
  links:
    - db:mongo
  ports:
    - 3000:3000

@exitsoundhh

@franckadil thank you for your help. I think my last error means I must migrate my data for Mongo version 4? Do you have any idea?

I think the configuration is now correct?

db:
  image: mongo
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles --replSet rs0

db-init-replica:
  image: mongo
  command: 'bash -c "for i in `seq 1 30`; do mongo mongo/rocketchat --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
  links:
    - db:mongo

# Unsupported config option for db-init-replica service: 'depends_on'

rocketchat:
  image: rocketchat/rocket.chat:latest
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - MONGO_OPLOG_URL=mongodb://db:27017/local
    - ROOT_URL=https://test.chat.com
  links:
    - db:mongo
  ports:
    - 3000:3000
db_1              | 2019-05-05T15:02:50.050+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=4403895f7180
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] db version v4.0.9
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] allocator: tcmalloc
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] modules: none
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] build environment:
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten]     distarch: x86_64
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten]     target_arch: x86_64
db_1              | 2019-05-05T15:02:50.058+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { replSet: "rs0" }, storage: { mmapv1: { smallFiles: true } } }
db_1              | 2019-05-05T15:02:50.060+0000 I STORAGE  [initandlisten]
db_1              | 2019-05-05T15:02:50.060+0000 I STORAGE  [initandlisten] ** WARNING: Support for MMAPV1 storage engine has been deprecated and will be
db_1              | 2019-05-05T15:02:50.061+0000 I STORAGE  [initandlisten] **          removed in version 4.2. Please plan to migrate to the wiredTiger
db_1              | 2019-05-05T15:02:50.061+0000 I STORAGE  [initandlisten] **          storage engine.
db_1              | 2019-05-05T15:02:50.061+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/deprecated-mmapv1
db_1              | 2019-05-05T15:02:50.061+0000 I STORAGE  [initandlisten]
db_1              | 2019-05-05T15:02:50.061+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'mmapv1' storage engine, so setting the active storage engine to 'mmapv1'.
db_1              | 2019-05-05T15:02:50.084+0000 I JOURNAL  [initandlisten] journal dir=/data/db/journal
db_1              | 2019-05-05T15:02:50.084+0000 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
db_1              | 2019-05-05T15:02:50.092+0000 I JOURNAL  [durability] Durability thread started
db_1              | 2019-05-05T15:02:50.093+0000 I JOURNAL  [journal writer] Journal writer thread started
db_1              | 2019-05-05T15:02:50.094+0000 I CONTROL  [initandlisten]
db_1              | 2019-05-05T15:02:50.094+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1              | 2019-05-05T15:02:50.095+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1              | 2019-05-05T15:02:50.095+0000 I CONTROL  [initandlisten]
db_1              | 2019-05-05T15:02:50.113+0000 F CONTROL  [initandlisten] ** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.6 before attempting an upgrade to 4.0; see http://dochub.mongodb.org/core/4.0-upgrade-fcv for more details.
db_1              | 2019-05-05T15:02:50.114+0000 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
db_1              | 2019-05-05T15:02:50.114+0000 I NETWORK  [initandlisten] removing socket file: /tmp/mongodb-27017.sock
db_1              | 2019-05-05T15:02:50.114+0000 I REPL     [initandlisten] shutting down replication subsystems
db_1              | 2019-05-05T15:02:50.114+0000 W REPL     [initandlisten] ReplicationCoordinatorImpl::shutdown() called before startup() finished.  Shutting down without cleaning up the replication system
db_1              | 2019-05-05T15:02:50.114+0000 I STORAGE  [initandlisten] shutdown: waiting for fs preallocator...
db_1              | 2019-05-05T15:02:50.114+0000 I STORAGE  [initandlisten] shutdown: final commit...
db_1              | 2019-05-05T15:02:50.115+0000 I JOURNAL  [initandlisten] journalCleanup...
db_1              | 2019-05-05T15:02:50.115+0000 I JOURNAL  [initandlisten] removeJournalFiles
db_1              | 2019-05-05T15:02:50.118+0000 I JOURNAL  [initandlisten] old journal file /data/db/journal/j._0 will be reused as /data/db/journal/prealloc.0
db_1              | 2019-05-05T15:02:50.120+0000 I JOURNAL  [initandlisten] Terminating durability thread ...
db_1              | 2019-05-05T15:02:50.217+0000 I JOURNAL  [journal writer] Journal writer thread stopped
db_1              | 2019-05-05T15:02:50.217+0000 I JOURNAL  [durability] Durability thread stopped
db_1              | 2019-05-05T15:02:50.217+0000 I STORAGE  [initandlisten] shutdown: closing all files...
db_1              | 2019-05-05T15:02:50.219+0000 I STORAGE  [initandlisten] closeAllFiles() finished
db_1              | 2019-05-05T15:02:50.219+0000 I STORAGE  [initandlisten] shutdown: removing fs lock...
db_1              | 2019-05-05T15:02:50.219+0000 I CONTROL  [initandlisten] now exiting
db_1              | 2019-05-05T15:02:50.219+0000 I CONTROL  [initandlisten] shutting down with code:62
rocketchat_1      | /app/bundle/programs/server/node_modules/fibers/future.js:313
rocketchat_1      | 						throw(ex);
rocketchat_1      | 						^
rocketchat_1      |
rocketchat_1      | MongoNetworkError: failed to connect to server [db:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND db db:27017]
rocketchat_1      |     at Pool.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/topologies/server.js:564:11)
rocketchat_1      |     at emitOne (events.js:116:13)
rocketchat_1      |     at Pool.emit (events.js:211:7)
rocketchat_1      |     at Connection.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:317:12)
rocketchat_1      |     at Object.onceWrapper (events.js:317:30)
rocketchat_1      |     at emitTwo (events.js:126:13)
rocketchat_1      |     at Connection.emit (events.js:214:7)
rocketchat_1      |     at Socket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:246:50)
rocketchat_1      |     at Object.onceWrapper (events.js:315:30)
rocketchat_1      |     at emitOne (events.js:116:13)
rocketchat_1      |     at Socket.emit (events.js:211:7)
rocketchat_1      |     at emitErrorNT (internal/streams/destroy.js:64:8)
rocketchat_1      |     at _combinedTickCallback (internal/process/next_tick.js:138:11)
rocketchat_1      |     at process._tickCallback (internal/process/next_tick.js:180:9)
rocketchat_rocketchat_1 exited with code 1
Gracefully stopping... (press Ctrl+C again to force)
Stopping rocketchat_db-init-replica_1... done

@franckadil

franckadil commented May 5, 2019

@exitsoundhh yes indeed: IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.6 before attempting an upgrade to 4.0; see http://dochub.mongodb.org/core/4.0-upgrade-fcv for more details. That's maybe why you were fiddling with the versions, my bad...

But isn't there a database upgrade script by Rocket.Chat?
Or maybe I am confusing it with this https://rocket.chat/docs/administrator-guides/database-migration/ ?
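
For reference, the usual MongoDB path is to step the data files through the intermediate releases before jumping to 4.0 — a rough sketch only (back up ./data first; image tags and the starting version are assumptions, not tested against your data):

for v in 3.4 3.6; do
  docker run -d --name mongo-upgrade -v "$PWD/data/runtime/db:/data/db" mongo:$v mongod --smallfiles
  sleep 10   # crude wait for mongod to come up
  docker exec mongo-upgrade mongo --eval "db.adminCommand({ setFeatureCompatibilityVersion: '$v' })"
  docker stop mongo-upgrade && docker rm mongo-upgrade
done
# after this, the mongo:4.0 image from the compose file should accept the data files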

@exitsoundhh

@franckadil I have downgraded Rocket.Chat to 0.74.2, the last version that worked for me.

I fixed my migration problem with this command:

use rocketchat
db.migrations.update({_id: 'control'},{$set:{locked:false,version:19}})

@franckadil

Thank you @exitsoundhh! I will test this ASAP.

@hkbnman

hkbnman commented May 6, 2019

@franckadil thank you for your help!
I have followed your instructions to set everything up successfully, and it even shows that the Rocket.Chat server is running in the shell. However, when I access the Rocket.Chat page using my domain, it shows a 504 Gateway Time-out.

FYI, I am using AWS to deploy my Rocket.Chat server and the docker-compose stack you created.

@franckadil

franckadil commented May 7, 2019

@hkbnman Hi!,

The 504 error indicates that the server, while acting as a gateway or proxy, did not get a timely response from the upstream. This can happen for multiple reasons.

Off the top of my head, I would ask you to check the NGINX conf files. If you followed my steps, you should edit those files and replace the site URL with yours.

The whole idea of using NGINX is to avoid having to use the port suffix at the end of your URL (http://test.com:3000) and to be able to get SSL termination (https).
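
A quick way to see whether NGINX can actually reach the app container (a sketch; it assumes curl is available inside the web image and that the upstream is the rocketchat container on port 3000):

docker exec web curl -s -o /dev/null -w "%{http_code}\n" http://rocketchat:3000
# anything other than a 2xx/3xx code here points at the upstream, not at NGINX itself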

Otherwise if that's done, I would need more details:

  • Can you share your logs?
  • Did you follow the steps with Nginx and SSL, or did you use the simple compose?
  • Are you using a VPC? If so, make sure your security group allows all the ports (443, 3000, etc.).
  • If you're not using a VPC, did you follow the UFW firewall steps?

Having more information about your setup will be helpful!

@hkbnman

hkbnman commented May 8, 2019

@franckadil

  • Can you share your logs?
    Yes, which logs do you need?

  • Did you follow the steps with Nginx and SSL, or did you use the simple compose?
    I followed exactly the instructions you provided:
    https://github.com/franckadil/Rocket.Chat-Docker-Compose-Stack

  • Are you using a VPC? If so, make sure your security group allows all the ports (443, 3000, etc.).
    I am using AWS EC2 and Route 53 for my rocket.chat deployment and have created a security group for
    ports 22, 80 and 443.

  • If you're not using a VPC, did you follow the UFW firewall steps?
    I did all the steps for setting up the UFW firewall based on your instructions.

Thanks for your kind assistance!

@franckadil

franckadil commented May 8, 2019

@hkbnman

While I haven't tested on AWS, it should normally work unless there is an issue with security at the instance or VPC level. I will run some tests on my end too as soon as I can.

Checking your logs:

  • Let's connect to the web container: docker exec -ti web /bin/bash

  • Let's display the logs: cat /var/log/nginx/rocketchat_error.log

  • Can you also share the result of the internal firewall check: sudo ufw status

On a terminal external to your instance, check that ports 80/443 are correctly allowed for inbound traffic: curl -I http://ec2-xx-xx-xx-xx.xx-xx-x.compute.amazonaws.com:80

You're supposed to get something similar to this:

HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Sun, 26 Nov 2017 20:17:12 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding

If you get an error:

Can you check your security-group inbound and outbound rules?
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/

You can also for testing purposes allow the following rules for Inbound traffic:

ALL Traffic | ALL | ALL | ::/0 | ALLOW
ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW

@hkbnman

hkbnman commented May 9, 2019

Hi @franckadil
I got the logs as below:

  • cat /var/log/nginx/rocketchat_error.log:
    2019/05/09 01:54:21 [error] 6#6: *13 upstream timed out (110: Connection timed out) while connecting to upstream, client: myip, server: mydomain.com, request: "GET /websocket HTTP/1.1", upstream: "http://myip:3000/ websocket", host: "mydomain.com"

  • sudo ufw status:

    To                         Action      From
    --                         ------      ----
    22/tcp                     ALLOW       Anywhere
    443/tcp                    ALLOW       Anywhere
    3000/tcp                   ALLOW       Anywhere
    80/tcp                     ALLOW       Anywhere
    22/tcp (v6)                ALLOW       Anywhere (v6)
    443/tcp (v6)               ALLOW       Anywhere (v6)
    3000/tcp (v6)              ALLOW       Anywhere (v6)
    80/tcp (v6)                ALLOW       Anywhere (v6)

  • And the curl check returned a 404 page:

    <title>404 Not Found</title>
    404 Not Found
    nginx/1.15.12

Thanks! @franckadil

@franckadil

franckadil commented May 9, 2019

Thanks for sharing this. It seems like an issue with the Nginx config files or with the Security Group preventing access to port 3000.

Have you tried testing the Security Group with these rules?
ALL Traffic | ALL | ALL | ::/0 | ALLOW
ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW

I'm investigating.

@franckadil

franckadil commented May 9, 2019

@hkbnman So here is what worked for me on Amazon AWS:

  • When creating the instance I also created a new security group with a friendly name that I can remember, then created the following rules:
HTTP TCP 80 0.0.0.0/0 Web
HTTP TCP 80 ::/0 Web
SSH TCP 22 MYIP/32 Admin SSH
Custom TCP Rule TCP 3000 0.0.0.0/0 Rocketchat
Custom TCP Rule TCP 3000 ::/0 Rocketchat
HTTPS TCP 443 0.0.0.0/0 SSL
HTTPS TCP 443 ::/0 SSL
  • I also made sure my domain name is properly set up and pointing to my EC2 instance's IPv4 address.
    After that I logged in and followed the guide until I reached the step where you run the first docker-compose up command.

After getting the Rocketchat screen:
(screenshot)

  • I browsed my site URL, and got the Hello Mars page.

  • Now it's time for the SSL. In order to achieve that I open a new terminal instance, so that the main docker-compose process is not halted. You need to keep your containers up for this.

  • To access my web container (the one in charge of proxying) I type docker exec -ti web /bin/bash and after that ./certify.sh.

  • This is where I started wearing my Hawaiian shirt and Mexican hat, but that is not yet the end; we still have to configure the NGINX configuration files.

(screenshot)

  • So now we need to exit from the web container with the command: exit

  • Make sure we are inside our stack folder and then run docker-compose stop

  • Time for some real suspense here: I rename the nginx_config.conf file to old_nginx_config.conf, and rename rename_me_ssl.conf to nginx_config.conf (but wait, you need to check that those files contain your site URL; in my case I replaced yourstites.com with something.junglewp.com).
    (screenshot)

  • Let's get back to the initial terminal window and run again: docker-compose up
    🐵 ... this will take quite some time; my advice: go run your errands ...

  • Hubot will throw some nasty errors; that's fine, at this point the Hubot user is not created yet on the site, the site is not up 😸 and the initial launch process is not done yet...

Wait ~15 minutes, then browse your site; it should be up and running:
(screenshot)

@hkbnman

hkbnman commented May 9, 2019

Great job! @franckadil
This time it worked! I can finally access the Rocket.Chat admin page!

But there are 2 problems:

1. Hubot keeps rebooting with this error:
hubot | [Thu May 09 2019 05:33:59 GMT+0000 (UTC)] ERROR Unable to Login: {"isClientSafe":true,"error":403,"reason":"User not found","message":"User not found [403]","errorType":"Meteor.Error"} Reason: User not found

2. Testing livechat in the Android app for new messages:
When a new message from a client is sent, the Android app shows a notification for the new message but then shows the error "The required "roomId" or "roomname" param provided does not match any channel [error-room-not-found]" and I can't join the room to read any messages from the client.

I know this may not be related to the Rocket.Chat installation issue, but please help to test whether these 2 problems also occur in your Rocket.Chat installation or just in my case.

Thank you for your kind assistance!

@franckadil

franckadil commented May 9, 2019

@hkbnman I am Happy you got your Rocket successfully launched 👍

  1. You need to log in to your administration console and create a user matching the configuration in your docker-compose file:
     - ROCKETCHAT_USER=Hubot
     - ROCKETCHAT_PASSWORD=BXyQnE93-9Uz

Then stop the stack with docker-compose stop and start it again.

Reference: https://rocket.chat/docs/bots/running-a-hubot-bot/

  2. This is out of the scope of this issue thread 😃 I genuinely wanted to provide some help and test the stack so I can help improve the official Rocketchat documentation with these docker-stack steps in the future.

I can guarantee that my installation is free of these errors after I set up the site. But you may want to post your issue in the Rocket.Chat mobile app section; this will help everyone having the same errors and provide valuable feedback to the developers.

I am happy I could help !

@hkbnman

hkbnman commented May 9, 2019

Thanks! @franckadil

@Srinivasan2017

Srinivasan2017 commented May 10, 2019

Hi team,

I am trying to run Rocket.Chat on Kubernetes on the IBM Cloud platform, using the latest version 1.0.3.

My mongo database is running in the container: {redacted}:27017

Replicaset name: mongodb-7cbd69f547

I mapped it in MONGO_URL & MONGO_OPLOG_URL as below. Still the error persists.

Can someone guide me here?

"spec": {
"containers": [
{
"name": "rocketchat",
"image": "rocketchat/rocket.chat:1.0.3",
"env": [
{
"name": "MONGO_URL",
"value": "mongodb://{redacted}:27017/rocketchat"
},
{
"name": "ROOT_URL",
"value": "http://{redacted}:3000"
},
{
"name": "MONGO_OPLOG_URL",
"value": "mongodb://{redacted}:27017/local"
}
],

@geekgonecrazy
Contributor

geekgonecrazy commented May 10, 2019

@Srinivasan2017 please don't paste public IPs like that. I've redacted them; please also secure your mongo server, it should not be exposed to the world with a public IP.

Everyone in this thread has already seen the information, so please make sure at least your mongo server's IP is changed.

I'd also recommend getting support on our forum or in #support on our community server.

@lucbarr

lucbarr commented May 14, 2019

Could someone please add a proper docker-compose.yaml to the AWS deploy documentation, step 7? https://rocket.chat/docs/installation/paas-deployments/aws/

@reetp

reetp commented May 15, 2019

Could someone please add a proper docker-compose.yaml to the AWS deploy documentation, step 7? https://rocket.chat/docs/installation/paas-deployments/aws/

Open a bug against the documentation. Even better, do a fix and a pull request...

https://github.com/RocketChat/docs/blob/master/installation/paas-deployments/aws/README.md

@AlexVonB

AlexVonB commented May 23, 2019

Expanding @LeeThompson's godsend post on authenticated instances: make sure you create the oploguser while using the admin database:

use admin
db.createUser({user: "oploguser", pwd: "password", roles: [{role: "read", db: "local"}]})
use local
rs.initiate()

If you don't, you might get an authentication error when starting rocketchat.

Also, when following the official docs, you might want to change the env-variable to have localhost as host and the replica set rs01:

MONGO_OPLOG_URL=mongodb://oploguser:password@localhost:27017/local?authSource=admin&replicaSet=rs01

When MongoDB says you are not in the primary replica, revert the changes made to mongod.conf and restart mongod. Then do all the initiate steps and activate sharding again.

Last but not least, when you cannot create the oploguser because of missing rights, give those rights to yourself:

use admin
db.grantRolesToUser('admin',[{ role: "root", db: "admin" }])
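
A quick way to check that the oplog user works before pointing Rocket.Chat at it (a sketch using the example credentials above; assumes a mongo shell recent enough to accept connection-string URIs):

mongo "mongodb://oploguser:password@localhost:27017/local?authSource=admin&replicaSet=rs01" --eval "db.oplog.rs.findOne()"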

@hleb-rubanau

From a cluster perspective, I am strongly concerned about potential security implications.

One of my projects is multi-tenant -- for each customer there's an independent Rocket.Chat setup, which connects to a dedicated database with a dedicated set of credentials.

All databases are running on the shared cluster though.

It all looks fairly isolated from a security viewpoint... until we introduce the oplog access, which (as far as I understand) is shared across all logical databases.

So now, if a customer's Rocket.Chat is hacked into executing arbitrary code (e.g. due to a vulnerability in node or whatever), the attacker can gain read access to the shared oplog and steal info about all customer databases. Not good... :(

I'd strongly vote to make this feature optional ASAP.

@geekgonecrazy
Contributor

If you use a separate username / password for each database, you will be unable to subscribe to events from another database.
