
[xray] XRAY is trying to connect postgresql which is not defined anywhere within yaml files #1720

Closed
guresonur opened this issue Jan 12, 2023 · 5 comments


@guresonur

Is this a request for help?:

Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes: v3.10.3

Which chart: xray

Which product license (Enterprise/Pro/oss): oss

JFrog support reference (if already raised with support team):

What happened: Artifactory installed successfully. When installing Xray with Helm, I can see it trying to connect to an internal PostgreSQL (which is in fact disabled). It takes this database's attributes from JF_ environment variables. The strange thing is that it also connects to my external PostgreSQL successfully.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible): Install Xray with the Helm chart and use an external PostgreSQL.

Anything else we need to know:

2023-01-12T11:55:39.000Z [shell] [INFO ] [] [installerCommon.sh:2819] [main] - System.yaml validation succeeded
Attempt to connect JFrogURL succeeded
Database ping failed with error dial tcp 10.12.69.8:5432: connect: connection refused
Database connection check failed <nil>

Why is it trying to connect to this PostgreSQL database even though the internal PostgreSQL is not used?

@oumkale
Member

oumkale commented Jan 13, 2023

Hi @guresonur, will you please share your values.yaml file?

@guresonur
Author

Please see below; I've changed the URLs.

```yaml
global:
  imagePullSecrets:
    - registrypullsecret
  versions: {}






  customCertificates:
    enabled: false
  nodeSelector: {}

initContainerImage: changed_url/changed_url
imagePullPolicy: IfNotPresent
initContainers:
  resources:
    requests:
      memory: "50Mi"
      cpu: "10m"
    limits:
      memory: "1Gi"
      cpu: "1"
imagePullSecrets:

systemYamlOverride:
  existingSecret:
  dataKey:
replicaCount: 2
waitForDatabase: true
xray:
  name: xray
  labels: {}
  persistence:
    mountPath: /var/opt/jfrog/xray
  unifiedSecretInstallation: false
  schedulerName:
  priorityClass:
    create: false
    value: 1000000000
  customCertificates:
    enabled: false
  annotations: {}
  masterKeySecretName: xray-master-key


  joinKeySecretName: artifactory-joinkey-secret
  consoleLog: false
  jfrogUrl: xxxxx

  podAntiAffinity:
    type: "soft"
    topologyKey: "kubernetes.io/hostname"
  openMetrics:
    enabled: false
    filebeat:
      enabled: false
      log:
        enabled: false
        level: "info"
      elasticsearch:
        url: "Elasticsearch url where JFrog Insight is installed For example, http://<ip_address>:8082"
        username: ""
        password: ""
  systemYaml: |
    configVersion: 1
    router:
      serviceRegistry:
        insecure: {{ .Values.router.serviceRegistry.insecure }}
    shared:
    {{- if .Values.xray.openMetrics.enabled }}
      metrics:
        enabled: true
      {{- if .Values.xray.openMetrics.filebeat.enabled }}
        filebeat: {{ toYaml .Values.xray.openMetrics.filebeat | nindent 6 }}
      {{- end }}
    {{- end }}
      logging:
        consoleLog:
          enabled: {{ .Values.xray.consoleLog }}
      jfrogUrl: "{{ tpl (required "\n\nxray.jfrogUrl or global.jfrogUrl is required! This allows to connect to Artifactory.\nYou can copy the JFrog URL from Admin > Security > Settings" (include "xray.jfrogUrl" .)) . }}"
      database:
      {{- if .Values.postgresql.enabled }}
        type: "postgresql"
        driver: "org.postgresql.Driver"
        username: "{{ .Values.postgresql.postgresqlUsername }}"
        url: "postgres://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}?sslmode=disable"
      {{- else }}
        type: {{ .Values.database.type }}
        driver: {{ .Values.database.driver }}
      {{- end }}
      {{- if and (not .Values.rabbitmq.enabled) (not .Values.common.rabbitmq.connectionConfigFromEnvironment) }}
      rabbitMq:
        erlangCookie:
          value: "{{ .Values.rabbitmq.external.erlangCookie }}"
      {{- if not .Values.rabbitmq.external.secrets }}
        url: "{{ tpl .Values.rabbitmq.external.url . }}"
        username: "{{ .Values.rabbitmq.external.username }}"
        password: "{{ .Values.rabbitmq.external.password }}"
      {{- end }}
      {{- end }}
      {{- if .Values.xray.mongoUrl }}
      mongo:
        url: "{{ .Values.xray.mongoUrl }}"
        username: "{{ .Values.xray.mongoUsername }}"
        password: "{{ .Values.xray.mongoPassword }}"
      {{- end }}
    {{- if or .Values.server.mailServer .Values.server.indexAllBuilds }}
    server:
      {{- if .Values.server.mailServer }}
      mailServer: "{{ .Values.server.mailServer }}"
      {{- end }}
      {{- if .Values.server.indexAllBuilds }}
      indexAllBuilds: {{ .Values.server.indexAllBuilds }}
      {{- end }}
    {{- end }}
  loggers: []

  loggersResources: {}
rbac:
  create: false
  role:
    rules:
      - apiGroups:
          - ''
        resources:
          - services
          - endpoints
          - pods
        verbs:
          - get
          - watch
          - list
      - apiGroups:
          - 'batch'
        resources:
          - jobs
        verbs:
          - get
          - watch
          - list
          - create
          - delete
networkpolicy: []

nodeSelector: {}
affinity: {}
tolerations: []
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 70
logger:
  image:
    registry: changed_url
    repository: changed_url
    tag: 8.7.1
serviceAccount:
  create: false
  name:
  automountServiceAccountToken: true
postgresql:
  enabled: false
  image:
    registry: changed_url
    repository: changed_url
    tag: 13.4.0-debian-10-r39
  postgresqlUsername: xray
  postgresqlPassword: ""
  postgresqlDatabase: xraydb
  postgresqlExtendedConf:
    listenAddresses: "*"
    maxConnections: "1500"
  service:
    port: 5432
  persistence:
    enabled: true
    size: 300Gi
  primary:
    nodeSelector: {}
    affinity: {}
    tolerations: []
  readReplicas:
    nodeSelector: {}
    affinity: {}
    tolerations: []
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "4"
database:
  type: "postgresql"
  driver: "org.postgresql.Driver"
  url: changedpostgresqlurl
  user: xxxx
  password: xxxx
  actualUsername:
  secrets: {}
rabbitmq:
  enabled: true
  replicaCount: 3
  schedulers: "2"
  vm_memory_high_watermark_absolute: 1700MB
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "2"
  rbac:
    create: true
  image:
    registry: changed_url
    repository: changed_url
    tag: 3.9.15-debian-10-r5
  auth:
    username: guest
    password: ""
    erlangCookie: XRAYRABBITMQCLUSTER
  maxAvailableSchedulers: null
  onlineSchedulers: null
  extraEnvVars:
    - name: RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS
      value: "+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"
  service:
    port: 5672
  external:
    username:
    password:
    url:
    erlangCookie:
    secrets: {}
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    size: 20Gi
  extraSecretsPrependReleaseName: true
  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "permissions": [
            {
              "user": "{{ .Values.auth.username }}",
              "vhost": "/",
              "configure": ".*",
              "write": ".*",
              "read": ".*"
            }
          ],
          "vhosts": [
            {
              "name": "/"
            }
          ],
          "policies": [
            {
              "name": "ha-all",
              "apply-to": "all",
              "pattern": ".*",
              "vhost": "/",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic"
              }
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: '{{ .Release.Name }}-load-definition'
  nodeSelector: {}
  tolerations: []
  affinity: {}
common:
  xrayUserId: 1035
  xrayGroupId: 1035

  xrayConfig:
    stdOutEnabled: true
    indexAllBuilds: false
    support-router: true
  rabbitmq:
    connectionConfigFromEnvironment: true
  preStartCommand:
  customVolumes: ""

  customVolumeMounts: ""

  configMaps: ""

  customInitContainersBegin: ""

  customInitContainers: ""

  customSidecarContainers: ""

  customSecrets:

  persistence:
    enabled: false

    accessMode: ReadWriteOnce
    size: 50Gi
analysis:
  name: xray-analysis
  image:
    registry: changed_url
    repository: changed_url
  internalPort: 7000
  externalPort: 7000
  annotations: {}
  lifecycle: {}

  customVolumeMounts: ""

  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.analysis.internalPort }}/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 3
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.analysis.internalPort }}/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 30
      periodSeconds: {{ .Values.probes.timeoutSeconds }}
      timeoutSeconds: 1
  preStartCommand:
  resources:
    requests:
      memory: "300Mi"
      cpu: "50m"
    limits:
      memory: "8Gi"
      cpu: "6"
indexer:
  name: xray-indexer
  image:
    registry: changed_url
    repository: changed_url
  internalPort: 7002
  externalPort: 7002
  annotations: {}
  lifecycle: {}

  customVolumeMounts: ""

  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.indexer.internalPort }}/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 3
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.indexer.internalPort }}/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 30
      periodSeconds: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
  preStartCommand:
  resources:
    requests:
      memory: "300Mi"
      cpu: "50m"
    limits:
      memory: "8Gi"
      cpu: "8"
persist:
  name: xray-persist
  image:
    registry: changed_url
    repository: changed_url
  internalPort: 7003
  externalPort: 7003
  annotations: {}
  lifecycle: {}

  customVolumeMounts: ""

  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.persist.internalPort }}/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 3
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.persist.internalPort }}/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 30
      periodSeconds: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
  preStartCommand:
  resources:
    requests:
      memory: "300Mi"
      cpu: "50m"
    limits:
      memory: "8Gi"
      cpu: "6"
server:
  name: xray-server
  image:
    registry: changed_url
    repository: changed_url
  internalPort: 8000
  externalPort: 80
  annotations: {}
  lifecycle: {}


  customVolumeMounts: ""

  service:
    type: ClusterIP
    name: xray
    annotations: {}
    additionalSpec: ""
  statefulset:
    annotations: {}
  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.server.internalPort }}/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 3
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.server.internalPort }}/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 30
      periodSeconds: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
  preStartCommand:
  resources:
    requests:
      memory: "300Mi"
      cpu: "100m"
    limits:
      memory: "8Gi"
      cpu: "6"
router:
  name: router
  image:
    registry: changed_url
    repository: changed_url
    tag: 7.56.0
    imagePullPolicy: IfNotPresent
  serviceRegistry:
    insecure: false
  internalPort: 8082
  externalPort: 8082
  tlsEnabled: false
  resources: {}

  lifecycle: {}

  annotations: {}
  customVolumeMounts: ""

  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "xray.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 5
      successThreshold: 1
  readinessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "xray.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/readiness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}60{{ else }}0{{ end }}
      periodSeconds: 10
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      failureThreshold: 5
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "xray.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 30
      periodSeconds: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
  persistence:
    mountPath: "/var/opt/jfrog/router"
  loggers: []
observability:
  name: observability
  image:
    registry: changed_url
    repository: changed_url
    tag: 1.12.0
    imagePullPolicy: IfNotPresent
  internalPort: 8036
  resources: {}

  lifecycle: {}

  livenessProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.observability.internalPort }}/api/v1/system/liveness
      initialDelaySeconds: {{ if semverCompare "<v1.20.0-0" .Capabilities.KubeVersion.Version }}90{{ else }}0{{ end }}
      failureThreshold: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
      periodSeconds: 10
      successThreshold: 1
  startupProbe:
    enabled: true
    config: |
      exec:
        command:
          - sh
          - -c
          - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.observability.internalPort }}/api/v1/system/readiness
      initialDelaySeconds: 30
      failureThreshold: 90
      periodSeconds: 5
      timeoutSeconds: {{ .Values.probes.timeoutSeconds }}
  persistence:
    mountPath: "/var/opt/jfrog/observability"
filebeat:
  enabled: false
  name: xray-filebeat
  image:
    repository: "changed_url"
    version: 7.16.2
  logstashUrl: "logstash:5044"
  annotations: {}
  terminationGracePeriod: 10
  livenessProbe:
    exec:
      command:
        - sh
        - -c
        - |
          curl --fail 127.0.0.1:5066
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
  readinessProbe:
    exec:
      command:
        - sh
        - -c
        - |
          filebeat test output
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
  resources: {}

  filebeatYml: |
    logging.level: info
    path.data: {{ .Values.xray.persistence.mountPath }}/log/filebeat
    name: xray-filebeat
    queue.spool:
      file:
        permissions: 0760
    filebeat.inputs:
    - type: log
      enabled: true
      close_eof: ${CLOSE:false}
      paths:
         - {{ .Values.xray.persistence.mountPath }}/log/*.log
      fields:
        service: "jfxr"
        log_type: "xray"
    output:
      logstash:
         hosts: ["{{ .Values.filebeat.logstashUrl }}"]
additionalResources: ""
hostAliases: []

probes:
  timeoutSeconds: 5
quota:
  enabled: true
  jobCount: 100
```

@oumkale
Member

oumkale commented Jan 13, 2023

@guresonur Can you please reach out to the JFrog support team, who can do a one-on-one session to diagnose this and provide an optimal solution?

@chukka closed this as completed Jan 31, 2023
@andeki92

andeki92 commented Oct 24, 2024

We're seeing the same thing with a fairly minimal setup in our values.yaml file. Did you resolve this? If so, what was the magic dust that made it look at the externally configured Postgres DB?

Edit: Would be nice to add a resolution to the issue, not just close it 😇

Edit 2: In the spirit of my first edit, I'll post our solution. It turns out that trying to connect Xray with a JDBC URL is a bad idea (since Xray is Go-based). Changing the database URL from `jdbc:postgresql://` to `postgres://` worked!

Thanks for a quick reply @guresonur 🙌🏻
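Spelled out against the chart's values.yaml, the working external-database shape would look roughly like this; the host, database name, and credentials below are placeholders:

```yaml
postgresql:
  enabled: false                      # keep the bundled PostgreSQL sub-chart off
database:
  type: "postgresql"
  driver: "org.postgresql.Driver"
  # Go services dial a libpq-style DSN; a jdbc:postgresql:// prefix leaves
  # the host unparsed:
  url: "postgres://xray-db.example.internal:5432/xraydb?sslmode=disable"
  user: "xray"
  password: "example-password"
```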

@guresonur
Author

guresonur commented Oct 24, 2024

@andeki92 I don't recall what happened with the Helm chart in the end, but we ended up installing it on a VM as a Linux archive.
