Envoy HTTPS ingress always returns 404 when SSL config specifies a domain name #5292

Closed
bnlcnd opened this issue Dec 13, 2018 · 16 comments
Labels: question, stale

bnlcnd commented Dec 13, 2018

Title: The HTTPS listener always returns 404 if the domain setting is the real domain name of the certificate

Bug Description:

I have a very simple Envoy YAML config file. It defines a simple routing rule with a cert and key for SSL. The certificate is self-signed and issued to hello.com. In envoy.yaml, if I set the domain to *, requests are routed properly. But if I set the domain to "hello.com", requests get a 404. The Docker instance has an IP, which I mapped to the host name hello.com on my testing client, and I send requests to that domain name. I also tried the IP directly; same error.

SoapUI request URL:
https://hello.com:8443/mock-domain
or
https://192.168.64.135:8443/mock-domain
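
For a command-line reproduction outside SoapUI, something like the following should exercise the same path (a sketch, assuming curl is available; -k skips verification of the self-signed certificate and --resolve pins hello.com to the container IP):

curl -vk --resolve hello.com:8443:192.168.64.135 https://hello.com:8443/mock-domain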

SoapUI HTTP log:

Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "HTTP/1.1 404 Not Found[\r][\n]"
Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "date: Thu, 13 Dec 2018 19:45:50 GMT[\r][\n]"
Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "server: envoy[\r][\n]"
Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "connection: close[\r][\n]"
Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "content-length: 0[\r][\n]"
Thu Dec 13 14:45:50 EST 2018:DEBUG:<< "[\r][\n]"

Config envoy.yaml:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
#              domains: ["*"]
              domains: ["hello.com"]
              routes:
              - match:
                  prefix: "/mock-domain" # a test for mock-domain
                route:
                  cluster: mock-domain
          http_filters:
          - name: envoy.router
      tls_context:
        common_tls_context:
            tls_certificates:
            - certificate_chain: { filename: "/etc/crt" }
              private_key: { filename: "/etc/key" }
  clusters:
  - name: mock-domain
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: mock-domain
        port_value: 10080
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

Startup Logs:


[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:207] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:209] statically linked extensions:
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:211]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:214]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:217]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:220]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:222]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:224]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.zipkin
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:227]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-13 19:45:26.928][000005][info][main] [source/server/server.cc:230]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-13 19:45:26.933][000005][info][main] [source/server/server.cc:272] admin address: 0.0.0.0:8001
[2018-12-13 19:45:26.934][000005][debug][main] [source/server/overload_manager_impl.cc:171] No overload action configured for envoy.overload_actions.stop_accepting_connections.
[2018-12-13 19:45:26.934][000005][debug][main] [source/server/overload_manager_impl.cc:171] No overload action configured for envoy.overload_actions.stop_accepting_connections.
[2018-12-13 19:45:26.934][000005][info][config] [source/server/configuration_impl.cc:51] loading 0 static secret(s)
[2018-12-13 19:45:26.934][000005][info][config] [source/server/configuration_impl.cc:57] loading 1 cluster(s)
[2018-12-13 19:45:26.936][000005][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:818] adding TLS initial cluster mock-domain
[2018-12-13 19:45:26.936][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:1183] starting async DNS resolution for mock-domain
[2018-12-13 19:45:26.936][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 3436 milliseconds
[2018-12-13 19:45:26.936][000005][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:63] cm init: adding: cluster=mock-domain primary=1 secondary=0
[2018-12-13 19:45:26.936][000005][info][config] [source/server/configuration_impl.cc:62] loading 1 listener(s)
[2018-12-13 19:45:26.936][000005][debug][config] [source/server/configuration_impl.cc:64] listener #0:
[2018-12-13 19:45:26.936][000005][debug][config] [source/server/listener_manager_impl.cc:640] begin add/update listener: name=listener_0 hash=6635500297793231887
[2018-12-13 19:45:26.937][000005][debug][config] [source/server/listener_manager_impl.cc:40]   filter #0:
[2018-12-13 19:45:26.937][000005][debug][config] [source/server/listener_manager_impl.cc:41]     name: envoy.http_connection_manager
[2018-12-13 19:45:26.937][000005][debug][config] [source/server/listener_manager_impl.cc:44]   config: {"http_filters":[{"name":"envoy.router"}],"stat_prefix":"ingress_http","codec_type":"AUTO","route_config":{"virtual_hosts":[{"domains":["hello.com"],"routes":[{"match":{"prefix":"/mock-domain"},"route":{"cluster":"mock-domain"}}],"name":"local_service"}],"name":"local_route"}}
[2018-12-13 19:45:26.938][000005][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:312]     http filter #0
[2018-12-13 19:45:26.938][000005][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:313]       name: envoy.router
[2018-12-13 19:45:26.938][000005][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:317]     config: {}
[2018-12-13 19:45:26.942][000005][debug][config] [source/server/listener_manager_impl.cc:527] add active listener: name=listener_0, hash=6635500297793231887, address=0.0.0.0:8443
[2018-12-13 19:45:26.942][000005][info][config] [source/server/configuration_impl.cc:95] loading tracing configuration
[2018-12-13 19:45:26.942][000005][info][config] [source/server/configuration_impl.cc:115] loading stats sink configuration
[2018-12-13 19:45:26.942][000005][info][main] [source/server/server.cc:458] starting main dispatch loop
[2018-12-13 19:45:26.942][000009][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2018-12-13 19:45:26.943][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 4686 milliseconds
[2018-12-13 19:45:26.953][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 3436 milliseconds
[2018-12-13 19:45:26.961][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 4374 milliseconds
[2018-12-13 19:45:26.962][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:1190] async DNS resolution complete for mock-domain
[2018-12-13 19:45:26.962][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:1212] DNS hosts have changed for mock-domain
[2018-12-13 19:45:26.962][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:587] initializing secondary cluster mock-domain completed
[2018-12-13 19:45:26.962][000005][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:953] membership update for TLS cluster mock-domain
[2018-12-13 19:45:26.962][000005][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:91] cm init: init complete: cluster=mock-domain primary=0 secondary=0
[2018-12-13 19:45:26.962][000005][info][upstream] [source/common/upstream/cluster_manager_impl.cc:136] cm init: all clusters initialized
[2018-12-13 19:45:26.962][000005][info][main] [source/server/server.cc:430] all clusters initialized. initializing init manager
[2018-12-13 19:45:26.962][000005][info][config] [source/server/listener_manager_impl.cc:910] all dependencies initialized. starting workers
[2018-12-13 19:45:26.962][000011][debug][main] [source/server/worker_impl.cc:98] worker entering dispatch loop
[2018-12-13 19:45:26.962][000011][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:818] adding TLS initial cluster mock-domain
[2018-12-13 19:45:26.962][000011][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:953] membership update for TLS cluster mock-domain
[2018-12-13 19:45:26.962][000012][debug][main] [source/server/worker_impl.cc:98] worker entering dispatch loop
[2018-12-13 19:45:26.962][000012][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:818] adding TLS initial cluster mock-domain
[2018-12-13 19:45:26.962][000014][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2018-12-13 19:45:26.962][000012][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:953] membership update for TLS cluster mock-domain
[2018-12-13 19:45:26.962][000013][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2018-12-13 19:45:31.943][000005][debug][main] [source/server/server.cc:144] flushing stats
[2018-12-13 19:45:31.961][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:1183] starting async DNS resolution for mock-domain
[2018-12-13 19:45:31.962][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2018-12-13 19:45:31.965][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 3124 milliseconds
[2018-12-13 19:45:31.979][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 2812 milliseconds
[2018-12-13 19:45:31.992][000005][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 3750 milliseconds
[2018-12-13 19:45:31.993][000005][debug][upstream] [source/common/upstream/upstream_impl.cc:1190] async DNS resolution complete for mock-domain

Trace Logs for the transaction:

[2018-12-13 19:45:35.229][000012][debug][main] [source/server/connection_handler_impl.cc:236] [C0] new connection
[2018-12-13 19:45:35.230][000012][debug][connection] [source/common/ssl/ssl_socket.cc:135] [C0] handshake error: 2
[2018-12-13 19:45:35.248][000012][debug][connection] [source/common/ssl/ssl_socket.cc:135] [C0] handshake error: 2
[2018-12-13 19:45:35.248][000012][debug][connection] [source/common/ssl/ssl_socket.cc:135] [C0] handshake error: 2
[2018-12-13 19:45:35.265][000012][debug][connection] [source/common/ssl/ssl_socket.cc:135] [C0] handshake error: 2
[2018-12-13 19:45:35.265][000012][debug][connection] [source/common/ssl/ssl_socket.cc:135] [C0] handshake error: 2
[2018-12-13 19:45:35.270][000012][debug][connection] [source/common/ssl/ssl_socket.cc:124] [C0] handshake complete
[2018-12-13 19:45:35.274][000012][debug][http] [source/common/http/conn_manager_impl.cc:200] [C0] new stream
[2018-12-13 19:45:35.277][000012][debug][http] [source/common/http/conn_manager_impl.cc:529] [C0][S13148359650461284382] request headers complete (end_stream=false):
':authority', 'hello.com:8443'
':path', '/mock-domain'
':method', 'POST'
'accept-encoding', 'gzip,deflate'
'content-type', 'text/xml;charset=UTF-8'
'content-length', '799'
'connection', 'Keep-Alive'
'user-agent', 'Apache-HttpClient/4.1.1 (java 1.5)'

[2018-12-13 19:45:35.277][000012][debug][router] [source/common/router/router.cc:221] [C0][S13148359650461284382] no cluster match for URL '/mock-domain'
[2018-12-13 19:45:35.277][000012][debug][http] [source/common/http/conn_manager_impl.cc:1180] [C0][S13148359650461284382] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Thu, 13 Dec 2018 19:45:34 GMT'
'server', 'envoy'
'connection', 'close'

[2018-12-13 19:45:35.277][000012][debug][connection] [source/common/network/connection_impl.cc:101] [C0] closing data_to_write=116 type=2
[2018-12-13 19:45:35.277][000012][debug][connection] [source/common/network/connection_impl.cc:153] [C0] setting delayed close timer with timeout 1000 ms
[2018-12-13 19:45:35.277][000012][debug][connection] [source/common/network/connection_impl.cc:101] [C0] closing data_to_write=116 type=2
[2018-12-13 19:45:35.293][000012][debug][connection] [source/common/network/connection_impl.cc:460] [C0] remote early close
[2018-12-13 19:45:35.293][000012][debug][connection] [source/common/network/connection_impl.cc:183] [C0] closing socket: 0
[2018-12-13 19:45:35.293][000012][debug][connection] [source/common/ssl/ssl_socket.cc:233] [C0] SSL shutdown: rc=0
[2018-12-13 19:45:35.293][000012][debug][main] [source/server/connection_handler_impl.cc:68] [C0] adding to cleanup list
mattklein123 added the question label Dec 13, 2018

bnlcnd (Author) commented Dec 14, 2018

Some extra background here.

The command to generate the crt and key is:

openssl req -x509 -newkey rsa:4096 -keyout docker-build/pem/key -out docker-build/pem/crt -days 365 -nodes -subj '/CN=hello.com'
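
To confirm what the generated certificate was actually issued to, it can be inspected in place (a quick check, assuming the files are readable on the host):

openssl x509 -in docker-build/pem/crt -noout -subject -issuer -dates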

I previously tried to follow the article https://www.learnenvoy.io/articles/ssl.html, but it didn't work: the key and cert generated by the command in that article are not recognized by the latest Envoy alpine image, and Envoy fails to start up.

Only after switching to the command above does Envoy start up and function normally, and even then only when the domain is a wildcard:

domains: ["*"]
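
As an aside, modern TLS clients validate the subjectAltName extension rather than the CN, so a CN-only certificate like this one may still trigger hostname warnings in strict clients. With OpenSSL 1.1.1 or newer, a SAN can be added to the same command (a sketch; verify -addext support against your OpenSSL version):

openssl req -x509 -newkey rsa:4096 -keyout docker-build/pem/key -out docker-build/pem/crt -days 365 -nodes -subj '/CN=hello.com' -addext 'subjectAltName=DNS:hello.com'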

lizan (Member) commented Dec 15, 2018

You'll need to include the port number in domains if the port is non-standard, e.g. "hello.com:8443"
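
A minimal sketch of that fix, assuming Envoy's wildcard domain matching behaves as documented ("hello.com:*" should match the host regardless of port):

domains:
- "hello.com"       # Host header sent without a port
- "hello.com:*"     # Host header with any port, e.g. hello.com:8443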

bnlcnd (Author) commented Jan 7, 2019

Thanks @lizan. It does work once I add the port number.
However, HTTP-to-HTTPS redirection does not work with this setting. Please refer to the config file below:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8443
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "hello.com:8443"
              routes:
              - match:
                  prefix: "/mock-domain" # a test for mock-domain
                route:
                  cluster: mock-domain
          http_filters:
          - name: envoy.router
      tls_context:
        common_tls_context:
            tls_certificates:
            - certificate_chain: { filename: "/etc/crt" }
              private_key: { filename: "/etc/key" }
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: httpsbackend
              domains:
              - "hello.com:8080"   # IP not allowed. only domain name is allowed. 
              routes:
              - match:
                  prefix: "/"
                redirect:
                  port_redirect: 8443
                  https_redirect: true
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: mock-domain
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: mock-domain
        port_value: 10080
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

When I use SoapUI to hit

https://hello.com:8443/mock-domain

I get the response.

If I hit

http://hello.com:8080/mock-domain

I expect it to redirect to the HTTPS URL above. But I actually see a connection timeout, and Envoy does not emit any debug log either.
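
One sketch worth trying on the 8080 virtual host (unverified against this setup): widen domains so the redirect matches whatever Host header actually arrives, since with Docker port mapping the client may send a Host with a port other than 8080:

            virtual_hosts:
            - name: httpsbackend
              domains:
              - "hello.com"
              - "hello.com:*"   # any port in the Host header
              routes:
              - match:
                  prefix: "/"
                redirect:
                  port_redirect: 8443
                  https_redirect: true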

stale bot commented Feb 6, 2019

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.


stale bot added the stale label Feb 6, 2019

stale bot commented Feb 13, 2019

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

stale bot closed this as completed Feb 13, 2019

bnlcnd (Author) commented Feb 15, 2019

Can someone mark it as "help wanted"?

dio added the help wanted label Feb 15, 2019
dio reopened this Feb 15, 2019
stale bot removed the stale label Feb 15, 2019

lizan (Member) commented Feb 15, 2019

@bndw do you have the stats from Envoy?

bndw (Contributor) commented Feb 15, 2019

@lizan not sure if you intentionally tagged me here. If so, what stats are you referring to?

lizan (Member) commented Feb 15, 2019

ugh sorry, I meant @bnlcnd

bnlcnd (Author) commented Feb 17, 2019

@lizan - where can I get the stats? Do you mean the log?

bnlcnd (Author) commented Feb 21, 2019

The stats result is attached below:
Command:

[etapp@rh75docker serviceproxy]$ curl -X GET http://192.168.64.135:8001/stats

Console output:

cluster.content-router.bind_errors: 0
cluster.content-router.circuit_breakers.default.cx_open: 0
cluster.content-router.circuit_breakers.default.rq_open: 0
cluster.content-router.circuit_breakers.default.rq_pending_open: 0
cluster.content-router.circuit_breakers.default.rq_retry_open: 0
cluster.content-router.circuit_breakers.high.cx_open: 0
cluster.content-router.circuit_breakers.high.rq_open: 0
cluster.content-router.circuit_breakers.high.rq_pending_open: 0
cluster.content-router.circuit_breakers.high.rq_retry_open: 0
cluster.content-router.external.upstream_rq_200: 1
cluster.content-router.external.upstream_rq_2xx: 1
cluster.content-router.external.upstream_rq_503: 1
cluster.content-router.external.upstream_rq_5xx: 1
cluster.content-router.external.upstream_rq_completed: 2
cluster.content-router.lb_healthy_panic: 0
cluster.content-router.lb_local_cluster_not_ok: 0
cluster.content-router.lb_recalculate_zone_structures: 0
cluster.content-router.lb_subsets_active: 0
cluster.content-router.lb_subsets_created: 0
cluster.content-router.lb_subsets_fallback: 0
cluster.content-router.lb_subsets_removed: 0
cluster.content-router.lb_subsets_selected: 0
cluster.content-router.lb_zone_cluster_too_small: 0
cluster.content-router.lb_zone_no_capacity_left: 0
cluster.content-router.lb_zone_number_differs: 0
cluster.content-router.lb_zone_routing_all_directly: 0
cluster.content-router.lb_zone_routing_cross_zone: 0
cluster.content-router.lb_zone_routing_sampled: 0
cluster.content-router.max_host_weight: 1
cluster.content-router.membership_change: 1
cluster.content-router.membership_healthy: 1
cluster.content-router.membership_total: 1
cluster.content-router.original_dst_host_invalid: 0
cluster.content-router.retry_or_shadow_abandoned: 0
cluster.content-router.update_attempt: 68
cluster.content-router.update_empty: 0
cluster.content-router.update_failure: 0
cluster.content-router.update_no_rebuild: 67
cluster.content-router.update_success: 68
cluster.content-router.upstream_cx_active: 1
cluster.content-router.upstream_cx_close_notify: 0
cluster.content-router.upstream_cx_connect_attempts_exceeded: 0
cluster.content-router.upstream_cx_connect_fail: 1
cluster.content-router.upstream_cx_connect_timeout: 0
cluster.content-router.upstream_cx_destroy: 0
cluster.content-router.upstream_cx_destroy_local: 0
cluster.content-router.upstream_cx_destroy_local_with_active_rq: 0
cluster.content-router.upstream_cx_destroy_remote: 0
cluster.content-router.upstream_cx_destroy_remote_with_active_rq: 0
cluster.content-router.upstream_cx_destroy_with_active_rq: 0
cluster.content-router.upstream_cx_http1_total: 2
cluster.content-router.upstream_cx_http2_total: 0
cluster.content-router.upstream_cx_idle_timeout: 0
cluster.content-router.upstream_cx_max_requests: 0
cluster.content-router.upstream_cx_none_healthy: 0
cluster.content-router.upstream_cx_overflow: 0
cluster.content-router.upstream_cx_protocol_error: 0
cluster.content-router.upstream_cx_rx_bytes_buffered: 3965
cluster.content-router.upstream_cx_rx_bytes_total: 3965
cluster.content-router.upstream_cx_total: 2
cluster.content-router.upstream_cx_tx_bytes_buffered: 0
cluster.content-router.upstream_cx_tx_bytes_total: 3952
cluster.content-router.upstream_flow_control_backed_up_total: 0
cluster.content-router.upstream_flow_control_drained_total: 0
cluster.content-router.upstream_flow_control_paused_reading_total: 0
cluster.content-router.upstream_flow_control_resumed_reading_total: 0
cluster.content-router.upstream_rq_200: 1
cluster.content-router.upstream_rq_2xx: 1
cluster.content-router.upstream_rq_503: 1
cluster.content-router.upstream_rq_5xx: 1
cluster.content-router.upstream_rq_active: 0
cluster.content-router.upstream_rq_cancelled: 0
cluster.content-router.upstream_rq_completed: 2
cluster.content-router.upstream_rq_maintenance_mode: 0
cluster.content-router.upstream_rq_pending_active: 0
cluster.content-router.upstream_rq_pending_failure_eject: 1
cluster.content-router.upstream_rq_pending_overflow: 0
cluster.content-router.upstream_rq_pending_total: 2
cluster.content-router.upstream_rq_per_try_timeout: 0
cluster.content-router.upstream_rq_retry: 0
cluster.content-router.upstream_rq_retry_overflow: 0
cluster.content-router.upstream_rq_retry_success: 0
cluster.content-router.upstream_rq_rx_reset: 0
cluster.content-router.upstream_rq_timeout: 0
cluster.content-router.upstream_rq_total: 2
cluster.content-router.upstream_rq_tx_reset: 0
cluster.content-router.version: 0
cluster.mock-domain.bind_errors: 0
cluster.mock-domain.circuit_breakers.default.cx_open: 0
cluster.mock-domain.circuit_breakers.default.rq_open: 0
cluster.mock-domain.circuit_breakers.default.rq_pending_open: 0
cluster.mock-domain.circuit_breakers.default.rq_retry_open: 0
cluster.mock-domain.circuit_breakers.high.cx_open: 0
cluster.mock-domain.circuit_breakers.high.rq_open: 0
cluster.mock-domain.circuit_breakers.high.rq_pending_open: 0
cluster.mock-domain.circuit_breakers.high.rq_retry_open: 0
cluster.mock-domain.lb_healthy_panic: 0
cluster.mock-domain.lb_local_cluster_not_ok: 0
cluster.mock-domain.lb_recalculate_zone_structures: 0
cluster.mock-domain.lb_subsets_active: 0
cluster.mock-domain.lb_subsets_created: 0
cluster.mock-domain.lb_subsets_fallback: 0
cluster.mock-domain.lb_subsets_removed: 0
cluster.mock-domain.lb_subsets_selected: 0
cluster.mock-domain.lb_zone_cluster_too_small: 0
cluster.mock-domain.lb_zone_no_capacity_left: 0
cluster.mock-domain.lb_zone_number_differs: 0
cluster.mock-domain.lb_zone_routing_all_directly: 0
cluster.mock-domain.lb_zone_routing_cross_zone: 0
cluster.mock-domain.lb_zone_routing_sampled: 0
cluster.mock-domain.max_host_weight: 1
cluster.mock-domain.membership_change: 1
cluster.mock-domain.membership_healthy: 1
cluster.mock-domain.membership_total: 1
cluster.mock-domain.original_dst_host_invalid: 0
cluster.mock-domain.retry_or_shadow_abandoned: 0
cluster.mock-domain.update_attempt: 68
cluster.mock-domain.update_empty: 0
cluster.mock-domain.update_failure: 0
cluster.mock-domain.update_no_rebuild: 67
cluster.mock-domain.update_success: 68
cluster.mock-domain.upstream_cx_active: 0
cluster.mock-domain.upstream_cx_close_notify: 0
cluster.mock-domain.upstream_cx_connect_attempts_exceeded: 0
cluster.mock-domain.upstream_cx_connect_fail: 0
cluster.mock-domain.upstream_cx_connect_timeout: 0
cluster.mock-domain.upstream_cx_destroy: 0
cluster.mock-domain.upstream_cx_destroy_local: 0
cluster.mock-domain.upstream_cx_destroy_local_with_active_rq: 0
cluster.mock-domain.upstream_cx_destroy_remote: 0
cluster.mock-domain.upstream_cx_destroy_remote_with_active_rq: 0
cluster.mock-domain.upstream_cx_destroy_with_active_rq: 0
cluster.mock-domain.upstream_cx_http1_total: 0
cluster.mock-domain.upstream_cx_http2_total: 0
cluster.mock-domain.upstream_cx_idle_timeout: 0
cluster.mock-domain.upstream_cx_max_requests: 0
cluster.mock-domain.upstream_cx_none_healthy: 0
cluster.mock-domain.upstream_cx_overflow: 0
cluster.mock-domain.upstream_cx_protocol_error: 0
cluster.mock-domain.upstream_cx_rx_bytes_buffered: 0
cluster.mock-domain.upstream_cx_rx_bytes_total: 0
cluster.mock-domain.upstream_cx_total: 0
cluster.mock-domain.upstream_cx_tx_bytes_buffered: 0
cluster.mock-domain.upstream_cx_tx_bytes_total: 0
cluster.mock-domain.upstream_flow_control_backed_up_total: 0
cluster.mock-domain.upstream_flow_control_drained_total: 0
cluster.mock-domain.upstream_flow_control_paused_reading_total: 0
cluster.mock-domain.upstream_flow_control_resumed_reading_total: 0
cluster.mock-domain.upstream_rq_active: 0
cluster.mock-domain.upstream_rq_cancelled: 0
cluster.mock-domain.upstream_rq_completed: 0
cluster.mock-domain.upstream_rq_maintenance_mode: 0
cluster.mock-domain.upstream_rq_pending_active: 0
cluster.mock-domain.upstream_rq_pending_failure_eject: 0
cluster.mock-domain.upstream_rq_pending_overflow: 0
cluster.mock-domain.upstream_rq_pending_total: 0
cluster.mock-domain.upstream_rq_per_try_timeout: 0
cluster.mock-domain.upstream_rq_retry: 0
cluster.mock-domain.upstream_rq_retry_overflow: 0
cluster.mock-domain.upstream_rq_retry_success: 0
cluster.mock-domain.upstream_rq_rx_reset: 0
cluster.mock-domain.upstream_rq_timeout: 0
cluster.mock-domain.upstream_rq_total: 0
cluster.mock-domain.upstream_rq_tx_reset: 0
cluster.mock-domain.version: 0
cluster_manager.active_clusters: 2
cluster_manager.cluster_added: 2
cluster_manager.cluster_modified: 0
cluster_manager.cluster_removed: 0
cluster_manager.cluster_updated: 0
cluster_manager.cluster_updated_via_merge: 0
cluster_manager.update_merge_cancelled: 0
cluster_manager.update_out_of_merge_window: 0
cluster_manager.warming_clusters: 0
filesystem.flushed_by_timer: 24
filesystem.reopen_failed: 0
filesystem.write_buffered: 1
filesystem.write_completed: 1
filesystem.write_total_buffered: 0
http.admin.downstream_cx_active: 1
http.admin.downstream_cx_delayed_close_timeout: 0
http.admin.downstream_cx_destroy: 1
http.admin.downstream_cx_destroy_active_rq: 0
http.admin.downstream_cx_destroy_local: 0
http.admin.downstream_cx_destroy_local_active_rq: 0
http.admin.downstream_cx_destroy_remote: 1
http.admin.downstream_cx_destroy_remote_active_rq: 0
http.admin.downstream_cx_drain_close: 0
http.admin.downstream_cx_http1_active: 1
http.admin.downstream_cx_http1_total: 2
http.admin.downstream_cx_http2_active: 0
http.admin.downstream_cx_http2_total: 0
http.admin.downstream_cx_idle_timeout: 0
http.admin.downstream_cx_overload_disable_keepalive: 0
http.admin.downstream_cx_protocol_error: 0
http.admin.downstream_cx_rx_bytes_buffered: 88
http.admin.downstream_cx_rx_bytes_total: 176
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 2
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 18966
http.admin.downstream_cx_upgrades_active: 0
http.admin.downstream_cx_upgrades_total: 0
http.admin.downstream_flow_control_paused_reading_total: 0
http.admin.downstream_flow_control_resumed_reading_total: 0
http.admin.downstream_rq_1xx: 0
http.admin.downstream_rq_2xx: 1
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 0
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_completed: 1
http.admin.downstream_rq_http1_total: 2
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_idle_timeout: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_overload_close: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_timeout: 0
http.admin.downstream_rq_too_large: 0
http.admin.downstream_rq_total: 2
http.admin.downstream_rq_tx_reset: 0
http.admin.downstream_rq_ws_on_non_ws_route: 0
http.admin.rs_too_large: 0
http.async-client.no_cluster: 0
http.async-client.no_route: 0
http.async-client.rq_direct_response: 0
http.async-client.rq_redirect: 0
http.async-client.rq_total: 0
http.ingress_http.downstream_cx_active: 0
http.ingress_http.downstream_cx_delayed_close_timeout: 0
http.ingress_http.downstream_cx_destroy: 6
http.ingress_http.downstream_cx_destroy_active_rq: 0
http.ingress_http.downstream_cx_destroy_local: 0
http.ingress_http.downstream_cx_destroy_local_active_rq: 0
http.ingress_http.downstream_cx_destroy_remote: 6
http.ingress_http.downstream_cx_destroy_remote_active_rq: 0
http.ingress_http.downstream_cx_drain_close: 0
http.ingress_http.downstream_cx_http1_active: 0
http.ingress_http.downstream_cx_http1_total: 4
http.ingress_http.downstream_cx_http2_active: 0
http.ingress_http.downstream_cx_http2_total: 0
http.ingress_http.downstream_cx_idle_timeout: 0
http.ingress_http.downstream_cx_overload_disable_keepalive: 0
http.ingress_http.downstream_cx_protocol_error: 0
http.ingress_http.downstream_cx_rx_bytes_buffered: 0
http.ingress_http.downstream_cx_rx_bytes_total: 12810
http.ingress_http.downstream_cx_ssl_active: 0
http.ingress_http.downstream_cx_ssl_total: 3
http.ingress_http.downstream_cx_total: 6
http.ingress_http.downstream_cx_tx_bytes_buffered: 0
http.ingress_http.downstream_cx_tx_bytes_total: 4712
http.ingress_http.downstream_cx_upgrades_active: 0
http.ingress_http.downstream_cx_upgrades_total: 0
http.ingress_http.downstream_flow_control_paused_reading_total: 0
http.ingress_http.downstream_flow_control_resumed_reading_total: 0
http.ingress_http.downstream_rq_1xx: 0
http.ingress_http.downstream_rq_2xx: 1
http.ingress_http.downstream_rq_3xx: 2
http.ingress_http.downstream_rq_4xx: 1
http.ingress_http.downstream_rq_5xx: 1
http.ingress_http.downstream_rq_active: 0
http.ingress_http.downstream_rq_completed: 5
http.ingress_http.downstream_rq_http1_total: 5
http.ingress_http.downstream_rq_http2_total: 0
http.ingress_http.downstream_rq_idle_timeout: 0
http.ingress_http.downstream_rq_non_relative_path: 0
http.ingress_http.downstream_rq_overload_close: 0
http.ingress_http.downstream_rq_response_before_rq_complete: 2
http.ingress_http.downstream_rq_rx_reset: 2
http.ingress_http.downstream_rq_timeout: 0
http.ingress_http.downstream_rq_too_large: 0
http.ingress_http.downstream_rq_total: 5
http.ingress_http.downstream_rq_tx_reset: 0
http.ingress_http.downstream_rq_ws_on_non_ws_route: 0
http.ingress_http.no_cluster: 0
http.ingress_http.no_route: 1
http.ingress_http.rq_direct_response: 2
http.ingress_http.rq_redirect: 0
http.ingress_http.rq_total: 5
http.ingress_http.rs_too_large: 0
http.ingress_http.tracing.client_enabled: 0
http.ingress_http.tracing.health_check: 0
http.ingress_http.tracing.not_traceable: 0
http.ingress_http.tracing.random_sampling: 0
http.ingress_http.tracing.service_forced: 0
listener.0.0.0.0_8000.downstream_cx_active: 0
listener.0.0.0.0_8000.downstream_cx_destroy: 3
listener.0.0.0.0_8000.downstream_cx_total: 3
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_1xx: 0
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_2xx: 0
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_3xx: 2
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_4xx: 0
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_5xx: 0
listener.0.0.0.0_8000.http.ingress_http.downstream_rq_completed: 2
listener.0.0.0.0_8000.no_filter_chain_match: 0
listener.0.0.0.0_8443.downstream_cx_active: 0
listener.0.0.0.0_8443.downstream_cx_destroy: 3
listener.0.0.0.0_8443.downstream_cx_total: 3
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_1xx: 0
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_2xx: 1
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_3xx: 0
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_4xx: 1
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_5xx: 1
listener.0.0.0.0_8443.http.ingress_http.downstream_rq_completed: 3
listener.0.0.0.0_8443.no_filter_chain_match: 0
listener.0.0.0.0_8443.server_ssl_socket_factory.downstream_context_secrets_not_ready: 0
listener.0.0.0.0_8443.server_ssl_socket_factory.ssl_context_update_by_sds: 0
listener.0.0.0.0_8443.server_ssl_socket_factory.upstream_context_secrets_not_ready: 0
listener.0.0.0.0_8443.ssl.ciphers.ECDHE-RSA-AES128-GCM-SHA256: 3
listener.0.0.0.0_8443.ssl.connection_error: 0
listener.0.0.0.0_8443.ssl.fail_verify_cert_hash: 0
listener.0.0.0.0_8443.ssl.fail_verify_error: 0
listener.0.0.0.0_8443.ssl.fail_verify_no_cert: 0
listener.0.0.0.0_8443.ssl.fail_verify_san: 0
listener.0.0.0.0_8443.ssl.handshake: 3
listener.0.0.0.0_8443.ssl.no_certificate: 3
listener.0.0.0.0_8443.ssl.session_reused: 0
listener.admin.downstream_cx_active: 1
listener.admin.downstream_cx_destroy: 1
listener.admin.downstream_cx_total: 2
listener.admin.http.admin.downstream_rq_1xx: 0
listener.admin.http.admin.downstream_rq_2xx: 1
listener.admin.http.admin.downstream_rq_3xx: 0
listener.admin.http.admin.downstream_rq_4xx: 0
listener.admin.http.admin.downstream_rq_5xx: 0
listener.admin.http.admin.downstream_rq_completed: 1
listener.admin.no_filter_chain_match: 0
listener_manager.listener_added: 2
listener_manager.listener_create_failure: 0
listener_manager.listener_create_success: 4
listener_manager.listener_modified: 0
listener_manager.listener_removed: 0
listener_manager.total_listeners_active: 2
listener_manager.total_listeners_draining: 0
listener_manager.total_listeners_warming: 0
runtime.admin_overrides_active: 0
runtime.load_error: 0
runtime.load_success: 0
runtime.num_keys: 0
runtime.override_dir_exists: 0
runtime.override_dir_not_exists: 0
server.concurrency: 2
server.days_until_first_cert_expiring: 285
server.hot_restart_epoch: 0
server.live: 1
server.memory_allocated: 3551584
server.memory_heap_size: 5242880
server.parent_connections: 0
server.total_connections: 0
server.uptime: 340
server.version: 9960072
server.watchdog_mega_miss: 0
server.watchdog_miss: 2
stats.overflow: 0
cluster.content-router.external.upstream_rq_time: P0(nan,3400) P25(nan,3425) P50(nan,3450) P75(nan,3475) P90(nan,3490) P95(nan,3495) P99(nan,3499) P99.5(nan,3499.5) P99.9(nan,3499.9) P100(nan,3500)
cluster.content-router.upstream_cx_connect_ms: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
cluster.content-router.upstream_cx_length_ms: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
cluster.content-router.upstream_rq_time: P0(nan,3400) P25(nan,3425) P50(nan,3450) P75(nan,3475) P90(nan,3490) P95(nan,3495) P99(nan,3499) P99.5(nan,3499.5) P99.9(nan,3499.9) P100(nan,3500)
cluster.mock-domain.upstream_cx_connect_ms: No recorded values
cluster.mock-domain.upstream_cx_length_ms: No recorded values
http.admin.downstream_cx_length_ms: P0(nan,5) P25(nan,5.025) P50(nan,5.05) P75(nan,5.075) P90(nan,5.09) P95(nan,5.095) P99(nan,5.099) P99.5(nan,5.0995) P99.9(nan,5.0999) P100(nan,5.1)
http.admin.downstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
http.ingress_http.downstream_cx_length_ms: P0(nan,34) P25(nan,34.75) P50(nan,14500) P75(nan,302500) P90(nan,307000) P95(nan,308500) P99(nan,309700) P99.5(nan,309850) P99.9(nan,309970) P100(nan,310000)
http.ingress_http.downstream_cx_length_ms: P0(nan,6) P25(nan,6.075) P50(nan,145000) P75(nan,302500) P90(nan,307000) P95(nan,308500) P99(nan,309700) P99.5(nan,309850) P99.9(nan,309970) P100(nan,310000)
http.ingress_http.downstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,1.05) P75(nan,3425) P90(nan,3470) P95(nan,3485) P99(nan,3497) P99.5(nan,3498.5) P99.9(nan,3499.7) P100(nan,3500)
http.ingress_http.downstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,2.05) P90(nan,2.08) P95(nan,2.09) P99(nan,2.098) P99.5(nan,2.099) P99.9(nan,2.0998) P100(nan,2.1)
listener.0.0.0.0_8000.downstream_cx_length_ms: P0(nan,6) P25(nan,6.075) P50(nan,145000) P75(nan,302500) P90(nan,307000) P95(nan,308500) P99(nan,309700) P99.5(nan,309850) P99.9(nan,309970) P100(nan,310000)
listener.0.0.0.0_8443.downstream_cx_length_ms: P0(nan,34) P25(nan,34.75) P50(nan,14500) P75(nan,302500) P90(nan,307000) P95(nan,308500) P99(nan,309700) P99.5(nan,309850) P99.9(nan,309970) P100(nan,310000)
listener.admin.downstream_cx_length_ms: P0(nan,5) P25(nan,5.025) P50(nan,5.05) P75(nan,5.075) P90(nan,5.09) P95(nan,5.095) P99(nan,5.099) P99.5(nan,5.0995) P99.9(nan,5.0999) P100(nan,5.1)
[etapp@rh75docker serviceproxy]$ 

lizan (Member) commented Feb 21, 2019

@bnlcnd thanks for the stats. Can you make the stat_prefix different for 8443 and 8080, say ingress_https and ingress_http? http.ingress_http.rq_redirect: 0 is a bit suspicious, though I'm not sure what is happening. Stats from a cleanly started Envoy after a single request to http://hello.com:8080/mock-domain would be helpful.
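
A sketch of that suggestion, assuming the same filter layout as the config above (only the stat_prefix values change):

      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_https   # 8443 listener
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http    # 8080 listener

With distinct prefixes the two listeners stop sharing the http.ingress_http.* namespace, and the redirect counter can be read directly, e.g. curl -s http://192.168.64.135:8001/stats | grep rq_redirect.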

md-ray commented Oct 29, 2019

> Thanks @lizan. It does work once I add the port number.
> However, HTTP-to-HTTPS redirection does not work with this setting.

Hi @bnlcnd
I'm facing the same problem, and cannot resolve it even after adding the port number. Are you also accessing Envoy through Docker? What else could be the issue?

bnlcnd (Author) commented Oct 31, 2019

@md-ray I did not have time to follow up on it, but it didn't work for me either. I still have to duplicate all the routing rules and settings on both the HTTP and HTTPS ports.


mattklein123 removed the help wanted label Dec 13, 2019

stale bot commented Jan 12, 2020

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

stale bot added the stale label Jan 12, 2020

stale bot commented Jan 19, 2020

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

stale bot closed this as completed Jan 19, 2020