TLS redirect_cleartext_from doesn't preserve path #1463
Comments
@kflynn I think this is related to the failing test that I was trying to fix.
@gsagula Additional note: the issue is not present in Ambassador Pro using the same Ambassador OSS version (OSS …
Strange. If it is a bug in Ambassador, it should fail consistently. I will take a look at it.
@bpehling I can't reproduce it. Do you have the whole config?
We are facing the same problem with Ambassador …
Have you tried with 0.60.2? Can you provide the config, please? Thanks!
--
Gabriel Linden Sagula
Ambassador values:

ambassador:
  replicaCount: 3
  image:
    repository: quay.io/datawire/ambassador
    tag: 0.60.3
    pullPolicy: IfNotPresent
  env:
    AMBASSADOR_ID: api-gateway
  service:
    loadBalancerIP: <IP>
    http:
      enabled: true
      targetPort: 8080
    https:
      enabled: true
      targetPort: 8443
    annotations:
      getambassador.io/config: |
        ---
        apiVersion: ambassador/v1
        kind: Module
        name: ambassador
        ambassador_id: api-gateway
        config:
          service_port: 8443
        ---
        apiVersion: ambassador/v1
        kind: Module
        name: tls
        ambassador_id: api-gateway
        config:
          server:
            enabled: True
            redirect_cleartext_from: 8080
            secret: api-gateway-tls

Service config:

service:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: service-root-mapping
      ambassador_id: api-gateway
      host: sub.myhost.com
      prefix: ^/$
      prefix_regex: true
      rewrite: /website
      service: "myservice.mynamespace:80"
      bypass_auth: true
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: service-website-mapping
      ambassador_id: api-gateway
      host: sub.myhost.com
      prefix: /website
      rewrite: /website
      service: "myservice.mynamespace:80"
      bypass_auth: true

curl http://sub.myhost.com/foobar -v
* Trying <IP>...
* TCP_NODELAY set
* Connected to sub.myhost.com (<IP>) port 80 (#0)
> GET /foobar HTTP/1.1
> Host: sub.myhost.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< location: https://sub.myhost.com/
< date: Thu, 02 May 2019 08:33:05 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host sub.myhost.com left intact

We'd expect the 301 to redirect to …
I think this is not directly related to TLS, since I'm having the same problem without the Ambassador TLS module. I have Ambassador load-balanced by an AWS ELB:
We'd expect a path of /health-check on this last line, and a 200 status.
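To make the symptom concrete, here is a small illustrative sketch (hypothetical helper, not Envoy's actual code) of how an HTTPS redirect behaves with and without a forced `path_redirect`, mirroring the route configs quoted later in this thread:

```python
from typing import Optional
from urllib.parse import urlsplit, urlunsplit

def https_redirect(url: str, path_redirect: Optional[str] = None) -> str:
    """Build the Location header an HTTPS-redirect route would emit.

    If path_redirect is set, it replaces the request path (losing it);
    if it is None, the original path and query survive the scheme upgrade.
    """
    parts = urlsplit(url)
    path = path_redirect if path_redirect is not None else parts.path
    return urlunsplit(("https", parts.netloc, path, parts.query, ""))

# With path_redirect "/" (the behavior reported here): the path is lost.
print(https_redirect("http://sub.myhost.com/health-check", path_redirect="/"))
# prints https://sub.myhost.com/

# Without path_redirect: the path is preserved, as expected.
print(https_redirect("http://sub.myhost.com/health-check"))
# prints https://sub.myhost.com/health-check
```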
@gsagula I am mistaken, this issue occurs when using Pro as well.

ambassador deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
  name: ambassador
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: ambassador
      app.kubernetes.io/name: ambassador
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/instance: ambassador
        app.kubernetes.io/name: ambassador
    spec:
      containers:
      - args:
        - --statsd.listen-udp=:8125
        - --web.listen-address=:9102
        - --statsd.mapping-config=/statsd-exporter/mapping-config.yaml
        image: prom/statsd-exporter:v0.8.1
        imagePullPolicy: IfNotPresent
        name: prometheus-exporter
        ports:
        - containerPort: 9102
          name: metrics
          protocol: TCP
        - containerPort: 8125
          name: listener
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /statsd-exporter/
          name: stats-exporter-mapping-config
          readOnly: true
      - env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: STATSD_ENABLED
          value: "true"
        - name: STATSD_HOST
          value: localhost
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/datawire/ambassador:0.60.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ambassador/v0/check_alive
            port: admin
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        name: ambassador
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8443
          name: https
          protocol: TCP
        - containerPort: 8877
          name: admin
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ambassador/v0/check_ready
            port: admin
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - env:
        - name: REDIS_SOCKET_TYPE
          value: tcp
        - name: REDIS_URL
          value: ambassador-pro-redis:6379
        - name: APRO_AUTH_PORT
          value: "8500"
        - name: GRPC_PORT
          value: "8501"
        - name: DEBUG_PORT
          value: "8502"
        - name: APP_LOG_LEVEL
          value: info
        - name: AMBASSADOR_LICENSE_KEY
          valueFrom:
            secretKeyRef:
              key: key
              name: ambassador-pro-license-key
        image: quay.io/datawire/ambassador_pro:amb-sidecar-0.4.0
        imagePullPolicy: IfNotPresent
        name: ambassador-pro
        ports:
        - containerPort: 8500
          name: grpc-auth
          protocol: TCP
        - containerPort: 8501
          name: grpc-ratelimit
          protocol: TCP
        - containerPort: 8502
          name: http-debug
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 8888
      serviceAccount: ambassador
      serviceAccountName: ambassador
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: exporterConfiguration
            path: mapping-config.yaml
          name: ambassador-exporter-config
        name: stats-exporter-mapping-config

ambassador service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: ambassador
      config:
        enable_grpc_web: true
        service_port: 8443
        default_label_domain: ambassador
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: tls
      config:
        server:
          enabled: True
          secret: ambassador-cert
          redirect_cleartext_from: 8080
          alpn_protocols: h2,http/1.1
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
  name: ambassador
  namespace: infra
spec:
  clusterIP: <IP>
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31919
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30894
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/name: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: <IP>

ambassador-pro service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: AuthService
      name: ambassador-pro-auth
      proto: grpc
      auth_service: 127.0.0.1:8500
      allow_request_body: false # setting this to 'true' allows Plugin and External filters to access the body, but has performance overhead
      ---
      # This mapping needs to exist, but is never actually followed.
      apiVersion: ambassador/v1
      kind: Mapping
      name: callback_mapping
      prefix: /callback
      service: NoTaReAlSeRvIcE
      ---
      apiVersion: ambassador/v1
      kind: RateLimitService
      name: ambassador-pro-ratelimit
      service: 127.0.0.1:8501
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
    service: ambassador-pro
  name: ambassador-pro
  namespace: infra
spec:
  clusterIP: <IP>
  ports:
  - name: ratelimit-grpc
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

node-hello-world service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: node-hello-world
      prefix: /helloworld
      service: node-hello-world.default:8080
  labels:
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: node-hello-world
    app.kubernetes.io/release: node-hello-world
    group: monitor
    helm.sh/chart: node-hello-world-0.1.0
  name: node-hello-world
  namespace: default
spec:
  clusterIP: <IP>
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: node-hello-world
    app.kubernetes.io/release: node-hello-world
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

edit: using chart version …
The same here with 0.61.1.
0.70.0 sadly has the same problem :(
Looking into it now.
I think this may be related: envoyproxy/envoy#5292
@christianhuening @bpehling Can you please confirm on your end that the above issue is causing the error? Thank you.
@gsagula This very much looks like the issue, yes. I'll take a look into our Envoy conf to be sure.
@gsagula Well... in our config the domain is set to …
@christianhuening Thank you for getting back to me. Would you mind sharing your Envoy config, please?
I think the interesting bit is this:

"route_config": {
  "virtual_hosts": [
    {
      "domains": [
        "*"
      ],
      "name": "backend",
      "routes": [
        {
          "match": {
            "case_sensitive": true,
            "prefix": "/.well-known/acme-challenge"
          },
          "route": {
            "priority": null,
            "timeout": "3.000s",
            "weighted_clusters": {
              "clusters": [
                {
                  "name": "cluster_acme_challenge_service",
                  "weight": 100
                }
              ]
            }
          }
        },
We got it solved for now by adding an nginx sidecar that does the redirect.
My config has:

"route_config": {
  "virtual_hosts": [
    {
      "domains": [
        "*"
      ],
      "name": "backend",
      "require_tls": "EXTERNAL_ONLY",
      "routes": [
        {
          "match": {
            "prefix": "/"
          },
          "redirect": {
            "https_redirect": true,
            "path_redirect": "/"
          }
        }
      ]
    }
@bpehling @christianhuening Thanks for the info. I will need to debug Envoy.
The problem is that we shouldn't be setting …
I will open a PR shortly with the fix. Thanks!
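For illustration, here is a sketch of what the redirect action could look like once `path_redirect` is no longer forced (this assumes, per Envoy's RedirectAction semantics, that `https_redirect` alone upgrades the scheme and preserves the original path and query; it is not the actual output of the patched generator):

```json
"redirect": {
  "https_redirect": true
}
```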
@bpehling @christianhuening If you want to give it a try: …
@trevex Can you try it? I am very busy today.
@gsagula I'm getting an error pulling the image: …
Hey, sorry guys, try this one: …
@gsagula It worked for me 👍
Describe the bug
URL path is not preserved when redirect_cleartext_from is set.
To Reproduce
Follow the TLS Termination documentation to create a cert and store it as a Kubernetes secret.
Deploy Ambassador with Helm chart 2.2.1 with values:
Expected behavior
Path should be preserved and redirect to https://hostname/httpbin/
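A tiny sketch of that expectation (hypothetical helper, only illustrating the expected Location header, not Ambassador code):

```python
from urllib.parse import urlsplit, urlunsplit

def expected_location(url: str) -> str:
    # Upgrade the scheme to https while keeping host, path, and query intact.
    s = urlsplit(url)
    return urlunsplit(("https", s.netloc, s.path, s.query, s.fragment))

print(expected_location("http://hostname/httpbin/"))
# prints https://hostname/httpbin/
```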
Versions (please complete the following information):
Additional context