Exposing a Host whose hostname includes a port, such as
'example.com:8500', causes 404s with Envoy's NR (no route found)
response flag, due to the way Envoy was configured.
In the v1.14 series, we grouped everything under a wildcard
virtual_host domain "*" and did not include SNI matching on the Filter
Chain. That allowed this configuration to work, but it had the downside
of large memory usage and slower route matching, because all routes
were lumped into a single virtual host.
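For contrast, a minimal sketch of that v1.14-era shape using
go-control-plane route types (illustrative only, not the actual
generator code). A wildcard domain matches any `:authority`, port
included, which is why hostnames like `example.com:8500` used to work:

```go
package main

import (
	"fmt"

	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
)

func main() {
	// v1.14 collapsed every Host's routes into one wildcard virtual
	// host, so any :authority (with or without a port) matched it,
	// at the cost of one very large route table.
	vh := &routev3.VirtualHost{
		Name:    "backend",
		Domains: []string{"*"},
	}
	fmt.Println(vh.Domains)
}
```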
In the v2.Y and v3.Y series, this was addressed by creating separate
Filter Chains for each host. Non-TLS Hosts all share a single Filter
Chain, with a virtual_host per Host. A Host with TLS produces a 1:1
mapping between Filter Chain and virtual_host. This works fine in most
cases, when downstream clients connect on standard ports (80/443), but
when a client needs to connect on something like example.com:8500 we
would generate a Filter Chain whose SNI match included the port. Since
TLS clients never include a port in the SNI they send, that Filter
Chain could never match an incoming request.
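As an illustration, here is a sketch, using go-control-plane listener
types rather than Emissary's actual generation code, of the
unmatchable Filter Chain the v2.Y/v3.Y series would build:

```go
package main

import (
	"fmt"

	listenerv3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3"
)

func main() {
	// The v2.Y/v3.Y generator keyed the SNI match on the full Host
	// hostname, port included. A TLS ClientHello's SNI value never
	// carries a port, so no handshake can ever select this chain.
	chain := &listenerv3.FilterChain{
		FilterChainMatch: &listenerv3.FilterChainMatch{
			ServerNames: []string{"example.com:8500"},
		},
	}
	// A client connecting to example.com:8500 sends SNI
	// "example.com", which does not match "example.com:8500".
	fmt.Println(chain.FilterChainMatch.ServerNames)
}
```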
In the non-TLS scenario, there are no changes to what gets generated.
In the TLS scenario, we now parse the hostname into two entities: the
SNI and the virtual_host domain. So, example.com:8500 has an SNI of
example.com and a virtual_host.domain of example.com:8500.
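A minimal sketch of that split, assuming a simple name:port form
(`splitHostname` is a hypothetical helper, not the function used in
the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// splitHostname illustrates the parse: the part before any ":"
// becomes the SNI match, while the full hostname (port included)
// becomes the virtual_host domain.
func splitHostname(hostname string) (sni, domain string) {
	sni = hostname
	if i := strings.IndexByte(hostname, ':'); i >= 0 {
		sni = hostname[:i]
	}
	return sni, hostname
}

func main() {
	sni, domain := splitHostname("example.com:8500")
	fmt.Println(sni)    // example.com      -> Filter Chain SNI match
	fmt.Println(domain) // example.com:8500 -> virtual_host.domain
}
```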
We create a Filter Chain for the SNI value and add a virtual host for
the domain value. If a second Host with just `example.com` is provided
as well, then we attempt to merge these into a single Filter Chain
with multiple virtual_hosts. We can only do this when the same
TLSContext is used, because all the transport_socket settings live at
the Filter Chain level, so with differing contexts we wouldn't know
which attributes to take from which Host.
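A sketch of that merge, assuming Hosts are grouped by an
(SNI, TLSContext) key; all names here are illustrative:

```go
package main

import "fmt"

// chainKey is an illustrative grouping key: Hosts can only share a
// Filter Chain when both the SNI and the TLSContext match, because
// transport_socket settings live at the Filter Chain level.
type chainKey struct {
	sni        string
	tlsContext string // e.g. the TLSContext resource name
}

type host struct {
	hostname   string // may include a port, e.g. "example.com:8500"
	sni        string
	tlsContext string
}

func main() {
	hosts := []host{
		{"example.com:8500", "example.com", "tls-example"},
		{"example.com", "example.com", "tls-example"},
	}

	// Collect virtual_host domains under one Filter Chain per key.
	chains := map[chainKey][]string{}
	for _, h := range hosts {
		k := chainKey{h.sni, h.tlsContext}
		chains[k] = append(chains[k], h.hostname)
	}

	// Both Hosts land in a single chain with two virtual_host
	// domains: [example.com:8500 example.com]
	for k, domains := range chains {
		fmt.Printf("sni=%s tls=%s domains=%v\n", k.sni, k.tlsContext, domains)
	}
}
```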
This restores the existing behavior in a backwards-compatible way and
doesn't try to solve the Developer Experience (DX) issues with the way
the Host is currently designed.
Signed-off-by: Lance Austin <laustin@datawire.io>