
h2 server responses exceeding SETTINGS_MAX_HEADER_LIST_SIZE do not result in RST_STREAM or GOAWAY #11822

Closed
niloc132 opened this issue May 20, 2024 · 3 comments · Fixed by #12165
Assignees
Labels
Bug For general bugs on Jetty side

Comments

@niloc132

Jetty version(s)

First observed in Jetty 10.0.16, 11.0.16, 12.0.0.beta2. Latest versions (10.0.20, 11.0.20, 12.0.9) all exhibit the same bug. Appears to have been introduced by 420ec7c.

Jetty Environment
Observed with both ee8 and ee9.

Java version/vendor (use: java -version)

$ java -version
openjdk version "21.0.3" 2024-04-16
OpenJDK Runtime Environment (build 21.0.3+9)
OpenJDK 64-Bit Server VM (build 21.0.3+9, mixed mode, sharing)

Also observed with Java 8 for Jetty 10, Java 11 for Jetty 11.

OS type/version
Linux

Description
When a client and server connect, they exchange SETTINGS frames, which let each peer declare limits for the connection. One of those settings is SETTINGS_MAX_HEADER_LIST_SIZE, which "informs a peer of the maximum size of header list that the sender is prepared to accept, in octets".

Later, the spec says "An endpoint can use the SETTINGS_MAX_HEADER_LIST_SIZE to advise peers of limits that might apply on the size of header blocks. This setting is only advisory, so endpoints MAY choose to send header blocks that exceed this limit and risk having the request or response being treated as malformed."
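
With Jetty's low-level h2 client, one way to control what a peer advertises here is the map returned by Session.Listener.onPreface, which is sent in the initial SETTINGS frame. A minimal sketch, assuming the Jetty 12 client API where Session.Listener methods have defaults; the 8 KiB value is purely illustrative:

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.frames.SettingsFrame;

public class AdvertiseMaxHeaderListSize {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
        client.start();
        CompletableFuture<Session> sessionPromise = client.connect(
                new InetSocketAddress("localhost", 10000),
                new Session.Listener() {
                    @Override
                    public Map<Integer, Integer> onPreface(Session session) {
                        // Entries returned here are sent in the client's initial SETTINGS frame.
                        return Map.of(SettingsFrame.MAX_HEADER_LIST_SIZE, 8 * 1024);
                    }
                });
        // ... use the session, then stop the client
    }
}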

Commit 420ec7c improved Jetty's handling of header-related limits in general, and introduced enforcement of this maximum header list size.

The current behavior is that when a HEADERS frame would exceed that limit, the server ends the entire connection abruptly with a TCP FIN, rather than terminating the affected stream with RST_STREAM or the h2 connection with GOAWAY. This is the buggy behavior being reported here.

It appears that ending the connection SHOULD instead first be done with a GOAWAY, per https://httpwg.org/specs/rfc7540.html#rfc.section.5.4.1:

An endpoint that encounters a connection error SHOULD first send a GOAWAY frame (Section 6.8) with the stream identifier of the last stream that it successfully received from its peer. The GOAWAY frame includes an error code that indicates why the connection is terminating. After sending the GOAWAY frame for an error condition, the endpoint MUST close the TCP connection.

It would perhaps also be possible to treat this as only a stream error, terminating the affected stream with RST_STREAM and leaving the connection open for other streams.

The bug this introduces is that with a GOAWAY the client knows the server observed an error and shut down the connection, whereas without one the client does not know what happened and might attempt to reconnect.

Specifically, this is causing a failure in gRPC-java support for Jetty, where the client cannot tell why the upstream stopped responding: gRPC maps a GOAWAY to an INTERNAL failure, while a socket that simply ends (as Jetty is doing) maps to UNAVAILABLE, which is interpreted as transient (i.e. not something a well-behaved server would deliberately do).

Also note that this does not happen with HTTP/1.1 in Jetty; instead, a 500 error is sent to the client.
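
From the client's perspective, the difference should be directly observable on the session listener: an orderly shutdown surfaces as a GOAWAY in onClose, whereas an abrupt close only surfaces as a generic failure. A small sketch, again assuming the Jetty 12 API (method signatures vary slightly across versions):

import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.frames.GoAwayFrame;
import org.eclipse.jetty.util.Callback;

public class ClosureObserver implements Session.Listener {
    @Override
    public void onClose(Session session, GoAwayFrame frame, Callback callback) {
        // Orderly shutdown: the peer said why the connection is ending.
        System.out.println("GOAWAY received, error code " + frame.getError());
        callback.succeeded();
    }

    @Override
    public void onFailure(Session session, Throwable failure, Callback callback) {
        // Abrupt closure (e.g. a bare TCP FIN): no h2-level reason is available.
        System.out.println("Connection failed: " + failure);
        callback.succeeded();
    }
}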

How to reproduce?
Start an h2 server running one of the affected versions, with a simple handler (here using servlets) that sends a huge response header:

// Imports omitted for brevity; use the Jetty and servlet packages matching the version under test.
public class Server {
    public static void main(String[] args) throws Exception {
        org.eclipse.jetty.server.Server s = new org.eclipse.jetty.server.Server(10000);
        ServerConnector sc = (ServerConnector) s.getConnectors()[0];
        HttpConfiguration httpConfiguration = new HttpConfiguration();

        // Cleartext HTTP/2 (h2c) connection factory; the session listener is
        // overridden only to log the max header list size advertised by the client.
        HTTP2CServerConnectionFactory factory =
                new HTTP2CServerConnectionFactory(httpConfiguration) {
                    @Override
                    protected ServerSessionListener newSessionListener(Connector connector,
                                                                       EndPoint endPoint) {
                        return new HTTPServerSessionListener(endPoint) {
                            @Override
                            public void onSettings(Session session, SettingsFrame frame) {
                                HTTP2Session s = (HTTP2Session) session;
                                System.out.println("Client max: " + s.getGenerator().getHpackEncoder().getMaxHeaderListSize());
                            }
                        };
                    }
                };
        factory.setRateControlFactory(new RateControl.Factory() {
        });
        sc.addConnectionFactory(factory);

        ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new HugeHeaderServlet()), "/*");
        s.setHandler(context);

        s.start();
        s.join();
    }
}
public class HugeHeaderServlet extends HttpServlet {
    // Jetty client defaults to 8192
    private static final String HUGE_HEADER_VALUE = "a".repeat(16 * 1024);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setHeader("Huge", HUGE_HEADER_VALUE);
    }
}

Then attempt to connect with an h2 client whose header limit is below the size of that response header:

// Imports omitted for brevity.
public class Client {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
        // Maximum response header size the client will accept, advertised to the
        // server via SETTINGS_MAX_HEADER_LIST_SIZE.
        client.setMaxResponseHeadersSize(8 * 1024);
        client.setMaxFrameSize(1024 * 1024);
        client.start();
        CompletableFuture<Session> sessionPromise = client.connect(new InetSocketAddress("localhost", 10000), new ServerSessionListener() {
            @Override
            public void onAccept(Session session) {
                ServerSessionListener.super.onAccept(session);
            }
        });
        Session session = sessionPromise.get(5, TimeUnit.SECONDS);
        MetaData request = new MetaData.Request("GET", HttpURI.from("http://localhost:10000/"), HttpVersion.HTTP_2, null);
        // Open a stream with a simple GET; the listener logs whatever comes back
        // (headers, a reset, or a failure).
        CompletableFuture<Stream> streamPromise = session.newStream(new HeadersFrame(request, null, false), new Stream.Listener() {
            @Override
            public void onHeaders(Stream stream, HeadersFrame frame) {
                System.out.println("HEADERS: " + frame);
            }

            @Override
            public void onFailure(Stream stream, int error, String reason, Throwable failure, Callback callback) {
                System.out.println("FAILURE: " + reason);
            }

            @Override
            public void onReset(Stream stream, ResetFrame frame, Callback callback) {
                System.out.println("RESET: " + frame);
            }
        });

        streamPromise.get(5, TimeUnit.SECONDS);
    }
}
@sbordet
Contributor

sbordet commented Aug 14, 2024

A few clarifications:

  • If a peer sends MAX_HEADER_LIST_SIZE, and then receives a HEADERS larger than allowed, we do send a GOAWAY.
  • If a peer receives MAX_HEADER_LIST_SIZE, and then tries to send a HEADERS larger than allowed, the send is failed locally, and we abruptly close the connection without a GOAWAY.

Note that even though the spec hints at sending a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?).
In the second case, we detect that the headers are too large when we may have already encoded other headers, so the HPACK context is in an unrecoverable state and we can only tear down the connection.

sbordet added a commit that referenced this issue Aug 14, 2024
…ZE do not result in RST_STREAM or GOAWAY.

Now HpackException.SessionException is treated specially in HTTP2Flusher.
It is caught, it fails all the entries, and then tries to send a GOAWAY, which will be the only frame allowed into the HTTP2Flusher at that point.

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
@sbordet sbordet moved this to 🏗 In progress in Jetty 12.0.13 - FROZEN Aug 14, 2024
@sbordet
Contributor

sbordet commented Aug 14, 2024

@niloc132 can you try #12165?

@lachlan-roberts
Contributor

Note that even if the spec hints to send a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?).

I don't think an attack is the concern, because if headers are added beyond the SETTINGS_HEADER_TABLE_SIZE then other headers will just be evicted.

I think the issue is that we abort decoding the headers as soon as we see them exceed the MAX_HEADER_LIST_SIZE limit, so the instructions to add to the table have not been fully parsed. And so if we continue, the HPACK table will be in an inconsistent state between the encoder and the decoder.
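
To make that concrete, here is a toy illustration (plain Java, not Jetty's HPACK code) of how aborting mid-block leaves the encoder's and decoder's dynamic tables out of sync:

import java.util.ArrayDeque;
import java.util.Deque;

public class HpackDesyncDemo {
    public static void main(String[] args) {
        // Both peers model the dynamic table with the most recent entry first.
        Deque<String> encoderTable = new ArrayDeque<>();
        Deque<String> decoderTable = new ArrayDeque<>();

        // The encoder inserts three entries while encoding one header block.
        String[] inserted = {"x-a: 1", "x-b: 2", "huge: aaaa..."};
        for (String entry : inserted) {
            encoderTable.addFirst(entry);
        }

        // The decoder aborts after the second entry because the header list
        // size limit is exceeded, so the third insertion is never applied.
        decoderTable.addFirst(inserted[0]);
        decoderTable.addFirst(inserted[1]);

        // A later header block refers to "the most recent entry" by index.
        System.out.println("encoder's most recent entry: " + encoderTable.peekFirst()); // huge: aaaa...
        System.out.println("decoder's most recent entry: " + decoderTable.peekFirst()); // x-b: 2
        // Every subsequent indexed reference now resolves differently on the
        // two peers, so the connection cannot safely continue.
    }
}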

@github-project-automation github-project-automation bot moved this from 🏗 In progress to ✅ Done in Jetty 12.0.13 - FROZEN Aug 26, 2024