Invalid header type: 72 #179
Which versions of …
To debug this, obtain a full packet capture (PCAP file) of the communication between the client and the server.
I get this error on port 80:
@tolysz @vdukhovni I'm definitely willing, but may need a little guidance so I don't spin my wheels. Do you have a small script or something I can see? (The best place for me to get the capture is on CI, in my integration tests.) @ocheron strange, it also happens when I use …
@tolysz I can also confirm it happens with the following deps in stack.yaml:
Create a PCAP file with …
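The exact command was elided above; as a hedged sketch (the interface name, host filter, and output filename are all assumptions), a capture like the one being requested can be taken with tcpdump:

```shell
# Hypothetical capture invocation (requires root; eth0 and the host
# filter are assumptions): record the full client<->server exchange,
# untruncated (-s 0), into capture.pcap.
tcpdump -i eth0 -s 0 -w capture.pcap 'host api.stripe.com and tcp port 443'
```

The resulting capture.pcap can then be opened in Wireshark for inspection.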
The error you reported clearly indicates 443; however, I just wanted to point out that the issue is not necessarily in tls. It could also be in the calling code. The constructor … Btw, 72 is not 'F' but 'H', the first character of an HTTP response in clear text.
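The byte value lines up as stated: 72 is the ASCII code of 'H', the first byte of a cleartext "HTTP/1.1 …" status line. A quick sanity check with POSIX printf:

```shell
# 72 is the ASCII code of 'H' -- the first byte of a cleartext
# "HTTP/1.1 ..." status line -- not of 'F'.
printf '%d\n' "'H"    # prints 72, the ASCII code of 'H'
printf '%b\n' '\110'  # 72 decimal is 110 octal; prints H
```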
Oops, indeed, sorry about the misdirection. This is strong evidence that the connection was to a service that does not use TLS; perhaps some port 443 service is not in fact configured for TLS.
@vdukhovni I did manage to get past the first two steps, but I'm having trouble with the last: I ended up with a dump from a series of api requests on a CI server that look like this (notice the bad request at the bottom): The contents of that packet look like this: This bad request comes from stripe api servers. The …
However, as far as I can tell from my stripe dashboard logs, the api request completes successfully. I managed to reproduce and create dumps locally, and they all have a similar pattern. I'm a little perplexed at where the source of the error may be, so any help would be appreciated. Do you think it's still worth creating the last …
Extract all the traffic for port 39470 into a separate capture file. It is important to know what the client sent. The server's ACK seems to indicate a 32-byte client request, which is much too short for a TLS Hello, I think.
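One way to do this extraction (a sketch; the tool choice and filenames are assumptions, with full.pcap standing in for the original capture):

```shell
# Copy only the packets of the suspect connection into their own file.
tcpdump -r full.pcap -w port-39470.pcap 'tcp port 39470'
# Equivalent with Wireshark's CLI:
tshark -r full.pcap -Y 'tcp.port == 39470' -w port-39470.pcap
```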
@vdukhovni thanks for the help, here it is:
Quick look: (external link) … Thanks for the help.
The TLS client HELLO is preceded by an "encrypted alert", most likely a "close notify". You are probably sending the "close notify" alert for an earlier TLS connection down the wrong open socket! This can happen if the underlying socket is closed (and its file descriptor recycled for a new connection) before the associated TLS object was cleaned up. There are likely also other ways of mixing up file descriptors.
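Spotting such a stray alert in a capture can be done with a display filter on the TLS record content type (a sketch; the field name follows recent Wireshark versions, older ones use the ssl. prefix instead of tls.):

```shell
# List TLS alert records (record content type 21) in the extracted
# capture, e.g. a close_notify arriving before the Client Hello.
tshark -r port-39470.pcap -Y 'tls.record.content_type == 21'
```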
@vdukhovni thanks, however I'm not sure I understand whether there are any actionable items I can look at. I don't know anything about sending "close notify", because I am simply using this library via … Also, the same error is printed when this call is made:
which leads me to believe it's not specific to my code. The same error message can also be seen in various other projects online when googling for it:
If you're not the one managing the socket connections, open a bug with the maintainers of that library. There should never be an encrypted alert in front of the TLS client HELLO. It is still possible that your own application is mismanaging file descriptors (closing the wrong ones, ...) outside the context of these HTTPS requests, and this too could cause the observed symptoms.
That conclusion is wrong. While the error message is the same, the reason is unrelated. In both cases your client code encounters an unexpected cleartext HTTP error response, but on port 443 it is due to socket mismanagement, and on port 80 it is due to deliberately trying to do TLS on port 80, where it is not supported.
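A hedged way to check which of the two cases a given endpoint falls into (HOST and PORT are placeholders; against a cleartext HTTP listener the first reply byte is 'H', ASCII 72, the TLS record layer rejects it, and openssl s_client exits with a failure status):

```shell
# Probe whether host:port completes a TLS handshake.
HOST=127.0.0.1 PORT=4433   # assumptions; substitute the server under test
if echo | openssl s_client -connect "$HOST:$PORT" >/dev/null 2>&1; then
  echo "speaks TLS"
else
  echo "not TLS (or unreachable)"
fi
```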
Though you closed this issue, it would be good to add comments about the final resolution once you've figured out where the error originated. Please follow up if you can. One way that you might have run into trouble is perhaps with "lazy I/O". Perhaps there was some lazy unevaluated thunk lying around, related to a previous API call whose underlying network socket had been closed and its file descriptor re-used. When evaluation finally takes place, it then sends data to the wrong TCP connection.
Possible source of the problem: snoyberg/http-client#252
Looks a lot like erikd-ambiata/test-warp-wai#1.
@ocheron yes, looks like the same underlying issue. Has this all been resolved now? Or is there still a problem somewhere in the stack of libraries?
It's not very clear which issues can be considered fixed and which not. But there is definitely still a leak in connection-0.2.7, this time when the TLS connection is closed (or double-closed, to be precise).
Can be reproduced with: … I could get along with: …
Recently I got this same error while playing around with the Nginx web server as the SSL backend to http-client-tls. My env is absolutely new: Fedora 38, ghc-9.6.2, http-client-tls-0.3.6.3, tls-1.9.0, crypton-0.33. My client code related to creating the https manager and TLS settings is:

mkManager name = do
systemCAStore <- getSystemCertificateStore
let (T.unpack -> h, T.encodeUtf8 -> p) =
second (fromMaybe "" . fmap snd . T.uncons) $
T.break (':' ==) name
validateName = const . hookValidateName X509.defaultHooks
defaultParams = (defaultParamsClient h p)
{ clientShared = def
{ sharedCAStore = systemCAStore }
, clientHooks = def
{ onServerCertificate = X509.validate HashSHA256
X509.defaultHooks
{ hookValidateName = validateName h }
X509.defaultChecks
}
, clientSupported = def
{ supportedCiphers = ciphersuite_default }
}
newTlsManagerWith $
  mkManagerSettings (TLSSettings defaultParams) Nothing

Nginx was listening at 8030 ssl. I was testing it, getting various unrelated exceptions, until I decided to turn off IPv6 in the system, since Nginx listens on both IPv4 and IPv6 by default and this was an obstacle for traffic analysis. I did

$ echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6

Then I restarted Nginx, fixed the unrelated exceptions, and began getting exactly this TlsExceptionHostPort (HandshakeFailed (Error_Packet_Parsing "Failed reading: invalid header type: 72\nFrom:\theader\n\n")) error. I googled this error and found this discussion, and this one too. This gave me a clue, and I switched traffic to port 443, which fixed the error. But this wasn't satisfactory to me. I tried to use openssl as a server:

$ openssl s_server -key certs/server/server.key -cert certs/server/server.crt -port 8030 -www

The error was gone! I began to suspect Nginx. But besides the header type 72, there was another scary effect when I was testing with Nginx: on some restarts Nginx stopped answering the client at all! So I decided to switch /proc/sys/net/ipv6/conf/all/disable_ipv6 back and forth. Surprisingly, at some moment Nginx recovered and the error was gone! Therefore, in my case this error was presumably related to a system misconfiguration.
Did you dump traffic with Wireshark or something and check if SSL was really used?
Yes, I ran ngrep filtering on the lo interface and port 8030:

$ ngrep -d lo -W byline '' tcp and port 8030

and was watching it the whole time to see what happened. This was https for sure. The related Nginx settings were:

server {
listen 8030 ssl;
server_name localhost;
ssl_certificate /home/lyokha/devel/nginx-healthcheck-plugin/simple/certs/server/server.crt;
ssl_certificate_key /home/lyokha/devel/nginx-healthcheck-plugin/simple/certs/server/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
}
OK. Which version of …
$ cabal-plan --ascii | grep -E 'network|socket'
| | +- network-uri-2.6.4.2
| +- network-3.1.4.0
| | +- network-3.1.4.0 ...
| +- network-3.1.4.0 ...
| | +- network-3.1.4.0 ...
| +- network-3.1.4.0 ...
| +- network-uri-2.6.4.2 ...
| | +- network-3.1.4.0 ...
| +- network-3.1.4.0 ...
| +- network-uri-2.6.4.2 ...
| | +- network-3.1.4.0 ...
| +- network-3.1.4.0 ...
| +- network-uri-2.6.4.2 ...
| | +- network-3.1.4.0 ...
| +- network-3.1.4.0 ...
UnitId "network-uri-2.6.4.2-f44ea940436f8b1483789f15e8f9daade0133b2291f1072c6d75a19536efda61"
UnitId "network-3.1.4.0-13bde0763bc9f6dbc4ac9d573449d21d0c35c10bd2d1e2c00c8e2823e14be965"
You should use v3.0.0.0 or later. |
Never mind. |
I cannot reproduce this anymore. It looked rather like a weird system glitch, because there was other evidence of it, such as the unresponsiveness on port 8030 (which didn't interfere significantly with my tests, because I was testing a health-check engine). I only wrote here because the glitch somehow triggered the invalid header 72 error, so I thought this could be helpful.
When making some vanilla calls to the stripe api, I receive some bizarre errors. I've done some sanity checking throughout my code with no luck.
This error happens roughly 35% of the time:
TlsExceptionHostPort (HandshakeFailed (Error_Packet_Parsing "Failed reading: invalid header type: 72\nFrom:\theader\n\n")) "api.stripe.com" 443
Any idea where I should start debugging this?