
Add compression support in WebSocket #31088

Closed
zlatanov opened this issue Oct 7, 2019 · 59 comments · Fixed by #49304
Labels
api-approved (API was approved in API review, it can be implemented) · area-System.Net · blocking (Marks issues that we want to fast track in order to unblock other important work)

Comments

@zlatanov
Contributor

zlatanov commented Oct 7, 2019

Adds support for RFC 7692 "compression extensions for websocket".

API Proposal

// new
public sealed class WebSocketCreationOptions
{
    // taken from existing WebSocket.CreateFromStream params.
    public bool IsServer { get; set; }
    public string? SubProtocol { get; set; }
    public TimeSpan KeepAliveInterval { get; set; }

    // new

    // turns the feature on.
    // consider: instead of multiple below properties on options class,
    // make a new DeflateCompressionOptions class and if null it is turned off.
    public bool UseDeflateCompression { get; set; } = false;

    // configures desired client window size (larger window = better compression, more memory usage)
    public int ClientMaxDeflateWindowBits { get; set; } = 15;

    // controls if the window is re-used between subsequent messages,
    // or if each message starts with a blank window.
    // memory vs compression tradeoff again.
    // might have a better name, "ClientPersistDeflateContext" etc.
    public bool ClientDeflateContextTakeover { get; set; } = true;

    public int ServerMaxDeflateWindowBits { get; set; } = 15;
    public bool ServerDeflateContextTakeover { get; set; } = true;
}

// existing
// this is set in ClientWebSocket.Options before calling Connect() against a URL.
public sealed class ClientWebSocketOptions
{
    // new, same as above.
    public bool UseDeflateCompression { get; set; } = false;
    public int ClientMaxDeflateWindowBits { get; set; } = 15;
    public bool ClientDeflateContextTakeover { get; set; } = true;
    public int ServerMaxDeflateWindowBits { get; set; } = 15;
    public bool ServerDeflateContextTakeover { get; set; } = true;
}

// existing
public abstract class WebSocket
{
    // existing
    public static WebSocket CreateFromStream(Stream stream, bool isServer, string? subProtocol, TimeSpan keepAliveInterval);

    // new
    public static WebSocket CreateFromStream(Stream stream, WebSocketCreationOptions options);
}

Original Request

AB#1118550
See discussion here #20004.

At the moment WebSocket doesn't support per-message deflate (see https://tools.ietf.org/html/rfc7692#section-7). Adding support for it in the BCL will mean that people (myself included) will no longer have to resort to implementing custom WebSockets in order to use it.
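
For context, this is negotiated during the opening handshake via the Sec-WebSocket-Extensions header defined in RFC 7692; the parameter values below are only illustrative, not part of this proposal:

Client offer (request header):
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

Server acceptance (response header):
Sec-WebSocket-Extensions: permessage-deflate; server_max_window_bits=10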

Proposed API

/// <summary>
/// Options to enable per-message deflate compression for <seealso cref="WebSocket" />.
/// </summary>
public sealed class WebSocketCompressionOptions
{
    /// <summary>
    /// This parameter indicates the base-2 logarithm of the LZ77 sliding window size of the client context.
    /// Must be a value between 8 and 15, or -1 to indicate no preference. The default is -1.
    /// </summary>
    public int ClientMaxWindowBits { get; set; } = -1;

    /// <summary>
    /// When true, the client informs the peer server of a hint that even if the server doesn't include the
    /// "client_no_context_takeover" extension parameter in the corresponding
    /// extension negotiation response to the offer, the client is not going to use context takeover. The default is false.
    /// </summary>
    public bool ClientNoContextTakeover { get; set; }

    /// <summary>
    /// This parameter indicates the base-2 logarithm of the LZ77 sliding window size of the server context.
    /// Must be a value between 8 and 15, or -1 to indicate no preference. The default is -1.
    /// </summary>
    public int ServerMaxWindowBits { get; set; } = -1;

    /// <summary>
    /// When true, the client prevents the peer server from using context takeover. If the peer server doesn't use context
    /// takeover, the client doesn't need to reserve memory to retain the LZ77 sliding window between messages. The default is false.
    /// </summary>
    public bool ServerNoContextTakeover { get; set; }
}

public sealed class ClientWebSocketOptions
{
    /// <summary>
    /// Instructs the <seealso cref="ClientWebSocket" /> to try and negotiate per-message compression.
    /// </summary>
    public WebSocketCompressionOptions Compression { get; set; }
}

public enum WebSocketOutputCompression
{
    /// <summary>
    /// Enables output compression if the underlying <seealso cref="WebSocket" /> supports it.
    /// </summary>
    Default,

    /// <summary>
    /// Suppresses output compression for the next message.
    /// </summary>
    SuppressOne,

    /// <summary>
    /// Suppresses output compression.
    /// </summary>
    Suppress
}

public abstract class WebSocket
{
    /// <summary>
    /// Instructs the socket to compress (or not to) the messages being sent when
    /// compression has been successfully negotiated.
    /// </summary>
    public WebSocketOutputCompression OutputCompression { get; set; } 

    public static WebSocket CreateFromStream(Stream stream, bool isServer, string subProtocol, TimeSpan keepAliveInterval, WebSocketCompressionOptions compression);
}

Rationale and Usage

The main driver behind the API is that we should not introduce any breaking changes. WebSockets already built will work as is. This is why I suggest we add a new CreateFromStream overload with a WebSocketCompressionOptions parameter instead of adding it to the existing one.

There are a few options built into the WebSocket compression protocol that are considered advanced use: controlling the size of the LZ77 sliding window and context takeover. We could easily hide them and choose reasonable defaults, but I think there are good use cases for them and as such we should expose them. See this blog post for a good example of their usage: https://www.igvita.com/2013/11/27/configuring-and-optimizing-websocket-compression/.
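
For illustration, a memory-conscious server using the WebSocketCompressionOptions class and CreateFromStream overload proposed above might negotiate a smaller window and disable context takeover; stream is a placeholder and the specific values are assumptions, not recommendations:

// Sketch: trade some compression ratio for lower per-connection memory.
var compression = new WebSocketCompressionOptions
{
    ServerMaxWindowBits = 10,       // smaller LZ77 window for the server's compressor
    ServerNoContextTakeover = true  // reset the compression context between messages
};

var webSocket = WebSocket.CreateFromStream(stream, isServer: true, subProtocol: null,
    keepAliveInterval: TimeSpan.FromSeconds(30), compression);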

The usage of the WebSocket would not change at all. By default, a WebSocket created with compression options would compress all messages if the connection on the other end supports it. We introduce the OutputCompression property to allow explicit opt-out. The property has no effect if compression is not supported for the current connection.

Here is an example of how we would disable compression for specific messages:

var socket = GetWebSocket();

// Disable compression for the next message only
socket.OutputCompression = WebSocketOutputCompression.SuppressOne;
await socket.SendAsync(...);

Here is an example for ClientWebSocket:

var client = new ClientWebSocket();

// Indicate that we want compression if server supports it
client.Options.Compression = new WebSocketCompressionOptions();

await client.ConnectAsync(...);

// Same as before. If compression is enabled it will be performed automatically.
await client.SendAsync(...);

// If we need to explicitly disable compression for a time or specific messages
client.OutputCompression = WebSocketOutputCompression.Suppress;

// Send one or more messages
// ...

// Restore default compression state
client.OutputCompression = WebSocketOutputCompression.Default; 

Additional work will be required in the AspNetCore repository, more specifically in WebSocketMiddleware, to light up the compression feature.

@davidfowl
Member

Something rubs me the wrong way about being able to mutate the options on the object after it's started, on a per-message basis. What's the rationale for that?

@zlatanov
Contributor Author

zlatanov commented Oct 7, 2019

@davidfowl No particular reason. The options would actually be used only once in the WebSocket (in the ctor); no reference to them would be kept. In that regard they could very well be a struct, although it would have to be nullable in ClientWebSocketOptions.

I find it more readable to be able to write

new WebSocketCompressionOptions
{
    ServerNoContextTakeover = true
}

than to write

new WebSocketCompressionOptions( serverNoContextTakeover: true );

With that in mind, this is only an opinion. If you think immutable class or struct is better for the case, I will change it.

@davidsh
Contributor

davidsh commented Oct 7, 2019

@dotnet/ncl

@scalablecory
Contributor

Would this still be usable if it were less configurable? What do we get by exposing the LZ77 window size?

@zlatanov
Contributor Author

zlatanov commented Oct 8, 2019

@scalablecory The WebSocket would be very usable without it, but servers with a high number of concurrent connections, or clients with tight memory constraints, will have problems with any defaults we come up with.

The WebSocket will have to keep two separate native memory buffers, one for inflate and one for deflate. I've collected benchmark traces for both with different options to illustrate the point.

Here is how the allocations look for inflate:

| Method  | Window Bits | Allocated native memory |
| ------- | ----------- | ------------------------ |
| Inflate | 8           | 8424 B                   |
| Inflate | 9           | 9448 B                   |
| Inflate | 10          | 11497 B                  |
| Inflate | 11          | 15592 B                  |
| Inflate | 12          | 23786 B                  |
| Inflate | 13          | 40177 B                  |
| Inflate | 14          | 72937 B                  |
| Inflate | 15          | 138472 B                 |

Here is how the allocations look for deflate. Here, however, we have another option to consider that I've not exposed: the memory level and compression level of the deflater (i.e. fast vs. best).

The benchmark below is running with the default memory level of 8 (max is 9), used by DeflateStream for best compression.

| Method  | Window Bits | Allocated native memory |
| ------- | ----------- | ------------------------ |
| Deflate | 8           | 139240 B                 |
| Deflate | 15          | 268264 B                 |

The benchmark below is running with memory level 2.

| Method  | Window Bits | Allocated native memory |
| ------- | ----------- | ------------------------ |
| Deflate | 8           | 10216 B                  |
| Deflate | 15          | 139240 B                 |

As you can see, the memory footprint for a single WebSocket can range anywhere between 18 KB and 400 KB. I cannot see how we can come up with defaults that would be considered universally good enough for any web server or client.

@karelz
Member

karelz commented Dec 17, 2019

@danmosemsft any idea why it was added to the Project ".NET Core impacting internal partners"?

@davidfowl
Member

@karelz that's where it most recently came up.

@zlatanov
Contributor Author

@karelz I see that you've changed the milestone to Future. Does it mean you will not consider this proposal for the 5.0 release?

@davidfowl
Member

I think it's worth doing if we can flesh out a design and @zlatanov contributes. @zlatanov do you still want to get this in for 5.0?

@zlatanov
Contributor Author

@davidfowl Definitely.

@karelz
Member

karelz commented Dec 18, 2019

Milestone reflects our intent and priority from our side. It can change if more info appears. Nothing prevents anyone from getting Future issues done in a specific release, incl. 5.0. We take any high-quality contributions.

API design is tricky though in general and requires non-trivial investment from our side first. My understanding was that the original API proposal here seemed to be too complicated to the person driving our API reviews -- @scalablecory can provide more feedback / impressions he has from the API.

@davidfowl what did you mean by "that's where it most recently came up."? Dan sent me offline info and it seems that it came from ASP.NET - maybe we can connect those dots also publicly here?

@davidfowl
Member

> Milestone reflects our intent and priority from our side. It can change if more info appears. Nothing prevents anyone from getting Future issues done in a specific release, incl. 5.0. We take any high-quality contributions.

👍

> API design is tricky though in general and requires non-trivial investment from our side first. My understanding was that the original API proposal here seemed to be too complicated to the person driving our API reviews -- @scalablecory can provide more feedback / impressions he has from the API.

Sure, and it has to be costed. That's basically what @zlatanov is looking for: design feedback following our typical API review process. We should spend a little time discussing it to see if it is complicated and make a decision. I don't think we have enough information to make a call yet (on the API).

> @davidfowl what did you mean by "that's where it most recently came up."? Dan sent me offline info and it seems that it came from ASP.NET - maybe we can connect those dots also publicly here?

It didn't; it came from another team internally at Microsoft. This issue has come up in the past, filed by customers on the ASP.NET repository, and we opened an issue here https://github.com/dotnet/corefx/issues/15430 a long time ago.

@msftgits msftgits transferred this issue from dotnet/corefx Feb 1, 2020
@msftgits msftgits added this to the 5.0 milestone Feb 1, 2020
@davidfowl
Member

@zlatanov Are you still going to take this on?

cc @BrennanConroy

@zlatanov
Contributor Author

@davidfowl Yup.

@karelz karelz modified the milestones: 5.0, Future Jun 5, 2020
@zlatanov
Contributor Author

@davidfowl @karelz Now that the main branch is 6.0, will this be considered?

No feedback has been given since the API was proposed, almost a year ago (even just to say why it wasn't considered for 5.0 or at all). I'm sorry to say, but it feels very off-putting.

@karelz
Member

karelz commented Aug 27, 2020

@zlatanov yes, we plan to get back to this in 6.0. Sorry if it felt off-putting, but sadly, we have only limited bandwidth and even though this was important, it was less important than other features/APIs and we just didn't have anyone to do the research and provide valuable input.
This API was literally right below the cut line of 5.0 from the business perspective.

Given our current priorities (finishing off 5.0, YARP, QUIC), I think we should be able to get to this API in a couple of months -- with plenty of time to finish it off in 6.0.
Let me know if you want to know more, or if I didn't fully answer your question / concerns. Thanks!

@karelz karelz modified the milestones: Future, 6.0.0 Sep 18, 2020
@jahmai-ca

@zlatanov I've been watching this for some time - look forward to you having the chance to get this out there 👍

@scalablecory
Contributor

scalablecory commented Dec 3, 2020

Okay, all the API details proposed make sense now that I've skimmed the RFC.

I am not a web socket expert, so here are some thoughts -- @zlatanov @davidfowl does this make sense?:

  • Is it valuable to keep the exact term NoContextTakeover? i.e. is it understood language that users would expect to carry across implementations? We may want to change this -- negative becomes positive ClientContextTakeover, or something a little more descriptive like ClientPersistDeflateContext.
  • The WebSocketOutputCompression enum is a little weird -- it looks like we should put this in as a parameter to SendAsync instead.
  • The memory savings are probably not important client-side, but I can see them being worthwhile server-side. I don't think it'll cost anything to enable for both, so we might as well, but if we want to be conservative we could probably leave the window/takeover settings off of ClientWebSocketOptions.

And general API review thoughts:

  • To avoid adding even more constructors in future, we should consider creating a WebSocketCreationOptions class that is passed to WebSocket.CreateFromStream that we can add properties to in the future.
  • Use nullables instead of -1 for window sizes.
  • Fold the compression options into their parent types instead of introducing a new class. This is consistent with the approach we've taken with grouped settings in SocketsHttpHandler.

So the final API would look something like:

// new
public sealed class WebSocketCreationOptions
{
    // taken from existing WebSocket.CreateFromStream params.
    public bool IsServer { get; set; }
    public string? SubProtocol { get; set; }
    public TimeSpan KeepAliveInterval { get; set; }

    // new
    public bool UseDeflateCompression { get; set; } = false;
    public int? ClientMaxDeflateWindowBits { get; set; } = null;
    public bool ClientDeflateContextTakeover { get; set; } = true; // or ClientPersistDeflateContext etc.
    public int? ServerMaxDeflateWindowBits { get; set; } = null;
    public bool ServerDeflateContextTakeover { get; set; } = true;
}

// existing
public sealed class ClientWebSocketOptions
{
    // new
    public bool UseDeflateCompression { get; set; } = false;
    public int? ClientMaxDeflateWindowBits { get; set; } = null;
    public bool ClientDeflateContextTakeover { get; set; } = true; // or ClientPersistDeflateContext etc.
    public int? ServerMaxDeflateWindowBits { get; set; } = null;
    public bool ServerDeflateContextTakeover { get; set; } = true;
}

// existing
public abstract class WebSocket
{
    // existing
    public static WebSocket CreateFromStream(Stream stream, bool isServer, string? subProtocol, TimeSpan keepAliveInterval);

    // new
    public static WebSocket CreateFromStream(Stream stream, WebSocketCreationOptions options);
}

This appears straightforward to implement.

We can probably use the existing deflate code from System.IO.Compression. I think we'll want to add calls to inflateReset and deflateReset and maintain a pool of inflaters/deflaters for when the NoContextTakeover options are used, so we can realize the memory savings without going bonkers allocating new zlib contexts for every message. @carlossanlop @ericstj might have some guidance here.
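
Not part of the proposal, but to illustrate the pooling idea, here is a minimal sketch; Func<T>/Action<T> stand in for a hypothetical inflater/deflater wrapper and its reset call, since no such public type exists today:

using System;
using System.Collections.Concurrent;

// Sketch only: rent a codec, reset it on return so its native zlib context is reused
// across messages instead of being reallocated (what inflateReset/deflateReset enable).
internal sealed class CodecPool<T> where T : class
{
    private readonly ConcurrentQueue<T> _pool = new();
    private readonly Func<T> _create;   // allocates a new zlib context
    private readonly Action<T> _reset;  // e.g. calls inflateReset/deflateReset

    public CodecPool(Func<T> create, Action<T> reset)
    {
        _create = create;
        _reset = reset;
    }

    public T Rent() => _pool.TryDequeue(out var codec) ? codec : _create();

    public void Return(T codec)
    {
        _reset(codec);        // clears compression state but keeps the allocated buffers
        _pool.Enqueue(codec);
    }
}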

@zlatanov
Contributor Author

zlatanov commented Dec 3, 2020

@scalablecory thanks for the feedback. Here are my thoughts:

I completely agree with renaming some of the properties so they make more sense for consumers who are not (and probably should not need to be) familiar with the RFC.

The implementation will be the same regardless of server/client, but I agree that having this omitted from the client is a good thing; it basically matches what we have in the browsers right now, where these options are handled by the browser itself.

Personally I am not fond of changing the API for sending messages. My reasoning is that consumers down the line don't need to be rebuilt to support compression; one only needs to change the setup of the WebSocket server/client. Another important point about having a WebSocketMessageFlags.Compress flag in the API is that consumers must be aware of whether the socket really has compression enabled. For example, a client might be created with compression options configured, but the server might not support compression, or it might even decline it during the handshake. The opposite is true as well: the server might have compression enabled, but connected clients might not support it.

I suggested the WebSocketOutputCompression enum to be able to temporarily suppress compression for messages where it doesn't make sense, but I can agree that omitting such a feature is not a big issue. Again, it aligns with the browsers' implementations, where once you have compression, all messages are compressed.

@scalablecory
Contributor

scalablecory commented Dec 3, 2020

> Personally I am not fond of changing the API for sending messages. My reasoning is that consumers down the line don't need to be rebuilt to support compression; one only needs to change the setup of the WebSocket server/client. Another important point about having a WebSocketMessageFlags.Compress flag in the API is that consumers must be aware of whether the socket really has compression enabled. For example, a client might be created with compression options configured, but the server might not support compression, or it might even decline it during the handshake. The opposite is true as well: the server might have compression enabled, but connected clients might not support it.
>
> I suggested the WebSocketOutputCompression enum to be able to temporarily suppress compression for messages where it doesn't make sense, but I can agree that omitting such a feature is not a big issue. Again, it aligns with the browsers' implementations, where once you have compression, all messages are compressed.

Do you have some specific examples of where you'd want to disable compression? I agree it'd be easy to just leave it off here, but if there's a compelling use case we should keep it.

If we do want to keep it, we can invert the feature -- instead of WebSocketMessageFlags.Compress, use WebSocketMessageFlags.NoCompression. This way existing API users would just get the default of compressing every message.

@CarnaViire
Member

@zlatanov yes, it would be great if you could create a draft PR with your changes at that point. You may additionally mark it as WIP if you wish.
It may take me some time to get through your changes (just setting expectations).

@CarnaViire
Member

Hey @zlatanov, I understand that you might have been busy, but have you had a chance to proceed with the implementation? Is there anything I can help you with?

@zlatanov
Contributor Author

@CarnaViire I am working on the implementation; it turned out to be a little trickier than I thought because there are lots of assumptions inside the current implementation. Is there somewhere we could chat and figure out how we can collaborate?

@CarnaViire
Member

@zlatanov sure, just drop me an email (it should be visible on GH for now) and we'll figure it out from there.

@ghost ghost added the in-pr label Feb 18, 2021
@ghost ghost removed the in-pr label Mar 8, 2021
@ghost ghost added the in-pr label Mar 8, 2021
@CarnaViire
Member

CarnaViire commented Mar 22, 2021

We've discussed that compression may have security implications if turned on blindly. (You may see a part of that discussion on the PR here #49304 (comment).) We want to make sure that users understand that this is dangerous and encourage them to read the docs and weigh the risks. We also think it is important at this point to be able to turn off compression for specific messages (if, e.g., they contain a secret). That's why we want to add a "Dangerous" prefix to the API and also add an API for per-message compression opt-out.

I've highlighted the changes we'd like to add to the proposal.

namespace System.Net.WebSockets
{
    public sealed class WebSocketDeflateOptions
    {
        public int ClientMaxWindowBits { get; set; } = 15;
        public bool ClientContextTakeover { get; set; } = true;
        public int ServerMaxWindowBits { get; set; } = 15;
        public bool ServerContextTakeover { get; set; } = true;
    }

    public sealed class WebSocketCreationOptions
    {
        public bool IsServer { get; set; }
        public string? SubProtocol { get; set; }
        public TimeSpan KeepAliveInterval { get; set; }
-       public WebSocketDeflateOptions? DeflateOptions { get; set; } = null;
+       public WebSocketDeflateOptions? DangerousDeflateOptions { get; set; } = null;
    }

    public partial class ClientWebSocketOptions
    {
-       public WebSocketDeflateOptions? DeflateOptions { get; set; } = null;
+       public WebSocketDeflateOptions? DangerousDeflateOptions { get; set; } = null;
    }

+   [Flags]
+   public enum WebSocketMessageFlags
+   {
+       // taken from existing WebSocket.SendAsync params
+       EndOfMessage = 1,
+
+       // new
+       DisableCompression = 2
+   }

    public partial class WebSocket
    {
        public virtual ValueTask SendAsync(ReadOnlyMemory<byte> buffer, WebSocketMessageType messageType, bool endOfMessage, CancellationToken cancellationToken);
+       public virtual ValueTask SendAsync(ReadOnlyMemory<byte> buffer, WebSocketMessageType messageType, WebSocketMessageFlags messageFlags, CancellationToken cancellationToken = default);
    }    
}

cc @scalablecory
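
For a sense of the client-side usage this shape implies, a sketch (the URL and the parameter values are placeholders):

var ws = new ClientWebSocket();

// Opting in is explicit; the "Dangerous" prefix is meant to push readers to the docs
// about compression-based attacks before they enable it.
ws.Options.DangerousDeflateOptions = new WebSocketDeflateOptions
{
    ClientMaxWindowBits = 12,
    ServerContextTakeover = false
};

await ws.ConnectAsync(new Uri("wss://example.com/socket"), CancellationToken.None);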

@CarnaViire CarnaViire added the api-ready-for-review label and removed the api-approved label Mar 22, 2021
@danmoseley
Member

Would EnableCompression work as well as DisableCompression? "Negative"-sense flags can be confusing, e.g. "// set disable compression to false". Plus, it sounds like you want people to opt in to it.

@CarnaViire
Member

@danmoseley our reason for naming it DisableCompression/NoCompress/whatever-opt-out was that you'd use that flag rarely. If you decide to use compression, you would want to use it for most messages, and only turn it off if you want to send a secret once in a while. If you send a lot of secrets, it's better not to use compression at all... And if you use the old SendAsync overload, where you don't have a flag, it will mean that compression is on. If we go with EnableCompression, the compression will be off by default, so we'll be forcing everyone to use the new overload everywhere. It may not be such a bad thing, though.

@zlatanov
Contributor Author

zlatanov commented Mar 23, 2021

I like the added proposal changes.

I don't think inverting the flag to EnableCompression is a good choice. I think it will pollute the existing code too much, and it will give the impression that if I send a message with this flag on, the message will be compressed. This is not the case, because it also depends on whether compression has been enabled in the first place. This double enabling seems more confusing than having to opt out.

@danmoseley
Member

I defer to whatever API review says ☺

@scalablecory scalablecory added the blocking label Mar 29, 2021
@bartonjs
Member

bartonjs commented Mar 30, 2021

Video

  • We discussed whether the DisableCompression should be Enable or Disable, and agree Disable makes sense as a per-message option
  • The WebSocketMessageFlags enum should name the zero member (None, Default, etc)
  • Consider defaulting the WebSocketMessageFlags parameter in SendAsync
  • We're OK with the new "Dangerous" prefix, so long as it's really expected to be rarely used -- don't dilute the meaning of "Dangerous".
    • Ensure that there's consistency or followup action for higher level APIs that may be doing similar things, such as SignalR/ASP.NET.
namespace System.Net.WebSockets
{
    public sealed class WebSocketCreationOptions
    {
        public bool IsServer { get; set; }
        public string? SubProtocol { get; set; }
        public TimeSpan KeepAliveInterval { get; set; }
-       public WebSocketDeflateOptions? DeflateOptions { get; set; } = null;
+       public WebSocketDeflateOptions? DangerousDeflateOptions { get; set; } = null;
    }

    public partial class ClientWebSocketOptions
    {
-       public WebSocketDeflateOptions? DeflateOptions { get; set; } = null;
+       public WebSocketDeflateOptions? DangerousDeflateOptions { get; set; } = null;
    }

+   [Flags]
+   public enum WebSocketMessageFlags
+   {
+       None = 0,
+
+       // taken from existing WebSocket.SendAsync params
+       EndOfMessage = 1,
+
+       // new
+       DisableCompression = 2
+   }

    public partial class WebSocket
    {
+       public virtual ValueTask SendAsync(ReadOnlyMemory<byte> buffer, WebSocketMessageType messageType, WebSocketMessageFlags messageFlags, CancellationToken cancellationToken = default);
    }    
}
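
For illustration, server-side creation against the approved shape would look something like this; stream is a placeholder for an already-accepted connection, and the option values are assumptions, not recommendations:

var webSocket = WebSocket.CreateFromStream(stream, new WebSocketCreationOptions
{
    IsServer = true,
    KeepAliveInterval = TimeSpan.FromSeconds(30),
    // Opt in deliberately; see the discussion above about compression-based attacks.
    DangerousDeflateOptions = new WebSocketDeflateOptions
    {
        ServerMaxWindowBits = 10,
        ServerContextTakeover = false
    }
});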

@bartonjs bartonjs added the api-approved label and removed the api-ready-for-review label Mar 30, 2021
@CarnaViire
Member

We've discussed the things left for consideration.

1) Whether the "Dangerous" prefix might be overused so it will lose its purpose
We agreed on keeping the Dangerous prefix. We think the danger is completely non-obvious, so it's good to call it out. We don't think enough people will use it to dilute the prefix's meaning.

2) Defaulting WebSocketMessageFlags parameter in SendAsync
We don't think None would be a good default. It seems likely that you're either passing EndOfMessage every time, or passing it some of the time... but definitely not never. EndOfMessage is also not a good default, because then it would be counter-intuitive to call SendAsync explicitly with None to state that the message will have a continuation. That's why we believe it makes sense to ask the caller to always specify the flag -- no defaulting (see the sketch below).

If anyone sees any issues with our decisions, please let us know.
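
As an illustration of that reasoning, with no default the call sites read unambiguously; ws, part1/part2 and ct are placeholders:

// First fragment of a multi-fragment text message: EndOfMessage not set.
await ws.SendAsync(part1, WebSocketMessageType.Text, WebSocketMessageFlags.None, ct);

// Final fragment: EndOfMessage set, equivalent to endOfMessage: true in the existing overload.
await ws.SendAsync(part2, WebSocketMessageType.Text, WebSocketMessageFlags.EndOfMessage, ct);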

@danmoseley
Member

Is it possible that there might in future be a solid mitigation to this dangerousness? E.g., a form of compression that pads to a fixed length. In that case, might we have a WebSocketDeflateOptions member that wasn't dangerous, and yet was accessed with DangerousDeflateOptions?

@geoffkizer
Contributor

> Is it possible that there might in future be a solid mitigation to this dangerousness? E.g., a form of compression that pads to a fixed length. In that case, might we have a WebSocketDeflateOptions member that wasn't dangerous, and yet was accessed with DangerousDeflateOptions?

I doubt it. For compression to be useful, it has to compress to something smaller than the original. If it does, then it seems like it's still subject to a CRIME attack.

@geoffkizer
Contributor

To be clear, the general mitigation is to ensure you don't use the same compression context to compress data from different sources, at least one of which is untrusted. Unfortunately it's not easy to figure out how to enforce that generally.
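
For illustration, the per-message opt-out approved above is one way to apply that mitigation at the call site; ws, secretPayload and ct are placeholders:

// A message that carries a secret alongside attacker-influenced data:
// skip compression for it so its compressed length reveals nothing about the secret.
await ws.SendAsync(secretPayload, WebSocketMessageType.Text,
    WebSocketMessageFlags.EndOfMessage | WebSocketMessageFlags.DisableCompression, ct);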

CarnaViire pushed a commit that referenced this issue Apr 28, 2021
Adds support for RFC 7692 "compression extensions for websocket".

Closes #31088
@ghost ghost removed the in-pr label Apr 28, 2021
@ghost ghost locked as resolved and limited conversation to collaborators May 28, 2021