Use short keys for OTLP/JSON #413

Conversation

tigrannajaryan
Member

@tigrannajaryan commented Jul 5, 2022

Resolves #412

This change uses short one- or two-letter keys for all fields when JSON
encoding is used. This results in an uncompressed payload that is about
1.3-1.5 times smaller.

Here is a size comparison (sizes in bytes) using sample data from the exp-otelproto test bench:

```
===== Encoded sizes
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Logs                 52577 by   [1.000]      4601 by [1.000]    
ShortKeys/Logs                 39668 by   [1.325]      4344 by [1.059]    
                                                                          
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Trace/Attribs        41704 by   [1.000]      3189 by [1.000]    
ShortKeys/Trace/Attribs        31998 by   [1.303]      3060 by [1.042]    
                                                                          
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Trace/Events         49302 by   [1.000]      1917 by [1.000]    
ShortKeys/Trace/Events         34396 by   [1.433]      1806 by [1.061]    
                                                                          
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Metric/Histogram     42376 by   [1.000]      1067 by [1.000]    
ShortKeys/Metric/Histogram     27071 by   [1.565]       839 by [1.272]    
                                                                          
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Metric/MixOne       184836 by   [1.000]      2778 by [1.000]    
ShortKeys/Metric/MixOne       119031 by   [1.553]      2143 by [1.296]    
                                                                          
Encoding                       Uncomp/json[Improved]   zlib/json[Improved]
OTLP 0.18/Metric/MixSeries    707615 by   [1.000]     11482 by [1.000]    
ShortKeys/Metric/MixSeries    457010 by   [1.548]      9829 by [1.168]
```

Unfortunately this is a breaking change for the default configuration of the
Protobuf/JSON marshaler, which marshals field names in lowerCamelCase.
It is not a breaking change for marshalers which use the "OrigName=true"
JSON marshalling option; nothing changes in the output when "OrigName=true"
is used.

I do not see an easy way to make this change gracefully. It would require
duplicating the entire proto, giving the duplicate messages different names,
and then handling the duplicates in the receivers. That is a lot of work and
can also be error prone. I think in this particular case we should not attempt
to handle it gracefully and should simply stick to our formal stability
guarantees, which for JSON are "Alpha" and allow any change at any time.

### Example Outputs

Current JSON output before this change, using the default lowerCamelCase marshaler:

```json
{
    "resourceSpans": [
        {
            "resource": {
                "attributes": [
                    {
                        "key": "StartTimeUnixnano",
                        "value": {
                            "intValue": "12345678"
                        }
                    },
                    {
                        "key": "Pid",
                        "value": {
                            "intValue": "1234"
                        }
                    },
                    {
                        "key": "HostName",
                        "value": {
                            "stringValue": "fakehost"
                        }
                    },
                    {
                        "key": "ServiceName",
                        "value": {
                            "stringValue": "generator"
                        }
                    }
                ]
            },
            "scopeSpans": [
                {
                    "scope": {
                        "name": "io.opentelemetry"
                    },
                    "spans": [
                        {
                            "traceId": "AQAAAAAAAADw3rwKeFY0Eg==",
                            "spanId": "AQAAAAAAAAA=",
                            "name": "load-generator-span",
                            "kind": "SPAN_KIND_CLIENT",
                            "startTimeUnixNano": "1572516672000000013",
                            "endTimeUnixNano": "1572516672000000013",
                            "attributes": [
                                {
                                    "key": "db.mongodb.collection",
                                    "value": {
                                        "stringValue": "!##$"
                                    }
                                }
                            ],
                            "events": [
                                {
                                    "timeUnixNano": "1572516672000000013",
                                    "attributes": [
                                        {
                                            "key": "te",
                                            "value": {
                                                "intValue": "1"
                                            }
                                        }
                                    ]
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```

JSON output both before and after this change, using the OrigName=true marshaler:

```json
{
    "resource_spans": [
        {
            "resource": {
                "attributes": [
                    {
                        "key": "StartTimeUnixnano",
                        "value": {
                            "int_value": "12345678"
                        }
                    },
                    {
                        "key": "Pid",
                        "value": {
                            "int_value": "1234"
                        }
                    },
                    {
                        "key": "HostName",
                        "value": {
                            "string_value": "fakehost"
                        }
                    },
                    {
                        "key": "ServiceName",
                        "value": {
                            "string_value": "generator"
                        }
                    }
                ]
            },
            "scope_spans": [
                {
                    "scope": {
                        "name": "io.opentelemetry"
                    },
                    "spans": [
                        {
                            "trace_id": "AQAAAAAAAADw3rwKeFY0Eg==",
                            "span_id": "AQAAAAAAAAA=",
                            "name": "load-generator-span",
                            "kind": "SPAN_KIND_CLIENT",
                            "start_time_unix_nano": "1572516672000000013",
                            "end_time_unix_nano": "1572516672000000013",
                            "attributes": [
                                {
                                    "key": "db.mongodb.collection",
                                    "value": {
                                        "string_value": "!##$"
                                    }
                                }
                            ],
                            "events": [
                                {
                                    "time_unix_nano": "1572516672000000013",
                                    "attributes": [
                                        {
                                            "key": "te",
                                            "value": {
                                                "int_value": "1"
                                            }
                                        }
                                    ]
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```

JSON output after this change, using proposed short keys and default
lowerCamelCase marshaler:

```json
{
    "s": [
        {
            "r": {
                "a": [
                    {
                        "k": "StartTimeUnixnano",
                        "v": {
                            "i": "12345678"
                        }
                    },
                    {
                        "k": "Pid",
                        "v": {
                            "i": "1234"
                        }
                    },
                    {
                        "k": "HostName",
                        "v": {
                            "s": "fakehost"
                        }
                    },
                    {
                        "k": "ServiceName",
                        "v": {
                            "s": "generator"
                        }
                    }
                ]
            },
            "s": [
                {
                    "i": {
                        "n": "io.opentelemetry"
                    },
                    "s": [
                        {
                            "ti": "AQAAAAAAAADw3rwKeFY0Eg==",
                            "si": "AQAAAAAAAAA=",
                            "n": "load-generator-span",
                            "k": "SPAN_KIND_CLIENT",
                            "s": "1572516672000000013",
                            "e": "1572516672000000013",
                            "a": [
                                {
                                    "k": "db.mongodb.collection",
                                    "v": {
                                        "s": "!##$"
                                    }
                                }
                            ],
                            "ev": [
                                {
                                    "t": "1572516672000000013",
                                    "a": [
                                        {
                                            "k": "te",
                                            "v": {
                                                "i": "1"
                                            }
                                        }
                                    ]
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```

@tigrannajaryan requested review from a team on July 5, 2022 18:34
@tigrannajaryan
Member Author

@open-telemetry/javascript-approvers please review.

@tigrannajaryan
Member Author

@t2t2 please review.

@kentquirk
Member

Vendors will not be able to avoid constructing and managing two copies of the protobuf as customers adopt this piecemeal. We've already had a difficult time dealing with previous breaking changes.

Under compression, which should be the usual case in a high-volume environment, there would be very little effective change in payload size. Wouldn't we be better off just recommending that receivers accept compressed payloads?

@tigrannajaryan
Member Author

tigrannajaryan commented Jul 5, 2022

Wouldn't we be better off just recommending that receivers accept compressed payloads?

Receivers are already required to accept compressed payloads.

This change is a performance optimization for JS sources which don't have an easy/cheap way to compress the payload.

@tigrannajaryan force-pushed the feature/tigran/short_json branch from d74b93f to ed39dc5 on July 5, 2022 20:56
@alanwest
Member

alanwest commented Jul 5, 2022

I can't help but think it's hilarious that this is PR #413. https://www.rfc-editor.org/rfc/rfc9110.html#name-413-content-too-large

@Aneurysm9
Member

I would second the sentiment that compression at the transport layer is the answer here. This impairs human readability and is likely to result in duplicate effort for receivers as noted by @kentquirk. The CompressionStreams API would provide a native solution for this and is supported in Chromium-based browsers already. For browsers where that is not an option there are pure JS implementations that can be used.

This seems like a significant change for questionable benefit and I am not in favor.
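
For reference, a minimal sketch of what browser-side gzip via the Compression Streams API mentioned above could look like, assuming a browser that implements `CompressionStream`; the endpoint URL is a placeholder and error handling is omitted:

```ts
// Sketch: gzip-compress an OTLP/JSON payload in the browser before export.
async function exportCompressed(
  payload: unknown,
  url = "http://localhost:4318/v1/traces", // placeholder endpoint
): Promise<Response> {
  const json = JSON.stringify(payload);
  // Wrap the JSON bytes in a stream and pipe them through a gzip compressor.
  const compressed = new Blob([json]).stream().pipeThrough(new CompressionStream("gzip"));
  const body = await new Response(compressed).arrayBuffer();
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "Content-Encoding": "gzip" },
    body,
  });
}
```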

@Oberon00
Member

Oberon00 commented Jul 6, 2022

The original motivation for this ticket was that in the only scenario where we need JSON (the browser) we don't want to use transport compression because of performance issues. open-telemetry/oteps#208 (comment)

@Oberon00
Member

Oberon00 commented Jul 6, 2022

Also copying my comment from the issue here:

We could even use the numeric tag values from the binary protocol (stringized) as JSON keys (decimal or hex or base64). Then we would also get more stable / renaming-resistant keys.

That would be a significant benefit (though it would further decrease readability, of course; you would have to guess from the values then).

@ladd

ladd commented Jul 6, 2022

I'd vote for preserving the longer names. I've been using OTLP-JSON in a non-browser application (native mobile), partly to avoid the extra weight of including a Protobuf implementation. In that environment, it's easy to compress a JSON payload.

@kentquirk
Member

kentquirk commented Jul 6, 2022

If I understand it, the argument is "some combinations of lightweight browsers and small applications want to avoid the performance hit from implementing compression in the browser".

But those same applications are going to be so heavily instrumented that it's worth obfuscating all the JSON for all the clients worldwide (and doing it with a breaking change)? I feel like that's a significant stretch.

I think there might be room for a lightweight client-side spec that is intended for cheap implementation in JSON, and also intended to be sent to a specific receiver in the collector that could expand it into the standard protocol. After all, if your client is that thin, you almost certainly don't want it sending data direct to third party sites anyway. But that should start as an OTEP.

@jsuereth
Contributor

jsuereth commented Jul 8, 2022

It's things like this where I wish transit had gained more popularity.

Specifically, transit found that encoding a symbol table of JSON keys, replacing the keys with short unique strings, and expanding them again on decode actually led to better performance than raw JSON; it is a rather efficient encoding practice and a simple idea. It seems like the same idea you have here for JSON, just with the symbol table kept "offline".

From my perspective:

  • We SHOULD provide the symbol-table as an option for performance sensitive users. I think this is a good feature.
  • We SHOULD NOT require this for our JSON format. There's an aspect of human-readable JSON that's important.
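
A rough sketch of the "offline symbol table" idea being discussed: a fixed key map shared by producer and consumer, applied recursively before serialization. The table fragment and function below are illustrative, not the mapping proposed in this PR:

```ts
// Hypothetical fragment of a shared symbol table (not the PR's actual mapping).
const KEY_TABLE: Record<string, string> = {
  resourceSpans: "s",
  resource: "r",
  attributes: "a",
  key: "k",
  value: "v",
  stringValue: "s",
  intValue: "i",
};

// Recursively replace known long keys with their short forms before JSON.stringify.
function shortenKeys(node: unknown): unknown {
  if (Array.isArray(node)) return node.map(shortenKeys);
  if (node !== null && typeof node === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(node as Record<string, unknown>)) {
      out[KEY_TABLE[k] ?? k] = shortenKeys(v);
    }
    return out;
  }
  return node; // primitives pass through unchanged
}

// Usage: const wire = JSON.stringify(shortenKeys(exportRequest));
```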

@Aneurysm9
Member

From my perspective:

* We SHOULD provide the symbol-table as an option for performance sensitive users.  I think this is a good feature.
* We SHOULD NOT require this for our JSON format.  There's an aspect of human-readable JSON that's important.

I don't see how we reconcile these two points. If we provide the option for producers then we effectively require that consumers implement it.

@MadVikingGod

From my perspective:

* We SHOULD provide the symbol-table as an option for performance sensitive users.  I think this is a good feature.
* We SHOULD NOT require this for our JSON format.  There's an aspect of human-readable JSON that's important.

I don't see how we reconcile these two points. If we provide the option for producers then we effectively require that consumers implement it.

One way to support both simultaneously is to not accept the two forms with identical qualifiers (endpoint/header): use something like a content-type header or a separate receiver (endpoint) to distinguish minified from non-minified payloads. By specifying how we expect to minify and providing a reference client/receiver, we could confine the complexity of this change to the code paths that use the minified version.

The current proposal is going to cause a lot of pain for anyone currently receiving JSON.
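
A sketch of how that Content-Type dispatch could look on the receiving side. The media type and the expansion table are hypothetical placeholders; a real implementation would need per-message-type tables, since short keys are only unique within a message:

```ts
// Hypothetical, flat expansion table (real short keys are scoped per message type).
const EXPAND_TABLE: Record<string, string> = { r: "resource", a: "attributes", k: "key", v: "value" };

function expand(node: unknown): unknown {
  if (Array.isArray(node)) return node.map(expand);
  if (node !== null && typeof node === "object") {
    return Object.fromEntries(
      Object.entries(node as Record<string, unknown>).map(([k, v]) => [EXPAND_TABLE[k] ?? k, expand(v)]),
    );
  }
  return node;
}

// Dispatch on Content-Type: only the minified variant needs key expansion.
function parseOtlpJson(contentType: string, body: string): unknown {
  const doc = JSON.parse(body);
  return contentType === "application/x-otlp-short+json" ? expand(doc) : doc;
}
```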

@dyladan
Member

dyladan commented Jul 11, 2022

From my perspective:

* We SHOULD provide the symbol-table as an option for performance sensitive users.  I think this is a good feature.
* We SHOULD NOT require this for our JSON format.  There's an aspect of human-readable JSON that's important.

I don't see how we reconcile these two points. If we provide the option for producers then we effectively require that consumers implement it.

I think the idea is to require it for consumers but not producers. The requirement is just written from a client/producer perspective. Similar to how gzip is implemented today.

@Aneurysm9
Member

I think the idea is to require it for consumers but not producers. The requirement is just written from a client/producer perspective. Similar to how gzip is implemented today.

That makes it required. Even if producers don't need to implement it, consumers will, because they can't choose their partners. For something like gzip, which is effectively transparent and can be implemented via existing off-the-shelf proxies, that's a non-issue. This is much more significant.

@MSNev

MSNev commented Jul 13, 2022

From a browser perspective, bringing in a compression library and spending the CPU required for GZIP, or to construct this (even with short codes), is problematic.

Using the numeric value of the field (which has been mentioned several times already) rather than an additional "shortcode" would be MUCH preferable, as the short code would require an additional lookup table. And I'm personally not interested in the human-readable argument; my focus is getting the events packaged and off the box as quickly and efficiently as possible without affecting the user experience.

Using GZIP has several problems:

  • The package size for including the compression library.
  • The time required to perform the compression. As this will (likely, by default) occur on the main thread, the UI will "pause" while the payload is being compressed. Some applications have extremely tight requirements when it comes to CPU usage, and even a "pause" of 1ms is noticeable by the user. Yes, there are web workers to help with this, but that has other issues in relation to final page navigation and ensuring that all "events" are sent.

A MUCH better approach would be to have the Attributes encoded WAY more efficiently from a JSON perspective, as something like the following, with the caveat that when using this format some of the types are simply not supported (specifically the difference between int and double; and bytes would always need to be encoded as a string or just not supported), since the decoder would need to make assumptions about the type:

```
{
  "key1": true / false => boolean
  "key2": 0, 0.1, -1 => just a numeric so use double??
  "key3": "xxx" => string
  "key4": [] => Array of something
  "key5": {    => Nested Attribute representation
    "key1": true / false => boolean
    "key2": 0, 0.1, -1 => just a numeric so use double??
    "key3": "xxx" => string
    "key4": [] => Array of something
  }
}
```
```json
{
    "resourceSpans": [
        {
            "resource": {
                "attributes": {
                    "StartTimeUnixnano": "12345678",
                    "Pid": "1234",
                    "HostName": "fakehost",
                    "ServiceName": "generator"
                }
            },
            "scopeSpans": [
                {
                    "scope": {
                        "name": "io.opentelemetry"
                    },
                    "spans": [
                        {
                            "traceId": "AQAAAAAAAADw3rwKeFY0Eg==",
                            "spanId": "AQAAAAAAAAA=",
                            "name": "load-generator-span",
                            "kind": "SPAN_KIND_CLIENT",
                            "startTimeUnixNano": "1572516672000000013",
                            "endTimeUnixNano": "1572516672000000013",
                            "attributes": {
                                "db.mongodb.collection": "!##$"
                            },
                            "events": [
                                {
                                    "timeUnixNano": "1572516672000000013",
                                    "attributes": {
                                        "te": "1"
                                    }
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```
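
For illustration, a sketch of the flattening described above: collapsing the OTLP KeyValue list into a plain JSON object and dropping the explicit value-type tags. The TypeScript shapes are simplified stand-ins, not the generated OTLP types:

```ts
// Simplified attribute shape; the real AnyValue has more variants (arrays, kvlists, bytes).
interface KeyValue {
  key: string;
  value: { stringValue?: string; boolValue?: boolean; intValue?: string; doubleValue?: number };
}

function flattenAttributes(attrs: KeyValue[]): Record<string, string | number | boolean | undefined> {
  const out: Record<string, string | number | boolean | undefined> = {};
  for (const { key, value } of attrs) {
    // Whichever variant is set wins; the decoder must infer the type from the
    // JSON value itself, which is exactly the caveat noted above.
    out[key] = value.stringValue ?? value.boolValue ?? value.intValue ?? value.doubleValue;
  }
  return out;
}
```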

@yurishkuro
Member

yurishkuro commented Jul 13, 2022

Just throwing this out there, but why can't we just implement a proto encoding in JavaScript? I remember, back in the day, the Zipkin Java SDK, which was used on Android and was also subject to size limitations, simply ditched the dependency on the Thrift library and implemented serialization manually, which was about 100 lines of code.
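
To give a sense of scale, a sketch of what such hand-rolled encoding involves: emitting a varint tag and a length-delimited string field takes only a few lines. The field number below is illustrative, not taken from the OTLP proto:

```ts
// Append a base-128 varint to the byte buffer.
function writeVarint(bytes: number[], value: number): void {
  while (value > 0x7f) {
    bytes.push((value & 0x7f) | 0x80);
    value >>>= 7;
  }
  bytes.push(value);
}

// Append a length-delimited (wire type 2) string field.
function writeStringField(bytes: number[], fieldNumber: number, text: string): void {
  const utf8 = new TextEncoder().encode(text);
  writeVarint(bytes, (fieldNumber << 3) | 2); // tag = field number << 3 | wire type
  writeVarint(bytes, utf8.length);
  for (const b of utf8) bytes.push(b);
}

// Usage: encode a hypothetical string field with field number 4.
const buf: number[] = [];
writeStringField(buf, 4, "load-generator-span");
const payload = new Uint8Array(buf);
```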

@Aneurysm9
Member

A MUCH better approach would be to have the Attributes encoded WAY more efficiently from a JSON perspective

This would be a different protocol. If we want to do that and treat it separately with its own content type, decoupling it from OTLP and removing this issue as a blocker for OTLP 1.0, I'd have no problems with moving in that direction. I think ultimately it might be a bad idea as it then means we need to maintain two protocols and many consumers would still have to implement two protocols, but my primary concern right now is stabilizing OTLP so that it can be taken as a dependency by systems that have stability requirements.

Honestly, I think we're making a mountain out of a molehill, and I'm strongly dubious of claims that 1ms of latency is user perceptible. Even at 60 FPS each frame is 16ms, and detecting single frame drops is not something I experience. Package size for including a pure-JS gzip implementation is a concern, but given that the standardization effort for native compressed streams in browsers is already well underway, this is a temporary concern at best. We're designing a protocol that will be used for years; do we really want to upend it at the last minute for an issue that may disappear well before the end of the protocol's useful life?

@ladd

ladd commented Jul 14, 2022

Specifically, it looks like CompressionStream already has fairly good uptake: https://caniuse.com/?search=compressionstream

@Aneurysm9
Member

Just throwing this out there, but why can't we just implement a proto encoding in Javascript?

I'm not even sure we'd need to implement it ourselves. https://github.com/protobufjs/protobuf.js/ exists and appears to be under active development, with a v7.0.0 release just ten days ago. It claims excellent performance compared to JSON serialization.
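
A sketch of what using protobuf.js for binary OTLP export might look like, assuming the OTLP .proto files are bundled with the application; the file path and usage are illustrative:

```ts
import protobuf from "protobufjs";

// Load the bundled OTLP trace service proto and encode an export request to bytes.
async function encodeTraceRequest(request: object): Promise<Uint8Array> {
  const root = await protobuf.load("opentelemetry/proto/collector/trace/v1/trace_service.proto");
  const ExportTraceServiceRequest = root.lookupType(
    "opentelemetry.proto.collector.trace.v1.ExportTraceServiceRequest",
  );
  const message = ExportTraceServiceRequest.fromObject(request);
  return ExportTraceServiceRequest.encode(message).finish();
}
```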

@yurishkuro
Member

@Aneurysm9 nice! "protobufjs/minimal" is 6.5Kb.

@tigrannajaryan
Member Author

Let me try to summarize the important findings:

  • There is an option to do gzip compression using the CompressionStreams API, which is currently available in Chromium-based browsers only.
  • There is an option to use binary Protobuf; https://github.com/protobufjs/protobuf.js/ is 6.5KB and claims performance faster than native JSON. Protobuf is also more compact than JSON with short keys (about 1.6-2.2x in my benchmarks).

We also discussed this in the Spec meeting yesterday, and no one on the call believed that we need JSON with short keys given that the other options are available. JSON with short keys loses its readability while not gaining as much in performance as the other options we have.

Given the above, I am closing this PR and will mark the linked issue as "won't do". If there are new arguments the issue can be re-opened (but it must happen before we declare the JSON format Stable).

Linked issue: Consider adding short keys for JSON encoding (#412)