Eth2.0 Call 37 Agenda #141
Hi Danny, I have one proposed item to add to the agenda.

The agenda item: Can we use a protobufs-based spec as the basis for a uniform REST API in Eth 2.0, and can we use Prysm's as a starting point?

The context: The Infura team has been considering how we are going to support the upcoming multi-client testnet, and of course we are planning for the eventual mainnet genesis as well. As Infura has extensively experienced the pains resulting from Eth 1.0's API not being standardized across client implementations, we want to help and be involved early in the API standardization process for Eth 2.0 so that users don't suffer from poor interoperability in the future.

For this reason, we will soon open an issue in the eth2.0-APIs repo proposing that the community adopt Prysm's protobufs-based spec as a starting point for a common Eth 2.0 API spec. Since OpenAPI can be generated from protobufs, we will not be advocating that all clients adopt gRPC, merely that we use a protobufs-based spec as the basis for a uniform REST API. We appreciate that non-spec and Prysm-specific elements of the protobufs spec will need to be excised, and that documentation will need to be improved.

Infura has already begun dedicating internal resources to this effort, and we intend to produce an API conformance testing tool based on the (hopefully generative) spec that is adopted in the eth2.0-apis repo.
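To make the "protobufs as the source of truth" idea concrete, here is a minimal sketch of what such a definition might look like. The service, message, and path names below are hypothetical and not taken from Prysm's actual ethereumapis definitions; tools like grpc-gateway can derive a REST gateway and a swagger/OpenAPI document from the `http` annotations:

```protobuf
syntax = "proto3";

package eth2.api.sketch;

// Standard import used by grpc-gateway for HTTP annotations.
import "google/api/annotations.proto";

// Hypothetical beacon chain service: gRPC clients call it directly,
// while grpc-gateway derives a REST endpoint (and OpenAPI docs) from
// the http option attached to each RPC.
service BeaconChain {
  rpc GetBlock(GetBlockRequest) returns (BeaconBlockResponse) {
    option (google.api.http) = {
      get: "/eth/v1/beacon/blocks/{slot}"
    };
  }
}

message GetBlockRequest {
  uint64 slot = 1;
}

message BeaconBlockResponse {
  uint64 slot = 1;
  bytes state_root = 2;
  bytes parent_root = 3;
}
```

Under this model the .proto files would be the canonical spec, and the REST/OpenAPI surface would be generated artifacts.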
@skmgoldin, with regards to licensing, prysmaticlabs/ethereumapis is Apache 2.0 and has no dependencies on Prysm or other GPL-licensed code. The only dependencies within the protobufs are github.com/grpc-ecosystem/grpc-gateway (BSD-3) for swagger, and well-known protos like https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto. We've taken great care to avoid any Prysm-specific elements in this repository, but I understand there may be additional RPCs in our specifications that would not be mandatory for a minimum implementation, if such a definition is required. For additional context, there have been previous open discussions:
Some other interesting links
Hello. I know this is an Eth2 call, but I believe there is an overlap. The EIP Improvement Process group currently has a survey gathering feedback on how EIPs are currently decided for deployment. We are reaching out to stakeholders in the EIP process, and feedback from the Eth2 team would also be appreciated. Survey link: https://docs.google.com/forms/d/e/1FAIpQLSeadXscoQgrKznUOAEB_jSzNNFKHWDEFJxKH1LpDsDsC6mXpw/viewform
I won't be on the call tonight, but I have some thoughts on the topic. If the one and only goal is to build an HTTP REST API with concise and polished OpenAPI documentation, then I would find it difficult to argue that defining it with protobuf and then machine-translating it into OpenAPI is the best way to go about it. However, I assume that we don't have only one goal. There's another goal in here that is perhaps inferred: ensure the API is compatible with gRPC. So now we have two known goals:

1. An HTTP REST API with concise and polished OpenAPI documentation.
2. An API that is compatible with gRPC.
The idea from @skmgoldin is to define one of these formats and then derive the other one from it. This is sensible, but given we're talking about two disparate languages, I think it's inevitable that the translation is lossy or restrictive. Seeing as these two goals conflict a little, I think it will help to prioritize. From what I can tell, @skmgoldin's goal is to define a "uniform REST API in Eth 2.0" whilst "not ... advocating for all clients to adopt gRPC". Therefore I'd have to assume the priorities are:

1. A uniform REST API in Eth 2.0.
2. gRPC compatibility.
The current suggestion is to define protobuf and then derive OpenAPI. I've done some research into this and here's what I've found:

1. Anyone who wants to modify the HTTP/JSON spec would first need to learn protobuf and its associated tooling.
2. It's unclear exactly what is lost or constrained when converting protobuf to OpenAPI.
Considering these findings, I think that (1) stands out the most; that's quite an onerous and tangential learning path if you want to modify a JSON HTTP spec, and I think low-barrier-to-entry collaboration is crucial here. I couldn't get detail on (2), so I find it hard to judge the impact. If there are downsides to converting from protobuf to OpenAPI, then perhaps we can go the other way: define OpenAPI and derive protobuf? This would seem to fit better with our priorities by shielding the highest-priority goal from the losses of conversion. Indeed, this seems possible.
Given that our primary goal is to produce a slick OpenAPI specification that's easy to collaborate on, I would say we stick with YAML. It's already the de facto human-readable serialization for Eth2, and it's the most expressive for the primary task at hand (slick HTTP API docs). Seeing as gRPC compatibility is something that is desired, add an openapi->protobuf converter to the CI for the repository that fails whenever the OpenAPI doesn't map to protobuf. The end result is a maximally collaborative and expressive OpenAPI, along with protobufs for those so inclined.
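A CI gate of this kind could start small: rather than running a full openapi->protobuf conversion, it could first flag JSON-Schema constructs that have no clean proto3 counterpart. This is a minimal, hypothetical sketch (the rule set and names are assumptions, not an exhaustive mapping), operating on a spec already parsed into a dict:

```python
# Sketch of a CI check that rejects OpenAPI schema constructs without a
# clean proto3 message equivalent. Illustrative only: the key set below
# is an assumption, not an exhaustive protobuf-compatibility ruleset.

UNMAPPABLE_KEYS = {"anyOf", "not", "patternProperties"}


def find_unmappable(schema, path="#"):
    """Recursively collect JSON-Schema constructs that won't translate
    cleanly to protobuf, returning JSON-pointer-style locations."""
    problems = []
    if isinstance(schema, dict):
        for key, value in schema.items():
            if key in UNMAPPABLE_KEYS:
                problems.append(f"{path}/{key}")
            problems.extend(find_unmappable(value, f"{path}/{key}"))
    elif isinstance(schema, list):
        for i, item in enumerate(schema):
            problems.extend(find_unmappable(item, f"{path}/{i}"))
    return problems


# A tiny spec fragment standing in for the parsed OpenAPI YAML.
spec = {
    "components": {
        "schemas": {
            "Block": {"type": "object", "properties": {"slot": {"type": "string"}}},
            "Response": {"anyOf": [{"$ref": "#/components/schemas/Block"}]},
        }
    }
}

issues = find_unmappable(spec)
print(issues)  # → ['#/components/schemas/Response/anyOf']
```

In CI this would run against the YAML parsed with any standard loader and exit non-zero when the list is non-empty, failing the build before an unconvertible construct lands in the spec.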
I also agree with the points raised by @paulhauner. The second point that was raised was:

> can we use Prysm's as a starting point?
I've not looked into Prysm's API currently (so I cannot speak to the differences from the current specification this would entail), but we (Lighthouse) have been working under the assumption that the current specification in the eth2.0-apis repo was the specification to follow. The Eth2 spec has shifted and we have adjusted our API slightly without updating this repo, so our API currently differs (minimally) from the API in the repository. I would propose we keep the current specification as a base and iterate from there. I'm curious to know if other teams have also been implementing based on the current version of this API.
For background on the choice of OpenAPI, see ethereum/consensus-specs#1012 - the core idea being that we want to maximise compatibility with all kinds of existing internet infrastructure, for example by making requests for simple things like blocks cache-friendly. Let's assume that we do not wish to revisit this choice at this point - this is what Nimbus has been doing (we're in the process of building out the infrastructure needed to implement the eth2.0-apis repo specification).

Going protobuf -> openapi to design an OpenAPI specification seems backwards for all the reasons @paulhauner points out - it is difficult enough to produce a high-quality API that we will want to avoid additional tooling and constraints.

Regarding gRPC: if we want to introduce the additional requirement that the spec also comply with gRPC constraints, it would be useful to know more about what these constraints are and whether they are reasonable from an OpenAPI point of view. Every compatibility and conversion layer incurs a cost, so finding out what that cost is would be a first step - as well as having ideas about where this cost should be borne (by not using the full OpenAPI potential, or by having a less natural gRPC interface).

Regarding using the Prysm API as a base: if Infura wishes to dedicate resources to conversion, it would indeed be useful to make the conversion once and compare the outcome - also to identify flaws and weak spots in the API as it's published in the eth2.0-apis repo. This would be a useful base for discussion on how to iterate on these APIs and make them better.
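To illustrate the cache-friendliness argument, here is a hypothetical OpenAPI path fragment (the path and schema names are illustrative, not taken from the actual eth2.0-apis spec). A plain GET with the identifier in the URL lets ordinary HTTP caches and CDNs serve block requests without understanding anything Ethereum-specific:

```yaml
# Hypothetical fragment in OpenAPI 3 style; names are illustrative,
# not the actual eth2.0-apis definitions.
paths:
  /eth/v1/beacon/blocks/{slot}:
    get:
      summary: Retrieve the beacon block at the given slot
      parameters:
        - name: slot
          in: path
          required: true
          schema:
            type: string   # uint64 carried as a decimal string
      responses:
        '200':
          description: The requested block
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/BeaconBlock'
```

Because the resource is addressed purely by URL and fetched with GET, standard `Cache-Control`/`ETag` machinery applies with no custom logic.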
I've finally adapted my fork choice integration tests so that they match the existing eth2.0 spec test format (and also work with the new BLS signatures); spec version is v0.10.1.
From my understanding, one of the blockers to using the eth2.0-apis repo for Prysm was the choice to use hex strings for byte encoding. That said, this is not a hill worth dying on. The system-level eth2 APIs are divergent in many other respects from Eth1. I'd like to address whether this is really the sticking point on Prysmatic's end, and whether others have strong thoughts about the use of hex-string vs another encoding for bytes.
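For a sense of what the trade-off looks like in practice, a small sketch comparing the two encodings most relevant here: 0x-prefixed hex (the Eth1 JSON-RPC convention) versus base64 (the default protobuf JSON mapping for `bytes`). The zeroed pubkey is purely illustrative:

```python
# Compare byte-encoding options for JSON APIs: 0x-prefixed hex
# (Eth1 convention) vs base64 (protobuf's default JSON mapping).
import base64

pubkey = bytes(48)  # a BLS pubkey is 48 bytes (zeroed here for illustration)

hex_form = "0x" + pubkey.hex()
b64_form = base64.b64encode(pubkey).decode("ascii")

print(len(hex_form))  # 98 chars: "0x" + 2 chars per byte
print(len(b64_form))  # 64 chars: denser, but unfamiliar to Eth1 tooling

# Both round-trip losslessly; the choice is about convention, not fidelity.
assert bytes.fromhex(hex_form[2:]) == pubkey
assert base64.b64decode(b64_form) == pubkey
```

So the difference is roughly a third more bytes on the wire for hex, against consistency with every existing Eth1 client and block explorer.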
Quick notes from the call - with actions! 😀
No strong feelings. Pubkeys and hashes in
Ethereum 2.0 Implementers Call 37 Notes |
Eth2.0 Call 37 Agenda
Meeting Date/Time: Thursday 2020/4/9 at 14:00 GMT
Meeting Duration: 1.5 hours
YouTube Live Stream Link