[SIP-39] Global Async Query Support #9190
WebSockets are pretty heavy-handed here, in my opinion. Short polling works well and is very load balancer friendly. The "con" listed above due to connection limits needn't be an insurmountable problem since these are quick polling calls (or am I missing something?). I'm pretty concerned about the practical implications of deploying this in corporate environments. Routing HTTP is a well-understood, easy-to-scale problem that is most likely to play nicely with any environment folks are deploying in, and won't require infra changes to LBs etc., which are not always under the control of the Superset "owners". I'm just not seeing the ROI on this vs. keeping it simple.
We've seen significant load on both webservers and the metadata database managing the SQL Lab short polling requests. We're quite concerned that if we add short polling for every chart in a dashboard it would have a bunch of performance implications. One option might be to do a bulk short poll on dashboards, making a single short poll to track all the in-flight chart requests. Also, we've seen issues around the short polling in SQL Lab where if any short poll fails, then the entire query is marked as failed on the front end. I wonder if the reliability of short polling makes it less ideal here than using websockets that reconnect when dropped.
The bulk polling could make sense to reduce load, but my bigger point here is that scaling a fleet of HTTP servers (or even routing a particular endpoint to a distinct dedicated fleet) is a pretty simple problem that can be accommodated in most environments, whereas websockets and sidecars have a lot of interesting problems in that area and may not even be feasible in some environments. For the last point, that sounds like a front-end bug, not a problem inherent to small HTTP GET requests for short polling. Polling is much simpler in terms of the tech involved (and can be stateless), so I am skeptical that small front-end bugs will cease to be a problem if we adopt a more complicated technology to replace the foundation. And to be clear, I'm not meaning to dismiss the idea of websockets; I just want to make sure that the simpler alternative and its trade-offs are discussed. It also might be worth discussing the relative strength of the "cons" for SSE. "Some browsers" don't support it, which means the subset of IE11 users (1.55% globally) not on Windows 10, as well as Opera.
Short polling is an option, but it adds considerable load to the metadata database at scale to authenticate and check permissions for each pending object in the polling request(s). If the user has a number of open tabs, we then have each client potentially hammering the metadata DB pretty hard. Contrast this with the websocket option (or SSE+HTTP/2), which requires a single authentication action upon connect, and a single authorization for each result set only when fetched. Websockets are more work to set up at the infrastructure level, though there are similar concerns with enabling HTTP/2 in many load balancers, without which SSE is not a viable option. My initial inclination when drafting this SIP was to recommend SSE+HTTP/2 rather than websockets, but the Flask app is not well suited for persistent connections, making a sidecar app more feasible. Websockets are also arguably more ubiquitous for realtime communication on the client at this point. With regard to scaling, the fact that sticky sessions are not required due to the reconnection strategy allows for flexible horizontal scaling of the websocket sidecar app, and there are several patterns for load balancing websocket servers. Are there specific scaling or implementation concerns that we should address? Async query support is currently optional in Superset, and should remain so, IMO. The async solution we agree upon should be a balance of performance and feasibility, but we should consider short polling as a fallback if websockets are not available for whatever reason.
For the metadata/auth load problem, could we just issue short-lived tokens for the session, with caching, to remove the need to hammer that database? (Decomposition into simple problems with simple solutions may be preferable.) Last pushback on the bigger idea 😁: Websockets have their uses--for example, I've found them very useful in the past for things like streaming 60hz point cloud updates and telemetry data--but ultimately they are an RTC protocol, and that is really not the situation we have here; to me this feels like overkill. I concede it will get the job done, but I fear it will be unnecessarily "expensive", it will introduce complexity, it will introduce a host of new bugs and other fallout, and I think the infra/ops requirements may lead to reduced adoption in environments where there is friction to messing with LBs etc. I think I've said my 2 cents on that... looking forward, if the community agrees to go with WebSockets I would ask that the SIP be modified to account for these concerns and keep it enterprise-friendly. A couple of requests for consideration:
Socket.io defaults to long-polling, upgrading to websockets if available. Using Socket.io without websockets would almost certainly saturate the browser's HTTP connection limit with just a few tabs open (without HTTP/2). This was the main reason behind recommending vanilla websockets vs. Socket.io. I'm interested in hearing more about your experience with
I'm sure you've seen all of this @robdiciuccio but leaving it here for the general audience, it touches on a lot of the tradeoffs discussed here: https://moduscreate.com/blog/fast-polling-vs-websockets-2/ I had forgotten that the polling transport used long polling. :-/ I think there are some options you can configure on the transport, but I'm less confident now that short-polling with socket.io is an option. When I used it previously, we were using websockets (not long polling) on I know I said I had said my 2 cents, but a couple more questions came to mind (sorry!):
Hi @DiggidyDave, I want to chime in here. I don't think short-polling is a good option architecturally. Short-polling necessarily introduces latency as the system waits between polling intervals. The lower the latency desired in the system, the greater the load placed on the server to answer short-poll requests. Maintaining an open websocket connection solves this problem. To respond to your question on tabs, yes, I think "many tabs" is a problem. Our user research indicates that a small number of power users create the majority of content inside organizations. These users tend to have job titles like "business analyst", and for them a large number of open tabs is the norm. For this reason I would say we want non-active tabs to be actively updated. Think of a situation where you fire a query against a data warehouse in one tab, then switch away to work on something else. Ideally, there would be some manner of notification on update to let the user know that the query has finished. This is possible with active websockets without increasing server overhead. With short-polling, we can DDoS ourselves from inactive tabs, but if we disable short-polling on inactive tabs we're going to negatively impact the user experience.
Those are valid points, but here are some quick alternative solutions:
Like I said, I agree websockets will work. I'm just afraid it will cause a lot of complexity and instability, and will reduce adoption, in exchange for little or no benefit over simpler approaches.
Feedback on the alternate solution:
To the first bullet: I don't disagree at all that websockets have the potential to outperform polling w.r.t. frontend-to-backend latency (assuming the backend--which is, of course, polling something, somewhere--is written efficiently). What I am questioning is whether reducing the extra 500-1000ms max of latency for long-lived queries (and 250-500ms for short-lived ones) on non-active tabs is actually a requirement (I obviously think it is not) and whether it is worth complicating and destabilizing Superset over it. OK, I genuinely think I've said my piece now :-) Thanks for the good-faith back-and-forth here. I think it's healthy.
What are the expected I also don't see any discussion about the migration plan/implications for the Additionally, we rely heavily on the
@DiggidyDave The use cases for long-running queries in SQL Lab and loading dashboards in a performant manner feel like very different use cases. Polling could (and currently does) satisfy the SQL Lab use case, though there is significant room for improvement there. The latency introduced by polling in dashboards would have a much larger impact on user experience. Agreed that this discussion is healthy! Architecture proposals should be actively debated and scrutinized so we all benefit from better-informed decision-making. @williaster
We're going to take another look at using Socket.io in order to support environments where websockets are not an option.
Much appreciated. FWIW here is a bit about bypassing the initial/default long-polling connection (which it usually establishes first as a fallback): https://socket.io/docs/client-api/#With-websocket-transport-only
@DiggidyDave Socket.io requires sticky sessions at the load balancer in order to properly perform long-polling: https://socket.io/docs/using-multiple-nodes/. Not sure if this is possible in your environment?
Interesting, I'm not actually sure about that, I'd have to dig into it. :-/ The main thing I think we care about is any client-side interface between the business logic and raw native websockets, to reserve that option to swap out the impl with a short-polling approach. (socket.io is just a super popular wrapper that happens to have the transport abstraction built-in, as is https://github.com/sockjs/sockjs-client and others) There are other options that are not coupled with a server architecture, like this one that is a small client-side wrapper: https://github.com/lukeed/sockette Or this one that is a promise-based wrapper (obvs, a polling impl could fulfill promises just as easily as far as client code is concerned): https://github.com/vitalets/websocket-as-promised Or anything else. As long as websockets are behind an interface of some kind there will be a path to unblock environments that can't use ws. Something as simple as this wrapping ws would be perfectly fine:
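For illustration, a minimal version of that kind of wrapper might look like the following (the interface and all names here are hypothetical, not from the SIP):

```typescript
// UI code depends only on this interface; configuration decides whether the
// implementation underneath is a websocket, short polling, or anything else.
interface AsyncEventTransport {
  connect(channelId: string): void;
  onEvent(handler: (data: string) => void): void;
  disconnect(): void;
}

// Minimal shape we need from a websocket-like object.
type SocketLike = {
  close(): void;
  onmessage: ((msg: { data: string }) => void) | null;
};

// Thin wrapper over a websocket. The factory parameter exists so that
// non-websocket environments (and tests) can substitute another transport.
class WebSocketTransport implements AsyncEventTransport {
  private handlers: Array<(data: string) => void> = [];
  private socket: SocketLike | null = null;

  constructor(
    private baseUrl: string,
    private socketFactory: (url: string) => SocketLike,
  ) {}

  connect(channelId: string): void {
    this.socket = this.socketFactory(`${this.baseUrl}?channel=${channelId}`);
    this.socket.onmessage = (msg) => this.handlers.forEach((h) => h(msg.data));
  }

  onEvent(handler: (data: string) => void): void {
    this.handlers.push(handler);
  }

  disconnect(): void {
    this.socket?.close();
  }
}
```

A short-polling implementation of the same `AsyncEventTransport` interface could then be swapped in purely via configuration, which is the point being made above.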
Anyway, I appreciate you looking at that.
@DiggidyDave the abstraction approach sounds good, as it appears that Socket.io is not really appropriate for this use case. I've updated the body of the SIP with notes on the client-side abstraction.
When I enable
The vote for this SIP PASSED with 5 binding +1 votes, 3 non-binding +1 votes, and 0 -1 votes on 3/21/2020.
With this SIP still being behind an experimental feature flag, and not actively maintained, I've been thinking about ways we could simplify the architecture and finally make this generally available in a forthcoming Superset release. Specifically, I found that the websocket implementation didn't significantly improve the UX compared to the polling solution. In retrospect, I feel most of @DiggidyDave's comments turned out to be true: the solution ended up becoming too complex, and didn't gain critical adoption within the community. However, the feature is still as relevant today as it was when this SIP was opened, and I think stabilizing it is very important because Superset's current synchronous query execution model causes lots of issues:
To simplify the architecture and reuse existing functionality, I propose the following:
The async execution flow is changed to be similar to SQL Lab async execution, with the following changes:
Some random thoughts:
I assume we need a new SIP for this, but I wanted to drop this comment here to get initial feedback. |
Thank you for the comment @villebro. I really like the idea of removing extra layers and reusing existing features such as
I think this point ☝🏼 would be essential/required as the result of this work. We need to reduce complexity and only keep solutions that are maintained.
Yep. We definitely need a SIP to discuss the details. |
Agreed with Michael on all points. And I'm very thankful for all of your insights and input here. I'd be SO excited to see this feature mainstream, and I think both the UX and the infra scenario will improve. Is there any downside to just making Redis a required component, if that simplifies things further? Also, just to play devil's advocate on the removal of websockets, there are a few distinct advantages I kind of dream of:
At some point, I think we'll want any or all of the above, so we'll want to revisit the idea of having a websocket solution in place. If this is not the time, so be it... mainstreaming GAQ would be a clear priority. But if there's anything worth keeping/shelving here for a future effort, it seems potentially worth it. |
@rusackas I think we could use Server-Sent Events for those use cases, since all of them are unidirectional. It's a simpler architecture than Websockets. One thing I'm worried about with an async-only mode (which I think is a good idea overall) is the additional latency when using sub-second databases. We should make sure Superset is not adding a lot of latency when running the queries asynchronously — e.g., if we're going to poll we should poll aggressively at first and then back off.
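The aggressive-then-back-off polling suggested above could be sketched as a simple schedule generator; the interval values here are illustrative, not a proposal:

```typescript
// Exponential backoff schedule for result polling: start fast so sub-second
// databases see little added latency, then back off toward a ceiling for
// long-running queries.
function pollingDelays(
  attempts: number,
  initialMs = 100,
  factor = 2,
  maxMs = 5000,
): number[] {
  const delays: number[] = [];
  let delay = initialMs;
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(delay, maxMs)); // never exceed the ceiling
    delay *= factor;
  }
  return delays;
}
```

For example, `pollingDelays(6)` yields `[100, 200, 400, 800, 1600, 3200]`: a fast query is detected within ~100ms, while a long-running one settles into a 5-second cadence.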
@villebro Maybe it would be good to open a SIP as [WIP], and paste these last 4 comments there, so we don't lose this valuable feedback when discussing the SIP. |
If there's no major opposition to moving ahead with GAQ2 as a SIP then I can open it up today. |
@betodealmeida this is actually a really good point - Celery will definitely add unpleasant overhead to sub-second dbs. So maybe we shouldn't totally remove async mode after all. But for Trino-type OLAPs I think it'll definitely be great. |
[SIP-39] Global Async Query Support
Motivation
Proposed Change
Provide a configuration setting to enable async data loading for charts (dashboards, Explore) and SQL Lab. Instead of multiple synchronous requests to the server to load dashboard charts, we issue async requests to the server which enqueue a background job and return a job ID. A single persistent websocket connection to the server is opened to listen for results in a realtime fashion.
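The request/response shapes this flow implies might look roughly like the following sketch (field names are illustrative assumptions, not the final Superset API):

```typescript
// Hypothetical shape returned immediately by the async chart/query endpoint:
// a job handle instead of data.
interface AsyncJobResponse {
  jobId: string;
  channelId: string; // the tab's subscription channel, echoed back
  status: "pending";
}

// Hypothetical event later pushed over the websocket connection.
interface JobUpdateEvent {
  jobId: string;
  status: "done" | "error";
  resultUrl?: string; // cached result is then fetched over plain HTTP
}

// The server enqueues a background job and answers right away.
function enqueueChartRequest(
  channelId: string,
  nextJobId: () => string,
): AsyncJobResponse {
  return { jobId: nextJobId(), channelId, status: "pending" };
}
```

The key property is that the HTTP request cycle ends as soon as the job is enqueued; result delivery is decoupled and arrives on the channel as a `JobUpdateEvent`.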
Websockets via a sidecar application
Pros
Cons
Approach
Each open tab of Superset would create a unique "channel" ID to subscribe to. A websocket connection is established with the standalone websocket server as an HTTP request that is then upgraded to `wss://` if authentication with the main Flask app is successful. Requests for charts or queries in SQL Lab are sent via HTTP to the Flask web app, including the tab's channel ID. The server enqueues a Celery job and returns a job ID to the caller. When the job completes, a notification is sent over the WSS connection, which triggers another HTTP request to fetch the cached data and display the results.

Why a separate application?
The current Flask application is not well suited for persistent websocket connections. We have evaluated several Python and Flask-based solutions, including flask-SocketIO and others, and have found the architectural changes to the Flask web app to be overly invasive. For that reason, we propose that the websocket service be a standalone application, decoupled from the main Flask application, with minimal integration points. Superset's extensive use of Javascript and the mature Node.js websocket libraries make Node.js and TypeScript (SIP-36) an obvious choice for implementing the sidecar application.
Reconnection
The nature of persistent connections is that they will, at some point, disconnect. The system should be able to reconnect and "catch up" on any missed events. We evaluated several PubSub solutions (mainly in Redis) that could enable this durable reconnection story, and have determined that Redis Streams (Redis ≥ 5.0) fits this use case well. By storing a last received event ID, the client can pass that ID when reconnecting to fetch all messages in the channel from that point forward. For security reasons, we should periodically force the client to reconnect to revalidate authentication status.
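The catch-up-on-reconnect logic can be sketched as a pure function over stream entries, mirroring what a Redis Streams read from a stored last-event ID would return (the entry shapes here are illustrative):

```typescript
// Redis Stream IDs have the form "<ms>-<seq>"; everything strictly after the
// client's last received ID is a missed event to replay on reconnect.
interface StreamEntry {
  id: string;      // e.g. "1589373215000-0"
  payload: string; // JSON-encoded job event
}

function compareStreamIds(a: string, b: string): number {
  const [aMs, aSeq] = a.split("-").map(Number);
  const [bMs, bSeq] = b.split("-").map(Number);
  return aMs !== bMs ? aMs - bMs : aSeq - bSeq;
}

// On reconnect, the client sends its last received ID and the server replays
// every newer entry in the channel's stream.
function missedEvents(stream: StreamEntry[], lastId: string): StreamEntry[] {
  return stream.filter((e) => compareStreamIds(e.id, lastId) > 0);
}
```

A client that stores `lastId` and passes it on reconnect therefore never loses a completion notification, which is what makes sticky sessions unnecessary.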
Why not send result data over the socket connection?
While it is possible to send result data over the websocket connection, keeping the scope of the standalone service to event notifications will reduce the security footprint of the sidecar application. Fetching (potentially sensitive) data will still require necessary authentication and authorization checks at load time by routing through the Flask web app. Sending large datasets over the websocket protocol introduces potential unknown performance and consistency issues of its own. Websockets are not streams, and "the client will only be notified about a new message once all of the frames have been received and the original message payload has been reconstructed."
Query Cancellation
Queries may be "cancelled" by calling the `/superset/stop_query` endpoint in SQL Lab, which simply sets `query.status = QueryStatus.STOPPED` for the running query. This cancellation logic is currently implemented only for queries running against Hive and Presto databases. Queries that have been enqueued could be cancelled prior to executing the query by adding a check in the Celery worker logic. It is also possible to revoke a Celery task, which will skip execution of the task, but it won't terminate an already executing task. Due to the limited query cancellation support in DB-API drivers, some of which is discussed here, comprehensive query cancellation functionality should be explored in a separate SIP. That said, query cancellation requests may still be issued to the existing (or similar) endpoint when users intentionally navigate away from a dashboard or Explore view with charts in a loading state.

Query deduplication
The `query` table in the Superset metadata database currently includes only queries run via SQL Lab. Adapting this for use with dashboards and charts may have a larger impact than we're willing to accept at this time. Instead, a separate key-value store (e.g. Redis) may be used for tracking and preventing duplicate queries. Using a fast KV store also allows us to check for duplicate queries more efficiently in the web request cycle.

Each query issued to the backend can be fingerprinted using a hashing algorithm (SHA-256 or similar) to generate a unique key based on the following:
Prior to executing a query, a hash (key) is generated and checked against a key-value store. If the key does not exist, it is stored with a configured TTL, containing an object value with the following properties:
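As a rough sketch, the fingerprint-and-check step could look like the following; the hashed fields and the stored record shape are illustrative assumptions, and the in-memory map stands in for the Redis check-and-set:

```typescript
import { createHash } from "crypto";

// Illustrative fingerprint inputs — the SIP's exact field list may differ.
interface QueryIdentity {
  databaseId: number;
  schema: string;
  sql: string; // or a serialized chart query context
}

function queryFingerprint(q: QueryIdentity): string {
  // A canonical serialization keeps the SHA-256 key deterministic.
  const canonical = JSON.stringify([q.databaseId, q.schema, q.sql]);
  return createHash("sha256").update(canonical).digest("hex");
}

// In-memory stand-in for the KV store described above.
const store = new Map<string, { state: "running" | "success" }>();

// Returns true if this query should actually execute; a duplicate instead
// just subscribes its channel/job ID to the in-flight query's result.
function shouldExecute(q: QueryIdentity): boolean {
  const key = queryFingerprint(q);
  if (store.has(key)) return false;
  store.set(key, { state: "running" });
  return true;
}
```

In the real implementation the set-if-absent would need to be atomic (e.g. Redis `SET NX` with a TTL) to avoid a race between two identical requests.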
Another key is created to track the Channels and Job IDs that should be notified when this query completes (e.g. `<hash>_jobs` → `List[ChannelId:JobId]`). If a duplicate query is issued while one is currently running, the Job ID is pushed onto the list, and all relevant channels are notified via websocket when the query completes. If a query is issued and a cache key exists with `state == "success"`, a notification is triggered immediately via websocket to the client. If queries are "force" refreshed, query deduplication is performed only for currently running queries.

New or Changed Public Interfaces
New dependencies (optional)
Migration Plan and Compatibility
Asynchronous query operations are currently an optional enhancement to Superset, and should remain that way. Configuring and running Celery workers should not be required for basic Superset operation. As such, this proposed websocket approach to async query operations should be an optional enhancement to Superset, available via a configuration flag.
For users who opt to run Superset in full async mode, the following requirements will apply under the current proposal:
Browsers that do not support websockets (very few) should fall back to synchronous operation or short polling.
Migration plan:
Rejected Alternatives
SSE (EventSource) over HTTP/2
Server-Sent Events (SSE) streams data over a multiplexed HTTP/2 connection. SSE is a native HTML5 feature that allows the server to keep the HTTP connection open and push data to the client. Thanks to the multiplexing feature of the HTTP/2 protocol, the number of concurrent requests per domain is not limited to 6-8, but it is virtually unlimited.
Pros
`SETTINGS_MAX_CONCURRENT_STREAMS`, but this should be no smaller than 100

Cons
NOTE: HTTP/2 multiplexing could still potentially be valuable alongside the websocket features, and should be investigated further.
Long polling (aka Comet)
Pros
Cons
Short polling
Pros
Cons
Thanks to @etr2460 @suddjian @nytai @willbarrett @craig-rueda for feedback and review.
Update 2020-03-10
Per the below discussion, an abstracted interface will be used on the client in order to support transport mechanisms other than Websockets and the proposed sidecar application. The final form of this abstraction will take shape during implementation, but the goal will be to have UI elements interact with the generic interface, while configuration will determine which transport is used underneath.