Add Compression Mode #24
Comments
If somebody wants to work on this, here's the link to the MySQL spec.
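For orientation, the MySQL compressed protocol wraps each packet (or run of packets) in a 7-byte header: a 3-byte little-endian compressed length, a 1-byte compressed sequence number, and a 3-byte little-endian uncompressed length (set to 0 when the payload is sent as-is), followed by a zlib-deflated body. The Go sketch below only illustrates that framing; the `wrapCompressed` function and the `minCompressLength` constant are illustrative names, not part of any driver API.

```go
// Sketch: wrap an already-serialized MySQL packet stream into one
// compressed-protocol frame. Header layout (7 bytes, little-endian):
//   3 bytes  length of compressed payload
//   1 byte   compressed sequence number
//   3 bytes  length of payload before compression (0 = not compressed)
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
)

// minCompressLength mirrors the server's heuristic: very small payloads
// are framed but sent uncompressed (uncompressed-length field set to 0).
const minCompressLength = 50

func wrapCompressed(payload []byte, seq byte) ([]byte, error) {
	body := payload
	uncompressedLen := 0

	if len(payload) >= minCompressLength {
		var buf bytes.Buffer
		zw := zlib.NewWriter(&buf)
		if _, err := zw.Write(payload); err != nil {
			return nil, err
		}
		if err := zw.Close(); err != nil {
			return nil, err
		}
		body = buf.Bytes()
		uncompressedLen = len(payload)
	}

	frame := make([]byte, 7+len(body))
	frame[0] = byte(len(body))
	frame[1] = byte(len(body) >> 8)
	frame[2] = byte(len(body) >> 16)
	frame[3] = seq
	frame[4] = byte(uncompressedLen)
	frame[5] = byte(uncompressedLen >> 8)
	frame[6] = byte(uncompressedLen >> 16)
	copy(frame[7:], body)
	return frame, nil
}

func main() {
	pkt := bytes.Repeat([]byte("SELECT * FROM t;"), 16)
	frame, err := wrapCompressed(pkt, 0)
	if err != nil {
		panic(err)
	}
	fmt.Printf("original %d bytes -> frame %d bytes\n", len(pkt), len(frame))
}
```

On the read side the same header is parsed in reverse: if the uncompressed-length field is non-zero, the body is zlib-inflated before the normal packet parser sees it.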
I see this issue closed, with the comment that it wasn't quicker and that all connections in the pool would be affected. While I understand that, there are 2 cases where I have seen the use of the compressed protocol to be beneficial:
@sjmudd this issue is not closed; the referenced one (#278) is. Just out of curiosity, so I can better see where you're coming from:
Sorry, they are 2 different use cases. I recently built something (for me to learn Go) called pstop: see github.com/sjmudd/pstop. Connecting from home via a VPN to the company servers to look at example statistics was quite slow (performance_schema data can be quite large, and my link was slow due to tunneling etc.). Here I think reducing the data sent to the client would speed up the resulting query times. This is one example.

The other example is on a normal network: a cluster of dedicated processing clients collecting information from a cluster of db servers is very aggressive in the data it pulls out, and the result sets are very large (a lot of processing happens on the client to off-load some of the work from the database servers). Here it was observed that the 1Gb network link was sometimes saturated, which triggered TCP back-off and thus extra delays. Enabling compression on this cluster of boxes allowed the network bandwidth to drop (yes, there was likely a small extra overhead in using compression), but the gain from the network link no longer being saturated was definitely worth it. For this cluster in particular the application explicitly enables compression and benefits from it (as a whole).

I guess I was hoping the option would be there. There is no explicit comment in the documentation I am aware of that this is not available, so I had to hunt around and finally found this issue, which explains it has not been implemented yet. I understand your reasoning, but given I have seen a real use case where compression was helpful, I thought it might be worthwhile to mention it. I hope that clarifies my previous post?
https://github.com/go-sql-driver/mysql/issues/1125
https://github.com/go-sql-driver/mysql/issues/278

I have read the issues above, and I found that this issue has been open from 2013 until now (2023), but go-sql-driver still does not support compressed mode. I wish go-sql-driver could support compressed mode, because it could improve performance a lot when reading many rows (100k and more) from MySQL. In my project, the program reads many rows from MySQL when it starts, and it often executes large SQL (600 MB+) in a transaction through go-sql-driver. But now it seems that go-sql-driver will never support compressed mode.
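If the driver did add support, one natural shape for it would be a DSN parameter alongside the existing options. The sketch below is purely hypothetical: the `compress=true` parameter shown here is an assumption about how the feature might be exposed, not an option the driver documents in this discussion.

```go
// Hypothetical usage only: the "compress" DSN parameter below is an
// assumption about how the feature might look, not an existing option.
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Everything before "?" is a normal DSN; "compress=true" is hypothetical.
	dsn := "user:password@tcp(db.example.com:3306)/mydb?compress=true"

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Large result sets are where compression would pay off most.
	rows, err := db.Query("SELECT id, payload FROM big_table")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var payload string
		if err := rows.Scan(&id, &payload); err != nil {
			log.Fatal(err)
		}
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```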
Also upvoting this: we've discovered that, using Aurora DB, our I/O costs are starting to exceed 50% of the overall cluster cost. So if we can reduce the amount of data transmitted to and received from Aurora, we'll have substantial cost savings. This is mainly an issue because we're saving TEXT columns, with rows totaling more than 5 kB each.
In sysown/proxysql#4204, the possibility of using ProxySQL as a middle layer to enable compression is being discussed.
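In that setup the Go application keeps using the driver unchanged, pointing its DSN at the ProxySQL listener (commonly port 6033) on the same host or a nearby box, while ProxySQL handles the compressed protocol on its backend connections to MySQL. A minimal sketch, with host names and ports as placeholders:

```go
// Sketch of the ProxySQL-as-middle-layer idea: the application connects to a
// local ProxySQL listener (an uncompressed, low-latency hop), and ProxySQL is
// configured to use the compressed protocol towards MySQL.
// Host names and ports below are placeholders.
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// 6033 is ProxySQL's default MySQL listener port.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:6033)/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected via ProxySQL; compression is handled proxy-side")
}
```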
Implement connection compression