
Add toxic slicer #66

Merged · 6 commits · Jul 23, 2015

Conversation

connor4312 (Contributor)

When working on relatively low-level network applications or clients, it's easy to assume that TCP data will always come in nicely grouped "packets" that contain all the data you need, rather than viewing the connection as a stream. When working locally that may even be the apparent behaviour, but over real-world network connections it's certainly not something that can be relied upon.

This PR adds a "slicer" filter which takes data and slices it up into small random chunks before sending them out. The slicer also includes a simple delay option, to bypass buffering behaviour that could otherwise nullify the sliced writes.
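The core idea can be sketched roughly like this (a minimal illustration of the slicing behaviour, not the PR's actual code; `sliceBytes` and its parameters are hypothetical names):

```go
package main

import (
	"fmt"
	"math/rand"
)

// sliceBytes splits data into random-sized chunks averaging
// averageSize bytes, varying by up to +/- sizeVariation bytes.
// Hypothetical helper for illustration only.
func sliceBytes(data []byte, averageSize, sizeVariation int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		// Pick a chunk size in [averageSize-sizeVariation, averageSize+sizeVariation].
		n := averageSize + rand.Intn(2*sizeVariation+1) - sizeVariation
		if n < 1 {
			n = 1
		}
		if n > len(data) {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	payload := []byte("this would arrive as one write without the toxic")
	for _, c := range sliceBytes(payload, 8, 4) {
		fmt.Printf("%q\n", c)
	}
}
```

Each chunk would then be written to the downstream connection separately, so a reader that assumes one read per logical message will see the problem immediately.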

pushrax commented Jul 16, 2015

Great idea for a toxic!

xthexder (Contributor)

This is actually really cool. I found a bug in one of my apps about a week ago related to this exact problem.
As it is, toxiproxy will cause this behaviour in some cases (usually if latency or bandwidth toxics are enabled), but having it explicit like this is great.

I'm not the biggest fan of adding the delay in this toxic, since it's semi-duplicated functionality and reduces the bandwidth the toxic can process.
I haven't looked at this in-depth but TCPConn.SetWriteBuffer might be another option.

type SlicerToxic struct {
	Enabled bool `json:"enabled"`
	// Average number of bytes to slice at
	AverageSize int `json:"averageSize"`
}
Contributor:

The convention we've been using for json would be average_size.

connor4312 (Contributor, Author)

I've adjusted it in response to the comments.

As far as the delay goes: I agree it's not a beautiful solution, but often the OS and/or standard networking libraries buffer input until it's ready to be processed. Go sets TCP NoDelay to true by default, so there should be no outgoing buffering, but the consumer's setup is unknown. This was the simplest way I saw around that. The other option was to change the ordering so that the latency toxic runs after the slicer, but requiring two toxics to be used in tandem for the slicer to be effective in many cases seemed awkward. If there's a more graceful solution I'm all for it.


select {
case <-stub.interrupt:
	return
Contributor:

If this toxic is interrupted at this point in the loop, the remainder of c will be dropped. In this case we would want to just send off the remainder like in the latency toxic.
Toxics can be interrupted without the stream closing, so we need to make sure it's in a usable state.
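The suggested pattern can be sketched like this (a minimal illustration of flushing the remainder on interrupt; the channel and function names are loosely modeled on the discussion, not taken from the PR's code):

```go
package main

import "fmt"

// pipeChunks sends chunks downstream one at a time. If interrupted
// mid-stream, it flushes the unsent remainder before returning so
// no bytes are dropped and the stream stays in a usable state.
func pipeChunks(chunks [][]byte, interrupt <-chan struct{}, output chan<- []byte) {
	for i, c := range chunks {
		select {
		case <-interrupt:
			// Send off everything not yet written, like the latency toxic.
			for _, rest := range chunks[i:] {
				output <- rest
			}
			return
		case output <- c:
		}
	}
}

func main() {
	interrupt := make(chan struct{})
	close(interrupt) // interrupt immediately, for demonstration
	output := make(chan []byte, 3)
	pipeChunks([][]byte{[]byte("a"), []byte("b"), []byte("c")}, interrupt, output)
	close(output)

	total := 0
	for c := range output {
		total += len(c)
	}
	fmt.Println("bytes delivered:", total)
	// prints bytes delivered: 3
}
```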

Contributor (Author):

Fixed

xthexder (Contributor)

I think that's it for my comments.
@sirupsen maybe you want to take a look at this too.

sirupsen (Contributor)

@connor4312 LGTM, nice job! Can you add it to the CHANGELOG as well?

connor4312 (Contributor, Author)

Thanks! Changelog has been updated.

connor4312 (Contributor, Author)

Rebased off master due to a conflict.

sirupsen added a commit that referenced this pull request Jul 23, 2015
@sirupsen sirupsen merged commit b84a58e into Shopify:master Jul 23, 2015
sirupsen (Contributor)

@connor4312 released as part of 1.2.0
