
tmax adjustments for individual adapters/bidders #3965

Open
scr-oath opened this issue Oct 9, 2024 · 15 comments
@scr-oath
Contributor

scr-oath commented Oct 9, 2024

As an example of the need: we have a bidder that's hosted in only one location, yet we have deployed PBS around the world. We need to adjust that bidder's network latency buffer accordingly in locations that are "farther" away, so that it is told the appropriate amount of time it can use to answer.

Proposal:

  • Extend the per-bidder configuration to include tmax adjustments
  • Do the calculations per bidder in addition to globally (falling back to the global values for bidders that aren't overridden)
  • Ensure the timeout honors these calculations
  • Ensure the bidder receives its adjusted tmax
@bretg
Contributor

bretg commented Oct 18, 2024

Thanks @scr-oath. If we supported the "secondary bidders" feature, different timeouts for secondary bidders would make sense.

But until then, I'm not really fond of having different timeouts for different bidders. PBS returns only one response to the client, so there's only one timeout value. It doesn't make sense to me that bidderA would be told 200ms while bidderB gets 300ms just because bidderA's endpoint is far away. We might as well let bidderA have that extended time.

FWIW, we added region-specific timeout adjustment in #2398 -- the way this works specifically in PBS-Go is described in https://docs.prebid.org/prebid-server/endpoints/openrtb2/pbs-endpoint-auction.html#pbs-go-1

For instance, we've changed our network buffer in APAC because our network there isn't as good as in other regions. But all bidders get the same tmax value.

@bretg
Contributor

bretg commented Oct 18, 2024

Actually, give bidders the same timeout, but optionally decrement the tmax actually sent to bidders.

@bretg bretg changed the title [Placeholder from PMC] tmax adjustments for individual adapters/bidders tmax adjustments for individual adapters/bidders Oct 18, 2024
@bretg bretg moved this from Triage to Needs Requirements in Prebid Server Prioritization Oct 18, 2024
@bretg bretg self-assigned this Oct 18, 2024
@Slind14

Slind14 commented Oct 20, 2024

My understanding is that this is not about the PBS Adapter timeout at all. It is about signaling the external bidder how much time they have to run the auction. This signaling should consider the network latency between the originating PBS instance and the bidder.

E.g., if both are in the same region and the network latency is < 20ms, then the bidder may use 800ms; however, if it is in another region with a network latency of < 200ms, then the bidder may only use 400-600ms to avoid being timed out.

I believe the best approach here would be for PBS to measure the pure network latency, calculate the P90 and apply that to the tmax calculation.
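
A minimal sketch of that idea (all names hypothetical; PBS does not currently do this): keep the observed pure-network latency samples per bidder and use the nearest-rank P90 as that bidder's latency buffer.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// p90 returns the 90th-percentile (nearest-rank) of the recorded
// round-trip latency samples, in milliseconds.
func p90(samplesMS []int64) int64 {
	if len(samplesMS) == 0 {
		return 0
	}
	s := append([]int64(nil), samplesMS...)
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	rank := int(math.Ceil(0.9 * float64(len(s)))) // nearest-rank method
	return s[rank-1]
}

func main() {
	// Hypothetical latency samples for a bidder hosted in a distant region.
	observed := []int64{180, 195, 185, 210, 190, 188, 205, 183, 198, 192}
	fmt.Printf("latency buffer to subtract: ~%dms\n", p90(observed)) // 205ms
}
```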

@linux019
Contributor

Another potential issue is that large timeouts (>1s) will increase the queue of HTTP requests to bidders and the number of working goroutines and active connections, because PBS spawns a new goroutine for each bidder request.

@scr-oath
Contributor Author

My understanding is that this is not about the PBS Adapter timeout at all.

AGREE:
https://github.com/prebid/prebid-server/blob/master/config/config.go#L1336-L1353
As I understand it, globally at the moment, there are a few settings involved in deciding the tmax sent to bidders:

  1. The entire auction has a tmax chosen through either the request or a configured max
  2. The time already taken when the bidder requests happen is subtracted off
  3. There is a configured value for "how much work/time the PBS will do/take after responses come in" that is also subtracted off
  4. A buffer for network latency is subtracted as well

The resulting tmax is intended to be the amount of time a bidder can take in its handler such that the response still has time to cross the network and the auction can wrap up.

The actual timeout should be the tmax reported to the bidder + the network latency.
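
To make those steps concrete, here's an illustrative sketch (hypothetical names, not the actual config.go code) of how the reported tmax and the enforced timeout would relate:

```go
package main

import "fmt"

// bidderTmax derives the tmax reported to a bidder per the steps above:
// start from the auction tmax, then subtract the time already spent, the
// post-response processing allowance, and the network latency buffer.
func bidderTmax(auctionTmaxMS, elapsedMS, responseDurationMinMS, networkLatencyBufferMS int64) int64 {
	return auctionTmaxMS - elapsedMS - responseDurationMinMS - networkLatencyBufferMS
}

func main() {
	tmax := bidderTmax(1000, 50, 50, 75)
	fmt.Println(tmax) // 825ms reported to the bidder's handler
	// The actual enforced timeout would be the reported tmax plus the
	// network latency allowance.
	fmt.Println(tmax + 75) // 900ms enforced on the HTTP call
}
```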

E.g., if both are in the same region and the network latency is < 20ms, then the bidder may use 800ms; however, if it is in another region with a network latency of < 200ms, then the bidder may only use 400-600ms to avoid being timed out.

YES This is the crux - if a PBS server is deployed to multiple regions, but a bidder is only in one location, then network latency will be higher for them in "farther" (or slower) regions.

I believe the best approach here would be for PBS to measure the pure network latency, calculate the P90 and apply that to the tmax calculation.

This is an interesting extension to the idea - yet, while I love the idea of measuring the exact thing, I do wonder about the added complexity for something that can be mostly statically determined and tuned.

I'm curious, though it feels like a distraction from the feature, how one might measure network latency. Would this require each bidder to set up a "ping" endpoint - something that does zero work and just answers a request as fast as possible - so that the p99 of the observed response time could be set dynamically as the amount to subtract from the tmax?

@bretg
Contributor

bretg commented Nov 5, 2024

Sorry, lost some context here... why couldn't the host company just set biddertmax_percent lower? It doesn't make sense to me to have too many controls that do essentially the same thing. Would you prefer to work in absolute values? It could make sense to support a mutually exclusive config setting for setting absolute instead of percent.

@Slind14

Slind14 commented Nov 6, 2024

Sorry, lost some context here... why couldn't the host company just set biddertmax_percent lower? It doesn't make sense to me to have too many controls that do essentially the same thing. Would you prefer to work in absolute values? It could make sense to support a mutually exclusive config setting for setting absolute instead of percent.

Is it possible to set that just for one bidder? (not the full auction)

@bretg
Contributor

bretg commented Nov 6, 2024

Ok, so my comment from 3 weeks ago is incorrect... bidders are not told the same tmax value. Rather, the goal here is to manage slow bidders by telling them tmax - SLOW_BIDDER_BUFFER.

Ok, here's a proposed solution... add two new configs:

auction.biddertmax.slowbidderbufferms: 200
auction.biddertmax.slowbidders:
  - bidderA
  - bidderB

If the bidder is on the slow bidder array, decrement its tmax by slowbidderbufferms, still subject to auction.biddertmax.min.
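
A minimal sketch of that proposed rule, assuming the config names above (this proposal was later superseded; see below):

```go
package main

import "fmt"

// slowBidderTmax applies the proposed rule: bidders on the
// auction.biddertmax.slowbidders list have slowbidderbufferms deducted,
// still clamped to auction.biddertmax.min.
func slowBidderTmax(tmaxMS, slowBufferMS, minMS int64, isSlow bool) int64 {
	if isSlow {
		tmaxMS -= slowBufferMS
	}
	if tmaxMS < minMS {
		tmaxMS = minMS
	}
	return tmaxMS
}

func main() {
	fmt.Println(slowBidderTmax(500, 200, 50, true))  // 300 for bidderA
	fmt.Println(slowBidderTmax(500, 200, 50, false)) // 500 for other bidders
}
```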

@bretg bretg moved this from Needs Requirements to Community Review in Prebid Server Prioritization Nov 19, 2024
@Slind14

Slind14 commented Nov 20, 2024

If it is all the same and doesn't exist yet, I would suggest:

auction.biddertmax.offset:
  - bidderA=-200 // bidder specific offset in ms
  - bidderB=-100 // bidder specific offset in ms

Or add it to the existing adapter settings.

@bretg
Contributor

bretg commented Nov 20, 2024

Not too late! Refining that direction, how about changing 'offset' to 'bidderoffset'?

auction.biddertmax.bidderoffset:
  - bidderA=-200 // bidder specific offset in ms
  - bidderB=-100 // bidder specific offset in ms

I wouldn't suggest putting it in the existing adapter settings -- these values would depend on the host company and we try to keep those files easily mergeable.

@Slind14

Slind14 commented Nov 25, 2024

Not too late! Refining that direction, how about changing 'offset' to 'bidderoffset'?

auction.biddertmax.bidderoffset:
  - bidderA=-200 // bidder specific offset in ms
  - bidderB=-100 // bidder specific offset in ms

I wouldn't suggest putting it in the existing adapter settings -- these values would depend on the host company and we try to keep those files easily mergeable.

Any naming would be fine for us. The option to define it by bidder is what would be helpful.

@bretg
Contributor

bretg commented Nov 25, 2024

@bsardo , @SyntaxNode - what are your thoughts about defining bidderoffsets with negative numbers? I'm leaning towards just saying that "offsets" are always positive numbers that are subtracted from the tmax. I don't want to send a tmax to a bidder that's larger than the PBS tmax.

@Slind14

Slind14 commented Nov 25, 2024

@bsardo , @SyntaxNode - what are your thoughts about defining bidderoffsets with negative numbers? I'm leaning towards just saying that "offsets" are always positive numbers that are subtracted from the tmax. I don't want to send a tmax to a bidder that's larger than the PBS tmax.

Fine from my end. Maybe we should use a more descriptive name then, e.g. bidder_tmax_deduction_ms?

@bretg bretg added the needs docs Docs are required for this PR or Issue label Dec 4, 2024
@bretg
Contributor

bretg commented Dec 4, 2024

Discussed in committee. Everything was changed around again. This is where we settled:

Host-level config:

adapters.ADAPTER:
  - tmax_deduction_ms=200 // bidder-specific tmax deduction in ms

This just adjusts the tmax sent to the bidder to give them a higher sense of urgency given the expected network delays. It doesn't affect the actual timeout enforced by PBS.

@bretg bretg moved this from Community Review to Ready for Dev in Prebid Server Prioritization Dec 4, 2024
@bretg
Contributor

bretg commented Dec 10, 2024

The team is implementing in PBS-Java. Several clarification questions have revealed the need for additional details.

  1. The PBS HTTP timeout on bidder connections does not change.
  2. All that changes is what we tell certain bidders in the tmax.

bidder_specific_tmax = tmax - request_processing_time - bidder_network_latency_buffer_ms - bidder_response_duration_min_ms - bidder_specific_tmax_deduction_ms

  3. This value is subject to any configured auction.biddertmax.min and auction.biddertmax.max

If bidder_specific_tmax < auction.biddertmax.min, bidder_specific_tmax = auction.biddertmax.min
If bidder_specific_tmax > auction.biddertmax.max, bidder_specific_tmax = auction.biddertmax.max

The default auction.biddertmax.min remains 50.
The default auction.biddertmax.max remains 5000.
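
Putting the pieces together, an illustrative sketch of the agreed calculation (not the PBS-Java implementation; the parameter names mirror the config keys above):

```go
package main

import "fmt"

// bidderSpecificTmax implements the agreed formula, clamped to
// auction.biddertmax.min / auction.biddertmax.max.
func bidderSpecificTmax(tmaxMS, processingMS, networkBufferMS,
	responseDurationMinMS, deductionMS, minMS, maxMS int64) int64 {
	t := tmaxMS - processingMS - networkBufferMS - responseDurationMinMS - deductionMS
	if t < minMS {
		t = minMS // floor: auction.biddertmax.min (default 50)
	}
	if t > maxMS {
		t = maxMS // ceiling: auction.biddertmax.max (default 5000)
	}
	return t
}

func main() {
	// e.g. 1000ms request tmax, 40ms already spent, 75ms network buffer,
	// 50ms response-duration minimum, 200ms bidder-specific deduction,
	// with the default min=50 / max=5000.
	fmt.Println(bidderSpecificTmax(1000, 40, 75, 50, 200, 50, 5000)) // 635
}
```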
