
Rate limiting #250

Closed · 1 task done
DavidM-D opened this issue Aug 9, 2023 · 10 comments
Assignees: volovyks
Labels: Emerging Tech, Near BOS

Comments

@DavidM-D (Contributor) commented Aug 9, 2023

Create simple, configurable limits on the number of requests that a given SDK user can make.

The aim is to stop them from bringing down the MPC recovery service, not to bill them accurately.

Tasks
  1. Decision Emerging Tech Near BOS (volovyks)
@volovyks (Collaborator) commented Aug 14, 2023

Looks like GCP has what we want out of the box:
https://cloud.google.com/armor/docs/rate-limiting-overview

I think throttling based on IP/domain is what we need. Each partner should have their own domain, since each one is a separate Firebase app.

We can add protection against individual users too.

cc @itegulov, do you think this is the best way to do it?
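
For illustration, a minimal Terraform sketch of a Cloud Armor policy that throttles per client IP. The policy name, threshold, and interval are assumptions for the example, not the values actually used for this service:

```hcl
# Hypothetical Cloud Armor security policy: throttle each client IP to
# 100 requests per 60 seconds, returning HTTP 429 when the limit is hit.
resource "google_compute_security_policy" "mpc_rate_limit" {
  name = "mpc-recovery-rate-limit" # placeholder name

  rule {
    action   = "throttle"
    priority = 100
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"] # match all source IPs
      }
    }
    rate_limit_options {
      conform_action = "allow"
      exceed_action  = "deny(429)"
      enforce_on_key = "IP" # the limit is tracked per client IP
      rate_limit_threshold {
        count        = 100
        interval_sec = 60
      }
    }
  }

  # Default rule: allow traffic that is not throttled above.
  rule {
    action   = "allow"
    priority = 2147483647
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
  }
}
```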

@volovyks self-assigned this Aug 15, 2023
@volovyks (Collaborator) commented

My current plan is to set up the load balancer using Terraform and Google Cloud Armor.
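
As a rough sketch of that wiring (assuming the Cloud Armor policy sketched above and a serverless NEG in front of the Cloud Run service; names and region are placeholders, not the actual infrastructure):

```hcl
# Hypothetical serverless NEG pointing at the Cloud Run service.
resource "google_compute_region_network_endpoint_group" "mpc_neg" {
  name                  = "mpc-recovery-neg"
  network_endpoint_type = "SERVERLESS"
  region                = "us-central1" # placeholder region
  cloud_run {
    service = "mpc-recovery-leader" # placeholder Cloud Run service name
  }
}

# Backend service for the external HTTP(S) load balancer, with the
# Cloud Armor rate-limiting policy attached.
resource "google_compute_backend_service" "mpc_backend" {
  name            = "mpc-recovery-backend"
  protocol        = "HTTPS"
  security_policy = google_compute_security_policy.mpc_rate_limit.id

  backend {
    group = google_compute_region_network_endpoint_group.mpc_neg.id
  }
}
```

The URL map and forwarding rule that would sit in front of this backend are omitted here for brevity.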

@trechriron added the Near BOS and Emerging Tech labels Sep 7, 2023
@volovyks linked a pull request Sep 14, 2023 that will close this issue
@volovyks (Collaborator) commented

Details on rate limiting are in #289 (comment) and the other comments there.

@kmaus-near (Collaborator) commented

Hey guys, I've got some URLs for you to use that put MPC behind Kong so that rate-limiting policies can take effect:

Mainnet/Prod: https://near-mpc-recovery-mainnet.api.pagoda.co

Testnet/Dev: https://mpc-recovery-leader-testnet.dev.api.pagoda.co

Nothing special is required to use these URLs, but once we fully transition to them, we should switch the default Cloud Run URLs to internal-only so we don't circumvent the Kong rate limiting/load balancer.

@volovyks (Collaborator) commented Oct 5, 2023

@kmaus-near, it seems like you are the best person to make this change. Can you help us with that? Let's finish it off completely.

@volovyks (Collaborator) commented Oct 5, 2023

@kmaus-near You will probably need to sync with @esaminu.
cc @itegulov

@kmaus-near (Collaborator) commented

Update from my side: in order to make the auto-generated Cloud Run URLs private, I'll be using an internal load balancer so the Kong proxy can still reach the services. Once I finish that LB and test it, I'll reach out to @esaminu to make sure things are good.
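
For context, a minimal sketch of what locking down the default Cloud Run URL could look like in Terraform, using the standard Cloud Run ingress annotation so only internal traffic and Google Cloud load balancers can reach the service. The service name, region, and image below are placeholders, not the project's real values:

```hcl
# Hypothetical Cloud Run service with ingress restricted so the default
# run.app URL is no longer publicly reachable; traffic must come from
# internal sources or a Google Cloud load balancer (e.g. the internal LB
# fronting the Kong proxy).
resource "google_cloud_run_service" "mpc_recovery_leader" {
  name     = "mpc-recovery-leader" # placeholder service name
  location = "us-central1"         # placeholder region

  metadata {
    annotations = {
      "run.googleapis.com/ingress" = "internal-and-cloud-load-balancing"
    }
  }

  template {
    spec {
      containers {
        image = "gcr.io/example-project/mpc-recovery:latest" # placeholder image
      }
    }
  }
}
```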

@kmaus-near (Collaborator) commented

We'll have to work together on this to some extent. I created a branch at https://github.com/near/mpc-recovery/tree/kmaus-near/add-internal-lb/infra

Take a look and let me know if something like this would work for your environment. If not, we might want to put the prod and dev Terraform code into their own directories so we have a bit of separation, as long as that doesn't break any of your workflows.

@itegulov (Contributor) commented

@kmaus-near Yeah, I think this is reasonable. We can create some testnet/mainnet separation if necessary. It shouldn't be too big of a deal.

@volovyks (Collaborator) commented

Added at the load balancer level.

@github-project-automation bot moved this from Backlog to Done in Emerging Technologies Nov 20, 2023