If you play around with settings and importing parts too much, you might get yourself throttled or blocked by the providers. Adding caching would prevent you from repeating the same queries to the providers, saving you from this fate.
Might make sense to use something like diskcache (sqlite-based) to cache all provider queries based on the arguments, with a configurable TTL (say 6h for a default).
This would imply that if you have caching enabled, whenever you search for a part or similar, the result is cached. If you search for it again in the next n hours, you'll just get the cached response and you won't hit the API again, lessening the chance of getting yourself throttled.
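A minimal sketch of what that could look like, assuming a hypothetical `search_parts()` wrapper around whatever the tool currently calls (all names here are illustrative, nothing is taken from the actual codebase):

```python
import os
import time

import diskcache

# On-disk, sqlite-backed cache that persists between runs.
cache = diskcache.Cache(os.path.expanduser("~/.cache/part-import"))

# memoize() keys each entry on the function's arguments; `expire` is the
# TTL in seconds (the 6h default suggested above).
@cache.memoize(expire=6 * 60 * 60)
def search_parts(provider: str, query: str) -> dict:
    # Stand-in for the real API call or crawl against the supplier;
    # this body only runs on a cache miss, otherwise the stored
    # result is returned directly.
    return {"provider": provider, "query": query, "fetched_at": time.time()}
```

Because the key is derived from the arguments, searching for the same part twice within the TTL returns the stored response without touching the provider.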
Might make sense to add a CLI arg for bypassing the cache, and a subcommand or flag for clearing it.
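The CLI side could look roughly like this (again just a sketch; `--no-cache` and `clear-cache` are made-up names, not existing options of the tool):

```python
import argparse
import os

import diskcache

cache = diskcache.Cache(os.path.expanduser("~/.cache/part-import"))

parser = argparse.ArgumentParser(prog="part-import")  # hypothetical tool name
parser.add_argument("--no-cache", action="store_true",
                    help="bypass cached responses and always query the providers")
sub = parser.add_subparsers(dest="command")
sub.add_parser("clear-cache", help="drop all cached provider responses")

args = parser.parse_args()
if args.command == "clear-cache":
    cache.clear()  # removes every cached entry from the sqlite store
```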
I thought about this a while ago, but ultimately decided it's not really worth the effort. With normal use (i.e. not during initial setup/testing), duplicate requests shouldn't really happen between runs. And I'm already caching requests during runtime, which imo is sufficient.
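For reference, per-run caching like that can be as simple as memoizing the provider call in process (a sketch, not the project's actual code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def search_parts(provider: str, query: str) -> tuple[str, str]:
    # The provider is only queried once per unique argument combination
    # within a single run; repeated searches return the memoized result.
    return (provider, query)  # stand-in for the real provider response
```

The cache dies with the process, so only between-run duplicates would still hit the API, which is the scenario the issue is about.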
> you might get yourself throttled or blocked by the providers
This only applies to suppliers that I use crawling for (Mouser, LCSC, reichelt); of those, Mouser currently just doesn't work at all, because they hardened their protections (see #49), and I haven't had many problems with blocking from either LCSC or reichelt.