import performance: large codebases that use a lot of attrs classes start taking a long time to load #575
Comments
I know I’m starting to sound like a broken record, but this seems like something we/you can start experimenting with once we have pluggable method makers. You could basically wrap an existing one and do your shenanigans?
I'm not sure why I'd want pluggability here — I just want it to be always fast. Pluggability would help with alternate runtimes, I guess? Like Brython or something?
Pluggability allows for experimenting that may or may not lead to mainline inclusion. I’m always big on allowing for incremental improvement, and the whole shitshow around …
I am working on an application that is sensitive to startup time, and attrs is responsible for a >60% slowdown (it is used by markdown-it-py). Since the processing is CPU-bound (due to …
I gave markdown-it-py a brief look and it appears that it defines a grand total of four attrs classes. I find it hard to believe that the creation of four classes causes this much overhead. I've also tried importing it and it looks instantaneous. Are you sure we're not talking about runtime cost – aka you're parsing a Markdown file on startup or something?
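One way to separate import cost from runtime cost is Python's `-X importtime` flag, which prints a per-module import breakdown; a cruder check is to time the import directly. A minimal sketch, assuming markdown-it-py is installed:

```python
# Rough check of import-time cost (illustrative; substitute the module
# you actually care about). `python -X importtime -c "import markdown_it"`
# gives a more detailed per-module breakdown.
import time

start = time.perf_counter()
import markdown_it  # noqa: F401 -- the import being measured
print(f"import took {time.perf_counter() - start:.3f}s")
```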
I need to do some more rigorous analysis of this, so my level of confidence here is "strong hunch", but: using attrs extensively, across a large codebase, can contribute non-trivially to Python's scourge of slow load times.
Many things in Python are slow: calling functions, importing modules, and so on. But one of the slowest things is reading and parsing Python code. This is why Python compiles and caches `.pyc` files - it's a substantial optimization to startup time.
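As a reminder of the mechanism in question: byte-compiling a module populates `__pycache__`, and later imports load that cached bytecode instead of re-parsing the source. A minimal sketch (the filename is just an example):

```python
# Python's own source cache: byte-compiling a module writes a .pyc under
# __pycache__/, which later imports load instead of re-parsing the source.
import py_compile

# "some_module.py" is a placeholder; compile() returns the path of the
# .pyc it wrote, e.g. __pycache__/some_module.cpython-312.pyc
cached = py_compile.compile("some_module.py")
print(cached)
```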
Using attrs, unfortunately, partially undoes this optimization, because many of the methods in `attr._make` generate source code as a string, `compile()` it, then `eval()` the bytecode they just compiled.
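To make the cost concrete, here is a minimal sketch of that generate/compile/eval pattern. It is not attrs' actual code, just an illustration of why each generated method pays a parse-and-compile price at class-definition (i.e. import) time:

```python
# Minimal sketch of the generate-compile-eval pattern described above.
# This is NOT attrs' implementation, only an illustration of the cost.
def make_init(fields):
    args = ", ".join(fields)
    body = "\n".join(f"    self.{f} = {f}" for f in fields)
    src = f"def __init__(self, {args}):\n{body}\n"
    code = compile(src, "<generated>", "exec")  # parse + compile at class-creation time
    namespace = {}
    eval(code, namespace)                       # run the bytecode to materialize the function
    return namespace["__init__"]

class Point:
    pass

Point.__init__ = make_init(["x", "y"])
p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
```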
Now, they do this for a good reason. Transparency and debug-ability are great, and more than once being able to "see through" the attrs validator stack while debugging has been legitimately useful. So I wouldn't want to propose a technique that makes a substantially different tradeoff. Not to mention that friendliness to PyPy's JIT is achieved via this mechanism, and I definitely wouldn't want to trade that off. But could we possibly piggyback on Python's native caching technique here, and cache both the `.pyc` output and the source code, in the `__pycache__` directory or thereabouts?
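A hypothetical sketch of what such a cache could look like: key the compiled code object by a hash of the generated source and persist it with `marshal`, the same serialization `.pyc` files use. Nothing here exists in attrs today; the helper name and cache location are made up, and, like real `.pyc` files, the cached bytecode is only valid for the exact interpreter version that wrote it.

```python
# Hypothetical sketch of the caching idea floated above: persist the
# compiled code object for a chunk of generated source, keyed by a hash
# of that source, so repeated imports can skip the parse/compile step.
# The helper name and cache location are invented for illustration.
import hashlib
import marshal
from pathlib import Path

CACHE_DIR = Path("__pycache__") / "attrs_generated"  # assumed location

def compile_cached(src: str, filename: str = "<generated>"):
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(src.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{key}.pyc"
    if cache_file.exists():
        return marshal.loads(cache_file.read_bytes())  # cache hit: no parse/compile
    code = compile(src, filename, "exec")              # cache miss: compile once...
    cache_file.write_bytes(marshal.dumps(code))        # ...and persist for the next startup
    return code

# Usage: the cached code object is executed exactly like a freshly compiled one.
code = compile_cached("def answer():\n    return 42\n")
ns = {}
eval(code, ns)
assert ns["answer"]() == 42
```

A real implementation would also need invalidation (e.g. embedding the interpreter's magic number, as `.pyc` files do) and a fallback when the cache directory is not writable.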