Jitted Code Pitching Support #6506
Comments
The earlier versions of the CLR had this mechanism. It was called code pitching, but it was removed because it was very unreliable. The code for it was part of the "Shared Source Common Language Infrastructure", which you can find around if you would like to take a look. The hard problem with code pitching is guaranteeing that the method being discarded is not executing, and that nobody is holding a pointer to it. |
cc @noahfalk @dotnet/jit-contrib |
Is it possible to determine that a method is not executing by stopping program execution and inspecting the stack frames of all threads? |
Yes, the original code pitching implementation did something similar. |
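For illustration, here is a toy sketch of the check being discussed (an editor's addition, not CoreCLR code; the code range and the sampled instruction pointers are invented): with all threads suspended, a method is only a candidate for pitching if no frame on any thread's stack points into that method's code range.

```csharp
// Toy model of the check: "is this jitted method still on any thread's stack?"
// The code range and the sampled instruction pointers are made up for the example.
using System;
using System.Linq;

static class PitchCheck
{
    // Code range of a jitted method: [Start, Start + Size).
    struct CodeRange
    {
        public ulong Start;
        public uint Size;
        public bool Contains(ulong ip) => ip >= Start && ip < Start + Size;
    }

    // With every thread suspended, gather the instruction pointer of each stack
    // frame; the method may be discarded only if none of them fall in its range.
    static bool SafeToPitch(CodeRange method, ulong[][] threadFrameIPs)
        => !threadFrameIPs.Any(frames => frames.Any(ip => method.Contains(ip)));

    static void Main()
    {
        var candidate = new CodeRange { Start = 0x1000, Size = 0x200 };
        var stacks = new[]
        {
            new ulong[] { 0x5000, 0x10F0 }, // second frame lies inside the candidate
            new ulong[] { 0x7000, 0x8000 },
        };
        Console.WriteLine(SafeToPitch(candidate, stacks)); // False: still executing
    }
}
```

A real implementation would also have to account for return addresses and for any other code or data holding a raw pointer into the method, which is exactly the hard part mentioned above.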
Except for FCALLs? I'm asking because of what I read in the comments. |
FCalls are a problem because they are implemented in C++ or hand-written assembly, and so they cannot be discarded. |
It may be a good idea to do an analysis of how much memory can be saved by doing this for typical applications. The JITed code does not typically dominate the working set. |
For data we have garbage collection, so there is at least some control over memory growth on the data side. @jkotas I wonder if you mean the metadata more than just data, as opposed to the code, here... |
Also, it would be interesting to see whether it is possible to write a test, using serialization and de-serialization or something similar, that causes an unbounded increase in the memory used for the code of dynamic methods. |
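As a rough illustration of such a test (an editor's sketch, not an existing CoreCLR test; it emits DynamicMethods directly instead of going through a serializer), repeatedly generating dynamic methods and keeping their delegates reachable makes the code portion of the working set grow without bound:

```csharp
// Sketch of a code-growth test: every iteration emits a new DynamicMethod,
// jits it by invoking it once, and keeps the delegate alive so its code
// can never be reclaimed. Watch the working set climb as i grows.
using System;
using System.Collections.Generic;
using System.Reflection.Emit;

class DynamicCodeGrowth
{
    static readonly List<Func<int, int>> Keep = new List<Func<int, int>>();

    static void Main()
    {
        for (int i = 0; i < 100000; i++)
        {
            var dm = new DynamicMethod("add_" + i, typeof(int), new[] { typeof(int) });
            var il = dm.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);
            il.Emit(OpCodes.Ldc_I4, i);   // make every method body slightly different
            il.Emit(OpCodes.Add);
            il.Emit(OpCodes.Ret);

            var f = (Func<int, int>)dm.CreateDelegate(typeof(Func<int, int>));
            f(1);                         // force the method to be jitted
            Keep.Add(f);                  // keep the delegate alive so the code stays live

            if (i % 10000 == 0)
                Console.WriteLine(i + " methods, working set " +
                                  Environment.WorkingSet / (1024 * 1024) + " MB");
        }
    }
}
```

Since the delegates stay reachable, this sketch only exercises growth; a serializer-based variant would additionally show whether code memory ever comes back once the generated methods become unreachable.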
Do you mean the backpatching thing here or arbitrary pointers to the code? |
I also believe such a feature could be useful for applying aggressive inlining, etc. |
@jkotas From my impression, on average the JITed code of a small function on ARM is about 3 times larger than the corresponding IL of the method (not counting very small functions). Of course, functions with many calls mixed with heavy use of local variables might reach a factor of 8 or so. |
E.g., if we take the test JIT/SIMD/CircleInConvex_r/CircleInConvex_r.exe, the same pattern holds for its methods. So, I believe there might be considerably big applications that execute a lot of code. |
First, I do agree that having the ability to drop JITed code would be really nice. Based on my experience, I do not believe that it will move the needle significantly (e.g. more than 1.5x) in the total RAM consumption of a managed process, or in the ability to run on very low-end devices, for typical apps. |
But if you had to run a Facebook app and a Browser app on a low-end device, that would be a non-typical but very resource-consuming scenario. |
Still, Facebook would presumably have a lot of class-description metadata resident in RAM. |
@jkotas By the way, is a class unloading feature present in coreclr? Well, I don't think it would be, given that no code unloading is supported. |
Class unloading is present for Reflection.Emit (collectible assemblies). |
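For reference, the collectible-assembly mechanism referred to here looks roughly like this (an illustrative sketch; the assembly name and the emitted method are made up): an assembly created with AssemblyBuilderAccess.RunAndCollect, together with its types and their jitted code, becomes eligible for collection once nothing references it anymore.

```csharp
// Sketch of Reflection.Emit class unloading via a collectible assembly.
using System;
using System.Reflection;
using System.Reflection.Emit;

class CollectibleDemo
{
    static void Main()
    {
        var asm = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("Scratch"), AssemblyBuilderAccess.RunAndCollect);
        var mod = asm.DefineDynamicModule("Scratch");
        var typeBuilder = mod.DefineType("Gen", TypeAttributes.Public);

        var method = typeBuilder.DefineMethod("Answer",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(int), Type.EmptyTypes);
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldc_I4, 42);
        il.Emit(OpCodes.Ret);

        var generated = typeBuilder.CreateTypeInfo().AsType();
        Console.WriteLine(generated.GetMethod("Answer").Invoke(null, null)); // 42

        // Once 'asm', 'generated' and anything created from them become
        // unreachable, the GC may unload the type and reclaim its jitted code.
    }
}
```

In other words, code is already reclaimed in this narrow dynamic-emit scenario, which is distinct from pitching the code of ordinary statically compiled methods.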
Out of interest, I quickly converted convex_hull(List a) to C++ by adding my own C#-like declarations for Point and List and making a, up, down references (fewer syntax changes than making them pointers). clang++ -Os compiled it to 0x27a (634) bytes of Thumb2 armhf code. Note: cw() and ccw() were both fully inlined. Making all the functions virtual increased it to 0x2d6 (726) bytes.

```cpp
struct Point { float X; float Y; };

// C#-like List<T> surface, declarations only: just enough for the compiler
// to emit calls against it; no definitions are provided.
template <class T>
class List {
public:
    List();
    int Count;
    void Add(T);
    void RemoveAt(int);
    void Clear();
    T& operator[](int i);
    T& Last();
    typedef int cmp_fn(T&, T&);
    void Sort(cmp_fn);
};
``` |
@sergign60 Is this still an active issue tracking work being done? |
@gkhanna79 I'm still working on it. My latest results on the CoreCLR test suite are:
But I still have five tests with parallel threads failing. I'm looking for the reasons for these failures (I believe there is a single root cause). You can look at my code in the repository https://github.com/sergign60/coreclr |
We propose to support releasing the memory that is allocated for methods that have already been executed and either will never be executed again or can be recompiled. This is needed for devices with limited memory capacity.
Also, when the GDB JIT feature (#6278) is enabled, there is additional memory consumption (~700 bytes on ARM32 per method with several lines) for ELF debugging information. That memory can be released when the jitted code for the method itself is released.
The existing execution model of coreclr, which never releases memory for executed code and debugging info, creates serious obstacles for applications running on devices with limited memory, and in certain cases can even make their execution impossible.
@papaslavik @brucehoult @Dmitri-Botcharnikov @leemgs @lemmaa @wateret