Hey everyone, I have run into an issue and I don't know whether it is a bug or whether I am doing something wrong. I have been trying to benchmark parallel file reading with Parallel.ForEach and File.ReadLines("path"), but the memory part of the benchmark result (Allocated, GCs) looks very weird. I have prepared a demo project.
```csharp
[Benchmark]
public void ParallelTest()
{
    var counter = new ConcurrentDictionary<string, float>();
    Parallel.ForEach(File.ReadLines("data.csv"), (line) =>
    {
        ProcessLineParallel(counter, line);
    });
    if (counter.Count != 1000)
        throw new Exception("Not expected count of persons.");
}

[Benchmark]
public void NormalTest()
{
    var counter = new Dictionary<string, float>();
    var allLines = File.ReadAllLines("data.csv");
    foreach (var line in allLines)
    {
        ProcessLine(counter, line);
    }
    if (counter.Count != 1000)
        throw new Exception("Not expected count of persons.");
}
```
The weird thing is the Allocated column. I haven't done any memory optimization, so why has it decreased so much? You could say it looks fine here, but my real project's results (with a bigger file) show the difference.
To make a long story short: .NET Core exposes only System.GC.GetAllocatedBytesForCurrentThread(), and it does not implement AppDomain.CurrentDomain.MonitoringTotalAllocatedMemorySize the way the Full Framework does. So we currently have no way to measure allocations properly for multi-threaded benchmarks on .NET Core.
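To illustrate the limitation, here is a small sketch of my own (not from the issue): `GC.GetAllocatedBytesForCurrentThread()` only counts bytes allocated on the thread that calls it, so memory allocated on other threads (such as the worker threads `Parallel.ForEach` uses) is invisible to it:

```csharp
using System;
using System.Threading;

class PerThreadAllocationDemo
{
    static void Main()
    {
        long before = GC.GetAllocatedBytesForCurrentThread();

        // Allocate ~10 MB on a dedicated worker thread. Those bytes are
        // charged to the worker thread's counter, not to the main thread's.
        var worker = new Thread(() =>
        {
            var buffer = new byte[10_000_000];
            GC.KeepAlive(buffer);
        });
        worker.Start();
        worker.Join();

        long measured = GC.GetAllocatedBytesForCurrentThread() - before;
        // 'measured' stays far below the 10,000,000 bytes the worker allocated.
        Console.WriteLine($"Seen from the main thread: {measured:N0} bytes");
    }
}
```

This is exactly the situation in the `ParallelTest` benchmark: most of the work runs on ThreadPool threads, so a per-thread counter read from the benchmark thread misses most of the allocations.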
A workaround is to run the benchmark for Full Framework, but it requires Windows and I can see that you are running on MacOS so it won't help ;/
For now the only thing I can recommend is to look at the number of GC collections in Gen 0/1/2 to compare two multithreaded benchmarks.
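As a rough sketch of that workaround (my own illustrative helper, not part of BenchmarkDotNet), `GC.CollectionCount` is process-wide, so unlike the per-thread allocated-bytes counter it does reflect allocation pressure from worker threads:

```csharp
using System;

static class GcCollectCounter
{
    // Runs a workload and reports how many Gen 0/1/2 collections it triggered.
    // Collection counts are process-wide, so allocations made on worker
    // threads are captured too.
    public static (int Gen0, int Gen1, int Gen2) Measure(Action workload)
    {
        int g0 = GC.CollectionCount(0);
        int g1 = GC.CollectionCount(1);
        int g2 = GC.CollectionCount(2);
        workload();
        return (GC.CollectionCount(0) - g0,
                GC.CollectionCount(1) - g1,
                GC.CollectionCount(2) - g2);
    }
}
```

Comparing the Gen 0 counts of the two benchmarks gives a crude but thread-safe proxy for which one allocates more.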
We already have an issue for that: #723, so I am going to close this one as duplicate.
https://github.com/lyzerk/parallelFileReadingBenchmark/blob/adc7276ff40310489e8a01fb2773981d7b42f9f5/FileReaderTester.cs#L13-L38
ProcessLine and ProcessLineParallel are almost the same; the only difference is the ConcurrentDictionary and a couple of lines. And the result is:
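The actual helpers are in the linked repo; a hypothetical reconstruction of their shape (assuming a "name,score" CSV format, which is my guess, not taken from the issue) might look like:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Globalization;

public static class LineProcessors
{
    // Sequential variant: a plain Dictionary, no synchronization needed.
    public static void ProcessLine(Dictionary<string, float> counter, string line)
    {
        var parts = line.Split(',');
        var name = parts[0];
        var score = float.Parse(parts[1], CultureInfo.InvariantCulture);
        if (counter.ContainsKey(name))
            counter[name] += score;
        else
            counter[name] = score;
    }

    // Parallel variant: AddOrUpdate makes the read-modify-write atomic
    // when many threads hit the same key concurrently.
    public static void ProcessLineParallel(ConcurrentDictionary<string, float> counter, string line)
    {
        var parts = line.Split(',');
        var name = parts[0];
        var score = float.Parse(parts[1], CultureInfo.InvariantCulture);
        counter.AddOrUpdate(name, score, (_, current) => current + score);
    }
}
```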
~4 MB against ~481 MB. Am I doing something wrong, is this an issue, or is this what we should expect?