Merge pull request #35781 from adamsitnik/removePerfTests
Switch to the dotnet/performance repo
ViktorHofer authored Mar 21, 2019
2 parents 71c986e + be8fe7d commit d395daf
Showing 285 changed files with 15 additions and 25,009 deletions.
149 changes: 2 additions & 147 deletions Documentation/project-docs/benchmarking.md
@@ -1,150 +1,5 @@
# Benchmarking .NET Core applications

We recommend using [BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet) as it allows specifying custom SDK paths and measuring performance not just in-proc but also out-of-proc as a dedicated executable.
All Benchmarks have been moved to the [dotnet/performance/](https://github.com/dotnet/performance/) repository.

```
<ItemGroup>
  <PackageReference Include="BenchmarkDotNet" Version="0.11.3" />
</ItemGroup>
```

## Defining your benchmark

See the [BenchmarkDotNet](https://benchmarkdotnet.org/articles/guides/getting-started.html) documentation -- minimally you need to adorn a public method with the `[Benchmark]` attribute, but there are many other ways to customize what is done, such as using parameter sets or setup/cleanup methods. Of course, you'll want to bracket just the relevant code in your benchmark, ensure there are enough iterations to minimize noise, and leave the machine otherwise idle while you measure.
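
For example, a minimal sketch (the benchmark class, parameter values, and measured operation below are illustrative, not taken from a CoreFX test):

```cs
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

public class StringJoinBenchmarks
{
    private List<string> _values;

    [Params(10, 1000)]     // BenchmarkDotNet runs the benchmark once per parameter value
    public int Count;

    [GlobalSetup]
    public void Setup()    // runs once per parameter case, outside of the measured code
    {
        _values = new List<string>();
        for (int i = 0; i < Count; i++)
            _values.Add(i.ToString());
    }

    [Benchmark]
    public string Join() => string.Join(",", _values);
}
```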

# Benchmarking local CoreFX builds

Since version `0.11.1`, BenchmarkDotNet knows how to run benchmarks with CoreRun, so you just need to provide it with the path to CoreRun. The simplest way to do that is via command-line arguments:

dotnet run -c Release -f netcoreapp3.0 -- -f *MyBenchmarkName* --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe"

**Hint:** If you are curious about what BenchmarkDotNet does internally, apply the `[KeepBenchmarkFiles]` attribute to your class or set `KeepBenchmarkFiles = true` in your config. After running the benchmarks you can find the auto-generated files in the `%pathToBenchmarkApp\bin\Release\$TFM\` folder.
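
For instance, a minimal sketch of the attribute form (the class and benchmark below are placeholders):

```cs
using BenchmarkDotNet.Attributes;

[KeepBenchmarkFiles]   // keep the auto-generated benchmark project on disk after the run
public class IntParseBenchmarks
{
    [Benchmark]
    public int Parse() => int.Parse("12345");
}
```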

The alternative is to use `CoreRunToolchain` at the code level:

```cs
using System.IO;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using BenchmarkDotNet.Toolchains.CoreRun;

class Program
{
    static void Main(string[] args)
        => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
            .Run(args, DefaultConfig.Instance.With(
                Job.ShortRun.With(
                    new CoreRunToolchain(
                        new FileInfo(@"C:\Projects\corefx\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe")))));
}
```


**Warning:** To fully understand the results you need to know what optimizations (PGO, CrossGen) were applied to a given build. Usually, the CoreCLR installed with the .NET Core SDK will be fully optimized and the fastest. On Windows, you can use the [disassembly diagnoser](http://adamsitnik.com/Disassembly-Diagnoser/) to check the produced assembly code.
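
As a sketch, the diagnoser can be enabled with an attribute (assuming the `[DisassemblyDiagnoser]` attribute available in this BenchmarkDotNet version; the benchmark itself is a placeholder):

```cs
using BenchmarkDotNet.Attributes;

[DisassemblyDiagnoser]   // dumps the JITted assembly code for each benchmark (Windows)
public class FormatBenchmarks
{
    [Benchmark]
    public string Format() => string.Format("{0}-{1}", 1, 2);
}
```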

## New API

If you are testing new APIs, you need to tell BenchmarkDotNet where to find a `dotnet cli` that is capable of building the code. You can do that by using the `--cli` command-line argument.
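
For example (a sketch; the `--cli` path below is hypothetical and should point to a `dotnet` that can compile against the new API):

dotnet run -c Release -f netcoreapp3.0 -- -f *MyBenchmarkName* --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --cli "C:\Projects\corefx\.dotnet\dotnet.exe"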

# Running in process

If you want to run your benchmarks without spawning a new process per benchmark, you can do that by passing the `-i` command-line argument. Please be advised that using the [InProcessToolchain](https://benchmarkdotnet.org/articles/configs/toolchains.html#sample-introinprocess) is not recommended when one of your benchmarks might have side effects that affect other benchmarks. A good example is a heavily allocating benchmark that affects the size of the GC generations.

dotnet run -c Release -f netcoreapp2.1 -- -f *MyBenchmarkName* -i

# Recommended workflow

1. Before you start benchmarking the code, you need to build the entire CoreFX repository in Release, which generates the right CoreRun bits for you:

C:\Projects\corefx>build.cmd -c Release -arch x64

After that, you should be able to find `CoreRun.exe` in a location similar to:

C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe

2. Create a new .NET Core console app using your favorite IDE
3. Install BenchmarkDotNet (0.11.1+)
4. Define the benchmarks and pass the arguments to BenchmarkSwitcher

```cs
using BenchmarkDotNet.Running;

class Program
{
    static void Main(string[] args) => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```
5. Run the benchmarks using `--coreRun` from the first step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\before"

6. Go to the corresponding CoreFX source folder (for example `corefx\src\System.Collections.Immutable`)
7. Apply the optimization that you want to test
8. Rebuild the given CoreFX part in Release:

dotnet msbuild /p:ConfigurationGroup=Release

You should notice that the given `.dll` file has been updated in the `CoreRun` folder.

9. Run the benchmarks using `--coreRun` from the first step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\after"

10. Compare the results and repeat steps `7 - 9` until you are happy with the results.

## Benchmarking APIs implemented within System.Private.Corelib

1. The steps for this scenario are very similar to the recommended workflow above, with a couple of extra steps to copy bits from one repo to the other. Before you start benchmarking the code, you need to build the entire CoreCLR repository in Release, which generates `System.Private.Corelib.dll` for you:

C:\Projects\coreclr>build.cmd -release -skiptests

After that, you should be able to find `System.Private.Corelib.dll` in a location similar to:

C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

2. Build the entire CoreFX repository in Release using your local private build of CoreCLR (see [Testing With Private CoreCLR Bits](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/developer-guide.md#testing-with-private-coreclr-bits))

C:\Projects\corefx>build.cmd -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

After that, you should be able to find `CoreRun.exe` in a location similar to:

C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe

3. Create a new .NET Core console app using your favorite IDE
4. Install BenchmarkDotNet (0.11.1+)
5. Define the benchmarks and pass the arguments to BenchmarkSwitcher

```cs
using BenchmarkDotNet.Running;

class Program
{
    static void Main(string[] args) => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```
6. Run the benchmarks using `--coreRun` from the second step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\before"

7. Go to the corresponding CoreCLR source folder where the API you want to change exists (for example `coreclr\src\System.Private.CoreLib\shared\System`)
8. Apply the optimization that you want to test
9. Rebuild System.Private.Corelib with your change (optionally adding `-skipnative` if the change is isolated to managed code):

C:\Projects\coreclr>build.cmd -release -skiptests -skipnative -skipbuildpackages

10. For the next step, you have two options:

- Rebuild the given CoreFX part in Release:

C:\Projects\corefx>build.cmd -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

- Force a refresh of the CoreCLR hard-link copy (this ends up being much faster than the first option):

C:\Projects\corefx>build.cmd -restore -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

11. Run the benchmarks using `--coreRun` from the second step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\after"

12. Compare the results and repeat steps `8 - 11` until you are happy with the results.

# Reporting results

Often in a GitHub pull request or issue you will want to share performance results to justify a change. If you add the `MarkdownExporter` exporter to the configuration, BenchmarkDotNet will create a Markdown (*.md) file in the `BenchmarkDotNet.Artifacts` folder, which you can paste in along with the code you benchmarked.
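
A minimal sketch of one way to wire that up from code (assuming `MarkdownExporter.GitHub` from `BenchmarkDotNet.Exporters`):

```cs
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Running;

class Program
{
    // Emits a GitHub-flavored Markdown results file into the BenchmarkDotNet.Artifacts folder.
    static void Main(string[] args)
        => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
            .Run(args, DefaultConfig.Instance.With(MarkdownExporter.GitHub));
}
```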

# References
- [BenchmarkDotNet](http://benchmarkdotnet.org/)
- [BenchmarkDotNet Github](https://github.com/dotnet/BenchmarkDotNet)
- [.NET Core SDK](https://github.com/dotnet/core-setup)
Please read the [Benchmarking workflow for CoreFX](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-corefx.md) document to find out how to build and run the Benchmarks.
8 changes: 7 additions & 1 deletion Documentation/project-docs/developer-guide.md
@@ -44,7 +44,7 @@ For more details on the build configurations see [project-guidelines](../coding-

The most common workflow for developers is to call `build` from the root once and then go and work on the individual library that you are trying to make changes for.

By default build only builds the product libraries and none of the tests. If you want to build the tests you can call `build -buildtests`. If you want to run the tests you can call `build -test` or `build -performanceTest`. To build and run the tests combine both arguments: `build -buildtests -test`. To build both the product libraries and the test libraries pass `build -build -buildtests` to the command line. If you want to further configure which test libraries to build you can pass `/p:TestProjectFilter=Tests|PerformanceTests` to the command.
By default build only builds the product libraries and none of the tests. If you want to build the tests you can call `build -buildtests`. If you want to run the tests you can call `build -test`. To build and run the tests combine both arguments: `build -buildtests -test`. To build both the product libraries and the test libraries pass `build -build -buildtests` to the command line.

If you invoke the build script without any arguments, the default arguments `-restore -build` will be executed. Note that `-restore` and `-build` are only implicit if no actions are passed in.

@@ -206,6 +206,12 @@ One can build in Debug or Release mode from the root by doing `build -c Release`

One can build 32- or 64-bit binaries or for any architecture by specifying in the root `build -arch [value]` or in a project `/p:ArchGroup=[value]` after the `dotnet msbuild` command.

### Benchmarks

All Benchmarks have been moved to the [dotnet/performance/](https://github.com/dotnet/performance/) repository.

Please read the [Benchmarking workflow for CoreFX](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-corefx.md) document to find out how to build and run the Benchmarks.

### Tests

We use the OSS testing framework [xunit](http://xunit.github.io/).
87 changes: 2 additions & 85 deletions Documentation/project-docs/performance-tests.md
@@ -1,89 +1,6 @@
Performance Tests
======================

This document contains instructions for building, running, and adding Performance tests.
All Performance Tests have been moved to the [dotnet/performance/](https://github.com/dotnet/performance/) repository.

Building and Running Tests
-----------
Performance test files (if present) are stored within a library's ```tests/Performance``` directory and contain test methods that are all marked with a perf-specific *Benchmark* attribute.

**Step # 1:** Prior to running performance tests, a full build from the repo root must be completed: `build -c Release`

**Step # 2:** Change directory to the performance tests directory: ```cd path/to/library/tests/Performance```

**Step # 3:** Build and run the tests:
- Windows: ```dotnet msbuild /t:BuildAndPerformanceTest /p:ConfigurationGroup=Release```
- Linux: ```dotnet msbuild /t:BuildAndPerformanceTest /p:ConfigurationGroup=Release```

The results files will be dropped in corefx/artifacts/bin/tests/TESTLIBRARY/FLAVOR/. The console output will also specify the location of these files.

To build and run all tests from root, execute the following command: `build -performanceTest -buildtests -c Release`.

**Getting memory usage**

To see memory usage as well as time, add the following property to the command lines above: `/p:CollectFlags=stopwatch+gcapi`.
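
For example, a sketch combining it with the command above:

dotnet msbuild /t:BuildAndPerformanceTest /p:ConfigurationGroup=Release /p:CollectFlags=stopwatch+gcapi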

Adding New Performance Tests
-----------
Performance tests for CoreFX are built on top of xunit and [the Microsoft xunit-performance framework](https://github.com/Microsoft/xunit-performance/).

Performance tests should reside within their own "Performance" folder within the tests directory of a library (e.g. [corefx/src/System.IO.FileSystem/tests/Performance](https://github.com/dotnet/corefx/tree/master/src/System.IO.FileSystem/tests/Performance) contains perf tests for FileSystem).

It's easiest to copy and modify an existing example like the one above. Notice that you'll need these lines in the tests csproj:
```
<ItemGroup>
  <ProjectReference Include="$(CommonTestPath)\Performance\PerfRunner\PerfRunner.csproj">
    <Project>{69e46a6f-9966-45a5-8945-2559fe337827}</Project>
    <Name>PerfRunner</Name>
  </ProjectReference>
</ItemGroup>
```
Once that’s all done, you can actually add tests to the file.

Writing Test Cases
-----------
```C#
using System.Collections.Generic;
using Xunit;
using Microsoft.Xunit.Performance;

namespace System.Collections.Tests
{
    public class Perf_Dictionary
    {
        private volatile Dictionary<int, string> dict;

        [Benchmark(InnerIterationCount = 2000)]
        public void ctor()
        {
            foreach (var iteration in Benchmark.Iterations)
                using (iteration.StartMeasurement())
                    for (int i = 0; i < Benchmark.InnerIterationCount; i++)
                    {
                        dict = new Dictionary<int, string>();
                    }
        }
    }
}
```

The above benchmark will test the performance of the `Dictionary<int, string>` constructor. Each iteration of the benchmark will call the constructor 2000 times (`InnerIterationCount`).

Test cases should adhere to the following guidelines, within reason:

* Individual test cases should be of the "microbenchmark" variety. They should test a single function, in as isolated an environment as possible.
* The "real work" must be done inside of the `using (iteration.StartMeasurement())` block. All extra work (setup, cleanup) should be done outside of this block, so as to not pollute the data being collected.
* Individual iterations of a test case should take from 100 milliseconds to 1 second. This is everything inside of the `using (iteration.StartMeasurement())` block.
* Test cases may need to use an "inner iteration" concept in order for individual invocations of the "outer iteration" to last from 100 ms to 1 s. The example above shows this.
* Some functions are prone to being entirely optimized out from test cases. For example, if the results of `Vector3.Add()` are not stored anywhere, then there are no observable side-effects, and the entire operation can be optimized out by the JIT. For operations which are susceptible to this, care must be taken to ensure that the operations are not entirely skipped (see the sketch after this list). Try the following:
  * Pass intermediate values to a volatile static field. This is done in the example code above.
  * If the value is a struct, compute a value dependent on the structure, and store that in a volatile static field.
* There are two main ways to detect when a test case is being "optimized out":
  * Look at the disassembly of the function (with the Visual Studio disassembler, for example).
  * Observe unusual changes in the duration metric. If your test suddenly takes 1% of its previous time, odds are something has gone wrong.
* Before using intrinsic data types (int, string, etc.) to represent value and reference types in your test, consider whether the code under test is optimized for those types versus normal classes and structs.
  * Also consider interfaces. For example, methods on ```List<T>``` using equality will be much faster on Ts that implement ```IEquatable<T>```.
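
For example, a sketch of the volatile-field guidance above (the `Vector3` benchmark and its names are illustrative, not from an actual CoreFX test):

```C#
using System.Numerics;
using Microsoft.Xunit.Performance;

public class Perf_Vector3
{
    private static volatile float s_sideEffect;

    [Benchmark(InnerIterationCount = 10000)]
    public void Add()
    {
        Vector3 left = new Vector3(1, 2, 3), right = new Vector3(4, 5, 6);
        foreach (var iteration in Benchmark.Iterations)
            using (iteration.StartMeasurement())
                for (int i = 0; i < Benchmark.InnerIterationCount; i++)
                {
                    Vector3 result = Vector3.Add(left, right);
                    // Store a value that depends on the result so the JIT cannot eliminate the call.
                    s_sideEffect = result.X + result.Y + result.Z;
                }
    }
}
```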

Avoid the following performance test anti-patterns:
* Tests for multiple methods which all end up calling the same final overload. This just adds noise and extra duplicate data to sift through.
* Having too many test cases which only differ by "input data". For example, testing the same operation on a collection with size 1, 10, 100, 1000, 10000, etc. This is a common pitfall when using `[Theory]` and `[InlineData]`. Instead, focus on the key scenarios and minimize the number of test cases. This results in less noise, less data to sift through, and less test maintenance cost.
* Performing more than a single operation in the "core test loop". There are times when this is necessary, but they are few and far between. Take extra care if you notice that your test case is doing too many things, and try to focus on creating a small, isolated microbenchmark.
Please read the [Benchmarking workflow for CoreFX](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-corefx.md) document to find out how to build and run the Performance Tests.
3 changes: 1 addition & 2 deletions eng/build.ps1
@@ -28,7 +28,6 @@ function Get-Help() {
Write-Host " -buildtests Build all test projects"
Write-Host " -rebuild Rebuild all source projects"
Write-Host " -test Run all unit tests (short: -t)"
Write-Host " -performanceTest Run all performance tests"
Write-Host " -pack Package build outputs into NuGet packages"
Write-Host " -sign Sign build outputs"
Write-Host " -publish Publish artifacts (e.g. symbols)"
@@ -54,7 +53,7 @@ if ($MyInvocation.InvocationName -eq ".") {
}

# Check if an action is passed in
$actions = "r","restore","b","build","rebuild","deploy","deployDeps","test","integrationTest","performanceTest","sign","publish","buildtests"
$actions = "r","restore","b","build","rebuild","deploy","deployDeps","test","integrationTest","sign","publish","buildtests"
$actionPassedIn = @(Compare-Object -ReferenceObject @($PSBoundParameters.Keys) -DifferenceObject $actions -ExcludeDifferent -IncludeEqual).Length -ne 0
if ($null -ne $properties -and $actionPassedIn -ne $true) {
$actionPassedIn = @(Compare-Object -ReferenceObject $properties -DifferenceObject $actions.ForEach({ "-" + $_ }) -ExcludeDifferent -IncludeEqual).Length -ne 0
3 changes: 1 addition & 2 deletions eng/build.sh
@@ -28,7 +28,6 @@ usage()
echo " --buildtests Build all test projects"
echo " --rebuild Rebuild all source projects"
echo " --test Run all unit tests (short: -t)"
echo " --performanceTest Run all performance tests"
echo " --pack Package build outputs into NuGet packages"
echo " --sign Sign build outputs"
echo " --publish Publish artifacts (e.g. symbols)"
@@ -51,7 +50,7 @@ extraargs=''
checkedPossibleDirectoryToBuild=false

# Check if an action is passed in
declare -a actions=("r" "restore" "b" "build" "rebuild" "deploy" "deployDeps" "test" "integrationTest" "performanceTest" "sign" "publish" "buildtests")
declare -a actions=("r" "restore" "b" "build" "rebuild" "deploy" "deployDeps" "test" "integrationTest" "sign" "publish" "buildtests")
actInt=($(comm -12 <(printf '%s\n' "${actions[@]/#/-}" | sort) <(printf '%s\n' "${@/#--/-}" | sort)))
if [ ${#actInt[@]} -eq 0 ]; then
arguments="-restore -build"