[examples] Add a top-level README file #468
Merged
ChrisCummins
merged 7 commits into
facebookresearch:development
from
ChrisCummins:examples-docs
Oct 20, 2021
Conversation
mostafaelhoushi approved these changes (Oct 14, 2021): LGTM
Codecov Report
```
@@            Coverage Diff             @@
##           development     #468       +/-   ##
================================================
- Coverage        87.55%   69.05%   -18.50%
================================================
  Files              110      110
  Lines             6121     6123        +2
================================================
- Hits              5359     4228     -1131
- Misses             762     1895     +1133
================================================
```
Continue to review full report at Codecov.
facebook-github-bot added the CLA Signed label (Oct 14, 2021). This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.
This adds a top-level examples/README.md file which provides an overview of the various scripts and subdirectories.
This also updates a few implementation details in the example scripts.
Here is a preview of the new README:
CompilerGym Examples
This directory contains code samples for everything from implementing simple
RL agents to adding support for entirely new compilers. Is there an example that
you think is missing? If so, please contribute!
Table of contents:
Autotuning
Performing a random walk of an environment
The random_walk.py script runs a single episode of a
CompilerGym environment, logging the action taken and reward received at each
step. Example usage:
```
$ python random_walk.py --env=llvm-v0 --step_min=100 --step_max=100 \
    --benchmark=cbench-v1/dijkstra --reward=IrInstructionCount

=== Step 1 ===
Action:       -lower-constant-intrinsics (changed=False)
Reward:       0.0
Step time:    805.6us

=== Step 2 ===
Action:       -forceattrs (changed=False)
Reward:       0.0
Step time:    229.8us

...

=== Step 100 ===
Action:       -globaldce (changed=False)
Reward:       0.0
Step time:    213.9us

Completed 100 steps in 91.6ms (1091.3 steps / sec).
Total reward: 161.0
Max reward:   111.0 (+68.94% at step 31)
```
For further details run:
python random_walk.py --help
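The core loop of a random walk is small enough to sketch. The snippet below uses a toy stand-in environment (hypothetical; a real run would use `compiler_gym.make("llvm-v0")`) to show the action sampling and reward bookkeeping:

```python
import random

class ToyEnv:
    """Hypothetical stand-in for a CompilerGym environment: step() returns a
    random reward instead of measuring a real compiler."""

    def __init__(self, num_actions=5, seed=0):
        self.num_actions = num_actions
        self.rng = random.Random(seed)

    def reset(self):
        pass

    def step(self, action):
        return self.rng.uniform(-1.0, 2.0)  # reward for this step

def random_walk(env, num_steps=100, seed=0):
    """Take num_steps random actions, tracking the final and best cumulative reward."""
    rng = random.Random(seed)
    env.reset()
    total, best, best_step = 0.0, float("-inf"), 0
    for step in range(1, num_steps + 1):
        reward = env.step(rng.randrange(env.num_actions))
        total += reward
        if total > best:
            best, best_step = total, step
    return total, best, best_step

total, best, best_step = random_walk(ToyEnv())
print(f"Total reward: {total:.1f}, max reward: {best:.1f} at step {best_step}")
```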
GCC Autotuning (genetic algorithms, hill climbing, + more)
The gcc_search.py script contains implementations of several
autotuning techniques for the GCC environment. It was used to produce the
results for the GCC experiments in the CompilerGym
whitepaper. For further details run:
python gcc_search.py --help
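One of the simpler techniques in the script, hill climbing over a set of on/off flags, can be sketched as follows. The `compile_time` objective here is a hypothetical stand-in; gcc_search.py measures a real compilation:

```python
import random

def compile_time(flags):
    """Hypothetical stand-in for compiling with the given flags and timing
    the result; smaller is better."""
    return sum((i % 3) if on else 1 for i, on in enumerate(flags))

def hill_climb(num_flags=10, iterations=200, seed=0):
    """Flip one random flag at a time, keeping the change only if it helps."""
    rng = random.Random(seed)
    flags = [rng.random() < 0.5 for _ in range(num_flags)]
    best = compile_time(flags)
    for _ in range(iterations):
        i = rng.randrange(num_flags)
        flags[i] = not flags[i]
        score = compile_time(flags)
        if score < best:
            best = score
        else:
            flags[i] = not flags[i]  # revert a non-improving flip
    return flags, best
```

A genetic algorithm differs only in how candidates are proposed (crossover and mutation over a population); the evaluate-and-keep-the-best loop stays the same.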
Makefile integration
The makefile_integration directory demonstrates a
simple integration of CompilerGym into a C++ Makefile config. For details see
the Makefile.
Random search using the LLVM C++ API
While not intended for the majority of users, it is entirely straightforward to
skip CompilerGym's Python frontend and interact with the C++ APIs directly. The
RandomSearch.cc file demonstrates a simple parallelized
random search implemented for the LLVM compiler service. For further details run:
bazel run -c opt //examples:RandomSearch -- --help
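The strategy itself is language-agnostic. A minimal Python sketch of the same idea, with a hypothetical stand-in objective in place of the LLVM compiler service, runs independent random episodes in parallel and keeps the best:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(seq):
    """Hypothetical stand-in for compiling with an action sequence and
    measuring the reward."""
    return sum(a * (i % 4) for i, a in enumerate(seq))

def random_episode(seed, num_actions=10, length=20):
    """One trial: evaluate a random action sequence."""
    rng = random.Random(seed)
    seq = [rng.randrange(num_actions) for _ in range(length)]
    return objective(seq), seq

def random_search(num_episodes=1000, workers=4):
    """Run episodes concurrently and return the best (reward, sequence) pair."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return max(pool.map(random_episode, range(num_episodes)))

best_reward, best_seq = random_search()
```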
Reinforcement learning
PPO and integration with RLlib
The rllib.ipynb notebook demonstrates integrating CompilerGym
with the popular RLlib reinforcement
learning library. The notebook covers registering a custom environment that uses a
constrained subset of the LLVM environment's action space and a finite time horizon,
and trains a PPO agent using separate train/val/test datasets.
Actor-critic
The actor_critic script contains a simple actor-critic
example using PyTorch. The objective is to minimize the size of a benchmark
(program) using LLVM compiler passes. At each step there is a choice of which
pass to pick next and an episode consists of a sequence of such choices,
yielding the number of saved instructions as the overall reward. For
simplification of the learning task, only a (configurable) subset of LLVM passes
are considered and every episode has the same (configurable) length.
For further details run:
python actor_critic.py --help
Tabular Q learning
The tabular_q script contains a simple tabular Q learning
example for the LLVM environment. Using selected features from the Autophase
observation space, and a specific training program as the gym environment, it
finds the best action sequence using online Q learning.
For further details run:
python tabular_q.py --help
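The update rule at the heart of the script is ordinary tabular Q-learning. Here is a self-contained sketch on a toy chain MDP (the toy dynamics are hypothetical; tabular_q.py uses Autophase features as the state and LLVM passes as the actions):

```python
import random
from collections import defaultdict

def q_learning(num_states=5, episodes=500, alpha=0.5, gamma=0.9,
               epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: action 1 moves right, action 0
    stays put, and reaching the last state pays reward 1."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> estimated value

    def step(state, action):
        state = state + 1 if action == 1 else state
        done = state == num_states - 1
        return state, (1.0 if done else 0.0), done

    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:  # explore
                action = rng.randrange(2)
            else:  # exploit the current estimates
                action = max(range(2), key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + gamma * max(q[(nxt, a)] for a in range(2))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q
```

After training, the greedy policy derived from the table (pick the action with the highest Q-value in each state) should walk straight to the rewarding terminal state.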
Extending CompilerGym
Example CompilerGym service
The example_compiler_gym_service directory
demonstrates how to extend CompilerGym with support for new compiler problems.
The directory contains bare bones implementations of backends in Python or C++
that can be used as the basis for adding new compiler environments. See the
README.md file for further details.
Example loop unrolling
The example_unrolling_service directory
demonstrates how to implement support for a real compiler problem by integrating
with commandline loop unrolling flags for the LLVM compiler. See the
README.md file for further details.
Miscellaneous
Exhaustive search of bounded action spaces
The brute_force.py script runs a parallelized brute force of
an action space. It enumerates all possible combinations of actions up to a
finite episode length and evaluates them, logging the incremental rewards of
each. For further details run:
python brute_force.py --help
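The enumeration itself is a few lines with itertools. The toy reward below is hypothetical; brute_force.py evaluates each sequence in a real environment:

```python
import itertools

def brute_force(num_actions, episode_length, evaluate):
    """Enumerate every action sequence of the given length, keeping the best."""
    best_seq, best_reward = None, float("-inf")
    for seq in itertools.product(range(num_actions), repeat=episode_length):
        reward = evaluate(seq)
        if reward > best_reward:
            best_seq, best_reward = seq, reward
    return best_seq, best_reward

def toy_reward(seq):
    """Hypothetical objective: reward alternating actions."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

print(brute_force(3, 4, toy_reward))
```

Note that the search space grows as num_actions ** episode_length, which is why this approach is only feasible for bounded action spaces and short episodes.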
The explore.py script evaluates all possible combinations of
actions up to a finite limit, but partial sequences of actions that end up in
the same state are deduplicated, sometimes dramatically reducing the size of the
search space. This script can also be configured to do a beam search.
For further details run:
python explore.py --help
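The deduplication idea can be sketched as a breadth-first search that prunes any action prefix reaching an already-seen state. The toy integer state space below is hypothetical; explore.py compares real environment states:

```python
def deduplicated_search(initial_state, actions, apply_action, max_depth):
    """Breadth-first enumeration of action sequences, pruning prefixes that
    reach a state seen before; returns the set of distinct states."""
    seen = {initial_state}
    frontier = [initial_state]
    for _ in range(max_depth):
        nxt = []
        for state in frontier:
            for action in actions:
                s = apply_action(state, action)
                if s not in seen:
                    seen.add(s)
                    nxt.append(s)
        frontier = nxt
    return seen

# Toy state space: an integer that each action either increments or doubles mod 16.
states = deduplicated_search(
    1, ["add", "double"],
    lambda s, a: (s + 1) % 16 if a == "add" else (s * 2) % 16,
    max_depth=6)
print(f"{len(states)} distinct states vs 2**6 = 64 action sequences")
```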
Estimating the immediate and cumulative reward of actions and benchmarks
The sensitivity_analysis directory contains a pair of
scripts for estimating the sensitivity of the reward signal to different
environment parameters:
- One script estimates the immediate reward of running a specific action by
  running trials. A trial is a random episode that ends with the chosen action.
- The other script estimates the cumulative reward of a random episode on a
  benchmark by running trials. A trial is an episode in which a random number of
  random actions are performed and the total cumulative reward is recorded.
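The estimation procedure for the first of these can be sketched as follows. The reward function here is a hypothetical stand-in; the real scripts query a CompilerGym environment:

```python
import random
import statistics

def trial_reward(action, prefix, rng):
    """Hypothetical stand-in for the immediate reward the environment
    returns for `action` after playing the random `prefix`."""
    return (action % 3) + 0.1 * len(prefix) + rng.gauss(0, 0.5)

def estimate_action_reward(action, num_trials=200, max_prefix=10, seed=0):
    """Estimate an action's immediate reward: each trial plays a random
    prefix of actions and then the action of interest, recording its reward."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_trials):
        prefix = [rng.randrange(5) for _ in range(rng.randrange(max_prefix))]
        samples.append(trial_reward(action, prefix, rng))
    return statistics.mean(samples), statistics.stdev(samples)

mean, stdev = estimate_action_reward(action=2)
```

Reporting the standard deviation alongside the mean matters here: a high-variance estimate indicates the action's effect depends strongly on what was run before it.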