Leaderboard implementation of tabular Q (facebookresearch#141)
* leaderboard implementation

* fails due to env selection

* fails at 60th evaluation

* Update to WIP Tabular Q leaderboard submission (facebookresearch#1)

* [llvm] Temporarily disable polybench

Mitigates facebookresearch#55.

* [validation] Add a flakiness retry loop around validation.

Add a retry loop around the granular individual validation callbacks
for cBench-v1.

Mitigates facebookresearch#144.

* [validation] Catch timeouts in retry loop.
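
For context, a minimal sketch of the kind of flakiness retry loop described
above. The helper name, retry count, and backoff are illustrative assumptions,
not the actual CompilerGym implementation:

    import time

    def validate_with_retries(validate_once, max_retries=3, timeout_seconds=300):
        """Retry a flaky validation callback, treating timeouts as retryable."""
        last_error = None
        for attempt in range(1, max_retries + 1):
            try:
                return validate_once(timeout=timeout_seconds)
            except (TimeoutError, OSError) as e:
                # Timeouts and transient I/O failures are caught and retried.
                last_error = e
                time.sleep(attempt)  # simple linear backoff between attempts
        raise RuntimeError(
            f"Validation failed after {max_retries} attempts"
        ) from last_error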

* [docs/faq] "I updated with 'git pull' and not it doesn't work

* [tests] Extend timeout on datasets test.

* [tests] Update regression tests.

* [tests] Reduce validation regression test retry counts.

* Call env.reset() just after creation - fixes facebookresearch#150

* [docs] Update LLVM actions table.

* Add a target to rename the manylinux file.

* Force UTF-8 on README decoding.

* [util] Improve runfiles docstrings.

* [llvm] Remove LLVM binaries from wheel.

This patch removes the LLVM binaries from the shipped wheel. This is
to reduce the package size to be under the 100MB default maximum
imposed by PyPi.

Instead of shipping the files in the wheel, the LLVM binaries are
downloaded from an archive hosted by Facebook when needed. The
circumstances for needing them are: (1) starting an LLVM service, (2)
attempting to resolve the path to an LLVM binary.
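
A rough sketch of that download-on-demand pattern follows; the directory,
URL, and function name here are placeholders rather than the actual
implementation:

    import os
    import tarfile
    import urllib.request

    SITE_DATA_DIR = os.path.expanduser("~/.cache/compiler_gym_llvm")  # illustrative path
    LLVM_ARCHIVE_URL = "https://example.com/llvm-binaries.tar.bz2"  # placeholder URL

    def require_llvm_binaries() -> str:
        """Return the LLVM binary directory, downloading the archive on first use."""
        marker = os.path.join(SITE_DATA_DIR, ".downloaded")
        if os.path.isfile(marker):  # fast path: already unpacked, skip the download
            return SITE_DATA_DIR
        os.makedirs(SITE_DATA_DIR, exist_ok=True)
        archive, _ = urllib.request.urlretrieve(LLVM_ARCHIVE_URL)
        with tarfile.open(archive) as tar:
            tar.extractall(SITE_DATA_DIR)
        open(marker, "w").close()  # record success so later calls take the fast path
        return SITE_DATA_DIR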

* Defer evaluation of cBench runtime data directory.

* [tests] Remove tests that overwrite site data path.

These no longer work now that site data requires LLVM binaries to be
present.

* Release v0.1.5.

* Add a fast path check for downloaded LLVM files.

* [tests] Use full URI for benchmark.

* Correct retry count in error message.

* [env] Include last error on init failure.

* [env] Add a special error message for UNKNOWN errors.

* [rpc] Allow loglines() when logs directory does not exist.

* [rpc] Include service logs in error message on init failure.

* [rpc] Include final error message on retry loop failure.

* [rpc] Add decoded signal name on init error.

* [llvm] Replace DCHECK() with Status error.

* [tests] Remove tests that interfere with site data path.

Site data directory is now a pre-requisite of the LLVM environment and
cannot be moved.

* [tests] Fix caught exception type.

* [llvm] Add a check for runfile requirement.

* [llvm] Add a file existing check.

* [rpc] Disable logs buffering on debugging runs.

* [tests] Fix error message comparison tests

* [bin/manual_env] Update prompt after reset().

Running `reset()` with no benchmark set will select a random program,
so the prompt must be updated.

* [tests] Add workaround for prompt issue.

* Release v0.1.6.

This release focuses on hardening the LLVM environments, providing improved
semantics validation, and improving the datasets. Many thanks to @JD-at-work,
@bwasti, and @mostafaelhoushi for code contributions.

- [llvm] Added a new `cBench-v1` dataset which changes the function attributes
  of the IR to permit inlining. `cBench-v0` is deprecated and will be removed no
  earlier than v0.1.6.
- [llvm] Removed 15 passes from the LLVM action space: `-bounds-checking`,
  `-chr`, `-extract-blocks`, `-gvn-sink`, `-loop-extract-single`,
  `-loop-extract`, `-objc-arc-apelim`, `-objc-arc-contract`, `-objc-arc-expand`,
  `-objc-arc`, `-place-safepoints`, `-rewrite-symbols`,
  `-strip-dead-debug-info`, `-strip-nonlinetable-debuginfo`, `-structurizecfg`.
  Passes are removed if they are irrelevant (e.g. used only for debugging), if they
  change the program semantics (e.g. inserting runtime bounds checking), or if they
  have been found to have nondeterministic behavior between runs.
- Extended `env.step()` so that it can take a list of actions that are all
  performed in a single batch, which improves efficiency (see the sketch after
  these notes).
- Added default reward spaces for `CompilerEnv` that are derived from scalar
  observations (thanks @bwasti!).
- Added a new Q learning example (thanks @JD-at-work!).
- *Deprecation:* The next release will introduce a new datasets API that
  is easier to use and more flexible. In preparation for this, the `Dataset`
  class has been renamed to `LegacyDataset` and the following dataset operations
  have been marked deprecated: `activate()`, `deactivate()`, and `delete()`. The
  `GetBenchmarks()` RPC interface method has also been marked deprecated.
- [llvm] Improved semantics validation using LLVM's memory, thread, address, and
  undefined behavior sanitizers.
- Numerous bug fixes and improvements.
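
As a minimal sketch of the batched `env.step()` call mentioned above
(assuming an LLVM environment and that `step()` accepts a list of action
indices, as described in the notes; the benchmark and pass names are
arbitrary examples):

    import gym
    import compiler_gym  # registers the llvm-* environments

    env = gym.make("llvm-autophase-ic-v0", benchmark="cBench-v1/crc32")
    env.reset()
    # Apply three passes in one batched step instead of three separate steps.
    actions = [env.action_space.flags.index(f)
               for f in ["-mem2reg", "-gvn", "-simplifycfg"]]
    observation, reward, done, info = env.step(actions)
    env.close()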

* [tests] Add temporary workaround for flaky init benchmark.

* Add missing copyright header to make_specs.py.

* [util] Force string type in truncate().

* [bin/service]: Fix reporting of observation space shape.

* [bin/service]: Fix reporting of observation space shape.

* [util] Force string type in truncate().

* [llvm] Add an InstCount observation space.

This adds new observation spaces that expose the -instcount pass
values. The -instcount pass counts the number of instructions of each
type in a program, along with the total number of instructions, total
number of blocks, and total number of functions.

There are four new observation spaces: `InstCount`, which returns the
feature vector as a numpy array, `InstCountDict`, which returns the
values as a dictionary of named features, and `InstCountNorm` and
`InstCountNormDict`, which are the same as above but the counts are
instead normalized to the total number of instructions in the program.

Example usage:

    >>> import gym
    >>> import compiler_gym
    >>> env = gym.make("llvm-v0")
    >>> env.observation_space = "InstCountDict"
    >>> env.reset("cBench-v0/crc32")
    {'TotalInstsCount': 196, 'TotalBlocksCount': 29,
    'TotalFuncsCount': 13, 'RetCount': 5, 'BrCount': 24,
    'SwitchCount': 0, 'IndirectBrCount': 0, 'InvokeCount': 0,
    'ResumeCount': 0, 'UnreachableCount': 0, 'CleanupRetCount': 0,
    'CatchRetCount': 0, 'CatchSwitchCount': 0, 'CallBrCount': 0,
    'FNegCount': 0, 'AddCount': 5, 'FAddCount': 0, 'SubCount': 0,
    'FSubCount': 0, 'MulCount': 0, 'FMulCount': 0, 'UDivCount': 0,
    'SDivCount': 0, 'FDivCount': 0, 'URemCount': 0, 'SRemCount': 0,
    'FRemCount': 0, 'ShlCount': 0, 'LShrCount': 3, 'AShrCount': 0,
    'AndCount': 3, 'OrCount': 1, 'XorCount': 8, 'AllocaCount': 24,
    'LoadCount': 51, 'StoreCount': 38, 'GetElementPtrCount': 5,
    'FenceCount': 0, 'AtomicCmpXchgCount': 0, 'AtomicRMWCount': 0,
    'TruncCount': 1, 'ZExtCount': 5, 'SExtCount': 0, 'FPToUICount': 0,
    'FPToSICount': 0, 'UIToFPCount': 0, 'SIToFPCount': 0,
    'FPTruncCount': 0, 'FPExtCount': 0, 'PtrToIntCount': 0,
    'IntToPtrCount': 0, 'BitCastCount': 0, 'AddrSpaceCastCount': 0,
    'CleanupPadCount': 0, 'CatchPadCount': 0, 'ICmpCount': 10,
    'FCmpCount': 0, 'PHICount': 0, 'CallCount': 13, 'SelectCount': 0,
    'UserOp1Count': 0, 'UserOp2Count': 0, 'VAArgCount': 0,
    'ExtractElementCount': 0, 'InsertElementCount': 0,
    'ShuffleVectorCount': 0, 'ExtractValueCount': 0,
    'InsertValueCount': 0, 'LandingPadCount': 0, 'FreezeCount': 0}

The InstCount observation spaces are quick to compute and lightweight, with
computational cost comparable to Autophase.
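
The vector-valued variants are read the same way; for instance (a sketch —
the `env.observation[...]` accessor shown here is assumed):

    >>> env.observation_space = "InstCount"
    >>> obs = env.reset("cBench-v0/crc32")  # the 70-D feature vector as a numpy array
    >>> env.observation["InstCountNorm"]    # same counts, normalized by TotalInstsCount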

Fixes facebookresearch#149.

* [ci] Enable test workflows on Python 3.9.

Issue facebookresearch#162.

* Bump grpcio from 1.34 to 1.36.

Issue facebookresearch#162.

* Bump bazel requirement to 4.0.0.

This is required to build grpcio 1.36.0.

Issue facebookresearch#162.

* [ci] Reverse order of sudo in setup.

Issue facebookresearch#162.

* Add libjpeg-dev to list of required linux packages.

This is to enable compiling Pillow from source on Python 3.9.

Issue facebookresearch#162.

* Bump the gym dependency to 0.18.0.

Issue facebookresearch#162.

* [examples] Fix initialization of temporary directory variable.

* Add zlib to macOS dependencies.

This is to fix compilation of Pillow using Python 3.9.

Issue facebookresearch#162.

* [readme] Recommend python 3.9 for conda environments.

Issue facebookresearch#162.

* [ci] Use python 3.9 for continuous integration jobs.

Issue facebookresearch#162.

* [setup.py] Add a list of supported python versions

* [setup.py] Bump development status to Alpha.

* [README] Use non-sudo instructions for linux setup.

* [README] Simplify table of contents.

This adds <!-- omit from toc --> annotations to some of the minor
subheadings to keep the table of contents as simple as possible.

This uses the "Markdown All in One" plugin for VSCode to automatically
keep the table of contents up to date:

https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one#table-of-contents

* [README] Use syntax highlighting for installation instructions.

* [README] Small tweak to wording.

* [README] Use -U in pip install example.

* [README] Don't use '$' prefix on shell commands.

It makes it harder to copy and paste the commands.

* [README] Add explicit "proceed to all platforms" below.

* Add missing load() of bazel rules.

* [leaderboard] Move leaderboard utility into compiler_gym namespace.

This adds a compiler_gym.leaderboard module that contains the LLVM
codesize leaderboard helper code. New API docs provide improved
explanation of how to use it.

Issue facebookresearch#158.
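
A minimal sketch of how a submission built on this module is typically wired
up (the helper name below reflects the llvm_instcount module described above
but should be treated as an assumption, and the policy itself is a
placeholder):

    from compiler_gym.leaderboard.llvm_instcount import eval_llvm_instcount_policy

    def fixed_sequence_policy(env) -> None:
        """A trivial policy: apply a fixed sequence of passes to the benchmark."""
        for flag in ["-mem2reg", "-simplifycfg"]:
            env.step(env.action_space.flags.index(flag))

    if __name__ == "__main__":
        eval_llvm_instcount_policy(fixed_sequence_policy)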

* [leaderboard] Rename --logfile to --results_logfile.

This is to fix the duplicate flag error from
//tests/benchmarks:parallelization_load_test.

* [leaderboard] Make it clear that users can set observation spaces.

Issue facebookresearch#142.

* [CONTRIBUTING] Improve leaderboard submission instructions.

Re-order the file so that leaderboard submissions appear directly
below pull requests, and provide more details about the submission
review process.

* [leaderboard] Rename LLVM codesize to instruction count.

Be clear that this leaderboard evaluates performance at reducing the
instruction count of LLVM-IR, not the binary codesize.

* Add leaderboard package as a dependency of //compiler_gym.

* [CONTRIBUTING] Use random-agent PR as example for leaderboard.

* leaderboard implementation

* fails due to env selection

* fails at 60th evaluation

* Rebase Tabular Q leaderboard on latest development.

* Add load() for bazel symbol.

Co-authored-by: Bram Wasti <bwasti@fb.com>
Co-authored-by: Jiadong Guo <jdguo@fb.com>

* Add JD's tabular-q leaderboard submission

* updated smoke test and readme

Co-authored-by: Jiadong Guo <jdguo@fb.com>
Co-authored-by: Chris Cummins <chrisc.101@gmail.com>
Co-authored-by: Bram Wasti <bwasti@fb.com>
Co-authored-by: Chris Cummins <cummins@fb.com>
5 people committed Aug 3, 2021
1 parent 187c001 commit 93d7a9d
Showing 9 changed files with 645 additions and 10 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -182,7 +182,9 @@ environment on the 23 benchmarks in the `cBench-v1` dataset.
| Facebook | Greedy search | [write-up](leaderboard/llvm_instcount/e_greedy/README.md), [results](leaderboard/llvm_instcount/e_greedy/results_e0.csv) | 2021-03 | 169.237s | 1.055× |
| Facebook | Random search (t=60) | [write-up](leaderboard/llvm_instcount/random_search/README.md), [results](leaderboard/llvm_instcount/random_search/results_p125_t60.csv) | 2021-03 | 91.215s | 1.045× |
| Facebook | e-Greedy search (e=0.1) | [write-up](leaderboard/llvm_instcount/e_greedy/README.md), [results](leaderboard/llvm_instcount/e_greedy/results_e10.csv) | 2021-03 | 152.579s | 1.041× |
| Jiadong Guo | Tabular Q (N=5000, H=10) | [write-up](leaderboard/llvm_instcount/tabular_q/README.md), [results](leaderboard/llvm_instcount/tabular_q/results-H10-N5000.csv) | 2021-04 | 2534.305s | 1.036× |
| Facebook | Random search (t=10) | [write-up](leaderboard/llvm_instcount/random_search/README.md), [results](leaderboard/llvm_instcount/random_search/results_p125_t10.csv) | 2021-03 | **42.939s** | 1.031× |
| Jiadong Guo | Tabular Q (N=2000, H=5) | [write-up](leaderboard/llvm_instcount/tabular_q/README.md), [results](leaderboard/llvm_instcount/tabular_q/results-H5-N2000.csv) | 2021-04 | 694.105s | 0.988× |


# Contributing
1 change: 1 addition & 0 deletions examples/BUILD
@@ -74,6 +74,7 @@ py_test(
py_binary(
name = "tabular_q",
srcs = ["tabular_q.py"],
visibility = ["//visibility:public"],
deps = [
"//compiler_gym",
"//compiler_gym/util",
24 changes: 14 additions & 10 deletions examples/tabular_q.py
@@ -47,7 +47,7 @@
"Indices of Alphaphase features that are used to construct a state",
)
flags.DEFINE_float("learning_rate", 0.1, "learning rate of the q-learning.")
flags.DEFINE_integer("episodes", 5000, "number of episodes used to learn.")
flags.DEFINE_integer("episodes", 2000, "number of episodes used to learn.")
flags.DEFINE_integer(
"log_every", 50, "number of episode interval where progress is reported."
)
@@ -86,7 +86,6 @@ def make_q_table_key(autophase_feature, action, step):
Finally, we add the action index to the key.
"""

return StateActionTuple(
*autophase_feature[FLAGS.features_indices], step, FLAGS.actions.index(action)
)
@@ -118,6 +117,8 @@ def rollout(qtable, env, printout=False):
action_seq.append(a)
observation, reward, done, info = env.step(env.action_space.flags.index(a))
rewards.append(reward)
if done:
break
if printout:
print(
"Resulting sequence: ", ",".join(action_seq), f"total reward {sum(rewards)}"
@@ -133,17 +134,19 @@ def train(q_table, env):
# policy improvement happens directly after one another.
for i in range(1, FLAGS.episodes + 1):
current_length = 0
obs = env.reset()
observation = env.reset()
while current_length < FLAGS.episode_length:
# Run epsilon greedy policy to allow exploration.
a = select_action(q_table, obs, current_length, FLAGS.epsilon)
hashed = make_q_table_key(obs, a, current_length)
a = select_action(q_table, observation, current_length, FLAGS.epsilon)
hashed = make_q_table_key(observation, a, current_length)
if hashed not in q_table:
q_table[hashed] = 0
# Take a step in the environment, record the reward and state transition.
# Effectively we are evaluating the policy by taking a step in the
# environment.
obs, reward, done, info = env.step(env.action_space.flags.index(a))
observation, reward, done, info = env.step(env.action_space.flags.index(a))
if done:
break
current_length += 1

# Compute the target value of the current state, by using the current
@@ -154,7 +157,7 @@
# can be used to emphasize immediate early rewards, and encourage
# the agent to achieve higher rewards sooner rather than later.
target = reward + FLAGS.discount * get_max_q_value(
q_table, obs, current_length
q_table, observation, current_length
)

# Update Q value. Instead of replacing the Q value at the current
@@ -166,7 +169,7 @@
+ (1 - FLAGS.learning_rate) * q_table[hashed]
)

if i % FLAGS.log_every == 0:
if FLAGS.log_every and i % FLAGS.log_every == 0:

def compare_qs(q_old, q_new):
diff = [q_new[k] - v for k, v in q_old.items()]
@@ -186,15 +189,16 @@ def main(argv):
q_table: Dict[StateActionTuple, float] = {}
benchmark = benchmark_from_flags()
assert benchmark, "You must specify a benchmark using the --benchmark flag"
env = gym.make("llvm-autophase-ic-v0", benchmark=benchmark)
env = gym.make("llvm-ic-v0", benchmark=benchmark)
env.observation_space = "Autophase"

try:
# Train a Q-table.
with Timer("Constructing Q-table"):
train(q_table, env)

# Rollout resulting policy.
rollout(q_table, env, True)
rollout(q_table, env, printout=True)

finally:
env.close()
26 changes: 26 additions & 0 deletions leaderboard/llvm_instcount/tabular_q/BUILD
@@ -0,0 +1,26 @@
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
load("@rules_python//python:defs.bzl", "py_binary")

py_binary(
name = "tabular_q_eval",
srcs = ["tabular_q_eval.py"],
deps = [
"//compiler_gym/leaderboard:llvm_instcount",
"//examples:tabular_q",
],
)

py_test(
name = "tabular_q_test",
timeout = "moderate",
srcs = ["tabular_q_test.py"],
deps = [
":tabular_q_eval",
"//compiler_gym/leaderboard:llvm_instcount",
"//tests:test_main",
"//tests/pytest_plugins:llvm",
],
)
68 changes: 68 additions & 0 deletions leaderboard/llvm_instcount/tabular_q/README.md
@@ -0,0 +1,68 @@
# Tabular Q

**tldr;**

A tabular, online Q-learning algorithm that is trained on each program in the test set.

**Authors:**
JD

**Results:**
1. [Episode length 5, 2000 training episodes](results-H5-N2000.csv).
2. [Episode length 10, 5000 training episodes](results-H10-N5000.csv).


**Publication:**


**CompilerGym version:**
0.1.7

**Open source?**
Yes, MIT licensed. [Source Code](tabular_q_eval.py).

**Did you modify the CompilerGym source code?**
No.

**What parameters does the approach have?**
* Episode length during the Q-table creation, *H*.
* Learning rate, *λ*.
* Discount factor, *γ*.
* Actions that are considered by the algorithm, *a*.
* Features that are used from the Autophase feature set, *f*.
* Number of episodes used during Q-table learning, *N*.

**What range of values were considered for the above parameters?**
* H=5, λ=0.1, γ=1.0, 15 selected actions, 3 selected features, N=2000 (short).
* H=10, λ=0.1, γ=1.0, 15 selected actions, 3 selected features, N=5000 (long).

**Is the policy deterministic?**
The policy itself is deterministic after it is trained. However, the training
process is non-deterministic, so retraining produces a different policy.

## Description

Tabular Q-learning is a standard reinforcement learning technique that estimates the
expected accumulated reward of every state-action pair and stores these estimates in a table.
Through interaction with the environment, the algorithm improves the estimates by
combining the step-wise rewards with the existing entries of the Q-table.

The implementation is online: for every step taken in the environment, the reward
is immediately used to update the current Q-table.
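
Concretely, each step applies the standard tabular Q-learning update. A
minimal sketch of that rule (the key layout is a stand-in for the
(features, step, action) tuples used by the submission; `learning_rate` and
`discount` correspond to the *λ* and *γ* parameters above):

```python
def q_update(q_table, key, next_keys, reward, learning_rate=0.1, discount=1.0):
    """One online update: Q(s,a) <- (1-lr)*Q(s,a) + lr*(r + gamma*max_a' Q(s',a'))."""
    best_next = max((q_table.get(k, 0.0) for k in next_keys), default=0.0)
    target = reward + discount * best_next
    q_table[key] = learning_rate * target + (1 - learning_rate) * q_table.get(key, 0.0)
    return q_table[key]
```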

### Experimental Setup

| | Hardware Specification |
| ------ | --------------------------------------------- |
| OS | Ubuntu 20.04 |
| CPU    | Intel Xeon Gold 6230 CPU @ 2.10GHz (80 cores)  |
| Memory | 754.5 GiB |

### Experimental Methodology

```sh
# short
$ python tabular_q_eval.py --episodes=2000 --episode_length=5 --learning_rate=0.1 --discount=1 --log_every=0
# long
$ python tabular_q_eval.py --episodes=5000 --episode_length=10 --learning_rate=0.1 --discount=1 --log_every=0
```
