Remove pandas dataframe from quantum volume. (#136)
* Remove pandas dataframe from quantum volume.

* Move wrapper functions to examples notebook.
kylegulshen authored Jun 11, 2019
1 parent fe15c06 commit e341cdb
Showing 4 changed files with 151 additions and 286 deletions.
10 changes: 5 additions & 5 deletions CHANGELOG.md
@@ -5,9 +5,9 @@ v0.6 (June 11, 2018)
--------------------
Breaking Changes:

-- `operator_estimation.py` is entirely replaced.
+- `operator_estimation.py` is entirely replaced. All changes from (gh-135) except where stated otherwise.

-- `pyquil.operator_estimation` dependencies replaced with `forest.benchmarking.operator_estimation`
+- `pyquil.operator_estimation` dependencies replaced with `forest.benchmarking.operator_estimation` (gh-129,132,133,134,135)

- `operator_estimation.TomographyExperiment.out_op` -> `operator_estimation.ObservablesExperiment.out_observable`

@@ -17,13 +17,13 @@ Breaking Changes:

- `utils.all_pauli_terms` -> `utils.all_traceless_pauli_terms`

-- `DFEData` and `DFEEstimate` dataclasses removed in favor of `ExperimentResult` and tuple of results respectively.
+- `DFEData` and `DFEEstimate` dataclasses removed in favor of `ExperimentResult` and tuple of results respectively (gh-134).

- plotting moved out of `qubit_spectroscopy`; instead, use `fit_*_results()` to get a `lmfit.model.ModelResult` and pass this into `analysis.fitting.make_figure()`

-- `pandas.DataFrame` is no longer used in `randomized_benchmarking`, `qubit_spectroscopy`, and `robust_phase_estimation`. These now make use of `operator_estimation.ObservablesExperiment`, and as such the API has changed substantially. Please refer to example notebooks for new usage.
+- `pandas.DataFrame` is no longer used in `randomized_benchmarking` (gh-133), `qubit_spectroscopy` (gh-129), and `robust_phase_estimation` (gh-135). These now make use of `operator_estimation.ObservablesExperiment`, and as such the API has changed substantially. Please refer to example notebooks for new usage.

-- `pandas.DataFrame` methods removed from `quantum_volume`. See examples notebook for alternative usage.
+- `pandas.DataFrame` methods removed from `quantum_volume`. See examples notebook for alternative usage (gh-136).

- `utils.determine_simultaneous_grouping()` removed in favor of similar functionality in `operator_estimation.group_settings`
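The `quantum_volume` entry above replaces DataFrame rows with plain Python lists, one entry per (depth, circuit) pair. A minimal sketch of that bookkeeping pattern, where `fake_circuit` is a hypothetical stand-in for the library's `generate_abstract_qv_circuit`:

```python
# Sketch of the list-based bookkeeping that replaces the old DataFrame rows.
# fake_circuit is a hypothetical stand-in for generate_abstract_qv_circuit;
# the real function returns a (permutations, gates) pair for a given depth.
def fake_circuit(depth):
    perms = [list(range(depth)) for _ in range(depth)]
    gates = [[None] * (depth // 2) for _ in range(depth)]
    return perms, gates

def generate_circuits(depths):
    # One abstract circuit per requested depth, as in the updated notebook.
    for d in depths:
        yield fake_circuit(d)

n_circuits = 3
unique_depths = [2, 3]
# One entry per (depth, circuit) pair -- formerly one DataFrame row each.
depths = [d for d in unique_depths for _ in range(n_circuits)]
ckts = list(generate_circuits(depths))
print(depths)    # [2, 2, 2, 3, 3, 3]
print(len(ckts)) # 6
```

Parallel lists (`depths`, `ckts`, later `progs` and `results`) stay index-aligned, which is all the DataFrame was providing.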

99 changes: 81 additions & 18 deletions examples/quantum_volume.ipynb
@@ -213,7 +213,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## For more fine-grained control, create and maintain a dataframe"
+"## Run intermediate steps yourself"
]
},
{
@@ -222,19 +262,61 @@
"metadata": {},
"outputs": [],
"source": [
-"from forest.benchmarking.quantum_volume import (generate_quantum_volume_experiments,\n",
-" add_programs_to_dataframe,\n",
-" acquire_quantum_volume_data,\n",
-" acquire_heavy_hitters,\n",
-" get_results_by_depth,\n",
-" extract_quantum_volume_from_results)"
+"from forest.benchmarking.quantum_volume import generate_abstract_qv_circuit, _naive_program_generator, collect_heavy_outputs\n",
+"from pyquil.numpy_simulator import NumpyWavefunctionSimulator\n",
+"from pyquil.gates import RESET\n",
+"import time\n",
+"\n",
+"def generate_circuits(depths):\n",
+"    for d in depths:\n",
+"        yield generate_abstract_qv_circuit(d)\n",
+"\n",
+"def convert_ckts_to_programs(qc, circuits, qubits=None):\n",
+"    for idx, ckt in enumerate(circuits):\n",
+"        if qubits is None:\n",
+"            d_qubits = qc.qubits() # by default the program can act on any qubit in the computer\n",
+"        else:\n",
+"            d_qubits = qubits[idx]\n",
+"\n",
+"        yield _naive_program_generator(qc, d_qubits, *ckt)\n",
+"\n",
+"\n",
+"def acquire_quantum_volume_data(qc, programs, num_shots = 1000, use_active_reset = False):\n",
+"    for program in programs:\n",
+"        start = time.time()\n",
+"\n",
+"        if use_active_reset:\n",
+"            reset_measure_program = Program(RESET())\n",
+"            program = reset_measure_program + program\n",
+"\n",
+"        # run the program num_shots many times\n",
+"        program.wrap_in_numshots_loop(num_shots)\n",
+"        executable = qc.compiler.native_quil_to_executable(program)\n",
+"\n",
+"        results = qc.run(executable)\n",
+"\n",
+"        runtime = time.time() - start\n",
+"        yield results\n",
+"\n",
+"\n",
+"def acquire_heavy_hitters(abstract_circuits):\n",
+"    for ckt in abstract_circuits:\n",
+"        perms, gates = ckt\n",
+"        depth = len(perms)\n",
+"        wfn_sim = NumpyWavefunctionSimulator(depth)\n",
+"\n",
+"        start = time.time()\n",
+"        heavy_outputs = collect_heavy_outputs(wfn_sim, perms, gates)\n",
+"        runtime = time.time() - start\n",
+"\n",
+"        yield heavy_outputs\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Get a dataframe with (depth x n_circuits) many \"Abstract Ckt\"s that describe each model circuit for each depth."
+"Generate (len(unique_depths) x n_circuits) many \"Abstract Ckt\"s that describe each model circuit for each depth."
]
},
{
@@ -244,16 +286,17 @@
"outputs": [],
"source": [
"n_circuits = 100\n",
-"depths = [2,3]\n",
-"df = generate_quantum_volume_experiments(depths, n_circuits)\n",
-"df"
+"unique_depths = [2,3]\n",
+"depths = [d for d in unique_depths for _ in range(n_circuits)]\n",
+"ckts = list(generate_circuits(depths))\n",
+"print(ckts[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Use the default program_generator to synthesize native pyquil programs that implement each ckt natively on the qc."
+"Use the _naive_program_generator to synthesize native pyquil programs that implement each ckt natively on the qc."
]
},
{
@@ -262,8 +305,8 @@
"metadata": {},
"outputs": [],
"source": [
-"df = add_programs_to_dataframe(df, noisy_qc)\n",
-"print(df[\"Program\"].values[0])"
+"progs = list(convert_ckts_to_programs(noisy_qc, ckts))\n",
+"print(progs[0])"
]
},
{
@@ -279,7 +322,8 @@
"metadata": {},
"outputs": [],
"source": [
-"df = acquire_quantum_volume_data(df, noisy_qc, num_shots=10)"
+"num_shots=10\n",
+"results = list(acquire_quantum_volume_data(noisy_qc, progs, num_shots=num_shots))"
]
},
{
@@ -295,8 +339,24 @@
"metadata": {},
"outputs": [],
"source": [
-"df = acquire_heavy_hitters(df)\n",
-"print(df[\"Num HH Sampled\"].values[0:10])"
+"ckt_hhs = acquire_heavy_hitters(ckts)"
]
},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Count the number of heavy hitters that were sampled on the qc"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from forest.benchmarking.quantum_volume import count_heavy_hitters_sampled\n",
+"num_hh_sampled = count_heavy_hitters_sampled(results, ckt_hhs)"
+]
+},
{
@@ -312,7 +372,9 @@
"metadata": {},
"outputs": [],
"source": [
-"results = get_results_by_depth(df)\n",
+"from forest.benchmarking.quantum_volume import get_prob_sample_heavy_by_depth\n",
+"\n",
+"results = get_prob_sample_heavy_by_depth(depths, num_hh_sampled, [num_shots for _ in depths])\n",
"results"
]
},
@@ -329,6 +391,7 @@
"metadata": {},
"outputs": [],
"source": [
+"from forest.benchmarking.quantum_volume import extract_quantum_volume_from_results\n",
"qv = extract_quantum_volume_from_results(results)\n",
"qv"
]
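The result-processing steps at the end of the notebook diff (`count_heavy_hitters_sampled`, `get_prob_sample_heavy_by_depth`) can be pictured with a pure-Python sketch. The bodies below are illustrative assumptions, not forest.benchmarking's implementations:

```python
# Illustrative stand-ins (assumptions, not the library's code) for the
# result-processing functions the updated notebook imports.

def count_heavy_hitters_sampled(results, ckt_hhs):
    # For each circuit, count how many sampled bitstrings are heavy outputs.
    counts = []
    for shots, heavy in zip(results, ckt_hhs):
        heavy = set(heavy)
        counts.append(sum(1 for shot in shots
                          if "".join(map(str, shot)) in heavy))
    return counts

def get_prob_sample_heavy_by_depth(depths, num_hh, num_shots):
    # Pool heavy-hitter counts across all circuits of the same depth.
    totals = {}
    for d, hh, shots in zip(depths, num_hh, num_shots):
        h, s = totals.get(d, (0, 0))
        totals[d] = (h + hh, s + shots)
    return {d: h / s for d, (h, s) in totals.items()}

probs = get_prob_sample_heavy_by_depth(
    depths=[2, 2, 3, 3],
    num_hh=count_heavy_hitters_sampled(
        results=[[(0, 1), (1, 1)], [(1, 1), (1, 1)], [(0, 0, 0)], [(1, 1, 1)]],
        ckt_hhs=[["11"], ["11"], ["111"], ["111"]]),
    num_shots=[2, 2, 1, 1])
print(probs)  # {2: 0.75, 3: 0.5}
```

In the standard quantum volume protocol, `extract_quantum_volume_from_results` then reports 2**d for the largest depth d whose pooled heavy-output probability clears the 2/3 threshold (the real implementation applies a confidence bound rather than the raw estimate).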
