qdao is a quantum data access optimization framework. It leverages secondary storage to simulate large-scale quantum circuits and minimizes data movement between memory and secondary storage. The memory requirement of full-state quantum circuit simulation grows exponentially with the number of qubits. For example, on a typical PC, simulating a circuit with more than 30 qubits can easily result in an out-of-memory error. With qdao, the memory footprint of simulation is completely under your control.
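To see why, note that a full statevector stores 2^n complex amplitudes. A quick back-of-the-envelope estimate (a minimal sketch; 16 bytes per amplitude assumes double-precision complex numbers):
def sv_memory_gib(num_qubits: int) -> float:
    # 2**n amplitudes, 16 bytes each (complex128), converted to GiB
    return (2 ** num_qubits) * 16 / 2 ** 30
for n in (20, 25, 30, 32):
    # 30 qubits already needs ~16 GiB
    print(f"{n} qubits -> {sv_memory_gib(n):.2f} GiB")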
git clone https://github.com/Zhaoyilunnn/qdao.git
cd qdao
pip install .
pytest tests/
The following example can be found in the examples/ directory; more examples will be added in the future.
The following code snippet shows the basic usage of qdao. You can configure the num_primary parameter to reduce memory usage.
from qdao import Engine
from qiskit.circuit.random import random_circuit
from qiskit import transpile
from qiskit_aer import Aer
num_qubits = 12
num_primary = 10
num_local = 8
# Create a qiskit quantum circuit `circ`
circ = random_circuit(num_qubits, 10, measure=False, max_operands=2)
backend = Aer.get_backend("aer_simulator")
circ = transpile(circ, backend=backend)
# `num_primary`: size of a compute unit, i.e., $m$ in QDAO paper
# `num_local`: size of a storage unit, i.e., $t$ in QDAO paper
eng = Engine(circuit=circ, num_primary=num_primary, num_local=num_local)
eng.run()
To use a GPU for simulation while storing the entire statevector in host memory, try the following configuration.
First you need to install qiskit-aer-gpu; please refer to its official documentation.
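A typical installation looks like the following (a sketch; the exact package name depends on your CUDA version, so check the qiskit-aer documentation):
# For CUDA 12.x
pip install qiskit-aer-gpu
# For CUDA 11.x
pip install qiskit-aer-gpu-cu11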
eng = Engine(circuit=circ, num_primary=num_primary, num_local=num_local, sv_location="memory", device="GPU")
You can specify a backend simulator using the backend option; currently only qiskit and pyquafu are supported.
# First transform qiskit circuit to a quafu circuit
from quafu import QuantumCircuit
quafu_circ = QuantumCircuit(1)
# For qiskit < 1.0
quafu_circ.from_openqasm(circ.qasm())
# For qiskit >= 1.0, `qasm()` api has been deprecated.
from qiskit.qasm2 import dumps
quafu_circ.from_openqasm(dumps(circ))
# Create a new engine using quafu backend
eng = Engine(circuit=quafu_circ, num_primary=num_primary, num_local=num_local, backend="quafu")
eng.run()
We are working on supporting measurement in qdao; for now, please obtain the state vector after simulation as follows.
from qdao.util import retrieve_sv
res = retrieve_sv(num_qubits, num_local=num_local)
print(res)
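As an optional sanity check (a sketch, not part of the qdao API; it assumes the circuit is small enough to simulate directly and that the retrieved vector follows Qiskit's qubit ordering), you can compare the retrieved state against a plain Aer simulation:
import numpy as np
# Reference simulation of the same (transpiled) circuit with Aer
circ_ref = circ.copy()
circ_ref.save_statevector()
ref_sv = backend.run(circ_ref).result().get_statevector()
# Should print True if qdao's result matches the reference statevector
print(np.allclose(np.asarray(ref_sv), res))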
If you find our work useful, please kindly cite our paper as below.
@inproceedings{qdao2023,
title={Full State Quantum Circuit Simulation Beyond Memory Limit},
author={Zhao, Yilun and Chen, Yu and Li, He and Wang, Ying and Chang, Kaiyan and Wang, Bingmeng and Li, Bing and Han, Yinhe},
booktitle={2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
year={2023},
organization={IEEE}
}
See the CONTRIBUTING.md.
There are some key features to be supported in the future:
- GPU simulation
- Noisy simulation
When using the qiskit backend, setting an initial statevector in qiskit incurs significant overhead because
- Qiskit treats the state vector as parameters and implements time-consuming validation logic, see #14.
- There is an additional data copy when submitting a circuit with an initial state vector to Aer. E.g., if you set a 1 GB initial state vector, it actually consumes 2 GB of memory during simulation.
- You can use the tagged stable version, which requires qiskit < 1.0; we made some modifications to qiskit to eliminate the validation overhead.
git checkout stable/0.1
pip install .
- You can use the quafu backend. Although quafu itself is slower than Qiskit, it does not suffer from these issues when used with QDAO. Indeed, QDAO-PyQuafu is faster than QDAO-Qiskit for large circuits (>= 28 qubits).
- Since the additional overhead of qiskit is purely an implementation issue, you might want to measure the ideal performance of QDAO-Qiskit. To do so, you could comment out this line, which however leads to incorrect simulation results.
- We will implement a custom high-performance C++ backend and integrate it into QDAO. Please stay tuned.
Please file an issue or contact zyilun8@gmail.com if you encounter any problems.