Merge remote-tracking branch 'upstream/main' into exp-data-pickle
wshanks committed Dec 21, 2023
2 parents abcd8ad + 726f422 commit 254c2a2
Showing 33 changed files with 121 additions and 88 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/cron-staging.yml
@@ -6,6 +6,7 @@ on:

jobs:
qiskit-main-tests:
if: github.repository_owner == 'Qiskit-Extensions'
name: tests-python${{ matrix.python-version }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
@@ -46,6 +47,7 @@ jobs:
TEST_TIMEOUT: 120
OMP_NUM_THREADS: 1
docs:
if: github.repository_owner == 'Qiskit-Extensions'
name: docs
runs-on: ubuntu-latest
steps:
1 change: 1 addition & 0 deletions .github/workflows/docs.yml
@@ -7,6 +7,7 @@ on:

jobs:
deploy:
if: github.repository_owner == 'Qiskit-Extensions'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
1 change: 1 addition & 0 deletions .github/workflows/docs_dev.yml
@@ -6,6 +6,7 @@ on:

jobs:
deploy:
if: github.repository_owner == 'Qiskit-Extensions'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
3 changes: 3 additions & 0 deletions .github/workflows/main.yml
@@ -11,6 +11,7 @@ concurrency:
cancel-in-progress: true
jobs:
tests:
if: github.repository_owner == 'Qiskit-Extensions'
name: tests-python${{ matrix.python-version }}-${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
@@ -62,6 +63,7 @@ jobs:
run: stestr history remove all

lint:
if: github.repository_owner == 'Qiskit-Extensions'
name: lint
runs-on: ubuntu-latest
steps:
@@ -80,6 +82,7 @@
- name: Run lint
run: tox -elint
docs:
if: github.repository_owner == 'Qiskit-Extensions'
name: docs
runs-on: ubuntu-latest
steps:
1 change: 1 addition & 0 deletions .github/workflows/neko.yml
@@ -8,6 +8,7 @@ concurrency:
cancel-in-progress: true
jobs:
neko:
if: github.repository_owner == 'Qiskit-Extensions'
name: Qiskit Neko Integration Tests
runs-on: ubuntu-latest
steps:
4 changes: 0 additions & 4 deletions constraints.txt
@@ -1,4 +0,0 @@
# Numpy 1.25 deprecated some behaviours that we used, and caused some
# tests to flake. See https://github.com/Qiskit/qiskit/issues/10305,
# remove pin when resolving that.
numpy<1.25
16 changes: 7 additions & 9 deletions docs/howtos/cloud_service.rst
@@ -4,7 +4,7 @@ Save and load experiment data with the cloud service
.. note::
This guide is only for those who have access to the cloud service. You can
check whether you do by logging into the IBM Quantum interface
and seeing if you can see the `database <https://quantum-computing.ibm.com/experiments>`__.
and seeing if you can see the `database <https://quantum.ibm.com/experiments>`__.

Problem
-------
@@ -44,13 +44,13 @@ backend and not a simulator to be able to save the experiment data. This is done
.. jupyter-output::

You can view the experiment online at
https://quantum-computing.ibm.com/experiments/10a43cb0-7cb9-41db-ad74-18ea6cf63704
https://quantum.ibm.com/experiments/10a43cb0-7cb9-41db-ad74-18ea6cf63704

Loading
~~~~~~~

Let's load a `previous T1
experiment <https://quantum-computing.ibm.com/experiments/9640736e-d797-4321-b063-d503f8e98571>`__
experiment <https://quantum.ibm.com/experiments/9640736e-d797-4321-b063-d503f8e98571>`__
(requires login to view), which we've made public by editing the ``Share level`` field:

.. jupyter-input::
@@ -59,13 +59,11 @@ experiment <https://quantum-computing.ibm.com/experiments/9640736e-d797-4321-b06
service = ExperimentData.get_service_from_backend(backend)
load_expdata = ExperimentData.load("9640736e-d797-4321-b063-d503f8e98571", service)

To display the figure, which is serialized into a string, we need the
``SVG`` library:
Now we can display the figure from the loaded experiment data:

.. jupyter-input::

from IPython.display import SVG
SVG(load_expdata.figure(0).figure)
load_expdata.figure(0)

.. image:: ./experiment_cloud_service/t1_loaded.png
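
As a rough illustration (assuming the ``load_expdata`` object from the snippet above), the returned
figure object also exposes the raw serialized image, e.g. if you want to write it to disk:

    fig = load_expdata.figure(0)   # rendered inline by Jupyter through its SVG representation
    payload = fig.figure           # underlying payload; may be str or bytes depending on how it was stored
    data = payload if isinstance(payload, bytes) else payload.encode("utf-8")
    with open("t1_loaded.svg", "wb") as out:
        out.write(data)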

@@ -143,7 +141,7 @@ The :meth:`~.ExperimentData.auto_save` feature automatically saves changes to th

.. jupyter-output::

You can view the experiment online at https://quantum-computing.ibm.com/experiments/cdaff3fa-f621-4915-a4d8-812d05d9a9ca
You can view the experiment online at https://quantum.ibm.com/experiments/cdaff3fa-f621-4915-a4d8-812d05d9a9ca
<ExperimentData[T1], backend: ibmq_lima, status: ExperimentStatus.DONE, experiment_id: cdaff3fa-f621-4915-a4d8-812d05d9a9ca>

Setting ``auto_save = True`` works by triggering :meth:`.ExperimentData.save`.
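
A minimal sketch of enabling it (assuming an experiment ``exp`` and a cloud-enabled ``backend`` as
elsewhere in this guide):

    exp_data = exp.run(backend=backend)
    exp_data.auto_save = True           # changes below are pushed to the service as they happen
    exp_data.tags = ["T1", "qubit_0"]   # saved automatically
    exp_data.share_level = "project"    # saved automatically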
@@ -188,5 +186,5 @@ Web interface
~~~~~~~~~~~~~

You can also view experiment results as well as change the tags and share level at the `IBM Quantum Experiments
pane <https://quantum-computing.ibm.com/experiments?date_interval=last-90-days&owner=me>`__
pane <https://quantum.ibm.com/experiments?date_interval=last-90-days&owner=me>`__
on the cloud.
2 changes: 1 addition & 1 deletion docs/howtos/experiment_times.rst
@@ -24,7 +24,7 @@ are all of type ``datetime.datetime`` and in your local timezone:
.. note::
The below attributes are only relevant for those who have access to the cloud service. You can
check whether you do by logging into the IBM Quantum interface
and seeing if you can see the `database <https://quantum-computing.ibm.com/experiments>`__.
and seeing if you can see the `database <https://quantum.ibm.com/experiments>`__.

- :attr:`.ExperimentData.creation_datetime` is the time when the experiment data was saved via the
service. This defaults to ``None`` if experiment data has not yet been saved.
10 changes: 6 additions & 4 deletions docs/howtos/rerun_analysis.rst
@@ -37,12 +37,14 @@ job IDs:

expdata = ExperimentData(experiment = experiment)
expdata.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids])
experiment.analysis.run(expdata)
experiment.analysis.run(expdata, replace_results=True)

# Block execution of subsequent code until analysis is complete
expdata.block_for_results()

``expdata`` will be the new experiment data object containing results of the rerun analysis.
``expdata`` will be the new experiment data object containing results of the rerun analysis. Note that if
``replace_results`` isn't set, running the analysis will return a new :class:`.ExperimentData` object
instead of overwriting the existing one.
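
A short sketch of that distinction, using the same names as the snippet above:

    # Option 1 -- in place: the rerun results land on the same ExperimentData object.
    experiment.analysis.run(expdata, replace_results=True).block_for_results()

    # Option 2 -- leave ``expdata`` untouched: a fresh ExperimentData holding the
    # rerun results is returned instead.
    new_expdata = experiment.analysis.run(expdata)
    new_expdata.block_for_results()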

If you have the job data in the form of a :class:`~qiskit.result.Result` object, you can
invoke the :meth:`.add_data` method instead of :meth:`.add_jobs`:
@@ -66,7 +68,7 @@ contain correct results.

In the case where jobs are not directly accessible from the provider but you've
downloaded the jobs from the
`IQS dashboard <https://quantum-computing.ibm.com/jobs>`_, you can load them from
`IQS dashboard <https://quantum.ibm.com/jobs>`_, you can load them from
the downloaded directory into :class:`~qiskit.result.Result` objects with this code:

.. jupyter-input::
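
    # A minimal sketch (assumed layout): each downloaded job saved as a JSON file
    # under a local ``downloaded_jobs/`` directory -- adjust path and pattern to taste.
    import json
    from pathlib import Path

    from qiskit.result import Result

    results = [
        Result.from_dict(json.loads(path.read_text()))
        for path in sorted(Path("downloaded_jobs").glob("*.json"))
    ]
    expdata.add_data(results)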
@@ -115,7 +117,7 @@ first component experiment.

data = ExperimentData(experiment=pexp)
data.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids])
pexp.analysis.run(data)
pexp.analysis.run(data, replace_results=True)

See Also
--------
2 changes: 1 addition & 1 deletion docs/howtos/runtime_sessions.rst
@@ -5,7 +5,7 @@ Problem
-------

You want to run experiments in a `Runtime session
<https://qiskit.org/ecosystem/ibm-runtime/sessions.html>`_ so that jobs can run in close temporal proximity.
<https://docs.quantum.ibm.com/run/sessions>`_ so that jobs can run in close temporal proximity.

Solution
--------
9 changes: 4 additions & 5 deletions docs/tutorials/calibrations.rst
@@ -12,9 +12,8 @@ measurement data manually.
In this tutorial, we demonstrate how to calibrate single-qubit gates using the
calibration framework in Qiskit Experiments. We will run experiments on our test pulse
backend, :class:`.SingleTransmonTestBackend`, a backend that simulates the underlying
pulses with `Qiskit Dynamics <https://qiskit.org/documentation/dynamics/>`_ on a
three-level model of a transmon. You can also run these experiments on any real backend
with Pulse enabled (see
pulses with :mod:`qiskit_dynamics` on a three-level model of a transmon. You can also
run these experiments on any real backend with Pulse enabled (see
:external+qiskit:doc:`tutorials/circuits_advanced/08_gathering_system_information`).

We will run experiments to
@@ -196,8 +195,8 @@ Run the calibration experiment:

The instance of ``calibrations`` has been automatically updated with the measured
frequency, as shown below. In addition to the columns shown below, ``calibrations`` also
store the group to which a value belongs, whether a values is valid or not and the
experiment id that produce a value.
stores the group to which a value belongs, whether a value is valid or not, and the
experiment id that produced a value.

.. jupyter-execute::

2 changes: 1 addition & 1 deletion qiskit_experiments/calibration_management/__init__.py
@@ -23,7 +23,7 @@
otherwise indicated.
Calibrating qubit setups is the task of finding the pulse shapes and parameter
values that maximizes the fidelity of the resulting quantum operations. This
values that maximize the fidelity of the resulting quantum operations. This
therefore requires experiments which are analyzed to extract parameter values.
Furthermore, the resulting parameter values and schedules must be managed. The
calibration management module in Qiskit experiments allows users to manage
10 changes: 5 additions & 5 deletions qiskit_experiments/calibration_management/calibrations.py
@@ -1270,12 +1270,12 @@ def schedules(self) -> List[Dict[str, Any]]:
Returns:
data: A list of dictionaries with all the schedules in it. The key-value pairs are
* 'qubits': the qubits to which this schedule applies. This may be an empty
tuple () if the schedule is the default for all qubits.
* 'schedule': The schedule.
* 'parameters': The parameters in the schedule exposed for convenience.
* ``qubits``: the qubits to which this schedule applies. This may be an empty
tuple () if the schedule is the default for all qubits.
* ``schedule``: The schedule.
* ``parameters``: The parameters in the schedule exposed for convenience.
This list of dictionaries can easily be converted to a data frame.
This list of dictionaries can easily be converted to a data frame.
"""
data = []
for key, sched in self._schedules.items():
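
A small sketch of the data-frame conversion mentioned in that docstring (assuming a populated
``calibrations`` instance and pandas installed):

    import pandas as pd

    schedule_df = pd.DataFrame(calibrations.schedules())
    print(schedule_df[["qubits", "schedule", "parameters"]])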
13 changes: 7 additions & 6 deletions qiskit_experiments/framework/analysis_result_table.py
@@ -85,13 +85,14 @@ def filter_columns(self, columns: Union[str, List[str]]) -> List[str]:
Args:
columns: Specifying a set of columns to return. You can pass a list of each
column name to return, otherwise builtin column groups are available.
column name to return, otherwise builtin column groups are available:
* ``all``: Return all columns, including metadata to communicate
with experiment service, such as entry IDs.
* ``default``: Return columns including analysis result with supplementary
information about experiment.
* ``minimal``: Return only analysis subroutine returns.
* "all": Return all columns, including metadata to communicate
with experiment service, such as entry IDs.
* "default": Return columns including analysis result with supplementary
information about experiment.
* "minimal": Return only analysis subroutine returns.
Raises:
ValueError: When column is given in string which doesn't match with any builtin group.
43 changes: 26 additions & 17 deletions qiskit_experiments/framework/experiment_data.py
@@ -67,6 +67,7 @@
ExperimentDataSaveFailed,
)


if TYPE_CHECKING:
# There is a cyclical dependency here, but the name needs to exist for
# Sphinx on Python 3.9+ to link type hints correctly. The gating on
@@ -192,7 +193,7 @@ def _repr_svg_(self):
if isinstance(self.figure, str):
return self.figure
if isinstance(self.figure, bytes):
return str(self.figure)
return self.figure.decode("utf-8")
return None
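
For context, a tiny sketch of why the ``bytes`` branch needs ``decode`` rather than ``str``:

    svg_bytes = b"<svg xmlns='http://www.w3.org/2000/svg'/>"
    str(svg_bytes)             # the repr, wrapped in b"..." -- not valid SVG markup
    svg_bytes.decode("utf-8")  # the markup itself, which Jupyter can render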


@@ -205,7 +206,7 @@ class ExperimentData:
.. note::
Saving experiment data to the cloud database is currently a limited access feature. You can
check whether you have access by logging into the IBM Quantum interface
and seeing if you can see the `database <https://quantum-computing.ibm.com/experiments>`__.
and seeing if you can see the `database <https://quantum.ibm.com/experiments>`__.
This class handles the following:
@@ -1071,6 +1072,13 @@ def _retrieve_data(self):
"""Retrieve job data if missing experiment data."""
# Get job results if missing in experiment data.
if self.provider is None:
# 'self._result_data' could be locked, so I check a copy of it.
if not self._result_data.copy():
# Adding warning so the user will have indication why the analysis may fail.
LOG.warning(
"Provider for ExperimentData object doesn't exist, resulting in a failed attempt to"
" retrieve data from the server; no stored result data exists"
)
return
retrieved_jobs = {}
jobs_to_retrieve = [] # the list of all jobs to retrieve from the server
@@ -1528,25 +1536,25 @@ def analysis_results(
index: Index of the analysis result to be returned.
Several types are accepted for convenience:
* None: Return all analysis results.
* int: Specific index of the analysis results.
* slice: A list slice of indexes.
* str: ID or name of the analysis result.
* None: Return all analysis results.
* int: Specific index of the analysis results.
* slice: A list slice of indexes.
* str: ID or name of the analysis result.
refresh: Retrieve the latest analysis results from the server, if
an experiment service is available.
block: If True block for any analysis callbacks to finish running.
block: If ``True``, block for any analysis callbacks to finish running.
timeout: max time in seconds to wait for analysis callbacks to finish running.
columns: Specifying a set of columns to return. You can pass a list of each
column name to return, otherwise builtin column groups are available.
column name to return, otherwise builtin column groups are available:
* "all": Return all columns, including metadata to communicate
with experiment service, such as entry IDs.
* "default": Return columns including analysis result with supplementary
information about experiment.
* "minimal": Return only analysis subroutine returns.
* ``all``: Return all columns, including metadata to communicate
with the experiment service, such as entry IDs.
* ``default``: Return columns including analysis result with supplementary
information about experiment.
* ``minimal``: Return only analysis subroutine returns.
dataframe: Set True to return analysis results in the dataframe format.
dataframe: Set to ``True`` to return analysis results in the dataframe format.
Returns:
Analysis results for this experiment.
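
A brief usage sketch of the options documented above (assuming an ``exp_data`` object whose analysis
has completed):

    # Dataframe view restricted to the analysis outputs themselves
    df = exp_data.analysis_results(dataframe=True, columns="minimal")

    # A single result selected by name, waiting for analysis callbacks first
    t1_result = exp_data.analysis_results("T1", block=True)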
@@ -1681,7 +1689,7 @@ def save(
Args:
suppress_errors: should the method catch exceptions (true) or
pass them on, potentially aborting the experiment (false)
pass them on, potentially aborting the experiment (false)
max_workers: Maximum number of concurrent worker threads (capped by 10)
save_figures: Whether to save figures in the database or not
save_children: For composite experiments, whether to save children as well
@@ -1779,7 +1787,7 @@ def save(
if not self.service.local and self.verbose:
print(
"You can view the experiment online at "
f"https://quantum-computing.ibm.com/experiments/{self.experiment_id}"
f"https://quantum.ibm.com/experiments/{self.experiment_id}"
)
# handle children, but without additional prints
if save_children:
@@ -2293,6 +2301,7 @@ def copy(self, copy_results: bool = True) -> "ExperimentData":
new_instance = ExperimentData(
backend=self.backend,
service=self.service,
provider=self.provider,
parent_id=self.parent_id,
job_ids=self.job_ids,
child_data=list(self._child_data.values()),
@@ -2523,7 +2532,7 @@ def get_service_from_backend(backend):
@staticmethod
def get_service_from_provider(provider):
"""Initializes the service from the provider data"""
db_url = "https://auth.quantum-computing.ibm.com/api"
db_url = "https://auth.quantum.ibm.com/api"
try:
# qiskit-ibmq-provider style
if hasattr(provider, "credentials"):
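
A rough sketch of the ``copy`` behaviour touched above (hypothetical ``exp_data`` with a provider set):

    replica = exp_data.copy(copy_results=False)    # same jobs and metadata, analysis results not copied
    assert replica.provider is exp_data.provider   # the provider is now carried over to the copy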
@@ -123,7 +123,7 @@ def update_calibrations(self, experiment_data: ExperimentData):
The frequency of the qubit is updated according to
..math::
.. math::
f \to f - \frac{{\rm d}\theta}{2\pi\tau{\rm d}t}
2 changes: 1 addition & 1 deletion qiskit_experiments/library/calibration/half_angle_cal.py
@@ -130,7 +130,7 @@ def update_calibrations(self, experiment_data: ExperimentData):
in the complex amplitude of the pulse. The update rule for the half angle calibration is
therefore:
..math::
.. math::
A \to A \cdot e^{-i{\rm d}\theta_\text{hac}/2}
@@ -33,7 +33,7 @@ class CorrelatedReadoutErrorAnalysis(BaseAnalysis):
:math:`A` is a :math:`2^n\times 2^n` matrix :math:`A` such that :math:`A_{y,x}`
is the probability to observe :math:`y` given the true outcome should be :math:`x`.
In the experiment, for each :math:`x`a circuit is constructed whose expected
In the experiment, for each :math:`x` a circuit is constructed whose expected
outcome is :math:`x`. From the observed results on the circuit, the probability for
each :math:`y` is determined, and :math:`A_{y,x}` is set accordingly.
2 changes: 1 addition & 1 deletion qiskit_experiments/library/characterization/rabi.py
@@ -187,7 +187,7 @@ def _metadata(self):

class EFRabi(Rabi):
r"""An experiment that scans the amplitude of a pulse inducing rotations on the
:math:`|1\rangle` <-> :math:`|2\rangle` transition.
:math:`|1\rangle` <-> :math:`|2\rangle` transition.
# section: overview