Added support for output_dim to be a tuple #1070
Conversation
The two tests that are failing are because they indirectly make sure that the …
Thanks @kessler-frost for the PR! Let us know when it's ready for final code review, and someone from the team will be able to have a look :)
If the logical changes cause assumptions made in previous tests to fail, then it is fine to update the tests to match the newer logic.
Codecov Report: All modified and coverable lines are covered by tests ✅
@@ Coverage Diff @@
## master #1070 +/- ##
=======================================
Coverage 97.74% 97.74%
=======================================
Files 154 154
Lines 11726 11728 +2
=======================================
+ Hits 11461 11463 +2
Misses 265 265
I was writing the test …
I've been trying to figure out the output shape of our … Edit: Turns out I was sampling …
As discussed in the related issue, if the layer next to our …
@trbromley The PR is now ready for review.
Awesome, thanks @kessler-frost 💪 Someone from the team will be able to have a look shortly.
Thanks @kessler-frost! Looks great! I have left some comments and can give the ✔️ once resolved.
One thing to bear in mind is I suggested we add a note in the docstring to warn users about casting - returning the state or dm is complex and we need to be careful. Also, we only support differentiating the state or dm if the device is default.qubit.tf using `diff_method="backprop"`, but this was recently made the default so we probably don't need to add a comment for that.
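The casting caution above can be illustrated with a small NumPy sketch (an assumed example, not PennyLane code): a density matrix is complex-valued, so naively casting it to a real dtype silently drops the imaginary parts.

```python
import numpy as np

# A valid single-qubit (pure-state) density matrix with nonzero
# imaginary entries -- an assumed example, not the output of a QNode.
dm = np.array([[0.5, 0.5j],
               [-0.5j, 0.5]])
print(dm.dtype)  # complex128

# Taking only the real part (what a real-dtype cast would keep)...
lossy = dm.real
print(lossy.dtype)  # float64

# ...discards the off-diagonal imaginary information entirely:
print(np.abs(dm.imag).max())  # 0.5
```

This is why a docstring note warning users about complex outputs (and an explicit real-part step before any real-valued layer) is worth having.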
Thanks @kessler-frost! Apologies for being a bit slow to get back to this, we've been having fun with QHack this week and last.
This looks good and I approve! Although I still think we should consider adding the comment about complex/real dtypes, but would be interested to get your thoughts too.
Finally, I saw this commit: 1a2354c for updating the changelog, but couldn't see the changelog in the latest updates. Please add the change as well as your name 😄 Thanks!!
@@ -47,13 +47,50 @@ def model(get_circuit, n_qubits, output_dim):
    return model


@pytest.fixture
Does this work without `@pytest.mark.usefixtures("get_circuit_dm")`? (we have `@pytest.mark.usefixtures("get_circuit")` for `model()`)
Yes, apparently it is mentioned in pytest's documentation that fixtures cannot use other fixtures in this way. I suggest we remove the `@pytest.mark.usefixtures("get_circuit")` for `model()` too.
AFAIK the fixture `get_circuit` or `get_circuit_dm` gets used when we apply the `@pytest.mark.usefixtures` decorator to the `TestKerasLayerIntegration` and `TestKerasLayerIntegrationDM` classes respectively. Thus, we do not need to use it for `model()` and `model_dm()`.
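The fixture arrangement discussed above can be sketched as follows (a minimal, hypothetical example whose names mirror the discussion, not the actual test file): a class-level `usefixtures` marker applies `get_circuit` to every test in the class, while the `model` fixture requests `get_circuit` directly as an argument, which is the only way one fixture can consume another.

```python
import pytest


@pytest.fixture
def get_circuit():
    # Stand-in for the real fixture that would build a QNode.
    return "circuit"


@pytest.fixture
def model(get_circuit):
    # Fixtures use other fixtures via parameters; applying
    # @pytest.mark.usefixtures to a fixture has no effect.
    return f"model using {get_circuit}"


@pytest.mark.usefixtures("get_circuit")
class TestKerasLayerIntegration:
    # The class-level marker makes get_circuit run for every test here,
    # so individual tests and fixtures don't need their own marker.
    def test_model(self, model):
        assert model == "model using circuit"
```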
Thanks for the review @trbromley! I have added the cautionary comment and updated the changelog as well. The PR may finally be ready for merging 😅.
Thanks @kessler-frost for the contribution! Merging in shortly!! 💯
Context:
Until now, only `int` and flat tuples like `(5,)` were acceptable values for the `output_dim` parameter in `qnn.KerasLayer`. After the addition of the `density_matrix()` return type for QNodes in PR #878, the output from a QNode may not necessarily be flat. This PR adds support for tuples as acceptable values for the `output_dim` parameter.

Description of the Change:
Now values like `(5, 5)`, as well as integers like `2` or flat tuples like `(5,)`, can be passed to the `output_dim` parameter. Note that if an `Iterable` is passed instead (e.g., a list like `[5, 5]`), it will be converted to a tuple. Internally, the value of `output_dim` will either be a tuple or an integer.

Example:
Here, `c` is a circuit whose QNode returns `density_matrix()` with shape `output_dim` (a square tuple), and `w` is the shape of the weights to be used in the circuit. We need to flatten the outputs if we expect the following layer to receive inputs with only one axis (excluding the batch size, which remains unchanged when flattening). We also add a lambda layer so that only the real values in the density matrix are used for processing (read the discussion below to know more).

Benefits:
- Adds support for tuples in `output_dim` without breaking the current usage patterns.
- Opens up the possibility of using layers like `Conv2D` with our `KerasLayer` in the future.

Related GitHub Issues:
Issue #936
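The behaviour described in the PR can be sketched in plain NumPy (standing in for the Keras layers; the helper name `normalize_output_dim` is hypothetical and not the actual PennyLane implementation):

```python
import numpy as np


def normalize_output_dim(output_dim):
    # Hypothetical helper mirroring the rule described above:
    # any Iterable (e.g. a list) becomes a tuple; plain ints pass through.
    if isinstance(output_dim, int):
        return output_dim
    return tuple(output_dim)


print(normalize_output_dim([5, 5]))  # -> (5, 5)
print(normalize_output_dim((5,)))    # -> (5,)
print(normalize_output_dim(2))       # -> 2

# Flattening a batched density-matrix-shaped output for a following
# one-axis layer: the batch axis stays, the square output_dim axes merge,
# and only the real part is kept (what the lambda layer would do).
batch = np.zeros((3, 4, 4), dtype=complex)  # batch of 3, output_dim = (4, 4)
flat = np.real(batch).reshape(batch.shape[0], -1)
print(flat.shape)  # -> (3, 16)
```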