Arithmetic between real arrays and Python complex scalars (revisited) #841
The array API standard is now clear (in Type Promotion Rules):
This means that the following does not have defined behavior:
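For instance (an illustrative sketch, not the snippet from the original issue), an expression of this shape is what the standard leaves undefined:

```python
import numpy as np  # standing in for an array API namespace `xp`

# A real floating-point array multiplied by a Python complex scalar.
# The standard's type promotion rules do not define this combination,
# although most libraries accept it.
x = np.asarray([1.0, 2.0], dtype=np.float32)
y = x * 1j
# NumPy returns complex64 here: the real and imaginary component dtype
# matches the float32 input array's dtype.
```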
However, the behavior is defined consistently in NumPy, CuPy, PyTorch, jax.numpy¹, Dask array, and TensorFlow² in the intuitive way: the real and imaginary component dtypes of the complex output array match the dtype of the input array.
This issue was discussed explicitly in gh-478 beginning with #478 (comment). I see several comments that seem supportive of allowing this operation, e.g.
The comment I see against it (#478 (comment)) is
It looked like the summary comments suggested that this would be allowed, but it was not specifically addressed in the PR that closed the issue. (See postscript for more information.)
I thought it might help to add some perspective on how this impacts a developer translating code to the standard: when tests including an operation like

begin to fail with

`array_api_strict` only, the developer is faced with the choice of skipping the `array_api_strict` tests or figuring out what is wrong and changing the code to something like:

If they also want it to be able to preserve the lower-precision types supported by some libraries (e.g. for NumPy), there would be additional hurdles. I tested a few libraries, and the array-API-compatible version of the code also increases the execution time notably for small arrays. Individually, the inconvenience, the complexity, and the overhead are small, but they add up. I am excited about adding array API support to SciPy, but not everyone is supportive, and performance regressions and complicated-looking diffs can make it difficult to garner support.
I would suggest that the operation be defined, even if it means adding an exception to the simply stated rules for array/Python arithmetic.
The more specific language about this not being defined was added in gh-513, which provided the justification:
I think that is referring to the language:
That guidance was added in gh-74, but I can't trace the origin further. Perhaps there can be exceptions to that rule?
Footnotes
1. When higher-precision support is enabled; see the changelog.
2. With NumPy experimental behavior (https://github.com/data-apis/array-api/issues/478#issuecomment-1272631595). Vanilla TensorFlow doesn't seem to support the other array-scalar operations that are defined, but this was already not deemed sufficient reason to prevent their inclusion in the standard (https://github.com/data-apis/array-api/issues/478#issuecomment-1270409660).