
zero-dimensional tensor behavior #98

Closed
StefanKarpinski opened this issue Jul 8, 2011 · 4 comments
Labels
bug Indicates an unexpected problem or unintended behavior

Comments

@StefanKarpinski
Member

This is not at all what should happen for a zero-dimensional tensor:

julia> a = Array{Int32,0}(())
Array(Int32,())

julia> a[] = 1
error: stack overflow

julia> a[]
Array(Int32,())

Instead, `a[] = 1` should assign the unique value contained in the zero-tensor and `a[]` should retrieve it.
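For reference, this is the behavior modern Julia eventually settled on; a minimal sketch (using today's `Array{T,0}(undef)` constructor, which postdates the 2011 syntax quoted above):

```julia
# Zero-dimensional arrays hold exactly one element: a[] reads it
# and a[] = x writes it (modern-Julia sketch, not the 2011 API).
a = Array{Int32,0}(undef)   # zero-dimensional Int32 array
a[] = 1                     # assign the unique contained value
@assert a[] == Int32(1)     # retrieve it
@assert size(a) == ()       # no dimensions...
@assert length(a) == 1      # ...but one element
```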

@JeffBezanson
Member

I was just trying to remember why our Scalars aren't 0-Tensors any more. Do you remember?

Maybe making a 0-dimensional array should be an error.

I noticed that a[:,1] gives something 2-dimensional. I recall we wanted the rank of the result to be the sum of the ranks of the indexes, but I guess we changed that because the type system can't express it?

@StefanKarpinski
Member Author

There were all sorts of nasty type issues with having Float64 and Tensor{Float64,0} be identified types.

I'm not really sure what we should do here. Clearly having t[1,2,3] return a zero-dimensional tensor object is stupid unless that's the same thing as a scalar. Having T = Tensor{T,0} caused all sorts of problems with method declarations, but it is strangely compelling.

@StefanKarpinski
Member Author

I'm making this not a v1.0 issue. This isn't causing immediate problems and can be resolved later. Still reasonable to discuss it now though.

@JeffBezanson
Member

Ah, ok, I guess we decided that being a container is not directly related to tensor rank. If it were, it would be hard to justify a[] doing a rank-reducing operation (i.e. taking an element out of a container).

I can fix this with a 3-line change:

diff --git a/j/array.j b/j/array.j
index 017862a..483ce2b 100644
--- a/j/array.j
+++ b/j/array.j
@@ -58,6 +58,7 @@ float64{T,n}(x::Array{T,n}) = convert(Array{Float64,n}, x)
 ## Indexing: ref ##

 ref(a::Array, i::Index) = arrayref(a,i)
+ref{T}(a::Array{T,0}) = arrayref(a,1)
 ref{T}(a::Array{T,1}, i::Index) = arrayref(a,i)
 ref(a::Array{Any,1}, i::Index) = arrayref(a,i)
 ref{T}(a::Array{T,2}, i::Index, j::Index) = arrayref(a, (j-1)*arraysize(a,1)+i)
@@ -68,6 +69,7 @@ assign(A::Array{Any}, x::Tensor, i::Index) = arrayset(A,i,x)
 assign(A::Array{Any}, x::ANY, i::Index) = arrayset(A,i,x)
 assign{T}(A::Array{T}, x::Tensor, i::Index) = arrayset(A,i,convert(T, x))
 assign{T}(A::Array{T}, x, i::Index) = arrayset(A,i,convert(T, x))
+assign{T}(A::Array{T,0}, x) = arrayset(A,1,convert(T, x))

 ## Dequeue functionality ##

diff --git a/src/array.c b/src/array.c
index 1f8feb2..a24bb9f 100644
--- a/src/array.c
+++ b/src/array.c
@@ -42,7 +42,6 @@ static jl_array_t *_new_array(jl_type_t *atype, jl_tuple_t *dimst,
     void *data;
     jl_array_t *a;

-    if (ndims == 0) nel = 0;
     for(i=0; i < ndims; i++) {
         nel *= dims[i];
     }

So should I just push this?
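The C-side change above works because an array's element count is the product of its dimensions, and the empty product is 1, not 0; the deleted `if (ndims == 0) nel = 0;` line had been forcing zero-dimensional arrays to allocate no storage. A small sketch of that invariant (the `nel` helper here is illustrative, not the actual `_new_array` code):

```julia
# Element count as the product of the dimensions; for a
# zero-dimensional array the empty product is 1, so one
# element gets allocated (the bug fixed in src/array.c).
nel(dims) = reduce(*, dims; init=1)

@assert nel((2, 3)) == 6   # ordinary 2x3 array
@assert nel((5,))   == 5   # vector of length 5
@assert nel(())     == 1   # zero-dimensional: one element, not zero
```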

JeffBezanson added a commit that referenced this issue Jul 8, 2011
making zero-dimensional Arrays do something more sane. the whole thing is
debatable, but this fix is definitely an improvement.
StefanKarpinski added a commit that referenced this issue Jul 8, 2011
* 'master' of github.com:JuliaLang/julia:
  adding @spawnat (null) expr should print as "nothing"
  fixes issue #98 making zero-dimensional Arrays do something more sane. the whole thing is debatable, but this fix is definitely an improvement.
  switching to better set of llvm optimization passes
LilithHafner pushed a commit to LilithHafner/julia that referenced this issue Oct 11, 2021
Simplify mad and add details to doc
inkydragon pushed a commit that referenced this issue Dec 15, 2024
Stdlib: SHA
(Automated BumpStdlibs.jl update of the SHA stdlib; its changelog mentions SHA.jl PR #98, which is unrelated to this issue.)
DilumAluthge added a commit that referenced this issue Dec 17, 2024
Stdlib: Distributed
(Automated BumpStdlibs.jl update of the Distributed stdlib; its changelog mentions Distributed.jl PR #98, which is unrelated to this issue.)