-
## Tensor

Tensor provides a native row-major (RM) data structure that implements all of the underlying numpy-like capabilities. See numpy for matlab users for a nice concise summary of numpy (and MATLAB) syntax.

### Literals

```
// 1D:
[1,2,3]

// 2D:
[[1,2], [3,4]]
=
[1,2]
[3,4]

// 3D:
[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
=
[1,2], [5,6]
[3,4], [7,8]
```

### Access

Two types of access: indexing with single indexes or ranges (sub-spaces), and boolean filtering (boolean mask). As in Go, indexed sub-slices are references to the original (writing to them modifies the original), and you use an explicit copy to get independent values.

```
a := [1,2,3,4,5]
b := [[1,2], [3,4]]
// [1,2]
// [3,4]

a[0]    // 1
a[-1]   // 5, negative indexing from end: len(a)-1
a[2:4]  // [3,4] = subspace
b[0, :] // [1,2] = first row
b[:, 1] // [2,4] = 2nd column
b[:,-1] // [2,4] = last column
a <= 3  // [true, true, true, false, false] boolean mask
a[a<=3] // [1,2,3] boolean mask indexing
```
-
## Scope and Naming

I'm increasingly thinking that it doesn't make sense to maintain two separate shell-like Go language transpiler systems, and that we should just make one all-purpose language that does everything. Essentially a Go version of IPython, with numpy and shell commands builtin (except better of course, because it is Go syntax and we don't have to use the "magic" syntax).

Thus, a key step is to come up with a good name for this thing. Some considerations:
-
It seems unlikely that we're going to be able to transpile even simple expressions like `…`.

In any case, dealing with the unnamed results of expressions will require some amount of type inference, e.g., knowing that `…`. It is also possible that, if we're tracking the nature of the relevant variables, we could avoid the need for `…`.
-
## Go level impl: Indexed Tensor

The workhorse for all math computations will be the `tensor.Indexed` type. We can share the Indexes between Table and Tensor, but the Indexed type itself, which just has the indexes and the pointer to the underlying Tensor or Table, cannot be shared. `tensor.Sort` and `tensor.Filter` operate on Indexed, and filter implements `…`.

### Generic named functions

All computation boils down to applying a function to indexed tensor(s). In a duck-typed language like python, there is no problem doing this with any call signature, but that doesn't work in Go. Can we devise a fully generic call signature?

Names are important for auto-naming the results of functions; e.g., in the Sim logging example, you want to be able to pass in a vararg list of stats functions to call, and then have it name the resulting data accordingly. Standard functions get enums, but free-form string names are necessary for arbitrary user-defined functions. Do we have some kind of giant map of all such functions? And a giant enum of all standard cases?

The existing `…`. Some of these operate on raw `…`. The Tensor case needs to be able to operate on high-dimensional "Cell" values; e.g., the Mean stat on a tensor returns a flat slice of float64 with the mean for each sub-tensor value. Per above, we need to consolidate on an indexed tensor as the one data type to operate on universally.

### Element-wise vs. whole tensor functions

The simplest case is when all functions operate element-wise, and you just call them with the `…`. If you need to do some combination of element-wise and aggregate computation (as in the SumSquares in metric), you could aggregate into a 1-row tensor that holds the aggregate values, and then the aggregate function operates on those.

Can we establish a hard simplifying constraint that `…`? However, in `…`. Thus, you need a function that maps the outer iteration index into an actual effective index:

```go
type ElemFunc func(idx int, tensors ...tensor.Indexed) // idx can be expanded by func into actual effective indexes

type NFunc func(tensors ...tensor.Indexed) int // gives the N to iterate over in VMap

func VMap(nfunc NFunc, elemFunc ElemFunc, tensors ...tensor.Indexed) {
	n := nfunc(tensors...)
	for i := range n {
		elemFunc(i, tensors...)
	}
}
```

By making everything so generic, there is a significant possibility of runtime errors from calling the wrong combination of funcs and tensors, which could be mitigated by `…`.

Next step: write all of the existing stats functions in the above framework and see how it looks!
-
This discussion is for the spec of math mode in `numbers`, which builds on the `cosh` shell.

## Math mode

`cosh` establishes two modes: Go vs. `exec`, with explicit `{ }` = go, backticks = exec, and implicit parsing rules to determine the mode. Math mode is always explicit, using `$ <expr> $`.

Hopefully, the inline form will be sufficient for integrating with standard Go mode control flow, as in: `…`

If we need more powerful math-mode augmented control flow expressions, we could add a `$$ <stmt> $$` syntax.

As in `cosh`, it is essential that all math-mode expressions are just transpiled into corresponding Go-syntax code, which can then either be compiled into Go programs, or run interactively using the yaegi interpreter.

## Two major domains: tensors and special math expressions
There are two major domains of special expression syntax: tensor (array) expressions and special math expressions and functions, each of which is discussed in separate sections below. In general the tensor syntax is backed by the core tensor package, and uses the `numpy` syntax where it is sensible.

Special math expressions include direct access to `cos`, `sin`, etc. functions, which are automatically transpiled to corresponding `math` package calls. The other major issue is default numerical types (e.g., default to `float64`), but also support for `float32` where needed.

## Also possible: more extended functional programming and automatic GPU execution
The Jax framework is the latest buzz in ML; it can speed up lots of code by adopting functional programming ideas that naturally map onto the GPU. We could potentially leverage gosl to provide the same kind of functionality. This would be a huge win.