
Random number generator improvements #2251

Closed
3 tasks done
victoriacity opened this issue Apr 8, 2021 · 0 comments · Fixed by #2297
Labels
feature request Suggest an idea on this project

Comments


victoriacity commented Apr 8, 2021

Concisely describe the proposed feature

A versatile and robust random number generator is essential for the accuracy of various simulation methods involving stochastic processes (e.g., Monte Carlo simulations). I am planning to extend the functionalities of random number generation in Taichi in terms of features below:

1. Float64 random number

The current implementation (0.7.15) generates a float32 random number and casts it to float64 when ti.random(dtype=ti.f64) is called. This approach gives rise to granularity problems, since only 32 bits of information are used to generate the random number. The issue can be demonstrated as follows:

import taichi as ti
import numpy as np

ti.init(arch=ti.cpu, default_fp=ti.f64)

n = int(2 ** 23)
x = ti.field(float, shape=(n,))

@ti.kernel
def test():
    for i in range(n):
        x[i] = ti.random(dtype=float)

test()
#x.from_numpy(np.random.rand(n))
rand = x.to_numpy()
diff = np.diff(np.sort(rand), 1)
assert np.min(diff) > 0

Since float32 has only 23 significand bits, generating 2^23 random float64 numbers by casting is all but guaranteed to produce duplicate values, which fails the test above. A proper float64 random number generator (simulated here by uncommenting the x.from_numpy() line) passes it.
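For reference, a common way to build a full-precision uniform float64 from a 64-bit random integer is to keep the top 53 bits (the float64 significand width, counting the implicit bit) and scale by 2^-53. A minimal NumPy sketch of the idea, independent of Taichi's runtime (the helper name is made up for illustration):

```python
import numpy as np

def u64_to_f64(bits):
    # Keep the top 53 bits of each 64-bit integer and scale by 2**-53,
    # so every float64 significand bit carries fresh entropy.
    return (bits >> np.uint64(11)) * 2.0 ** -53

rng = np.random.default_rng(0)
bits = rng.integers(0, 2 ** 64, size=2 ** 16, dtype=np.uint64)
rand = u64_to_f64(bits)

# The analogue of the granularity test above: all draws are distinct,
# which a cast-from-float32 generator could not sustain at scale.
diff = np.diff(np.sort(rand))
assert np.min(diff) > 0
```

The same bit manipulation carries over directly to the LLVM runtime once a 64-bit random integer is available.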

2. Gaussian random number

This has been raised in #2235. It would be a useful feature to directly provide functions in Taichi frontend to generate Gaussian random numbers, for example, ti.randn(). #2235 also suggested using ti.random(dist='normal'). I believe the choice between them will depend on whether we would like to implement sampling from more types of distributions.

3. Specification of random seed

The random seed in Taichi is currently initialized as a compile-time constant in each backend. On the same machine, running the same Taichi program multiple times produces (partly) identical random number sequences. This makes it difficult to run multiple simulations with the same configuration and gather statistics over the results. Below is a test that fails when two Taichi sessions produce the same random number sequence:

import taichi as ti
import numpy as np

ti.init(arch=ti.cpu,
        #random_seed=0
        )
n = 10
x = ti.field(float, shape=(n,))

@ti.kernel
def test():
    for i in range(n):
        x[i] = ti.random(dtype=float)

test()
rand1 = x.to_numpy()

ti.reset()
ti.init(arch=ti.cpu,
        #random_seed=1
        )
x = ti.field(float, shape=(n,))
test()
rand2 = x.to_numpy()

assert not np.allclose(rand1, rand2)

Describe the solution you'd like (if any)

1. Float64 random number [#2253]

  • ti.random(dtype=ti.f64) maps to f64 rand_f64(Context *context) in the LLVM runtime for the CPU and CUDA backends. This would be a very straightforward fix: compute the random f64 from rand_u64(context) instead of casting from a float32.

2. Gaussian random number

  • Since Gaussian random numbers are usually computed from uniform random numbers, this can be implemented in the Python frontend on top of ti.random(). I would suggest adding a random.py file to /python/taichi/lang that collects utility functions involving random numbers.
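As a sketch of what such a frontend utility might look like, the Box–Muller transform turns two uniforms into two independent standard normals. The names below are hypothetical, and random.random() stands in for ti.random() so the sketch runs without Taichi:

```python
import math
import random

def box_muller(u1, u2):
    # Box-Muller transform: map two uniforms in (0, 1] to two
    # independent standard normal samples.
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def randn():
    # Hypothetical frontend helper corresponding to ti.randn().
    u1 = 1.0 - random.random()  # shift [0, 1) into (0, 1] so log() is finite
    u2 = random.random()
    return box_muller(u1, u2)[0]

random.seed(0)
samples = [randn() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Whether to expose this as ti.randn() or as ti.random(dist='normal') is exactly the API question raised above; the transform itself is the same either way.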

3. Specification of random seed

  • This can be done by adding an int random_seed argument to the runtime_initialize() function and, correspondingly, exposing random_seed as a configuration option of ti.init(). The index argument of the initialize_rand_state call might also need some modification to prevent some threads in two separate Taichi "sessions" from ending up with the same random state.
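One common way to derive well-separated per-thread states from a single user seed is to fold the seed and the thread index together and run the result through a mixing function such as SplitMix64, which is bijective on 64-bit inputs and so maps distinct (seed, thread) pairs to distinct states. A Python sketch (the state-derivation scheme is an assumption, not the actual Taichi implementation):

```python
MASK64 = (1 << 64) - 1

def splitmix64(x):
    # SplitMix64 finalizer: scrambles a 64-bit input into a
    # well-distributed 64-bit output; a common choice for seeding
    # per-thread PRNG states.
    x = (x + 0x9E3779B97F4A7C15) & MASK64
    x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK64
    return x ^ (x >> 31)

def initial_state(random_seed, thread_id):
    # Hypothetical per-thread state derivation: combine the user seed
    # and the thread index before mixing, so different seeds yield
    # different state sequences even for the same thread index.
    return splitmix64((random_seed << 32) ^ thread_id)

states_a = [initial_state(0, i) for i in range(1024)]
states_b = [initial_state(1, i) for i in range(1024)]
assert len(set(states_a)) == 1024         # distinct threads, distinct states
assert not set(states_a) & set(states_b)  # different seeds never collide
```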

Additional comments

As discussed in #2235, it would be good to carefully choose an implementation of Gaussian (and perhaps also uniform) random number generation before coding. Below are some studies related to random number generation:

If the proposed implementation looks good, I will start working on the code and opening relevant pull requests.
