Concisely describe the proposed feature

A versatile and robust random number generator is essential for the accuracy of various simulation methods involving stochastic processes (e.g., Monte Carlo simulations). I am planning to extend Taichi's random number generation functionality with the features below:
1. Float64 random number
The current implementation (0.7.15) generates a float32 random number and casts it to float64 when ti.random(dtype=ti.f64) is called. This approach gives rise to granularity problems, since only 32 bits of information go into the result. The issue can be demonstrated with the following test:
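A minimal sketch of such a test (the field name x, the sample count, and the uniqueness check are my assumptions about the snippet):

```python
import taichi as ti
import numpy as np

ti.init(arch=ti.cpu)

N = 2 ** 23
x = ti.field(ti.f64, shape=N)

@ti.kernel
def fill():
    for i in range(N):
        x[i] = ti.random(dtype=ti.f64)

fill()
# Replacing the samples with a genuine f64 source makes the test pass:
# x.from_numpy(np.random.rand(N))
assert len(np.unique(x.to_numpy())) == N  # fails: casting yields at most ~2^24 distinct values
```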
Float32 has 23 significant (mantissa) bits, so generating 2^23 random float64 numbers by casting is essentially guaranteed to produce duplicate values, which fails the test. A proper float64 random number generator (obtained by uncommenting the x.from_numpy() line above) will pass it.
2. Gaussian random number
This has been raised in #2235. It would be useful to provide functions in the Taichi frontend that directly generate Gaussian random numbers, for example ti.randn(). #2235 also suggested ti.random(dist='normal'). The choice between the two will likely depend on whether we want to support sampling from more distribution types.
3. Specification of random seed
The random seed in Taichi is currently initialized as a backend compile-time constant. On the same machine, running the same Taichi program multiple times will produce (partly) identical random number sequences. This makes it difficult to run multiple simulations with the same configuration and gather statistics over the results. Below is a test that fails when two Taichi sessions produce the same random number sequence:
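A sketch of such a test, assuming each "session" is a separate interpreter launched as a subprocess (the script body and sample count are illustrative):

```python
import subprocess
import sys

# Each session initializes Taichi from scratch and draws a few samples.
script = '''
import taichi as ti
ti.init(arch=ti.cpu)

@ti.kernel
def sample() -> ti.f32:
    return ti.random()

print([sample() for _ in range(8)])
'''

out1 = subprocess.check_output([sys.executable, "-c", script])
out2 = subprocess.check_output([sys.executable, "-c", script])
assert out1 != out2  # fails while the seed is a compile-time constant
```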
Describe the solution you'd like (if any)

1. Float64 random number [#2253]

ti.random(dtype=ti.f64) is implemented for the CPU and CUDA backends at f64 rand_f64(Context *context) in the LLVM runtime. This would be a very straightforward fix: calculate the random f64 from rand_u64(context) instead.
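For illustration, a common way to build an f64 in [0, 1) from a 64-bit integer is to keep the top 53 bits (the f64 mantissa width) and scale them, sketched here in Python rather than the runtime's implementation language:

```python
def rand_f64_from_u64(u: int) -> float:
    # Keep the 53 most significant bits of the 64-bit value and
    # scale by 2^-53 to land uniformly in [0, 1).
    return (u >> 11) * (1.0 / (1 << 53))
```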
2. Gaussian random number
Since Gaussian random numbers are usually computed from uniform ones, this can be implemented in the Python frontend on top of ti.random(). I would suggest adding a random.py file to /python/taichi/lang that collects utility functions involving random numbers.
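A minimal sketch of such a utility (the name randn and the choice of Box-Muller are assumptions, not a settled API):

```python
import math
import taichi as ti

@ti.func
def randn():
    # Box-Muller transform: turn two uniform samples into one standard
    # normal sample. A production version should guard against u1 == 0.
    u1 = ti.random()
    u2 = ti.random()
    return ti.sqrt(-2.0 * ti.log(u1)) * ti.cos(2.0 * math.pi * u2)
```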
3. Specification of random seed
This can be done by adding an int random_seed argument to the runtime_initialize() function and correspondingly setting up and exporting random_seed as a configuration option of ti.init(). The index argument of the initialize_rand_state call might also need some modification to prevent threads in two separate Taichi "sessions" from ending up with the same random state.
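On the user side, the proposed (not yet existing) interface could look like:

```python
import taichi as ti

# Fixing the seed makes separate runs reproducible; varying it
# decorrelates repeated simulations with the same configuration.
ti.init(arch=ti.cpu, random_seed=42)
```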
Additional comments
As discussed in #2235, it would be good to carefully choose an implementation for Gaussian (and perhaps also uniform) random number generation before coding. Below are some studies related to random number generation:
GPU Gems 3 (which suggests the Box-Muller transform as the most GPU-friendly approach for Gaussian distributions)
Dividing Integers: a Case Study
If the proposed implementation looks good, I will start working on the code and open the relevant pull requests.