Fix inconsistency with 1 dof #55

Merged · 2 commits · Dec 31, 2023
19 changes: 7 additions & 12 deletions src/vegas/montecarlo.jl
@@ -31,19 +31,19 @@ end
This function implements the Vegas algorithm, a Monte Carlo method specifically designed for multi-dimensional integration. The underlying principle and methodology of the algorithm can be explored further in the [Vegas documentation](https://vegas.readthedocs.io/en/latest/background.html#importance-sampling).

# Overview
The Vegas algorithm employs an importance sampling scheme. For a one-dimensional integral with the integrand ``f(x)``, the algorithm constructs an optimized distribution ``\\rho(x)`` that approximates the integrand as closely as possible (a strategy known as the Vegas map trick; refer to [`Dist.Continuous`](@ref) for more details).

The variable ``x`` is then sampled using the distribution ``\\rho(x)``, and the integral is estimated by averaging the estimator ``f(x)/\\rho(x)``.
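
As a concrete illustration of this estimator (a minimal sketch; the integrand, density, and sampler below are invented for the example, not taken from this package):
```julia
# Importance sampling for ∫₀¹ f(x) dx with f(x) = exp(-10x).
# ρ is a normalized density proportional to f, sampled by inverse CDF.
f(x) = exp(-10x)
ρ(x) = 10 * exp(-10x) / (1 - exp(-10))
sample_ρ() = -log(1 - rand() * (1 - exp(-10))) / 10

n = 10^5
estimate = sum((x -> f(x) / ρ(x))(sample_ρ()) for _ in 1:n) / n
# f/ρ is constant here, so the estimator has essentially zero variance:
# estimate ≈ (1 - exp(-10)) / 10 ≈ 0.0999955
```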

# Note
- If there are multiple integrals, all of them are sampled and measured at each Monte Carlo step.
- This algorithm is particularly efficient for low-dimensional integrations but might be less efficient and robust than the Markov-chain Monte Carlo solvers for high-dimensional integrations.


# Arguments
- `integrand`: A user-defined function evaluating the integrand. The function should be either `integrand(var, config)` or `integrand(var, weights, config)`, depending on whether `inplace` is `false` or `true`, respectively. Here, `var` are the random variables and `weights` is an output array to store the calculated weights. The last parameter passes the MC `Configuration` struct to the integrand, so that the user has access to `userdata`, etc.

- `measure`: An optional user-defined function to accumulate the integrand weights into the observable. The function signature should be `measure(var, obs, relative_weights, config)`. Here, `obs` is a vector of observable values for each component of the integrand and `relative_weights` are the weights calculated from the integrand multiplied by the probability of the corresponding variables.

The following are snippets of the `integrand` and `measure` functions:
```julia
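# Illustrative sketch only — the PR's actual snippet is collapsed in this diff view.
# Non-inplace form: `var` holds the variable pool(s); return one weight per integral.
function integrand(var, config)
    x = var[1]              # current sample of the (single) continuous pool
    return (exp(-x^2), x^2) # one weight per integrand component
end

# Optional accumulator matching the documented `measure` signature.
function measure(var, obs, relative_weights, config)
    for i in eachindex(obs)
        obs[i] += relative_weights[i]  # fold each weight into its observable
    end
end
```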
@@ -104,7 +104,7 @@ function montecarlo(config::Configuration{Ni,V,P,O,T}, integrand::Function, neva
@assert (config.observable isa AbstractVector) && (length(config.observable) == config.N) && (eltype(config.observable) == T) "the default measure can only handle observable as Vector{$T} with $(config.N) elements!"
end
############## initialization ################################
# don't forget to initialize the variables
for var in config.var
Dist.initialize!(var, config)
end
@@ -138,14 +138,9 @@ function montecarlo(config::Configuration{Ni,V,P,O,T}, integrand::Function, neva
# weights = @_expanded_integrand(config, integrand, 1) # very fast, but requires explicit N
# weights = integrand_wrap(config, integrand) #make a lot of allocations
        if inplace
-           (fieldcount(V) == 1) ? integrand(config.var[1], weights, config) : integrand(config.var, weights, config)
+           integrand((isone(fieldcount(V)) ? config.var[1] : config.var), weights, config)
        else
-           if Ni == 1 # we want the output weights stored in a vector even if there is only one element
-               weights[1] = (fieldcount(V) == 1) ? integrand(config.var[1], config) : integrand(config.var, config)
-           else
-               weights = (fieldcount(V) == 1) ? integrand(config.var[1], config) : integrand(config.var, config)
-           end
-           @assert typeof(weights) <: Tuple || typeof(weights) <: AbstractVector "the integrand should return a vector with $(Ni) elements, but it returns a vector elements! $(typeof(weights)))"
+           weights .= integrand((isone(fieldcount(V)) ? config.var[1] : config.var), config)
        end

# println("before: ", weights, "with jac = ", jac)
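The substantive change in this file (and the matching one in `src/vegas_mc/montecarlo.jl` below) is that the old non-inplace branch rebound the local name `weights` to whatever the integrand returned, while the new code broadcasts the return value into the existing buffer with `.=`, which also accepts a scalar or a one-element vector when there is only one integral. A minimal sketch of the rebinding-versus-broadcast distinction, independent of this package:
```julia
buf = zeros(2)

rebind!(w) = (w = [1.0, 2.0]; nothing)   # `=` rebinds the local name; caller unaffected
assign!(w) = (w .= [1.0, 2.0]; nothing)  # `.=` copies elementwise into the caller's array

rebind!(buf); @show buf   # buf = [0.0, 0.0]
assign!(buf); @show buf   # buf = [1.0, 2.0]
```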
25 changes: 10 additions & 15 deletions src/vegas_mc/montecarlo.jl
@@ -5,12 +5,12 @@

This algorithm combines Vegas with Markov-chain Monte Carlo.
For multiple integrands involving multiple variables, it finds the best distribution
ansatz to fit them all together. In addition to the original integral, it also
introduces a normalization integral with integrand ~ 1.
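
One way to read the role of this normalization integral (a standard ratio estimate, stated here as background rather than quoted from this file): if ``g(x)`` denotes the normalization integrand with known integral 1 and ``p(x)`` the joint sampling distribution, each integral is recovered from the Markov-chain samples ``x_t`` as
```math
I_i \approx \frac{\sum_{t=1}^{M} f_i(x_t)/p(x_t)}{\sum_{t=1}^{M} g(x_t)/p(x_t)},
```
so any unknown overall normalization of ``p`` cancels between numerator and denominator.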

Assume we want to calculate the integrals ``f_1(x)`` and ``f_2(x, y)``, where x, y are two different types of variables.
The algorithm will try to learn distributions ``\\rho_x(x)`` and ``\\rho_y(y)`` so that ``f_1(x)/\\rho_x(x)`` and ``f_2(x, y)/\\rho_x(x)/\\rho_y(y)``
are as flat as possible.

The algorithm then samples the variables x and y with a joint distribution using the Metropolis-Hastings algorithm,
```math
@@ -23,8 +23,8 @@ This algorithm reduces to the vanilla Vegas algorithm by setting ``r_0 = 1`` and

NOTE: If there are multiple integrals, all of them are sampled and measured at each Markov-chain Monte Carlo step!

This algorithm is as efficient as the Vegas algorithm for low-dimensional integration, and
tends to be more robust than the Vegas algorithm for high-dimensional integration.

# Arguments
- `integrand` : User-defined function with the following signature:
@@ -68,7 +68,7 @@ Integral 1 = 0.6640840471808533 ± 0.000916060916265263 (reduced chi2 = 0.945)
This function applies a Markov-chain Monte Carlo (MCMC) technique combined with the Vegas algorithm to compute integrals. In addition to calculating the original integrals, it also introduces a normalization integral with an integrand ~ 1, which enhances the efficiency and robustness of high-dimensional integration tasks.

# Overview
Given multiple integrands involving multiple variables, the algorithm finds the best distribution ansatz that fits all integrands together. For instance, suppose we want to calculate two integrals: ``f_1(x)`` and ``f_2(x, y)``, where ``x`` and ``y`` are two different types of variables. The algorithm learns distributions ``\\rho_x(x)`` and ``\\rho_y(y)`` such that ``f_1(x)/\\rho_x(x)`` and ``f_2(x, y)/\\rho_x(x)/\\rho_y(y)`` are as flat as possible.

Then, it samples variables ``x`` and ``y`` using the Metropolis-Hastings algorithm with a joint distribution `p(x, y)`,
```math
@@ -83,7 +83,7 @@ The algorithm defaults to the standard Vegas algorithm if ``r_0 = 1`` and ``r_{i
# Arguments
- `integrand`: A user-defined function evaluating the integrand. The function should be either `integrand(var, config)` or `integrand(var, weights, config)`, depending on whether `inplace` is `false` or `true`, respectively. Here, `var` are the random variables and `weights` is an output array to store the calculated weights. The last parameter passes the MC `Configuration` struct to the integrand, so that the user has access to `userdata`, etc.

- `measure`: An optional user-defined function to accumulate the integrand weights into the observable. The function signature should be `measure(var, obs, relative_weights, config)`. Here, `obs` is a vector of observable values for each component of the integrand and `relative_weights` are the weights calculated from the integrand multiplied by the probability of the corresponding variables.

The following are snippets of the `integrand` and `measure` functions:
```julia
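# Illustrative sketch only — the PR's actual snippet is collapsed in this diff view.
# Inplace form (`inplace = true`): write into the preallocated `weights` buffer.
function integrand(var, weights, config)
    x = var[1]              # current sample of the (single) continuous pool
    weights[1] = exp(-x^2)  # first integrand component
    weights[2] = x^2        # second integrand component
    return
end
```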
@@ -153,14 +153,9 @@ function montecarlo(config::Configuration{N,V,P,O,T}, integrand::Function, neval
end

        if inplace
-           (length(config.var) == 1) ? integrand(config.var[1], _weights, config) : integrand(config.var, _weights, config)
+           integrand((isone(length(config.var)) ? config.var[1] : config.var), _weights, config)
        else
-           if N == 1
-               _weights[1] = (length(config.var) == 1) ? integrand(config.var[1], config) : integrand(config.var, config)
-           else
-               _weights = (length(config.var) == 1) ? integrand(config.var[1], config) : integrand(config.var, config)
-           end
-           @assert length(_weights) == N "the integrand should return a vector with $(N) elements, but it returns a vector with $(length(_weights)) elements! $(typeof(_weights)))"
+           _weights .= integrand((isone(length(config.var)) ? config.var[1] : config.var), config)
        end

padding_probability .= [Dist.padding_probability(config, i) for i in 1:N+1]
@@ -194,8 +189,8 @@ function montecarlo(config::Configuration{N,V,P,O,T}, integrand::Function, neval
if debug && (isfinite(probability) == false)
@warn("integrand probability = $(probability) is not finite at step $(neval)")
end
# WARNING: Don't turn it on, because some integral may actually vanish (for example, circle area)
# if debug && (all(x -> isfinite(x), weights))
# @warn("integrand = $(weights) is not all finite at step $(neval)")
# end

6 changes: 6 additions & 0 deletions test/interface_tests.jl
@@ -0,0 +1,6 @@
using MCIntegration

integrand = (x, c) -> [1.0]
vars = Continuous(0, 1)
dof = [(1,)]
integrate(integrand; dof, vars)
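
A hedged reading of this regression test: with a single variable pool and `dof = [(1,)]`, the integrand is called as `integrand(vars, config)` and returns a one-element vector, the exact combination the broadcast fix above makes work. A minimal sketch of running it interactively (the printed summary is assumed, not shown in this diff):
```julia
using MCIntegration

integrand = (x, c) -> [1.0]   # constant integrand returned as a 1-element vector
result = integrate(integrand; dof = [(1,)], vars = Continuous(0, 1))
println(result)               # the integral over [0, 1] should come out ≈ 1
```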
1 change: 1 addition & 0 deletions test/runtests.jl
@@ -39,6 +39,7 @@ if isempty(ARGS)
include("bubble.jl")
include("bubble_FermiK.jl")
include("mpi.jl")
include("interface_tests.jl")
else
include(ARGS[1])
end