Gridap and solvers from DifferentialEquations.jl #962

Open · JanSuntajs opened this issue Dec 6, 2023 · 10 comments

@JanSuntajs

Hello,

I am using Gridap to solve a coupled system of differential and algebraic equations (DAE). Namely, I am dealing with the
diffusion equation in which the concentration field is coupled to some external fields that change on time scales much faster
than that of the diffusive problem. Currently, I discretize the time propagation explicitly and use an iterative scheme to advance the solution by a timestep $\Delta t$. It should also be evident from the above that the problem is a multiphysics one.

For speed and convergence reasons, I would like to use a specialized DAE solver instead, such as the ones available in
the DifferentialEquations.jl suite. Is there a way to achieve this in Gridap?
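
Schematically, what I am after is something like the following (a sketch of the DifferentialEquations.jl DAE interface; res! stands for a residual wrapper that Gridap would need to provide, and du0, u0, tspan and diff_vars are placeholders):

using Sundials  # re-exports the DAEProblem/solve interface

# res!(out, du, u, p, t): in-place DAE residual coming from the FE problem
prob = DAEProblem{true}(res!, du0, u0, tspan, differential_vars = diff_vars)
sol = solve(prob, IDA())  # IDA is Sundials' implicit DAE solver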

I stumbled upon a similar issue in GridapODEs, where an attempt at a solution was made within the module DiffEqsWrappers. Could I achieve the desired outcome if I went along the same lines? I've noticed that since the integration of GridapODEs into Gridap, the module DiffEqsWrappers is no longer used:

"""

The exported names are
$(EXPORTS)
"""
module ODEs

using DocStringExtensions

include("ODETools/ODETools.jl")

include("TransientFETools/TransientFETools.jl")

# include("DiffEqsWrappers/DiffEqsWrappers.jl")

end #module

const GridapODEs = ODEs

Regards,
Jan

@JordiManyer
Member

@JanSuntajs Have you tried replicating/running this test?

@JanSuntajs
Author

JanSuntajs commented Dec 7, 2023

@JordiManyer thanks for the suggestion. I have not yet tried replicating or running the test; however,
I am fairly sure it will throw an UndefVarError no later than line 8:

using Gridap.ODEs.DiffEqWrappers

This relates to my original post: the module ODEs no longer includes
the file with DiffEqsWrappers. I am using Gridap v0.17.20.

Edit: perhaps I misunderstood your suggestion initially. I will look into the contents of the
DiffEqsWrappers module and then try to produce its working implementation within my
project.

@JordiManyer
Member

JordiManyer commented Dec 7, 2023

@JanSuntajs I am using the master branch of Gridap, and the DiffEqsWrappers module still exists. I don't know if it works, since the test file I pointed at does not run within our CI tests.

@JanSuntajs
Author

JanSuntajs commented Dec 7, 2023

@JordiManyer I am also using the master branch, and while the DiffEqsWrappers module still exists, it is not included in the ODEs.jl file:

"""

The exported names are
$(EXPORTS)
"""
module ODEs

using DocStringExtensions

include("ODETools/ODETools.jl")

include("TransientFETools/TransientFETools.jl")

# include("DiffEqsWrappers/DiffEqsWrappers.jl")

end #module

const GridapODEs = ODEs

@JanSuntajs
Author

JanSuntajs commented Jan 9, 2024

@JordiManyer, I have now tried running said test.

Since I could not include DiffEqsWrappers.jl for the reasons mentioned
above, I copied its contents into my own project. The content of the module
is as follows:

# """

# The exported names are
# $(EXPORTS)
# """
module DiffEqWrappers

using Test

using Gridap.ODEs.TransientFETools: TransientFEOperator

using Gridap.ODEs.ODETools: allocate_cache
using Gridap.ODEs.ODETools: update_cache!
using Gridap.ODEs.ODETools: residual!
using Gridap.ODEs.ODETools: jacobians!
using Gridap.ODEs.ODETools: jacobian!

using Gridap.Algebra: allocate_jacobian

using Gridap.FESpaces: get_algebraic_operator
using LinearAlgebra: fillstored!

export prototype_jacobian
export prototype_mass
export prototype_stiffness

export diffeq_wrappers

"""
  This method takes a `FEOperator` and returns some methods that can be used
  in `DifferentialEquations.jl`. Assuming we want to solve the nonlinear ODE
  `res(t,u,du) = 0`, we return:
  1. `residual!(res, du, u, p, t)`: It returns the residual (in `res`) at
     `(u,du,t)`, following the signature in `DifferentialEquations`.
     For the moment, we do not support parameters.
  2. `jacobian!(jac, du, u, p, gamma, t)`: Idem for the Jacobian. It returns
     `∂res/∂du*γ + ∂res/∂u`
  3. `mass!(mass, du, u, p, t)`: Idem for the mass matrix. It returns
     `∂res/∂du`
  4. `stiffness!(stif, du, u, p, t)`: Idem for the stiffness matrix. It returns
     `∂res/∂u`
"""
function diffeq_wrappers(op)

    ode_op = get_algebraic_operator(op)
    ode_cache = allocate_cache(ode_op)

    function _residual!(res, du, u, p, t)
        # TODO (minor): improve update_cache! so that it does nothing when t
        # matches the time already stored in the cache; currently the update
        # runs twice per step (once for the residual, once for the jacobian)
        ode_cache = update_cache!(ode_cache, ode_op, t)
        residual!(res, ode_op, t, (u, du), ode_cache)
    end

    function _jacobian!(jac, du, u, p, gamma, t)
        ode_cache = update_cache!(ode_cache, ode_op, t)
        z = zero(eltype(jac))
        fillstored!(jac, z)
        jacobians!(jac, ode_op, t, (u, du), (1.0, gamma), ode_cache)
    end

    function _mass!(mass, du, u, p, t)
        ode_cache = update_cache!(ode_cache, ode_op, t)
        z = zero(eltype(mass))
        fillstored!(mass, z)
        # derivative index 2 selects ∂res/∂du (the mass contribution)
        jacobian!(mass, ode_op, t, (u, du), 2, 1.0, ode_cache)
    end

    function _stiffness!(stif, du, u, p, t)
        ode_cache = update_cache!(ode_cache, ode_op, t)
        z = zero(eltype(stif))
        fillstored!(stif, z)
        # derivative index 1 selects ∂res/∂u (the stiffness contribution)
        jacobian!(stif, ode_op, t, (u, du), 1, 1.0, ode_cache)
    end

    return _residual!, _jacobian!, _mass!, _stiffness!

end

"""
  It allocates the Jacobian (or mass or stiffness) matrix, given the `FEOperator`
  and a vector of size total number of unknowns
"""
function prototype_jacobian(op::TransientFEOperator, u0)
    ode_op = get_algebraic_operator(op)
    ode_cache = allocate_cache(ode_op) # Not acceptable in terms of performance
    return allocate_jacobian(ode_op, u0, ode_cache)
end

const prototype_mass = prototype_jacobian

const prototype_stiffness = prototype_jacobian

end #module

Then, I try running the test, which returns an error at the last line in the code below:

using Gridap
using Gridap.ODEs
using Gridap.ODEs.ODETools
using Gridap.ODEs.TransientFETools
using MyProject.DiffEqWrappers

# using DifferentialEquations
# using Sundials
using Gridap.Algebra: NewtonRaphsonSolver
using Base.Iterators

# FE problem (heat eq) using Gridap
function fe_problem(u, n)

  f(t) = x -> ∂t(u)(x, t) - Δ(u(t))(x)

  domain = (0, 1, 0, 1)
  partition = (n, n)
  model = CartesianDiscreteModel(domain, partition)

  order = 1

  reffe = ReferenceFE(lagrangian,Float64,order)
  V0 = FESpace(
    model,
    reffe,
    conformity = :H1,
    dirichlet_tags = "boundary",
  )
  U = TransientTrialFESpace(V0, u)

  Ω = Triangulation(model)
  degree = 2 * order
  dΩ = Measure(Ω, degree)

  a(u, v) = ∫( ∇(v) ⋅ ∇(u) )dΩ
  b(v, t) = ∫( v * f(t) )dΩ
  m(u, v) = ∫( v * u )dΩ

  res(t, u, v) = a(u, v) + m(∂t(u), v) - b(v, t)
  jac(t, u, du, v) = a(du, v)
  jac_t(t, u, dut, v) = m(dut, v)

  op = TransientFEOperator(res, jac, jac_t, U, V0)

  U0 = U(0.0)
  uh0 = interpolate_everywhere(u(0.0), U0)
  u0 = get_free_dof_values(uh0)

  return op, u0

end

# Solving the heat equation using Gridap.ODEs and DiffEqs
tspan = (0.0, 1.0)

u(x, t) = t
u(t) = x -> u(x, t)

# ISSUE 1: When I choose n > 2, even though the problem we will solve is
# linear, the Sundials solver seems to have convergence issues in the
# nonlinear solver (?). It returns errors
# [IDAS ERROR]  IDACalcIC Newton/Linesearch algorithm failed to converge.
# ISSUE 2: When I pass `jac_prototype`, the code gets stuck

n = 3 # cells per dim (2D)
op, u0 = fe_problem(u,n)
res!, jac!, mass!, stif! = diffeq_wrappers(op)
J = prototype_jacobian(op,u0)

The error relates to the function allocate_jacobian(...) that is being called inside prototype_jacobian(...) and is as follows:
(screenshot of the error thrown by allocate_jacobian omitted)

Has the interface for allocate_jacobian(...) changed since this was last working, and how should I go about fixing it?

These are the packages I am currently using in my project:
(screenshot of the `pkg> status` output omitted)

Regards,
Jan

@JanSuntajs
Author

@JordiManyer I got back to this problem after some time and here's what I found:

I can get the test example working, but only after commenting out the
line

J = prototype_jacobian(op, u0)

Then, in the f_iip definition, I do not pass the Jacobian, hence

# # To explore the Sundials solver options, e.g., BE with fixed time step dt
f_iip = DAEFunction{true}(res!)#, jac_prototype=J)

I suppose I need an appropriate interface to pass the assembled Jacobian to the f_iip
function, but I am unsure which one to pick. Any suggestions would be much appreciated.
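
Presumably something along these lines, following the SciML convention in which the user-supplied DAE Jacobian has the signature jac(J, du, u, p, gamma, t), matching the _jacobian! wrapper above (a sketch, assuming prototype_jacobian can be made to work):

# pass the analytical Jacobian together with its sparsity prototype
J = prototype_jacobian(op, u0)
f_iip = DAEFunction{true}(res!; jac = jac!, jac_prototype = J)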

Regards,
Jan

@JanSuntajs
Author

JanSuntajs commented Feb 27, 2024

To offer an update on my progress with this issue, I am appending an MWE of a working script that simulates diffusion on a sphere, which is initially filled uniformly through an incoming boundary flux until the filling is stopped altogether:

"""
A script providing a minimal working example showcasing simple
diffusion simulated using both `Gridap` and `Sundials` solvers.

"""

# using MKL
import Sundials as snd


# Gridap-based tools
using Gridap
import GridapGmsh as ggmsh
import Gridap.Geometry as ggeo
using Gridap.ODEs
using Gridap.ODEs.ODETools
using Gridap.ODEs.TransientFETools

include("./DiffEqWrappers.jl")
using .DiffEqWrappers: diffeq_wrappers, prototype_jacobian

const meshname = "./sphere_shape_1.0_lc_0.1_.msh"
const dtensor = TensorValue([1. 0. 0.; 0. 1. 0.; 0. 0. 1.])

# timespan
t₀ = 0.
tf = 10
tfill = 5

# boundary flux
function bflux(x, t::Real)

    if (t <= tfill)

        return 1. / (3 * tfill)
    else
        return 0.
    end
end # bflux

bflux(t::Real) = x -> bflux(x, t)

function weak_form(Ω, dΩ, dΓ,  n_Γ, D, flux)

    # chemical residual part
    res(t, w, z) = ∫( z * ∂t(w) + (D ⋅ ∇(w)) ⋅ ∇(z) )dΩ - ∫( flux(t) * z )dΓ
    # jacobians
    jac(t, w, dw, z) = ∫( (D ⋅ ∇(dw)) ⋅ ∇(z) )dΩ
    jac_t(t, w, dwₜ, z) = ∫( dwₜ * z )dΩ

    return res, jac, jac_t
end

# initial conc
cd0(x) = 0.

function main()

    # ------------------------------------
    #
    # IMPORT MODEL
    # 
    # ------------------------------------
    model = DiscreteModelFromFile(meshname)

    # -------------------------------------
    #
    # DEFINE SPACES
    #
    # -------------------------------------
    refFE = ReferenceFE(lagrangian,Float64,1)
    V = TestFESpace(model, refFE, conformity=:H1)  # no Dirichlet tags

    U = TransientTrialFESpace(V)  # no Dirichlet data; could pass g1_c here
    
    order = 1
    degree = 2 * order
    Ω = Triangulation(model)
    dΩ = Measure(Ω, degree)

    neumann_tags = ["Casing"] #["Casing", "Left edge", "Right edge"]
    Γ = BoundaryTriangulation(model, tags=neumann_tags)
    dΓ = Measure(Γ, degree)
    n_Γ = get_normal_vector(Γ)

    # -------------------------------------
    #
    # DEFINE OPERATORS
    #
    # -------------------------------------
    res, jac, jac_t = weak_form(Ω, dΩ, dΓ, n_Γ, dtensor,  bflux)
    op = TransientFEOperator(res,jac,jac_t,U,V)
    c0 = interpolate_everywhere(cd0, U(0.))

    csnd = get_free_dof_values(c0)

    res!, jac!, _, _ = diffeq_wrappers(op)

    
    J_ = prototype_jacobian(op, csnd)
    r = copy(csnd)
    # sanity check: evaluate the residual and the θ-weighted Jacobian once
    θ = 0.1; dt = 0.1; tθ = 0.0; dtθ = dt * θ
    res!(r, csnd, csnd, [], tθ)
    jac!(J_, csnd, csnd, [], (1 / dtθ), tθ)
    tspan = (t₀, tf)

    # ----------------------------------------------------
    #
    #
    # SUNDIALS phase
    #   
    # 
    # # To explore the Sundials solver options, e.g., BE with fixed time step dt
    # println(cusnd)
    println("Entering solver phase!")

    diff_vars = fill(true, length(csnd))
    f_iip = snd.DAEFunction{true}(res!;jac_prototype=J_, jac=jac!) 
    # 
    prob_iip = snd.DAEProblem{true}(f_iip, csnd, csnd, tspan, (), differential_vars=diff_vars);
    sol_iip = snd.solve(prob_iip, snd.IDA(linear_solver=:KLU, init_all=false); dtmax=1., 
    abstol=1e-06, reltol=1e-8,  saveat=collect(t₀:0.1:tf), tstops=[tfill])

    # -------------------------------------
    #
    # SAVING RESULTS
    #
    # -------------------------------------

    println("Saving results")


    savename_result = "mwe_result"
    savefolder = "./results"
    if !ispath(savefolder)
        mkpath(savefolder)
    end
    
    createpvd("$savefolder/$(savename_result)") do pvd
        for i in eachindex(sol_iip.t)
        # println(i)
        t = sol_iip.t[i]
        cₕ = FEFunction(U(t), sol_iip.u[i], get_dirichlet_dof_values(U(t)))
        pvd[t] = createvtk(Ω, "$(savefolder)/$(savename_result)_t_$t.vtu",
            cellfields=["ch" => cₕ, "gradc" => ∇(cₕ), "flux" => -dtensor ⋅ ∇(cₕ)
            ])

        end
    end
end # main

if abspath(PROGRAM_FILE) == abspath(@__FILE__)
    main()
end 

I am also interested in whether I am doing the interpolation back to the computational grid in the saving phase in an optimal manner; specifically, I am referring to this line:

cₕ = FEFunction(U(t), sol_iip.u[i], get_dirichlet_dof_values(U(t)))

The additional files (the .msh file and the file defining the wrappers for some functions from the DifferentialEquations.jl library) can be found here, along with an animation of the results:
(attachment with the files and animation omitted)

Edit: after fixing some issues with the interpolation of results in the multiphysics case, it seems the same approach extends to a transient multiphysics case as well. If anyone is interested, I can prepare an MWE for that case too, but it is indeed quite straightforward once the operators are defined (see the sketch below).
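
For reference, the extension is essentially the following (a sketch, assuming the individual trial/test spaces and the combined residual and jacobians are already defined; TransientMultiFieldFESpace is from Gridap.ODEs.TransientFETools in v0.17):

# the same wrappers apply once the transient operator is built
# on a multi-field space
Y  = MultiFieldFESpace([V1, V2])           # test spaces
X  = TransientMultiFieldFESpace([U1, U2])  # transient trial spaces
op = TransientFEOperator(res, jac, jac_t, X, Y)
res!, jac!, _, _ = diffeq_wrappers(op)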

@JanSuntajs
Author

@JordiManyer @santiagobadia @fverdugo

In light of the recent refactoring of the ODE module, I would like to know how to get my above code (working with Gridap v0.17.23) to work with Gridap v0.18.0 and later.

Specifically, I am interested in the current analogs of the following functions:

using Gridap.ODEs.TransientFETools: TransientFEOperator

using Gridap.ODEs.ODETools: allocate_cache
using Gridap.ODEs.ODETools: update_cache!
using Gridap.ODEs.ODETools: residual!
using Gridap.ODEs.ODETools: jacobians!
using Gridap.ODEs.ODETools: jacobian!

Regards,

Jan

@JordiManyer
Member

@AlexandreMagueresse

@AlexandreMagueresse
Collaborator

@JanSuntajs

In the new version of the ODE module, the residuals and jacobians are no longer evaluated at the TransientFEOperator level, but at the ODEOperator level (the ODE operator is the algebraic counterpart of the FE operator). Here are the new locations and signatures of the functions above:

  • allocate_cache: allocate_odeopcache(odeop::ODEOperator, t::Real, us::Tuple{Vararg{AbstractVector}}, args...).
  • update_cache!: update_odeopcache!(odeopcache, odeop::ODEOperator, t::Real, args...).
  • residual!: residual!(r::AbstractVector, odeop::ODEOperator, t::Real, us::Tuple{Vararg{AbstractVector}}, odeopcache) overloads Algebra.residual!.
  • jacobians! has been removed, and replaced by jacobian! with appropriate weighting factors (see below).
  • jacobian!: jacobian!(J::AbstractMatrix, odeop::ODEOperator, t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, odeopcache) overloads Algebra.jacobian!. By default, it calls jacobian_add!, which has the signature jacobian_add!(J::AbstractMatrix, odeop::ODEOperator, t::Real, us::Tuple{Vararg{AbstractVector}}, ws::Tuple{Vararg{Real}}, odeopcache). Here ws are factors that weight the time derivatives (in increasing order, i.e. first-order time derivative first).

The best way to get a feel for how these functions work together is to have a look at ODEOpsFromTFEOp. You can also read the documentation of the ODE module, in particular its low-level implementation.
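
For concreteness, here is a minimal, untested sketch of how the old DiffEq-style wrappers could be rebuilt on top of these functions (the import paths, the `(u, du)` ordering, and the placeholders `op`, `t0`, `u0` are assumptions based on the list above):

using Gridap.FESpaces: get_algebraic_operator
using Gridap.ODEs: allocate_odeopcache, update_odeopcache!
using Gridap.Algebra: residual!, jacobian!
using LinearAlgebra: fillstored!

odeop = get_algebraic_operator(op)  # op is the TransientFEOperator
odeopcache = allocate_odeopcache(odeop, t0, (u0, u0))  # (u, u̇) placeholders

function _residual!(r, du, u, p, t)
    update_odeopcache!(odeopcache, odeop, t)
    residual!(r, odeop, t, (u, du), odeopcache)
end

function _jacobian!(J, du, u, p, gamma, t)
    update_odeopcache!(odeopcache, odeop, t)
    fillstored!(J, zero(eltype(J)))
    # ws weights the derivatives in increasing order:
    # 1 for ∂res/∂u, gamma for ∂res/∂(∂ₜu)
    jacobian!(J, odeop, t, (u, du), (1, gamma), odeopcache)
end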
