# Implementing another kernel and curiosity about derivative=gradient aspect #11
Hi Ahmed, thank you for your nice message and your good work on implementing a different kernel! To address your questions and points:
Please note that I defined the kernels so far to have u = r/h, with the compact support inside u ≤ 1.

I found some performance improvements for your gradient computation:

```julia
function kernel_grad_2D(kernel::DSPH_Quintic{T}, u::Real, h_inv::Real,
                        x::AbstractArray, y::AbstractArray, r::Real) where T
    n = kernel.norm_2D * h_inv^2      # 2D normalisation
    if u < 2
        u_d1 = 5/8
        u_d2 = u - 2
        dWdq = n * u_d1 * u * (u_d2 * u_d2 * u_d2) |> T   # normalisation applied here
    else
        # outside the compact support the gradient vanishes;
        # zero(T) keeps the return type stable
        return zero(T), zero(T)
    end
    # calculate the result for each direction
    x_diff = x[1] - y[1]
    y_diff = x[2] - y[2]
    tmp = dWdq * r * h_inv
    Wgx = tmp * x_diff |> T
    Wgy = tmp * y_diff |> T
    # returning as Tuple instead of Array avoids 1 allocation;
    # doing the type conversion only once gets rid of the other allocation
    return Wgx, Wgy
end
```
```julia
# BenchmarkTools:
@btime Wgx, Wgy = kernel_grad_2D($k, $u2, $h_inv, $x, $y, $r)
# 4.870 ns (0 allocations: 0 bytes)
```

I like your idea of having an actual gradient implementation in the package!

```julia
function kernel_grad_2D(k::SPHKernel, h_inv::Real, pos1::Vector{<:Real}, pos2::Vector{<:Real})
    x_diff = pos1[1] - pos2[1]
    y_diff = pos1[2] - pos2[2]
    r = √(x_diff^2 + y_diff^2)
    u = r * h_inv
    dW = kernel_deriv_2D(k, u, h_inv)
    tmp = dW * u
    Wgx = tmp * x_diff
    Wgy = tmp * y_diff
    return Wgx, Wgy
end
```

What do you think?
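To illustrate the proposed call signature (a hypothetical snippet; `WendlandC6()` stands in for whatever kernel constructor the package ends up exporting):

```julia
k     = WendlandC6()     # hypothetical kernel instance
h     = 0.5              # smoothing length
h_inv = 1.0 / h

pos1 = [0.1, 0.2]        # 2D particle positions
pos2 = [0.3, 0.4]

Wgx, Wgy = kernel_grad_2D(k, h_inv, pos1, pos2)
```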
---

Thanks for the response! Great to see you agreeing to change the name of the functionality, even if it might potentially break some older scripts; it makes it much clearer to the general user. Out of curiosity though, can you tell me where you have used `dW` only? Most of the time I only use `W` or `grad(W)` (I come from a CFD background) and never `dW` on its own (only inside the gradient term).

I think that `n_neighbours` should be removed. It has no relevant meaning for the kernel function, and I have never seen it being used (again, mostly from a CFD background), so I hope we can agree on that, unless you have an extremely clear and common use case. Just to simplify things a bit.

Regarding the if-statement, I have to agree to keep it that way. We cannot guarantee that every kernel evaluates to zero above its compact support without it.

Regarding the kernel distance, u = r/h or u = r/2h: I must say that I have never seen the former before (of course, now you have shown me 😊). Is it common to use it that way? In CFD SPH (Computational Fluid Dynamics Smoothed Particle Hydrodynamics) I have only ever seen the latter. Would it be possible to accept that different kernels can use different formulations of u? Perhaps we could call yours "u" and the one I am using "q", just as a visual distinction. The kernels I use are derived mostly using the principle of u = r/(kh), where k is commonly taken as 2, in some cases 3, etc. You can see a lot of kernels defined in the notation I am most used to here: https://pysph.readthedocs.io/en/latest/reference/kernels.html

Thanks for the improved function with regard to allocations! I ended up doing something similar. I was a bit surprised that `[]` compared to `()` would give 2 vs 0 allocations, but I guess that's how it is (see the sketch below).
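To make that allocation difference concrete, a minimal sketch with hypothetical toy functions (not from the package):

```julia
using BenchmarkTools

# returning an Array and broadcasting the type conversion: two allocations
# (one for the Vector{Float64}, one for the converted Vector{Float32})
g_array(a, b) = [a, b] .|> Float32

# returning a Tuple with the conversion done element-wise: zero allocations
g_tuple(a, b) = (Float32(a), Float32(b))

@btime g_array(1.0, 2.0)   # ~2 allocations
@btime g_tuple(1.0, 2.0)   # 0 allocations
```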
Continuing on, I think it would be great to have the gradient and perhaps even the Laplacian in this package! It would make it the "go to" Julia package for SPH interpolations.

I am just curious whether one of us made a typo. I see that you write `tmp = dW * u`, and while we now agree on `dW` (except for the r/h vs r/2h question), I believe that the `u` in `tmp` should have been `1/r`: the denominator is `r`, since the gradient is dW/dr multiplied by the unit vector `(pos1 .- pos2) / r`.
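For concreteness, a corrected version might look like the following. This is just a sketch, assuming `kernel_deriv_2D` returns dW/dr (i.e. it already contains the `h_inv` factor); in practice `r == 0` would also need a guard.

```julia
function kernel_grad_2D(k::SPHKernel, h_inv::Real, pos1::Vector{<:Real}, pos2::Vector{<:Real})
    x_diff = pos1[1] - pos2[1]
    y_diff = pos1[2] - pos2[2]
    r = √(x_diff^2 + y_diff^2)
    u = r * h_inv
    dW = kernel_deriv_2D(k, u, h_inv)   # assumed to return dW/dr
    tmp = dW / r                        # denominator r: dW/dr times the unit vector
    return tmp * x_diff, tmp * y_diff
end
```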
I also have another idea for future discussions, which I will bring up separately.

Sorry if the message is not formatted as well as yours; I believe I have answered your points, though, and I am very happy to see that we agree on a lot of it.

Kind regards, Ahmed

---

Hi Ahmed,

I've used `dW` […]. For the kernel definition I followed the notation used in Gadget2, since I work with a variant of that code, so it's just a different normalisation.

Good catch on that typo! ;) You're correct, that was wrong. I fixed it in the meantime, since I did some more refactoring. I will push these changes to a new branch and link the commit to this issue.

I like your idea of getting rid of the different dimensionalities! We could define one struct per kernel and dimension, e.g.

```julia
struct WendlandC6_3D{T} <: SPHKernel
    norm::T
end
```

and then send this through multiple dispatch. Then we could reduce the functions to a single `kernel_value` and `kernel_deriv` (see the sketch below).
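A minimal sketch of what I mean, assuming the u = r/h convention with compact support u ∈ [0, 1] (the struct layout and the Wendland C6 polynomial are illustrative, not a final API):

```julia
abstract type SPHKernel end

struct WendlandC6_2D{T} <: SPHKernel
    norm::T
end

struct WendlandC6_3D{T} <: SPHKernel
    norm::T
end

# one generic name; the dimensionality is resolved by dispatch on the struct type
function kernel_value(k::WendlandC6_2D{T}, u::Real, h_inv::Real) where T
    u >= 1 && return zero(T)
    n = k.norm * h_inv^2              # 2D normalisation scales with h_inv^2
    t = 1 - u
    T(n * t^8 * (1 + 8u + 25u^2 + 32u^3))
end

function kernel_value(k::WendlandC6_3D{T}, u::Real, h_inv::Real) where T
    u >= 1 && return zero(T)
    n = k.norm * h_inv^3              # 3D normalisation scales with h_inv^3
    t = 1 - u
    T(n * t^8 * (1 + 8u + 25u^2 + 32u^3))
end
```

Callers would then write `kernel_value(k, u, h_inv)` regardless of dimension, and adding a dimension is just one more struct plus one more method.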
What do you think about that? Do you see a more elegant way to do it? And if we're considering breaking changes, we could also change […].

Another thing to consider is version numbers: when I added this package, I have to admit that I didn't know nearly enough about git (and I doubt that I do now). I had to fight a bit to get the version number working, and the only version number I could choose at that point was 1.0. Would you prefer it if these changes were added to a potential v2.0? I think that would be the correct version number to choose, but it again misrepresents the state of the package...

I'm looking forward to your input!

Cheers,

---

Potential breakthrough in the process of automatic differentiation: see my question and answer 4. Basically, the summary is that by defining just the Wendland function, it is possible to take its derivative (`dW`) automatically. This is highly useful for us! A sketch of the idea follows.
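For instance, a minimal sketch (the Wendland C2 kernel in 3D with the u = r/h convention is just an example here; ForwardDiff is assumed as the only extra dependency):

```julia
using ForwardDiff

# Wendland C2 (3D), compact support u ∈ [0, 1]
wendland_c2(u) = u < 1 ? 21 / (2π) * (1 - u)^4 * (1 + 4u) : zero(u)

# dW/du comes for free instead of being hand-coded
dwendland_c2(u) = ForwardDiff.derivative(wendland_c2, u)

# cross-check against the analytic derivative dW/du = -21/(2π) * 20u * (1 - u)^3
dwendland_c2(0.5) ≈ -21 / (2π) * 20 * 0.5 * (1 - 0.5)^3   # true
```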
Kind regards

---

Hi Ahmed,

You're right, we should discuss this in separate issues. I opened #13, #14 and #15. For your suggestion with the […].

Very interesting that ForwardDiff is that performant! I was not aware of that, as I've never really used it.

---
Hi!
Thank you very much for this effort. I have spent quite a bit of time on this code to produce some nice examples of what I want to ask about, so I hope you enjoy the read :)
Basically, I had to import a kernel I know, just to be sure about what I am doing. I have looked at how you code and tried to replicate it as closely as possible; this also seems to improve performance, which is very nice (benchmarks at the bottom). Extending the code was quite easy too; you have made a very nice setup for this kind of thing.
I don't agree with your notation saying that grad(W) = `kernel_deriv_dim`; you can see the full explanation of why in `kernel_grad_2d`. What are your thoughts about this?
If you agree, I think it should be split into `kernel_value`, `kernel_deriv` (dW/dq) and `kernel_gradient`. Otherwise, I am looking forward to learning whether I have misunderstood something, which could also well be the case.
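For reference, the distinction can be written as follows (using q = r/h here; the r/(kh) conventions only change the constant):

```math
\nabla W(r, h) \;=\; \frac{dW}{dr}\,\frac{\vec{x}_1 - \vec{x}_2}{r}
\;=\; \frac{1}{h}\,\frac{dW}{dq}\,\frac{\vec{x}_1 - \vec{x}_2}{r},
\qquad q = \frac{r}{h}
```

The scalar dW/dq and the vector grad(W) are different objects, which is why I think the split makes sense.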
The code should run instantly, with no dependencies other than this library.
Kind regards