Systematic use of coord_t conversions, enabling to change unit size #1710
Conversation
I had a discussion with @casperlamboo just today about this library https://mpusz.github.io/units/ and seeing this PR come in from you makes me very happy. But at first glance (on my phone) I think you're trying to solve something here that could also be done with the mpusz units library (which is already subject to C++23 ISO standardization), which should take care of unit and prefix conversions at compile time, and I'm all in favor of adding those kinds of checks. Are you familiar with the mpusz units library, and can you tell me whether I'm correct in assuming that you're trying to solve something similar here? If not, what is the difference? I haven't looked at the code yet. Also tagged @rburema because he is probably also interested in this.
@Piezoid before you spend time answering, let me first actually read the code and the intent of this PR, because my previous comment might have been a bit premature and I think you might be solving something different than what I first assumed. I will put my phone away now and enjoy the evening first. 😉
I'm not familiar with this library, but I would say no. The objective of this PR is to make the fixed-point precision variable at compile time and have all the constants in the code change automatically. User-defined literals for additional units in mpusz's library could indeed handle that last bit about units. However, I don't know whether it can work with fixed-point numbers the way CuraEngine does. Usually, fixed-point representations divide the result of multiplications by the scaling factor. For example, representing millimeters with a scaling factor of 1000, an area computation looks like the sketch below. You can, for example, increase the scaling factor for more precision. At the current scale factor, I have seen some cylindrical models slice with a number of beads that varies around the perimeter; increasing the scale factor made this unevenness disappear. To summarize, this is about adding some flexibility for testing and future proofing. Also, having all the dimensional constants indexed in a few commits can be useful 🙂 Edit: spelling, sample image, and
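A minimal sketch of that kind of fixed-point area computation, assuming a scale factor of 1000 and illustrative names (not CuraEngine's actual code):

```cpp
#include <cstdint>

using coord_t = std::int64_t;

// 1 mm is stored as 1000 integer units (1 unit = 1 µm).
constexpr coord_t SCALE = 1000;

// The product of two scaled lengths carries SCALE^2, so it must be
// divided by SCALE once to remain a "scaled" mm^2 quantity.
constexpr coord_t area(coord_t width, coord_t height)
{
    return width * height / SCALE;
}

static_assert(area(2 * SCALE, 3 * SCALE) == 6 * SCALE); // 2 mm * 3 mm = 6 mm^2
```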
good evening!
Huh? The CI built origin/main (75c3d06), not the PR branch (4d81c52). @jellespijker
Also, what I see: https://github.com/Ultimaker/CuraEngine/runs/8134929036?check_suite_focus=true
However, I see that it works correctly with the updated workflows for the other PRs, so I'll try to open and close this like @jellespijker told me to (to restart the unit tests).
I think that the issue is the
Yes, I think that was a remnant of an old attempt to fix it. We only need to distinguish between PRs from forks and internal PRs in our reusable version workflow. I updated the unit test workflow. Sorry for making you our guinea pig.
😞 we have a failing test, but the workflow seems to work again, although the results aren't published.
No worries, I pushed because I saw some activity in the workflows and wanted to give it a try. It fails as expected, which confirms the source commit.
We actually love this PR and something like this has been on our wishlist for a while now. Let us know when it is ready to be merged.
@jellespijker Thanks! Well, I think it's ready, even if I occasionally find missing units on literals when debugging my branch with increased precision. Although I'm unsure whether this PR can achieve complete exhaustiveness in a reasonable time. I'm open to renaming suggestions for the constants and functions in
I ended up adding to this PR the reimplementation of the serialization of
The tests pass with an integer unit ranging from 1 nm to 0.1 mm (INT10POW_PER_MM 1 to 6). This doesn't mean that Cura is usable at these scales; it will suffer from rounding issues and overflows very quickly. For example, measuring prime tower volume in 1 nm³ units will certainly overflow. I'm experimenting on my dev branch with using more floating point in computations to solve this. The method is to have all irrational functions (like
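As a back-of-the-envelope check of that overflow claim, assuming a 64-bit `coord_t` (the snippet is illustrative):

```cpp
#include <cstdint>
#include <limits>

// At INT10POW_PER_MM == 6, one integer unit is 1 nm, so 1 mm == 10^6 units
// and 1 mm^3 == 10^18 unit^3. int64_t tops out near 9.2e18, so roughly
// 10 mm^3 of accumulated volume already overflows.
constexpr std::int64_t units_per_mm = 1'000'000;
constexpr std::int64_t one_mm3 = units_per_mm * units_per_mm * units_per_mm;
static_assert(one_mm3 > std::numeric_limits<std::int64_t>::max() / 10);
```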
These literals scale values to mitigate rounding errors, set tolerances, or detect close-enough vertices. We may or may not want them to scale with the integer unit size (a sketch of such a literal follows below).
Allows setting a different fixed-point precision at compile time.
No support for
- In `LayerPlan::addWall`: there was no scaling
- In `Simplify(const Settings& settings)`: wrongly scaled mm → `coord_t`, as noted in #1677.
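A sketch of what such a unit-size-aware literal could look like; `pow10`, the `_mm` literal, and the exact definitions are assumptions for illustration, not this PR's code:

```cpp
#include <cstdint>

using coord_t = std::int64_t;

inline constexpr int INT10POW_PER_MM = 3; // 3 -> 1 µm units, 6 -> 1 nm units

constexpr coord_t pow10(int n)
{
    coord_t r = 1;
    while (n-- > 0)
        r *= 10;
    return r;
}

// 1_mm always yields one millimetre in the current integer unit, so
// constants written with it follow the unit size automatically.
constexpr coord_t operator""_mm(unsigned long long v)
{
    return static_cast<coord_t>(v) * pow10(INT10POW_PER_MM);
}

static_assert(2_mm == 2000); // with INT10POW_PER_MM == 3
```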
Even if ranges-v3 is included in the CMake dependencies, I could not include it:
No ranges-v3 here... I also added a commit fixing the scaling of the
Hi @Piezoid, sorry that this PR is taking such a long time to get merged, even though we have wanted something like this in there for a long time now. We have been pressed for time for a while now, but we are blocking our calendar for at least an hour a week as a team to collectively pick up more PRs. I call dibs on this one. Devs, see CURA-9775.
Did you do a new
I would love to see the conversion functions written a bit more generically; I think there are too many specializations at the moment. Maybe something like this:

```cpp
#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<std::size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number) noexcept
{
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    // decltype(number) is a const reference; strip cv/ref before the trait check
    if constexpr (std::is_integral_v<std::remove_cvref_t<decltype(number)>>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    return number / std::pow(INTEGER_MM_FACTOR, dimension);
}
```

Which could then be used as:

```cpp
coord_t lvalue_coordt = 12345678;
auto p_mm = to_mm<>(lvalue_coordt);
auto p_mm_rvalue = to_mm<>(12345678);
auto p_mm2 = to_mm<2>(lvalue_coordt);
auto p_mm2_rvalue = to_mm<2>(12345678);
```

The suggestion shows the same performance as your code, but adds more flexibility IMO: https://quick-bench.com/q/uu0jfBz57K5gVqLtAHZTbLcScoc

With the use of concepts and constraints we can easily add a specialization for points:
```cpp
#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

template<class T>
concept is_point_2d = requires(T p) {
    { is_number<decltype(p.X)> };
    { is_number<decltype(p.Y)> };
};

template<class T>
concept is_point_3d = requires(T p) {
    { is_number<decltype(p.x)> };
    { is_number<decltype(p.y)> };
    { is_number<decltype(p.z)> };
};

template<class T>
concept is_point = is_point_2d<T> || is_point_3d<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<std::size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number)
{
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    if constexpr (std::is_integral_v<std::remove_cvref_t<decltype(number)>>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    return number / std::pow(INTEGER_MM_FACTOR, dimension);
}

// Note: assumes the point type's members can hold floating-point values,
// otherwise the braced initialization below would be a narrowing error.
template<std::size_t dimension = 1>
constexpr is_point auto to_mm(const is_point auto& point)
{
    // strip the const reference so point_t names the value type
    using point_t = std::remove_cvref_t<decltype(point)>;
    if constexpr (is_point_2d<point_t>)
    {
        return point_t{ to_mm<dimension>(point.X), to_mm<dimension>(point.Y) };
    }
    else
    {
        return point_t{ to_mm<dimension>(point.x), to_mm<dimension>(point.y), to_mm<dimension>(point.z) };
    }
}
```

In order for that to work, however, we do need to modernize the class. What is your opinion about such a change? We can help out here by opening a PR against your branch.
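A minimal usage sketch for that point overload, assuming the definitions above plus a hypothetical `PointD` type with floating-point members (CuraEngine's own integral point types would hit the narrowing issue noted in the comment):

```cpp
// Hypothetical 2D point for illustration; not CuraEngine's IntPoint.
struct PointD
{
    double X;
    double Y;
};

int main()
{
    PointD p{ 12345678.0, 2500.0 };  // coordinates in integer units
    auto p_mm  = to_mm<>(p);         // both members scaled by 1/1000 -> mm
    auto p_mm2 = to_mm<2>(p);        // mm^2 scaling, e.g. for areas
    (void)p_mm;
    (void)p_mm2;
}
```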
I'll look into that (I will probably not be available enough in the forthcoming weeks). I've been using a generic version on my branch that adds floating-point computations: Coord_t.h and IntPoint.h. It's more of a playground in constant evolution, but I could backport some of it. Your version brings some improvements that I'll merge and backport to this PR.
Are you open to accepting PRs from us to help this PR forward?
Yes, PRs are welcome! Edit: some minor observations on the generic code:
I personally think that the usage of mp-units would also solve this, while saving us from implementing this type handling ourselves, especially since it should be part of the C++23 standard. The following code is based on me playing around a bit with the definition of points and polygons: main...polygonamory

```cpp
using base_length = units::isq::si::micrometre; // The base length unit; use a different prefix for more or less precision

template<class T>
concept Scalar = std::is_integral_v<T> || std::is_integral_v<typename T::rep>;

template<Scalar Tp, std::size_t Nm = 2>
using Point = std::array<Tp, Nm>;

// Length unit specialization of Point
template<Scalar Tp, std::size_t Nm = 2>
using Position = Point<units::isq::si::length<unit::base_length, Tp>, Nm>;

auto point_1 = Position<std::int64_t>{ 2000 * u::mm, 5 * u::km };
auto point_2 = Position<std::int64_t>{ 2 * u::m, 40 * u::um };
auto point_3 = point_1 + point_2;
EXPECT_EQ(point_3.at(0), 4 * u::m);
```

We discussed this PR in the team, however, and we came to the conclusion that we will first focus on systematic usage of the
Once we use
@Piezoid just want to explicitly say: you're appreciated! Thx
Just to make sure you are aware:
This is a first shot at enabling unit size to be changed globally at compile time.
- `INT_EPSILON = 10` (arc precision, snapping, and …)
- `INT_PRECISION_COMP = 10000` (length of normed vectors for minimizing rounding error during some operations like rotations).

There should be no change of behavior, since the constants remain identical. With one exception: `INT_PRECISION_COMP` is used at some places where the constant was previously lower (see commit "scale (or not) magic numbers related to precision and snapping").
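A hypothetical sketch of how such constants could be tied to the unit size, chosen so they reduce to the values above at the default scale; the `pow10` helper and exact expressions are assumptions, not this PR's code:

```cpp
inline constexpr int INT10POW_PER_MM = 3; // default: 1 integer unit = 1 µm

constexpr long long pow10(int n)
{
    long long r = 1;
    while (n-- > 0)
        r *= 10;
    return r;
}

// At INT10POW_PER_MM == 3 these evaluate to the historical 10 and 10000,
// and they grow with the scale factor when the unit size shrinks.
inline constexpr long long INT_EPSILON        = pow10(INT10POW_PER_MM - 2);
inline constexpr long long INT_PRECISION_COMP = pow10(INT10POW_PER_MM + 1);
```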