
Systematic use of coord_t conversions, enabling to change unit size #1710

Closed

wants to merge 9 commits into from

Conversation

Piezoid
Contributor

@Piezoid Piezoid commented Aug 7, 2022

This is a first shot at enabling unit size to be changed globally at compile time.

  • more use of user-defined literals,
  • refactor or annotate magic numbers,
  • new constants that don't scale with the unit size: INT_EPSILON = 10 (arc precision, snapping) and INT_PRECISION_COMP = 10000 (length of normed vectors, for minimizing rounding error during operations like rotations).

There should be no change in behavior, since the constants remain identical, with one exception: INT_PRECISION_COMP is now used in some places where the constant was previously lower (see commit "scale (or not) magic numbers related to precision and snapping").
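
As a rough sketch (illustrative only, not the PR's actual code; INT10POW_PER_MM, INT_EPSILON and INT_PRECISION_COMP are the names introduced here, while pow10 and UNITS_PER_MM are hypothetical helpers), the scaled constants could look like:

#include <cstdint>

using coord_t = std::int64_t;

constexpr int INT10POW_PER_MM = 3; // 10^3 units per mm -> 1 unit = 1 µm

constexpr coord_t pow10(int n) // compile-time power of ten
{
    coord_t r = 1;
    while (n-- > 0)
        r *= 10;
    return r;
}

constexpr coord_t UNITS_PER_MM = pow10(INT10POW_PER_MM);

// Constants that do NOT scale with the unit size:
constexpr coord_t INT_EPSILON = 10;           // arc precision, snapping
constexpr coord_t INT_PRECISION_COMP = 10000; // length of normed vectors

// A user-defined literal that scales with the unit size:
// 1.5_mm == 1500 at the default INT10POW_PER_MM of 3.
constexpr coord_t operator""_mm(long double v)
{
    return static_cast<coord_t>(v * UNITS_PER_MM + (v < 0 ? -0.5 : 0.5));
}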

@Piezoid Piezoid changed the title from "Scale units" to "Systematic use of coord_t conversions, enabling to change unit size" Aug 8, 2022
@jellespijker
Member

I had a discussion with @casperlamboo just today about this library https://mpusz.github.io/units/ and seeing this PR from you makes me very happy. But at first glance (on my phone) I think you're trying to solve something here that could also be done with the mpusz units library (which has already been proposed for the C++23 ISO standard).

It should take care of unit and prefix conversions at compile time, and I'm all in favor of adding those kinds of checks.

Are you familiar with the mpusz units library, and can you tell me whether I'm correct in assuming that you're trying to solve something similar here? If not, what is the difference?

Didn't look at the code yet.

Also tagged @rburema because he is probably also interested in this.

@jellespijker
Member

jellespijker commented Aug 8, 2022

@Piezoid before you spend time answering, let me first actually read the code and the intent of this PR, because my previous comment might have been a bit premature; I think you might be solving something different than what I first assumed.

I will put my phone away now and enjoy the evening first. 😉

@Piezoid
Contributor Author

Piezoid commented Aug 8, 2022

Are you familiar with the mpusz units library, and can you tell me whether I'm correct in assuming that you're trying to solve something similar here? If not, what is the difference?

I'm not familiar with this library, but I would say no. The objective of this PR is to make the fixed-point precision variable at compile time and have all the constants in the code change automatically. The user-defined literals for units beyond _mu (_mm, _cm) are just a convenience and are orthogonal to this issue.

mpusz's library could indeed handle that last bit about units. However, I don't know whether it can work with fixed-point numbers the way CuraEngine does.

Usually, fixed-point representations divide the result of multiplications by the scaling factor. For example, representing millimetres with a scaling factor of 1000, an area computation looks like:
[1.234 mm * 5.000 mm] = 1234 * 5000 / 1000 = 6170 = [6.170 mm²] -> the scaling factor for areas is kept at 1000.
Cura doesn't use real fixed-point numbers; instead everything is in µm and areas are in µm². This is simpler and faster when the scaling factor is not a power of 2. However, it creates an excess of precision which can lead to overflows.
This PR keeps Cura's "pseudo-fixed point system", but enables changing the scaling factor (or base unit).
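
To make the difference concrete, here is a small self-contained illustration (hypothetical code with a hard-coded factor of 1000, not CuraEngine's actual code):

#include <cstdint>

constexpr std::int64_t SCALE = 1000; // units per mm

// True fixed point: rescale after multiplying, so areas stay at mm scale.
constexpr std::int64_t fixed_mul(std::int64_t a, std::int64_t b)
{
    return a * b / SCALE; // 1234 * 5000 / 1000 = 6170 -> [6.170 mm²]
}

// Cura-style: no rescaling, the product is simply in µm².
constexpr std::int64_t area_um2(std::int64_t a, std::int64_t b)
{
    return a * b; // 1234 * 5000 = 6170000 µm² -> excess precision, overflow risk
}

static_assert(fixed_mul(1234, 5000) == 6170);
static_assert(area_um2(1234, 5000) == 6170000);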

For example, you can increase INT10POW_PER_MM from 3 to 4 and have Cura do all computations in tenths of a micrometre (0.1 µm). Technically this could allow printing a house or aiming an ion-beam deposition machine without manually scaling the input and output back and forth.
A more practical improvement is that all the constants are passed through the same scaling code. There is no longer the implicit assumption that 1000 means 1 mm. During testing, this makes it possible to track overflows and precision issues.

At the current scale factor, I have seen some cylindrical models slice with a number of beads that varies around the perimeter. Increasing the scale factor made this unevenness disappear:
[screenshot: cylinder]
Also, near-intersections are less of a problem with a higher scaling factor. For example, with 0.1 µm units, this model (Ultimaker/Cura#12984) can be sliced without hitting: CuraEngine/src/SkeletalTrapezoidation.cpp:335: void cura::SkeletalTrapezoidation::computeSegmentCellRange(vd_t::cell_type &, cura::Point &, cura::Point &, vd_t::edge_type *&, vd_t::edge_type *&, const std::vector<Point> &, const std::vector<Segment> &): Assertion '! (v0 == to && v1 == from)' failed.

To summarize, this is about adding some flexibility for testing and future-proofing. Also, having all the dimensional constants indexed by a few commits can be useful 🙂

Edit: spelling, sample image, and

I will put my phone away now and enjoy the evening first. 😉

good evening!

@Piezoid Piezoid mentioned this pull request Aug 31, 2022
@github-actions
Contributor

github-actions bot commented Sep 1, 2022

Unit Test Results

25 tests ±0   25 ✔️ ±0   10s ⏱️ -1s
 1 suites ±0    0 💤 ±0
 1 files  ±0    0 ❌ ±0

Results for commit 4d81c52. ± Comparison against base commit 36968e3.

♻️ This comment has been updated with latest results.

@Piezoid
Contributor Author

Piezoid commented Sep 1, 2022

Uh? The CI built origin/main (75c3d06), not the PR branch (4d81c52). @jellespijker

@rburema
Member

rburema commented Sep 1, 2022

That's also what I see: https://github.com/Ultimaker/CuraEngine/runs/8134929036?check_suite_focus=true

Resolving deltas: 100% (21/21), done.
  From https://github.com/Ultimaker/CuraEngine
   * [new ref]         75c3d060df97e294cb586053f7f9a3e257107384 -> origin/main
Determining the checkout info
Checking out the ref
  /usr/bin/git checkout --progress --force -B main refs/remotes/origin/main
  Switched to a new branch 'main'
  branch 'main' set up to track 'origin/main'.
/usr/bin/git log -1 --format='%H'
'75c3d060df97e294cb586053f7f9a3e257107384'

@rburema
Member

rburema commented Sep 1, 2022

However, I see that it goes right with the updated workflows for the other PRs, so I'll try to close and reopen this like @jellespijker told me to (to restart the unit tests).

@rburema rburema closed this Sep 1, 2022
@rburema rburema reopened this Sep 1, 2022
@Piezoid
Contributor Author

Piezoid commented Sep 1, 2022

I think that the issue is the pull_request_target event source:

This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does.

@jellespijker
Member

Yes, I think that was a remnant of an old attempt to fix it.

We only need to distinguish between PRs from forks and internal PRs in our reusable version workflow.

I updated the unit test workflow.
Just to be sure, I will close and reopen this PR.

Sorry for making you our guinea pig.

@jellespijker jellespijker reopened this Sep 1, 2022
@jellespijker
Member

jellespijker commented Sep 1, 2022

😞 We have a failing test, but the workflow seems to work again.

Although the results aren't published:

Warning: This action is running on a pull_request event for a fork repository. It cannot do anything useful like creating check runs or pull request comments. To run the action on fork repository pull requests, see https://github.com/EnricoMi/publish-unit-test-result-action/blob/v1.20/README.md#support-fork-repositories-and-dependabot-branches

@Piezoid
Contributor Author

Piezoid commented Sep 1, 2022

Sorry for making you our guinea pig.

No worries, I pushed because I saw some activity in the workflows and wanted to give it a try.

It fails as expected, which confirms the source commit.

@jellespijker
Member

We actually love this PR, and something like this has been on our wishlist for a while now. Let us know when it is ready to be merged.

@Piezoid
Contributor Author

Piezoid commented Sep 7, 2022

We actually love this PR, and something like this has been on our wishlist for a while now. Let us know when it is ready to be merged.

@jellespijker Thanks!

Well, I think it's ready, even if I occasionally find missing units on literals when debugging my branch with increased precision. However, I'm unsure whether this PR can achieve complete exhaustiveness in a reasonable time.

I'm open to renaming suggestions for constants and functions in Coord_t.h.

@Piezoid
Contributor Author

Piezoid commented Sep 13, 2022

I ended up adding to this PR a reimplementation of the serialization of coord_t into decimal strings; I couldn't deal with the old code there (this supersedes #1584). Besides, it is faster than calling sprintf: the duration of the export phase was reduced by 40% (torture test "toaster" scaled to 150%, export phase duration reduced from 21.8 s to 13.1 s).
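
The flavor of it is something like this (a minimal sketch under assumptions, not the PR's actual implementation; append_mm and its signature are hypothetical):

#include <cstddef>
#include <cstdint>
#include <string>

// Append `units` (in units of 10^-digits mm) to `out` as a decimal number
// of millimetres, e.g. units=12345, digits=3 -> "12.345".
// (Ignores the INT64_MIN edge case for brevity.)
void append_mm(std::string& out, std::int64_t units, int digits)
{
    if (units < 0)
    {
        out += '-';
        units = -units;
    }
    std::int64_t div = 1;
    for (int i = 0; i < digits; ++i)
        div *= 10;
    out += std::to_string(units / div); // integral part
    std::int64_t frac = units % div;
    if (frac != 0)
    {
        out += '.';
        while (frac % 10 == 0) // strip trailing zeros
        {
            frac /= 10;
            --digits;
        }
        const std::string f = std::to_string(frac);
        out.append(static_cast<std::size_t>(digits) - f.size(), '0'); // leading zeros
        out += f;
    }
}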

The tests pass with an integer unit ranging from 1 nm to 0.1 mm (INT10POW_PER_MM from 1 to 6). This doesn't mean that Cura is usable at all these scales; it will suffer from rounding issues and overflows very quickly. For example, measuring the prime tower volume in 1 nm³ units will certainly overflow.

I'm experimenting on my dev branch with using more floating point in computations to solve this. The method is to have all irrational functions (like vSize) return a floating-point value, and to mechanically push it down until it is converted back to a coord_t for insertion into a Polygon. Some of these changes are very extensive; for example, they led to converting the line widths to floating point inside the beading strategies, from top to bottom.
From a performance standpoint, it doesn't make sense on modern CPUs to use integer computations exclusively: the integer ALU execution ports are probably saturated by address computations while the FPU ports sit mostly unused. Conversions from/to floating point can have surprising effects and need special care, but doing them intentionally reduces the number of implicit truncations.
This is very much a WIP, and some things are currently broken, but it slices every "failed to slice" model from the bug tracker without Lukáš Hejl's patches (#1731, though they should definitely be merged!). It also improves the issues of wandering seams and walls transitioning back and forth on even contours (like in the screenshot above).
(The published code is a linearization of my dev branches that I use for bisecting during tests, so it contains different intents and may be rebased at any time.)
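
As an illustration of the vSize direction described above (assumed, simplified types and signatures; in current CuraEngine, vSize returns a coord_t):

#include <cmath>
#include <cstdint>

using coord_t = std::int64_t;
struct Point { coord_t X, Y; };

// The irrational result (a square root) stays floating point instead of
// being truncated to an integer at every intermediate step...
double vSize(const Point& p)
{
    return std::hypot(static_cast<double>(p.X), static_cast<double>(p.Y));
}

// ...and is converted back to coord_t only when it lands in a Polygon.
coord_t to_coord(double v)
{
    return static_cast<coord_t>(std::llround(v));
}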

@Piezoid
Contributor Author

Piezoid commented Sep 27, 2022

No support for <span> in gcc 9. 😞
I guess I could use <range/v3/view/span.hpp>?

 - In LayerPlan::addWall: there was no scaling
 - In Simplify(const Settings& settings): wrongly scaled mm->coord_t, as noted
   in #1677.
@Piezoid
Contributor Author

Piezoid commented Sep 30, 2022

I guess I could use <range/v3/view/span.hpp>?

Even though range-v3 is included in the CMake dependencies, I could not include it: fatal error: 'range/v3/view/span.hpp' file not found
This is the list of include paths passed to the compiler:

-I./include
-I./tests/../include
-isystem $HOME/.conan/data/spdlog/1.10.0/_/_/package/808f890cf60c33f93ca0487b4edb472493ec000d/include
-isystem $HOME/.conan/data/fmt/9.0.0/_/_/package/290cacd6edfcde8615e4e76e8a93b473347b0321/include
-isystem $HOME/.conan/data/arcus/5.2.0-alpha/piezo/testing/package/ce11c40a6970dfa7b6e489d3e84271fb2bc7f688/include
-isystem $HOME/.conan/data/protobuf/3.21.4/_/_/package/22d9ad5d81e60c43a5f5b504fd8e70c9bf2de2cf/include
-isystem $HOME/.conan/data/zlib/1.2.12/_/_/package/24d596ecc3e7cfef35630620092c5615473ba82a/include
-isystem $HOME/.conan/data/clipper/6.4.2/_/_/package/44b32a5143880d5efe1678c0bc6f3bd000783e6f/include

No range-v3 here...

I also added a commit fixing the scaling of the meshfix_maximum_extrusion_area_deviation setting, which was wrong in the constructor of Simplify. That might explain why you had to adjust profiles (it was divided by 1000).

@jellespijker
Member

Hi @Piezoid, sorry that this PR is taking such a long time to get merged, even though we've wanted something like this in there for a long time now. We have been pressed for time for a while now, but we are blocking out at least an hour a week as a team to collectively pick up more PRs.

I call dibs on this one. Devs, see CURA-9775.

@jellespijker
Member

Did you do a new conan install . --build=missing --update && conan install . --build=missing --update -s build_type=Debug again? range-v3 should be installed then.

@jellespijker
Member

jellespijker commented Oct 21, 2022

I would love to see the conversion functions written a bit more generically; I think there are too many specializations atm. Maybe something like this:

#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<std::size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number) noexcept
{
    // Strip the const& so the integral check sees the value type.
    using value_t = std::remove_cvref_t<decltype(number)>;
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    if constexpr (std::is_integral_v<value_t>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    return number / std::pow(INTEGER_MM_FACTOR, dimension);
}

Which could then be used as:

coord_t lvalue_coordt = 12345678;
auto p_mm = to_mm<>(lvalue_coordt);
auto p_mm_rvalue = to_mm<>(12345678);
auto p_mm2 = to_mm<2>(lvalue_coordt);
auto p_mm2_rvalue = to_mm<2>(12345678);

The suggestion shows the same performance as your code, but adds more flexibility imo.

https://quick-bench.com/q/uu0jfBz57K5gVqLtAHZTbLcScoc

With the use of concepts and constraints we can easily add a specialization for points:

#include <cmath>
#include <concepts>
#include <cstddef>
#include <type_traits>

template<class T>
concept is_number = std::integral<T> || std::floating_point<T>;

template<class T>
concept is_point_2d = requires(T p){
    requires is_number<decltype(p.X)>;
    requires is_number<decltype(p.Y)>;
};

template<class T>
concept is_point_3d = requires(T p){
    requires is_number<decltype(p.x)>;
    requires is_number<decltype(p.y)>;
    requires is_number<decltype(p.z)>;
};

template<class T>
concept is_point = is_point_2d<T> || is_point_3d<T>;

static constexpr std::integral auto INTEGER_MM_FACTOR = 1000;

template<std::size_t dimension = 1>
constexpr std::floating_point auto to_mm(const is_number auto& number)
{
    using value_t = std::remove_cvref_t<decltype(number)>;
    if constexpr (dimension == 0)
    {
        return 1.;
    }
    if constexpr (std::is_integral_v<value_t>)
    {
        constexpr std::floating_point auto rounding = 0.5;
        return number / std::pow(INTEGER_MM_FACTOR, dimension) + std::copysign(rounding, number);
    }
    return number / std::pow(INTEGER_MM_FACTOR, dimension);
}

template<std::size_t dimension = 1>
constexpr is_point auto to_mm(const is_point auto& point)
{
    // remove_cvref_t: decltype(point) is a const&, which can't be list-initialized.
    using point_t = std::remove_cvref_t<decltype(point)>;
    if constexpr (is_point_2d<point_t>)
    {
        return point_t{ to_mm<dimension>(point.X), to_mm<dimension>(point.Y) };
    }
    else // without the else, the 3D branch would not compile for 2D points
    {
        return point_t{ to_mm<dimension>(point.x), to_mm<dimension>(point.y), to_mm<dimension>(point.z) };
    }
}

In order for that to work, however, we do need to modernize the Point3 and Point classes, but you get the idea.

What is your opinion on such a change? We can help out here by opening a PR against your branch.
Suggestion__Contributes_to_CURA-9775_Systematic_use_of_coord_t_conversions,_enabling_to_ch.patch.txt

@Piezoid
Contributor Author

Piezoid commented Oct 21, 2022

I would love to see the conversion functions written a bit more generically; I think there are too many specializations atm. Maybe something like this:

I'll look into that (I will probably not be available enough in the forthcoming weeks).

I've been using a generic version on my branch that adds floating-point computations: Coord_t.h and IntPoint.h. It's more of a playground in constant evolution, but I could backport some of it.

Your version brings some improvements that I'll merge and backport to this PR.

@jellespijker
Member

I will probably not be available enough in the forthcoming weeks

Are you open to accepting PRs from us to help this PR move forward?

@Piezoid
Contributor Author

Piezoid commented Oct 23, 2022

I will probably not be available enough in the forthcoming weeks

Are you open to accepting PRs from us to help this PR move forward?

Yes, PRs are welcome!

Edit: some minor observations on the generic code (a rough sketch follows the list):

  • Whether the trunc->round offset is applied should depend on the return type rather than on the argument type.
  • It can be useful to have a separate function for rounding floating point to coord_t (for example for use in VoronoiUtils).
  • Multiplying by number * (1 / power) can be marginally faster than a double division.
  • I'd rather have a coordinate_t concept that restricts to coord_t | std::floating_point; this would catch some uses of narrower int types, which are more subject to overflows.
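
Combining those observations, a rough sketch (hypothetical names, hard-coded factor, not code from either branch):

#include <cmath>
#include <concepts>
#include <cstdint>

using coord_t = std::int64_t;

// Restrict to coord_t or floating point, to catch narrower int types.
template<class T>
concept coordinate_t = std::same_as<T, coord_t> || std::floating_point<T>;

// A separate, explicit rounding function from floating point to coord_t.
coord_t round_to_coord(std::floating_point auto v)
{
    return static_cast<coord_t>(std::llround(v));
}

// Multiply by a precomputed reciprocal instead of dividing; no rounding
// offset here, since the floating-point return type keeps the fraction.
constexpr double INV_UNITS_PER_MM = 1.0 / 1000.0;

constexpr double to_mm(coordinate_t auto v)
{
    return static_cast<double>(v) * INV_UNITS_PER_MM;
}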

@jellespijker
Member

jellespijker commented Nov 11, 2022

I'm not familiar with this library, but I would say no. The objective of this PR is to make the fixed-point precision variable at compile time and have all the constants in the code change automatically. The user-defined literals for units beyond _mu (_mm, _cm) are just a convenience and are orthogonal to this issue.

mpusz's library could indeed handle that last bit about units. However, I don't know whether it can work with fixed-point numbers the way CuraEngine does.

Usually, fixed-point representations divide the result of multiplications by the scaling factor. For example, representing millimetres with a scaling factor of 1000, an area computation looks like:
[1.234 mm * 5.000 mm] = 1234 * 5000 / 1000 = 6170 = [6.170 mm²] -> the scaling factor for areas is kept at 1000.
Cura doesn't use real fixed-point numbers; instead everything is in µm and areas are in µm². This is simpler and faster when the scaling factor is not a power of 2. However, it creates an excess of precision which can lead to overflows.
This PR keeps Cura's "pseudo-fixed point system", but enables changing the scaling factor (or base unit).

For example, you can increase INT10POW_PER_MM from 3 to 4 and have Cura do all computations in tenths of a micrometre (0.1 µm). Technically this could allow printing a house or aiming an ion-beam deposition machine without manually scaling the input and output back and forth.
A more practical improvement is that all the constants are passed through the same scaling code. There is no longer the implicit assumption that 1000 means 1 mm. During testing, this makes it possible to track overflows and precision issues.

I personally think that the usage of mp-units would also solve this, while saving us from implementing this type handling ourselves, especially since it should be part of the C++23 standard.

The following code is based on me playing around a bit with the definition of points and polygons: main...polygonamory

using base_length = units::isq::si::micrometre; // The base length unit, use a different prefix for more or less precision

template<class T>
concept Scalar = std::is_integral_v<T> || std::is_integral_v<typename T::rep>;

template<Scalar Tp, std::size_t Nm = 2>
using Point = std::array<Tp, Nm>;

// Length unit specialization of Point
template<Scalar Tp, std::size_t Nm = 2>
using Position = Point<units::isq::si::length<base_length, Tp>, Nm>;

auto point_1 = Position<std::int64_t>{ 2000 * u::mm, 5 * u::km };
auto point_2 = Position<std::int64_t>{ 2 * u::m, 40 * u::um };

auto point_3 = point_1 + point_2;
EXPECT_EQ(point_3.at(0), 4 * u::m);

We discussed this PR in the team, however, and we came to the conclusion that we will first focus on systematic usage of the coord_t type throughout the code (devs see CURA-9802). This ticket is on our backlog and will be picked up in one of the coming weeks as part of our Technical Improvements epic.

Once we use coord_t consistently throughout the code, unit-wise calculation and scaling, such as what you cover in this ticket or what I myself suggest with the help of mp-units, will be revisited as a discussion (devs see CURA-9802). Both approaches have merit imo, and both would greatly reduce bug-prone behavior like what you're fixing in PR #1738.

@Piezoid
Contributor Author

Piezoid commented Nov 11, 2022

We discussed this PR in the team, however, and we came to the conclusion that we will first focus on systematic usage of the coord_t type throughout the code (devs see CURA-9802)

If that helps, I have a few commits substituting s/int/coord_t/ in (I hope) the right places: d65bdad, ead7283, 70590f5

@jellespijker
Member

@Piezoid just want to explicitly say: You're appreciated!

thx

@github-actions github-actions bot added the PR: Community Contribution 👑 Community Contribution PR's label Nov 30, 2022
@DerGenaue
Contributor

Just to make sure you are aware:
Reducing Cura's internal 64-bit fixed-point numbers from 1 µm to 10 nm units DOES reduce the theoretical maximum representable distance, from the scale of the entire solar system (the heliopause is ~18 billion km) to only about the orbital radius of the Earth (~150 million km).
So this change might make Cura unusable for some interplanetary-scale print jobs.
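
For the record, a quick back-of-the-envelope check of those ranges (assuming coord_t = int64_t; figures are approximate):

#include <cstdint>
#include <limits>

constexpr auto MAX = std::numeric_limits<std::int64_t>::max(); // ~9.22e18

// 1 µm units: 1 km = 10^9 units -> about 9.2 billion km representable.
static_assert(MAX / 1'000'000'000 == 9'223'372'036);

// 10 nm units: 1 km = 10^11 units -> about 92 million km, somewhat less
// than the Earth-Sun distance (~150 million km).
static_assert(MAX / 100'000'000'000 == 92'233'720);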

@Piezoid Piezoid closed this by deleting the head repository Dec 19, 2023