
Update Docs
alandefreitas committed Dec 23, 2020
1 parent 8c3cc06 commit 34c935a
Showing 2 changed files with 15 additions and 5 deletions.
10 changes: 7 additions & 3 deletions docs/quick-start/front-container.md
@@ -21,7 +21,7 @@ The `pareto::front` object defines a container for **Pareto fronts**, which is b

```python
# Three-dimensional Pareto front
# The dimension will be set when you insert the first element
m = pareto.front()
```

@@ -73,10 +73,14 @@ These objectives often go in different directions (e.g., minimize price vs. maxi
!!! tip "Plotting Fronts"
The header `pareto/matplot/front.h` includes some snippets to plot these fronts with [Matplot++](https://github.com/alandefreitas/pareto/blob/master/).

Data scientists often use linear lists to represent these fronts, with a cost of $O(mn^2)$ for several operations. This makes it infeasible to represent the thousands or millions of solutions we usually have in a non-polynomial multidimensional optimization problem due to the curse of dimensionality. With spatial indexes, this cost reduces to only $O(m \log n)$.
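
A minimal sketch of that container-based approach, using the Python binding from the snippet above; the `pareto.point` helper and the `dominates` method are assumptions here and may differ from the actual API:

```python
import pareto

# Build a front; on insertion the container discards any point that
# becomes dominated, so it only ever stores the non-dominated set
pf = pareto.front()
pf[-2.5, -1.5, 0.6] = 17
pf[-2.1, -1.5, -2.9] = 32
pf[-1.6, 0.0, -0.7] = 36

# Membership and dominance checks are spatial queries on the underlying
# index, i.e. roughly O(m log n) instead of a linear scan over a list
# (`pareto.point` and `dominates` are assumed names)
print(pf.dominates(pareto.point([0.0, 0.0, 0.0])))
print(len(pf))
```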

!!! tip "Indicators"
Because Pareto fronts include solutions that are incomparable by definition, we need metrics to tell us the quality of a front. The `front` object implements several performance indicators (see the sketch after the list) that can give us measures of:

* hypervolume
* convergence
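A short, hedged sketch of computing one of these indicators through the Python binding; the `hypervolume` and `nadir` method names are assumptions based on the indicator list above:

```python
import pareto

# A small three-dimensional front (insertion syntax assumed from the binding)
pf = pareto.front()
pf[-2.5, -1.5, 0.6] = 17
pf[-2.1, -1.5, -2.9] = 32

# Hypervolume with respect to a reference point, here the nadir point;
# `hypervolume` and `nadir` are assumed method names
print(pf.hypervolume(pf.nadir()))
```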
10 changes: 8 additions & 2 deletions docs/quick-start/spatial-containers.md
@@ -51,10 +51,16 @@ Many applications already need to implement such kinds of containers, although i
!!! info "Runtime dimensions"
Some problems are so dynamic that even the number of dimensions changes at runtime. In these applications, you can set the number of compile-time dimensions to `0`, and the containers will accept keys with any number of dimensions. This, of course, comes at the cost of an extra dynamic memory allocation per element.
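
In the Python binding, dimensions are effectively decided at runtime; a minimal sketch of the idea, assuming a `pareto.spatial_map` constructor and a `dimensions()` accessor (both names are assumptions):

```python
import pareto

# No dimension is given at construction; the container adopts the
# dimension of the first key it receives
m = pareto.spatial_map()   # assumed constructor name in the Python binding
m[2.5, 7.5] = "first"      # the map is now fixed to 2 dimensions
m[1.0, 3.0] = "second"

print(m.dimensions())      # assumed accessor; expected output: 2
```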

The usual `find(k)`, `lower_bound(k)`, and `upper_bound(k)` functions of unidimensional maps are not enough for spatial containers. We fix this with **query iterators**, which explore the spatial data according to a list of predicates. Queries can limit or expand their search region with a conjunction of predicates such as intersections, disjunctions, and nearest points.
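
A minimal sketch of what such queries might look like through the Python binding; `pareto.spatial_map`, `find_intersection`, `find_nearest`, `pareto.point`, and the pair-unpacking iteration style are assumptions for illustration:

```python
import pareto

m = pareto.spatial_map()   # assumed constructor name in the Python binding
m[-2.5, -1.5] = 17
m[-2.1, -0.5] = 32
m[-1.6, 0.0] = 36

# Query iterators only visit elements that satisfy the predicates,
# instead of scanning the whole container
lo = pareto.point([-3.0, -2.0])
hi = pareto.point([-2.0, 0.0])
for point, value in m.find_intersection(lo, hi):
    print(point, value)

# Nearest-point predicate: the two elements closest to the origin
for point, value in m.find_nearest(pareto.point([0.0, 0.0]), 2):
    print(point, value)
```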

!!! tip "Predicate Lists"
To make queries more efficient, the `pareto::predicate_list` object compresses redundant predicates and sorts these predicates by how restrictive they are. All tree nodes store their minimum bounding rectangles, and these underlying data structures are then explored to avoid nodes that might not pass the predicate list. This allows us to find each query element in $O(m \log n)$ time, regardless of how complex the query is.



