Commit: docs: fix absolute links

percevalw committed Sep 7, 2023
1 parent 404c2ff commit ae5123d
Showing 7 changed files with 12 additions and 6 deletions.
2 changes: 1 addition & 1 deletion docs/pipeline.md

````diff
@@ -57,7 +57,7 @@ model(pdf_bytes)
 model.pipe([pdf_bytes, ...])
 ```
 
-For more information on how to use the pipeline, refer to the [Inference](../inference) page.
+For more information on how to use the pipeline, refer to the [Inference](/inference) page.
 
 ## Hybrid models
````
2 changes: 1 addition & 1 deletion edspdf/pipeline.py

```diff
@@ -854,7 +854,7 @@ def __exit__(ctx_self, type, value, traceback):
     return context()
 
 
-def load(config: Union[Path, str, Config]):
+def load(config: Union[Path, str, Config]) -> Pipeline:
     error = "The load function expects a Config or a path to a config file"
     if isinstance(config, (Path, str)):
         path = Path(config)
```
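The change above adds a `-> Pipeline` return annotation to `load`, which lets static type checkers and IDEs know what the function returns. A minimal, self-contained sketch of the idea; `Pipeline` and `load` here are simplified stand-ins for illustration, not edspdf's actual implementation:

```python
from pathlib import Path
from typing import Union


class Pipeline:
    """Stand-in for edspdf's Pipeline class."""

    def pipe(self, docs):
        # Process a batch of documents (a no-op placeholder here).
        return [f"processed:{doc}" for doc in docs]


def load(config: Union[Path, str, dict]) -> Pipeline:
    """Build a Pipeline from a config path or mapping (sketch only)."""
    error = "load expects a config mapping or a path to a config file"
    if isinstance(config, (Path, str)):
        # A real implementation would parse the config file at this path.
        path = Path(config)
        return Pipeline()
    if isinstance(config, dict):
        return Pipeline()
    raise TypeError(error)


model = load({})
print(model.pipe(["a.pdf", "b.pdf"]))
```

With the annotation in place, `model.pipe(...)` autocompletes and type-checks without the caller having to inspect the function body.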
4 changes: 2 additions & 2 deletions edspdf/pipes/embeddings/box_transformer.py

```diff
@@ -61,8 +61,8 @@ class BoxTransformer(TrainablePipe[EmbeddingOutput]):
         Initializing with a value close to 0 can help the training converge.
     attention_mode: Sequence[RelativeAttentionMode]
         Mode of relative position infused attention layer.
-        See the [relative attention](relative_attention) documentation for more
-        information.
+        See the [relative attention][edspdf.layers.relative_attention.RelativeAttention]
+        documentation for more information.
     n_layers: int
         Number of layers in the Transformer
     """
```
5 changes: 3 additions & 2 deletions edspdf/pipes/embeddings/huggingface_embedding.py

````diff
@@ -42,7 +42,7 @@ class HuggingfaceEmbedding(TrainablePipe[EmbeddingOutput]):
     occurrence that is the closest to the center of its window.
 
     Here is an overview how this works in a classifier model :
 
-    ![Transformer windowing](./assets/transformer-windowing.svg)
+    ![Transformer windowing](/assets/images/transformer-windowing.svg)
 
     Examples
@@ -82,7 +82,8 @@ class HuggingfaceEmbedding(TrainablePipe[EmbeddingOutput]):
     )
     ```
 
-    This model can then be trained following the [training recipe](/recipes/training/).
+    This model can then be trained following the
+    [training recipe](/recipes/training/).
 
     Parameters
     ----------
````
4 changes: 4 additions & 0 deletions mkdocs.yml

```diff
@@ -125,6 +125,7 @@ plugins:
       show_root_toc_entry: false
       show_signature: false
       merge_init_into_class: true
+  - autolinks
   - glightbox:
       touchNavigation: true
       loop: false
@@ -155,3 +156,6 @@ markdown_extensions:
   - pymdownx.emoji:
       emoji_index: !!python/name:materialx.emoji.twemoji
       emoji_generator: !!python/name:materialx.emoji.to_svg
+
+validation:
+  absolute_links: ignore
```
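The `validation` setting relies on MkDocs 1.5's link-validation options; `absolute_links: ignore` silences warnings about links starting with `/`, which this commit now uses throughout the docs. A sketch of the related options for context (only `absolute_links: ignore` is part of this commit; the other keys and levels shown are illustrative):

```yaml
# Illustrative MkDocs link-validation settings. Each key can be set to
# ignore, info, or warn.
validation:
  omitted_files: warn
  absolute_links: ignore
  unrecognized_links: warn
```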
1 change: 1 addition & 0 deletions pyproject.toml

```diff
@@ -55,6 +55,7 @@ docs = [
     "mkdocstrings~=0.20",
     "mkdocstrings-python~=1.1",
     "mkdocs-autorefs@git+https://github.com/percevalw/mkdocs-autorefs.git@0.4.1.post0",
+    "mkdocs-autolinks-plugin~=0.7.1",
     "mkdocs-gen-files~=0.4.0",
     "mkdocs-literate-nav~=0.6.0",
     "mkdocs-material~=9.1.0",
```